\section{Introduction}
As of May 15, 2020, $4593395$ cases and $306376$ deaths from the 2019 novel coronavirus disease (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), had been recorded worldwide \cite{csse2020}. Coronaviruses are enveloped, non-segmented, positive-sense RNA viruses that belong to the family Coronaviridae and the order Nidovirales, and are widely distributed among humans and other mammals \cite{richman2016clinical}. The COVID-19 outbreak started in mainland China, centered on Wuhan, the capital city of Hubei province \cite{wang2020novel}, and has since spread all over the world. Many of the initial cases were linked to the Huanan wholesale seafood market, which also traded live animals. Clinical studies of hospitalized patients found that, at the onset of COVID-19, patients exhibit symptoms consistent with viral pneumonia, most commonly fever, cough, sore throat and fatigue \cite{cdcgov2020}. Some patients showed ground-glass opacities on lung imaging; normal or below-average white blood cell, lymphocyte and platelet counts; hypoxemia; and deranged liver and kidney function. Most were said to be epidemiologically linked to the Huanan wholesale seafood market \cite{chinadaily2019}. Severe outbreaks occurred in the USA ($1457593$ cases), Spain ($272646$ cases), Russia ($262843$ cases), the UK ($236711$ cases), Italy ($223885$ cases) and many other countries, and the disease continues to spread globally. The World Health Organization has declared it a pandemic. It is the third zoonotic human coronavirus to have emerged in the present century, after the $2002$ severe acute respiratory syndrome coronavirus (SARS-CoV), which spread to 37 countries, and the 2012 Middle East respiratory syndrome coronavirus (MERS-CoV), which spread to 27 countries.
The 2019 pandemic novel coronavirus was first confirmed in India on January 30, 2020, in the state of Kerala. A total of $82087$ confirmed cases and $2648$ deaths in the country had been reported as of May 15, 2020 \cite{indiacovid2020track}. The Indian government introduced social distancing as a precaution against the large-scale population movement that can accelerate the spread of the disease, and implemented a 14-hour voluntary public curfew on March 22, 2020. Furthermore, the Prime Minister of India ordered a nationwide lockdown from midnight on March 24, 2020 to slow the spread of COVID-19. In the absence of a vaccine, social distancing has been identified as the most commonly used prevention and control strategy \cite{ferguson2020report}. The purpose of these initiatives is to restrict social interaction in workplaces, schools, and other public spheres, except for essential public services such as fire, police, and hospitals. The spread of this virus has undoubtedly disrupted the lives, economy and health of citizens, and how long this scenario will last and when the disease will be controlled is a great concern for everyone.
Mathematical modeling based on systems of differential equations can provide a comprehensive mechanistic description of disease transmission dynamics~\cite{sardar2020realistic}. Several modeling studies have already been performed for the COVID-19 outbreak \cite{tang2020updated,quilty2020effectiveness,shen2020modelling,tang2020estimation, wu2020nowcasting}. Based on data collected from December 31, 2019 till January 28, 2020, Wu et al.~\cite{wu2020nowcasting} developed a susceptible-exposed-infectious-recovered (SEIR) model to clarify the transmission dynamics and projected the national and global spread of the disease. They estimated the basic reproductive number for COVID-19 to be around 2.68. Tang et al.~\cite{tang2020estimation} proposed a deterministic compartmental model combining the clinical progression of the disease, the epidemiological status of the patients and the intervention measures. They found that the control reproduction number may be as high as 6.47, and that intervention methods such as intensive contact tracing followed by quarantine and isolation can effectively reduce COVID-19 cases \cite{tang2020estimation}.
For the basic reproductive number, Read et al.~reported a value of 3.1 based on data fitting of an SEIR model, using an assumption of Poisson-distributed daily time increments \cite{read2020novel}. A report by Cambridge University indicated that India's countrywide three-week lockdown would not be adequate to prevent a resurgence of the epidemic, which could bounce back in months and cause thousands of infections \cite{singh2020age}. They suggested that two or three lockdowns with five-day breaks in between, or a single 49-day lockdown, could extend the slowdown longer. Data-driven mathematical modeling plays a key role in disease prevention, planning for future outbreaks and determining the effectiveness of control. Several data-driven modeling studies have been performed for various regions \cite{tang2020estimation,chen2020mathematical}.
Currently, very limited work has studied the impact of lockdown on COVID-19 transmission dynamics in India.
In the present manuscript, we propose a new mathematical model for COVID-19 that incorporates the lockdown effect. We also consider variability in transmission between symptomatic and asymptomatic populations, with the former being faster spreaders of the disease. Analyzing COVID-19 daily notified cases from five affected states (Maharashtra, Delhi, Tamil Nadu, Gujarat and Punjab) and from overall India, we study the effect of the social distancing measures implemented by the Govt. on the reduction of notified cases in those regions. We also estimate the basic reproduction number ($R_{0}$) for these six locations. Using a post-processing BMA technique, we ensemble our COVID-19 mathematical model with different statistical forecast models to obtain a projection of notified cases in those six locations for the period May 17, 2020 till May 31, 2020. A global sensitivity analysis is carried out to determine the correlation of two epidemiologically measurable parameters with the lockdown effect and with $R_{0}$. Finally, to determine the COVID-19 transmission trend during the projection period (May 17, 2020 till May 31, 2020), we estimate the effective reproduction number ($R_{t}$) for the mentioned six locations.
\section{Method}
Based on the development and epidemiological characteristics of COVID-19, an SEIR-type model is appropriate to study the dynamics of the current pandemic \citep{kucharski2020early, peng2020epidemic, sardar2020realistic}. The model we develop in this paper is based on the interaction of seven mutually exclusive sub-classes, namely Susceptible ($S$), Lockdown ($L$), Exposed ($E$), Asymptomatic ($A$), Symptomatic ($I$), Hospitalized ($C$), and Recovered ($R$).
The susceptible population ($S$) increases due to a constant recruitment rate $\Pi_{H}$ and due to individuals returning from the lockdown compartment after the lockdown period $\displaystyle\frac{1}{\omega}$. The susceptible class decreases due to new infections arising from contact with the symptomatic and asymptomatic infected populations, due to natural death, and because a fraction of susceptible individuals become home-quarantined under lockdown at a rate $l$. We also assume variability in disease transmission between the asymptomatic and symptomatic populations, with the latter being faster spreaders of infection, captured by a variability factor $\left( 0 \leq \rho \leq 1\right)$~\citep{gumel2004modelling, mandal2020prudent}.
The lockdown population ($L$) increases by those susceptibles who are home-quarantined, at a rate $l$, and decreases due to natural death and due to individuals who become susceptible again after the lockdown period $\displaystyle\frac{1}{\omega}$. For simplicity, we assume an ideal lockdown scenario, \textit{i.e.} the entire population under lockdown maintains proper social distancing and does not contribute to new infections.
The exposed compartment ($E$) increases with new infections coming from the susceptible compartment. After the disease incubation period $\frac{1}{\sigma}$, a fraction $\kappa$ of the exposed individuals become symptomatic infected and the remaining fraction ($1-\kappa$) become asymptomatic infected. The exposed population also decreases due to natural death at a rate $\mu$.
The asymptomatic infected compartment ($A$) increases due to the fraction ($1-\kappa$) of infections coming from the exposed compartment. Since asymptomatic COVID-19 cases are hard to detect, we assume that asymptomatic infections are not notified. This compartment decreases due to natural recovery and death at rates $\gamma_1$ and $\mu$, respectively.
The symptomatic infected compartment ($I$) increases due to the fraction $\kappa$ of infections coming from the exposed compartment after the incubation period $\frac{1}{\sigma}$. This compartment decreases due to natural recovery at a rate $\gamma_2$, natural death at a rate $\mu$, and notification \& hospitalization of infected individuals at a rate $\tau$.
The notified \& hospitalized infected population ($C$) increases due to the influx of infections from the symptomatic infected class at a rate $\tau$. It decreases due to natural death at a rate $\mu$, disease-related death at a rate $\delta$, and recovery from COVID-19 at a rate $\gamma_3$. We assume that this compartment does not mix with the general population in the community, \textit{i.e.} it does not contribute to COVID-19 transmission.
Finally, the recovered population ($R$) increases due to the influx of individuals from the asymptomatic ($A$), symptomatic ($I$), and notified \& hospitalized ($C$) compartments at rates $\gamma_1$, $\gamma_2$, and $\gamma_3$, respectively. As we analyze this study over a short time frame, we assume definitive immunity, \textit{i.e.} the recovered population does not contribute to new COVID-19 infections. The recovered population decreases due to natural death at a rate $\mu$.
Based on the above assumptions, the systems of equations that represent COVID-19 transmission without and with lockdown are provided below:
\subsection*{\textbf{Model without lockdown}}
\begin{eqnarray}\label{EQ:eqn 2.1}
\displaystyle{ \frac{dS}{dt} } &=& \Pi_{H}-\frac{\beta_1 I S}{\left(N - C\right) }-\frac{\rho \beta_1 A S}{\left(N-C \right) }-\mu S,\nonumber\\
\displaystyle{ \frac{dE}{dt} } &=& \frac{\beta_1 I S}{\left(N - C\right) }+\frac{\rho \beta_1 A S}{\left(N-C \right) }-(\mu+\sigma)E,\nonumber \\
\displaystyle{ \frac{dA}{dt} } &=& (1-\kappa)\sigma E-(\gamma_1+\mu)A,\nonumber \\
\displaystyle{ \frac{dI}{dt} } &=& \kappa \sigma E-(\gamma_2+\tau+\mu)I, \\
\displaystyle{\frac{dC}{dt} } &=& \tau I-(\delta+\gamma_3+\mu)C, \nonumber \\
\displaystyle{ \frac{dR}{dt} } &=& \gamma_1 A+\gamma_2 I+\gamma_3 C-\mu R ,\nonumber
\end{eqnarray}
\subsection*{\textbf{Model with lockdown}}
\begin{eqnarray}\label{EQ:eqn 3.1}
\displaystyle{ \frac{dS}{dt} } &=& \Pi_{H} + \omega L -\frac{\beta_1 I S}{\left( N-L-C\right) }-\frac{\rho \beta_1 A S}{\left( N-L-C \right) }-\mu S-lS,\nonumber\\
\displaystyle{ \frac{dL}{dt} } &=& l S - (\mu + \omega) L,\nonumber\\
\displaystyle{ \frac{dE}{dt} } &=& \frac{\beta_1 I S}{\left( N-L-C\right) }+\frac{\rho \beta_1 A S}{\left( N-L-C \right) }-(\mu+\sigma)E,\nonumber \\
\displaystyle{ \frac{dA}{dt} } &=& (1-\kappa)\sigma E-(\gamma_1+\mu)A,\nonumber \\
\displaystyle{ \frac{dI}{dt} } &=& \kappa \sigma E-(\gamma_2+\tau+\mu)I, \\
\displaystyle{\frac{dC}{dt} } &=& \tau I-(\delta+\gamma_3+\mu)C, \nonumber \\
\displaystyle{ \frac{dR}{dt} } &=& \gamma_1 A+\gamma_2 I+\gamma_3 C-\mu R .\nonumber
\end{eqnarray}
A diagram of our model is provided in Fig~\ref{Fig:Flow_India_covid}. Information on our model parameters is provided in Table~\ref{tab:mod1}.
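Since systems~\eqref{EQ:eqn 2.1} and~\eqref{EQ:eqn 3.1} have no closed-form solution, they are integrated numerically. The following minimal Python sketch (illustrative only, not the code used in this study; all parameter values and initial conditions are placeholders) shows how the lockdown model~\eqref{EQ:eqn 3.1} can be solved with a standard ODE solver:
\begin{verbatim}
# Illustrative sketch: integrating the lockdown model with SciPy.
# All parameter values and initial conditions are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def covid_lockdown_rhs(t, y, Pi_H, beta1, rho, l, omega, mu, sigma,
                       kappa, gamma1, gamma2, gamma3, tau, delta):
    S, L, E, A, I, C, R = y
    N = S + L + E + A + I + C + R
    # Force of infection: lockdown (L) and hospitalized (C) do not mix
    foi = beta1 * (I + rho * A) * S / (N - L - C)
    dS = Pi_H + omega * L - foi - mu * S - l * S
    dL = l * S - (mu + omega) * L
    dE = foi - (mu + sigma) * E
    dA = (1 - kappa) * sigma * E - (gamma1 + mu) * A
    dI = kappa * sigma * E - (gamma2 + tau + mu) * I
    dC = tau * I - (delta + gamma3 + mu) * C
    dR = gamma1 * A + gamma2 * I + gamma3 * C - mu * R
    return [dS, dL, dE, dA, dI, dC, dR]

# Placeholder parameters, ordered to match the signature above
params = dict(Pi_H=300.0, beta1=0.8, rho=0.4, l=0.05, omega=1/54,
              mu=1/(69.4*365), sigma=1/5.1, kappa=0.6,
              gamma1=0.1, gamma2=0.1, gamma3=1/14, tau=0.2, delta=0.01)
y0 = [1e7, 0, 50, 20, 10, 5, 0]   # S, L, E, A, I, C, R
sol = solve_ivp(covid_lockdown_rhs, (0, 60), y0, t_eval=np.arange(61),
                args=tuple(params.values()), method="LSODA")
\end{verbatim}
With \texttt{l = 0} and an empty $L$ compartment, the same right-hand side reduces to the model without lockdown~\eqref{EQ:eqn 2.1}.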
\subsection*{\textbf{Mathematical properties of the model}}
We studied the positivity and boundedness of the solutions of model \eqref{EQ:eqn 2.1} (see supplementary appendix). The system \eqref{EQ:eqn 2.1} exhibits two equilibria, namely the disease-free equilibrium and a unique endemic equilibrium (see supplementary appendix). The disease-free state is locally asymptotically stable whenever the corresponding basic reproduction number ($R_0$) is less than unity (see supplementary appendix). By using a nonlinear Lyapunov function, it is also shown that the disease-free equilibrium is globally asymptotically stable whenever $R_0<1$ (see supplementary appendix). In addition, the model \eqref{EQ:eqn 2.1} has a unique endemic equilibrium if $R_0$ exceeds unity. Furthermore, using center manifold theory, the local stability of the endemic equilibrium is established whenever $R_0>1$ (see supplementary appendix).
\subsection*{\textbf{Data}}
Daily COVID-19 reported cases from Maharashtra (MH), Delhi (DL), Tamil Nadu (TN), Gujarat (GJ), Punjab (PJ) and the whole of India (IND) for the period March 14, 2020 till May 3, 2020 are considered in our study. These five states are among those most deeply affected by the current COVID-19 outbreak in India \citep{indiacovid2020track}. Daily COVID-19 notified cases were collected from \citep{indiacovid2020track}. Demographic data for the different locations are taken from \cite{aadhaar20, Nitiayog2020}.
\subsection*{\textbf{Estimation procedure}}
Several important epidemiological parameters (see Table~\ref{tab:mod1}) of our mathematical model~\eqref{EQ:eqn 3.1} are estimated using COVID-19 daily reported cases from the mentioned six locations. The total duration of the lockdown implemented by the Govt. is 54 days, from March 25, 2020 till May 17, 2020. The daily COVID-19 time-series data for the locations MH, DL, TN, GJ, PJ, and IND therefore contain periods both with and without the lockdown effect, so a combination of our mathematical models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1} (without and with lockdown) is used for calibration. From our models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1}, the number of new COVID-19 notified cases during the $i^{th}$ time interval $\left[t_{i}, t_{i}+\Delta t_{i}\right]$ is
\begin{equation}
\displaystyle H_{i} (\hat{\theta}) = \displaystyle \tau \int_{t_{i}}^{t_{i} + \Delta t_{i}} I(\xi, \hat{\theta}) \hspace{0.2cm} d\xi,\\
\label{EQ:new-cases-from-model}
\end{equation} where $\Delta t_{i}$ is the time step length and $\hat{\theta}$ is the set of unknown parameters of the models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1} that are estimated. The $K$ observations from the data and from the models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1} are $\left\lbrace D_{1}, D_{2},..., D_{K} \right\rbrace$ and $\lbrace H_{1} (\hat{\theta}) , H_{2} (\hat{\theta}),...., H_{K} (\hat{\theta}) \rbrace$, respectively. We therefore construct the sum of squares function~\citep{sardar2013optimal} as:
\begin{equation}
\displaystyle SS (\hat{\theta}) = \displaystyle \sum_{i = 1}^{K} \left[D_{i} - H_{i} (\hat{\theta})\right]^2.\\
\label{EQ:sum-of-square-function}
\end{equation}
The MATLAB-based nonlinear least squares solver \texttt{fmincon} is used to fit simulated to observed daily COVID-19 notified cases for the mentioned states and the whole country. The Delayed Rejection Adaptive Metropolis (DRAM) algorithm~\citep{haario2006dram} is used to sample the 95\% confidence region. An elaboration of this model fitting technique is provided in \citep{sardar2017mathematical}.
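For readers who prefer a self-contained illustration, the sketch below mirrors this calibration step in Python with \texttt{scipy.optimize.least\_squares} (the study itself uses MATLAB's \texttt{fmincon} together with DRAM). It reuses \texttt{covid\_lockdown\_rhs} from the earlier sketch; the choice of free parameters, their starting values and their bounds are assumptions made for illustration:
\begin{verbatim}
# Hedged sketch of the least-squares calibration (illustration only)
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model_daily_cases(theta, y0, n_days, fixed):
    """New notified cases per day: H_i = tau * integral of I over day i."""
    beta1, rho, l, kappa, tau = theta          # assumed free parameters
    sol = solve_ivp(covid_lockdown_rhs, (0, n_days), y0,
                    t_eval=np.arange(n_days + 1), method="LSODA",
                    args=(fixed["Pi_H"], beta1, rho, l, fixed["omega"],
                          fixed["mu"], fixed["sigma"], kappa,
                          fixed["gamma1"], fixed["gamma2"],
                          fixed["gamma3"], tau, fixed["delta"]))
    I = sol.y[4]                               # symptomatic compartment I
    return tau * 0.5 * (I[:-1] + I[1:])        # trapezoidal day-integrals

def residuals(theta, data, y0, fixed):
    """D_i - H_i(theta), whose sum of squares is the objective above."""
    return data - model_daily_cases(theta, y0, len(data), fixed)

# Example call (data: observed daily notified cases as a NumPy array):
# fit = least_squares(residuals, x0=[0.8, 0.4, 0.05, 0.6, 0.2],
#                     bounds=([0]*5, [5, 1, 1, 1, 1]),
#                     args=(data, y0, fixed))
\end{verbatim}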
\subsection*{\textbf{Statistical forecast models and the ensemble model}}
The COVID-19 mathematical model we developed in this study may be efficient in capturing the transmission dynamics. However, as the solution of the mathematical model is always smooth, it may not be able to replicate the fluctuations occurring in the daily time-series data. Moreover, forecasting future COVID-19 cases based on a single mathematical model may not be a very reliable approach. For this purpose, we used two statistical forecast models, namely the Auto-Regressive Integrated Moving Average (ARIMA) model and the TBATS model (Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend and Seasonal components). A hybrid statistical model (HYBRID) based on the combination of ARIMA and TBATS is also used for the forecast. Calibration of the three statistical forecast models (ARIMA, TBATS and HYBRID) to the COVID-19 daily notified cases from MH, DL, TN, GJ, PJ, and IND during March 14, 2020 till May 3, 2020 is done using the $R$ package `forecastHybrid'~\cite{Forecast2020}. Each individual model (ARIMA and TBATS) is first fitted to the aforesaid time-series data, and the fitted models are then combined with weights based on in-sample error to obtain the HYBRID model~\cite{Forecast2020}. The prediction skill of each of the three statistical forecast models (ARIMA, TBATS and HYBRID) is tested on the daily COVID-19 notified cases during May 4, 2020 till May 8, 2020 for each of the six locations (see supplementary Table~\ref{Tab:Goodness-of-Fit}). Based on the prediction skill (see supplementary Table~\ref{Tab:Goodness-of-Fit}), the best statistical forecast model is ensembled with our COVID-19 mathematical models~\eqref{EQ:eqn 2.1} and~\eqref{EQ:eqn 3.1}. A post-processing BMA technique based on the DRAM algorithm \cite{haario2006dram} is used to determine the weights (see supplementary Table~\ref{Tab:estimated-weights} and Fig~\ref{Fig:Marginal-distribution-MH} to Fig~\ref{Fig:Marginal-distribution-IND}) at which the best statistical model is combined with the COVID-19 mathematical models~\eqref{EQ:eqn 2.1} and~\eqref{EQ:eqn 3.1}.
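The sketch below conveys the error-weighted combination idea in Python. It is a hedged stand-in: the actual analysis uses the R package `forecastHybrid'; since \texttt{statsmodels} provides no TBATS implementation, two ARIMA specifications replace the two component models here, and inverse in-sample-RMSE weights are a simplifying assumption:
\begin{verbatim}
# Illustrative Python analogue of the ARIMA/HYBRID combination step
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_and_forecast(series, order, horizon):
    """Fit one ARIMA specification; return in-sample RMSE and forecast."""
    fit = ARIMA(series, order=order).fit()
    rmse = np.sqrt(np.mean(fit.resid ** 2))
    return rmse, np.asarray(fit.forecast(steps=horizon))

def hybrid_forecast(series, orders=((2, 1, 2), (1, 1, 1)), horizon=14):
    """Combine component forecasts with inverse in-sample-error weights."""
    fits = [fit_and_forecast(series, o, horizon) for o in orders]
    w = np.array([1.0 / r for r, _ in fits])
    w /= w.sum()                               # normalize the weights
    return sum(wi * f for wi, (_, f) in zip(w, fits))

# Example: series = np.asarray(daily_cases, dtype=float)
# combined = hybrid_forecast(series, horizon=5)  # May 4-8 test window
\end{verbatim}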
\subsection*{\textbf{Disease forecasting under different lockdown scenario}}
The Govt. implemented a lockdown all over India on March 25, 2020, which will continue till May 17, 2020. Small and medium scale industries are badly affected by the lockdown~\cite{Economic_Times2, India_Today4}. To partially revive the economy, the Govt. of India has been continuously relaxing the lockdown rules since April 20, 2020 \cite{Economictimes2020a, financialexpress20a, TheHindu20a}. To forecast COVID-19 cases for the period May 17, 2020 till May 31, 2020 for the six locations (MH, DL, TN, GJ, PJ and IND) in line with the Govt. strategy, we considered the following scenarios:\vspace{0.2cm}
Forecast based on the current lockdown rate: We estimated the average lockdown rate for our COVID-19 mathematical model (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}). Using this lockdown rate and the other (estimated and known) parameters of our mathematical models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1}, we forecast COVID-19 notified cases during May 17, 2020 till May 31, 2020 for the locations MH, DL, TN, GJ, PJ, and IND, respectively. Finally, the forecast based on our mathematical model is ensembled with the result of the best statistical forecast model for the location, as described earlier.\vspace{0.2cm}
Forecast based on a 15\% reduction in the current lockdown rate: We followed the same procedure as in the previous scenario with a 15\% decrement in the estimated lockdown rate~(see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtain the forecast during the mentioned time period.\vspace{0.2cm}
Forecast based on a 20\% reduction in the current lockdown rate: We followed the same procedure as in the previous two scenarios with a 20\% decrement in the estimated lockdown rate (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtain the forecast during the mentioned time period. \vspace{0.2cm}
Forecast based on a 30\% reduction in the current lockdown rate: We followed the same procedure as in the previous three scenarios with a 30\% decrement in the estimated lockdown rate (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtain the forecast during the mentioned time period.\vspace{0.2cm}
Forecast based on no lockdown: Proceeding as before but assuming the lockdown is lifted after May 17, 2020, we forecast COVID-19 notified cases during May 17, 2020 till May 31, 2020 for the six mentioned locations.\vspace{0.2cm}
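Within the model, these scenarios amount to rescaling the fitted lockdown rate $l$ and re-running the projection. A hedged sketch follows, reusing \texttt{covid\_lockdown\_rhs}, \texttt{params} and \texttt{y0} from the earlier sketches; the scaling factors mirror the scenarios above, while all other values remain placeholders:
\begin{verbatim}
# Scenario factors applied to the fitted lockdown rate l (see text)
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

scenarios = {"current": 1.00, "15% reduction": 0.85,
             "20% reduction": 0.80, "30% reduction": 0.70,
             "no lockdown": 0.00}
for name, factor in scenarios.items():
    p = dict(params, l=params["l"] * factor)   # rescale only l
    sol = solve_ivp(covid_lockdown_rhs, (0, 14), y0,
                    t_eval=np.arange(15), args=tuple(p.values()),
                    method="LSODA")
    # Projected notified cases over the 14-day window: tau * int I dt
    projected = p["tau"] * trapezoid(sol.y[4], sol.t)
    print(f"{name:>15}: ~{projected:,.0f} projected notified cases")
\end{verbatim}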
\subsection*{\textbf{Estimation of the basic and the effective reproduction number}}
Since we assumed that the population under lockdown does not come into contact with infection from the community, the basic reproduction number ($R_{0}$) \citep{van2002reproduction} for our mathematical models with and without lockdown~(see Fig~\ref{Fig:Flow_India_covid} and supplementary method) is the same, and its expression is provided below:
\begin{align*}
R_0=\frac{\beta_1\kappa \sigma}{(\mu+\sigma)(\gamma_2+\tau+\mu)}+\frac{\rho \beta_1(1-\kappa)\sigma}{(\mu+\sigma)(\gamma_1+\mu)}.
\end{align*}
The effective reproduction number ($R_{t}$) is defined as the expected number of secondary infections per infectious individual in a population made up of both susceptible and non-susceptible hosts \citep{rothman2008modern}. If $R_{t} > 1$, the number of new cases will increase; for $R_{t} = 1$, the disease becomes endemic; and when $R_{t} < 1$, there will be a decline in new cases.
Following~\citep{rothman2008modern}, the expression for $R_{t}$ is given as follows:
\begin{align*}
\displaystyle R_t= R_{0} \times \hat{s},
\end{align*} where $\hat{s}$ is the fraction of the host population that is susceptible.
$R_{0}$ can easily be estimated by plugging the sampled values of the unknown parameters (see Table \ref{Tab:estimated-parameters-Table}) of the model without lockdown~\eqref{EQ:eqn 2.1} into the expression for $R_{0}$.
The following procedure is adopted to estimate $R_{t}$ during May 17, 2020 till May 31, 2020 under two lockdown scenarios:
\begin{description}
\item[$\bullet$] Using the current estimate of the lockdown rate and the other parameters of our mathematical model~(see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}), we estimate $\hat{s}$ and $R_{t}$ during May 17, 2020 till May 31, 2020 for the locations MH, DL, TN, GJ, PJ and IND, respectively.
\item[$\bullet$] Using the parameters~(see Table~\ref{tab:mod1} and Table \ref{Tab:estimated-parameters-Table}) of our mathematical model without lockdown~\eqref{EQ:eqn 2.1}, we estimate $\hat{s}$ and $R_{t}$ during May 17, 2020 till May 31, 2020 for the mentioned six locations.
\end{description}
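As an illustration of this plug-in procedure, the sketch below evaluates $R_{0}$ from its closed-form expression and then $R_{t} = R_{0}\,\hat{s}$ in Python. All parameter values are placeholders rather than the fitted estimates of Table~\ref{Tab:estimated-parameters-Table}, and \texttt{sol} refers to a model run from the earlier sketch:
\begin{verbatim}
def basic_reproduction_number(beta1, rho, kappa, sigma, mu,
                              gamma1, gamma2, tau):
    """R0: symptomatic plus asymptomatic transmission contributions."""
    symptomatic = (beta1 * kappa * sigma
                   / ((mu + sigma) * (gamma2 + tau + mu)))
    asymptomatic = (rho * beta1 * (1 - kappa) * sigma
                    / ((mu + sigma) * (gamma1 + mu)))
    return symptomatic + asymptomatic

# Placeholder values, not the fitted estimates
R0 = basic_reproduction_number(beta1=0.8, rho=0.4, kappa=0.6,
                               sigma=1/5.1, mu=1/(69.4*365),
                               gamma1=0.1, gamma2=0.1, tau=0.2)
# s_hat: susceptible fraction S/N taken from a model run (sol above)
s_hat = sol.y[0, -1] / sol.y[:, -1].sum()
R_t = R0 * s_hat
print(f"R0 = {R0:.2f}, R_t = {R_t:.2f}")
\end{verbatim}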
\subsection*{\textbf{Sensitivity analysis and effective lockdown strategy}}
Determining an effective lockdown policy for the six locations (MH, DL, TN, GJ, PJ and IND) requires relating the lockdown effect to epidemiologically measurable parameters of our mathematical model~(see Fig~\ref{Fig:Flow_India_covid}). Among the several important parameters of our model~(see Table~\ref{tab:mod1}), two are measurable: $\kappa$, the fraction of newly infected individuals who become symptomatic (COVID-19 testing will provide an accurate estimate), and $\tau$, the average notification \& hospitalization rate of symptomatic COVID-19 infections (this parameter is proportional to the number of COVID-19 tests). The lockdown effect is measured as the difference between the total number of cases projected by our ensemble model with and without lockdown. A global sensitivity analysis~\cite{marino2008methodology} is performed to determine the effect of the two mentioned parameters on the lockdown effect and on the basic reproduction number ($R_{0}$). Using Latin Hypercube Sampling (LHS), we draw $1000$ samples each for $\kappa$ and $\tau$ from their respective ranges~(see Table~\ref{tab:mod1}). Partial rank correlation coefficients and their corresponding $p$-values are examined to determine the relation of the two parameters with the lockdown effect and with $R_{0}$. A sketch of this procedure is given below.
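A minimal Python sketch of the LHS/PRCC computation follows. The sampling ranges for $\kappa$ and $\tau$ are assumed here for illustration and should be read from Table~\ref{tab:mod1}; \texttt{basic\_reproduction\_number} is reused from the previous sketch:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc, rankdata

# 1000 Latin Hypercube samples of (kappa, tau); ranges are assumptions
sampler = qmc.LatinHypercube(d=2, seed=0)
sample = qmc.scale(sampler.random(n=1000), [0.1, 0.05], [0.9, 0.5])
kappa, tau = sample[:, 0], sample[:, 1]
R0 = np.array([basic_reproduction_number(0.8, 0.4, k, 1/5.1,
                                         1/(69.4*365), 0.1, 0.1, t)
               for k, t in zip(kappa, tau)])

def prcc(x, y, controls):
    """Partial rank correlation of x with y, adjusting for controls."""
    rx, ry = rankdata(x), rankdata(y)
    Z = np.column_stack([np.ones_like(rx)]
                        + [rankdata(c) for c in controls])
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

# The paper reports a positive PRCC for kappa and a negative PRCC for
# tau; the signs obtained here depend on the placeholder ranges.
print("PRCC(kappa, R0):", prcc(kappa, R0, [tau]))
print("PRCC(tau,   R0):", prcc(tau, R0, [kappa]))
\end{verbatim}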
\section{Results and discussion}
The fits of the three models (mathematical, statistical forecast and ensemble) to the daily COVID-19 notified cases during March 14, 2020 till May 3, 2020 for Maharashtra (MH), Delhi (DL), Tamil Nadu (TN), Gujarat (GJ), Punjab (PJ), and India (IND) are depicted in Fig~\ref{Fig:Model-fitting}. Among the three statistical forecast models (ARIMA, TBATS and HYBRID), the ARIMA model performed best on the test prediction data (May 4, 2020 till May 8, 2020) for DL and GJ (see supplementary Table~\ref{Tab:Goodness-of-Fit}), whereas the TBATS model provided better results than the other two models on the test prediction data for PJ (see supplementary Table~\ref{Tab:Goodness-of-Fit}). For the remaining three locations (MH, TN and IND), the HYBRID model provided the best results (see supplementary Table~\ref{Tab:Goodness-of-Fit}) on the test prediction data. The ensemble model, which is a combination of our COVID-19 mathematical models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1} and the best (region-specific) statistical forecast model, performed well in capturing the trend of the COVID-19 daily time-series data in all six locations. The posterior distributions of the weights at which we combine our COVID-19 mathematical model with the best statistical forecast model in the six different locations are provided in the supplementary method (see Fig~\ref{Fig:Marginal-distribution-MH} to Fig~\ref{Fig:Marginal-distribution-IND} and Table~\ref{Tab:estimated-weights}).
In MH, DL, and GJ, the estimates of the symptomatic influx fraction ($\kappa$) suggest a low percentage (about 11\% to 20\%) of symptomatic infected in the population (see Table~\ref{Tab:estimated-parameters-Table}). However, in TN and PJ, a relatively higher percentage (about 82\% to 88\%) of symptomatic infection is found (see Table~\ref{Tab:estimated-parameters-Table}). For overall India, our estimate shows that currently about 62\% of new infections are symptomatic (see Table~\ref{Tab:estimated-parameters-Table}). Except for GJ, the estimates of the transmission rate ($\beta_{1}$) in the other five locations are found to be of the same scale (see Table~\ref{Tab:estimated-parameters-Table}); a relatively higher value of $\beta_{1}$ is found in Gujarat (see Table~\ref{Tab:estimated-parameters-Table}). The low value of the transmission variability factor ($\rho$) indicates that most of the community infection in GJ is due to contact with the symptomatic infected population. Since the value of $\kappa$ in GJ is found to be small (about 11\%), a relatively small symptomatic population is producing most of the infection there, which points to the possible existence of super-spreaders among the symptomatic infected in GJ. This observation agrees with a recent survey result in GJ~\cite{NDTVsuperspreaders}. Except for PJ, the estimates of $\rho$ in the other five locations are found to be low (below 50\%) (see Table~\ref{Tab:estimated-parameters-Table}), indicating a small contribution of the asymptomatic infected population to the new infections produced in MH, DL, TN, GJ and IND, respectively. Estimates of the lockdown rate in the five states (MH, DL, TN, GJ and PJ) suggest that around 50\% to 88\% of the total susceptible population was successfully home-quarantined during the lockdown period (see Table~\ref{Tab:estimated-parameters-Table}); thus, the lockdown is overall successful in those five states. However, this is not the case for overall India: our estimate suggests that only about 11\% of the total susceptible population in India maintained proper social distancing during the lockdown period (see Table~\ref{Tab:estimated-parameters-Table}).
Our estimates of the basic reproduction number ($R_{0}$) in the six locations~(see Table~\ref{Tab:estimated-R0-Table}) are found to be in good agreement with the worldwide estimates provided by WHO~\cite{liu2020reproductive}. We performed a global sensitivity analysis of the two epidemiologically measurable parameters of our mathematical model~(see Fig~\ref{Fig:Flow_India_covid}), namely $\kappa$, the fraction of newly infected individuals who become symptomatic, and $\tau$, the average notification \& hospitalization rate of symptomatic COVID-19 infections, on $R_{0}$. The partial rank correlations and their corresponding $p$-values (see Fig~\ref{Fig:sensitivity-analysis-R0}) suggest that $\tau$ is negatively correlated with $R_{0}$; thus, more testing will isolate more infections from the community and may therefore reduce COVID-19 community transmission. Furthermore, the high positive correlation of $\kappa$ with $R_{0}$ (see Fig~\ref{Fig:sensitivity-analysis-R0}) indicates the possibility of high COVID-19 transmission in areas where the population has a higher percentage of symptomatic infection.
The ensemble model forecasts of notified COVID-19 cases between May 17, 2020 and May 31, 2020 (see Table~\ref{Tab:cases-preiction-Table}, Fig.~\ref{Fig:Prediction-cases-India}, and Fig~\ref{Fig:Prediction-cases-MH} to Fig~\ref{Fig:Marginal-distribution-PJ} in the supplementary appendix) indicate that a high increment in the COVID-19 notified cases may be observed in MH, DL, TN, GJ, PJ, and IND in the coming few days. Furthermore, our ensemble model prediction for the mentioned period suggests that around 117645 to 128379 cases may occur in overall India (see Table~\ref{Tab:cases-preiction-Table}). These numbers are much higher than the total cumulative cases between March 2, 2020 and May 15, 2020 in the whole of India.
A global sensitivity analysis of $\kappa$ and $\tau$ on the lockdown effect suggests that both parameters have a high positive correlation with the lockdown effect in all six locations (see Fig.~\ref{Fig:Sensitivity-lockdown-effect}). Therefore, lockdown will be effective in those regions where a higher percentage of symptomatic infection is found in the population, and larger-scale COVID-19 mass testing will be required to isolate the cases.
To measure the COVID-19 transmission trend during May 17, 2020 till May 31, 2020, we estimated the effective reproduction number ($R_{t}$) over the mentioned period for MH, DL, TN, GJ, PJ and IND, respectively (see Fig.~\ref{Fig:Effective-reproduction-number}). Our results suggest that a decreasing trend in new notified COVID-19 cases ($R_{t} < 1$) may be seen after May 31, 2020 if the current lockdown measures (see Table~\ref{Tab:estimated-parameters-Table}) are maintained in DL, TN and PJ, respectively. Furthermore, if social distancing measures are removed after May 17, 2020, we may see a rise in daily COVID-19 cases in all six locations (see Fig.~\ref{Fig:Effective-reproduction-number}).
\section{Conclusion}
Up to May 15, 2020, the total numbers of reported COVID-19 cases and deaths in India are \textbf{81794} and \textbf{2649}, respectively \citep{indiacovid2020track}. This tally rises by a few thousand newly notified cases every day, reported from different locations in India \citep{indiacovid2020track}. Currently, there is no treatment or vaccine available for COVID-19. Therefore, the only measure to control the outbreak may be to home-quarantine (lock down) a large percentage of the susceptible population. However, this disease control strategy may have a negative impact on the economy. It is therefore of the utmost importance to determine an effective lockdown policy that reduces COVID-19 transmission in the community and also saves the Indian economy from drowning. Such a policy may be found by studying the dynamics and predictions of a mechanistic mathematical model of COVID-19 transmission and testing the results in real-life situations.
In the present study, we consider a new mathematical model of COVID-19 transmission that incorporates the lockdown effect (see Fig~\ref{Fig:Flow_India_covid}). In our models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1}, we also consider transmission variability between the symptomatic and asymptomatic populations, with the former being faster spreaders of the disease. Using daily time-series data of notified COVID-19 cases from five states (Maharashtra, Delhi, Tamil Nadu, Gujarat and Punjab) and overall India, we studied the effect of lockdown measures on the reduction of notified cases in those regions. Our results suggest that lockdown will be effective in those locations where a higher percentage of symptomatic infection exists in the population, and that large-scale COVID-19 mass testing is required to reduce community infection in those locations. Using a post-processing BMA technique, we ensembled the predictions of our mathematical model with the results obtained from different statistical forecast models. Our ensemble model forecast of COVID-19 daily notified cases during May 17, 2020 till May 31, 2020 suggests a very high rise in COVID-19 notified cases in most of the locations during this period. Furthermore, the estimates of the effective reproduction number ($R_{t}$) over the same period indicate that if the lockdown measures are completely removed after May 17, 2020, a high spike in COVID-19 notified cases may be seen in those locations during the forecasting period. We suggest that the Indian Govt. and policy makers adopt the following steps for effective containment of COVID-19 transmission:
\begin{enumerate}
\item Perform a survey to find the percentage of symptomatic infection in different states and regions.
\item Focus on implementing extensive lockdown only in those locations where the percentage of symptomatic infection is high.
\item Provide relaxation of the lockdown in other locations for some time. This process will increase the percentage of symptomatic infection.
\item Repeat step 2 when a region has a sufficient percentage of symptomatic infection.
\end{enumerate}
There are some drawbacks in our study that may be addressed in the future. We assume that the lockdown population ($L$) and the notified \& hospitalized infected population ($C$) do not mix with the general population in the community. However, there is considerable evidence of disease transmission from hospitals and from home-confined individuals~\cite{NDTVhospital20}. We leave these challenges as future objectives.
\section*{Conflict of interests}
The authors declare that they have no conflicts of interest.
\section*{Acknowledgments}
The authors are grateful to the editor-in-chief, handling editor and learned reviewers for their comments and suggestions on the earlier version of this manuscript, which immensely improved the standard of this article.\\
Dr. Tridip Sardar acknowledges the Science \& Engineering Research Board (SERB) major project grant (File No: EEQ/2019/000008 dt. 4/11/2019), Government of India.\\
Sk Shahid Nadim receives funding as a senior research fellow from the Council of Scientific \& Industrial Research (Grant No: 09/093(0172)/2016/EMR-I), Government of India, New Delhi.\\
The Funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
\clearpage
\bibliographystyle{ieeetr}
\biboptions{square}
\section{Introduction}
As of May 15, 2020, 4593395 cases and 306376 deaths from 2019 novel coronavirus disease (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), were recorded worldwide \cite{csse2020}. Coronaviruses are enveloped non-segmented positive-sense RNA viruses that belong to the Coronaviridae family and the order Nidovirales, and are widely distributed among humans and other mammals \cite{richman2016clinical}. The novel coronavirus, COVID-19 started in mainland China, with a geographical emphasis at Wuhan, the capital city of Hubei province \cite{wang2020novel} and has widely spread all over the world. Many of the initial cases were usually introduced to the wholesale Huanan seafood market, which also traded live animals. Clinical trials of hospitalized patients found that patients exhibit symptoms consistent with viral pneumonia at the onset of COVID-19, most commonly fever, cough, sore throat and fatigue \cite{cdcgov2020}. Some patients reported changes in their ground-glass lungs; normal or lower than average white lymphocyte blood cell counts and platelet counts; hypoxemia; and deranged liver and kidney function. Most were said to be geographically related to the wholesale market of Huanan seafood \cite{chinadaily2019}. Severe outbreaks occur in USA ($1457593$ cases), Spain ($272646$ cases), Russia (262843 cases), UK (236711 cases), Italy (223885) and so many countries and the disease continues to spread globally. This has been declared a pandemic by the World Health Organization. It is the third zoonotic human coronavirus that has arisen in the present century, after the $2002$ severe acute respiratory syndrome coronavirus (SARS-CoV), which spread to 37 countries and the 2012 Middle East respiratory syndrome coronavirus (MERS-CoV), which spread to 27 countries.
The 2019 pandemic novel coronavirus was first confirmed in India on January 30, 2020, in the state of Kerala. A total of $82087$ confirmed cases and $2648$ deaths in the country have been reported as of May 15, 2020 \cite{indiacovid2020track}. The Indian government has introduced social distance as a precaution to avoid the possibility of a large-scale population movement that can accelerate the spread of the disease. India government implemented a 14-hour voluntary public curfew on 22 March 2020. Furthermore, the Prime Minister of India also ordered a nationwide lockdown at midnight on March 24, 2020 to slow the
spread of COVID-19. Despite no vaccine, social distancing has identified as the most commonly used prevention and control strategy \cite{ferguson2020report}. The purpose of these initiatives is the restriction of social interaction in workplaces, schools, and other public spheres, except for essential public services such as fire, police, hospitals. No doubt the spread of this virus outbreak has seriously disrupted the life, economy and health of citizens. This is a great concern for everyone how long this scenario will last and when the disease will be controlled.
Mathematical modeling based on system of differential equations may provide a comprehensive mechanism for the dynamics of a disease transmission~\cite{sardar2020realistic}. Several modeling studies have already been performed for the COVID-19 outbreak \cite{tang2020updated,quilty2020effectiveness,shen2020modelling,tang2020estimation, wu2020nowcasting}. Based on data collected from December 31, 2019 till January 28, 2020, Wu et al~\cite{wu2020nowcasting} developed a susceptible exposed infectious recovered model (SEIR) to clarify the transmission dynamics and projected national and global spread of disease. They also calculated around 2.68 is the basic reproductive number for COVID-19. Tang et al~\cite{tang2020estimation} proposed a compartmental deterministic model that would combine the clinical development of the disease, the epidemiological status of the patient and the measures for intervention. Researchers found that the amount of control reproduction number may be as high as 6.47, and that methods of intervention including intensive touch tracing followed by quarantine and isolation would effectively minimize COVID cases \cite{tang2020estimation}.
For the basic reproductive number, Read et al. reported a value of 3.1 based on the data fitting of an SEIR model, using an assumption of Poisson-distributed daily time increments \cite{read2020novel}. A report by Cambridge University has indicated that India's countrywide three-week lockdown would not be adequate to prevent a resurgence of the new coronavirus epidemic that could bounce back in months and cause thousands of infections \cite{singh2020age}. They suggested that two or three lockdowns can extend the slowdown longer with five-day breaks in between or a single 49-day lockdown. Data-driven mathematical modeling plays a key role in disease prevention, planning for future outbreaks and determining the effectiveness of control. Several data-driven modeling experiments have been performed in various regions \cite{tang2020estimation,chen2020mathematical}.
Currently, there are very limited works that studied the impact of lockdown on COVID-19 transmission dynamics in India.
In the present manuscript, we proposed a new mathematical model for COVID-19 that incorporates the lockdown effect. We also considered variability in transmission between symptomatic and asymptomatic populations with former being a fast spreader of the disease. Analyzing COVID-19 daily notified cases from five affected states (Maharashtra, Delhi, Tamil Nadu, Gujarat and Punjab) and from overall India, we studied the effect of social distancing measures implemented by the Govt. on notified cases reduction in those regions. We also estimates the basic reproduction numbers ($R_{0}$) for these six locations. Using a post-processing BMA technique, we ensemble our COVID-19 mathematical model with different statistical forecast model to obtain a projection of notified cases in those six locations for the period May 17, 2020 till May 31, 2020. A global sensitivity analysis is carried out to determine the correlation of two epidemiologically measurable parameters on lockdown effect and also on $R_{0}$. Finally to determine the COVID-19 transmission trend during the projection period (May 17, 2020 till May 31, 2020), we estimate the effective reproduction number ($R_{t}$) for the mentioned six locations.
\section{Method}
Based on the development and epidemiological characteristics of COVID-19, a SEIR type model is more appropriate to study the dynamics of this current pandemic \citep{kucharski2020early, peng2020epidemic, sardar2020realistic}. The model we developed in this paper is based on the interaction of seven mutually exclusive sub-classes namely, Susceptible ($S$), Lockdown ($L$), Exposed ($E$), Asymptomatic ($A$), Symptomatic ($I$), Hospitalized ($C$), and Recovered ($R$).
Susceptible population ($S$) increased due to constant recruitment rate $\Pi_{H}$ and those individuals coming back from lockdown compartment after the lockdown period $ \displaystyle\frac{1}{\omega}$. Population in the susceptible class decreased due to new infection in contact with symptomatic and asymptomatic infected population, natural death and also a fraction of the susceptible individuals become home-quarantine due to lockdown at a rate $l$. We also assumed variability in disease transmission in asymptomatic and symptomatic population with later being a fast spreader of infection with a variability factor $\left( 0 \leq \rho \leq 1\right)$~\citep{gumel2004modelling, mandal2020prudent}.
Lockdown population ($L$) increased by those susceptible who are home-quarantined during the lockdown period $ \displaystyle\frac{1}{\omega}$, at a rate $l$. Population under lockdown is decreased due to natural death and those individuals who become susceptible again after the lockdown period $ \displaystyle\frac{1}{\omega}$. For simplicity, we assume ideal lockdown scenario \textit{i.e.} all population under lockdown maintained proper social distancing and do not contribute to new infection.
Population in the exposed compartment ($E$) increased by new infection coming from susceptible compartment. A fraction $\kappa$ of the exposed individuals become symptomatic infected and remaining fraction ($1-\kappa$) become asymptomatic infected after the disease incubation period $\frac{1}{\sigma}$. Exposed population also decreased due to natural death at a rate $\mu$.
Asymptomatic infected compartment ($A$) increased due to a fraction ($1-\kappa$) of infection coming from exposed compartment. Since, asymptomatic COVID-19 cases are hard to detect therefore, we assume that asymptomatic infection are not notified. Population in this compartment is decreased due to natural recovery and deaths at a rate $\gamma_1$ and $\mu$, respectively.
Population in the symptomatic infected compartment ($I$) increased due to a fraction $\kappa$ of infection coming from exposed compartment after the incubation period $\frac{1}{\sigma}$. This compartment decreased due to natural recovery at a rate $\gamma_2$, natural death at a rate $\mu$ and those infected population who are notified \& hospitalized at a rate $\tau$.
Notified \& hospitalized infected population ($C$) increased due to influx of infection coming from symptomatic infected class at a rate $\tau$. This population decreased due to natural death at a rate $\mu$, disease related deaths at a rate $\delta$, and recovery from COVID-19 at a rate $\gamma_3$. We assume that population of this compartment do not mix with the general population in the community \textit{i.e.} this compartment do not contribute in the COVID-19 transmission.
Finally, recovered population ($R$) increased due to influx of individuals coming from asymptomatic ($A$), symptomatic ($I$), and notified \& hospitalized individuals ($C$) at a rate $\gamma_1$, $\gamma_2$, and $\gamma_3$, respectively. As we are analyzing this study in a shorter time frame therefore, we assume definitive immunity \textit{i.e.} recovered population do not contribute to new COVID-19 infection. Thus recovered population decreased due to natural death at a rate $\mu$.
Based on the above assumptions the system of equations that represent COVID-19 transmission with and without lockdown are provided below:
\subsection*{\textbf{Model without lock-down}}
\begin{eqnarray}\label{EQ:eqn 2.1}
\displaystyle{ \frac{dS}{dt} } &=& \Pi_{H}-\frac{\beta_1 I S}{\left(N - C\right) }-\frac{\rho \beta_1 A S}{\left(N-C \right) }-\mu S\nonumber\\
\displaystyle{ \frac{dE}{dt} } &=& \frac{\beta_1 I S}{\left(N - C\right) }+\frac{\rho \beta_1 A S}{\left(N-C \right) }-(\mu+\sigma)E,\nonumber \\
\displaystyle{ \frac{dA}{dt} } &=& (1-\kappa)\sigma E-(\gamma_1+\mu)A,\nonumber \\
\displaystyle{ \frac{dI}{dt} } &=& \kappa \sigma E-(\gamma_2+\tau+\mu)I, \\
\displaystyle{\frac{dC}{dt} } &=& \tau I-(\delta+\gamma_3+\mu)C, \nonumber \\
\displaystyle{ \frac{dR}{dt} } &=& \gamma_1 A+\gamma_2 I+\gamma_3 C-\mu R ,\nonumber
\end{eqnarray}
\subsection*{\textbf{Model with lock-down}}
\begin{eqnarray}\label{EQ:eqn 3.1}
\displaystyle{ \frac{dS}{dt} } &=& \Pi_{H} + \omega L -\frac{\beta_1 I S}{\left( N-L-C\right) }-\frac{\rho \beta_1 A S}{\left( N-L-C \right) }-\mu S-lS\nonumber\\
\displaystyle{ \frac{dL}{dt} } &=& l S - (\mu + \omega) L,\nonumber\\
\displaystyle{ \frac{dE}{dt} } &=& \frac{\beta_1 I S}{\left( N-L-C\right) }+\frac{\rho \beta_1 A S}{\left( N-L-C \right) }-(\mu+\sigma)E,\nonumber \\
\displaystyle{ \frac{dA}{dt} } &=& (1-\kappa)\sigma E-(\gamma_1+\mu)A,\nonumber \\
\displaystyle{ \frac{dI}{dt} } &=& \kappa \sigma E-(\gamma_2+\tau+\mu)I, \\
\displaystyle{\frac{dC}{dt} } &=& \tau I-(\delta+\gamma_3+\mu)C, \nonumber \\
\displaystyle{ \frac{dR}{dt} } &=& \gamma_1 A+\gamma_2 I+\gamma_3 C-\mu R .\nonumber
\end{eqnarray}A diagram of our model is provided in Fig~\ref{Fig:Flow_India_covid}. Information of our model~ parameters is provided in Table~\ref{tab:mod1}.
\subsection*{\textbf{Mathematical properties of the model}}
We studied the positivity and boundedness of solution of the model \eqref{EQ:eqn 2.1} (see supplementary appendix). The system \eqref{EQ:eqn 2.1} demonstrates two equilibria, that is, the disease-free equlibrium and an unique endemic equilibrium (see supplementary appendix). The disease-free state is locally asymptotically stable whenever the corresponding basic reproduction number ($R_0$) is less than unity (see supplementary appendix). By using a nonlinear Lyapunov function, it is also seen that the disease-free equilibrium is globally asymptotically stable whenever $R_0<1$ (see supplementary appendix). In addition, the model \eqref{EQ:eqn 2.1} has an unique endemic equilibrium if $R_0$ exceeds unity. Furthermore, using the central manifold theory, the local stability of the endemic equilibrium is established whenever $R_0>1$ (see supplementary appendix).
\subsection*{\textbf{Data}}
Daily COVID-19 reported cases from Maharashtra (MH), Delhi (DL), Tamil Nadu (TN), Gujarat (GJ), Punjab (PJ) and whole India (IND) for the time period March 14, 2020 till May 3, 2020 are considered for our study. These five states are deeply affected by current COVID-19 outbreak in India \citep{indiacovid2020track}. Daily COVID-19 notified cases were collected from \citep{indiacovid2020track}. Demographic data of the different locations are taken from \cite{aadhaar20, Nitiayog2020}.
\subsection*{\textbf{Estimation procedure}}
Several important epidemiological parameters (see Table~\ref{tab:mod1}) of our mathematical model~\eqref{EQ:eqn 3.1} are estimated using COVID-19 daily reported cases from the mentioned six locations. Total time duration of lockdown implemented by Govt. is 54 days starting from March 25, 2020 till May 17, 2020. Time-series data of daily COVID-19 cases in our study for the locations MH, DL, TN, GJ, PJ, and IND, respectively contains both with and without lockdown effect. Therefore, a combination of our mathematical models~\eqref{EQ:eqn 2.1}\&~\eqref{EQ:eqn 3.1} (with and without lockdown) are used for calibration. From our models~\eqref{EQ:eqn 2.1}\&~\eqref{EQ:eqn 3.1}, new COVID-19 notified cases during the $i^{th}$ time interval $\left[t_{i}, t_{i}+\Delta t_{i}\right]$ is
\begin{equation}
\displaystyle H_{i} (\hat{\theta}) = \displaystyle \tau \int_{t_{i}}^{t_{i} + \Delta t_{i}} I(\xi, \hat{\theta}) \hspace{0.2cm} d\xi,\\
\label{EQ:new-cases-from-model}
\end{equation} where, $\Delta t_{i}$ is the time step length and $\hat{\theta}$ is the set of unknown parameters of the models~\eqref{EQ:eqn 2.1}\&~\eqref{EQ:eqn 3.1} that are estimated. Then $K$ observation from the data and from the models~\eqref{EQ:eqn 2.1}\&~\eqref{EQ:eqn 3.1} are $\left\lbrace D_{1}, D_{2},..., D_{K} \right\rbrace$ and $\lbrace H_{1} (\hat{\theta}) , H_{2} (\hat{\theta}),...., H_{K} (\hat{\theta}) \rbrace $, respectively. Therefore, we constructed the sum of squares function~\citep{sardar2013optimal} as:
\begin{equation}
\displaystyle SS (\hat{\theta}) = \displaystyle \sum_{i = 1}^{K} \left[D_{i} - H_{i} (\hat{\theta})\right]^2,\\
\label{EQ:sum-of-square-function}
\end{equation}
MATLAB based nonlinear least square solver $fmincon$ is used to fit simulated and observed daily COVID-19 notified cases for the mentioned states and the whole country. Delayed Rejection Adaptive Metropolis~\citep{haario2006dram} (DRAM) algorithm is used to sample the 95\% confidence region. An elaboration of this model fitting technique is provided in \citep{sardar2017mathematical}.
\subsection*{\textbf{Statistical forecast models and the ensemble model}}
COVID-19 mathematical model we developed in this study may be efficient in capturing the transmission dynamics. However, as solution of the mathematical model is always smooth therefore, our model may not be able to replicate the fluctuations occurring in daily time-series data. Moreover, forecast of future COVID-19 cases based on a single mathematical model may not be very reliable approach. For this purpose, we used two statistical forecast models namely, Auto-regressive Integrated Moving Average (ARIMA); and ARMA errors, Trend and Seasonal components (TBATS) respectively. A Hybrid statistical model (HYBRID) based on the combination of ARIMA and TBATS is also used during forecast. Calibration of three statistical forecast models (ARIMA, TBATS and HYBRID) using COVID-19 daily notified cases from MH, DL, TN, GJ, PJ, and IND, respectively during March 14, 2020 till May 3, 2020, are done using the $R$ package 'forecastHybrid'~\cite{Forecast2020}. Each individuals models (ARIMA and TBATS) are first fitted to the aforesaid time-series data and then we combined each models with weightage based on in sample error to obtain the HYBRID model~\cite{Forecast2020}. Prediction skill of the each three statistical forecast model (ARIMA, TBATS and HYBRID) are tested on the daily COVID-19 notified cases during May 4, 2020 till May 8, 2020 for each of the six locations (see supplementary Table~\ref{Tab:Goodness-of-Fit}). Based on the prediction skill (see supplementary Table~\ref{Tab:Goodness-of-Fit}), the best statistical forecast model is ensemble with our COVID-19 mathematical models~\eqref{EQ:eqn 2.1} and~\eqref{EQ:eqn 3.1}. A post-processing BMA technique based on 'DRAM' algorithm \cite{haario2006dram} is used to determine the weightage (see supplementary Table~\ref{Tab:estimated-weights} and Fig~\ref{Fig:Marginal-distribution-MH} to Fig~\ref{Fig:Marginal-distribution-IND}) to combine the best statistical model with the COVID-19 mathematical models~\eqref{EQ:eqn 2.1} and~\eqref{EQ:eqn 3.1}.
\subsection*{\textbf{Disease forecasting under different lockdown scenario}}
Govt. have implemented lockdown all over India on March 25, 2020 and it will continue till May 17, 2020. The short and medium scale industries are largely affected by the lockdown~\cite{Economic_Times2, India_Today4}. To partially recover the economy, Govt. of India continuously relaxing the lockdown rules from April 20, 2020 \cite{Economictimes2020a, financialexpress20a, TheHindu20a}. To forecast COVID-19 cases for the period May 17, 2020 till May 31, 2020, for the six locations (MH, DL, TN, GJ, PJ and IND) based on the Govt. strategy, we considered following scenarios:\vspace{0.2cm}
Forecast based on current lockdown rate: We have estimated the average lockdown rate for our COVID-19 mathematical model (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}). Using this lockdown rate and using other parameters (estimated and known) of our mathematical models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1}, we forecast COVID-19 notified cases during May 17, 2020 till May 31, 2020 for the locations MH, DL, TN, GJ, PJ, and IND, respectively. Finally, forecast based on our mathematical model is ensemble with the result based on the best statistical forecast model for a location mentioned earlier.\vspace{0.2cm}
Forecast based on 15\% reduction in current lockdown rate: We followed same procedure as the previous scenario with 15\% decrement in the estimate of lockdown rate~(see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtained the forecast during the mentioned time period.\vspace{0.2cm}
Forecast based on 20\% reduction in current lockdown rate: we followed the same procedure as previous two scenarios with 20\% decrement in the estimate of lockdown rate (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtained the forecast during the mentioned time period. \vspace{0.2cm}
Forecast based on 30\% reduction in current lockdown rate: we followed the same procedure as previous three scenarios with 30\% decrement in the estimate of lockdown rate (see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}) to obtain the forecast during the mentioned time period.\vspace{0.2cm}
Forecast based on no lockdown: Continue as earlier and assuming lockdown is lifted after May 17, 2020, we forecast COVID-19 notified cases during May 17, 2020 till May 31, 2020, for the six mentioned locations.\vspace{0.2cm}
\subsection*{\textbf{Estimation of the basic and the effective reproduction number}}
Since we assumed that population under the lockdown do not contact with the infection from the community therefore the basic reproduction number ($R_{0}$) \citep{van2002reproduction} for our mathematical model with and without lockdown~(see Fig~\ref{Fig:Flow_India_covid} and supplementary method) are same and its expression is provided below:
\begin{align*}
R_0=\frac{\beta_1\kappa \sigma}{(\mu+\sigma)(\gamma_2+\tau+\mu)}+\frac{\rho \beta_1(1-\kappa)\sigma}{(\mu+\sigma)(\gamma_1+\mu)}.
\end{align*}
The effective reproductive number ($R_{t}$) is defined as the expected number of secondary infection per infectious in a population made up of both susceptible and non-susceptible hosts \citep{rothman2008modern}. If $R_{t} > 1$, the number of new cases will increase, for $R_{t} =1$, the disease become endemic, and when $R_{t} < 1$ there will be a decline in new cases.
Following~\citep{rothman2008modern}, the expression of $R_{t}$ is given as follows:
\begin{align*}
\displaystyle R_t= R_{0} \times \hat{s},
\end{align*} where, $\hat{s}$ is the fraction of the host population that is susceptible.
$R_{0}$ can easily be estimated by plugin the sample values of the unknown parameters (see Table \ref{Tab:estimated-parameters-Table}) of the model without lockdown~\eqref{EQ:eqn 2.1} in the expressions of $R_{0}$.
Following procedure is adapted to estimate $R_{t}$ during May 17, 2020 till May 31, 2020 under two lockdown scenarios:
\begin{description}
\item[$\bullet$] Using current estimate of the lockdown rate and different parameters of our mathematical model~(see Table~\ref{tab:mod1} and Table~\ref{Tab:estimated-parameters-Table}), we estimate $\hat{s}$ and $R_{t}$ during May 17, 2020 till May 31, 2020, for the locations MH, DL, TN, GJ, PJ and IND, respectively.
\item[$\bullet$] Using the different parameters~(see Table~\ref{tab:mod1} and Table \ref{Tab:estimated-parameters-Table}) of our mathematical model without lockdown~\eqref{EQ:eqn 2.1}, we estimate $\hat{s}$ and $R_{t}$ during May 17, 2020 till May 31, 2020 for the mentioned six locations.
\end{description}
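\par For concreteness, both quantities can be computed directly from the expressions above. The following sketch mirrors the symbols in the text; the parameter values to be supplied are those of Table~\ref{Tab:estimated-parameters-Table}, and the function names are illustrative rather than our actual estimation code.
\begin{verbatim}
def basic_reproduction_number(beta1, kappa, sigma, mu,
                              gamma1, gamma2, tau, rho):
    # R0 = symptomatic route + asymptomatic route (expression above)
    symptomatic  = (beta1 * kappa * sigma
                    / ((mu + sigma) * (gamma2 + tau + mu)))
    asymptomatic = (rho * beta1 * (1 - kappa) * sigma
                    / ((mu + sigma) * (gamma1 + mu)))
    return symptomatic + asymptomatic

def effective_reproduction_number(R0, s_hat):
    # R_t = R0 * s_hat, with s_hat the susceptible fraction
    return R0 * s_hat
\end{verbatim}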
\subsection*{\textbf{Sensitivity analysis and effective lockdown strategy}}
Determining an effective lockdown policy for the six locations (MH, DL, TN, GJ, PJ and IND) requires relating the lockdown effect to some epidemiologically measurable parameters of our mathematical model~(see Fig~\ref{Fig:Flow_India_covid}). Among the several important parameters of our mathematical model~(see Table~\ref{tab:mod1}), two are measurable: $\kappa$, the fraction of new infections that become symptomatic (COVID-19 testing will provide an accurate estimate), and $\tau$, the average notification \& hospitalization rate of symptomatic COVID-19 infections (this parameter is proportional to the number of COVID-19 tests). The lockdown effect is measured as the difference between the total number of cases projected by our ensemble model with and without lockdown. A global sensitivity analysis~\cite{marino2008methodology} is performed to determine the effect of these two parameters on the lockdown effect and on the basic reproduction number ($R_{0}$). Using Latin hypercube sampling (LHS), we draw $1000$ samples each for $\kappa$ and $\tau$ from their respective ranges~(see Table~\ref{tab:mod1}). Partial rank correlations and their corresponding $p$-values are examined to determine the relation of the two parameters with the lockdown effect and with $R_{0}$, respectively. A sketch of this computation is given below.
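\par The LHS--PRCC computation can be sketched as follows; this is a minimal illustration (the sampling ranges shown are placeholders for the actual ranges of Table~\ref{tab:mod1}), with the partial rank correlation obtained by rank-transforming the samples and correlating regression residuals.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc, rankdata, pearsonr

def prcc(X, y):
    # Partial rank correlation of each column of X with the output y.
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    results = []
    for j in range(R.shape[1]):
        A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        rx_res = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        ry_res = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        results.append(pearsonr(rx_res, ry_res))   # (PRCC, p-value)
    return results

sampler = qmc.LatinHypercube(d=2, seed=0)          # 1000 LHS samples
U = sampler.random(n=1000)
kappa = 0.1 + 0.8 * U[:, 0]                        # placeholder range
tau   = 0.1 + 0.9 * U[:, 1]                        # placeholder range
# y = lockdown effect (or R0) evaluated at each (kappa, tau) sample;
# prcc(np.column_stack([kappa, tau]), y) then gives the PRCCs.
\end{verbatim}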
\section{Results and discussion}
The fits of the three models (mathematical, statistical forecast and ensemble) to daily COVID-19 notified cases during March 14, 2020 till May 3, 2020, for Maharashtra (MH), Delhi (DL), Tamil Nadu (TN), Gujarat (GJ), Punjab (PJ), and India (IND) are depicted in Fig~\ref{Fig:Model-fitting}. Among the three statistical forecast models (ARIMA, TBATS and HYBRID), the ARIMA model performed better on the test prediction data (May 4, 2020 till May 8, 2020) of DL and GJ (see supplementary Table~\ref{Tab:Goodness-of-Fit}), whereas the TBATS model provided better results than the other two models for the test prediction data of PJ (see supplementary Table~\ref{Tab:Goodness-of-Fit}). For the remaining three locations (MH, TN and IND), the HYBRID model provided the best results (see supplementary Table~\ref{Tab:Goodness-of-Fit}) on the test prediction data. The ensemble model, which is a combination of our COVID-19 mathematical models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1} and the best (region-specific) statistical forecast model, performs well in capturing the COVID-19 daily time-series trend in all six locations. Posterior distributions of the weights at which we combine our COVID-19 mathematical model with the best statistical forecast model in the six locations are provided in the supplementary method (see Fig~\ref{Fig:Marginal-distribution-MH} to Fig~\ref{Fig:Marginal-distribution-IND} and Table~\ref{Tab:estimated-weights}).
In MH, DL, and GJ, the estimates of the symptomatic influx fraction ($\kappa$) suggest that a low percentage (about 11\% to 20\%) of infections in the population are symptomatic (see Table~\ref{Tab:estimated-parameters-Table}). However, in TN and PJ, a relatively higher percentage (about 82\% to 88\%) of symptomatic infections is found (see Table~\ref{Tab:estimated-parameters-Table}). For overall India, our estimate shows that currently about 62\% of new infections are symptomatic (see Table~\ref{Tab:estimated-parameters-Table}). Except for GJ, the estimates of the transmission rate ($\beta_{1}$) in the other five locations are found to be on the same scale (see Table~\ref{Tab:estimated-parameters-Table}); a relatively higher value of $\beta_{1}$ is found in Gujarat (see Table~\ref{Tab:estimated-parameters-Table}). The low value of the transmission variability factor ($\rho$) indicates that most of the community infections in GJ are due to contact with the symptomatic infected population. As the value of $\kappa$ in GJ is found to be small (about 11\%), a relatively small symptomatic population is producing most of the infections in GJ. This indicates the possible existence of super-spreaders among the symptomatic infected in GJ, an observation that agrees with a recent survey result in GJ~\cite{NDTVsuperspreaders}. Except for PJ, the estimates of $\rho$ in the other five locations are found to be low (below 50\%) (see Table~\ref{Tab:estimated-parameters-Table}). This indicates a small contribution of the asymptomatic infected population towards the new infections produced in MH, DL, TN, GJ and IND, respectively. Estimates of the lockdown rate in the five states (MH, DL, TN, GJ and PJ) suggest that around 50\% to 88\% of the total susceptible population was successfully home quarantined during the lockdown period (see Table~\ref{Tab:estimated-parameters-Table}). Thus, the lockdown is overall successful in those five states. However, this is not the case for overall India: our estimate suggests that only about 11\% of the total susceptible population in India maintained proper social distancing during the lockdown period (see Table~\ref{Tab:estimated-parameters-Table}).
Our estimates of the basic reproduction number ($R_{0}$)~(see Table~\ref{Tab:estimated-R0-Table}) in the six locations are found to be in good agreement with the world-wide estimates reported by the WHO~\cite{liu2020reproductive}. We performed a global sensitivity analysis of two epidemiologically measurable parameters of our mathematical model~(see Fig~\ref{Fig:Flow_India_covid}), namely $\kappa$, the fraction of new infections that become symptomatic, and $\tau$, the average notification \& hospitalization rate of symptomatic COVID-19 infections, on $R_{0}$. Partial rank correlations and their corresponding $p$-values (see Fig~\ref{Fig:sensitivity-analysis-R0}) suggest that $\tau$ has a negative correlation with $R_{0}$. Thus, more testing will isolate more infections from the community and may therefore reduce COVID-19 community transmission. Furthermore, the high positive correlation of $\kappa$ with $R_{0}$ (see Fig~\ref{Fig:sensitivity-analysis-R0}) indicates the possibility of high COVID-19 transmission in those areas where the population has a higher percentage of symptomatic infections.
The ensemble model forecast of notified COVID-19 cases between May 17, 2020 and May 31, 2020 (see Table~\ref{Tab:cases-preiction-Table}, Fig.~\ref{Fig:Prediction-cases-India}, and Fig~\ref{Fig:Prediction-cases-MH} to Fig~\ref{Fig:Marginal-distribution-PJ} in the supplementary appendix) indicates that in the coming few days a high increment in COVID-19 notified cases may be observed in MH, DL, TN, GJ, PJ, and IND. Furthermore, our ensemble model prediction during the mentioned period suggests that around 117645 to 128379 cases may occur in overall India (see Table~\ref{Tab:cases-preiction-Table}). These numbers are much higher than the total cumulative cases between March 2, 2020 and May 15, 2020, in the whole of India.
A global sensitivity analysis of $\kappa$ and $\tau$ on the lockdown effect suggests that both of these parameters have a high positive correlation with the lockdown effect in all six locations (see Fig.~\ref{Fig:Sensitivity-lockdown-effect}). Therefore, the lockdown will be effective in those regions where a higher percentage of symptomatic infections is found in the population, and larger COVID-19 mass testing will be required to isolate the cases.
To measure the COVID-19 transmission trend during May 17, 2020 till May 31, 2020, we estimated the effective reproduction number ($R_{t}$) during the mentioned period for MH, DL, TN, GJ, PJ and IND, respectively (see Fig.~\ref{Fig:Effective-reproduction-number}). Our results suggest that a decreasing trend in new notified COVID-19 cases ($R_{t} < 1$) may be seen after May 31, 2020 if the current lockdown measures (see Table~\ref{Tab:estimated-parameters-Table}) are maintained in DL, TN and PJ, respectively. Furthermore, if social distancing measures are removed after May 17, 2020, we may see a rise in daily COVID-19 cases in all six locations (see Fig.~\ref{Fig:Effective-reproduction-number}).
\section{Conclusion}
Up to May 15, 2020, the total numbers of reported COVID-19 cases and deaths in India are \textbf{81794} and \textbf{2649}, respectively \citep{indiacovid2020track}. This tally rises by a few thousand new notified cases every day, reported from different locations in India \citep{indiacovid2020track}. Currently, there is no treatment or vaccine available for COVID-19. Therefore, the only measure to control the outbreak may be home quarantining (lockdown) of a large percentage of the susceptible population. However, this disease control strategy may have some negative impact on the economy. Therefore, it is of utmost importance to determine an effective lockdown policy that may reduce COVID-19 transmission in the community as well as save the Indian economy from drowning. Such a policy may be found by studying the dynamics and predictions of a mechanistic mathematical model for COVID-19 transmission and testing the results in real-life situations.
In this present study, we consider a new mathematical model of COVID-19 transmission that incorporates the lockdown effect (see Fig~\ref{Fig:Flow_India_covid}). In our models~\eqref{EQ:eqn 2.1} \&~\eqref{EQ:eqn 3.1}, we also considered transmission variability between the symptomatic and asymptomatic populations, with the former being a faster spreader of the disease. Using daily time-series data of notified COVID-19 cases from five states (Maharashtra, Delhi, Tamil Nadu, Gujarat and Punjab) and overall India, we studied the effect of lockdown measures on the reduction of notified cases in those regions. Our results suggest that the lockdown will be effective in those locations where a higher percentage of symptomatic infections exists in the population. Furthermore, large-scale COVID-19 mass testing is required to reduce community infection in those locations. Using a post-processing BMA technique, we ensemble the prediction of our mathematical model with the results obtained from different statistical forecast models. Our ensemble model forecast of COVID-19 daily notified cases during May 17, 2020 till May 31, 2020, suggests a very high rise in COVID-19 notified cases in most of the locations during the mentioned time duration. Furthermore, estimation of the effective reproduction number ($R_{t}$) during the mentioned time duration indicates that if the lockdown measures are completely removed after May 17, 2020, in those locations, a high spike in COVID-19 notified cases may be seen during the forecasting period. We suggest that the Indian Government and policy makers adopt the following steps for effective containment of COVID-19 transmission:
\begin{enumerate}
\item Perform a survey to find the percentage of symptomatic infections in different states and regions.
\item Focus on implementing extensive lockdowns only in those locations where the percentage of symptomatic infections is high.
\item Provide relaxation of the lockdown in other locations for some time. This process will increase the percentage of symptomatic infections.
\item Repeat step-2 when a region attains a sufficient percentage of symptomatic infections.
\end{enumerate}
There are some drawbacks in our study that may be addressed in future work. We assumed that the lockdown population ($L$) and the notified \& hospitalized infections ($C$) do not mix with the general population in the community. However, there is considerable evidence of disease transmission from hospitals and from home-confined individuals~\cite{NDTVhospital20}. We leave these challenges as future objectives.
\section*{Conflict of interests}
The authors declare that they have no conflicts of interest.
\section*{Acknowledgments}
The authors are grateful to the editor-in-chief, handling editor and learned reviewers for their comments and suggestions on the earlier version of this manuscript. The comments immensely improved the standard of this article.\\
Dr. Tridip Sardar acknowledges the Science \& Engineering Research Board (SERB) major project grant (File No: EEQ/2019/000008 dt. 4/11/2019), Government of India.\\
Sk Shahid Nadim receives a senior research fellowship from the Council of Scientific \& Industrial Research (Grant No: 09/093(0172)/2016/EMR-I), Government of India, New Delhi.\\
The Funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
\clearpage
\bibliographystyle{ieeetr}
\biboptions{square}
\newcommand{\SECTION}[1]{\bigskip{\large\section{\bf #1}}}
\newcommand{\SUBSECTION}[1]{\bigskip{\large\subsection{\bf #1}}}
\newcommand{\SUBSUBSECTION}[1]{\bigskip{\large\subsubsection{\bf #1}}}
\begin{titlepage}
\begin{center}
\vspace*{2cm}
{\large \bf Muon decays in the Earth's atmosphere, differential aging and the paradox
of the twins}
\vspace*{1.5cm}
\end{center}
\begin{center}
{\bf J.H.Field}
\end{center}
\begin{center}
{
D\'{e}partement de Physique Nucl\'{e}aire et Corpusculaire
Universit\'{e} de Gen\`{e}ve, 24, quai Ernest-Ansermet,
CH-1211 Gen\`{e}ve 4.
}
\newline
\newline
E-mail: [email protected]
\end{center}
\vspace*{2cm}
\begin{abstract}
Observation of the decay of muons produced in the Earth's atmosphere by
cosmic ray interactions provides a graphic illustration of the
counter-intuitive space-time predictions of special relativity theory.
Muons at rest in the atmosphere, decaying simultaneously, are
subject to a universal time-dilatation effect when viewed from a
moving frame and so are also
observed to decay simultaneously in all such frames. The analysis of
this example reveals the underlying physics of the differential aging
effect in Langevin's travelling-twin thought experiment.
\par \underline{PACS 03.30.+p}
\vspace*{1cm}
\end{abstract}
\end{titlepage}
\SECTION{\bf{Introduction}}
The present paper is one in a recent series, devoted to space-time physics, by the
present author.
In Ref.~\cite{JHFRSPB} the classical `Rockets-and-String'
~\cite {RS} and `Pole-and-Barn'~\cite{PB} paradoxes of special relativity were re-analysed taking
into account the distinction between the real and apparent\footnote{i.e. as naively predicted by
the standard space-time Lorentz transformation} positions of uniformly moving objects. Different
results were obtained from those of the usual text-book interpretations of these experiments and a new
causality-violating paradox was pointed out. This paradox, as well as the related `backwards running
clocks' one of Soni~\cite{Soni}, was resolved in Ref.~\cite{JHFLLT} where, in order to avoid these
paradoxes as well as manifest
breakdown of translational invariance, in some applications of the standard space-time Lorentz
transformation, the use of a `local' Lorentz transformation. i.e. one where the transformed event in
the moving frame lies at the coordinate origin in this frame, was advocated. When this is done the
closely correlated `relativity of simultaneity' (RS) and `length contraction' (LC) effects of
conventional special relativity theory do not
occur. The connection between these effects is explained in Ref.~\cite{JHFRSPB}.
Ref.~\cite{JHFLLT} also contains a `mini review' of all experimental tests of special relativity
where it is pointed out that, whereas time dilatation is well-confirmed experimentally,
no experimental evidence exists for the RS and LC effects. In the later papers
~\cite{JHFUMC,JHFCRCS,JHFACOORD} it is explained how the spurious RS and LC effects result from a
misuse of the time symbols in the standard space-time Lorentz transformation.
Refs.~\cite{JHFCRCS,JHFACOORD} present
the argument in a pedagogical manner, whereas Ref.~\cite{JHFUMC} contains a concise summary of it.
\par In the following section the necessary formulae for the analysis of the muon decay thought
experiment ---essentially the prediction of a universal time dilatation effect--- are derived from
first principles. Here there is considerable overlap with work presented in
Refs.~\cite{JHFUMC,JHFCRCS,JHFACOORD}.
The analysis of the thought experiment presented in Section 3 shows the
absence of the spurious text-book RS effect: Muons which decay simultaneously
in a common proper frame, are observed to do so in all inertial frames. Finally the results
of the analysis presented in this paper are used to shed light on the physical basis
of the differential aging effect in the travelling-twin thought experiment~\cite{Langevin}.
\SECTION{\bf{Operational meaning of the space-time Lorentz transformation: Rates and spatial
separations of moving clocks}}
The Lorentz transformation (LT) relates observations ($x$,$y$,$z$,$t$) of the coordinates of space-time events in one inertial frame S,
to observations of the coordinates
($x'$,$y'$,$z'$,$t'$) of the same events in another inertial frame S'.
As is conventional, the Cartesian spatial coordinate
axes of the two frames are parallel, and the origin of the frame S' moves with constant speed, $v$,
along the $x$-axis.
In any actual experiment, times are recorded by clocks, and positions specified by marks on fixed
rulers (or their equivalent). Therefore, in order to relate the space-time coordinates appearing in the
LT to actual physical measurements they must be identified with clock readings and length interval
measurements~\cite{JHFSTP1}. This can be done in two distinct ways depending on whether the experimenter observing the
clocks and performing the length measurements is at rest in S or in S'. In the former case (only events
with spatial coordinates along the $x$-axis are considered so that $y$ = $y'$ = $z$ = $z'$ = $0$) the
appropriate LT, for a clock, C', situated at the origin of S', is:
\begin{eqnarray}
x'({\rm C}')& = & \gamma_v[x({\rm C}')-v\tau] = 0 \\
t'& = & \gamma_v[\tau-\frac{\beta_v x({\rm C}')}{c}]
\end{eqnarray}
and in the latter case, for a clock, C, situated at the origin of S, is:
\begin{eqnarray}
x({\rm C}) & = & \gamma_v[x'({\rm C})+v\tau'] = 0 \\
t& = & \gamma_v[\tau'+\frac{\beta_v x'({\rm C})}{c}]
\end{eqnarray}
In these equations $\beta_v \equiv v/c$, $\gamma_v \equiv 1/ \sqrt{1-\beta_v^2}$ and $c$ is the speed of
light in vacuum. The clocks in S and S' are synchronised so that for (2.1) and (2.2),
$t'= \tau = 0$ when $x = x'= 0$, and for (2.3) and (2.4),
$t = \tau' = 0$ when $x = x'= 0$.
In (2.1) and (2.2) the transformed events lie on the worldline of a clock, C', at rest in S',
which is observed from S. The observed time in S registered by C' (which is in motion in this frame)
is $t'$, while $\tau$ is the time registered by the clock, C, identical to C', but at rest in S. In contrast,
in (2.3) and (2.4) the transformed events lie on the worldline of C, which is observed from S'. The time
$t$ is that registered by C as observed from S' and $\tau'$ is the time registered by C' as observed
in its own proper frame. Thus two distinct experiments are possible involving one stationary and one moving
clock, depending on whether the experimenter performing the space and time measurements is in the rest
frame of one, or the other, of the two clocks. To describe both of these experiments, four different
time symbols, $\tau$, $\tau'$, $t$ and $t'$, with different operational meanings, are required.
\par From (2.1), the equation of motion of C' in S is:
\begin{equation}
x({\rm C}') = v \tau
\end{equation}
while from (2.3) the equation of motion of C in S' is:
\begin{equation}
x'({\rm C}) = -v \tau'
\end{equation}
Using (2.5) to eliminate $x$ from (2.2), and in view of the definition of $\gamma_v$:
\begin{equation}
\tau = \gamma_v t'
\end{equation}
Similarly, using (2.6) to eliminate $x'$ from (2.4) gives:
\begin{equation}
\tau' = \gamma_v t
\end{equation}
(2.7) and (2.8) are expressions of the relativistic Time Dilatation (TD) effect in the two
`reciprocal' experiments that may be performed using the clocks C and C'. They show that, according
to the LT, `moving clocks run slow' in a universal manner (no spatial coordinates appear in (2.7) and (2.8)).
In fact:
\begin{equation}
\frac{{\rm rate~of~moving~clock}}{{\rm rate~of~stationary~clock}} =\frac{ t'}{\tau}
= \frac{t}{\tau'} =\frac{1}{\gamma_v}
\end{equation}
\par To discuss measurements of the spatial separations of moving clocks, at least two clocks,
(say, ${\rm C}'_{A}$ and ${\rm C}'_{B}$, at rest in S') must be considered. It is assumed that they lie along the $x'$-axis
separated by the distance $L'$, ${\rm C}'_{A}$ being at the origin of S' and ${\rm C}'_{B}$ at $x' = L'$. The space transformation
equations analogous to (2.1) for ${\rm C}'_{A}$ and ${\rm C}'_{B}$ are then:
\begin{eqnarray}
x'({\rm C}'_{A}) & = & \gamma_v[x({\rm C}'_{A})-v\tau] = 0 \\
x'({\rm C}'_{B})-L' & = & \gamma_v[x({\rm C}'_{B})-L-v\tau] = 0
\end{eqnarray}
Inspection of (2.11) shows that $L = x({\rm C}'_{B},\tau = 0)$, a relation valid for
all values of $v$ for the choice of coordinate systems in (2.10) and (2.11).
In particular, it is valid when $v \rightarrow 0$, $\gamma_v \rightarrow 1$, and $x \rightarrow x'$. Then
for $v = 0$:
\begin{equation}
x'({\rm C}'_{B})-L' = x'({\rm C}'_{B})-L
\end{equation}
so that
\begin{equation}
L' = L
\end{equation}
The spatial separation of the clocks is therefore a Lorentz-invariant quantity.
\par Suppose now that two objects move with speeds $u_1$, $u_2$ ($u_1 > u_2$) along the positive
$x$-axis in S and that they are coincident with the origins of S and S' at the time $\tau = 0$. At later times
$\tau$, $t'$, the separation of the objects in S is $(u_1-u_2)\tau$ and in S' is $(u'_1-u'_2) t'$,
where $u'_1 - u'_2$ is the relative velocity of the objects in S'. In view of (2.13)
and the time dilatation formula (2.7) it follows that
\begin{equation}
u'_1 - u'_2 = \gamma_v (u_1-u_2)
\end{equation}
A particular case of (2.14), to be used in the following section, is $u_1 = v$, $u_2 = u'_1 = 0$ giving
\begin{equation}
-u'_2 \equiv v' = \gamma_v v
\end{equation}
where $v'$ is the observed speed of the origin of S along the negative $x$-axis in S'.
The relation (2.14) is the
transformation formula of the {\it relative velocity} of two objects between two inertial frames, to be
contrasted with the relativistic parallel velocity addition formula:
\begin{equation}
w = \frac{u-v}{1-\frac{uv}{c^2}}
\end{equation}
which relates kinematical configurations of a single moving object in the frames S and S'.
In the case $u = 0$, (2.16) gives
$w = -v$ and this equation relates the kinematical configuration in S in the primary experiment
described by (2.1) and (2.2) to that in S' in the (physically independent) reciprocal experiment
described by (2.3) and (2.4).
In contrast (2.14) describes the observed {\it relative velocity transformation} within the primary
experiment. For further discussion of this important point see Refs.~\cite{JHFSTP3,JHFRECP}.
\SECTION{\bf{Muons are clocks that demonstrate time dilatation and differential aging}}
Muon decays constitute an excellent laboratory for testing the predictions of special relativity.
For example, the TD effect of Eqn(2.7) was experimentally
confirmed at the per mille level of relative precision in the ultrarelativisic domain
($\gamma_v \simeq 30$) by observation of the decay of muons in the
storage ring of the last CERN muon $g-2$ experiment~\cite{NatureTD}.
In the present paper, it is shown that thought experiments involving muons provide a graphic illustration
of the predicted space-time behaviour, in special relativity, of clocks in different inertial frames.
\par Unlike most other unstable particles, muons are particularly suitable for
precise tests of the TD effect because of the ease of their production from pion
decay and their long mean lifetime of 2.2 $\mu$s. The former yields high event statistics and the latter
the possibility of precise time interval measurements using accurate clocks in the
laboratory frame~\cite{NatureTD}.
\par The thought experiment developed in the present paper is an elaboration of the well-known
demonstration that the very presence of cosmic muons at the Earth's surface is, by itself,
sufficient to demonstrate the existence of the TD effect~\cite{FL,TW,TL,CC}. Muons are produced predominantly
by the weak decay of charged pions $\pi^{\pm} \rightarrow \mu^{\pm} \nu$. The velocity of the muon,
$v_{\mu}$, depends upon that of the parent pion, $v_{\pi}$, and
the centre-of-mass decay angle, $\theta^{\star}$.
If the pion has the same velocity, $v_{\mu}^{\star} = c(m_{\pi}^2-m_{\mu}^2)/(m_{\pi}^2+m_{\mu}^2)$,
as the muon in the pion rest frame (corresponding to a pion momentum of about 39.4 MeV/c)
and $\cos \theta^{\star} = -1$, the muon is produced at rest in the laboratory system.
The maximum muon decay energy $E_{\mu}^{max}$ corresponds to $\cos \theta^{\star} = 1$ and is given,
in terms of the parent pion energy $E_{\pi}$, and the pion velocity $v_{\pi} = c \beta_{\pi}$, by the
relation:
\begin{equation}
E_{\mu}^{max} = E_{\pi} \frac{[m_{\pi}^2(1+\beta_{\pi})+m_{\mu}^2(1-\beta_{\pi})]}{2 m_{\pi}^2}
\end{equation}
For ultra-relativistic parent pions with $\beta_{\pi} \simeq 1$, $ E_{\mu}^{max} \simeq E_{\pi}$.
\par Due to the thickness of the Earth's atmosphere, the majority of interactions of primary
cosmic protons, that produce the parent pions of cosmic muons, occur at high altitude, $\simeq$ 20 km above the Earth's surface. A muon with
speed close to that of light then takes at least $\simeq$ 700 $\mu s$ to reach the surface of the Earth.
This may be compared with the muon mean lifetime of 2.2 $\mu s$. Without the TD effect, only
a fraction $\exp[-700/2.2] \simeq 10^{-138}$ of the muons produced at altitude would reach
the Earth's surface. However, a 10 GeV muon, which has $\gamma_v \simeq 94$, has a 3.5 $\%$
probability to
reach the Earth's surface, before decaying, when the TD effect is taken into account.
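\par These figures are easily checked numerically (an illustrative Python script, taking the transit time quoted above as given):
\begin{verbatim}
import math

tau0   = 2.2e-6            # muon proper mean lifetime [s]
gamma  = 10.0 / 0.10566    # Lorentz factor of a 10 GeV muon (~94.6)
t_trip = 700e-6            # transit time quoted in the text [s]

p_no_td   = math.exp(-t_trip / tau0)            # ~ 1e-138
p_with_td = math.exp(-t_trip / (gamma * tau0))  # ~ 0.035, i.e. 3.5%
\end{verbatim}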
\par In the thought experiment considered here it is assumed that two muons $\mu_{\rm A}$ and $\mu_{\rm A'}$
are produced simultaneously at the same point A (see Fig.1a) by decay of pions from a primary cosmic
ray interaction with the nucleus of a gas atom in the atmosphere. The muon $\mu_{\rm A}$ is produced at rest in
the atmosphere (inertial frame S) while $\mu_{\rm A'}$ (with proper frame S') is produced with velocity $v = c \beta_v = (\sqrt{3}/2)c$, so that
$\gamma_v = 2$. It happens that both muons decay after time $T$ in their proper frames.
Because of the TD effect, the muon $\mu_{\rm A'}$ will then be seen by an observer at rest in
the atmosphere to decay after time $\tau = \gamma_v T = 2T$ at a point B at a distance
$L = 2Tv \simeq 1.14$~km from A. It is also supposed that, at the same time $\tau = 0$ at which $\mu_{\rm A}$ and $\mu_{\rm A'}$
are created, another muon, $\mu_{\rm B}$ , (also with proper decay lifetime $T$) is created at rest in the
atmosphere at the
point B, by decay of a pion from another primary cosmic ray interaction. Since $\mu_{\rm A}$ and $\mu_{\rm B}$ are at rest in
the atmosphere and have no TD effect, they will decay
simultaneously at $\tau = T$ (Fig.1b) in the frame S. At this instant the muon $\mu_{\rm A'}$ is still undecayed and is
at the point M, midway between A and B. When $\mu_{\rm A'}$ decays (Fig.1c), $\mu_{\rm A}$ and $\mu_{\rm B}$ no longer
exist, however the centres of mass of their, by now distant, decay products $e$, $\nu$ and $\bar{\nu}$,
denoted as ($\mu_{\rm A}$ ) and ($\mu_{\rm B}$ ), and indicated in Figs. 1-3 as two concentric circles, still remain at the points A and B.
\begin{figure}[htbp]
\begin{center}\hspace*{-0.5cm}\mbox{
\epsfysize15.0cm\epsffile{muclocksf1c.eps}}
\caption{ {\em The sequence of muon decay events as observed from the atmosphere (frame S).
a) Muons $\mu_{\rm A'}$ , $\mu_{\rm A}$ and $\mu_{\rm B}$ are simultaneously created. Muon $\mu_{\rm A'}$ moves to the right with
velocity $v = (\sqrt{3}/2)c$. b) At time $\tau = T$, muons $\mu_{\rm A}$ and $\mu_{\rm B}$ decay simultaneously.
At this time $\mu_{\rm A'}$ is observed from S to be aligned with the mid-point, M, of A and B.
c) At time $\tau = \gamma_v T$, muon $\mu_{\rm A'}$ is observed to decay. At this time it is at B,
the centre of mass of the decay products of $\mu_{\rm B}$ . For clarity, the muons are shown displaced vertically.}}
\label{fig-fig1}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}\hspace*{-0.5cm}\mbox{
\epsfysize15.0cm\epsffile{muclocksf2c.eps}}
\caption{{\em The sequence of muon decay events as observed in the proper frame (S') of \newline $\mu_{\rm A'}$ .
a) Muons $\mu_{\rm A'}$ , $\mu_{\rm A}$ and $\mu_{\rm B}$ are simultaneously created. Muons $\mu_{\rm A}$ and $\mu_{\rm B}$ are observed to
move to the left with
velocity $v \gamma_v = \sqrt{3}c$. b) At time $\tau' = T/\gamma_v$, $\mu_{\rm A}$ and $\mu_{\rm B}$ decay simultaneously.
At this time, as in Fig.1, $\mu_{\rm A'}$ is aligned with the mid-point, M, of A and B.
c) At time $\tau' = T$ muon $\mu_{\rm A'}$ decays. At this time, as in Fig.1, it is aligned with B,
the centre of mass of the decay products of $\mu_{\rm B}$ . For clarity, the muons are shown displaced vertically.}}
\label{fig-fig2}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}\hspace*{-0.5cm}\mbox{
\epsfysize15.0cm\epsffile{muclocksf3c.eps}}
\caption{{\em The sequence of muon decay events as observed from the frame (S'') that
moves in S, parallel to the direction of motion of $\mu_{\rm A'}$ , with the velocity $w = v/2 = c\sqrt{3}/4$.
a) Muons $\mu_{\rm A'}$ , $\mu_{\rm A}$ and $\mu_{\rm B}$ are simultaneously created. Muon $\mu_{\rm A'}$ is observed to move to the right with
velocity $w \gamma_w$ and $\mu_{\rm A}$ and $\mu_{\rm B}$ move to the left with
velocity $w \gamma_w$. b) At time $\tau'' = L/(4w\gamma_w)$, $\mu_{\rm A}$ and $\mu_{\rm B}$ decay simultaneously. At this time,
as in Figs.1 and 2, $\mu_{\rm A'}$ is aligned with the mid-point, M, of A and B. c) At time $\tau'' = L/(2w\gamma_w)$
$\mu_{\rm A'}$ decays. As in Figs.1 and 2, it is aligned at this time with B, the centre of mass of the
decay products of $\mu_{\rm B}$ . For clarity, the muons are shown displaced vertically.}}
\label{fig-fig3}
\end{center}
\end{figure}
\par The sequence of events that would be seen by an observer in the rest frame, S', of $\mu_{\rm A'}$
corresponding to those of Fig.1, is shown in Fig.2. According to the relative velocity transformation
formula (2.15), $\mu_{\rm A}$ and $\mu_{\rm B}$ are observed to move to the left with velocity $v\gamma_v$. The
configuration at $\tau = \tau' = 0$ is shown in Fig.2a. According to the time dilatation relation (2.7),
appropriate to this case, $\mu_{\rm A}$ and $\mu_{\rm B}$ are seen to decay simultaneously at time
$\tau' = T/\gamma_v = 1.1\mu s$ (Fig.2b). As in Fig.1, it can be seen that $\mu_{\rm A'}$ is aligned with M,
the mid-point of
the line segment AB, at the time of simultaneous decay of $\mu_{\rm A}$ and \mbox{$\mu_{\rm B}$ .} As shown in Fig.2c,
$\mu_{\rm A'}$ decays at time $\tau' = T = 2.2\mu s$, when it is aligned with B, as is also the case for
an observer in the frame S, as shown in Fig.1c. Since $\mu_{\rm A}$ and $\mu_{\rm B}$ are seen to decay simultaneously
by observers at rest in both S and S', there is no RS effect.
\par The sequence of events that would be observed in the frame S'', moving with velocity $w$ in
the positive $x$-direction, will now be considered. According to the relative velocity transformation
formula (2.14), $\mu_{\rm A'}$ is observed to move with speed $\gamma_w(v-w)$ in the positive $x$-direction
in S'' since the {\it relative velocity} of S'' and S' in the frame S is $v-w$. Also, according
to (2.15), $\mu_{\rm A}$ and $\mu_{\rm B}$ are observed to move with speed $w \gamma_w$ in the negative $x$-direction
in S''. The velocity of $\mu_{\rm A'}$ relative to $\mu_{\rm A}$ and $\mu_{\rm B}$ in the frame S'' is then $v \gamma_w$.
The sequence of events seen by an observer at rest in S'' is illustrated, for the case $w = v/2$,
in Fig.3. Using the time dilatation relation (2.7), with $v$ set equal to $w$ in order to relate
times in S and S'', $\mu_{\rm A}$ and $\mu_{\rm B}$ decay at time $\tau'' = T/\gamma_w$, when $\mu_{\rm A'}$ is aligned with M
(see Fig.3b). Note that the time of this event in S'' depends only on the value of $w$, being independent
of the value of $v$. The muon $\mu_{\rm A'}$ decays at time $\tau'' = \gamma_v T/\gamma_w$ when it is aligned
with B (see Fig.3c).
\par In all three frames, S, S' and S'', $\mu_{\rm A}$ and $\mu_{\rm B}$ are observed to decay simultaneously
and earlier than $\mu_{\rm A'}$ . These decay times, in each frame, are presented in Table 1. The
entries in this table satisfy the following condition:
\begin{equation}
\frac{\tau_D(\mu_{\rm A'})}{\tau_D(\mu_{\rm A})} = \frac{\tau'_D(\mu_{\rm A'})}{\tau'_D(\mu_{\rm A})}
= \frac{\tau''_D(\mu_{\rm A'})}{\tau''_D(\mu_{\rm A})} = \gamma _v
\end{equation}
Since the last member of this equation is independent of $w$, it follows that the ratio of
the decay times given by the time dilatation relation (2.7) is the same for all inertial
observers, and so is an invariant, fixed by the velocity of $\mu_{\rm A'}$ in the rest frame of $\mu_{\rm A}$
where the time dilatation effect is defined ---i.e. observer at rest in S, observed muon
at rest in S'.
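\par The entries of Table 1 and the frame-independence of the ratio in Eqn(3.2) are easily verified numerically (an illustrative script for the values used here):
\begin{verbatim}
import math

T  = 2.2e-6                      # proper decay time of each muon [s]
bv = math.sqrt(3) / 2            # beta of muon A' in frame S
gv = 1 / math.sqrt(1 - bv**2)    # gamma_v = 2
bw = bv / 2                      # frame S'' moves with w = v/2
gw = 1 / math.sqrt(1 - bw**2)

decay_times = {                  # (muon A', muons A and B)
    "S":   (gv * T,      T),
    "S'":  (T,           T / gv),
    "S''": (gv * T / gw, T / gw),
}
for frame, (tA1, tAB) in decay_times.items():
    assert abs(tA1 / tAB - gv) < 1e-12   # ratio = gamma_v in every frame
\end{verbatim}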
\par Different (but reciprocal, i.e. related to those of Eqn(3.2) by exchange of primed
and unprimed quantities) results would be obtained in the situation where the observer of the
time dilatation effect is at rest in S', while the observed muon is a rest in S, so that
the reciprocal time dilatation relation (2.8) is applicable.
\par Inspection of, and reflection on, Figs.1 and 2 reveals the physical basis of the differential
aging effect in the `twin paradox' introduced by Langevin~\cite{Langevin}. The `travelling twin' can
be identified with $\mu_{\rm A'}$ , the `stay at home' one with either $\mu_{\rm A}$ or $\mu_{\rm B}$ since their associated
`clocks' are synchronous. At the end of the outward journey, when $\mu_{\rm A'}$ arrives at B, the
synchronous clocks at A and B were seen in S to run twice as fast as that of $\mu_{\rm A'}$ ---in fact
$\mu_{\rm A}$ and $\mu_{\rm B}$ have already decayed when $\mu_{\rm A'}$ arrives at B. In Fig.1, the observer in S sees
$\mu_{\rm A'}$ aging less rapidly than $\mu_{\rm A}$ or $\mu_{\rm B}$ . However, as required by the time dilatation relation
(2.7), an observer in S' (Fig.2) sees $\mu_{\rm A}$ or $\mu_{\rm B}$ {\it aging more rapidly}, by a factor two,
than $\mu_{\rm A'}$ . This is true even though these clocks are in motion relative to the observer
in S'. Fig.2 also reveals that the physical basis of time dilatation, and differential aging, is not, as hitherto supposed,
to be found in
`length contraction' in the frame S', but instead in the greater velocity of $\mu_{\rm A}$ or $\mu_{\rm B}$ relative to $\mu_{\rm A'}$
in the `travelling frame' S', than that of $\mu_{\rm A'}$ relative to $\mu_{\rm A}$ or $\mu_{\rm B}$ in the `base frame'
S from which the time dilatation effect is observed. For further discussion of base and travelling
frames in relation to primary and reciprocal space-time experiments see
Refs.~\cite{JHFSTP3,JHFRECP}.
\par The much-discussed `twin paradox' arises when it is attempted to describe the sequence of events
shown in Fig.2 by use of the time dilatation relation (2.8) of the reciprocal experiment which requires
clocks in S to run slower (not faster, as in Fig.2) than those in S', when observed in the latter
frame. This is a nonsensical interpretation since the time variables $\tau$ and $t'$ appearing in the
time dilatation relation (2.7) have a completely different operational meaning to those,
$\tau'$ and $t$ in the time dilatation relation of the reciprocal experiment. See Ref.~\cite{JHFSTP3}
for a more detailed discussion of the standard and incorrect interpretation of the twin paradox
based on the spurious LC effect, in contrast with its correct interpretation following from
the relative velocity transformation formula (2.14).
\begin{table}
\begin{center}
\begin{tabular}{|c|c c c|} \hline
Frame &\multicolumn{1}{c|}{ $\tilde{\tau}_D(\mu_{{\rm A'}})$}
&\multicolumn{2}{c|}{ $\tilde{\tau}_D(\mu_{{\rm A}}) = \tilde{\tau}_D(\mu_{{\rm B}})$} \\ \hline
S &\multicolumn{1}{c|}{ $\gamma_v T$}
&\multicolumn{2}{c|}{$T$} \\
S' &\multicolumn{1}{c|}{$T$}
&\multicolumn{2}{c|}{$T/\gamma_v$} \\
S'' &\multicolumn{1}{c|}{$\gamma_vT/\gamma_w$}
&\multicolumn{2}{c|}{$T/\gamma_w$} \\
\hline
\end{tabular}
\caption[]{{\em Decay times of muons $\mu_{\rm A'}$ and $\mu_{\rm A}$ and $\mu_{\rm B}$ in the frames S, S' and S''.
$\tilde{\tau}$ denotes the proper time in each frame.}}
\end{center}
\end{table}
\pagebreak
\section{Introduction}
Question answering (\abr{qa}) systems have impressive recent victories---beating trivia masters~\cite{ferruci-10} and superhuman reading~\cite{najberg-18}---but these triumphs hold only if they \emph{generalize}; \abr{qa} systems should be able to answer questions even if they do not look like training examples.
While other work (Section~\ref{sec:related}) focuses on demographic representation in \abr{nlp} resources, our focus is how well \abr{qa} models generalize across demographic subset{}s.
After mapping mentions to a knowledge base (Section~\ref{sec:mapping}), we show existing \abr{qa} datasets lack diversity in the gender and national origin of the people mentioned: English-language \abr{qa} datasets mostly ask about \abr{us} men from a few professions (Section~\ref{sec:distribution}).
This is problematic because most English speakers (and users of English \abr{qa} systems) are not from the \abr{us} or \abr{uk}.
Moreover, multilingual \abr{qa} datasets are often \emph{translated} from English datasets~\cite{lewis-etal-2020-mlqa,
artetxe2019xquad}.
However, no work has verified that \abr{qa} systems generalize to infrequent demographic groups.
Section~\ref{sec:accuracy} investigates whether statistical tests reveal patterns on demographic subgroups.
Despite skewed distributions, accuracy is not correlated with gender or nationality, though it is with professional field.
For instance, Natural Questions~\cite[\abr{nq}]{kwiatkowski-19} systems do well with entertainers but poorly with scientists, which are handled well in \camelabr{Trivia}{qa}{}.
However, absence of evidence is not evidence of absence (Section~\ref{sec:conclusion}), and existing \abr{qa} datasets are not yet diverse enough to vet \abr{qa}'s generalization.
\section{Mapping Questions to Entities}
\label{sec:mapping}
We analyze four \abr{qa} tasks: \nq{},\footnote{For \nq{}, we only consider questions with short answers.} \textsc{sq}{\small u}\textsc{ad}~\cite{rajpurkar-16}, \qb~\cite{boyd-graber-12} and \camelabr{Trivia}{qa}~\cite{joshi-17}.
Google \abr{cloud-nl}\footnote{\smallurl{https://cloud.google.com/natural-language/docs/analyzing-entities}} finds and links entity mentions in \qa{} examples.\footnote{We analyze the dev fold, which is {\bf consistent with the training fold} (Table~\ref{tab:entity-distribution} and~\ref{tab:demographics}), as we examine accuracy.}
\subsection{Focus on \emph{People}}
\label{subsec:people}
Many entities appear in examples (Table~\ref{tab:entity-distribution}), but \emph{people} form a majority in our \abr{qa} tasks (except \textsc{sq}{\small u}\textsc{ad}{}). Existing work in \abr{ai} fairness focuses on disparate impacts on people, and model behavior is especially prone to causing harm when it concerns \emph{people}; hence, our primary intent is to understand how demographic characteristic{}s of ``people'' correlate with model correctness.
The people asked about in a question can be in the answer---``who founded Sikhism?'' (A: \answer{\entity{Guru Nanak}}), in the question---``what did \entity{Clara Barton} found?'' (A: American Red Cross), or the title of the source document---``what play featuring General Uzi premiered in Lagos in 2001?'' (A: \answer{King Baabu}) is in the page on \entity{Wole Soyinka}.
We search until we find an entity: first in the answer, then the question if no entity is found in the answer, and finally the document title.
Demographics are a natural way to categorize these entities and we consider the high-coverage demographic {\bf characteristic{}s} from \wikidata{}.\footnote{\smallurl{https://www.wikidata.org/wiki/Wikidata:Database_download}}
Given an entity, Wikidata has good coverage for all datasets: gender ($>99\%$ ), nationality ($>93\%$), and profession ($>94\%$).
For each characteristic{}, we use the knowledge base to extract the specific {\bf value{}} for a person (e.g., the value{} ``poet'' for the characteristic{} ``profession'').
However, the value{}s defined by \wikidata{} have inconsistent granularity, so we collapse near-equivalent value{}s (e.g., ``writer'', ``author'', ``poet''; see Appendix~\ref{appendix:country-collapse}--\ref{appendix:professions-collapse} for an exhaustive list).
For questions with multiple values (where multiple entities appear in the answer, or a single entity has multiple value{}s), we create a new value by concatenating them together.
An `others' value{} subsumes value{}s with fewer than fifteen examples; people without a value{} become `not found' for that characteristic{}.
\label{entity-linking-procedure}
\label{entity-linking-validation}
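A minimal sketch of this mapping procedure follows; the collapse table, threshold handling, and function names are illustrative stand-ins, not our actual pipeline.
\begin{verbatim}
COLLAPSE  = {"poet": "Writing", "author": "Writing"}  # toy collapse table
MIN_COUNT = 15            # rarer values are folded into `others'

def people_for_question(answer_ents, question_ents, title_ents):
    # Search order: answer first, then question, then document title.
    for ents in (answer_ents, question_ents, title_ents):
        people = [e for e in ents if e["type"] == "PERSON"]
        if people:
            return people
    return []

def value_for(people, characteristic, value_counts):
    vals = sorted({COLLAPSE.get(v, v)
                   for p in people for v in p.get(characteristic, [])})
    if not vals:
        return "not_found"
    label = "/".join(vals)            # concatenate multiple values
    return label if value_counts.get(label, 0) >= MIN_COUNT else "others"
\end{verbatim}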
Three authors manually verify entity assignments by vetting fifty random questions from each dataset. Questions with at least one entity had near-perfect 96\% inter-annotator agreement for \abr{cloud-nl}'s annotations, while for questions where \abr{cloud-nl} did not find any entity, agreement was 98\%.
Some errors were benign: incorrect entities sometimes retain correct demographic value{}s; e.g., \entity{Elizabeth~II} instead of \entity{Elizabeth~I}. Other times, coarse-grained nationality ignores nuance, such as the distinction between \emph{Greece} and \emph{Ancient Greece}.
\subsection{Who is in Questions?}
\label{sec:distribution}
Our demographic analysis reveals skews in all datasets, reflecting differences in task focus (Table~\ref{tab:demographics}).
\nq{} is sourced from search queries and skews toward popular culture.
\qb{} nominally reflects an undergraduate curriculum and captures more ``academic'' knowledge.
\camelabr{Trivia}{qa}{} is popular trivia, and \textsc{sq}{\small u}\textsc{ad}{} reflects Wikipedia articles.
Across all datasets, men are asked about more than women, and the \abr{us} is the subject of the majority of questions except in \camelabr{Trivia}{qa}{}, where the plurality of questions are about the \abr{uk}.
\nq{} has the highest coverage of women through its focus on entertainment (\demovalue{Film/TV}{}, \demovalue{Music}{} and \demovalue{Sports}{}).
\tablefile{demographics}
\section{What Questions can \abr{qa} Answer?}
\label{sec:accuracy}
\qa{} datasets have different representations of demographic characteristic{}s;
is this focus benign or do these differences carry through to model accuracy?
We analyze a \abr{sota} system for each of our four tasks.
For \nq{} and \textsc{sq}{\small u}\textsc{ad}{}, we use a fine-tuned \abr{bert}{}~\cite{alberti2019bert} with curated training data (e.g., downsample questions without answers and split documents into multiple training instances). For the open-domain \camelabr{Trivia}{qa}{} task, we use \abr{orqa}~\cite{lee2019latent} that uses \abr{bert}{}-based reader and retriever components. Finally, for \qb{}, we use the competition winner from \citet{wallace-19}, a \abr{bert}-based reranker of a \abr{tf-idf} retriever.
Accuracy (exact-match) and average F1 are both common \abr{qa} metrics~\cite{rajpurkar-16}. Since both are related and some statistical tests require binary scores, we focus on exact-match.
Rather than aggregate accuracy, we focus on demographic subset{}s' accuracy (Figure~\ref{fig:accuracies}).
For instance, while 66.2\% of questions about people are correct in \qb{}, the number is lower for the Dutch (\demovalue{Netherlands}) (55.6\%) and higher for \demovalue{Ireland} (87.5\%).
Unsurprisingly, accuracy is consistently low on the `not\_found' subset{}, where \wikidata{} lacks a person's demographic value{}.
Are the differences we observe across strata significant?
We probe this in two ways: using $\chi^2$ testing~\cite{plackett1983karl} to see \emph{if} trends exist and using logistic regression to explore those that do.
\subsection{Do Demographic Values Affect Accuracy?}
\label{subsec:chi-squared-test}
The $\chi^2$ test is a non-parametric test of whether two variables are independent.
To see if accuracy and characteristic{}s are independent, we apply a $\chi^2$ test to an $n \times 2$ contingency table whose $n$ rows give the frequency of that characteristic{}'s subset{}s, contingent on whether the model prediction is correct or not (Table~\ref{tab:contingency}).
If we reject the null with a Bonferroni correction~\cite[divide the $p$-value threshold by three, as we have multiple tests for each dataset]{holm-79}, that suggests possible relationships:
gender in \nq{} ($p=$\SI{2.36e-12}), and professional field in \nq{} ($p=0.0142$), \qb{} ($p=$\SI{2.34e-07}) and \camelabr{Trivia}{qa}{} ($p=0.0092$).
However, we find no significant relationship between nationality and accuracy in any dataset.
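This test is straightforward to reproduce; the sketch below uses placeholder counts rather than the actual entries of Table~\ref{tab:contingency}.
\begin{verbatim}
from scipy.stats import chi2_contingency

# n x 2 table: one row per demographic value, columns (correct, wrong).
table = [[120, 80],
         [ 60, 90],
         [ 30, 20]]                  # placeholder counts
chi2, p, dof, expected = chi2_contingency(table)

alpha  = 0.05 / 3                    # Bonferroni: three tests per dataset
reject = p < alpha                   # True -> accuracy depends on the value
\end{verbatim}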
While $\chi^{2}$ identifies \emph{which} characteristic{}s impact model accuracy, it does not characterize \emph{how}.
For instance, $\chi^2$ indicates \nq{}'s gender is significant, but is this because accuracy is higher for women, or because the presence of both genders in examples lowers the accuracy?
\tablefile{contingency}
\subsection{Exploration with Logistic Regression}
\label{subsec:logistic-regression}
\newcommand{\hfeat}[1]{\textbf{\lrfeature{#1}}}
Thus, we formulate a simple logistic regression:
can an example's demographic value{}s predict if a model answers correctly?
Logistic regression and related models are the workhorse for discovering and explaining the relationship between variables in history~\cite{mccloskey-87}, education~\cite{Linden-2013}, political science~\cite{poole-11}, and sports~\cite{glickman-99}.
Logistic regression is also a common tool in \abr{nlp}: to find linguistic constructs that allow determiner omission~\cite{kiss-10} or to understand how a scientific paper's attributes effect citations~\cite{yogatama-11}.
Unlike model calibration~\cite{niculescu2005predicting}, whose goal is to maximize prediction accuracy, the goal here is \emph{explanation}.
We define binary features for demographic value{}s of characteristic{}s the $\chi^2$ test found significant (thus, \textsc{sq}{\small u}\textsc{ad}{}, the nationality characteristic{}, and gender characteristic{} for all but \nq{} are excluded).
For instance, a question about \entity{Abidali Neemuchwala} would have features for \hfeat{g\_male}, \hfeat{o\_executive} but zero for everything else.\footnote{Exhaustive list of demographic features in the Appendix.}
Real-valued features, \hfeat{multi\_entities} and \hfeat{multi\_answers}, capture the effect of multiple person-entities and multiple gold-answers (scaled with the base two logarithm). %
\tablefile{logistic-regression-linear-no-lasso}
But that is not the only reason an answer may be difficult or easy.
Following \citet{sugawara-18}, we incorporate features that reveal the questions' difficulty.
For instance, questions that clearly hint at the answer type reduce ambiguity.
The \hfeat{t\_who} feature checks if the token ``who'' is at the start of the question.
Similarly, \hfeat{t\_what}, \hfeat{t\_when}, and \hfeat{t\_where} capture other entity-types.
Questions are also easier if evidence only differs from the question by a couple of words; thus, \hfeat{q\_sim} is the Jaccard similarity between question and evidence tokens.
Finally, the binary feature \hfeat{e\_train\_count} marks if the person-entities occur in training data more than twice.
We first drop features with negligible effect on accuracy using \lasso{} (regularization $\lambda=1$) by removing zero coefficients.
For the remaining features, Wald statistics~\cite{fahrmeir2007regression} estimate $p$-values.
Although we initially use quadratic features, they are all eliminated during feature reduction.
Thus, we only report the linear features meeting a minimal significance threshold ($p$-value $< 0.1$).
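This two-stage procedure can be sketched as follows; the feature matrix is a placeholder, and \texttt{C=1.0} plays the role of the inverse regularization strength $1/\lambda$.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# X: demographic + difficulty features; y: 1 if the model was correct.
rng = np.random.default_rng(0)                     # placeholder data
X = rng.random((500, 8))
y = (rng.random(500) > 0.5).astype(int)

# Stage 1: LASSO screening -- drop zero-coefficient features.
lasso = LogisticRegression(penalty="l1", C=1.0,
                           solver="liblinear").fit(X, y)
keep = np.flatnonzero(lasso.coef_.ravel() != 0.0)

# Stage 2: refit and read Wald p-values for the surviving features.
logit = sm.Logit(y, sm.add_constant(X[:, keep])).fit(disp=0)
significant = logit.pvalues[1:] < 0.1              # features to report
\end{verbatim}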
\subsection{How do Properties Affect Accuracy?}
Recall that logistic regression uses features to predict whether the \abr{qa} system will get the answer right or not.
Features associated with correct answers have positive weights (like those derived from \citet{sugawara-18}, \hfeat{q\_sim} and \hfeat{e\_train\_count}), those associated with incorrect answers have negative weights, and features without effect will be near zero.
Among the \hfeat{t\_wh*} features, \hfeat{t\_who} significantly correlates with model correctness, especially in \nq{} and \qb{}, where questions ask directly about a person.
However, our goal is to see if, \emph{after} accounting for obvious reasons a question could be easy, demographic properties can explain \abr{qa} accuracy.
The strongest effect is for professions (Table~\ref{tab:logistic-regression}).
For instance, while \nq{} and \qb{} systems struggle on science questions, \camelabr{Trivia}{qa}{}'s does not.
Science has roughly equivalent representation (Table~\ref{tab:demographics}), suggesting \qb{} questions are harder.
While \hfeat{multi\_answer} (and \hfeat{multi\_entities}) reveal harder \nq{} questions, it has a positive effect in \camelabr{Trivia}{qa}{}, as \camelabr{Trivia}{qa}{} uses multiple answers for alternate formulations of answers (Appendix~\ref{appendix:nq-examples}, \ref{appendix:trivia-qa-examples}), which aids machine reading, while multiple \abr{nq} answers are often a sign of ambiguity~\cite{Boyd-Graber-20, Si:Zhao:Boyd-Graber-2021}:
``\emph{Who says that which we call a rose?}'' A: \entity{Juliet}, A: \entity{William Shakespeare}.
For the male and female genders, \nq{} shows no statistically significant effect on accuracy; only questions about entities with multiple genders depress accuracy.
Given the many findings of gender bias in \abr{nlu}~\cite{zhao-17,webster-18,zhao-18,stanovsky-19}, this is surprising.
However, we caution against accepting this conclusion without further investigation given the strong correlation of gender with professional field~\cite{goulden-11}, where we do see significant effects.
Taken together, the $\chi^{2}$ and logistic regression analysis give us reason to be optimistic:
although data are skewed for all subset{}s, \abr{qa} systems might well generalize from limited training data across gender and nationality.
\section{Related Work}
\label{sec:related}
Language is a reflection of culture.
Like other cultural artifacts---encyclopedias~\cite{reagle-11}, and films~\cite{sap-etal-2017-connotation}---\abr{qa} has more men than women.
Other artifacts like children's books have more gender balance but reflect other aspects of culture~\cite{larrick-65}.
The \abr{nlp} literature is also grappling with demographic discrepancies.
Standard coreference systems falter on gender-balanced corpora~\cite{webster-18}, and \citet{zhao-18} create synthetic training data to reduce bias.
Similar coreference issues plague machine translation systems~\cite{stanovsky-19}, and \citet{li-20} use \abr{qa} to probe biases of \abr{nlp} systems.
\citet{Sen_2020} show that there are shortcomings in \qa{} datasets and evaluations by analysing their out-of-domain generalization capabilities and ability to handle question variation.
Joint models of vision and language suggest that biases come from language, rather than from vision~\cite{ross-etal-2021-measuring}.
However, despite a range of mitigation techniques~\citep[inter alia]{zhao-17} none, to our knowledge, have been successfully applied to \abr{qa}, especially from the demographic viewpoint.
\section{Discussion and Conclusion}
\label{sec:conclusion}
This paper delivers both good news and bad news.
While datasets remain imperfect and reflect societal imperfections, for many demographic properties, we do not find strong evidence that \abr{qa} suffers from this skew.
However, this is an absence of evidence rather than evidence of absence: these are skewed datasets that have fewer than a quarter of the questions about women.
It is difficult to make confident assessments on such small datasets---many demographic value{}s were excluded because they appeared infrequently (or not at all).
Improving the diversity of \abr{qa} datasets can help us be more certain that \abr{qa} systems do generalize and reflect the diverse human experience.
Considering such shortcomings, \citet{Rodriguez_2021} advocate improving evaluation by focusing on more important examples for ranking models; demographic properties could further refine more holistic evaluations.
A broader analysis beyond person entities would indeed be a natural extension of this work. Label propagation can expand the analysis beyond people: the \entity{Hershey-Chase} experiment is associated with \entity{Alfred Hershey} and \entity{Martha Chase}, so it would---given the neighboring entities in the Wikipedia link graph---be 100\% American, 50\% male, and 50\% female.
Another direction for future work is accuracy under counterfactual perturbation: swapping real-world entities (in contrast with nonce entities in \citet{li-20}) with different demographic values.
Nonetheless, particularly for professional fields, imbalances remain.
The lack of representation in \abr{qa} could cause us to think that things are better than they are because of Simpson's paradox~\cite{blyth-72}: gender and profession are not independent!
For example, in \nq{}, our accuracy on women is higher in part because of its tilt toward entertainment, and we cannot say much about women scientists.
We therefore caution against interpreting strong model performance on existing \abr{qa} datasets as evidence that the task is `solved'.
Instead, future work must consider better dataset construction strategies and robustness of accuracy metrics to different subset{}s of available data, as well as unseen examples.
\section*{Ethical Considerations}
\label{sec:ethics}
This work analyses demographic subsets across \qa{} datasets based on Gender, Nationality and Profession.
We believe the work makes a positive contribution to representation and diversity by pointing out the skewed distribution of existing \abr{qa} datasets.
To avoid noise being interpreted as signal given the lack of diversity in these datasets, we could not include various subgroups that we believe should have been part of this study: non-binary, intersectional groups (e.g., women scientists in \abr{nq}), people indigenous to subnational regions, etc.
We believe increasing representation of all such groups in \abr{qa} datasets would improve upon the status quo.
We infer properties of mentions using Google Cloud-\abr{nl} to link the entity in a \abr{qa} example to an entry in the \textsc{WikiData} knowledge base to attribute gender, profession and nationality.
We acknowledge that this is not foolproof and itself vulnerable to bias, although our small-scale accuracy evaluation did not reveal any concerning patterns.
All human annotations used to verify the entity linking were provided by the authors, who were fairly compensated.
\section*{Acknowledgements}
We thank Michael Collins, Slav Petrov, Tulsee Doshi, Sephora Madjiheurem, Benjamin B\"orschinger, Pedro Rodriguez, Massimiliano Ciaramita, Kenton Lee, Alex Beutal, and Emily Pitler for their early and insightful comments on the proposal and drafts. Additionally, insights about Google's \abr{Cloud-NL} entity linking tool and \abr{WikiData KB} from Jan Botha, Denny Vrandecic, Livio Soares and Tom Kwiatkowski were quite useful in designing the entity linking and attribute extraction pipeline.
\section{Entity collapses of demographic value{}s}
While mapping \qa{} examples to person entities and values for their corresponding demographic characteristics (Section \ref{sec:mapping}), we encountered many nearby values: `Poet', `Writer', `Author'. We collapse such values into a single label, which we use for further analysis. This section lists all the collapses that we encountered when determining the nationality of people (Appendix~\ref{appendix:country-collapse}) and their professions (Appendix~\ref{appendix:professions-collapse}).
\subsection{Entity-collapses for Nationality values}
\label{appendix:country-collapse}
\paragraph{\textbf{\texttt{US:}}}
\feature{kingdom of hawaii}, \feature{united states}, \feature{united states of america}\\
\paragraph{\textbf{\texttt{UK:}}}
\feature{commonwealth of england}, \feature{great britain}, \feature{kingdom of england}, \feature{kingdom of mercia}, \feature{kingdom of scotland}, \feature{kingdom of wessex}, \feature{united kingdom}, \feature{united kingdom of great britain and ireland}\\
\paragraph{\textbf{\texttt{Albania:}}}
\feature{kingdom of albania}\\
\paragraph{\textbf{\texttt{Austria:}}}
\feature{austrian empire}, \feature{federal state of austria}, \feature{first republic of austria}\\
\paragraph{\textbf{\texttt{Cyprus:}}}
\feature{kingdom of cyprus}, \feature{republic of cyprus}, \feature{turkish republic of northern cyprus}\\
\paragraph{\textbf{\texttt{Denmark:}}}
\feature{kingdom of denmark}\\
\paragraph{\textbf{\texttt{France:}}}
\feature{kingdom of france}\\
\paragraph{\textbf{\texttt{Germany:}}}
\feature{german confederation}, \feature{german democratic republic}, \feature{german empire}, \feature{german reich}, \feature{germany}, \feature{kingdom of hanover}, \feature{kingdom of prussia}, \feature{kingdom of saxony}, \feature{nazi germany}, \feature{north german confederation}, \feature{prussia}, \feature{republic of german-austria}, \feature{west germany}\\
\paragraph{\textbf{\texttt{Greece:}}}
\feature{ancient greece}, \feature{greece}\\
\paragraph{\textbf{\texttt{Hungary:}}}
\feature{hungary}, \feature{kingdom of hungary}, \feature{people's republic of hungary}\\
\paragraph{\textbf{\texttt{Ireland:}}}
\feature{irish republic}, \feature{kingdom of ireland}\\
\paragraph{\textbf{\texttt{Italy:}}}
\feature{ancient rome}, \feature{florence}, \feature{holy roman empire}, \feature{kingdom of italy}, \feature{kingdom of sardinia}\\
\paragraph{\textbf{\texttt{Netherlands:}}}
\feature{dutch republic}, \feature{kingdom of the netherlands}\\
\paragraph{\textbf{\texttt{Poland:}}}
\feature{kingdom of poland}, \feature{poland}\\
\paragraph{\textbf{\texttt{Portugal:}}}
\feature{kingdom of portugal}\\
\paragraph{\textbf{\texttt{Romania:}}}
\feature{kingdom of romania}, \feature{romania}, \feature{socialist republic of romania}\\
\paragraph{\textbf{\texttt{Spain:}}}
\feature{crown of castile}, \feature{kingdom of aragon}, \feature{kingdom of castile}, \feature{kingdom of navarre}, \feature{spain}\\
\paragraph{\textbf{\texttt{Yugoslavia:}}}
\feature{federal republic of yugoslavia}, \feature{kingdom of yugoslavia}, \feature{socialist federal republic of yugoslavia}, \feature{yugoslavia}\\
\paragraph{\textbf{\texttt{Iraq:}}}
\feature{ba'athist iraq}, \feature{iraq}, \feature{kingdom of iraq}, \feature{mandatory iraq}, \feature{republic of iraq (1958–68)}\\
\paragraph{\textbf{\texttt{Israel:}}}
\feature{israel}, \feature{kingdom of israel}, \feature{land of israel}\\
\paragraph{\textbf{\texttt{Russia:}}}
\feature{russia}, \feature{russian empire}, \feature{russian soviet federative socialist republic}, \feature{soviet union}, \feature{tsardom of russia}\\
\paragraph{\textbf{\texttt{India:}}}
\feature{british raj}, \feature{delhi sultanate}, \feature{dominion of india}, \feature{india}\\
\paragraph{\textbf{\texttt{China:}}}
\feature{china}, \feature{people's republic of china}, \feature{republic of china (1912–1949)}\\
\paragraph{\textbf{\texttt{Egypt:}}}
\feature{ancient egypt}, \feature{egypt}, \feature{kingdom of egypt}, \feature{republic of egypt}\\
\subsection{Entity-collapses for \textit{Profession} values}
\label{appendix:professions-collapse}
\paragraph{\textbf{\texttt{Writing:}}}
\feature{author}, \feature{biographer}, \feature{cartoonist}, \feature{children's writer}, \feature{comedy writer}, \feature{comics artist}, \feature{comics writer}, \feature{contributing editor}, \feature{cookery writer}, \feature{detective writer}, \feature{diarist}, \feature{editor}, \feature{editorial columnist}, \feature{essayist}, \feature{fairy tales writer}, \feature{grammarian}, \feature{hymnwriter}, \feature{journalist}, \feature{lexicographer}, \feature{librettist}, \feature{linguist}, \feature{literary}, \feature{literary critic}, \feature{literary editor}, \feature{literary scholar}, \feature{memoirist}, \feature{newspaper editor}, \feature{non-fiction writer}, \feature{novelist}, \feature{opinion journalist}, \feature{philologist}, \feature{photojournalist}, \feature{physician writer}, \feature{playwright}, \feature{poet}, \feature{poet lawyer}, \feature{preface author}, \feature{prosaist}, \feature{religious writer}, \feature{science fiction writer}, \feature{science writer}, \feature{scientific editor}, \feature{screenwriter}, \feature{short story writer}, \feature{tragedy writer}, \feature{travel writer}, \feature{women letter writer}, \feature{writer}\\
\paragraph{\textbf{\texttt{Sports:}}}
\feature{amateur wrestler}, \feature{american football coach}, \feature{american football player}, \feature{archer}, \feature{artistic gymnast}, \feature{association football manager}, \feature{association football player}, \feature{association football referee}, \feature{athlete}, \feature{athletics competitor}, \feature{australian rules football player}, \feature{badminton player}, \feature{ballet dancer}, \feature{ballet master}, \feature{ballet pedagogue}, \feature{baseball player}, \feature{basketball coach}, \feature{basketball player}, \feature{biathlete}, \feature{biathlon coach}, \feature{boxer}, \feature{bridge player}, \feature{canadian football player}, \feature{chess player}, \feature{choreographer}, \feature{coach}, \feature{cricket umpire}, \feature{cricketer}, \feature{dancer}, \feature{darts player}, \feature{field hockey player}, \feature{figure skater}, \feature{figure skating choreographer}, \feature{figure skating coach}, \feature{formula one driver}, \feature{gaelic football player}, \feature{golfer}, \feature{gridiron football player}, \feature{gymnast}, \feature{head coach}, \feature{hurler}, \feature{ice dancer}, \feature{ice hockey coach}, \feature{ice hockey player}, \feature{jockey}, \feature{judoka}, \feature{lacrosse player}, \feature{long-distance runner}, \feature{marathon runner}, \feature{marimba player}, \feature{martial artist}, \feature{middle-distance runner}, \feature{mixed martial artist}, \feature{motorcycle racer}, \feature{poker player}, \feature{polo player}, \feature{pool player}, \feature{professional wrestler}, \feature{quidditch player}, \feature{racing automobile driver}, \feature{racing driver}, \feature{rink hockey player}, \feature{rugby league player}, \feature{rugby player}, \feature{rugby union coach}, \feature{rugby union player}, \feature{runner}, \feature{short track speed skater}, \feature{skateboarder}, \feature{skeleton racer}, \feature{snooker player}, \feature{snowboarder}, \feature{sport cyclist}, \feature{sport shooter}, \feature{sporting director}, \feature{sports agent}, \feature{sports commentator}, \feature{sprinter}, \feature{squash player}, \feature{surfer}, \feature{swimmer}, \feature{table tennis player}, \feature{taekwondo athlete}, \feature{tennis coach}, \feature{tennis player}, \feature{thai boxer}, \feature{track and field coach}, \feature{viol player}, \feature{volleyball player}, \feature{water polo player}\\
\paragraph{\textbf{\texttt{Music:}}}
\feature{bass guitar}, \feature{bassist}, \feature{blues musician}, \feature{child singer}, \feature{classical composer}, \feature{classical guitarist}, \feature{classical pianist}, \feature{collector of folk music}, \feature{composer}, \feature{conductor}, \feature{country musician}, \feature{drummer}, \feature{film score composer}, \feature{ghost singer}, \feature{guitar maker}, \feature{guitarist}, \feature{heavy metal singer}, \feature{instrument maker}, \feature{instrumentalist}, \feature{jazz guitarist}, \feature{jazz musician}, \feature{jazz singer}, \feature{keyboardist}, \feature{lyricist}, \feature{multi-instrumentalist}, \feature{music arranger}, \feature{music artist}, \feature{music critic}, \feature{music director}, \feature{music interpreter}, \feature{music pedagogue}, \feature{music pedagogy}, \feature{music producer}, \feature{music publisher}, \feature{music theorist}, \feature{music video director}, \feature{musical}, \feature{musical instrument maker}, \feature{musician}, \feature{musicologist}, \feature{opera composer}, \feature{opera singer}, \feature{optical instrument maker}, \feature{organist}, \feature{pianist}, \feature{playback singer}, \feature{professor of music composition}, \feature{rapper}, \feature{record producer}, \feature{recording artist}, \feature{rock drummer}, \feature{rock musician}, \feature{saxophonist}, \feature{session musician}, \feature{singer}, \feature{singer-songwriter}, \feature{songwriter}, \feature{violinist}\\
\paragraph{\textbf{\texttt{Fictional:}}}
\feature{fictional aviator}, \feature{fictional businessperson}, \feature{fictional character}, \feature{fictional cowboy}, \feature{fictional domestic worker}, \feature{fictional firefighter}, \feature{fictional journalist}, \feature{fictional mass murderer}, \feature{fictional pirate}, \feature{fictional police officer}, \feature{fictional politician}, \feature{fictional schoolteacher}, \feature{fictional scientist}, \feature{fictional seaman}, \feature{fictional secretary}, \feature{fictional soldier}, \feature{fictional space traveller}, \feature{fictional taxi driver}, \feature{fictional vigilante}, \feature{fictional waitperson}, \feature{fictional writer}\\
\paragraph{\textbf{\texttt{Politics:}}}
\feature{activist}, \feature{ambassador}, \feature{animal rights advocate}, \feature{anti-vaccine activist}, \feature{civil rights advocate}, \feature{civil servant}, \feature{climate activist}, \feature{colonial administrator}, \feature{consort}, \feature{dictator}, \feature{diplomat}, \feature{drag queen}, \feature{duke}, \feature{emperor}, \feature{feminist}, \feature{foreign minister}, \feature{government agent}, \feature{governor}, \feature{human rights activist}, \feature{internet activist}, \feature{khan}, \feature{king}, \feature{leader}, \feature{lgbt rights activist}, \feature{military commander}, \feature{military leader}, \feature{military officer}, \feature{military personnel}, \feature{military theorist}, \feature{minister}, \feature{monarch}, \feature{peace activist}, \feature{political activist}, \feature{political philosopher}, \feature{political scientist}, \feature{political theorist}, \feature{politician}, \feature{president}, \feature{prince}, \feature{princess}, \feature{protestant reformer}, \feature{queen}, \feature{queen consort}, \feature{queen regnant}, \feature{religious leader}, \feature{revolutionary}, \feature{ruler}, \feature{secretary}, \feature{social reformer}, \feature{socialite}, \feature{tribal chief}\\
\paragraph{\textbf{\texttt{Artist:}}}
\feature{architect}, \feature{artist}, \feature{baker}, \feature{blacksmith}, \feature{car designer}, \feature{chef}, \feature{costume designer}, \feature{design}, \feature{designer}, \feature{fashion designer}, \feature{fashion photographer}, \feature{fresco painter}, \feature{furniture designer}, \feature{game designer}, \feature{glass artist}, \feature{goldsmith}, \feature{graffiti artist}, \feature{graphic artist}, \feature{graphic designer}, \feature{house painter}, \feature{illustrator}, \feature{industrial designer}, \feature{interior designer}, \feature{jewellery designer}, \feature{landscape architect}, \feature{landscape painter}, \feature{lighting designer}, \feature{painter}, \feature{photographer}, \feature{postage stamp designer}, \feature{printmaker}, \feature{production designer}, \feature{scientific illustrator}, \feature{sculptor}, \feature{sound designer}, \feature{textile designer}, \feature{type designer}, \feature{typographer}, \feature{visual artist}\\
\paragraph{\textbf{\texttt{Film/tv:}}}
\feature{actor}, \feature{character actor}, \feature{child actor}, \feature{documentary filmmaker}, \feature{dub actor}, \feature{factory owner}, \feature{fashion model}, \feature{film actor}, \feature{film critic}, \feature{film director}, \feature{film editor}, \feature{film producer}, \feature{filmmaker}, \feature{glamour model}, \feature{line producer}, \feature{model}, \feature{pornographic actor}, \feature{reality television participant}, \feature{runway model}, \feature{television actor}, \feature{television director}, \feature{television editor}, \feature{television presenter}, \feature{television producer}, \feature{voice actor}\\
\paragraph{\textbf{\texttt{Executive:}}}
\feature{bank manager}, \feature{business executive}, \feature{business magnate}, \feature{businessperson}, \feature{chief executive officer}, \feature{entrepreneur}, \feature{executive officer}, \feature{executive producer}, \feature{manager}, \feature{real estate entrepreneur}, \feature{talent manager}\\
\paragraph{\textbf{\texttt{Stage:}}}
\feature{circus performer}, \feature{comedian}, \feature{entertainer}, \feature{mime artist}, \feature{musical theatre actor}, \feature{stage actor}, \feature{stand-up comedian}, \feature{theater director}\\
\paragraph{\textbf{\texttt{Law/crime:}}}
\feature{art thief}, \feature{attorney at law}, \feature{bank robber}, \feature{canon law jurist}, \feature{courtier}, \feature{criminal}, \feature{judge}, \feature{jurist}, \feature{lawyer}, \feature{official}, \feature{private investigator}, \feature{robber}, \feature{serial killer}, \feature{spy}, \feature{thief}, \feature{war criminal}\\
\paragraph{\textbf{\texttt{History:}}}
\feature{anthropologist}, \feature{archaeologist}, \feature{art historian}, \feature{church historian}, \feature{classical archaeologist}, \feature{egyptologist}, \feature{explorer}, \feature{historian}, \feature{historian of classical antiquity}, \feature{historian of mathematics}, \feature{historian of science}, \feature{historian of the modern age}, \feature{labor historian}, \feature{legal historian}, \feature{literary historian}, \feature{military historian}, \feature{music historian}, \feature{paleoanthropologist}, \feature{paleontologist}, \feature{philosophy historian}, \feature{polar explorer}, \feature{scientific explorer}\\
\paragraph{\textbf{\texttt{Science/tech:}}}
\feature{aerospace engineer}, \feature{alchemist}, \feature{anesthesiologist}, \feature{artificial intelligence researcher}, \feature{astrologer}, \feature{astronaut}, \feature{astronomer}, \feature{astrophysicist}, \feature{auto mechanic}, \feature{bacteriologist}, \feature{biochemist}, \feature{biologist}, \feature{botanist}, \feature{bryologist}, \feature{cardiologist}, \feature{chemical engineer}, \feature{chemist}, \feature{chief engineer}, \feature{civil engineer}, \feature{climatologist}, \feature{cognitive scientist}, \feature{combat engineer}, \feature{computer scientist}, \feature{cosmologist}, \feature{crystallographer}, \feature{earth scientist}, \feature{ecologist}, \feature{educational psychologist}, \feature{electrical engineer}, \feature{engineer}, \feature{environmental scientist}, \feature{epidemiologist}, \feature{ethnologist}, \feature{ethologist}, \feature{evolutionary biologist}, \feature{geochemist}, \feature{geographer}, \feature{geologist}, \feature{geophysicist}, \feature{immunologist}, \feature{industrial engineer}, \feature{inventor}, \feature{marine biologist}, \feature{mathematician}, \feature{mechanic}, \feature{mechanical automaton engineer}, \feature{mechanical engineer}, \feature{meteorologist}, \feature{microbiologist}, \feature{mining engineer}, \feature{naturalist}, \feature{neurologist}, \feature{neuroscientist}, \feature{nuclear physicist}, \feature{nurse}, \feature{ontologist}, \feature{ornithologist}, \feature{patent inventor}, \feature{pharmacologist}, \feature{physician}, \feature{physicist}, \feature{physiologist}, \feature{planetary scientist}, \feature{psychiatrist}, \feature{psychoanalyst}, \feature{psychologist}, \feature{railroad engineer}, \feature{railway engineer}, \feature{research assistant}, \feature{researcher}, \feature{scientist}, \feature{social psychologist}, \feature{social scientist}, \feature{sociologist}, \feature{software engineer}, \feature{space scientist}, \feature{statistician}, \feature{structural engineer}, \feature{theoretical biologist}, \feature{theoretical physicist}, \feature{virologist}, \feature{zoologist}\\
\paragraph{\textbf{\texttt{Education:}}}
\feature{academic}, \feature{adjunct professor}, \feature{associate professor}, \feature{educator}, \feature{head teacher}, \feature{high school teacher}, \feature{history teacher}, \feature{lady margaret's professor of divinity}, \feature{pedagogue}, \feature{professor}, \feature{school teacher}, \feature{sex educator}, \feature{teacher}, \feature{university teacher}\\
\paragraph{\textbf{\texttt{Economics:}}}
\feature{economist}\\
\paragraph{\textbf{\texttt{Religion:}}}
\feature{anglican priest}, \feature{bible translator}, \feature{bishop}, \feature{catholic priest}, \feature{christian monk}, \feature{lay theologian}, \feature{monk}, \feature{pastor}, \feature{pope}, \feature{preacher}, \feature{priest}, \feature{theologian}\\
\paragraph{\textbf{\texttt{Military:}}}
\feature{air force officer}, \feature{aircraft pilot}, \feature{commanding officer}, \feature{fighter pilot}, \feature{general officer}, \feature{helicopter pilot}, \feature{intelligence officer}, \feature{naval officer}, \feature{officer of the french navy}, \feature{police officer}, \feature{soldier}, \feature{starship pilot}, \feature{test pilot}\\
\paragraph{\textbf{\texttt{Translation:}}}
\feature{translator}\\
\paragraph{\textbf{\texttt{Philosophy:}}}
\feature{analytic philosopher}, \feature{philosopher}, \feature{philosopher of language}, \feature{philosopher of science}\\
\paragraph{\textbf{\texttt{Polymath:}}}
\feature{polymath}\\
\section{Logistic Regression features}
\label{appendix:logistic-regression}
This section lists the full set of features used for the logistic regression analysis after feature reduction, each with its coefficient, standard error, Wald statistic and significance level in Table~\ref{tab:logistic-regression-appendix}. We also describe the templates and implementation details of the features used in our logistic regression analysis (Section~\ref{subsec:logistic-regression}) in Appendix~\ref{appendix:feature-implementation}, and finally list some randomly sampled examples from both the \nq{} and \camelabr{Trivia}{qa}{} datasets in Appendix~\ref{appendix:multi-answers-examples} to show how the \lrfeature{multi\_answers} feature has disparate effects on them.
\subsection{Implementation of Logistic Regression features}
\label{appendix:feature-implementation}
\begin{itemize}
\item \lrfeature{q\_sim}: For closed-domain \qa{} tasks like \nq{} and \textsc{sq}{\small u}\textsc{ad}{}, this feature measures (sim)ilarity between (q)uestion text and evidence sentence---the sentence from the evidence passage which contains the answer text---using Jaccard similarity over unigram tokens~\cite{sugawara-18} (see the sketch after this list). Since we do not include \textsc{sq}{\small u}\textsc{ad}{} in our logistic regression analysis (Section~\ref{subsec:logistic-regression}), this feature is only relevant for \nq{}.
\item \lrfeature{e\_train\_count}: This binary feature represents whether the distinct (e)ntities appearing in a \qa{} example (through the approach described in Section~\ref{sec:mapping}) appear more than twice in the particular dataset's training fold. We avoid taking the logarithm here, as even the log frequency of some commonly occurring entities exceeds the expected feature value range.
\item \lrfeature{t\_wh*}: This represents the features that capture the expected entity type of the answer: \lrfeature{t\_who}, \lrfeature{t\_what}, \lrfeature{t\_where}, \lrfeature{t\_when}. Each binary feature captures whether the particular \lrfeature{"wh*"} word appears in the first ten (t)okens of the question text.\footnote{\qb{} questions often start with ``For 10 points, name this writer \textit{who}...''}
\item \lrfeature{multi\_entities}: If the number of linked person entities in an example (as described in Section~\ref{sec:mapping}) is $n$, this feature is $\log_2(n)$. Hence, this feature is 0 for examples with just a single person entity.
\item \lrfeature{multi\_answers}: If the number of gold answers annotated in an example is $n$, this feature is $\log_2(n)$. Hence, this feature is 0 for examples with just one answer.
\item \lrfeature{g\_*}: Binary demographic feature signaling the presence of the (g)ender characterized by the feature. For instance, \lrfeature{g\_female} signals if the question is about a female person.
\item \lrfeature{o\_*}: Binary demographic feature signaling the presence of the occupation (or profession) as characterized by the feature. For instance, \lrfeature{o\_writer} signals if the question is about a writer.
\end{itemize}
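The sketch below (illustrative Python, assuming simple whitespace tokenization; not our exact implementation) makes the similarity and count-based features above concrete:
\begin{verbatim}
import math

def q_sim(question, evidence_sentence):
    # Jaccard similarity over unigram token sets.
    q = set(question.lower().split())
    e = set(evidence_sentence.lower().split())
    return len(q & e) / len(q | e) if q | e else 0.0

def multi_entities(num_linked_person_entities):
    # log2(n); 0 when only one person entity is linked.
    return math.log2(max(num_linked_person_entities, 1))

def multi_answers(num_gold_answers):
    # log2(n); 0 when only one gold answer is annotated.
    return math.log2(max(num_gold_answers, 1))
\end{verbatim}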
\subsection{Examples with \lrfeature{multi\_answers} feature}
\label{appendix:multi-answers-examples}
In the logistic regression analysis (Section~\ref{subsec:logistic-regression}), we create two features: \lrfeature{multi\_answers} and \lrfeature{multi\_entities}. The former captures the presence of multiple gold answers to the question in a given example, while the latter signals the presence of multiple person entities anywhere in the answers, the question text or the document title of a given example. While \lrfeature{multi\_entities} has a consistently negative correlation with model correctness (Appendix~\ref{appendix:logistic-regression}), \lrfeature{multi\_answers} has a disparate effect.
Though it signals incorrectly answered examples in \nq{}, it has a statistically significant positive correlation with model correctness for \camelabr{Trivia}{qa}{} examples. Going through the examples reveals that \camelabr{Trivia}{qa}{} uses multiple answers to give alternate formulations of the same answer, which aids machine reading, while multiple \nq{} answers are often a sign of question ambiguity~\cite{min-etal-2020-ambigqa}.
To demonstrate this, we list examples from the development folds of both \nq{} (Appendix~\ref{appendix:nq-examples}) and \camelabr{Trivia}{qa}{} (Appendix~\ref{appendix:trivia-qa-examples}) that have multiple gold answers.
\subsubsection{\nq{} examples with multiple answers:}
\label{appendix:nq-examples}
\setlength{\parindent}{0em}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-4135209844918483842}} \\
Q: \textit{who carried the us flag in the 2014 olympics} \\
A: \entity{Todd Lodwick} \\
A: \entity{Julie Chu}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{8838716539218945006}} \\
Q: \textit{who says that which we call a rose} \\
A: \entity{William Shakespeare} \\
A: \entity{Juliet}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-6197052503812142206}} \\
Q: \textit{who has won the most superbowls as a player} \\
A: \entity{Charles Haley} \\
A: \entity{Tom Brady}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-2840415450119119129}} \\
Q: \textit{who started the guinness book of world records} \\
A: \entity{Hugh Beaver} \\
A: \entity{Norris and Ross McWhirter}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{6997422338613101186}} \\
Q: \textit{who played the nurse on andy griffith show} \\
A: \entity{Langdon} \\
A: \entity{Julie Adams}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-7064677612340044331}} \\
Q: \textit{who wrote the song if i were a boy} \\
A: \entity{BC Jean} \\
A: \entity{Toby Gad}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{3248410603422198181}} \\
Q: \textit{who conducted the opening concert at carnegie hall} \\
A: \entity{Walter Damrosch} \\
A: \entity{Pyotr Ilyich Tchaikovsky}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-3772952199709196386}} \\
Q: \textit{who founded amazon where is the headquarters of amazon} \\
A: \entity{founded by Jeff Bezos} \\
A: \entity{based in Seattle , Washington}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4053461415821443645}} \\
Q: \textit{who wrote song what a friend we have in jesus} \\
A: \entity{Joseph M. Scriven} \\
A: \entity{Charles Crozat Converse}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-5670674709553776773}} \\
Q: \textit{who sings the theme song for the proud family} \\
A: \entity{Solange Knowles} \\
A: \entity{Solange Knowles}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{2978779480736570480}} \\
Q: \textit{days of our lives cast doug and julie} \\
A: \entity{Bill Hayes} \\
A: \entity{Susan Seaforth}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{6173192803639008655}} \\
Q: \textit{who has appeared in the most royal rumbles} \\
A: \entity{Isaac Yankem / `` Diesel '' / Kane} \\
A: \entity{Shawn Michaels}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{7561389892504775773}} \\
Q: \textit{who wrote the song stop the world and let me off} \\
A: \entity{Carl Belew} \\
A: \entity{W.S. Stevenson}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-8366545547296627039}} \\
Q: \textit{who wrote the song photograph by ringo starr} \\
A: \entity{Ringo Starr} \\
A: \entity{George Harrison}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-5674327280636928690}} \\
Q: \textit{who sings you're welcome in moana credits} \\
A: \entity{Lin - Manuel Miranda} \\
A: \entity{Jordan Fisher}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-2432292250757146771}} \\
Q: \textit{who wrote the song i hate you i love you} \\
A: \entity{Garrett Nash} \\
A: \entity{Olivia O'Brien}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-3632974700795137148}} \\
Q: \textit{who is the owner of reading football club} \\
A: \entity{Xiu Li Dai} \\
A: \entity{Yongge Dai}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{7163132803738849961}} \\
Q: \textit{who played guitar on my guitar gently weeps} \\
A: \entity{Eric Clapton} \\
A: \entity{George Harrison}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{1318031841813121387}} \\
Q: \textit{who sang the theme song to that 70s show} \\
A: \entity{Todd Griffin} \\
A: \entity{Cheap Trick}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{1393634180793653648}} \\
Q: \textit{who came up with the initial concept of protons and neutrons} \\
A: \entity{Werner Heisenberg} \\
A: \entity{Dmitri Ivanenko}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{9134704289334516617}} \\
Q: \textit{who missed the plane the day the music died} \\
A: \entity{Waylon Jennings} \\
A: \entity{Tommy Allsup}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{8466196474705624263}} \\
Q: \textit{who was running as vice president in 1984} \\
A: \entity{Congresswoman Ferraro} \\
A: \entity{Vice President George H.W. Bush}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{5579013873387598720}} \\
Q: \textit{who has won the canada open women's doubles} \\
A: \entity{Mayu Matsumoto} \\
A: \entity{Wakana Nagahara}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{5584540254904933863}} \\
Q: \textit{who sang what are we doing in love} \\
A: \entity{Dottie West} \\
A: \entity{Kenny Rogers}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-8677459248394445003}} \\
Q: \textit{who is hosting e live from the red carpet} \\
A: \entity{Ryan Seacrest} \\
A: \entity{Giuliana Rancic}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-1342189058950802702}} \\
Q: \textit{who made the poppies at tower of london} \\
A: \entity{Paul Cummins} \\
A: \entity{Tom Piper}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{6014950976264156000}} \\
Q: \textit{who sang never gonna let you go} \\
A: \entity{Joe Pizzulo} \\
A: \entity{Leeza Miller}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-8052136860650205450}} \\
Q: \textit{who wrote the song rainy days and mondays} \\
A: \entity{Roger Nichols} \\
A: \entity{Paul Williams}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{7903911150166287814}} \\
Q: \textit{what position did doug peterson play in the nfl} \\
A: \entity{quarterback} \\
A: \entity{holder on placekicks}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{583026970021621830}} \\
Q: \textit{who invented the first home video security system} \\
A: \entity{Marie Van Brittan Brown} \\
A: \entity{her husband Albert Brown}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{5427679691711111925}} \\
Q: \textit{who were the two mathematicians that invented calculus} \\
A: \entity{Isaac Newton} \\
A: \entity{Gottfried Leibniz}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-9163844183450408581}} \\
Q: \textit{nba record for most double doubles in a season} \\
A: \entity{Tim Duncan leads the National Basketball Association ( NBA ) in the points - rebounds combination with 840} \\
A: \entity{John Stockton leads the points - assists combination with 714}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-8109367537690343895}} \\
Q: \textit{who were the twins that played for kentucky} \\
A: \entity{Andrew Michael Harrison} \\
A: \entity{Aaron Harrison}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4784420206031467202}} \\
Q: \textit{who wrote he ain't heavy he's my brother lyrics} \\
A: \entity{Bobby Scott} \\
A: \entity{Bob Russell}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4136958282795887427}} \\
Q: \textit{who opens the church of the holy sepulchre} \\
A: \entity{the Nusaybah family} \\
A: \entity{the Joudeh Al - Goudia family}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-2610209560699528896}} \\
Q: \textit{who is the writer of 50 shades of grey} \\
A: \entity{Erika Mitchell Leonard} \\
A: \entity{E.L. James}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{8968036245733884389}} \\
Q: \textit{when did stephen curry won the mvp award} \\
A: \entity{2015 ,} \\
A: \entity{2015}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-1899514742808499173}} \\
Q: \textit{who are nominated for president of india 2017} \\
A: \entity{Ram Nath Kovind} \\
A: \entity{Meira Kumar}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-3019484115332998709}} \\
Q: \textit{what movie is count on me by bruno mars in} \\
A: \entity{A Turtle 's Tale : Sammy 's Adventures} \\
A: \entity{Diary of a Wimpy Kid : The Long Haul}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{810060125994185205}} \\
Q: \textit{who was the first to say i'm going to disney world} \\
A: \entity{Dick Rutan} \\
A: \entity{Jeana Yeager}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{339027965927992295}} \\
Q: \textit{who sings the whiskey ain't workin anymore} \\
A: \entity{Travis Tritt} \\
A: \entity{Marty Stuart}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{5995814638252489040}} \\
Q: \textit{who played scotty baldwins father on general hospital} \\
A: \entity{Peter Hansen} \\
A: \entity{Ross Elliott}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{3723628014502752965}} \\
Q: \textit{who wrote cant get you out of my head lyrics} \\
A: \entity{Cathy Dennis} \\
A: \entity{Rob Davis}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{3886074985605209321}} \\
Q: \textit{who sings find out who your friends are with tracy lawrence} \\
A: \entity{Tim McGraw} \\
A: \entity{Kenny Chesney}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{3624266518328727040}} \\
Q: \textit{who invented the printing press and what year} \\
A: \entity{Johannes Gutenberg} \\
A: \entity{circa 1439}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-4951004239400083779}} \\
Q: \textit{who plays chris grandy in 13 going on 30} \\
A: \entity{Jim Gaffigan} \\
A: \entity{Alex Black}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{2672721743911117185}} \\
Q: \textit{who developed a set of postulates to prove that specific microorganisms cause disease} \\
A: \entity{Robert Koch} \\
A: \entity{Friedrich Loeffler}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{2166092801797515500}} \\
Q: \textit{who is the director of taarak mehta ka ooltah chashmah} \\
A: \entity{Harshad Joshi} \\
A: \entity{Malav Suresh Rajda}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-3389723371168293793}} \\
Q: \textit{who has the most olympic medals in figure skating} \\
A: \entity{Tessa Virtue} \\
A: \entity{Scott Moir}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-8391680223788694572}} \\
Q: \textit{who wrote if i were a boy reba or beyonce} \\
A: \entity{BC Jean} \\
A: \entity{Toby Gad}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{1070572237499172286}} \\
Q: \textit{who wrote the song after you've gone} \\
A: \entity{Turner Layton} \\
A: \entity{Henry Creamer}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{2343902375984110832}} \\
Q: \textit{who does the voice of mickey mouse on mickey mouse clubhouse} \\
A: \entity{Wayne Allwine} \\
A: \entity{Bret Iwan}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{7013863939803495694}} \\
Q: \textit{who sings love me tender in princess diaries 2} \\
A: \entity{Norah Jones} \\
A: \entity{Adam Levy}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4925057086725798331}} \\
Q: \textit{who wrote yakkity yak don't talk back} \\
A: \entity{Jerry Leiber} \\
A: \entity{Mike Stoller}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{647605647914971565}} \\
Q: \textit{who wrote lyrics for phantom of the opera} \\
A: \entity{Charles Hart} \\
A: \entity{Richard Stilgoe}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-6371603500131574271}} \\
Q: \textit{who sings somebody's watching me with michael jackson} \\
A: \entity{Rockwell} \\
A: \entity{Jermaine Jackson}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-4036503601399675973}} \\
Q: \textit{when did michael jordan return to the nba} \\
A: \entity{1995} \\
A: \entity{2001}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4323871331649279373}} \\
Q: \textit{who invented the printing press and in what year} \\
A: \entity{Johannes Gutenberg} \\
A: \entity{1440}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{7234277123646852447}} \\
Q: \textit{who sings war don't let me down} \\
A: \entity{American production duo The Chainsmokers} \\
A: \entity{vocals of American singer Daya}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{4245798066923223457}} \\
Q: \textit{who has the most all star mvp awards} \\
A: \entity{Bob Pettit} \\
A: \entity{Kobe Bryant}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-3585157729928173881}} \\
Q: \textit{who plays hulk in the thor and avengers series of movies} \\
A: \entity{Fred Tatasciore} \\
A: \entity{Rick D. Wasserman}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{-7892904540301629325}} \\
Q: \textit{who wrote the song going to kansas city} \\
A: \entity{Jerry Leiber} \\
A: \entity{Mike Stoller}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{1838851770314085590}} \\
Q: \textit{who plays sheila carter on the bold and the beautiful} \\
A: \entity{Kimberlin Brown} \\
A: \entity{Michelle Stafford}}
\subsubsection{\camelabr{Trivia}{qa}{} multi-answer examples:}
\label{appendix:trivia-qa-examples}
\small{We randomly sample 100 examples from \camelabr{Trivia}{qa}{} in which the question has multiple gold answers.}
\\
\setlength{\parindent}{0cm}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_6110}} \\
Q: \textit{On which island in the North Sea did both St Aidan and St Cuthbert live?} \\
A: \entity{Lindisfarne } \\
A: \entity{LINDISFARNE}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_1008}} \\
Q: \textit{To the nearest two, how many tennis Grand Slam titles did Jimmy Connors win?} \\
A: \entity{10} \\
A: \entity{ten}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_26211}} \\
Q: \textit{In the TV series Doctor Who, who was the creator of the Daleks and arch enemy of the Doctor?} \\
A: \entity{Davros} \\
A: \entity{Creator of the Daleks}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_22212}} \\
Q: \textit{In which book of the bible is the story of Samson and Delilah?} \\
A: \entity{Judge (disambiguation)} \\
A: \entity{Judges}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bt\_1538}} \\
Q: \textit{What is cartoon character Mr. Magoo's first name} \\
A: \entity{Quincy (disambiguation)} \\
A: \entity{Quincy}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_2444}} \\
Q: \textit{What is Robin Williams character called in Good Morning Vietnam?} \\
A: \entity{Adrian}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_22693}} \\
Q: \textit{What was the first name of the jazz trombonist Kid Ory?} \\
A: \entity{Eadweard} \\
A: \entity{Edward}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_1606}} \\
Q: \textit{Which of Queen Elizabeth's children is the lowest in succession to (i.e. furthest away from) the throne?} \\
A: \entity{Anne} \\
A: \entity{Ann (name)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_5503}} \\
Q: \textit{"Which radio comedian's catchphrase was ""daft as a brush""?"} \\
A: \entity{KEN PLATT} \\
A: \entity{Ken Platt}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qg\_2992}} \\
Q: \textit{According to Sammy Haggar, what can't he drive?} \\
A: \entity{55} \\
A: \entity{fifty-five}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qf\_3440}} \\
Q: \textit{What was Grace Darling's father's job?} \\
A: \entity{Lighthouse-keeper} \\
A: \entity{Lighthouse keeper}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_12369}} \\
Q: \textit{"What year did Jean-Francois Champollion publish the first correct translation of Egyptian hieroglyphs from the Rosetta Stone, the Roman Catholic Church take Galileo Galilei's ""Dialogue"" off their list of banned books, and Britain repeal the death penalty for over 100 crimes?"} \\
A: \entity{one thousand, eight hundred and twenty-two} \\
A: \entity{1822}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_4143}} \\
Q: \textit{What is the title of the most famous painting by Franz Hals?} \\
A: \entity{Laughing Cavalier} \\
A: \entity{The Laughing Cavalier}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qb\_2647}} \\
Q: \textit{What is the title of the 1944 film starring Barbara Stanwyck as the wife who seduces an insurance salesman into killing her husband?} \\
A: \entity{Double indemnity (disambiguation)} \\
A: \entity{Double Indemnity}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_22920}} \\
Q: \textit{Who was the choreographer of the dance troupe Hot Gossip?} \\
A: \entity{Arlene Philips} \\
A: \entity{Arlene Phillips}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_719}} \\
Q: \textit{River Phoenix died during the making of which movie?} \\
A: \entity{Dark Blood (film)} \\
A: \entity{Dark Blood}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_19457}} \\
Q: \textit{Who won the first ever boxing gold for women? She shares her surname with two US Presidents.} \\
A: \entity{Nicola Adams} \\
A: \entity{Adams, Nicola}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_8996}} \\
Q: \textit{Actor Norman Painting died in November 2009, which part in a log running radio series did he make his own?} \\
A: \entity{PHIL ARCHER} \\
A: \entity{Phil Archer}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{dpql\_6111}} \\
Q: \textit{Which Jersey-born actor played Superman in Man of Steel?} \\
A: \entity{Henry Cavill} \\
A: \entity{Henry William Dalgliesh Cavill}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_2135}} \\
Q: \textit{Name the game show, presented by Leslie Grantham and Melinda Messenger, where contestants were set physical and mental challenges?} \\
A: \entity{Fort Boyard (disambiguation)} \\
A: \entity{Fort Boyard}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_3205}} \\
Q: \textit{Who wrote the novel 'The Beach' on which the film was based?} \\
A: \entity{Alex Garland} \\
A: \entity{ALEX GARLAND}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_2999}} \\
Q: \textit{In what year did Edward Vlll abdicate?} \\
A: \entity{one thousand, nine hundred and thirty-six} \\
A: \entity{1936}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_723}} \\
Q: \textit{Which artist David was born in Bradford UK?} \\
A: \entity{Hockney} \\
A: \entity{David Hockney}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_3708}} \\
Q: \textit{Three Liverpool players were in the 1966 England World Cup winning squad. Roger Hunt and Ian Callaghan were two – who was the third?} \\
A: \entity{Gerry Byrne} \\
A: \entity{Gerry Byrne (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_11151}} \\
Q: \textit{Which artist has a daughter and two sons with Jane Asher, whom he married in 1981?} \\
A: \entity{Gerald Anthony Scarfe} \\
A: \entity{Gerald Scarfe}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qb\_3652}} \\
Q: \textit{Who wrote the novel ‘The Eagle Has landed’?} \\
A: \entity{Harry Patterson} \\
A: \entity{Jack Higgins}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_14683}} \\
Q: \textit{Who presents the BBC quiz show ‘Perfection’?} \\
A: \entity{Nick Knowles} \\
A: \entity{NICK KNOWLES}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_9464}} \\
Q: \textit{Who succeeded Brian Epstein as manager of The Beatles?} \\
A: \entity{Allan Klein} \\
A: \entity{Allen Klein}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{wh\_2615}} \\
Q: \textit{In which year did both T-Rex's Marc Bolan and Elvis Presley die ?} \\
A: \entity{1977} \\
A: \entity{one thousand, nine hundred and seventy-seven}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_9018}} \\
Q: \textit{Who played Hotlips Houlihan in the 1972 film MASH?} \\
A: \entity{Sally Kellerman} \\
A: \entity{SALLY KELLERMAN}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_1516}} \\
Q: \textit{Who bought Chelsea football club for £1 in 1982?} \\
A: \entity{Ken Bates} \\
A: \entity{Kenneth Bates}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_23289}} \\
Q: \textit{What was the middle name of the author William Thackeray?} \\
A: \entity{Makepeace} \\
A: \entity{MAKEPEACE}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_7589}} \\
Q: \textit{What was the name of the older brother of Henry 8th?} \\
A: \entity{Arthur} \\
A: \entity{Arthur (name)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_10316}} \\
Q: \textit{Which actor played 'Hadley', in the TV series of the same name?} \\
A: \entity{GERALD HARPER} \\
A: \entity{Gerald Harper}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_12933}} \\
Q: \textit{Operation Barbarossa, Hitler invades Russia.} \\
A: \entity{one thousand, nine hundred and forty-one} \\
A: \entity{1941}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_5050}} \\
Q: \textit{"Which Italian nobel prize winner (1934) wrote novels such as ""Mal Gioconda"" and switched to writing plays in 1910?"} \\
A: \entity{Pirandello} \\
A: \entity{Luigi Pirandello}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bt\_2403}} \\
Q: \textit{What was the name of the driver of the mail train robbed by the great train robbers} \\
A: \entity{Jack Mills (train driver)} \\
A: \entity{Jack Mills}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_2189}} \\
Q: \textit{What was the name of the private eye played by Trevor Eve on TV in the '70s?} \\
A: \entity{Shoestring (TV series)} \\
A: \entity{Eddie Shoestring}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qb\_5431}} \\
Q: \textit{Brazilian football legend Pele wore which number on his shirt?} \\
A: \entity{10} \\
A: \entity{ten}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qb\_4726}} \\
Q: \textit{Michael J Fox travels back to which year in the Wild West in the 1990 film ‘Back To The Future Part III’?} \\
A: \entity{one thousand, eight hundred and eighty-five} \\
A: \entity{1885}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_8275}} \\
Q: \textit{Later a 'Blue Peter' presenter, who played 'Steven Taylor', an assistant to William Hartnell's 'Doctor Who'?} \\
A: \entity{PETER PURVES} \\
A: \entity{Peter Purves}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_962}} \\
Q: \textit{Which city was the subject of the 1949 song 'Dirty Old Town' by Ewan McColl?} \\
A: \entity{Salford} \\
A: \entity{Salford (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_1348}} \\
Q: \textit{In the late 60s Owen Finlay MacLaren pioneered what useful item for parents of small chldren?} \\
A: \entity{Baby Buggy} \\
A: \entity{Baby buggy}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_12732}} \\
Q: \textit{General Franco, the Spanish military general, was head of state of Spain from October 1936 following the Spanish Civil War, until when?} \\
A: \entity{1975} \\
A: \entity{one thousand, nine hundred and seventy-five}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bt\_4495}} \\
Q: \textit{Which of the Great Train Robbers became a florist outside Waterloo station until he was found hanged in a lock up} \\
A: \entity{Buster Edwards} \\
A: \entity{Ronald \%22Buster\%22 Edwards}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_20394}} \\
Q: \textit{Which TV presenter, who died in February 2013, was for over 20 years the host of 'Mr \& Mrs'?} \\
A: \entity{Derek Batey} \\
A: \entity{Derek Beatty}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_12918}} \\
Q: \textit{Which British political party leader is MP for Westmorland and Lonsdale?} \\
A: \entity{Tim Farron} \\
A: \entity{Timothy Farron}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_13785}} \\
Q: \textit{Who wrote the lyrics for 'Sing', written to celebrate the Queen's Diamond Jubilee?} \\
A: \entity{Gary Barlow} \\
A: \entity{GARY BARLOW}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_20830}} \\
Q: \textit{Which top National Hunt trainer's establishment is based at Seven Barrows?} \\
A: \entity{NICKY HENDERSON} \\
A: \entity{Nicky Henderson}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{wh\_2133}} \\
Q: \textit{Which T.V. Quiz show host used the catchphrase :- If its' up there, I'll give you the money myself ?} \\
A: \entity{LES DENNIS} \\
A: \entity{Les Dennis}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_15663}} \\
Q: \textit{The 27 episodes of which sitcom featuring Julia Mckenzie, Anton Rodgers and Ballard Berkley were first broadcast in the 1980s?} \\
A: \entity{Fresh Fields (TV series)} \\
A: \entity{Fresh Fields}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_4871}} \\
Q: \textit{When US President James Garfield was shot in Washington DC in July 1881, what was he doing?} \\
A: \entity{WAITING FOR A TRAIN} \\
A: \entity{Waiting for a Train}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bb\_6592}} \\
Q: \textit{Which artist was born in Bradford in 1937?} \\
A: \entity{Hockney} \\
A: \entity{David Hockney}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_10270}} \\
Q: \textit{Argentina invaded UK's Falkland Islands, Israel invaded Southern Lebanon, Canada became officially independent of the UK, Leonid Brezhnev, leader of the USSR, died, all in what year?} \\
A: \entity{one thousand, nine hundred and eighty-two} \\
A: \entity{1982}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bt\_4206}} \\
Q: \textit{Who was the first woman to be seen on Channel 4} \\
A: \entity{Carol Vorderman} \\
A: \entity{Carol Voderman}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_8871}} \\
Q: \textit{Lieutenant General James Thomas Brudenell, who commanded the Light Brigade of the British Army during the Crimean War, was the 7th Earl of what?} \\
A: \entity{Cardigan} \\
A: \entity{Cardigan (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_13639}} \\
Q: \textit{Which model village did Samuel Greg build to house workers at his nearby Quarry Bank Mill?} \\
A: \entity{Styal} \\
A: \entity{STYAL}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_812}} \\
Q: \textit{Who was the defending champion when Martina Navratilova first won Wimbledon singles?} \\
A: \entity{Virginia Wade} \\
A: \entity{Sarah Virginia Wade}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_11790}} \\
Q: \textit{Opened in 1963, which London nightclub did Mark Birley name after his then wife?} \\
A: \entity{Annabel's} \\
A: \entity{ANNABELS}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_8397}} \\
Q: \textit{In 1995, Steffi Graf became the only tennis player to have won each of the four grand slam events how many times?} \\
A: \entity{four} \\
A: \entity{4}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{dpql\_3151}} \\
Q: \textit{On which river does Ipswich stand?} \\
A: \entity{Orwell (disambiguation)} \\
A: \entity{Orwell}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_14634}} \\
Q: \textit{"Which Bob Dylan song begins ""You got a lotta nerveTo say you are my friend. When I was down, You just stood there grinning""?"} \\
A: \entity{Positively Fourth Street} \\
A: \entity{Positively 4th Street}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{dpql\_1801}} \\
Q: \textit{Nick Begs was lead singer with which 80’s pop band?} \\
A: \entity{Kaja Googoo} \\
A: \entity{Kajagoogoo}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_16011}} \\
Q: \textit{In 1483, who was appointed the first grand inquisitor of the Spanish Inquisition?} \\
A: \entity{Torquemada (disambiguation)} \\
A: \entity{Torquemada}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_1933}} \\
Q: \textit{What remake of a British science-fiction serial broadcast by BBC Television in the summer of 1953 was staged live by BBC Four in 2005 with actors Jason Flemyng, Mark Gatiss, Andrew Tiernan, Indira Varma, David Tennant and Adrian Bower?} \\
A: \entity{Quatermass experiment} \\
A: \entity{The Quatermass Experiment}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_2323}} \\
Q: \textit{Which 2009 film is a biopic of John Lennon?} \\
A: \entity{'NOWHERE BOY'} \\
A: \entity{Nowhere Boy}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bb\_522}} \\
Q: \textit{'The Battle of Trafalgar' is the work of which British painter?} \\
A: \entity{Joseph Turner} \\
A: \entity{Joseph Turner (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_4463}} \\
Q: \textit{Who discovered the two moons of Mars in 1877?} \\
A: \entity{Asaph Hall} \\
A: \entity{Asaph Hall III}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_1111}} \\
Q: \textit{Which brand of beer does Homer Simpson drink regularly?} \\
A: \entity{Duff} \\
A: \entity{Duff (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{wh\_8}} \\
Q: \textit{In the novel 'Treasure Island' name the pirate shot dead by Jim Hawkins in the rigging of the Hispaniola} \\
A: \entity{Israel Hands} \\
A: \entity{ISRAEL HANDS}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qg\_3884}} \\
Q: \textit{Dow Constantine and Susan Hutchinson are currently running for was position?} \\
A: \entity{King County executive} \\
A: \entity{King County Executive}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_25940}} \\
Q: \textit{Which actor played the part of Ross Poldark in the BBC’s mid 1970’s television series?} \\
A: \entity{Robin Ellis} \\
A: \entity{ROBIN ELLIS}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_12777}} \\
Q: \textit{For which 1960 film did Billy Wilder become the first person to win three Oscars for the same film?} \\
A: \entity{The Apartment} \\
A: \entity{The apartment}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bb\_4540}} \\
Q: \textit{Famous for 'Die Welt als Wille und Vorstellung', Arthur Schopenhauer (1788-1860) was a German?} \\
A: \entity{Philosophers} \\
A: \entity{Philosopher}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qb\_8589}} \\
Q: \textit{What is the nickname of the frontiersman Nathaniel Poe, played by Daniel Day Lewis, in the 1992, film ‘The Last of the Mohicans’?} \\
A: \entity{Hawkeye} \\
A: \entity{Hawkeye (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bt\_2852}} \\
Q: \textit{Who played the part of Tina Seabrook in Casualty} \\
A: \entity{Claire Woodrow} \\
A: \entity{Claire Goose}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{wh\_557}} \\
Q: \textit{Who duetted with Syd Owen on the single Better Believe It, which was released as part of the Children in Need appeal in 1995 ?} \\
A: \entity{PATSY PALMER} \\
A: \entity{Patsy Palmer}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_3979}} \\
Q: \textit{What is the name of the character played by Nicole Kidman in the film 'Moulin Rouge'?} \\
A: \entity{Satine} \\
A: \entity{'SATINE'}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_10365}} \\
Q: \textit{Which British girl won the Women's Junior Singles title at Wimbledon this year (2008)?} \\
A: \entity{LAURA ROBSON} \\
A: \entity{Laura Robson}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qf\_1735}} \\
Q: \textit{In what year did Elvis Presley and his parents move from Tupelo to Memphis?} \\
A: \entity{one thousand, nine hundred and forty-eight} \\
A: \entity{1948}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_1468}} \\
Q: \textit{What was Pete Sampras seeded when he won his first US Open?} \\
A: \entity{twelve} \\
A: \entity{12}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qf\_2679}} \\
Q: \textit{Who on TV has played a scarecrow and a Time Lord?} \\
A: \entity{John Pertwee} \\
A: \entity{Jon Pertwee}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_11844}} \\
Q: \textit{In which year was Olaf Palme assassinated and the Chernobyl nuclear power station exploded?} \\
A: \entity{1986} \\
A: \entity{one thousand, nine hundred and eighty-six}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qf\_3578}} \\
Q: \textit{Cassandra was the pseudonym of which writer in the Daily Mirror?} \\
A: \entity{William Neil Connor} \\
A: \entity{William Connor}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_3898}} \\
Q: \textit{How many times did Steffi Graf win the Ladies Singles at Wimbledon?} \\
A: \entity{seven} \\
A: \entity{7}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_6782}} \\
Q: \textit{What is the disease that Stephen Hawking has?} \\
A: \entity{Motor neuron disease} \\
A: \entity{Motor neuron diseases}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_15255}} \\
Q: \textit{"How many films were made by director Sir Peter Jackson from Tolkien's short book, ""The Hobbit""?"} \\
A: \entity{3} \\
A: \entity{three}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_7168}} \\
Q: \textit{Who invented the wind-up radio?} \\
A: \entity{Trevor Bayliss} \\
A: \entity{TREVOR BAYLISS}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_14433}} \\
Q: \textit{In Pride and Prejudice what was the first name of Mr Darcy?} \\
A: \entity{Fitzwilliam (disambiguation)} \\
A: \entity{Fitzwilliam}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_8230}} \\
Q: \textit{Which single by 'Leapy Lee' reached number two in the UK charts in 1968?} \\
A: \entity{'LITTLE ARROWS'} \\
A: \entity{Little Arrows}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{dpql\_1416}} \\
Q: \textit{Whose is the first tale in Chaucer’s Canterbury Tales?} \\
A: \entity{The Knight} \\
A: \entity{Knight (disambiguation)}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qw\_1672}} \\
Q: \textit{Which womens squash player won the World Open four times (1985, 1987, 1990 \& 1992) and the British Open eight times?} \\
A: \entity{Susan Devoy} \\
A: \entity{Susan Elizabeth Anne Devoy}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{qz\_832}} \\
Q: \textit{Who wrote the novels About A Boy, How To Be Good and High Fidelity?} \\
A: \entity{Nick Hornby} \\
A: \entity{Hornby, Nick}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_22884}} \\
Q: \textit{Which TV series was about a pop group called 'Little Ladies' featuring Charlotte Cornwell, Julie Covington and Rula Lenska?} \\
A: \entity{Rock Follies} \\
A: \entity{Rock Follies of '77}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_10746}} \\
Q: \textit{Who wrote the 1951 novel ‘The Caine Mutiny’?} \\
A: \entity{HERMAN WOUK} \\
A: \entity{Herman Wouk}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{bb\_285}} \\
Q: \textit{Said to refer erroneously to the temperature at which book paper catches fire, the title of Ray Bradbury's 1953 novel about a futuristic society in which reading books is illegal, is called 'Fahrenheit...' what? 972; 451; 100; or 25?} \\
A: \entity{451} \\
A: \entity{four hundred and fifty-one}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_7290}} \\
Q: \textit{Who was the driver of the limousine at the time of Diana Princess of Wales' death?} \\
A: \entity{HENRI PAUL} \\
A: \entity{Henri Paul}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{sfq\_4368}} \\
Q: \textit{Which island in the Grenadines of St. Vincent was bought by Colin Tennant in 1958? Princess Margaret built a holiday home there in the 1960's.} \\
A: \entity{MUSTIQUE} \\
A: \entity{Mustique}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{odql\_5476}} \\
Q: \textit{Which pop star had the real name of Ernest Evans?} \\
A: \entity{Chubby Checker} \\
A: \entity{'CHUBBY CHECKER'}}
\tiny{\setlength{\parindent}{0cm}
\textbf{id: \texttt{tc\_980}} \\
Q: \textit{"Which supermodel said, ""I look very scary in the mornings?"} \\
A: \entity{We don't wake up for less than \$10,000 a day} \\
A: \entity{Linda Evangelista}}
\section*{Appendix}
\latexfile{8a-country-collapse}
\latexfile{8b-profession-collapse}
\clearpage
\latexfile{9a-logistic-regression-full}
\latexfile{9b1-nq-multi-answers}
\latexfile{9b2-triviaqa-multi-answers}
\clearpage
\end{document}
\section{Introduction}
\footnotetext{Invited talk at ``Production
and decay of hyperons, charmed and beauty hadrons'', Strasbourg, France, Sep. 5-8, 1995}
The physics program at CLEO is at the forefront of heavy flavour research. The emphasis is on the
decay of charm hadrons, beauty mesons and tau leptons. There is also active research in
2-photon physics, Upsilon spectroscopy and production characteristics of charm hadrons.
In this talk, I will focus on the disagreement between the experimental value of the
B semileptonic branching fraction and predictions of theoretical models, with the experimental value being
the smaller of the two.
In order to ``fix'' the model predictions, one has to increase the number of charm quarks produced in
the decay of a b quark, and also the rate for B decays of the type, $b\rightarrow c{\bar c}s$. I will discuss CLEO results
which shed light on this issue. I will first present results on an isospin violating decay of
the $\mbox{D}^{\ast}_s$ meson.
\section{Data Sample}
The results shown here are based on data taken at the Cornell Electron Storage Ring using the
CLEO-II detector. The CLEO-II detector has excellent charged and neutral particle detection over
$\approx 95\%$ of $4\pi$. Electrons and muons are detected with high efficiency and low fake rates.
Detector details can be found elsewhere\cite{ref:nim}.
The data were collected on the $\Upsilon$(4S) resonance, with center of mass energy
of 10.58 GeV, and in the continuum, 60 MeV below. The ON resonance luminosity was 3.3 fb$^{-1}$,
which corresponds to about $3.5 \times 10^6$ $B {\bar B}$ mesons produced. The OFF resonance luminosity,
which is used to model the continuum background under the $\Upsilon$(4S),
was 1.6 fb$^{-1}$. To study charm hadrons, one can use both ON and OFF resonance data, which amounts to
about $6.5~\times~10^6$ $c {\bar c}$ pairs produced. The total number of reconstructed charm hadrons
at present, which includes $D^0, D^+, D^{*0(+)}, D_s^{(*)}, \Lambda_c$, etc., is $\ge~1.0\times~10^5$.
The results presented here are based on about 70\% of the total luminosity.
\section{Isospin violating decay, $\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \pi^0$}
Up to now, only the radiative decay of the $\mbox{D}^{\ast}_s$ has been observed\cite{ref:pdg}. The only strong
decay allowed, $\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \pi^0$, is ``forbidden'' by isospin. However, isospin is not an
exact symmetry (e.g., $m_u \ne m_d$), as evidenced by the observed decay
$\psi^{'}~\rightarrow~J/\psi \pi^0$. Indeed, it has
been argued on the basis of chiral perturbation theory that the rate for $\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \pi^0$ is
non-vanishing.
The decay is mediated by a virtual $\eta$, which has a significant $s {\bar s}$ content and which
``mixes'' into a $\pi^0$ because the $\eta$ also has a large non-strange component; it is this
mixing step that violates isospin. The tree-level diagram for this decay, gluon emission to produce a
$\pi^0$, is OZI-suppressed, whereas the electromagnetic production mechanism is down by a factor of
$\alpha$. The amplitude for this decay mode is proportional to the mass difference
between the u and d quarks. Since the radiative decay, $\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \gamma$, is suppressed
due to the partial cancellation of the charm and strange quark magnetic moments, it is possible to
observe the isospin violating decay.
The $\mbox{D}_s$ meson is reconstructed in the $\phi \pi$ decay mode, which has a large
(detection efficiency $\times$ branching fraction) and is relatively background free\cite{ref:bart}.
The $\pi^0$ has to pass strict selection criteria in order
to be considered. In Fig.~\ref{fig:pizero}, I present the mass difference, $\Delta M = M(\mbox{D}_s \pi^0) - M(\mbox{D}_s)$,
for events which fall within the $\pi^0$ and $\mbox{D}_s$ mass regions. The points with error bars
indicate a clear
signal, yielding $14.7^{+4.6}_{-4.0}$ events. The dashed line is the contribution due to
random combinations,
which has been modelled using the sidebands in the $\pi^0$ and $\mbox{D}_s$ mass distributions. A fit to the
dashed histogram yields $-1.0^{+3.1}_{-2.4}$ events, consistent with zero. If, instead, we
plot the $\mbox{D}_s$ mass, after requiring cuts on the mass difference, we again have a clear signal.
Counting events in the signal region, 142~MeV/c$^2~<~\Delta M~<~146$~MeV/c$^2$,
we observe 16 signal and 5
background events. Taking into account that the sidebands are twice the width of the signal region, we
obtain
the binomial probability of getting 16 (or more) signal events out of a total of 21 events to be
$7.3 \times 10^{-5}$, which corresponds to a statistical significance of at least 3.9 standard
deviations. Normalizing this reaction to the radiative decay, we obtain the branching fraction ratio,
\begin{displaymath}
\frac{{\cal B} (\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \pi^0)}{{\cal B}
(\mbox{D}^{\ast}_s \rightarrow \mbox{D}_s \gamma)}~=0.062^{+0.020}_{-0.018} \pm 0.022
\end{displaymath}
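The significance quoted above follows from a simple binomial estimate: under the background-only hypothesis, each of the 21 events falls in the signal region with probability $p=1/3$, since the signal region is half as wide as the combined sidebands, and
\begin{displaymath}
P\left(n \ge 16\right)=\sum_{k=16}^{21}{21 \choose k}\left(\frac{1}{3}\right)^{k}\left(\frac{2}{3}\right)^{21-k}\approx 7.3\times 10^{-5}.
\end{displaymath}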
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=pizero.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{ Mass Difference.}
\label{fig:pizero}
\end{figure}
The presence of both the radiative and pionic decay modes implies that
the spin-parity of the $\mbox{D}^{\ast}_s$ belongs to the ``natural'' series ($1^-$, 2$^+$,...). The most likely
scenario is $1^-$, same as $\mbox{D}^{\ast 0}$ and $\mbox{D}^{\ast +}$ \cite{ref:pdg}. In addition, the pionic
decay mode
is very close to the kinematic threshold; we use it to measure the mass difference of
$\mbox{D}^{\ast}_s$ and $\mbox{D}_s$, which is determined to be $143.76 \pm 0.39 \pm 0.40$ MeV/c$^2$, in excellent
agreement with the
previous CLEO measurement (using the radiative mode), $144.22 \pm 0.47 \pm 0.37$ MeV/c$^2$.
These values are somewhat larger but more precise than the
PDG\cite{ref:pdg} value of $142.4 \pm 1.7$ MeV/c$^2$.
\section{Semileptonic B decay and related issues}
One of the more intriguing issues in B physics is the disagreement between the experimental value
and theoretical predictions for the B semileptonic branching fraction. After accounting for QCD
corrections, the theoretical predictions range from $11\%$ to $12\%$, whereas the most model-independent
experimental value (CLEO) is $(10.49 \pm 0.17 \pm 0.43)\%$.
By itself, this ``disagreement'' may not seem serious, but these theoretical models also
predict that the number of charm quarks ($n_c$) produced per decay
of a b quark is about 1.30, instead of the measured value (CLEO),
\begin{displaymath}
n_c = 1.15 \pm 0.044
\end{displaymath}
These predictions imply
that the rate of the $b\rightarrow c{\bar c}s$ transition is boosted from 0.15 to about 0.30; the
lower the theoretical prediction for ${\cal B} (B \rightarrow X l \nu)$, the higher the prediction for
$n_c$ and $\Gamma (b\rightarrow c{\bar c}s)$.
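The connection between these quantities can be made explicit with a simple counting argument (neglecting the small charmless contribution to b decays): every b-quark decay yields one charm quark, plus a second one whenever the transition is $b\rightarrow c{\bar c}s$, so that
\begin{displaymath}
n_c \approx 1 + {\cal B} (b\rightarrow c{\bar c}s).
\end{displaymath}
A $b\rightarrow c{\bar c}s$ rate of 0.15 then corresponds to $n_c \approx 1.15$, while boosting it to 0.30 corresponds to $n_c \approx 1.30$.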
Table~\ref{tab:nc} lists the latest CLEO results on the inclusive decay rates of the B meson
into various charm final states\cite{ref:note}.
\begin{table}[hbt]
\setlength{\tabcolsep}{2.0pc}
\caption{Inclusive B decays to charm hadrons.}
\label{tab:nc}
\begin{tabular}{cc}
\hline
Decay mode & Rate \\
\hline
${\bar B} \rightarrow \mbox{D}^0 X$ & $(64.6 \pm 3.2)\% $ \\
${\bar B} \rightarrow \mbox{D}^+ X$ & $(25.3 \pm 1.6)\% $ \\
${\bar B} \rightarrow \mbox{D}_s^+ X$ & $(11.8 \pm 1.7)\% $ \\
${\bar B} \rightarrow \Lambda_c X$ & $(4.0 \pm 1.0)\% $ \\
${\bar B} \rightarrow \Xi_c X$ & $(3.9 \pm 1.8)\% $ \\
${\bar B} \rightarrow c{\bar c} X$ & $(5.2 \pm 0.7)\% $ \\
\hline
$n_c$ & $1.15 \pm 0.044$ \\
\hline
\end{tabular}
\end{table}
If these theoretical models are right then $\Gamma (b\rightarrow c{\bar c}s) \approx 0.30$, and
$\Gamma(b\rightarrow c{\bar c}s)/\Gamma(b\rightarrow c{\bar u}d)~\approx~2/3$.
This does not change the experimental value of $n_c$, although a large experimental value of
$\Gamma (b\rightarrow c{\bar c}s)$ will imply that $n_c$ is being underestimated.
The $(b\rightarrow c{\bar c}s)$ transition manifests itself as final states containing a $\mbox{D}_s$,
$\Xi_c {\bar \Lambda_c}$, or charmonium states. In this section, I will present results which shed some
light on these issues.
In fig.~\ref{fig:double}, I show the electron spectrum, $P_e > 0.6$ GeV/c, from B decay,
where the opposite B has been
tagged with a high momentum lepton (P$_{\rm tag} > 1.5$ GeV/c). Correlating the charge and angle between
the two leptons, we can disentangle the primary lepton spectrum ($b \rightarrow c l \nu$) from the
secondary spectrum ($b \rightarrow c X, c \rightarrow Y l \nu$). Since we can detect electrons down to
0.6 GeV/c, we are able to probe a larger portion of the momentum spectrum and hence have to rely less
on models to extrapolate down to zero lepton momentum. This analysis yields,
\begin{displaymath}
{\cal B}(B\rightarrow Xl\nu)=(10.49\pm0.17\pm0.43)\%
\end{displaymath}
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=double.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{Electron Momentum Spectrum.}
\label{fig:double}
\end{figure}
\subsection{${\bar B} \rightarrow \mbox{D}_s^+ X$}
There are two diagrams for producing a $\mbox{D}_s$ in the final state, (a) $b\rightarrow c{\bar c}s$:
external W diagram,
where $ W~\rightarrow~c{\bar s}$, which hadronizes to form a $\mbox{D}_s^+$, and, (b) $b\rightarrow c{\bar u}d$:
internal or external W diagram,
where $ W~\rightarrow~u{\bar d}$, accompanied by $s{\bar s}$ popping. In the second case, the ${\bar c}$
quark from ${\bar b}$ decay combines with the s quark to form a $\mbox{D}_s^-$.
CLEO has measured the inclusive branching fraction\cite{ref:phipi},
\begin{displaymath}
{\cal B} (B \rightarrow \mbox{D}_s X) = (11.81 \pm 0.43 \pm 0.94)\%
\end{displaymath}
This result includes both sources of $\mbox{D}_s$, as described above. In
Fig.~\ref{fig:incl_ds}, I show the momentum spectrum of $\mbox{D}_s$ produced in B decays - the X axis is the
$\mbox{D}_s$ momentum normalized to the maximum momentum it can have ($[E_{beam}^2 - M_{\mbox{D}_s}^2]^{1/2}$). The
data points for $x \ge 0.25$ are due to two-body decays, where the $\mbox{D}_s$ is produced
via a $b\rightarrow c{\bar c}s$ transition, whereas the data points for $x < 0.25$ are either due to
$b\rightarrow c{\bar c}s$ where the $\mbox{D}_s$
is accompanied by more than one pion, or due to $b\rightarrow c{\bar u}d$, which always yields a multi-body final state.
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=incl_ds.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{$\mbox{D}_s$ Momentum Spectrum.}
\label{fig:incl_ds}
\end{figure}
To investigate the relative strengths of production mechanism (a), which is a
$b\rightarrow c{\bar c}s$ transition and, (b), which is a $b\rightarrow c{\bar u}d$ transition, we have used $\mbox{D}_s - lepton$ correlations,
where the $\mbox{D}_s$
and the lepton come from different B mesons. The lepton is used to tag the flavour of one B, whereas the
charge of the $\mbox{D}_s$ is used to tag whether the $\mbox{D}_s$ is produced by mechanism (a) or (b). Therefore,
$\mbox{D}_s^- l^-$ combinations imply that the $\mbox{D}_s$ is produced via (b), whereas $\mbox{D}_s^+ l^-$ imply that the
$\mbox{D}_s$ is produced via (a). Fig.~\ref{fig:ds_lep} shows the $\mbox{D}_s$ mass for the two $\mbox{D}_s-lepton$ charge
combinations - the $\mbox{D}_s$ is reconstructed via the $\phi \pi$ decay mode. The raw yields for the
like-sign and opposite-sign combinations are $34.3\pm 9.1$ and $116.3\pm 15$ events, respectively. After
correcting for backgrounds (shown as black squares) and mixing,
we find that most of the $\mbox{D}_s$ mesons are produced via the $b\rightarrow c{\bar c}s$ transition,
with at most 31\% produced via the $b\rightarrow c{\bar u}d$ transition (90\% confidence
level upper limit).
At present, this analysis suffers from low statistics, but we hope to complement this
analysis by searching for exclusive decay modes, which will pinpoint more accurately the production
mechanism for $\mbox{D}_s$ mesons.
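As a rough illustration, before the background and mixing corrections, the like-sign fraction implied by the raw yields alone is
\begin{displaymath}
\frac{34.3}{34.3+116.3}\approx 23\%,
\end{displaymath}
consistent with the corrected 90\% confidence level upper limit of 31\% quoted above.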
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=ds_lep.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{$\mbox{D}_s$ mass in $\mbox{D}_s - lepton$ combinations.}
\label{fig:ds_lep}
\end{figure}
\subsection{B $\rightarrow$ Charmonium}
This class of decays occurs via an internal W diagram, where $W \rightarrow c{\bar s}$, and the
${\bar c}$ quark produced in the decay of the ${\bar b}$ combines with the c quark to form a charmonium
state, $J/\psi, \psi', \chi_c, h_c, \eta_c, \psi''$.
Table~\ref{tab:onia} lists the CLEO measurements of
B decays into charmonium states. A ``direct'' measurement implies that all feed-downs into that final
state have been removed from the quoted result. Using theoretical estimates for the relative rates of
$B \rightarrow \chi_{c0}, h_c, \eta_c$, we estimate that the total branching fraction for B to
charmonium states is $(2.6 \pm 0.3)\%$. Since there are two charm quarks in these states, they enter
with twice the weight in Table~\ref{tab:nc}.
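Explicitly, weighting the charmonium contribution twice gives $2\times(2.6\pm 0.3)\% \approx 5.2\%$, which is the ${\bar B} \rightarrow c{\bar c} X$ entry of $(5.2 \pm 0.7)\%$ in Table~\ref{tab:nc}.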
\begin{table}[hbt]
\setlength{\tabcolsep}{2.0pc}
\caption{Inclusive B decays to Charmonium states.}
\label{tab:onia}
\begin{tabular}{cc}
\hline
Decay mode & Rate \\
\hline
${\bar B} \rightarrow J/\psi X$ (direct) & $(0.80 \pm 0.08)\% $ \\
${\bar B} \rightarrow \psi' X$ (direct) & $(0.34 \pm 0.05)\% $ \\
${\bar B} \rightarrow \chi_{c1} X$ (direct) & $(0.37 \pm 0.07)\% $ \\
${\bar B} \rightarrow \chi_{c2} X$ & $(0.25 \pm 0.11)\% $ \\
${\bar B} \rightarrow \eta_c X$ & $< 0.9 \% $ \\
\hline
\end{tabular}
\end{table}
\subsection{B $\rightarrow$ baryons}
B $\rightarrow$ baryon decays can be mediated by both $b\rightarrow c{\bar u}d$ and
$b\rightarrow c{\bar c}s$ transitions as shown in fig.~\ref{fig:feyn} a-b and c-d, respectively. In this figure, ${\bar N},
{\bar Y}$ represent non-strange (n, p,...) and strange baryons ($\Lambda$,...), respectively. The
external W diagrams ((a), (c)) require two $q{\bar q}$ pairs to be popped from the vacuum, whereas the
internal W diagrams require only one such pair, leading to the possibility that the former class of
diagrams may not be dominant. In contrast, in B decays to mesons, the external W diagrams are quite
dominant. If the external W diagrams are dominant for B $\rightarrow$ baryons, then $b\rightarrow c{\bar c}s$ may not
play a big role here, since they mainly occur in internal W type processes (Fig.~\ref{fig:feyn}c is
phase-space suppressed). In other words, if both external W and $b\rightarrow c{\bar u}d$ are dominant, then one may expect
the ratio ${\cal B} ({\bar B} \rightarrow \Lambda_c {\bar N} X l \nu)/{\cal B} ({\bar B} \rightarrow \Lambda_c X)\approx 12\%$,
as is the case for B $\rightarrow$
mesons.
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=feyn.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{Processes for B $\rightarrow$ baryon decays.}
\label{fig:feyn}
\end{figure}
We have studied the importance of external W diagrams, by searching for the decay
$B~\rightarrow~\Lambda_c^+ {\bar N} X e^- \nu$ using $\Lambda_c^+ - e^{\pm}$ correlations,
where both the
$\Lambda_c$ and electron come from the same B. Since we have two baryons in the final state,
the electron
momentum is softer than in the case of B decay to mesons, and we require that it be in the range,
0.7 GeV/c to 1.5 GeV/c. Opposite-sign combinations, $\Lambda_c^+ e^-$ are due to both signal
and background events,
whereas like-sign events $\Lambda_c^+ e^+$ are all background. Background in this case consists
of picking up the $\Lambda_c^+$ from the decay of one B and the electron from the other B, together
with a contribution from B mixing. In Table~\ref{tab:sameb} we list the event yields (continuum subtracted)
and background estimates.
\begin{table}[hbt]
\setlength{\tabcolsep}{1.0pc}
\caption{$\Lambda_c - e$ combinations from the same B.}
\label{tab:sameb}
\begin{tabular}{ccc}
\hline
Yields & $\Lambda_c^+ e^-$ & $\Lambda_c^+ e^+$ \\
\hline
Raw Yield & $95 \pm 20$ & $74\pm 16$ \\
Bkgd estimate & $57 \pm 13$ & $87\pm 14$ \\
Mixing correc. & $+3 \pm 1$ & $ -3\pm 1$ \\
\hline
Net Yield & $35\pm 26$ & $-10\pm 21$ \\
\hline
\end{tabular}
\end{table}
As one
can see, we do not have a statistically significant signal as yet, but with the current data we can set
the following 90\% confidence level upper limit,
\begin{displaymath}
\frac{{\cal B} ({\bar B} \rightarrow \Lambda_c {\bar N} X l \nu)}
{{\cal B} ({\bar B} \rightarrow \Lambda_c X)}~< 6.0\%
\end{displaymath}
This result implies that the external W diagrams may not be dominant in B $\rightarrow$ baryons,
because if they were, then the above ratio would be closer to 12\%; thus, we may be able to
investigate the role of $b\rightarrow c{\bar c}s$ transitions, which occur mainly in internal W type
processes.
To investigate the relative strengths of $b\rightarrow c{\bar u}d$ and $b\rightarrow c{\bar c}s$ transitions, we now look at
$\Lambda_c - lepton$ correlations, where the two now come from {\bf different B's}. The lepton momentum
is required to be between 1.5 GeV/c and 2.4 GeV/c - this momentum region is relatively free from
$b \rightarrow c \rightarrow Xl\nu$ contamination. Like sign combinations, $\Lambda_c^+l^+$, arise when
the $\Lambda_c$ is created in a $b\rightarrow c{\bar u}d$ transition (fig.~\ref{fig:feyn}a,b), whereas opposite sign
combinations, $\Lambda_c^- l^+$, arise when the $\Lambda_c$ is created in a $b\rightarrow c{\bar c}s$ transition
(fig.~\ref{fig:feyn}d). In fig.~\ref{fig:diffb}, I present the $\Lambda_c$ mass for opposite sign and
like sign combinations, respectively, and table~\ref{tab:diffb} lists the raw yields
(continuum subtracted) and background estimates. The cross-hatched entries in the figure
are contributions due to continuum background.
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=diffb.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{$\Lambda_c$ invariant mass for $\Lambda_c e$ combinations from different B's.}
\label{fig:diffb}
\end{figure}
\begin{table}[hbt]
\setlength{\tabcolsep}{1.0pc}
\caption{$\Lambda_c - lepton$ combinations from the different B mesons.}
\label{tab:diffb}
\begin{tabular}{ccc}
\hline
Yields & $\Lambda_c^+ l^-$ & $\Lambda_c^+ l^+$ \\
& $b\rightarrow c{\bar c}s$ & $b\rightarrow c{\bar u}d$ \\
\hline
Raw Yield & $43 \pm 16$ & $141\pm 16$ \\
Bkgd estimate & $5 \pm 1.5$ & $2.1\pm 0.8$ \\
Mixing correc. & $-9 \pm 2$ & $ +9 \pm 2$ \\
\hline
Net Yield & $29 \pm 19$ & $148\pm 19$ \\
\hline
\end{tabular}
\end{table}
From these yields, the ratio of the relative strengths of $b\rightarrow c{\bar c}s$ and $b\rightarrow c{\bar u}d$
transitions in $B \rightarrow \Lambda_c$ decays is determined to be,
\begin{displaymath}
\frac{\Gamma (b\rightarrow c{\bar c}s)}{{\Gamma}(b\rightarrow c{\bar u}d)}=(20\pm 13\pm 4)\%
\end{displaymath}
nowhere near $2/3$, which is what one may expect if $\Gamma~(b\rightarrow c{\bar c}s)~\approx~0.3~\Gamma_{total}$ applied
universally to all B decays.
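The central value of this ratio can be read off directly from the net yields in Table~\ref{tab:diffb}, up to small efficiency corrections:
\begin{displaymath}
\frac{29 \pm 19}{148 \pm 19} \approx (20 \pm 13)\%,
\end{displaymath}
with the uncertainty dominated by the statistics of the $b\rightarrow c{\bar c}s$ ($\Lambda_c^+ l^-$) yield.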
In addition, this result is consistent with the ratio being
$1/3$, which is what one expects from naive phase-space arguments. However, to reach a more conclusive
result, we need more data and additional techniques for tagging the flavour of one B.
$B \rightarrow \Xi_c X$ is another decay mode where one can probe the importance of the $b\rightarrow c{\bar c}s$
transition. This decay mainly occurs via the internal W diagram with the $W\rightarrow u{\bar d}$
accompanied by $s{\bar s}$ popping as in fig.~\ref{fig:feyn}b or
$W\rightarrow c{\bar s}$ accompanied by light quark-pair popping,
as in fig.~\ref{fig:feyn}d. There will also be some contribution due to the external
W diagram as in fig.~\ref{fig:feyn}a.
If $[b\rightarrow c{\bar c}s/b\rightarrow c{\bar u}d] \approx 1/3$ and the ratio of $s{\bar s}$
to light quark-pair popping is about 0.15, then adding the two contributions, one could expect the ratio,
${\cal B}~(B~\rightarrow~\Xi_c~X)~/~{\cal B}~(B~\rightarrow~\Lambda_c~X)~\approx~0.15+1/3~\approx~0.48$. We reconstruct
$\Xi_c^0, \Xi_c^+$ in the $\Xi^- \pi^+, \Xi^- \pi^+ \pi^+$ modes, respectively. The ON (data points)
and OFF (shaded) resonance
contributions to $\Xi_c^0$ and $\Xi_c^+$ mass distributions are shown in fig.~\ref{fig:csc0} and
fig.~\ref{fig:cscp}, respectively. We find $59\pm 17$ events for $B \rightarrow \Xi_c^0 X$ and
$88\pm 20$ events for $B \rightarrow \Xi_c^+ X$.
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=csc0.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{$\Xi_c^0$ invariant mass in B decay.}
\label{fig:csc0}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\hspace{0mm}
\vspace{-0.5cm}
{\epsfig{figure=cscp.eps, height=6.43 cm}}
\vspace{-0.5cm}
\caption{$\Xi_c^+$ invariant mass in B decay.}
\label{fig:cscp}
\end{figure}
To calculate a ratio for inclusive $\Xi_c$ production, we have to estimate the absolute branching
fraction scale for $\Xi_c$ decays. We do this by assuming that the semileptonic widths for all charm
hadrons are the same, and that $\Xi_c \rightarrow \Xi l \nu$ saturates the $\Xi_c$ semileptonic width
(similarly for $\Lambda_c$).
This leads to upper limits on the branching fraction of $\Xi_c \rightarrow \Xi X$, and
$\Lambda_c \rightarrow pK\pi$. I should point out that
these assumptions are not very reliable, and only serve to make a ``crude'' estimate. Using
CLEO data for the semileptonic decays, we obtain ${\cal B}(B \rightarrow \Xi_c^+ X)~=~(2.0 \pm 0.7)\%$,
${\cal B}(B \rightarrow \Xi_c^0 X)~=~(2.8 \pm 1.2)\%$, and ${\cal B}(B \rightarrow \Lambda_c X)~=~(3.1 \pm 1.0)\%$.
Using these estimates, we find that
$[{\cal B} (B \rightarrow \Xi_c X)/{\cal B} (B \rightarrow \Lambda_c X)]~\approx~1.5\pm0.7$, which is
not terribly conclusive. This result is consistent with a small rate for $b\rightarrow c{\bar u}d$ transitions in
baryon production,
which is in sharp disagreement with the result from $\Lambda_c - lepton$ correlations. Most likely, the
branching fraction scale for the charmed baryons is wrong.
\section{Conclusions}
$b\rightarrow c{\bar c}s$ transitions do take place, as evidenced by $B \rightarrow \mbox{D}_s X$, $\Xi_c {\bar \Lambda_c}X$,
and charmonium states. Our preliminary results indicate that the rate for $b\rightarrow c{\bar c}s$ is not large enough to solve the
${\cal B} (B \rightarrow X l \nu)$ ``problem''. We find this branching fraction to be $(10.49 \pm 0.17
\pm 0.43)\%$ instead of the expected 12\%, and we also find $n_c$, the number of charm quarks per b-quark decay, to
be $1.15 \pm 0.044$ instead of 1.3.
Lack of time prevents me from presenting other results, but I will briefly point out some of them.
\begin{itemize}
\item We have made the first unambiguous measurement of $\mbox{D}_s$ semileptonic decays
to $\eta, \eta'$ final states.
\begin{displaymath}
\frac{{\cal B} (\mbox{D}_s \rightarrow \eta l \nu)} {{\cal B}
(\mbox{D}_s \rightarrow \phi l \nu)}~=~1.24 \pm 0.12 \pm 0.15
\end{displaymath}
\begin{displaymath}
\frac{{\cal B} (\mbox{D}_s \rightarrow \eta' l \nu)} {{\cal B}
(\mbox{D}_s \rightarrow \phi l \nu)}~=~0.43 \pm 0.11 \pm 0.07
\end{displaymath}
The ratio of the vector to pseudoscalar final states in $\mbox{D}_s$ semileptonic decays is about the same
as one finds in non-strange D semileptonic decays ($\approx 0.6$). In the past, most theoretical models
predicted this ratio to be 1.
\item We have made the first measurement of exclusive $b \rightarrow u$ decays,
\begin{displaymath}
{\cal B}(B^0\rightarrow\pi^+l^-\nu)~=~(1.34\pm 0.35\pm 0.28)\times 10^{-4}
\end{displaymath}
\begin{displaymath}
{\cal B} (B^0\rightarrow\rho^+l^-\nu)~=~(2.28\pm 0.36\pm 0.59^{+0.00}_{-0.46})\times 10^{-4}
\end{displaymath}
These branching fractions have been obtained using isospin constraints between the final states
$\pi^0l\nu$ and $\pi^+l\nu$, and between $\rho^0l\nu, \rho^+l\nu$ and $\omega l\nu$.
The ISGW model was used to determine efficiencies, etc.
\end{itemize}
Currently, we are processing more data that has already been collected. To further increase the
luminosity of CESR and the capabilities of the CLEO detector, various upgrades are
underway. A new silicon vertex detector is being installed in CLEO, and in 3-4 years we are planning to
significantly improve particle identification in CLEO \cite{ref:galik}. With these improvements, we
expect to be doing exciting physics in the future.
\section{Acknowledgements}
I would like to thank my colleagues on CLEO for explaining to me the details of their analyses. I
also thank Scott Menary and Isi Dunietz for their comments. This research was funded by the U.S.
Department of Energy, National Science Foundation and Vanderbilt University.
\section{Introduction}\label{sec:intro}
One of the profound consequences of quantum mechanics is that something \textit{can} come from nothing. Enforced by the uncertainty principle, the vacuum state of quantum mechanics is teeming with activity. Quantum fluctuations inherent in the vacuum give rise to a host of particles that seemingly move in and out of existence in the blink of an eye. These fluctuations, however fleeting, are the origin of some of the most important physical processes in the universe. From the Lamb shift \cite{lamb:1947} and Casimir force \cite{casimir:1948,lamoreaux:2007}, all the way up to the origin of the large scale structure \cite{springel:2006} and the cosmological constant \cite{weinberg:1989} of our universe, the effects of the quantum vacuum permeate all of physics.
Although the significance of vacuum fluctuations has been appreciated since the early days of quantum mechanics [see, e.g., \cite{milonni:1993}], the quantum properties of the vacuum state constitute an area of quantum field theory that remains relatively unexplored experimentally. So far, static quantum vacuum effects such as the Casimir force \cite{lamoreaux:1997} and Lamb shift \cite{lamb:1947} have been verified experimentally, along with the recent demonstration of the dynamical Casimir effect \cite{moore:1970,lahteenmaki:2011,wilson:2011}. In contrast, other dynamical amplification mechanisms such as the Schwinger process \cite{schwinger:1951}, Unruh effect \cite{unruh:1976}, and Hawking radiation \cite{hawking:1974,hawking:1975}, have yet to be observed\footnote{As discussed in Sec.~\ref{sec:analogue-hawking}, recent experimental evidence for an analogue of Hawking radiation \cite{belgiorno:2010} does not go far enough to definitively confirm the existence of this effect.}. The difficulties in observation can be traced to the extreme conditions under which these dynamical phenomena become appreciable. For example, the dynamical Casimir effect requires rapidly modulating the boundary conditions of the electromagnetic field, with peak velocities close to the speed of light. Likewise, Hawking radiation not only requires a black hole, but also demands one with a sufficiently small mass so as to make the emitted radiation observable above the ambient cosmic microwave background. With difficulties such as these in mind, researchers have looked to analogue systems that are able to generate the desired amplification effects, and at the same time surmount the difficulties inherent in observations of the actual processes.
One such class of available systems are superconducting circuit devices. The quantum mechanics of superconducting circuits has received considerable attention during recent years. This interest has largely been due to research on quantum computation and information processing \cite{nielson:2000}, for which superconducting circuits \cite{makhlin:2001,you:2005,wendin:2006,clarke:2008,schoelkopf:2008,you:2011} are considered promising fundamental building blocks. Experimental progress on superconducting resonator-qubit systems \cite{dicarlo:2010} has also inspired theoretical and experimental investigations of quantum optics in the microwave regime \cite{chiorescu:2004,wallraff:2004,schuster:2007,houck:2007,hofheinz:2009}. These recent advances in the engineering and control of quantum fields in superconducting circuits have also opened up the possibility to explore quantum vacuum effects with these devices. Indeed, the demonstration of both the Lamb shift in a superconducting artificial atom \cite{fragner:2008}, and the dynamical Casimir effect in a superconducting waveguide \cite{lahteenmaki:2011,wilson:2011}, have already been achieved.
We have two goals in mind for this Colloquium: the first is to introduce to condensed-matter physicists the following quantum vacuum amplification mechanisms: the Unruh effect \cite{unruh:1981}, Hawking radiation \cite{hawking:1974}, and the dynamical Casimir effect \cite{moore:1970,fulling:1976}. We shall in particular highlight their relationship to the well-known parametric amplifier from quantum optics. Parametric amplification has been applied extensively in quantum optics to, for example, the generation of nonclassical states \cite{slusher:1985,breitenbach:1997}, tests of wave-particle duality \cite{hong:1987}, quantum-erasers \cite{zou:1991}, and quantum teleportation \cite{bouwmeester:1997,furusawa:1998,kim:2001}. Here we will focus on the physical rather than mathematical aspects of these amplification mechanisms, as others have covered the latter in great detail \cite{birrell:1982,crispino:2008,fabbri:2005,dodonov:2002}. Our second goal is to introduce to researchers in the high-energy and general relativity communities, possible analogue experimental realizations of these effects in microwave superconducting circuit devices, where the similarities and differences in the various amplification effects manifest themselves in the design of their circuit counterparts. We emphasize, in particular, the potential advantages arising from their inherently low-noise quantum coherent nature.
The outline of this Colloquium is as follows: In Sec.~\ref{sec:amp-intro} we give a brief overview of quantum amplification basics, introducing the formalism to be used in later sections. Sec.~\ref{sec:vacuum} describes the methods by which photons may be generated from amplified vacuum fluctuations, and highlights the connections between the various effects. Sec.~\ref{sec:sc-circuits} details the superconducting circuit implementations, as well as reviews progress towards the detection of single-microwave photons, necessary to verify photon production from the vacuum. Finally, in Sec.~\ref{sec:future} we summarize and briefly discuss possible future applications of superconducting circuit models for engineering quantum ground states and realizing quantum gravity inspired analogues.
\section{Prelude to quantum amplification}\label{sec:amp-intro}
A physical system with time-dependent parameters often has resonant responses at certain modulation frequencies. This parametric resonance is very general, occurring in a wide variety of both classical and quantum mechanical systems. The representative example of classical parametric resonance is a child standing on a swing, who periodically modulates her center of mass (CM) by bending at the knees\footnote{Another commonly used example is that of a child swinging their legs while sitting on a swing. Careful inspection of the motion however reveals that the child drives the swing at the same frequency as the swing itself. This situation is therefore better characterized as a driven oscillator rather than a parametric process \cite{case:1990}.}. For a fixed CM, the equation of motion (for small amplitudes) is that of a simple pendulum with the solution
\begin{equation}
\theta(t)=\theta(0)\cos(\omega_{s}t)+\frac{L(0)}{m\omega_{s}l}\sin(\omega_{s}t),
\end{equation}
where $L(0)$ is the initial angular momentum and $\theta(t)$ the angular
displacement, while $m$ and $l$ are the pendulum mass and length, respectively.
With the CM governing the effective length of the swing, this motion modulates
the swing frequency $\omega_{s}=\sqrt{g/l}$ as
$\omega_{s}(t)=\omega_{s}(0)+\epsilon\sin\left(\omega_{\mathrm{cm}}t\right)$,
where $\omega_{s}(0)$ is the unperturbed swing frequency, $\omega_{\mathrm{cm}}$
is the CM modulation frequency, and $\epsilon$ is the resulting small frequency
change in the pendulum motion. If the child modulates the CM at twice the
oscillation frequency, $\omega_{\mathrm{cm}}=2\omega_{s}$, as shown in
Fig.~\ref{fig:swing},
\begin{figure}[t]\begin{center}
\includegraphics[width=8.0cm]{nation_fig01}
\caption{(Color online) Parametric amplification of pendulum motion by a child standing on a swing. The amplification is driven by changing the center of mass (star), and thus effective length, of the pendulum at twice the frequency of the unperturbed swing.}
\label{fig:swing}
\end{center}
\end{figure}
then the solution to the equation of motion is
\begin{equation}\label{eq:para-swing}
\theta(t)=\theta(0)e^{\epsilon t/2}\cos(\omega_{s}t)+\frac{L(0)}{m\omega_{s}l}e^{-\epsilon t/2}\sin(\omega_{s}t).
\end{equation}
The initial amplitude is therefore exponentially amplified while the out-of-phase component of motion is exponentially suppressed.
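This behaviour can be checked with a slowly-varying-envelope (rotating-wave) argument; the following is a minimal sketch, keeping only terms of first order in $\epsilon$ that are resonant at $\omega_{s}$. Writing $\theta(t)=A(t)\cos(\omega_{s}t)+B(t)\sin(\omega_{s}t)$ and substituting into $\ddot{\theta}+\omega_{s}(t)^{2}\theta=0$, with $\omega_{s}(t)^{2}\approx \omega_{s}^{2}+2\omega_{s}\epsilon\sin\left(2\omega_{s}t\right)$ and the second derivatives of the slowly varying amplitudes neglected, the resonant terms yield
\begin{equation}
\dot{A}=\frac{\epsilon}{2}A, \qquad \dot{B}=-\frac{\epsilon}{2}B,
\end{equation}
so the in-phase amplitude grows as $e^{\epsilon t/2}$ while the out-of-phase amplitude decays as $e^{-\epsilon t/2}$, in agreement with Eq.~(\ref{eq:para-swing}).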
For parametric amplification to occur in a classical system it must initially be displaced from the equilibrium state. This is easily seen by setting $\theta(0)=L(0)=0$ in Eq.~(\ref{eq:para-swing}). Although many sources of fluctuations can exist, in principle nothing in classical mechanics prevents simultaneously setting the position and momentum of the oscillator to zero. This is in sharp contrast to the quantum mechanical description of an oscillator where the non-vanishing canonical commutation relation $\left[x,p\right]=i\hbar$ prevents the absence of motion. This implies that even the ground state of the quantized oscillator contains quantum fluctuations and thus may be parametrically amplified. The amplification of quantum fluctuations by parametrically modulating the frequency of an harmonic oscillator is closely related to the process of particle production in quantum fields and therefore serves as an instructive example. We will therefore begin with a short review introducing the basic mathematics and terminology used in later sections by considering the amplification of a quantized oscillator through a time-varying frequency.
We follow the analysis in \cite{jacobson:2004} and begin with the harmonic oscillator described by the Hamiltonian $H=p^{2}/(2m)+m\omega^{2}x^{2}/2$. With the position and momentum operators obeying the canonical commutation relation $\left[x,p\right]=m\left[x,\dot{x}\right]=i\hbar$, in the Heisenberg picture we have $\ddot{x}+\omega^{2}x=0$. Decompose the position operator $x(t)$ in terms of the non-hermitian raising $(a^{\dagger})$ and lowering $(a)$ operators and mode function $f(t)$ as $x(t)=f(t)a+\bar{f}(t)a^{\dagger}$, where the over-bar represents complex conjugation, and the mode function satisfies the oscillator classical equation of motion $\ddot{f}(t)+\omega^{2}f(t)=0$. Substituting into the commutation relation $\left[x,p\right]$ the above decomposition gives
\begin{equation}
\frac{m}{i\hbar}\left[x,\dot{x}\right]=\frac{m}{i\hbar}\left(f(t)\dot{\bar{f}}(t)-\bar{f}(t)\dot{f}(t)\right)\left[a,a^{\dag}\right]=1.
\end{equation}
Demanding the commutation relation $\left[a,a^{\dagger}\right]=1$ for all times, we have $\langle f,f\rangle=1$ and $\langle f,\bar{f}\rangle=0$, i.e. the mode functions $f(t)$ and $\bar{f}(t)$ are orthonormal in terms of the inner-product\footnote{In quantum field theory, the generalization of Eq.~(\ref{eq:kg}) to spacetimes where the dimensionality is larger than the zero-dimensional harmonic oscillator considered here is called the Klein-Gordon inner-product.}
\begin{equation}\label{eq:kg}
\langle f,g\rangle\equiv \frac{im}{\hbar}\left[\bar{f}(t)\dot{g}(t)-g(t)\dot{\bar{f}}(t)\right].
\end{equation}
The ladder operators may then be defined in terms of this inner-product as $a=\langle f, x\rangle$ and $a^{\dagger}=-\langle \bar{f}, x\rangle$.
Specifying the ground state of the system is equivalent to fixing the form of the mode function $f(t)$. For the simple harmonic oscillator, the ground state can be defined with respect to the ladder operators as the state for which $a|0\rangle=0$. Demanding this ground state be an eigenstate of the Hamiltonian $H|0\rangle=E|0\rangle$ gives the mode function equation of motion via
\begin{eqnarray}\label{eq:H}
&H|0\rangle&=\left(\frac{m\dot{x}^{2}}{2}+\frac{m\omega^{2}x^{2}}{2}\right)|0\rangle\\
&=&\frac{m}{2}\left\{\left[\dot{f}(t)a+\bar{\dot{f}}(t)a^{\dag}\right]^{2}+\omega^{2}\left[f(t)a+\bar{f}(t)a^{\dag}\right]^{2}\right\}|0\rangle \nonumber\\
&=&\frac{m}{\sqrt{2}}\overline{\left[\dot{f}(t)^{2}+\omega^{2}f(t)^{2}\right]}|2\rangle+\frac{m}{2}\left[\left|\dot{f}(t)\right|^{2}+\omega^{2}\left|f(t)\right|^{2}\right]|0\rangle. \nonumber
\end{eqnarray}
Since the term proportional to $|2\rangle$ must vanish, it follows that $\dot{f}(t)=\pm i \omega f(t)$ with normalization $\left|f(t)\right|^{2}=\hbar/(2m\omega)$ and inner-product $\langle f,f \rangle=\mp 1$. Positivity of the inner-product selects the solution $f(t)=x_{\rm zp}\exp(-i\omega t)$ where $x_{\rm zp}=\sqrt{\hbar/2m\omega}$ is the zero-point uncertainty in the oscillator's position. This is designated the ``positive frequency" solution\footnote{A complex function $f(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}d\omega\, g(\omega)e^{-i\omega t}$ is said to be ``positive frequency" if its Fourier transform $g(\omega)$ vanishes for all $\omega\le0$. In this case, $f(t)$ is composed solely of Fourier components of the form $e^{-i\omega t}$ where $\omega>0$.}, whereas $\bar{f}(t)=x_{\rm zp} \exp(+i\omega t)$ is the conjugate, ``negative frequency" solution. Using Eq.~(\ref{eq:H}), it is straightforward to show that these mode functions lead to the canonical oscillator Hamiltonian $H=\hbar\omega\left(a^{\dag}a+1/2\right)$. The position operator may then be written in the form
\begin{equation}\label{eq:x}
x(t)=x_{\rm zp}\left(e^{-i\omega t}a + e^{+i\omega t}a^{\dagger}\right),
\end{equation}
where we see that the positive (negative) frequency solution is associated with the annihilation (creation) operator.
Now, suppose that the frequency of the harmonic oscillator is allowed to vary in time:
\begin{equation}\label{eq:time}
\ddot{x}+\omega(t)^{2}x=0,
\end{equation}
such that the initial ``input" frequency is defined as $\omega(t\rightarrow -\infty)=\omega_{\rm in}$, and the final ``output" frequency is $\omega(t\rightarrow\infty)=\omega_{\rm out}$. Here we assume that $\omega_{\rm out}$ differs from the input frequency $\omega_{\rm in}$. These frequencies define two sets of ladder operators $a_{\rm in}$, $a_{\rm out}$, corresponding ground states $|0\rangle_{\rm in}$, $|0\rangle_{\rm out}$, and mode functions $f_{\rm in}(t)$, $f_{\rm out}(t)$, where from the above simple harmonic oscillator analysis, $\left.f_{\rm in}(t)\right|_{t\rightarrow -\infty} \sim \exp\left(-i\omega_{\rm in}t\right)$ and $\left.f_{\rm out}(t)\right|_{t\rightarrow +\infty}\sim \exp\left(-i\omega_{\rm out}t\right)$, with
\begin{equation}
x(t)=f_{\mathrm{in}}(t)a_{\mathrm{in}}+\bar{f}_{\mathrm{in}}(t)a_{\mathrm{in}}^{\dagger}=f_{\mathrm{out}}(t)a_{\mathrm{out}}+\bar{f}_{\mathrm{out}}(t)a_{\mathrm{out}}^{\dagger}.
\end{equation}
As a second-order differential equation, Eq.~(\ref{eq:time}) requires two linearly independent solutions to characterize the dynamics. Given that $f_{\rm in}$ is a solution to the oscillator equation and $\langle f_{\mathrm{in}},\bar{f}_{\mathrm{in}}\rangle=0$, we may write the output state modes as a linear combination of the input state solutions, $f_{\rm out}=\alpha f_{\rm in}+\beta \bar{f}_{\rm in}$. Substituting into Eq.~(\ref{eq:kg}), the coefficients are connected through the symplectic relation
\begin{equation}\label{eq:constraint}
\left|\alpha\right|^{2}-\left|\beta\right|^{2}=1.
\end{equation}
With $f_{\rm out}(t)$ expressed using input modes, the output state lowering operator $a_{\rm out}=\langle f_{\rm out},x\rangle$ is then given as
\begin{equation}\label{eq:bogoliubov}
a_{\rm out}=\alpha a_{\rm in}-\bar{\beta}a_{\rm in}^{\dagger}.
\end{equation}
Assuming the oscillator is initially in the ground state $|0\rangle_{\rm in}$, the particle number expectation value at the output is $N_{\rm out}=\langle0|a^{\dagger}_{\rm out}a_{\rm out}|0\rangle_{\rm in}=\left|\beta\right|^{2}$. Except for adiabatic changes from $\omega_{\rm in}$ to $\omega_{\rm out}$, $\beta$ is non-vanishing, and there is a finite probability of the oscillator being found in an excited state at the output; the average excitation number $N_{\rm out}$ is determined by the coefficient of the negative frequency component ($a_{\rm in}^{\dag}$) in Eq.~(\ref{eq:bogoliubov}).
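A simple, exactly solvable illustration is the sudden limit, an idealization in which the frequency jumps instantaneously from $\omega_{\rm in}$ to $\omega_{\rm out}$ at $t=0$. Matching the mode function $f$ and its derivative $\dot{f}$ across the jump gives
\begin{equation}
\left|\alpha\right|=\frac{\omega_{\rm out}+\omega_{\rm in}}{2\sqrt{\omega_{\rm in}\omega_{\rm out}}}, \qquad
\left|\beta\right|=\frac{\left|\omega_{\rm out}-\omega_{\rm in}\right|}{2\sqrt{\omega_{\rm in}\omega_{\rm out}}},
\end{equation}
which satisfy Eq.~(\ref{eq:constraint}), so that $N_{\rm out}=(\omega_{\rm out}-\omega_{\rm in})^{2}/(4\omega_{\rm in}\omega_{\rm out})$ quanta are generated from the initial ground state, vanishing only when $\omega_{\rm out}=\omega_{\rm in}$.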
Equation~(\ref{eq:bogoliubov}) is an example of a larger class of transformation called Bogoliubov transformations, where the ladder operators in the output state may be written as a linear combination of \textit{both} initial state creation and annihilation operators with coefficients satisfying the constraint given in Eq.~(\ref{eq:constraint}). All quantum amplification processes can be cast as Bogoliubov transformations \cite{leonhardt:2010}. They therefore represent a useful generalized framework within which one may compare the various amplification methods.
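For instance, it is straightforward to check that the transformation in Eq.~(\ref{eq:bogoliubov}) preserves the canonical commutation relation,
\begin{equation}
\left[a_{\rm out},a_{\rm out}^{\dagger}\right]=\left|\alpha\right|^{2}\left[a_{\rm in},a_{\rm in}^{\dagger}\right]+\left|\beta\right|^{2}\left[a_{\rm in}^{\dagger},a_{\rm in}\right]=\left|\alpha\right|^{2}-\left|\beta\right|^{2}=1,
\end{equation}
so the constraint in Eq.~(\ref{eq:constraint}) is precisely the statement that the output mode remains a bona fide bosonic mode.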
\section{Vacuum amplification}\label{sec:vacuum}
\begin{figure}[t]\begin{center}
\includegraphics[width=8.0cm]{nation_fig02}
\caption{(Color online) Relationships between quantum amplification mechanisms.
Counterclockwise from the parametric amplifier: For a single mode of the Minkowski vacuum, the
non-degenerate parametric amplifier (NDPA) and Unruh effect (UE) share the same
form of Bogoliubov transformations resulting in both exhibiting a two-mode
squeezed state. The UE is in turn connected to Hawking radiation (HR) through
the equivalence principle relating inertial and gravitational acceleration. The
exponential red-shifting (Doppler shift) of the field modes near the black hole
horizon results in Bogoliubov transformations that are identical to those for
the dynamical Casimir effect (DCE), provided the mirror's trajectory is given by
Eq.~(\ref{eq:receding-mirror}). Here, one obtains an identical Doppler shift,
leading to a thermal spectrum for the emitted radiation. Finally, the DCE and a
degenerate parametric amplifier (DPA) can be related by considering the case of
a single-mode cavity with a sinusoidally time-dependent boundary condition.}
\label{fig:relations}
\end{center}
\end{figure}
In this section we review the main mechanisms by which vacuum fluctuations are amplified into photons: the parametric amplifier (PA), Unruh effect (UE), Hawking radiation (HR), and the dynamical Casimir effect (DCE). Although these effects were first discovered in seemingly unrelated contexts, the universal description of quantum amplification provided by Bogoliubov transformations suggests these mechanisms are in fact closely related. Before exploring these effects in detail, we wish to draw the reader's attention to Fig.~(\ref{fig:relations}), which highlights in summary form the key conditions under which the various amplification mechanisms may be related. Fig.~(\ref{fig:relations}) serves to motivate the subsequent sections, where the depicted relationships are made explicit, and thus linked back to the parametric amplifier, our main objective.
\subsection{Parametric amplification}\label{sec:paramp}
All quantum amplifiers are inherently nonlinear systems \cite{clerk:2010}. One of the simplest nonlinear interactions, indicated in Fig.~\ref{fig:parametric-amplification-v1}, involves a pump photon of frequency $\omega_{p}$ being converted into two photons denoted the signal ($\omega_{s}$) and idler ($\omega_{i}$), obeying the frequency relation $\omega_{p}=\omega_{s}+\omega_{i}$. This process is known as parametric down conversion and occurs in a dielectric medium with a $\chi^{(2)}$ nonlinearity, the first nonlinear susceptibility in a medium without inversion symmetry \cite{boyd:2008}.
\begin{figure}[t]
\includegraphics[width=6.0cm]{nation_fig03}
\caption{(Color online) The principle of a parametric amplifier: a pump photon is down-converted by a nonlinear medium into a signal and
an idler photon, whose frequencies add up to that of the pump photon.}
\label{fig:parametric-amplification-v1}
\end{figure}
When a cavity is driven by a classical pump such as a laser or microwave generator that is not significantly attenuated by the loss of photons via the down-conversion process, this nonlinear interaction can be described by an effective Hamiltonian which, in the rotating frame, takes the form
\begin{equation} \label{eq:pa-hamiltonian}
H= i\hbar\eta(b_{\rm s}^\dag b_{\rm i}^\dag -b_{\rm s} b_{\rm i}),
\end{equation}
where $\eta$ is the pump-amplitude-dependent coupling strength, and the subscripts denote signal ($s$) and idler ($i$) modes, respectively. In the special case that the signal and idler modes coincide, $b_{s}=b_{i}=b$, Eq.~(\ref{eq:pa-hamiltonian}) describes a degenerate parametric amplifier (DPA)
where the pump drives the cavity mode at twice its resonance frequency. The Heisenberg equations of motion that follow from the Hamiltonian Eq.~(\ref{eq:pa-hamiltonian}) lead to the time-evolution of the cavity mode operator
\begin{equation}\label{eq:dpa}
b(t)=b(0)\cosh\left(2\eta t\right)+b(0)^{\dag}\sinh\left(2\eta t\right),
\end{equation}
which is characteristic of a squeezing transformation \cite{walls:2008}. Comparison with Eq.~(\ref{eq:bogoliubov}) indicates that Eq.~(\ref{eq:dpa}) is in fact a Bogoliubov transformation with $\alpha =\cosh\left(2\eta t\right)$ and $\beta=\sinh\left(2\eta t\right)$. These coefficients are easily seen to satisfy the symplectic relation Eq.~(\ref{eq:constraint}). Assuming the mode is initially in the ground state, the number of excitations at later times is calculated from the coefficient of the negative frequency component ($b^{\dag}$) to be $N=\left<b^{\dag}(t)b(t)\right>=\left|\beta\right|^{2}=\sinh^{2}(2\eta t)$. The fact that $N$ grows as a function of time, even when starting from the vacuum state, is a purely quantum mechanical manifestation of parametric amplification of vacuum fluctuations. The effects of the squeezing transformation can be seen by defining quadrature amplitudes $X_{1}=b+b^{\dag}$ and $X_{2}=(b-b^{\dag})/i$ related to the mode's position and momentum operators respectively. By analogy with the classical parametric amplifier in Eq.~(\ref{eq:para-swing}), the DPA is a phase-sensitive amplifier, amplifying one quadrature of motion $X_{1}(t)=e^{2\eta t}X_{1}(0)$, while attenuating the other quadrature $X_{2}(t)=e^{-2\eta t}X_{2}(0)$.
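The quadrature relations above follow in one line from Eq.~(\ref{eq:dpa}), since
\begin{equation}
X_{1}(t)=b(t)+b^{\dag}(t)=\left[\cosh\left(2\eta t\right)+\sinh\left(2\eta t\right)\right]X_{1}(0)=e^{2\eta t}X_{1}(0),
\end{equation}
and similarly for $X_{2}(t)$, where the $\sinh$ term enters with the opposite sign. Note that the product of the two quadrature uncertainties is unchanged, as required of a squeezing transformation.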
The more general case of independent signal and idler modes represents a phase-insensitive amplification process known as the non-degenerate parametric amplifier (NDPA). The time-evolution of the signal and idler modes under the influence of the Hamiltonian (\ref{eq:pa-hamiltonian}) is described by a pair of Bogoliubov transformations
\begin{eqnarray}\label{eq:ndpa}
b_{s}(t)&=&b_{s}(0)\cosh\left(\eta t\right)+b^{\dagger}_{i}(0)\sinh\left(\eta t\right) \nonumber \\
b_{i}(t)&=&b_{i}(0)\cosh\left(\eta t\right)+b^{\dagger}_{s}(0)\sinh\left(\eta t\right),
\end{eqnarray}
where again, the number of quanta in each of the modes is easily calculated from the coefficients of the creation operator components, $N_{s}= N_{i} = \sinh^2(\eta t)$, assuming both modes are initially in their ground states.
In the Schr\"odinger picture, the wave function for the signal and idler modes is
\begin{eqnarray}
\label{eq:par-amp-wave-function}
\left|\Psi(t)\right> = \frac{1}{\cosh\eta t}\sum_{n=0}^{\infty} \left(\tanh\eta t\right)^n \left|n\right>_{s}\otimes\left|n\right>_{i},
\end{eqnarray}
where $\left|n\right>_{s}\otimes \left|n\right>_{i}$ corresponds to $n$ photons in each of the signal and idler modes. Given the form of the transformation in Eq.~(\ref{eq:ndpa}), the resulting state of the system (\ref{eq:par-amp-wave-function}) is a two-mode squeezed state, where $\eta t$ plays the role of squeezing parameter. In contrast to the DPA, the squeezing of the NDPA does not occur in a single mode, but rather in the composite system formed by the combined signal and idler modes \cite{walls:2008}. The two-mode squeezed state (\ref{eq:par-amp-wave-function}) is an example of an Einstein-Podolsky-Rosen (EPR) state \cite{einstein:1935} where the correlations between the signal and idler modes is stronger than that allowed by classical theory \cite{reid:1988}.
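The photon-number correlations make the nonclassicality explicit: from Eq.~(\ref{eq:par-amp-wave-function}) the two modes always contain exactly the same number of photons, so the number difference has vanishing variance, $\langle\left(n_{s}-n_{i}\right)^{2}\rangle=0$, even though each mode separately has the large variance $\langle \Delta n_{s}^{2}\rangle=N_{s}\left(N_{s}+1\right)=\sinh^{2}\left(\eta t\right)\cosh^{2}\left(\eta t\right)$ characteristic of a thermal state.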
In cases where, either by choice or design, only one of the two modes is accessible, measurements on the remaining mode do not contain enough information to reconstruct Eq.~(\ref{eq:par-amp-wave-function}). Given the close relationship between information and entropy, this loss of information is encoded in the entropic properties of the measured single-mode state. As a bipartite system, the entropy of the measured mode may be calculated via the von Neumann entropy $S$ of the reduced density matrix obtained by tracing over the unobserved mode, also referred to as the entanglement entropy \cite{nielson:2000}. With the signal $(s)$ mode as the observed mode, tracing over the unobserved idler $(i)$ mode, we obtain for the entanglement entropy $S=-\mathrm{Tr}\rho_{s}\ln\rho_{s}$:
\begin{equation}\label{eq:thermal-osc}
S=-\ln\left[1-e^{-\hbar\omega_{s}/k_{\mathrm{B}}T(t)}\right]-\frac{\hbar\omega_{s}}{k_{\mathrm{B}}T(t)}\left[1-e^{\hbar\omega_{s}/k_{\mathrm{B}}T(t)}\right]^{-1},
\end{equation}
which is just the thermal entropy (neglecting the overall Boltzmann factor) of a quantum harmonic oscillator with temperature $T(t)$ related to the squeezing parameter via
\begin{equation}\label{eq:tt}
\tanh^{2} \eta t = \exp\left[-\frac{\hbar\omega_{s}}{k_{\mathrm{B}}T(t)}\right].
\end{equation}
Therefore, the non-vanishing entropy, or equivalently the information lost by tracing over one of the two modes of the squeezed state (\ref{eq:par-amp-wave-function}), signals that the remaining mode is in a mixed, thermal state \cite{barnett:1985,yurke:1987}.
To understand the origin of the thermal state (\ref{eq:thermal-osc}) we note that, as unbounded harmonic oscillator mode systems, both the signal and idler states contain an infinite ladder of energy levels. In order to obtain a finite value for the entropy, the average energy, or equivalently number of particles, in each of the modes must also be specified \cite{barnett:1989,barnett:1991}. Although we do not know the quantum state of the idler mode after tracing over it in Eq.~(\ref{eq:par-amp-wave-function}), the correlations between photon number in the signal and idler modes, enforced by energy conservation, gives us implicit knowledge about the average energy of the idler state. Knowing only the energy of the idler mode, maximizing the entropy, or equivalently minimizing the information, of the idler state with respect to this constraint yields the thermal state entropy of Eq.~(\ref{eq:thermal-osc}). The bipartite structure of Eq.~(\ref{eq:par-amp-wave-function}) demands that this same value of the entropy hold for the measured signal mode as well.
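The thermal character can also be seen directly at the level of the density matrix: tracing Eq.~(\ref{eq:par-amp-wave-function}) over the idler mode gives
\begin{equation}
\rho_{s}=\mathrm{Tr}_{i}\left|\Psi(t)\right>\left<\Psi(t)\right|=\frac{1}{\cosh^{2}\eta t}\sum_{n=0}^{\infty}\left(\tanh \eta t\right)^{2n}\left|n\right>_{s}\left<n\right|,
\end{equation}
a Bose-Einstein distribution in which identifying $\tanh^{2}\eta t$ with the Boltzmann factor $\exp\left[-\hbar\omega_{s}/k_{\mathrm{B}}T(t)\right]$ reproduces Eq.~(\ref{eq:tt}), and whose von Neumann entropy is Eq.~(\ref{eq:thermal-osc}).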
\subsection{The Unruh effect}\label{sec:unruh}
Conceptually, perhaps the simplest way to generate particles from the vacuum is for an observer to accelerate. Unlike an inertial observer in Minkowski space, an observer undergoing constant acceleration is out of causal contact with a portion of the entire space-time due to the presence of a horizon. As a result, the initially pure Minkowski quantum vacuum state will appear to the observer to be in a mixed thermal state \cite{unruh:1976,crispino:2008}.
Before exploring this Unruh effect (UE) \cite{unruh:1976}, we need to define what is meant by ``an observer". As the name suggests, an observer should be a witness to the dynamics under consideration. As our focus here is on the generation of particles from the quantum vacuum, the observer is ideally represented by a particle-detector. Although a variety of model systems may be used for the particle-detector, for our purposes the observer will be represented as a two-level system, or qubit, detector with ground $|0\rangle$ and first-excited $|1\rangle$ energy levels separated by an energy $\hbar\omega_{01}$. In addition, we will assume a point-like detector that is linearly-coupled to the operators representing the quantized field or cavity mode of interest \cite{birrell:1982}. We will further suppose that the detector is weakly coupled to the field modes so as to allow the transition probabilities between the qubit ground and excited states to be calculated perturbatively \cite{clerk:2010}. Our choice of two-level detector will be further motivated in Sec.~\ref{sec:sc-circuits}, where we discuss the use of a superconducting phase-qubit as a single-shot microwave photon counter \cite{chen:2010}.
Having established the definition of an observer, let us now consider the worldline of an observer undergoing a constant proper acceleration $a$. In Minkowski coordinates $(ct,x)$, the paths of observers with constant acceleration are hyperbolas in space-time as seen in Fig.~\ref{fig:rindler}. For $a>0$, these paths trace out a section of Minkowski space known as the Right Rindler wedge (RRW) defined by the relation $|ct|<x$, and may be described using the Rindler coordinates, $(c\tau,\xi)$, describing the observer's path through Minkowski spacetime as viewed by the observer herself, and defined through the relations
\begin{equation}\label{eq:rindler-eqs}
ct=\xi\sinh\left(\frac{a\tau}{c}\right)\ \ ;\ \ x=\xi\cosh\left(\frac{a\tau}{c}\right),
\end{equation}
where $\tau$ is the observer's proper time and $\xi=c^{2}/a$ is the distance from the vertex (i.e. the closest point to the origin) of the observer's motion to the origin.
\begin{figure}[t]\begin{center}
\includegraphics[width=8.0cm]{nation_fig04}
\caption{(Color online) Paths of accelerated observers in Rindler coordinates $(c \tau,\xi)$ with proper time $\tau$ and constant acceleration $a=c^{2}/\xi$ as viewed in Minkowski space-time with coordinates $(ct,x)$. Lines (dashed) of constant proper time $\tau$ are also indicated. Observers in the right Rindler wedge (RRW) are out of causal contact with the left Rindler wedge (LRW) due to the presence of a horizon at $ct=\pm x$. Arrows give the direction of increasing proper time in each Rindler wedge.}
\label{fig:rindler}
\end{center}
\end{figure}
In switching to Rindler coordinates, the observer moves only in the direction of increasing proper time $\tau$, while the spatial coordinate $\xi$ remains constant, thus greatly simplifying the resulting equations of motion. Rewriting the Minkowski metric $ds^{2}=-c^{2}dt^{2}+dx^{2}$ in Rindler coordinates gives the Rindler metric
\begin{equation}\label{eq:rindler}
ds^{2}=-\left(\alpha\xi\right)^{2}d\tau^{2}+d\xi^{2},
\end{equation}
where $\alpha=a/c$ is a parameter characterizing the proper acceleration. Relative to the RRW, we may also define mathematically a second Left Rindler wedge (LRW) with $x<|ct|$ by reflecting the RRW across the $ct$-axis ($t\rightarrow -t$) and then across the x-axis ($x\rightarrow-x$) \cite{birrell:1982}. This change in sign for the time-coordinate in the LRW causes the proper-time $\tau$ to run backwards in Minkowski time $t$ as shown in Fig.~\ref{fig:rindler}. The two Rindler wedges are causally disconnected from each other as a result of a horizon located on the lightcone $ct=\pm x$. Trajectories of observers are asymptotically bound by this lightcone for $\tau\rightarrow-\infty$ and $\tau\rightarrow\infty$ where the observer's velocity approaches the speed of light. These limits represent the past and future horizons, respectively. Likewise, the path of an observer undergoing infinite acceleration $a\rightarrow\infty$ ($\xi\rightarrow 0$) lies on the horizon of the RRW, as may be checked from Eq.~(\ref{eq:rindler-eqs}).
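It is a useful check that these hyperbolas indeed correspond to constant proper acceleration. Along the worldline $\xi=c^{2}/a$, for which $\tau$ is the proper time, differentiating Eq.~(\ref{eq:rindler-eqs}) twice gives $c\,d^{2}t/d\tau^{2}=\xi\alpha^{2}\sinh\left(a\tau/c\right)$ and $d^{2}x/d\tau^{2}=\xi\alpha^{2}\cosh\left(a\tau/c\right)$, so that the invariant magnitude of the acceleration is
\begin{equation}
\sqrt{a_{\mu}a^{\mu}}=\xi\alpha^{2}=\frac{c^{2}}{a}\,\frac{a^{2}}{c^{2}}=a,
\end{equation}
independent of $\tau$, as claimed.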
In order to describe the Minkowski vacuum as seen by the accelerating observer, we will proceed in a manner similar to the time-dependent oscillator example in Sec.~\ref{sec:amp-intro}. First, we find the mode functions and their associated vacuum states for a scalar quantum field in both the Minkowski and Rindler spacetimes. We then calculate the Bogoliubov transformations linking the Minkowski and Rindler creation and annihilation operators. With the Bogoliubov transformations in hand, the quantum state seen by a RRW observer is readily obtained.
Analogously to the position operator for the harmonic oscillator in Eq.~(\ref{eq:x}), a scalar field in Minkowski spacetime may be expanded as an infinite sum of positive and negative frequency components,
\begin{equation}\label{eq:sum}
\phi=\sum_{j}u^{\mathrm{M}}_{\omega_{j}}a^{\mathrm{M}}_{\omega_{j}}+\bar{u}^{\mathrm{M}}_{\omega_{j}}a^{\mathrm{M},\dag}_{\omega_{j}},
\end{equation}
where the positive-frequency, orthonormal mode field functions are solutions to the 2D Minkowski wave equation
\begin{equation}\label{eq:wave-equation}
\left[\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\frac{\partial^{2}}{\partial x^{2}}\right]\phi=0,
\end{equation}
and given by the plane-waves
\begin{equation}\label{eq:plane}
u_{\omega_{j}}^{\mathrm{M}}=\frac{1}{\sqrt{4\pi\omega_{j}}}e^{ik_{j}x-i\omega_{j}t},
\end{equation}
with $\omega_{j}=c|k_{j}|$ and $-\infty \le j \le \infty$, where the superscript $\mathrm{M}$ signifies belonging to the Minkowski spacetime. The Minkowski vacuum state $|0\rangle^{\mathrm{M}}=\prod_{j}|0_{\omega_{j}}\rangle^{\mathrm{M}}$ is defined with respect to the positive frequency modes as the state that is annihilated by all lowering operators $a^{\mathrm{M}}_{\omega_{j}}$, i.e. $a^{\mathrm{M}}_{\omega_{j}}|0\rangle^{\mathrm{M}}=0$ for all $j$.
Of course, the accelerated observer may also define a vacuum state for the quantum field in the Rindler spacetime using the associated Rindler coordinates. Here, the orthonormal mode functions may be found by solving the 2D wave equation Eq.~(\ref{eq:wave-equation}) expressed in Rindler coordinates via Eq.~(\ref{eq:rindler-eqs}). As a static spacetime, the Rindler metric (\ref{eq:rindler}) admits a natural vacuum state $|0\rangle^{\mathrm{R}}=\prod_{j}|0_{\omega_{j}}\rangle^{\mathrm{R}}$ in the RRW with respect to the positive frequency Rindler modes $u_{\omega_{j}}^{\mathrm{R}}\propto \exp\left(-i\omega_{j}\tau\right)$. Note that the notion of positivity for the Rindler modes is with respect to the observer's proper time $\tau$. The Rindler coordinates ($c\tau,\xi$) in the LRW are completely independent of those in the RRW, giving rise to independent vacuum states for the LRW and RRW spacetimes. Again, the LRW vacuum state $|0\rangle^{\mathrm{L}}=\prod_{j}|0_{\omega_{j}}\rangle^{\mathrm{L}}$ is defined with respect to positive-frequency Rindler modes $u_{\omega_{j}}^{\mathrm{L}}$. However, as a consequence of the reflection $t\rightarrow -t$ used in defining the LRW, the notion of positive and negative frequencies is switched in the LRW. The result is a vacuum state in the LRW that is defined with respect to positive frequency modes $u_{\omega_{j}}^{\mathrm{L}}\propto \exp\left(i\omega_{j}\tau\right)$.
The Rindler modes $u_{\omega_{j}}^{\mathrm{R}}$,$u_{\omega_{j}}^{\mathrm{L}}$ and Minkowski modes $u_{\omega_{j}}^{\mathrm{M}}$ are not independent. Rather, they represent different expansions of the scalar field $\phi$ and therefore are related by a change of basis. As seen in Fig.~\ref{fig:rindler}, the RRW (or LRW) covers only $1/4$ of the entire Minkowski spacetime and as a result the Rindler modes in this region are not enough to reconstruct the entire Minkowski spacetime modes \cite{unruh:1976,birrell:1982}. We can however take a linear combination of modes from both Rindler wedges and, through analytic continuation \cite{boulware:1975}, cover the entire spacetime. In taking linear combinations of modes from the LRW and RRW, we have effectively mixed positive and negative frequency components. Given our discussion on Bogoliubov transformations in Sec.~\ref{sec:amp-intro}, when expressed in this combined Rindler basis, one should expect the Minkowski vacuum viewed by the accelerating observer to contain particles. As we shall see, this is indeed the case.
The general expansion of the Minkowski modes in Rindler modes reads,
\begin{equation}\label{eq:mode-relations}
u_{\omega_{j}}^{\mathrm{M}}= \sum_{i}\alpha_{ij}^{\mathrm{R}}u_{\omega_{i}}^{\mathrm{R}}+\bar{\beta}_{ij}^{\mathrm{R}}\bar{u}_{\omega_{i}}^{\mathrm{R}}+\alpha_{ij}^{\mathrm{L}}u_{\omega_{i}}^{\mathrm{L}}+\bar{\beta}_{ij}^{\mathrm{L}}\bar{u}_{\omega_{i}}^{\mathrm{L}}
\end{equation}
where $\alpha_{ij}^{\mathrm{R,L}}$ and $\beta_{ij}^{\mathrm{R,L}}$ are Bogoliubov transformation matrices with coefficients given by the Klein-Gordon inner-product between Minkowski and Rindler modes
\begin{equation}\label{eq:product}
\alpha_{ij}^{\mathrm{R,L}}=\left<u_{\omega_{i}}^{\mathrm{R,L}},u_{\omega_{j}}^{\mathrm{M}}\right>;\ \ \beta_{ij}^{\mathrm{R,L}}=-\left<u_{\omega_{i}}^{\mathrm{R,L}},\bar{u}_{\omega_{j}}^{\mathrm{M}}\right>.
\end{equation}
The connection between ladder operators and mode functions allows us to use Eq.~(\ref{eq:mode-relations}) to establish the Bogoliubov transformation between Minkowski and Rindler ladder operators as
\begin{equation}\label{eq:oper-relations}
a_{\omega_{j}}^{\mathrm{M}}= \sum_{i}\alpha_{ij}^{\mathrm{R}}a_{\omega_{i}}^{\mathrm{R}}+\bar{\beta}_{ij}^{\mathrm{R}}a_{\omega_{i}}^{\dag,\mathrm{R}}+\alpha_{ij}^{\mathrm{L}}a_{\omega_{i}}^{\mathrm{L}}+\bar{\beta}_{ij}^{\mathrm{L}}a_{\omega_{i}}^{\dag,\mathrm{L}}.
\end{equation}
Although we can explicitly evaluate Eq.~(\ref{eq:product}) to obtain the Bogoliubov transformation matrices in (\ref{eq:oper-relations}), the result does not elucidate the underlying physics of the amplification process as a single Minkowski mode $\omega_{j}$ will transform into a continuum of Rindler modes. Instead, we note that the Minkowski vacuum state $|0\rangle^{\mathrm{M}}$ is defined with respect to the positive frequency modes, $u_{\omega_{j}}^{\mathrm{M}}$, and any other set of basis mode functions constructed from a linear combination of these Minkowski modes will leave the vacuum state $|0\rangle^{\mathrm{M}}$ unchanged \cite{birrell:1982}. We therefore construct the Unruh basis \cite{unruh:1976} set of mode functions $\left\{v_{\omega_{j}}^{(1),\mathrm{M}},v_{\omega_{j}}^{(2),\mathrm{M}}\right\}$, from linear combinations of positive frequency Minkowski modes
\begin{equation}
v_{\omega_{j}}^{(1),\mathrm{M}}=\sum_{i}\epsilon^{(1)}_{ij}u_{\omega_{i}}^{\mathrm{M}};\ \ v_{\omega_{j}}^{(2), \mathrm{M}}=\sum_{i}\epsilon^{(2)}_{ij}u_{\omega_{i}}^{\mathrm{M}}
\end{equation}
such that, when expanded in the Rindler modes $\left\{u_{\omega_{j}}^{\mathrm{R}},u_{\omega_{j}}^{\mathrm{L}}\right\}$, diagonalizes the Bogoliubov transformation matrices $\alpha_{ij}$ in Eq.~(\ref{eq:oper-relations}). For the annihilation operators $b_{\omega_{j}}^{(1),\mathrm{M}},b_{\omega_{j}}^{(2),\mathrm{M}}$ associated with mode functions $v_{\omega_{j}}^{(1),\mathrm{M}},v_{\omega_{j}}^{(2),\mathrm{M}}$, this procedure yields the Bogoliubov transformations for the Rindler operators \cite{unruh:1976,birrell:1982}
\begin{eqnarray}\label{eq:unruh-bogo}
b_{\omega_{j}}^{(1),\mathrm{M}}&=&a^{\mathrm{R}}_{\omega_{j}}\cosh\left(r\right)+a^{\dag,\mathrm{L}}_{\omega_{j}}\sinh\left(r\right) \nonumber \\
b_{\omega_{j}}^{(2),\mathrm{M}}&=&a^{\mathrm{L}}_{\omega_{j}}\cosh\left(r\right)+a^{\dag,\mathrm{R}}_{\omega_{j}}\sinh\left(r\right),
\end{eqnarray}
with the effective squeezing parameter $r$ defined by $\tanh r =\exp\left(-\pi\omega_{j}/\alpha\right)$. In the Unruh basis we have a monochromatic Bogoliubov transformation relating a single Minkowski mode $\omega_{j}$ to the same mode in both the left and right Rindler wedges. More importantly, the Bogoliubov transformations (\ref{eq:unruh-bogo}) are of the same form as the transformations for the NDPA in Eq.~(\ref{eq:ndpa}). Thus we establish the connection between the NDPA and the UE summarized in Fig.~(\ref{fig:relations}).
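As a numerical aside, the squeezing parameter in Eq.~(\ref{eq:unruh-bogo}) implies a mean Rindler-mode occupation $\sinh^{2}r=\left[\exp\left(2\pi\omega_{j}/\alpha\right)-1\right]^{-1}$, i.e. a Bose-Einstein distribution. A minimal, dimensionless Python sketch makes this explicit:
\begin{verbatim}
# Sketch (dimensionless): the squeezing parameter of Eq. (unruh-bogo)
# implies a Bose-Einstein occupation of each Rindler mode.
import math

def mean_occupation(w):
    """Mean photon number for w = omega_j/alpha."""
    r = math.atanh(math.exp(-math.pi * w))   # tanh(r) = exp(-pi*w)
    n_squeeze = math.sinh(r)**2
    n_planck = 1.0 / math.expm1(2.0 * math.pi * w)
    assert math.isclose(n_squeeze, n_planck)
    return n_squeeze

for w in (0.1, 0.5, 1.0):
    print(w, mean_occupation(w))
\end{verbatim}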
For a single mode of the Minkowski vacuum $|0_{\omega_{j}}\rangle^{\mathrm{M}}$, the Bogoliubov transformations in Eq.~(\ref{eq:unruh-bogo}) lead to the two-mode squeezed state for the Rindler modes
\begin{equation}\label{eq:unruh-state}
|0_{\omega_{j}}\rangle^{\mathrm{M}}= \frac{1}{\cosh r}\sum_{n=0}^{\infty} \left(\tanh r \right)^n | n_{\omega_{j}}\rangle^{\mathrm{L}}\otimes | n_{\omega_{j}}\rangle^{\mathrm{R}}.
\end{equation}
From the viewpoint of the observer in the RRW, the presence of the horizon prevents access to the modes in the LRW, and they must be traced over in Eq.~(\ref{eq:unruh-state}). By analogy with the NDPA in Sec.~\ref{sec:paramp}, the observed modes in the RRW are in a thermal state with temperature related to the squeezing parameter $r$ as follows:
\begin{equation}
\tanh^{2}\left(r\right)=e^{-2\pi\omega/\alpha}=\exp\left(-\frac{\hbar\omega}{k_{\mathrm{B}}T_{\mathrm{U}}}\right),
\end{equation}
where the Unruh temperature is
\begin{equation}\label{eq:tu}
T_{\mathrm{U}}=\frac{\hbar\alpha}{2\pi k_{\mathrm{B}}},
\end{equation}
in terms of the proper acceleration parameter $\alpha=a/c$. Here, the energy required to generate particles from the vacuum comes from the work needed to maintain the observer's constant acceleration. As with the parametric amplifier of Sec.~\ref{sec:paramp}, we have implicitly assumed the energy of the accelerating observer is unaffected by the creation of particles. The transfer of energy to the field modes is quite natural given that our detector is linearly coupled to the operators representing the quantized scalar field. As discussed earlier, these field modes are not local to the observer, but rather form a basis set covering the entire spacetime. As a result, the full spacetime of a Rindler observer is in a thermal state characterized by the Unruh temperature Eq.~(\ref{eq:tu}).
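To get a feeling for the magnitudes involved, the following minimal Python sketch evaluates Eq.~(\ref{eq:tu}) for two illustrative accelerations (values chosen here for orientation only):
\begin{verbatim}
# Sketch: Unruh temperature, Eq. (tu), for two illustrative
# accelerations (SI units).
import math

hbar, k_B, c = 1.0546e-34, 1.3806e-23, 2.9979e8

def unruh_temperature(a):
    return hbar * (a / c) / (2.0 * math.pi * k_B)

for a in (9.81, 1.0e20):   # one g, and an extreme acceleration
    print(f"a = {a:.2e} m/s^2 -> T_U = {unruh_temperature(a):.2e} K")
# One g gives ~4e-20 K; even a ~ 1e20 m/s^2 yields less than 1 K.
\end{verbatim}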
An equivalent way to understand the origin of the Unruh temperature $T_{\mathrm{U}}$ is to consider the effect of the horizon in the accelerating reference frame on a monochromatic plane wave with frequency $\Omega$ moving in the $x$-direction of Minkowski space, $\phi(x,t)=\exp\left[-i\Omega\left(t-x/c\right)\right]$. From the viewpoint of the accelerating observer, this wave may be expressed via Eq.~(\ref{eq:rindler-eqs}) as
\begin{eqnarray}\label{eq:rindler-mode}
\phi(\tau)&=&\exp\left\{\frac{-i\Omega\xi}{c}\left[\sinh\left(\alpha\tau\right)-\cosh\left(\alpha\tau\right)\right]\right\}\nonumber \\
&=&\exp\left[i\frac{\Omega}{\alpha}\left(e^{-\alpha\tau}\right)\right],
\end{eqnarray}
where we have used $\xi=c^{2}/a$. We see that the wave is no longer monochromatic, but is rather exponentially red-shifted (Doppler shifted) with an e-folding time determined by the observer's acceleration $\alpha$. Upon Fourier transforming Eq.~(\ref{eq:rindler-mode}), $f(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}d\tau\, \phi(\tau)e^{+i\omega \tau}$, the effect of this red-shift can be seen in the resulting power spectrum, $P\left(\omega\right)=\left|f\left(\omega\right)\right|^{2}$, which does not vanish at negative frequencies:
\begin{equation}
P(-\omega)=\left|f(-\omega)\right|^{2}=\frac{2\pi}{\omega\alpha}\frac{1}{e^{\frac{2\pi\omega}{\alpha}}-1};\ \ \omega>0.
\end{equation}
Comparing with a Planck distribution, we again recover the Unruh temperature Eq.~(\ref{eq:tu}) \cite{padmanabhan:2005}.
For the two-level observer/detector, the ratio of the power spectrum $P(\omega)$ evaluated at negative and positive qubit transition frequencies, $\mp\omega_{01}$ respectively, can be related to the Fermi golden rule transition rates $\Gamma$ between ground and excited-state energy levels \cite{clerk:2010}:
\begin{equation}\label{eq:detailed}
\frac{P(-\omega_{01})}{P(\omega_{01})}=\frac{\Gamma_{|0\rangle\rightarrow |1\rangle}}{\Gamma_{|1\rangle\rightarrow |0\rangle}}=\exp\left[\frac{-\hbar \omega_{01}}{k_{\mathrm{B}}T_{\mathrm{U}}}\right],
\end{equation}
which is identical to the detailed balance relation for transition rates in a thermal environment. In this way, the negative frequency terms represent the absorption of energy by the observer from the environment, whereas positive frequencies indicate emission. The excitation of the two-level detector can only occur if there are particles in the field mode to which it is coupled. The negative-frequency components signal the presence of particles as seen by the observer, and the departure from the Minkowski vacuum state. From the viewpoint of the accelerated observer, Eq.~(\ref{eq:detailed}) indicates that there is no difference between the transformed Minkowski vacuum state and a thermal environment at the Unruh temperature. We must therefore consider the Unruh temperature as corresponding to the actual physical temperature of the environment as seen by the observer.
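Equation~(\ref{eq:detailed}) is also how one would assign a temperature operationally: measure the up- and down-transition rates and invert the Boltzmann factor. A minimal Python sketch of this thermometry follows; the acceleration and detector frequency are assumed values, not taken from any experiment:
\begin{verbatim}
# Sketch: assigning a temperature from detailed balance, Eq. (detailed).
# The acceleration and detector frequency below are assumed values.
import math

hbar, k_B, c = 1.0546e-34, 1.3806e-23, 2.9979e8
a = 1.0e20                        # proper acceleration (m/s^2), assumed
alpha = a / c
omega01 = 2.0 * math.pi * 5.0e9   # 5 GHz two-level detector, assumed

ratio = math.exp(-2.0 * math.pi * omega01 / alpha)  # P(-w01)/P(w01)
T_inferred = hbar * omega01 / (k_B * math.log(1.0 / ratio))
T_unruh = hbar * alpha / (2.0 * math.pi * k_B)
assert math.isclose(T_inferred, T_unruh, rel_tol=1e-9)
print(f"T = {T_unruh:.2e} K")
\end{verbatim}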
Although the UE shares many features with the NDPA in Sec.~\ref{sec:paramp}, there are several important differences. For a constant acceleration, the squeezing parameter $r$, and therefore Unruh temperature $T_{\mathrm{U}}$, is time independent. Likewise, Eq.~(\ref{eq:unruh-state}) shows that $T_{\mathrm{U}}$ is the same for any choice of mode frequency $\omega_{j}$. This is in contrast to the parametric amplifier where the effective temperature is time dependent [Eq.~(\ref{eq:tt})] due to particle build up and with rates that depend on the mode coupling strength, pump amplitude and frequency \cite{leonhardt:2010}. Furthermore, in contrast to the NDPA where in principle both modes of the two-mode squeezed state (\ref{eq:par-amp-wave-function}) can be measured, the existence of a horizon for the accelerating observer allows only those modes in the RRW to be measured. The resulting thermal environment is of fundamental importance to quantum information and entanglement in relativistic systems \cite{hartle:1995,alsing:2003,fuentes:2005,peres:2004}.
\subsection{Hawking radiation}\label{sec:hawking}
One of the most astonishing predictions of general relativity is that of a black hole, a region of spacetime where gravity is so strong that not even light can escape its pull. When viewed by an observer at rest far from the black hole, a non-rotating, uncharged black hole with mass $M$ can be described by the Schwarzschild metric
\begin{equation}\label{eq:Schwarzschild}
ds^{2}=-\left(1- \frac{r_{s}}{r}\right)c^{2}dt^{2}+\left(1-\frac{r_{s}}{r}\right)^{-1}dr^{2}+r^{2}d\Omega^{2}
\end{equation}
where the radial coordinate $r$ is defined such that the area of a sphere is given by $A=4\pi r^{2}$, and the $t$ coordinate gives the time as measured by a static observer at $r=\infty$. The Schwarzschild radius $r_{s}=2GM/c^{2}$ is defined as the radius at which the timelike metric term proportional to $dt^{2}$ vanishes. This denotes the boundary of the black hole, called the event horizon, and also serves to define the black hole's surface area $A_{\rm BH}$. A more physical description of the horizon is given in Fig.~\ref{fig:collapse}
\begin{figure}[t]\begin{center}
\includegraphics[width=8.0cm]{nation_fig05}
\caption{(Color online) Formation of a horizon (black) by the gravitational collapse of a spherical object. Before the horizon forms, light rays (red) leaving the surface of the object (blue) are free to propagate out to spatial infinity. In contrast, once the mass of the body is within the Schwarzschild radius $r_{s}=2GM/c^{2}$, light rays are trapped behind the horizon and eventually encounter the singularity (dashed line). The horizon demarcates the last light ray able to escape from the surface to infinity and the first trapped ray inside the radius $r_{s}$. Equivalently, the horizon can be characterized by looking at the causal structure of spacetime indicated by light-cones (green) that give the direction of propagation for light rays at a given point. As one approaches the horizon, the light-cone begins to tilt toward the black hole singularity. On the horizon, the light-cone aligns along the $ct$-direction such that a light ray emitted from the horizon is stationary in space. As the time-component of the metric vanishes on the horizon, a light ray on the horizon also appears frozen in time. Inside the horizon, even time itself points toward the singularity, so that nothing can escape.}
\label{fig:collapse}
\end{center}
\end{figure}
where we consider the gravitational collapse of a spherical object and the effect of the resulting horizon on the causal structure of spacetime and the propagation of photons.
Given the relation between mass and energy, $E=Mc^{2}$, the mass-dependence of the Schwarzschild radius $r_{s}$ may be used to write the energy-conservation relation for the black hole
\begin{equation}\label{eq:conservation}
dE=c^{2}dM=\frac{\kappa c^{2}}{8\pi G}dA_{\mathrm{BH}},
\end{equation}
where
\begin{equation}\label{eq:kappa}
\kappa=\frac{c^{4}}{4GM},
\end{equation}
is the surface gravity of the black hole: the force per unit mass, exerted at infinity, needed to keep a small test mass stationary at the horizon. For a black hole, the inability of light to escape beyond the event horizon out to spatial infinity suggests that the horizon may be viewed as a uni-directional surface. Objects can fall into a black hole and increase its mass, but a reduction in mass is impossible as nothing can escape. This idea was used by \cite{hawking:1972} to prove that any physical process necessarily increases the surface area of a black hole, $dA_{\mathrm{BH}}\ge0$. Shortly after, it was noted by \cite{bekenstein:1973} that this increase in area bore a striking resemblance to the second law of thermodynamics: the total entropy of an isolated system does not decrease. This suggests that Eq.~(\ref{eq:conservation}) may be recast in the form of the first law of thermodynamics $dE=TdS$, where $T$ is the temperature of the system in thermodynamic equilibrium. Later, the description of black hole mechanics was extended to include all four thermodynamic laws \cite{bardeen:1973}: black holes are intrinsically thermodynamical objects.
Using dimensional analysis, the relationship between area and entropy may be written in terms of the relevant fundamental constants as $dA_{\mathrm{BH}}=(\lambda G\hbar/k_{\mathrm{B}}c^{3})dS_{\mathrm{BH}}$ where $\lambda$ is an undetermined dimensionless constant. We may therefore express Eq.~(\ref{eq:conservation}) as the thermodynamic relation
\begin{equation}
dE=T_{\mathrm{H}}dS_{\mathrm{BH}}=\frac{\hbar\kappa}{8\pi k_{\mathrm{B}}c}\lambda dS_{\mathrm{BH}},
\end{equation}
which suggests that a black hole not only absorbs energy, but also emits radiation with a temperature proportional to the surface gravity Eq.~(\ref{eq:kappa}). This result is further motivated by the fact that the surface gravity is constant over the horizon of a stationary black hole, a property that is reminiscent of the uniform temperature of a thermal body in equilibrium; this constitutes the zeroth law of black hole mechanics \cite{bardeen:1973}. Although these considerations argued for the existence of a black hole temperature, the inability of anything to escape beyond the horizon suggested that the effective temperature of a black hole is actually zero: $T_{\mathrm{H}}$ has no meaning as a physical temperature. This conventional viewpoint was overturned by Hawking using quantum field theory in curved spacetime (QFTCS) to show that a black hole emits black body radiation with a Hawking temperature
\begin{equation}\label{eq:th}
T_{\mathrm{H}}=\frac{\hbar\gamma}{2\pi k_{\mathrm{B}}},
\end{equation}
characterized by the surface gravity parameter $\gamma=\kappa/c$ \cite{hawking:1974,hawking:1975}. In this way, Hawking was not only able to give a physical interpretation to the black hole temperature $T_{\mathrm{H}}$, but was also able to solidify the link between the black hole area $dA_{\mathrm{BH}}$ and entropy $dS_{\mathrm{BH}}$, with the proportionality constant fixed to be $\lambda=4$.
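For orientation, Eq.~(\ref{eq:th}) is easily evaluated; the minimal Python sketch below (SI constants, with the solar mass as the only input) gives $T_{\mathrm{H}}\approx6\times10^{-8}~\mathrm{K}$ for a solar-mass black hole, far below the $2.7~\mathrm{K}$ cosmic microwave background:
\begin{verbatim}
# Sketch: Hawking temperature, Eq. (th), for a solar-mass black hole
# (SI units; constants rounded).
import math

hbar, k_B, c, G = 1.0546e-34, 1.3806e-23, 2.9979e8, 6.674e-11
M_sun = 1.989e30   # kg

def hawking_temperature(M):
    kappa = c**4 / (4.0 * G * M)   # surface gravity, Eq. (kappa)
    return hbar * (kappa / c) / (2.0 * math.pi * k_B)

print(f"T_H = {hawking_temperature(M_sun):.2e} K")  # ~6e-8 K
\end{verbatim}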
When viewed as a particle production process, Hawking radiation (HR) has a simple interpretation. As shown in Fig.~\ref{fig:hawking}, vacuum fluctuations produce pairs of virtual particles that quickly annihilate each other when far from the horizon. In contrast, near the horizon one particle in the pair may be trapped inside the horizon, unable to recombine with its partner. The particle outside the horizon is then free to propagate out to an observer at spatial infinity. The energy necessary for the outflow of particles comes from the gravitational field produced by the black hole's mass $M$ which, due to energy conservation, must decrease over time as radiation is emitted. With the surface gravity (\ref{eq:kappa}) being inversely proportional to the black hole mass and proportional to the Hawking temperature, the latter increases as the black hole radiates away energy. Unabated, the black hole experiences an unbounded increase in its temperature, and ultimately complete evaporation.
\begin{figure}[t]\begin{center}
\includegraphics[width=8.0cm]{nation_fig06}
\caption{(Color online) Cartoon of a black hole with vacuum fluctuations. Far from the horizon, vacuum fluctuations result in virtual particles that quickly annihilate each other. At the horizon however, one particle in a virtual pair may be trapped inside the horizon, allowing its partner to escape to arbitrary large distances---the Hawking effect.}
\label{fig:hawking}
\end{center}
\end{figure}
Although a black hole's mass $M$ decreases as HR is emitted, in typical derivations of the Hawking effect that use QFTCS \cite{hawking:1975,boulware:1976,hartle:1976}, the black hole mass, and therefore the spacetime metric (\ref{eq:Schwarzschild}), is considered to be fixed throughout the calculation. This is for two reasons: (i) The power output from the Hawking process is exceedingly low for black holes with masses above the Planck mass $m_{\mathrm{P}}=\sqrt{\hbar c/G}\sim2\times 10^{-8}~\mathrm{kg}$. In this situation, the net loss of energy due to HR is a negligibly small portion of the total black hole energy, and can safely be ignored. For example, even a relatively small black hole has a mass close to that of the sun, $\sim 2\times 10^{30}~\mathrm{kg}$, and is therefore well above this Planck scale. (ii) Allowing for black hole evaporation introduces explicit time-dependence in the spacetime metric. However, the connection of the zeroth and first laws of thermodynamics to those of black hole mechanics relies on the assumption of a stationary spacetime and a well defined surface gravity; conditions which are violated during evaporation \cite{wald:2001}.
In essence, the fixed mass condition assumes a classical source of energy with fixed amplitude that cannot be depleted through the emission process. Although this assumption appears to be unique to black holes, we have in fact made use of similar approximations for both the PA and UE considered in Secs.~(\ref{sec:paramp}) and (\ref{sec:unruh}), respectively. For the PA, our use of a classical fixed-amplitude pump mode plays a role analogous to that of the fixed black hole mass. Likewise, in the UE we implicitly assumed that the source of the observer's acceleration had an unlimited supply of energy so as to maintain the proper acceleration $a$ indefinitely. We can in fact make use of this fixed mass condition, via the surface gravity (\ref{eq:kappa}), to relate the emission of HR to the UE through Einstein's equivalence principle relating inertial and gravitational accelerations \cite{einstein:1907}, as we now demonstrate.
With HR generated close to the black hole horizon [see Fig.~\ref{fig:hawking}], the relationship to the UE is elucidated by taking the near-horizon approximation to the Schwarzschild metric Eq.~(\ref{eq:Schwarzschild}). To explore the near-horizon region of the black hole, we replace the Schwarzschild radial coordinate $r$ with a length
\begin{equation}
x=\int_{r_{s}}^{r}\sqrt{g_{rr}(r')}dr'=\int_{r_{s}}^{r}\left(1-\frac{r_{s}}{r'}\right)^{-1/2}dr',
\end{equation}
characterizing the proper distance close to the horizon. Near the horizon, $x\approx 2\sqrt{r_{s}(r-r_{s})}$, and the near-horizon form of the Schwarzschild metric (\ref{eq:Schwarzschild}) expressed in terms of this proper distance becomes \cite{fabbri:2005}
\begin{equation}\label{eq:near}
ds^{2}=-\left(\gamma x\right)^{2}dt^{2}+dx^{2},
\end{equation}
where we have ignored the coordinates transverse to the radial direction, since close to $100\%$ of the HR is emitted in the lowest, $l=1$, angular momentum state \cite{page:1976}; the black hole emits as close to radially as possible \cite{bekenstein:2002}. This is due to conformal symmetry in the near-horizon region \cite{carlip:2007}, and allows for a complete description of HR using only a single spatial dimension. The power emitted by HR in the radial direction may then be calculated assuming the unidirectional emission of power $\dot{E}_{1\mathrm{D}}$ from a one-dimensional blackbody \cite{nation:2010}:
\begin{equation}\label{eq:hpower}
\dot{E}_{1\mathrm{D}}=\frac{\pi k_{\mathrm{B}}^{2}}{12\hbar}T_{\mathrm{H}}^{2}.
\end{equation}
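Continuing the solar-mass estimate above, a minimal Python sketch of Eq.~(\ref{eq:hpower}), using the temperature from the previous sketch, shows just how feeble this emission is:
\begin{verbatim}
# Sketch: one-dimensional Hawking power, Eq. (hpower), at the
# solar-mass temperature estimated above (SI units).
import math

hbar, k_B = 1.0546e-34, 1.3806e-23
T_H = 6.2e-8   # K, from the previous sketch

P_1d = math.pi * k_B**2 * T_H**2 / (12.0 * hbar)
print(f"P_1D = {P_1d:.1e} W")  # ~2e-27 W
\end{verbatim}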
The near-horizon approximation to the Schwarzschild metric (\ref{eq:near}) is of the same form as the Rindler spacetime (\ref{eq:rindler}) of an accelerating observer, where the effective acceleration is provided by the surface gravity of the black hole $\kappa$ (\ref{eq:kappa}). The replacement $\alpha\rightarrow\gamma$ in Eq.~(\ref{eq:rindler}), which gives the metric (\ref{eq:near}), is a manifestation of Einstein's equivalence principle, and allows us to carry over the results obtained for the UE to the present case of HR. In particular, we can replace the acceleration parameter $\alpha$ with $\gamma$ in the Unruh temperature (\ref{eq:tu}), which then agrees with Eq.~(\ref{eq:th}) for the temperature of a black hole. Finally, as in the UE (\ref{eq:unruh-state}) and parametric amplification (\ref{eq:par-amp-wave-function}), the photon pairs generated via the Hawking process in this near-horizon region are entangled as a two-mode squeezed state.
It should be noted however that the Hawking radiation temperature (\ref{eq:th}) applies to an observer at rest far from the black hole. This is indicated by the use of the Schwarzschild time $t$ in (\ref{eq:near}) rather than the proper time $\tau$ of an Unruh observer from Eq.~(\ref{eq:rindler}). The surface gravity $\kappa$ is defined with respect to the observer at infinity as
\begin{equation}\label{eq:va}
\kappa=\left.Va\right|_{r=r_{s}},
\end{equation}
where
\begin{equation}\label{eq:accel}
a(r)=\frac{GM}{r^{2}\sqrt{1-\frac{r_{s}}{r}}}
\end{equation}
is the radial acceleration needed to keep an observer stationary at the radius $r$, and $V(r)=\sqrt{1-r_{s}/r}$ is the red-shift factor accounting for the energy lost by an escaping photon due to the gravitational potential of the black hole. It is easy to check that Eq.~(\ref{eq:va}) agrees with our earlier definition (\ref{eq:kappa}). We may calculate the Hawking temperature at an arbitrary radius $r$ away from the horizon taking into account the red-shift as
\begin{equation}
T(r)=\frac{\hbar(\kappa/c)}{2\pi k_{\mathrm{B}}V(r)}
\end{equation}
which, as one approaches the horizon, gives $T\rightarrow\hbar(a/c)/(2\pi k_{\mathrm{B}})$ with $a$ given by Eq.~(\ref{eq:accel}). This result is exactly the same as that obtained for the Unruh temperature (\ref{eq:tu}) in Sec.~\ref{sec:unruh}. By removing the effects of the gravitational red-shift, HR is seen to be nothing other than the UE for an accelerating observer near the horizon. Keep in mind that the acceleration Eq.~(\ref{eq:accel}), like the corresponding Unruh acceleration $a$, diverges as one approaches the horizon. Thus we establish the connection between the Unruh and Hawking effects through the equivalence principle, as summarized in Fig.~(\ref{fig:relations}).
Even though HR has been derived in a variety of ways \cite{hawking:1975,boulware:1976,hartle:1976,parentani:2000,parikh:2000}, there remain several unanswered questions. One concerns the trans-Planckian problem \cite{jacobson:1991,unruh:2005}, where the usual derivation of the thermal HR requires that the photon's linear dispersion relation holds up to arbitrarily high energies; classical notions of spacetime are expected to break down near the Planck energy, $E_{\mathrm{P}}=\sqrt{\hbar c^{5}/G}\sim 10^{19}~\mathrm{GeV}$. Another problem concerns the consequences of complete evaporation of a black hole via the emission of thermal HR; information stored in the black hole is destroyed, signaling a breakdown of unitary evolution in quantum mechanics. This is known as the information loss paradox \cite{mathur:2009}. A third problem is the difficulty in measuring and verifying the negligibly low radiation temperatures predicted for astronomical black holes, i.e. $T_{\mathrm{H}}\sim6\times10^{-8}~\mathrm{K}$ for a solar mass black hole. These difficulties have called into question some of the approximations made in QFTCS calculations of HR, as well as any hope of experimental confirmation. However, light may be shed on some of these problems by considering analogue condensed matter systems.
In preparation for discussing these HR analogues in Sec.~\ref{sec:analogue-hawking} below, we note that, from a calculational standpoint, the Schwarzschild metric (\ref{eq:Schwarzschild}) is not ideal since it is singular at the horizon. It is therefore beneficial to choose coordinates that remain well-behaved in the horizon region. A particularly good choice is the set of Painlev\'{e}-Gullstrand coordinates \cite{painleve:1921}
\begin{equation}\label{eq:metric}
ds^{2}=-\left[c^{2}-u\left(r\right)^{2}\right]d\tau^{2}+2u\left(r\right)drd\tau+dr^{2}+r^{2}d\Omega^{2},
\end{equation}
where the Schwarzschild time $t$ is replaced by the proper time $\tau$ of a free-falling observer, while the spatial coordinate remains the same as for the Schwarzschild metric. For an unlucky observer starting from rest at spatial infinity and free-falling into a black hole, the horizon occurs where the observer's velocity $u(r)$, measured with respect to proper time, is equal to the vacuum speed of light $c$.
\subsection{The dynamical Casimir effect}\label{sec:dce}
The dynamical Casimir effect (DCE) concerns the generation of photons from the quantum
vacuum due to a time-dependent boundary condition, imposed by e.g.~a moving
mirror.
In contrast to the previously discussed UE in Sec.~\ref{sec:unruh}, where it was shown that
the notion of particle is observer dependent,
and where the Minkowski vacuum appears as thermal radiation to an {\it
accelerated observer},
here we will see that an {\it accelerated mirror} can result in radiation that
is detectable by an {\it inertial observer},
e.g., an observer at rest in Minkowski space far from the moving mirror.
See Fig.~\ref{fig:dce-schematic} for a schematic illustration of this process.
\begin{figure}[t]
\includegraphics[width=8.0cm]{nation_fig07}
\caption{(Color online) An oscillating mirror in free space generates photons
due to its interaction with vacuum fluctuations.
This effect is known as the dynamical Casimir effect.
The photons are generated in pairs with frequencies that add up to the frequency
of the mirror's oscillation.
The photon pair production can be interpreted as up-conversion of virtual
photons of the quantum vacuum fluctuations,
or, equivalently, as down-conversion of pump phonons from the oscillatory motion
of the mirror.}
\label{fig:dce-schematic}
\end{figure}
Consider a massless scalar field $\phi(x,t)$ in two-dimensional spacetime
satisfying the Klein-Gordon wave equation
\begin{eqnarray}
\frac{\partial^2 \phi}{\partial t^2} - \frac{\partial^2 \phi}{\partial x^2} =
0,
\end{eqnarray}
and subject to the boundary condition imposed by a mirror with the
trajectory $z(t)$,
\begin{eqnarray}
\phi(z(t), t) = 0.
\end{eqnarray}
Following \cite{moore:1970} and \cite{fulling:1976}, we perform a conformal (i.e. light-cone preserving)
coordinate transformation defined by
\begin{eqnarray}
\label{eq:dce_conformal_transf_f}
t-x &=& f(w-s),\\
\label{eq:dce_conformal_transf_g}
t+x &=& g(w+s).
\end{eqnarray}
The wave equation and the metric are invariant under conformal coordinate
transformations and retain their usual form in the $(w,s)$ coordinates:
\begin{eqnarray}
\frac{\partial^2 \phi}{\partial w^2} &-& \frac{\partial^2 \phi}{\partial s^2} =
0,\\
dx^2-dt^2 &=& f'(w-s) g'(w+s) \left(ds^2 - dw^2\right).
\end{eqnarray}
If we impose the condition that $x = z(t)$ is mapped to $s=0$ [see
Fig.~\ref{fig:dce-trajectory}(a)],
we get the static boundary condition in the transformed coordinates
\begin{eqnarray}
\phi(0, w) = 0,
\end{eqnarray}
and the following constraint on the functions $f$ and $g$
\begin{equation}
\label{eq:g_and_f_transform_eq}
\frac{1}{2}\left[g(w) - f(w)\right] = z\left\{\frac{1}{2}\left[g(w) +
f(w)\right]\right\}.
\end{equation}
In the $(w,s)$ coordinate system, the problem is static and can be readily
solved. The standard mode functions are
\begin{eqnarray}
\phi_\omega(w,s) = (\pi\omega)^{-\frac{1}{2}} \sin(\omega s)\, e^{-i\omega w},
\end{eqnarray}
which, in the original $(t,x)$ coordinates, take the form
\begin{eqnarray}
\!\!\!\phi_\omega(x,t) = i(4\pi\omega)^{-\frac{1}{2}} [e^{-i\omega g^{-1}(t+x)}
- e^{-i\omega f^{-1}(t-x)}].
\end{eqnarray}
The problem of finding the appropriate mode functions is therefore reduced to
finding the functions $g$ and $f$ and their inverses, given a particular mirror
trajectory $z(t)$.
For a trajectory $z(t)$, solutions that satisfy
Eq.~(\ref{eq:g_and_f_transform_eq}) usually exist, but analytical expressions
for $f(w)$ and
$g(w)$ can be difficult to obtain.
The same approach can be used for two mirrors that form a
cavity in two-dimensional spacetime \cite{moore:1970}. Assuming that one
mirror is fixed at $x=0$ and that the second mirror follows a trajectory
$x=z(t)$, the boundary conditions are
\begin{eqnarray}
\phi(0, t) = \phi(z(t), t) = 0.
\end{eqnarray}
Applying the conformal transformation in Eqs.~(\ref{eq:dce_conformal_transf_f}-\ref{eq:dce_conformal_transf_g}), which maps the mirror coordinates as $x=0 \leftrightarrow s=0$ and $x=z(t) \leftrightarrow s=1$ [see Fig.~\ref{fig:dce-trajectory}(b)], results in the static boundary condition
\begin{eqnarray}
\phi(s=0, w) = \phi(s=1, w) = 0.
\end{eqnarray}
Setting $f(u)=g(u)$ and denoting $f^{-1}(u) =
R(u)$ yields the constraint
\begin{eqnarray}
\label{eq:dce_moore_equation}
R(t+z(t)) - R(t-z(t)) &=& 2.
\end{eqnarray}
This functional equation was first derived by Moore \cite{moore:1970},
and is often called the Moore equation.
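As a sanity check on Eq.~(\ref{eq:dce_moore_equation}), note that for a static cavity of length $z_{0}$ the linear function $R(u)=u/z_{0}$ is an exact solution. A minimal Python sketch:
\begin{verbatim}
# Sketch: for a static cavity, z(t) = z0, the linear function
# R(u) = u/z0 solves Moore's equation R(t + z(t)) - R(t - z(t)) = 2.
import numpy as np

z0 = 1.0                      # static cavity length (arbitrary units)

def R(u):
    return u / z0

t = np.linspace(0.0, 10.0, 101)
z = np.full_like(t, z0)       # static mirror trajectory
assert np.allclose(R(t + z) - R(t - z), 2.0)
\end{verbatim}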
Given the solution $R(u)$ to Eq.~(\ref{eq:dce_moore_equation}), we can
write the normal modes in the original $(x,t)$ coordinate as
\begin{eqnarray}
\phi_n(x,t) = (4\pi n)^{-\frac{1}{2}} [e^{-i\pi n R(t+x)} -
e^{-i\pi n R(t-x)}].
\end{eqnarray}
Again, the difficulty of the problem has been reduced to solving the
functional equation Eq.~(\ref{eq:dce_moore_equation}).
\begin{figure}[t]
\includegraphics[width=8.0cm]{nation_fig08}
\caption{(Color online) Mirror trajectories in the original coordinates and
the transformed coordinates for a single mirror (a) and a cavity with variable
length (b). The coordinate transformations simplify the boundary-value
problem, but finding the correct transformation functions ($f$, $g$, and
$R$, respectively) can itself be a difficult problem.}
\label{fig:dce-trajectory}
\end{figure}
The mode functions $\phi_n(x,t)$ are orthonormal with respect to
the Klein-Gordon inner product, and can be used in the usual
canonical quantization of the field
\begin{eqnarray}
\phi(x,t) = \sum_n a_n \phi_n(x,t) +
a_n^\dagger \bar{\phi}_n(x,t),
\end{eqnarray}
where the creation and annihilation operators satisfy the usual commutation relation $[a_n,a_m^\dagger] = \delta_{nm}$.
The state of the field can be
characterized by e.g.~the energy-momentum tensor, $T_{\alpha\beta}(x,t)$,
\cite{fulling:1976,law:1994}
or by the photon statistics obtained by expanding the field in the Fock
state basis \cite{dodonov:1990}.
The advantage of the energy-momentum tensor, and in particular the
energy-density component $T_{00}(x,t)$, is that it is a local quantity that
describes the radiation at the point $(x,t)$, regardless of the behavior of the
boundary conditions at that point in time, but on the other hand it requires a
regularization procedure to yield finite results.
In contrast, the Fock-state representation is a decomposition in global
modes that depends on the boundary condition. The photon statistics usually
gives an intuitive picture of the field state, but with time-dependent boundary
conditions there is no well-defined Fock-state basis with a
time-translationally invariant vacuum state \cite{moore:1970,fulling:1976}.
However, it is possible to formulate a meaningful photon definition by
considering a scattering-type problem for bounded motion, with stationary
mirrors in the regions $t < 0$ and $t > T$, see Fig.~\ref{fig:dce-trajectory}.
The Fock-state basis for the stationary-mirror field can be used for the in and
out regions, corresponding to $t < 0$ and $t > T$, respectively.
We can formally write the field in the stationary regions as
\begin{eqnarray}
\phi_{\rm in}(x,t) &=& \sum_n\left(a_n\psi^{(0)}_n(x,t) + {\rm h.c.}\right),\\
\phi_{\rm out}(x,t) &=& \sum_n\left(b_n\psi^{(0)}_n(x,t) + {\rm h.c.}\right),
\end{eqnarray}
where $\psi^{(0)}_n(x,t) = i(\pi n)^{-\frac{1}{2}} \sin(\omega_n x)\, e^{-i\omega_n t}$ are the mode functions for the stationary-mirror problem with resonance frequencies $\omega_n=\pi n/z_0$ and mirror separation $z_0$.
The operators $a_n$ and $b_n$ are related through the Bogoliubov
transformation
\begin{eqnarray}
b_m &=& \sum_n\left(a_n \alpha_{nm} + a_n^\dagger \bar{\beta}_{nm}\right).
\end{eqnarray}
The coefficients $\alpha_{nm}$ and $\beta_{nm}$ are given by projecting
the mode functions for the nonstationary region $0 \leq t \leq T$ at time
$t=T$ on the stationary mirror mode functions, using the Klein-Gordon
inner product,
\begin{eqnarray}
\label{eq:bogoliubov_dce_alpha}
\alpha_{nm} &=& \left<\psi_m^{(0)}(x,T), \phi_n(x,T)\right>,\\
\label{eq:bogoliubov_dce_beta}
\beta_{nm} &=& {\left<\psi_m^{(0)}(x,T), \bar{\phi}_n(x,T)\right>^*},
\end{eqnarray}
where we have taken $\phi_{n}(x,0) = \psi_n^{(0)}(x,0)$. For the in
and
out
regions the photon statistics is well-defined. If, for example, the field is in
the vacuum state at $t<0$, then the final photon number in the $n$th mode at
$t > T$ is
\begin{eqnarray}
N^{\rm out}_m = \left<b_m^\dag b_m\right>_{\rm in}= \sum_n
|\beta_{nm}|^2.
\end{eqnarray}
The condition for which $\beta_{nm} = 0$ can be found by equating
the energy flux $\left<T_{01}(x,t)\right>$ to zero. \cite{fulling:1976} showed
that the mirror trajectories that result
in a field without radiation are those with uniform acceleration (including,
of course, zero acceleration).
In contrast, mirror trajectories with non-uniform acceleration result in
radiation $\left<T_{01}(x,t)\right> \neq 0$, which in the out region $t>T$
corresponds to $\beta_{nm} \neq 0$ for some $n$ and $m$. This effect is often
called the dynamical Casimir effect.
Explicit expressions for the Bogoliubov coefficients
Eqs.~(\ref{eq:bogoliubov_dce_alpha}-\ref{eq:bogoliubov_dce_beta})
and photon number $N_m^{\rm out}$ have been evaluated
for a number of different mirror trajectories with nonuniform acceleration.
A mirror trajectory of considerable theoretical interest is the exponentially
receding mirror with a velocity that asymptotically approaches the speed of light,
\begin{eqnarray}
\label{eq:receding-mirror}
z(t) = -t - A\exp\left(-2\kappa t\right) + B, \;\;\;t>0
\end{eqnarray}
where $A, B, \kappa > 0$ are constants and $z(t)=0$, $t\leq0$.
This particular mirror trajectory results in an exponential Doppler shift and radiation with a thermal black-body spectrum, with an effective temperature $T_{\rm eff} = \kappa/2\pi$ (in units where $\hbar=k_{\mathrm{B}}=1$) set by how fast the mirror velocity approaches the speed of light \cite{davies:1978}.
Furthermore, an effective horizon occurs, after which a light ray sent from an observer toward the mirror can never catch up with and reflect off the mirror, but instead travels to infinity along with it.
Due to the appearance of this effective horizon, the
mathematical analysis of the radiation produced by the receding mirror is
identical to the derivation of Hawking radiation from black
holes, see Sec.~\ref{sec:hawking}.
Thus we establish the connection between the dynamical Casimir effect and Hawking radiation,
as summarized in Fig.~(\ref{fig:relations}).
From the point of view of experimentally detecting the radiation from
a non-uniformly accelerated mirror,
the most practical class of trajectories are periodic motions, and in particular
sinusoidal motion. For example, a single
mirror in free space that performs sinusoidal oscillations
produces a constant average number of photons $N_{\rm out}$ per
oscillation period \cite{lambrecht:1996, maia-neto:1996}: $N_{\rm
out} \propto (\epsilon\omega)^2$,
where $\epsilon$ is the amplitude of oscillations and $\omega$ is the
frequency of the sinusoidal mirror trajectory.
An exact solution to Eq.~(\ref{eq:dce_moore_equation}) for a cavity
with a near-sinusoidal mirror trajectory was found in \cite{law:1994},
where it was shown that the energy density in a cavity
with resonantly modulated length acquires a nontrivial structure in the form of
wave packets traveling back and forth in the cavity (see also
\cite{cole:1995,dalvit:1998}).
The build-up of photons in a cavity with sinusoidally modulated length
was studied in e.g.
\cite{dodonov:1990,dodonov:1993,dodonov:1996,ji:1997,schutzhold:1998}. It
was
shown that under resonant conditions, i.e., when the mirror oscillates with a
frequency that matches twice the frequency of a cavity mode, the photon
production can be resonantly enhanced.
The cavity photon number was found to grow as $(\epsilon\omega_{n}t)^2$ in the short-time limit, while the photon production rate becomes proportional to $\epsilon\omega_{n}$ in the long-time limit. Here $\epsilon$ is the amplitude of oscillations and $\omega_n$ is the frequency of the resonantly driven mode.
The rate of photon build-up in the cavity depends not only on the motion of
the cavity mirrors, but also on the mode structure of the cavity. The modes
of the ideal cavity considered in e.g.~\cite{dodonov:1990, dodonov:1993} are
equidistant in frequency, and as a result significant intermode interactions occur. If, in contrast, the cavity has only a single mode, or if intermode interactions are negligible due to non-equidistant frequency spacing,
the cavity can be described as a single harmonic oscillator with time-dependent
frequency \cite{dodonov:1995, meplan:1996}. The Bogoliubov transformations
Eqs.~(\ref{eq:bogoliubov_dce_alpha}-\ref{eq:bogoliubov_dce_beta})
for resonant driving then coincide exactly with those for a degenerate parametric
amplifier (see Sec.~\ref{sec:paramp}), and
the photon number in the cavity is therefore $N_{\rm out}=\sinh^{2}(\eta t)$, where the squeezing rate in this case is $\eta = \epsilon\omega_{0}$.
Thus we establish the connection between the dynamical Casimir effect and a degenerate parametric amplifier, as indicated in Fig.~(\ref{fig:relations}).
This correspondence between the dynamical Casimir effect in a single-mode cavity and parametric amplification has also been discussed in \cite{schutzhold:2005,dezael:2010,johansson:2010}.
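To make the magnitude of this parametric build-up concrete, the following minimal Python sketch evaluates $N_{\rm out}=\sinh^{2}(\epsilon\omega_{0}t)$ for assumed (not experimental) circuit parameters, recovering the $(\epsilon\omega_{0}t)^{2}$ growth at short times and exponential growth at long times; dissipation, which would cap this growth, is neglected:
\begin{verbatim}
# Sketch: ideal photon build-up N(t) = sinh^2(eta*t) in a resonantly
# driven single-mode cavity.  Parameter values are assumed for
# illustration only, and dissipation is neglected.
import math

omega0 = 2.0 * math.pi * 5.0e9   # 5 GHz cavity mode (assumed)
epsilon = 1.0e-4                 # modulation amplitude (assumed)
eta = epsilon * omega0           # squeezing rate

for t in (1e-8, 1e-7, 1e-6):     # seconds
    N = math.sinh(eta * t)**2
    print(f"t = {t:.0e} s -> N = {N:.3e}")
# Short times: N ~ (eta*t)^2; long times: N ~ exp(2*eta*t)/4.
\end{verbatim}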
It is evident from the discussion above that for the dynamical Casimir effect to be non-negligible, the modulation must combine a relatively large amplitude $\epsilon$ with a high frequency $\omega$. In fact,
the maximum speed of the boundary in a sinusoidal motion $v_{\rm max} =
\epsilon\omega$, must approach the speed of light for significant photon
production to occur \cite{lambrecht:1996}.
The DCE is therefore difficult to observe in experiments using massive mirrors \cite{braggio:2005}, since such objects cannot be accelerated to relativistic velocities in practice, and therefore produce photons only at very small rates.
The situation is improved in a cavity setup, but an important aspect that affects the photon build-up rate in a cavity is dissipation \cite{dodonov:1998}. Although the effect of dissipation is clearly to suppress the build-up of photons, a dissipative single-mode cavity with quality factor $Q$ is still expected to be above the threshold for parametric amplification if $\epsilon\omega Q > 1$ \cite{walls:2008}. A large number of photons should accumulate in such cavities, which are therefore considered promising candidates for experimental demonstration of the DCE \cite{kim:2006}.
Nevertheless, experimental verification of the DCE in the optical regime, with real massive mirrors, has not yet been demonstrated in either cavity or single-mirror setups. As previously discussed, this is mainly due to experimental difficulties in modulating the position of the mirrors sufficiently strongly, and the presence of decoherence, dissipation and thermal noise.
To overcome these difficulties, several systems have been proposed recently \cite{braggio:2005,segev:2007,naylor:2009,johansson:2009, johansson:2010} that use alternative means of enforcing and modulating the boundary conditions, using effective massless mirrors.
Experimental investigations of such proposals have been ongoing for the last few years \cite{braggio:2009, wilson:2010}, and have culminated in the experimental observation of the DCE \cite{lahteenmaki:2011,wilson:2011} using a superconducting waveguide.
We discuss the DCE with superconducting circuits in more detail in Sec.~\ref{sec:sc-circuits:dce}.
\section{Implementations in superconducting circuits}\label{sec:sc-circuits}
In this section we will highlight recent work, both experimental and theoretical, on implementing the amplification methods discussed in the previous section. The possibility to generate vacuum amplification effects in superconducting circuit devices is largely due to their use in quantum information and computation \cite{you:2005,you:2011,buluta:2011}. There, the transfer of information must be sufficiently free from dissipation and noise so as to maintain quantum coherence, while at the same time the information should be transferred via single quanta \cite{clarke:2008}. Similar requirements are also necessary for vacuum amplification experiments, which ideally should be free from spurious photon sources, and be sufficiently coherent such that the quantum entanglement between generated particle pairs is maintained long enough to be measured. In superconducting circuit systems, one way to achieve these combined goals is to make use of the circuit quantum electrodynamics (Circuit QED) architecture \cite{blais:2004}, where qubits are coupled via one or more effectively one-dimensional transmission line resonators \cite{chiorescu:2004,wallraff:2004,mariantoni:2011}. Transmission lines with quality factors exceeding $\sim 10^{6}$ have been demonstrated, corresponding to a photon that travels $10~\mathrm{km}$ before being dissipated \cite{schoelkopf:2008}. These advances have allowed for multiple qubit \cite{majer:2007,sillianpaa:2007,dicarlo:2010,wei:2006} and photon \cite{wang:2011} entanglement using transmission lines that span distances of several millimeters, and are therefore visible to the naked eye. In addition, the generation of single-photons on demand \cite{houck:2007}, and the possibility of strong nonlinearities at the single-photon level \cite{hoffman:2010}, open up additional possibilities for the control of photons inside these devices. Although typical experiments involve cavity resonators, recently there has been growing interest in the use of open transmission lines \cite{astafiev:2010b,astafiev:2010,zhou:2008}, which allow for broadband frequency signals such as those generated by the Unruh, Hawking, and dynamical Casimir effects. In the sections that follow, we will describe ways to use this open 1D circuit QED architecture to generate and detect photons from the quantum vacuum.
\subsection{Single-shot microwave photon detection}\label{sec:photon}
In order to confirm the existence of the vacuum amplification mechanisms discussed in Sec.~\ref{sec:vacuum}, one must verify that the measured photons are indeed generated from vacuum fluctuations and not some spurious ambient emission process. One possible technique is to exploit the correlated nature of the photon emission process through the use of coincidence detection measurements of the particle pairs. Implicit in this verification method is the use of single-shot photon detectors. With single-shot photon measurements, one in principle has access to all orders of the statistical correlations between emitted photons, or equivalently the density matrix, and therefore has complete knowledge of the quantum state \cite{leonhardt:2010}. In the optical frequency range, such detectors are readily available and allow for, among other things, all optical quantum computation \cite{kok:2007}, Bell inequality measurements \cite{weihs:1998}, quantum homodyne tomography \cite{smithey:1993}, quantum communication \cite{bouwmeester:1997}, and encryption protocols \cite{jennewein:2000}. In superconducting circuits, analogous single-photon detectors have been difficult to realize in practice due to the several orders of magnitude smaller energies of microwave photons as compared with visible photons.
In the absence of photon number detectors in the microwave regime, superconducting circuit devices have made use of linear quantum amplifiers \cite{clerk:2010}, such as the high electron mobility transistor (HEMT), in measuring the quantized electromagnetic fields inside resonant cavities and transmission lines. Placed between the circuit QED system and the secondary classical voltage or current amplification stage, these amplifiers can provide several orders of magnitude of gain for the input signal, but necessarily add at least half a quantum of zero-point noise fluctuations at the input due to the Heisenberg uncertainty principle \cite{caves:1982}. Typically, the added noise is actually much higher than this minimum value, on the order of $10-20$ photons at $5~\mathrm{GHz}$ \cite{menzel:2010}. In using a single-photon detector, this added noise is circumvented, since an intermediate amplification stage is not required.
Recently it has been shown that a pair of linear amplifiers is capable of resolving all of the moments for the quantum state of a microwave photon provided that one repeats the experiment many times to sufficiently average out the added noise \cite{dasilva:2010,menzel:2010}. This approach has been applied to the study of blackbody radiation from a load resistor and in the investigation of quantum noise of a beam-splitter \cite{mariantoni:2010}. Furthermore, the anti-bunching of microwave photons in a superconducting cavity has been observed by measuring the second-order coherence function \cite{bozyigit:2010}, and complete state reconstruction of propagating microwave photons was performed via homodyne tomography \cite{eichler:2010}. In order to obtain sufficient averaging, on the order of $10^{9}-10^{10}$ repeated measurements are required.
Unambiguous verification of the vacuum amplification mechanisms discussed in Sec.~\ref{sec:vacuum} requires on-chip single-shot photon detectors in order to measure the correlations between individual photon pairs. Achieving this goal in the microwave regime has been one of the long-standing challenges in superconducting quantum circuits. The first experimentally realized device capable of single-photon detection in the microwave regime was based on a double quantum dot \cite{aguado:2000} and was used in the investigation of shot-noise from a quantum point contact \cite{gustavsson:2007}. More recently, the use of phase-qubits \cite{clarke:2008} for single-photon detection has been proposed \cite{helmer:2009,romero:2009,peropadre:2011}, driven in part by the success of similar devices in measuring and controlling the quantum state of both superconducting microwave \cite{liu:2004,hofheinz:2008,hofheinz:2009,ansmann:2009,wang:2009} and mechanical \cite{oconnell:2010} resonators. This work has culminated in a microwave Hanbury Brown and Twiss type experiment \cite{hanbury:1956} using a pair of phase-qubit detectors, and the observation of photon bunching from a thermal source \cite{chen:2010}. Here, the absorption of a single photon by the phase qubit causes a transition to the excited state which readily tunnels out of the potential well and into the continuum, generating a voltage signal via the Josephson phase-voltage relation \cite{likharev:1986}. Detection efficiencies exceeding $80\%$ were achieved, although in principle a perfect detector is possible \cite{peropadre:2011}. An ideal single phase-qubit detector acts as a binary, or ``bucket", detector that responds to the input signal by always absorbing a single photon, regardless of the original number of photons present. Number resolving detection can be approximated using only binary detectors by detector cascading, or ``multiplexing" \cite{leonhardt:2010}, where a single incoming mode is equally distributed over a large number of output modes followed by qubit detectors. If the number of qubit detectors is large compared to the number of photons present in the signal, each detector receives only a single photon on average, allowing high fidelity measurements of the photon number to be performed \cite{kok:2007}.
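The statistics of such multiplexing are easy to estimate: if $n$ photons are distributed uniformly over $N$ binary detectors, the probability that no detector receives more than one photon, so that the count is faithful, is $N!/[(N-n)!\,N^{n}]$. A minimal Python sketch with illustrative numbers:
\begin{verbatim}
# Sketch: probability that n photons, split uniformly over N binary
# ("bucket") detectors, all land on distinct detectors.
def faithful_probability(n, N):
    p = 1.0
    for k in range(n):
        p *= 1.0 - k / N
    return p

for N in (10, 100, 1000):
    print(N, faithful_probability(3, N))  # three incident photons
# Already for N = 100 the three-photon count is faithful ~97% of
# the time.
\end{verbatim}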
\subsection{SQUID based microwave parametric amplifiers}\label{sec:sc-circuits:pa}
\begin{figure*}[t]\begin{center}
\includegraphics[width=18.0cm]{nation_fig09}
\caption{(Color online) Schemes for superconducting circuit implementations of vacuum amplification processes: (a) The SQUID based parametric amplifier from \cite{castellanos-beltran:2008}. (b) The parametric amplifier can be approximated as a lumped $LC$-circuit with a current dependent inductance. The small normal current resistance is also depicted. (c) Spectrum of a parametric amplifier. For the non-degenerate case, one has separate peaks for the signal and idler modes satisfying $\omega_{s}+\omega_{i}=\omega_{p}$. In contrast, the degenerate amplifier satisfies $\omega_{s}=\omega_{i}$. (d) Illustration of a dc-SQUID array transmission line with accompanying bias line (pink) and flux-pulse (black) used in generating an analogue event horizon and Hawking radiation. (e) Lumped circuit model valid for frequencies below the plasma frequency and negligible SQUID self inductance. (f) One-dimensional black body spectrum of emitted Hawking radiation. The characteristic (Hawking) temperature of the distribution is determined by the gradient of the SQUID array speed of light in a frame co-moving with the flux pulse. (g) Circuit diagram of a SQUID terminated coplanar waveguide used in generating the dynamical Casimir effect. Modulation of the SQUID's Josephson energy is performed by the time-varying external flux $\Phi_{\mathrm{ext}}(t)$. (h) Equivalent lumped element circuit model for the semi-infinite coplanar waveguide and dc-SQUID. (i) Spectrum of photons emitted by the DCE assuming the SQUID is driven by a sinusoidally varying flux.}
\label{fig:compare}
\end{center}
\end{figure*}
Parametric amplification in the microwave regime has been investigated for some time \cite{barone:1982}, with early works \cite{wahlsten:1977, yurke:1988, yurke:1989} demonstrating degenerate parametric amplification using superconducting circuits and the nonlinear properties (i.e. current-phase relations) of Josephson junctions. The squeezing of vacuum fluctuations has also been observed \cite{movshovich:1990}. More recently, there has been a renewed interest in these devices for amplification and frequency conversion brought on by progress in solid-state quantum metrology and information processing in the microwave regime.
Of the many examples of circuit based parametric amplifiers \cite{tholen:2007,vijay:2009}, the focus here will be on systems comprising coplanar waveguide resonators incorporating dc superconducting quantum interference devices (dc-SQUIDs). A dc-SQUID consists of two identical Josephson junctions embedded in a superconducting loop, each with critical current $I_{c}$ and capacitance $C_{J}$ (assumed identical for simplicity). For a negligible loop self-inductance $L\ll \Phi_{0}/2\pi I_{c}$, and large plasma frequency $\omega_{p}^{s}=\sqrt{2\pi I_{c}^{s}/(2C_{J}\Phi_{0})}$, where $\Phi_{0}=h/2e$ is the flux quantum, the SQUID behaves as a passive inductor whose inductance depends on both the external flux $\Phi_{\mathrm{ext}}$ and the current $I$:
\begin{equation}\label{eq:inductor}
L(I,\Phi_{\mathrm{ext}})=\frac{\Phi_{0}}{2\pi I_{c}^{s}}\frac{\arcsin\left(I/I_{c}^{s}\right)}{\left(I/I_{c}^{s}\right)}.
\end{equation}
Here, $I_{c}^{s}=2I_{c}\cos\left(\pi\Phi_{\mathrm{ext}}/\Phi_{0}\right)$ is the SQUID's flux-tunable critical current. When used in a lumped-element LC-oscillator such as Fig.~\ref{fig:compare}b, the flux and current dependence of this effective inductor allows two independent ways of varying the resonance frequency of the circuit. Just like the child on the swing in Fig.~\ref{fig:swing}, this modulation of the oscillation frequency gives rise to parametric amplification.
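As a rough numerical illustration of this tunability, the following Python sketch evaluates Eq.~(\ref{eq:inductor}) in the small-current limit, together with the corresponding lumped-element resonance $\omega_{0}=1/\sqrt{LC}$; the junction and capacitance values are assumptions chosen only to land in the usual microwave range:
\begin{verbatim}
# Sketch: flux- and current-dependent SQUID inductance, Eq. (inductor),
# and the resulting tunable LC resonance.  Parameter values assumed.
import math

Phi0 = 2.0678e-15   # flux quantum h/2e (Wb)
Ic = 1.0e-6         # junction critical current (A), assumed
C = 1.0e-12         # effective capacitance (F), assumed

def L_squid(I, Phi_ext):
    Ics = 2.0 * Ic * abs(math.cos(math.pi * Phi_ext / Phi0))
    x = I / Ics
    # arcsin(x)/x -> 1 as I -> 0 (linear regime)
    return (Phi0 / (2.0 * math.pi * Ics)) * (math.asin(x) / x if x else 1.0)

for f in (0.0, 0.25, 0.4):            # Phi_ext / Phi0
    L = L_squid(1.0e-9, f * Phi0)     # small probe current
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    print(f"Phi_ext/Phi0 = {f:.2f}: L = {L:.2e} H, f0 = {f0:.2e} Hz")
\end{verbatim}
Tuning the flux from zero toward $\Phi_{0}/2$ increases the inductance and lowers the resonance frequency, which is the dc tuning mechanism exploited in the experiments discussed below.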
Systems exploiting the nonlinear response of the SQUID inductance for large input currents have been considered by
\cite{abdo:2009} and \cite{castellanos-beltran:2007,castellanos-beltran:2008}, where the centerline conductor of the resonator contained either a single or an array of embedded SQUIDs. These devices also make use of the inductor's flux degree-of-freedom by using a dc-bias current to introduce a controllable oscillator resonant frequency tunable over several GHz. In \cite{abdo:2009,castellanos-beltran:2007} amplification and quadrature squeezing of an input signal were observed when the device was operated as a degenerate amplifier and driven by a large-amplitude pump mode. Additionally, amplification and squeezing of quantum fluctuations were observed by \cite{castellanos-beltran:2008} where the use of a coplanar cavity allowed for $10~\mathrm{dB}$\footnote{A decibel (dB) is a measure of the logarithmic ratio of two powers: $L_{\rm dB}=10\log_{10}\left(P_{1}/P_{2}\right)$. In the present case of squeezing, the powers $P_{1}$ and $P_{2}$ are given by the variances $\left(\Delta X_{1}\right)^{2}$ and $\left(\Delta X_{2}\right)^{2}$ of the quadrature operators $X_{1}$ and $X_{2}$, respectively, as defined in Sec.~\ref{sec:paramp}.} of squeezing. A diagram of this experimental setup is given in Fig.~(\ref{fig:compare}a) along with the corresponding single-mode lumped element circuit diagram in Fig.~(\ref{fig:compare}b).
The systems realized by \cite{yamamoto:2008,wilson:2010} differ from the
previous examples in their use of a SQUID operated in a linear regime with
respect to both the current and the applied magnetic flux.
In these systems the SQUID terminates a coplanar waveguide resonator,
and imposes a boundary condition that is tunable through the applied
magnetic flux.
In addition to a dc flux bias that is used to tune the resonance
frequency, a weak ac flux modulation is applied to produce
a sinusoidally time-dependent resonance frequency.
Under resonant conditions, this frequency modulation can result in parametric
amplification, and the resonator is then described, in a rotating frame,
by an effective nonlinear Hamiltonian equivalent to that of a DPA.
Modulating the flux applied through the SQUID at twice the resonance frequency
was observed to amplify a small input signal and lead to
quadrature squeezing \cite{yamamoto:2008}, and to induce
parametric oscillations in the absence of an input signal \cite{wilson:2010}.
In addition to the longstanding work on DPAs, non-degenerate amplification based on a Josephson parametric converter (JPC) has recently been considered \cite{bergeal:2010,bergeal:2010b,bergeal:2010c}. The setup described in \cite{bergeal:2010b}, consisting of two superconducting resonators coupled to a ring of four Josephson junctions, allows for the complete separation of the signal and idler modes, both spatially and temporally. The frequency response of such a system assuming $\omega_{p}=\omega_{s}+\omega_{i}$ is given in Fig.~(\ref{fig:compare}c). Phase-preserving amplification with a noise level three times that of the quantum limit was demonstrated in \cite{bergeal:2010}. Moreover, correlations between signal and idler modes of a two-mode squeezed state (\ref{eq:par-amp-wave-function}) generated from the quantum vacuum were seen in \cite{bergeal:2010c}. These correlations have also been observed for itinerant photons generated by a non-degenerate parametric amplifier formed from a broadband transmission line resonator terminated by a SQUID \cite{eichler:2011}. Unlike the JPC however, the use of a single resonator does not allow for spatial separation of the modes without the use of an additional beam-splitter.
\subsection{Unruh effect in driven nonlinear circuit devices}
Of the four effects considered, the Unruh effect (UE) is perhaps the most difficult to reproduce in an on-chip circuit device, since it requires the observer (two-level detector) to undergo constant acceleration; a circuit model capable of reproducing the UE has yet to be proposed. However, an interesting related mechanism occurs in nonlinear circuit devices driven into the bistable regime \cite{marthaler:2006,dykman:2007,serban:2007}. Here, the emission of energy into a thermal reservoir, viewed in a coordinate system rotating at the driving frequency (i.e. the rotating frame), leads to transitions to \textit{both} higher and lower quasienergy levels \cite{dykman:2007}. These transition rates obey a Boltzmann distribution with an effective temperature determined by the quasienergy. Surprisingly, this effective temperature is nonzero, even when the temperature of the thermal reservoir vanishes \cite{marthaler:2006}. This same effect was found for a two-level detector in the rotating frame \cite{serban:2007}, where a zero temperature thermal bath is seen to have both positive and negative frequency Fourier components, leading to transition rates between energy levels that are described in terms of a non-vanishing effective temperature. These predictions have been verified experimentally using a Josephson bifurcation amplifier~\cite{vijay:2009}. These results are similar to that of an accelerating observer in the UE, Eq.~(\ref{eq:detailed}), who views the Minkowski vacuum state as a thermal state at the Unruh temperature (\ref{eq:tu}). Although it is tempting to consider this an analogue to the UE, the excitation of a detector in the rotating frame does not correspond to an actual thermal environment composed of physical particles \cite{letaw:1980}.
In the UE, both the amplified vacuum state (\ref{eq:unruh-state}) and the expectation value for the number operator, derived from the Bogoliubov transformations in Eq.~(\ref{eq:unruh-bogo}), correspond to a thermal state at the Unruh temperature (\ref{eq:tu}). However, while an observer in the rotating frame will register excitations from the vacuum as a result of negative frequency vacuum modes transforming to positive-frequency components in the rotating frame\footnote{For a discussion of this effect in nonlinear circuit devices see \cite{serban:2007}.} \cite{letaw:1980}, the expectation value for the corresponding number operator vanishes \cite{crispino:2008}. There is no mixing of positive and negative frequency components \cite{birrell:1982}, and no natural definition of a particle for a rotating observer \cite{letaw:1980}. Of course, one may still define an effective temperature for a single-mode using Eq.~(\ref{eq:detailed}), as done in \cite{serban:2007}, however in contrast to the UE, this effective temperature is frequency dependent and does not correspond to a physical thermal environment. In Sec.~\ref{sec:unruh} we saw that the energy needed to generate particles in the UE comes from the work done by the accelerating force. Therefore, in a rotating frame where the work vanishes, there is no particle production. Furthermore, unlike both the UE and HR, the spacetime of an observer in circular motion does not contain a horizon \cite{letaw:1980}, the essential ingredient for generating a thermal environment of tangible particles from the quantum vacuum.
\subsection{Analogue Hawking radiation in a dc-SQUID array}\label{sec:analogue-hawking}
Observing HR in a condensed matter system was first suggested by \cite{unruh:1981} who discovered an analogy between sound waves in a fluid and a scalar field in curved spacetime. The possibility of generating HR in a condensed matter system exists because Einstein's equations are not essential to the formal derivation of HR\footnote{This absence of Einstein's equations is a consequence of using quantum field theory in curved space which ignores back-reaction effects on the spacetime metric. This is closely related to the classical fixed amplitude pump approximation used in the parametric amplifier of Sec.~\ref{sec:paramp}. Although unable to reproduce the Einstein equations, analogue systems can still obtain related results when energy loss is taken into consideration \cite{anglin:1995,nation:2010a}.} \cite{visser:2003}. Instead, HR relies on two general requirements: (i) An effective spacetime metric containing a horizon. (ii) A quantized electromagnetic field with the correct Bogoliubov transformations for the conversion of vacuum fluctuations into photons. Since Unruh's original proposal, analogues satisfying these conditions have been found in liquid Helium \cite{jacobson:1998}, Bose-Einstein condensates \cite{garay:2000}, electromagnetic transmission lines \cite{schutzhold:2005}, fiber-optic setups \cite{philbin:2008}, superconducting circuits \cite{nation:2009}, and ion rings \cite{horstmann:2010}.
Although a variety of systems can in principle generate HR, the requirements for the unequivocal verification of the effect are common to all setups. First, the temperature of the emitted radiation should be higher than that of the ambient background environment so as to be detectable. Second, one must measure the correlations across the horizon between emitted photon pairs. This latter requirement is essential, since it is the only way to verify that a photon is emitted through the Hawking effect rather than from some other ambient emission process. Recently, \cite{belgiorno:2010} claimed to observe analogue HR in a fiber-optical setup similar to that of \cite{philbin:2008}. Although tantalizing, this experiment did not measure correlations between photon pairs and therefore cannot confirm the source of the emitted photons. Other objections to this claim of analogue HR have also been raised \cite{schutzhold:2010}. Another recent experiment has also succeeded in generating stimulated Hawking emission using surface waves on water \cite{weinfurtner:2011}. Although the spontaneous generation of particles from the Hawking effect cannot be observed in this setup, using the connection between stimulated and spontaneous emission, this work has demonstrated the thermal nature of the emission process, independent of the underlying short-wavelength physics, and the irrelevance of the full Einstein equations in the description of HR.
While not a superconducting device, the first circuit model for analogue HR was considered by \cite{schutzhold:2005}, where the horizon necessary for the conversion of vacuum fluctuations into photons was produced by modulating the capacitance of a one-dimensional (1D) microwave waveguide by means of an externally applied laser beam. The considered waveguide was modeled as a lumped-element transmission line, where the capacitance was formed by parallel conducting plates separated by a dielectric insulating material that couples to the laser's electric field. Sweeping the laser light along the waveguide at a fixed velocity, the resulting change in capacitance in turn changes the speed of light inside the waveguide and generates a horizon. Using experimentally feasible parameters, the Hawking temperature in this system was shown to be $\sim 10-100~\mathrm{mK}$. These temperatures are quite promising, as they are in the range of the ambient environmental temperatures set by dilution refrigerators [see, e.g., \cite{hofheinz:2009}].
Even with these relatively large Hawking temperatures, the setup considered in \cite{schutzhold:2005} has yet to be realized in experiment. The main drawback lies in the laser-based illumination, which would generate a large number of excess environmental photons. Moreover, unless the waveguide is itself superconducting, heating due to dissipative processes will be a problem. Finally, the photons in the waveguide are in the microwave regime and we therefore require a single-shot microwave detection scheme to verify the photon pair correlations.
We have already seen how superconducting devices may be used for microwave photon detection. We will now turn to a superconducting circuit device for the generation of analogue HR that overcomes the effects of unwanted dissipation and is based on currently available manufacturing techniques \cite{nation:2009}. To generate analogue HR in a superconducting circuit we consider the coplanar transmission line in Fig.~(\ref{fig:compare}d), where the centerline conductor is formed from an array of dc-SQUIDs. Additionally, a current bias line capable of applying an external flux to the SQUIDs is assumed to run the length of the array. This setup is closely related to the DPAs in \cite{castellanos-beltran:2007,castellanos-beltran:2008}, where we have replaced the resonator with an open transmission line in order to excite a continuum of modes. The SQUIDs are approximated as lumped inductors (\ref{eq:inductor}), forming an LC-oscillator together with the geometric capacitance between the centerline conductor and transmission line ground planes \cite{blais:2004}, see Fig.~(\ref{fig:compare}e). Therefore, this setup is essentially an array of coupled oscillators, each with a nonlinear flux-dependent frequency. As a discrete system, our waveguide has a natural short-distance, high-frequency cutoff due to the SQUID separation $\Delta x$. The SQUID inertial terms, ignored in the lumped inductor approximation (\ref{eq:inductor}), give an additional high-frequency scale set by the plasma frequency $\omega_{p}$. The lower of these two frequencies determines the onset of a nonlinear photon dispersion relation, and plays the role of the high-energy scale physics in our model \cite{unruh:2005}. Unlike a black hole, our circuit model is well characterized at all energy scales.
In order to generate the horizon, an external flux $\Phi_{\mathrm{ext}}$ is applied to the SQUID array in the form of a step-like flux pulse with fixed velocity $u$. When the flux pulse $\Phi_{\mathrm{ext}}(x-ut)$ moves along the array, the inductance of the SQUIDs increases, resulting in a decreased speed of light in the vicinity of the pulse,
\begin{equation}
c_{s}(x-ut)=\frac{\Delta x}{\sqrt{L\left[\Phi_{\mathrm{ext}}(x-ut)\right]C_{0}}}.
\end{equation}
Here, $L\left[\Phi_{\mathrm{ext}}(x-ut)\right]$ and $C_{0}$ are the dc-SQUID inductance and capacitance to ground, respectively. In analogy with Eq.~(\ref{eq:metric}), the horizon is generated where the pulse velocity $u$ is equal to the SQUID array speed of light $c_{s}$. However, recall that this definition of the horizon is valid only with respect to a moving observer. We therefore perform a coordinate transformation into a reference frame moving with the bias pulse. In this comoving frame, the wave equation for the electromagnetic field inside the SQUID array can be cast in terms of an effective spacetime metric with the form
\begin{equation}
ds^{2}_{\mathrm{eff}}=-\left[c_{s}\left(x\right)^{2}-u^{2}\right]d\tau^{2}+2u\,dx\,d\tau+dx^{2},
\end{equation}
which is similar in form to the black hole metric (\ref{eq:metric}), apart from the interchange of spatial dependence between the SQUID array speed of light and flux pulse velocity. In Fig.~(\ref{fig:regions}) we plot the effect of a hyperbolic tangent flux-bias pulse of amplitude $\Phi_{\mathrm{ext}}=0.2\Phi_{0}$ on the SQUID array speed of light $c_{s}$ in the comoving frame\footnote{This choice of bias-pulse is motivated in \cite{nation:2009}.}. The pulse velocity must satisfy $u<c_{s}(\Phi_{\mathrm{ext}}=0)$ to form a horizon.
\begin{figure}[t]\begin{center}
\includegraphics[width=7cm]{nation_fig10}
\caption{(Color online) Effect of a steplike flux bias pulse on the SQUID array speed of light $c_{s}(x)$ as seen in a frame moving with the pulse. Here, velocities have been normalized with respect to the unbiased speed of light $c_{s}\left[\Phi_{\mathrm{ext}}(x)=0\right]$. The pulse velocity was chosen to be $u=0.95c_{s}(0)$. In the co-moving frame, the horizon occurs where $c_{s}(x)=u$. Like a black hole, the horizon is a unidirectional surface, and the red arrow at the bottom indicates the only permissible direction for a photon to traverse the horizon.}
\label{fig:regions}
\end{center}
\end{figure}
Like both HR and the UE, the analogue HR temperature is determined by the characteristic frequency of the horizon. In condensed matter analogues, this frequency is given as the rate of change in the speed of light evaluated at the horizon
\begin{equation}\label{eq:hawking}
T_{H}=\frac{\hbar}{2\pi k_{b}}\left|\frac{\partial c_{s}(x)}{\partial x}\right|_{c_{s}^{2}=u^{2}},
\end{equation}
resulting in a one-dimensional blackbody spectrum, Fig.~(\ref{fig:compare}f). In addition, the output power in this device is identical to that emitted from a black hole, Eq.~(\ref{eq:hpower}). To estimate the Hawking temperature, we will assume parameter values similar to those of the DPA in \cite{castellanos-beltran:2007}. In addition, the validity of the SQUID inductor approximation demands that the rate of change of the speed of light be less than the plasma frequency. Assuming a maximum frequency an order of magnitude smaller than the plasma frequency results in a Hawking temperature $\sim 120~\mathrm{mK}$. This temperature can be a factor of 10 larger than the ambient temperature set by a dilution refrigerator and should be visible above the background thermal spectrum.
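To make the temperature scale concrete, the short Python sketch below evaluates Eq.~(\ref{eq:hawking}) for a step-like (hyperbolic tangent) reduction of the SQUID array speed of light; all numerical values are illustrative assumptions rather than the parameters of \cite{castellanos-beltran:2007}.
\begin{verbatim}
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# illustrative SQUID-array values (assumed, not fitted to experiment)
dx = 6e-6                      # SQUID spacing [m]
c0 = 1.0e8                     # unbiased speed of light c_s(0) [m/s]
u = 0.95 * c0                  # flux-pulse velocity, u < c_s(0)
amp, width = 0.18, 12 * dx     # fractional slow-down and pulse width

x = np.linspace(-1e-3, 1e-3, 200001)
cs = c0 * (1.0 - 0.5 * amp * (1.0 + np.tanh(x / width)))

i_h = np.argmin(np.abs(cs - u))            # horizon: c_s(x) = u
grad = np.abs(np.gradient(cs, x))[i_h]     # |dc_s/dx| at the horizon
T_H = hbar * grad / (2.0 * np.pi * kB)
print(f"T_H ~ {1e3 * T_H:.0f} mK")
\end{verbatim}
With these assumed numbers the gradient of $c_{s}$ at the horizon yields a $T_{H}$ of order $100~\mathrm{mK}$, consistent with the estimate quoted above.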
Unlike a real black hole, both photons in the two-mode squeezed state may be detected in this device, allowing for verification of the HR. In the laboratory frame, a detector at the far end of the SQUID array will see two incoming photons: one in front of the horizon, and one behind, with the former having a slightly higher propagation velocity (see Fig.~\ref{fig:regions}). Single-shot detection of these microwave photons can be accomplished using one or more tunable phase-qubit detectors \cite{chen:2010} coupled to the SQUID array. By repeatedly sending flux pulses down the bias line, the predicted one-dimensional black body spectrum may be probed by tuning the qubit resonant frequency. Additionally, information on the cross-horizon correlations between the emitted photon pairs can be established through coincidence detection. In this way, one can unambiguously establish HR as the source of the emitted photons.
\subsection{Dynamical Casimir effect in superconducting
circuits}\label{sec:sc-circuits:dce}
Superconducting coplanar waveguides (CPWs) are excellent devices for
confining quasi one-dimensional electromagnetic fields,
which at low (cryogenic) temperatures and GHz frequencies can behave
quantum mechanically.
The boundary conditions for the field in a CPW can be made externally
tunable by terminating the waveguide through a SQUID.
The SQUID effectively imposes a boundary condition for the CPW, rather than being a dynamical system in itself, if its plasma frequency is much larger than all other relevant frequencies. The imposed boundary condition is then a function of the externally applied magnetic flux through the SQUID loop.
This method of implementing tunable boundary conditions has been used, e.g., in experiments on frequency-tunable resonators \cite{sandberg:2008,palacios-laloy:2008}, and for parametric amplification \cite{yamamoto:2008} and oscillations \cite{wilson:2010} (see Sec.~\ref{sec:sc-circuits:pa}).
It has also been proposed that SQUID-terminated CPW devices can be used for experimental investigations of the DCE \cite{johansson:2009,johansson:2010}. For frequencies far below the plasma frequency, it can be shown that the boundary condition that the SQUID imposes on the CPW reduces to that of a perfectly reflecting mirror at an effective distance from the SQUID,
\begin{eqnarray}
\mathcal{L}_{\rm eff} =
\frac{L(I, \Phi_{\rm ext})}{L_0}.
\end{eqnarray}
Here, $L(I, \Phi_{\rm ext})$ is the Josephson inductance of the SQUID [Eq.~(\ref{eq:inductor})], and $L_0$ is the characteristic inductance per
unit length of the CPW. The effective length $\mathcal{L}_{\rm eff}$ is a function of the externally applied magnetic flux $\Phi_{\rm ext}$. By applying an oscillating magnetic flux through the SQUID loop, it is therefore possible to mimic the boundary condition of an oscillating
mirror, resulting in DCE radiation.
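As an illustration of this tunable boundary condition (a sketch only; the circuit parameters below are assumed, not taken from a specific experiment), the following Python snippet evaluates the effective mirror position $\mathcal{L}_{\rm eff}$ for a sinusoidally modulated flux, using the zero-current limit of the SQUID inductance in Eq.~(\ref{eq:inductor}).
\begin{verbatim}
import numpy as np

PHI0 = 2.067833848e-15       # flux quantum [Wb]
Ic = 1.25e-6                 # junction critical current [A] (assumed)
L0 = 4.0e-7                  # CPW inductance per unit length [H/m] (assumed)

def L_eff(Phi_ext):
    # effective mirror distance: Josephson inductance over L0,
    # in the zero-current limit of the SQUID inductance
    Ic_s = 2.0 * Ic * np.cos(np.pi * Phi_ext / PHI0)
    return (PHI0 / (2.0 * np.pi * Ic_s)) / L0

t = np.linspace(0.0, 2e-10, 2001)                  # two periods of a 10 GHz drive
Phi = (0.35 + 0.05 * np.cos(2*np.pi*1e10*t)) * PHI0  # dc bias + weak ac drive
x_mirror = L_eff(Phi)                              # oscillating boundary [m]
print(x_mirror.min(), x_mirror.max())
\end{verbatim}
Even a five-percent flux modulation moves the effective mirror by a sizeable fraction of a millimetre, which at a $10~\mathrm{GHz}$ drive corresponds to boundary velocities that are an appreciable fraction of the speed of light in the waveguide; this is what makes the photon production rate experimentally relevant.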
The phase drop across a SQUID is exceptionally sensitive to the applied
magnetic flux, and the effective length of the SQUID can therefore
be tuned in a wide range by small changes in the applied magnetic flux.
In addition, sinusoidal magnetic fields that are generated by ac
currents through bias lines adjacent to the SQUID
can reach high frequencies (tens of GHz) in state-of-the-art
experiments with superconducting circuits \cite{yamamoto:2008, wilson:2010}.
This combination of large-amplitude and high-frequency modulation makes SQUID-terminated CPWs well suited for experimental demonstration of the DCE, as this allows relatively large photon production rates. Estimates suggest that with realistic circuit parameters radiation energies on the order of mK in temperature units can be achieved \cite{johansson:2009}, which is within the limit of sensitivity in recent experiments using linear amplifiers.
After decades of eluding experimental observation, the dynamical Casimir effect was recently demonstrated experimentally \cite{dalvit:2011,wilson:2011} using the kind of SQUID-terminated CPW device described above. In the experimental demonstration it was shown that the modulation of the boundary condition imposed by the SQUID does indeed result in photon production, and furthermore, that the generated radiation exhibits strong two-mode squeezing, which is a distinct signature of the quantum mechanical photon-pair creation process of the dynamical Casimir effect.
Shortly thereafter, the DCE was also demonstrated in a SQUID-array resonator with time-dependent dielectric properties \cite{lahteenmaki:2011}, similar to those used in \cite{castellanos-beltran:2007, castellanos-beltran:2008}, where the array was operated in the linear regime with a high-frequency magnetic flux field applied (uniformly) across the SQUID array. The modulation of the inductances of the SQUIDs due to the applied magnetic flux then results in time-dependent dielectric properties of the SQUID-array resonator that correspond to a modulation of the effective length of the resonator $\mathcal{L}_{\rm eff}(t) = \mathcal{L}\sqrt{L(0)/L(t)}$, where $L(t) = L(I, \Phi_{\rm ext}(t))$ is now the characteristic inductance per unit length of the SQUID array, and $\mathcal{L}$ is the length of the resonator.
Another type of superconducting device for studying the DCE experimentally was
introduced by \cite{segev:2007}.
That device consists of a superconducting stripline resonator that is
illuminated with an optical laser.
The optical radiation modulates the ratio of superconducting to normal electrons
in the
microwave stripline resonator, which in turn modulates its dielectric
properties.
Since a medium with time-dependent dielectric properties has a similar effect
on the electromagnetic field as a time-dependent boundary condition
\cite{yablonovitch:1988, johnston:1995},
it is expected that the laser illumination of the stripline resonator results
in
photon creation due to the DCE. Promising initial experimental results for this system have been reported \cite{segev:2007}, where a resonance frequency shift due to the laser illumination was demonstrated.
An alternative approach to the amplification of vacuum fluctuations in a superconducting circuit was proposed in \cite{deliberato:2009}. There, it was shown that a non-adiabatic modulation of the vacuum Rabi frequency (i.e., the coupling strength) in a superconducting qubit-resonator circuit can produce a significant amount of radiation. Furthermore, the resulting radiation has spectral properties that should distinguish it from spurious photon sources, such as ambient thermal radiation.
Using CPWs or stripline resonators in experiments on the DCE has the advantage that the electromagnetic field is quasi one-dimensional. Although the general setting of the DCE is three-dimensional free space, most theoretical work on the DCE is, for simplicity, restricted to systems with only one spatial dimension. The CPW and stripline geometries are examples of physical realizations of such systems. The fact that the photons are confined to the CPW should also simplify the process of detecting the generated radiation.
Once DCE radiation has been successfully generated, there are a number of characteristics in the photon statistics that can be used to distinguish it from spurious photon noise sources.
In particular, the DCE results in correlated photon pairs with two-mode quadrature squeezing and spectral properties that can be measured with standard homodyne detection techniques \cite{castellanos-beltran:2008}. In addition, recent development of single-photon detectors in the microwave regime \cite{chen:2010} has opened up the possibility to measure directly the correlations between individual DCE photon pairs in superconducting circuits.
\section{Summary and outlook}\label{sec:future}
We have reviewed several important quantum vacuum amplification effects; the Unruh effect, Hawking radiation, and the dynamical Casimir effect, and emphasized the interconnections between these effects. In particular, we stressed the role of parametric amplification of vacuum fluctuations in these processes. In addition, we have examined current and future experimental setups aimed at observing these effects, or their analogs, in superconducting electrical circuits.
As we have shown, superconducting circuits are very promising devices for experimental investigations of quantum vacuum amplification effects, and such circuits have already been used in the experimental demonstration of the DCE \cite{wilson:2011,lahteenmaki:2011}. It appears likely that more such experiments will be carried out in the near future. In fact, several promising experimental steps in this direction have been demonstrated already in a variety of systems \cite{segev:2007, castellanos-beltran:2008, yamamoto:2008, wilson:2010}. A particularly important experimental breakthrough has been the recent development of single-photon detectors in the microwave regime \cite{chen:2010}. Should microwave single-photon detectors become readily available, the detection of both the DCE and HR in microwave circuits would be greatly simplified. This would allow probing of the quantum statistics for the resulting radiation so as to identify the characteristic signatures of these effects.
In addition to the quantum vacuum amplification effects discussed in this review, superconducting circuits have also been proposed for realizing systems with ultra-strong atom-cavity coupling \cite{nataf:2010,peropadre:2010,ashhab:2010}. The cavity field in these systems can have exotic properties such as particles in the ground state, squeezing of field quadratures, and ground state entanglement between the cavity field and the atom. Moreover, the ability to create degenerate vacuum states in a qubit array \cite{nataf:2010} allows for the possibility of vacuum state qubits and quantum computation. Atom-cavity systems in the ultra-strong coupling regime have only recently started to become feasible experimentally \cite{niemczyk:2010,forn-diaz:2010}. This is yet another example of new regimes in quantum mechanics that are starting to become accessible due to progress in the engineering of quantum superconducting circuits.
Finally, as a quantum coherent device, the superconducting arrays of SQUIDs presented here may allow for investigating effects analogous to those of quantum gravitational fluctuations on the Hawking process and the propagation of photons. Making use of the superconducting-to-insulator phase transition in the SQUID array \cite{chow:1998,haviland:2000}, the application of a sufficiently large external flux results in quantum fluctuations of the dynamical variables governing the SQUID inductance in Eq.~(\ref{eq:inductor}). As this inductance determines the speed of light inside the array, this result may be interpreted as analogue fluctuations of the effective spacetime metric \cite{nation:2009}. For analogue Hawking radiation, these fluctuations manifest themselves as quantum uncertainty in the position of the horizon in Eq.~(\ref{eq:metric}), a scenario that is of interest for actual black holes as well \cite{ford:1997,parentani:2001}. As discussed in Sec.~\ref{sec:analogue-hawking}, our condensed matter analogues cannot faithfully reproduce the full Einstein equations, and the effective metric fluctuations do not provide an analogue of the yet to be determined dynamics expected from the quantum theory of gravity [e.g. the Wheeler-DeWitt equation \cite{dewitt:1967}]. Nevertheless, given that a theory of quantized gravity remains out of reach for the foreseeable future, the ability to reproduce analogous fluctuating metric effects in a superconducting circuit model should prove useful in addressing quantum gravitational corrections to the Hawking effect.
Given the ability to fabricate a wide range of devices, the full scope of quantum vacuum effects in superconducting circuits, and the possible applications thereof, is still unknown and in need of further investigation. Indeed, the superconducting circuit models discussed here are an example of quantum simulators \cite{lloyd:1996,buluta:2009}: controllable quantum systems engineered to reproduce the physical properties of another, seemingly different, quantum system. The wide range of amplification effects that can be simulated in these systems, hints at the possibility of a circuit-based universal quantum vacuum amplification simulator; a device capable of exploiting the generality of Bogoliubov transformations to reproduce the emission properties of any vacuum amplifier. What is certain however, is that superconducting circuits as a test bed for quantum-vacuum related physics offer unique advantages that will help to shed light on one of quantum mechanics' most remarkable features, namely the amplification of vacuum quantum fluctuations.
\section*{Acknowledgements}
We thank the referees for their very helpful comments on this Colloquium. PDN was partially supported by the Japanese Society for the Promotion of Science (JSPS) Postdoctoral Fellowship No.~P11202. MPB acknowledges support by the National Science Foundation (NSF) under grant No.~DMR-0804477. FN was partially supported by DARPA, AFOSR, Laboratory for Physical Science, National Security Agency, Army Research Office, NSF grant No. 0726909, JSPS-RFBR contract No. 09-02-92114, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and Funding Program for Innovative R\&D on S\&T (FIRST).
|
1,108,101,562,920 | arxiv | \section{Introduction}
\label{sec:intro}
\vspace{-7pt}
Singing voice synthesis~(SVS) is a task that generates singing voices from the given music score and lyrics like human singers. Deep learning based SVS approaches~\cite{ren2020deepsinger,blaauw2020sequence,hono2018recent,gu2021bytesing,ref_xiaoicesing,ref_diffsinger} have attracted tremendous attention in recent years for their extraordinary performances and wide applications. Similar to text-to-speech (TTS), most of these SVS systems consist of two stages, the acoustic model first generates low-dimensional spectral representations of vocal signals, typically mel-spectrogram, from the music score and lyrics, and the vocoder subsequently converts these intermediate representations into the singing waveform. Although these systems achieve decent performances, the two-stage models are separately trained, and the human-crafted intermediate representations, such as the mel-spectrogram, may limit the expressiveness of the synthesized singing voice.
We have recently proposed VISinger~\cite{ref_VISinger} -- an end-to-end (E2E) learned SVS approach based on VITS~\cite{ref_VITS} to mitigate the problems of two-stage systems. Specifically, VITS adopts a conditional variational autoencoder (CVAE) structure to realize end-to-end speech synthesis: the posterior encoder extracts the latent representation $z$ from the linear spectrum, the decoder restores $z$ to the waveform, and the prior encoder provides a prior constraint on $z$ according to the text. To better model singing, VISinger provides $z$ with more accurate frame-level prior constraints under the guidance of F0 and provides extra prior information for the duration predictor. VISinger achieves superior performance over typical two-stage systems such as FastSpeech~\cite{ref_fastspeech} + HiFi-GAN~\cite{ref_HiFiGAN}.
Although VISinger advances the end-to-end SVS, it still has some drawbacks preventing its further application in real-world applications. First, the quality artefacts of the two-stage systems still exist in VISinger. Specifically, the audible glitches, such as spectral discontinuities and occasional mispronunciations, reduce the naturalness of the generated singing voice. Second, the sampling rate of the generated singing voice of VISinger is 24KHz, which does not meet the needs of high-fidelity (HiFi) applications which desire full-band audio (44.1KHz or higher).
To address these inadequacies, we reanalyzed the architecture and components of the VISinger. The first and most significant issue is that the latent representation $z$ extracted by the posterior encoder may contain phase information due to the gradients passed back by the decoder when modelling the waveform. This could lead to mispronunciation because it is extremely challenging to predict the phase from the linguistic input reasonably. Secondly, the HiFiGAN~\cite{ref_HiFiGAN} architecture adopted in VISinger is not well designed for the SVS task. Its absence of modelling capabilities of rich variations on singing voice may lead to the glitches problem. Finally, a higher sampling rate SVS system relies on an improved decoder to provide better modelling capabilities.
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.48]{model.pdf}
\vspace{-10pt}
\caption{
Architecture of VISinger~2. Yellow components are part of the neural network architecture, and grey components are features or differentiable operations. The short line on the arrow indicates gradient truncation.
}
\label{model}
\vspace{-12pt}
\end{figure*}
In this paper, we propose VISinger~2, a digital signal processing~(DSP) synthesizer enhanced end-to-end SVS system for high-fidelity 44.1kHz singing generation. Inspired by recent advances in differentiable digital signal processing (DDSP)~\cite{ref_DDSP}, we incorporate a DSP synthesizer into VISinger to solve the above issues. Specifically, the DSP synthesizer consists of a harmonic synthesizer and a noise synthesizer, which generate periodic and aperiodic signals from the latent representation $z$, respectively. The periodic and aperiodic signals are concatenated as conditional inputs to HiFi-GAN, while their sum produces a waveform used to calculate the loss function. This design has several advantages. First, both synthesizers need only amplitude information as input to generate their signals, thus fully suppressing the phase component in $z$ and avoiding the text-to-phase challenge. Second, the decomposition into periodic and aperiodic signals provides a strong condition for HiFi-GAN, substantially enhancing its modelling capability and allowing it to model a higher sampling rate. Finally, owing to these improved modelling capabilities, the number of parameters in VISinger~2 can be reduced substantially, by about 30\% compared to VISinger, further facilitating its use in real-world applications. Experiments show that VISinger~2 can generate a high-fidelity singing voice at a 44.1kHz sampling rate, with better naturalness and fewer glitches than VISinger and the traditional two-stage system.
We notice that there has been a recent trend to leverage the advances of conventional DSP to neural audio generation~\cite{ref_RefineGAN,ref_SingGAN,ref_Harm_SVC}. For example, in~\cite{ref_Harm_SVC}, harmonic signals are used to improve the stability of GAN and avoid pitch jitters and U/V errors in singing voice conversion. RefineGAN~\cite{ref_RefineGAN} calculates the speech template according to the pitch and then generates waveform according to the speech template. SingGAN~\cite{ref_SingGAN} adopts the source excitation with the adaptive feature learning filters to alleviate the glitch problem. These works usually focus on the periodic signal because the glitches problem comes from the defect of the periodic signal. Although motivated by these works aiming for better generation quality, our approach has substantial differences in terms of methodology. First, the above revisions are all made on vocoders, and the whole system still faces the two-stage mismatch problem. We mitigate this problem by proposing a fully end-to-end system VISinger~2. Second, to ensure that the extracted latent representation $z$ in VISinger~2 contains full amplitude information~(periodic and aperiodic parts), we leverage both periodic and aperiodic signals generated by the DSP synthesizer in our system design.
\vspace{-8pt}
\section{Method}
\label{sec:Method}
\vspace{-8pt}
The overall model architecture of VISinger~2 is shown in Fig.~\ref{model}. The proposed model adopts the conditional variational autoencoder (CVAE) structure, which includes three parts: a posterior encoder, a prior encoder and a decoder, the same as VITS~\cite{ref_VITS} and VISinger~\cite{ref_VISinger}. The posterior encoder extracts the latent representation~$z$ from spectral features, the decoder generates waveform~$y$ from $z$, and the prior conditional encoder constrains the extraction process of $z$. We will introduce the posterior encoder, decoder and prior encoder, respectively.
\vspace{-10pt}
\subsection{Posterior Encoder}
\label{sec2:Posterior Encoder}
\vspace{-5pt}
The posterior encoder is composed of multi-layer 1-D convolutions and aims to extract the latent representation~$z$ from the mel-spectrum. The last layer produces the mean and variance of the posterior distribution, and the reparameterization trick is used to sample the posterior $z$.
\vspace{-8pt}
\subsection{Decoder}
\label{sec2:Decoder}
\vspace{-4pt}
The decoder generates the waveform from the latent representation~$z$, as shown in Fig.~\ref{model}(c). To avoid the text-to-phase and glitch problems, we incorporate a DSP synthesizer into the decoder. Specifically, we use a harmonic synthesizer and a noise synthesizer to generate the periodic and aperiodic parts of the waveform from the posterior $z$. The generated waveforms are fed to HiFi-GAN as an auxiliary condition to enhance its modelling capability, relieving the glitch problem. Meanwhile, since the inputs of both synthesizers contain only amplitude information, the posterior $z$ is encouraged to exclude phase information, which alleviates the text-to-phase problem.
\vspace{-8pt}
\subsubsection{Harmonic Synthesizer}
\label{sec2:Periodic signal part}
\vspace{-5pt}
We use the harmonic synthesizer to generate the harmonic components of audio, in the same way as the harmonic oscillator in DDSP~\cite{ref_DDSP}. The harmonic synthesizer uses sine signals to model each harmonic partial of the single-source audio. The $k$-th sinusoidal component signal~$y_{k}$ generated by the harmonic synthesizer can be expressed as:
\vspace{-12pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
y_{k}(n)=H_{k}(n)sin(\phi_{k}(n))
\end{split}
\end{equation}
where $n$ represents the time step of the sample sequence, and $H_{k}$ is the time-varying amplitude of the $k$-th sinusoidal component. The phase~$\phi_{k}(n)$ is obtained by integrating on the sample sequence:
\vspace{-3pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\phi_{k}(n)=2\pi \sum_{m=0}^{n}\frac{f_{k}(m)}{Sr}+\phi_{0,k}
\end{equation}
where $f_{k}$ represents the frequency of the $k$-th sinusoidal component, $Sr$ represents the sampling rate, and $\phi_{0,k}$ represents the initial phase. We obtain the phase of the sine signal~$y_{k}$ through an accumulation operation on the frequency~$f_{k}$, which is given by $f_{k}(n)=kf_{0}(n)$, where $f_{0}$ is the fundamental frequency. The time-varying $f_{k}$ and $H_{k}$ are interpolated from frame-level features. We extract the fundamental frequency using the Harvest~\cite{ref_harvest} algorithm.
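A minimal NumPy sketch of the harmonic synthesizer is given below; it assumes $f_{0}$ and the amplitudes $H_{k}$ have already been interpolated to sample level, and all names are ours rather than those of a released implementation.
\begin{verbatim}
import numpy as np

def harmonic_synth(f0, H, sr=44100, phi0=None):
    """f0: (T,) per-sample F0 in Hz; H: (K, T) amplitudes of K sinusoids."""
    K, T = H.shape
    phi0 = np.zeros(K) if phi0 is None else phi0
    k = np.arange(1, K + 1)[:, None]            # f_k(n) = k * f0(n)
    # phase by cumulative sum over the sample sequence, as above
    phase = 2.0 * np.pi * np.cumsum(k * f0[None, :] / sr, axis=1)
    phase += phi0[:, None]
    H = np.where(k * f0[None, :] < sr / 2.0, H, 0.0)  # mute aliased partials
    return (H * np.sin(phase)).sum(axis=0)       # sum over the K components
\end{verbatim}
The anti-aliasing mask is our addition; the upsampling of frame-level $f_{0}$ and $H_{k}$ (e.g.~linear interpolation over the hop size) is left implicit.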
\vspace{-8pt}
\subsubsection{Noise Synthesizer}
\label{sec2:Aperiodic signal part}
\vspace{-5pt}
In the noise synthesizer, we use the inverse short-time Fourier transform~(iSTFT) to generate the stochastic components of audio, similar to the filtered noise in DDSP. The aperiodic components are noise-like, but their energy is unevenly distributed across frequency bands. The generated stochastic component signal~$y_{noise}$ can be expressed as:
\vspace{-13pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
y_{noise}=iSTFT(N, P)
\end{split}
\end{equation}
where the phase spectrogram~$P$ of the iSTFT is uniform noise on the interval [$-\pi$, $\pi$], and the amplitude spectrogram~$N$ is predicted by the network.
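The corresponding noise branch admits an equally short Python sketch (again with assumed STFT settings):
\begin{verbatim}
import numpy as np
from scipy.signal import istft

def noise_synth(N, sr=44100, n_fft=1024, hop=256):
    """iSTFT of a predicted magnitude spectrogram N of shape
    (n_fft//2 + 1, frames) with uniformly random phase on [-pi, pi]."""
    P = np.random.uniform(-np.pi, np.pi, size=N.shape)
    _, y = istft(N * np.exp(1j * P), fs=sr,
                 nperseg=n_fft, noverlap=n_fft - hop)
    return y
\end{verbatim}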
\vspace{-5pt}
\subsubsection{Loss Function of Decoder}
\label{sec2:Combination}
\vspace{-5pt}
The DSP waveforms generated by the DSP synthesizer contain both harmonic and stochastic components. The complete DSP waveform $y_{DSP}$ and the loss~$L_{DSP}$ of the DSP synthesizer are defined as
\vspace{-10pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
y_{DSP}=\sum_{k=1}^{K}y_{k}+y_{noise}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
L_{DSP}=\lambda_{DSP}\left\| \text{Mel}(y_{DSP}) - \text{Mel}(y)\right\|_{1}
\end{split}
\end{equation}
where $K$ represents the number of the sinusoidal component and $\text{Mel}$ represents the process of extracting mel-spectrum from waveform.
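Putting the two branches together, a sketch of $y_{DSP}$ and the mel reconstruction loss could read as follows, reusing \texttt{harmonic\_synth} and \texttt{noise\_synth} from the snippets above; the mel settings and the weight $\lambda_{DSP}$ are assumptions.
\begin{verbatim}
import numpy as np
import librosa  # used only for the mel-spectrogram in the loss

def mel(y, sr=44100, n_fft=2048, hop=512, n_mels=80):
    return librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)

def dsp_loss(f0, H, N, y_ref, lam_dsp=1.0):
    y_dsp = harmonic_synth(f0, H) + noise_synth(N)   # harmonic + noise parts
    T = min(len(y_dsp), len(y_ref))                  # guard length mismatch
    diff = np.abs(mel(y_dsp[:T]) - mel(y_ref[:T]))
    return lam_dsp * diff.sum()                      # L1 mel distance
\end{verbatim}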
We use a downsampling network to gradually downsample the DSP waveforms to frame-level features. HiFi-GAN accepts the posterior $z$ and the intermediate features generated by the downsampling network as input and generates the final waveform~$\hat{y}$. Following HiFi-GAN, the GAN loss for the generator G is defined as:
\vspace{-10pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
L_{G}=L_{adv}(G)+\lambda_{fm} L_{fm}+\lambda_{Mel}L_{Mel}
\end{split}
\end{equation}
where $L_{adv}$ is the adversarial loss, $L_{fm}$ is the feature matching loss, and $L_{Mel}$ is the Mel-Spectrogram loss.
\vspace{-5pt}
\subsubsection{Discriminator}
\label{sec2:Discriminator}
\vspace{-5pt}
We combine two sets of discriminators to improve the discriminative ability. One set is the multi-resolution spectrogram discriminator~(MRSD) from UnivNet~\cite{ref_Univnet}, and the other comprises the Multi-Period Discriminator~(MPD) and the Multi-Scale Discriminator~(MSD) from HiFi-GAN~\cite{ref_HiFiGAN}.
\vspace{-5pt}
\subsection{Prior Encoder}
\label{sec2:Prior Encoder}
\vspace{-5pt}
The prior encoder takes the music score as input to provide a prior constraint for the CVAE. As mentioned in Section~\ref{sec2:Decoder}, the posterior $z$ is used to predict $H$ and $N$ in the decoder, where $H$ represents the amplitudes of the sinusoidal components and $N$ represents the amplitude spectrum of the aperiodic components. Both $H$ and $N$ contain only amplitude information and no phase information, so the posterior $z$ accordingly does not contain phase information. In this way, the prior encoder does not have to model a text-to-phase mapping when predicting $z$ from the music score.
Similar to VISinger~\cite{ref_VISinger}, the prior encoder adopts the same structure as FastSpeech~\cite{ref_fastspeech}. The flow~\cite{ref_flow} module plays an important role in VITS~\cite{ref_VITS}, but it accounts for a large fraction of the model parameters. For a more practical structure, we calculate the KL divergence $L_{kl}$ directly between the prior and posterior distributions of $z$, without using a flow.
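Since both distributions are diagonal Gaussians, this KL term has a closed form; a NumPy sketch (with our own variable names) is:
\begin{verbatim}
import numpy as np

def kl_loss(mu_q, logs_q, mu_p, logs_p):
    """KL(q || p) between the posterior q = N(mu_q, exp(logs_q)^2) and
    the text-conditioned prior p, summed over latent channels and frames."""
    kl = (logs_p - logs_q - 0.5
          + (np.exp(2.0 * logs_q) + (mu_q - mu_p) ** 2)
          / (2.0 * np.exp(2.0 * logs_p)))
    return kl.sum()
\end{verbatim}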
We use a separate FastSpeech~\cite{ref_fastspeech} model to predict the fundamental frequency and mel-spectrum to guide the frame-level prior networks. The loss for the auxiliary feature is defined as:
\vspace{-7pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
L_{af}=\left\| LF0 - \widehat{LF0} \right\|_{2} + \left\| Mel - \widehat{Mel} \right\|_{1}
\end{split}
\vspace{-5pt}
\end{equation}
where $\widehat{LF0}$ is the predicted log-F0, and $\widehat{Mel}$ is the predicted mel-spectrogram.
We take the predicted mel-spectrum as the auxiliary feature for the frame-level prior network during both training and inference, so the auxiliary mel-spectrum does not introduce a mismatch between training and inference. The frame-level prior network predicts the prior $z$ with the guidance of the auxiliary mel-spectrum to further alleviate the text-to-phase problem. We show later in the experiments that VISinger~2 does not rely too heavily on this auxiliary mel-spectrum. The harmonic synthesizer accepts the predicted fundamental frequency as input to guide the generation of periodic signals during inference, while the ground-truth fundamental frequency is adopted during training.
The duration predictor accepts the music score as input and adopts the method in XiaoiceSing~\cite{ref_xiaoicesing} to simultaneously predict phoneme duration and note duration. The duration loss is expressed as:
\vspace{-13pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
L_{dur}=\left\| d_{phone} - \widehat{d_{phone}} \right\|_{2} + \left\| d_{note} - \widehat{d_{note}} \right\|_{2}
\end{split}
\end{equation}
where $d_{phone}$ is the ground truth phoneme duration, $\widehat{d_{phone}}$ is the predicted phoneme duration, while $d_{note}$ is the ground truth note duration, and $\widehat{d_{note}}$ is the predicted note duration.
\vspace{-5pt}
\subsection{Final Loss}
\label{sec2:Final Loss}
\vspace{-5pt}
Our final objectives for the proposed model can be expressed as:
\vspace{-13pt}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
L(G)=L_{G}+ L_{kl}+ L_{DSP} + L_{dur} + L_{af}
\end{split}
\end{equation}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
L(D)=L_{adv}(D)
\end{equation}
where $L_{G}$ is the GAN loss for generator G, $L_{kl}$ is KL divergence between prior~$z$ and posterior~$z$, $L_{af}$ is the loss of the auxiliary feature, and $L_{adv}(D)$ is the GAN loss of discriminator D.
\vspace{-5pt}
\section{Experiments}
\label{sec:exp}
\vspace{-8pt}
\subsection{Datasets}
\label{sec3:Datasets}
\vspace{-3pt}
We evaluate VISinger~2 on the Opencpop~\cite{ref_opencpop} dataset, which consists of 100 popular Mandarin songs~(5.2 hours) performed by a female professional singer. All the audio is recorded at 44.1kHz with 16-bit quantization. Opencpop has a pre-defined training set and test set: 3,550 segments from 95 songs for training and 206 segments from 5 songs for testing. We follow Opencpop's division of the training and test sets.
\vspace{-7pt}
\subsection{Model Configuration}
\label{sec3:Model Configuration}
\vspace{-5pt}
We train the following systems for comparison.
\begin{itemize}
\item
\textbf{CpopSing}: the two-stage conformer-based SVS model introduced with Opencpop~\cite{ref_opencpop}. In CpopSing, the Transformer blocks in FastSpeech~2~\cite{ref_FastSpeech2} are replaced with Conformer blocks, and an adversarial training method similar to the sub-frequency adversarial loss in HiFiSinger~\cite{ref_HiFiSinger} is used.
\item\vspace{-3pt}
\textbf{VISinger}: an end-to-end SVS system based on VITS. The model configuration is consistent with that in VISinger~\cite{ref_VISinger}.
\item\vspace{-3pt}
\textbf{RefineSinger}: a two-stage SVS system constructed from FastSpeech~\cite{ref_fastspeech} and RefineGAN~\cite{ref_RefineGAN}. The encoder and decoder of FastSpeech each consist of 4 FFT blocks. The duration predictor consists of a 3-layer 1D-convolutional network and predicts the phoneme-level and note-level durations. RefineGAN, which is designed for high-sampling-rate scenarios, adopts a pitch-guided architecture to improve the ability of the generator. A Mel2F0 module introduced in \cite{ref_learn2sing2} is used to predict the F0 for RefineGAN. The hidden dimension of RefineGAN is 512, and the data augmentation method proposed in \cite{ref_RefineGAN} is not employed, for simplicity.
\item\vspace{-3pt}
\textbf{VISinger~2}: the proposed end-to-end SVS system, adopting all the contributions introduced in this paper. The encoder and decoder of VISinger~2 each consist of 4 FFT blocks. The hidden dimension and filter dimension of the FFT blocks are 192 and 768, respectively. The hidden dimension of HiFi-GAN in the decoder is 256. The posterior encoder consists of an 8-layer 1D-convolutional network, and the dimension of the latent representation $z$ is 192. The duration predictor consists of a 3-layer 1D-convolutional network with ReLU activation.
\end{itemize}
All models are trained up to 500k steps with a batch size of 16. The Adam optimizer with $\beta_{1}$ = 0.8, $\beta_{2}$ = 0.99 and $\epsilon$ = $10^{-9}$ is used to train all the models.
\begin{table}[]
\centering
\caption{Experimental results in terms of subjective mean opinion score~(MOS) and two objective metrics.}
\begin{tabular}{lccccc}
\toprule
Model & \makebox[0.05\textwidth][c]{\begin{tabular}[c]{@{}c@{}}Sample\\ Rate\end{tabular}} & \makebox[0.04\textwidth][c]{\begin{tabular}[c]{@{}c@{}}Model\\ Size~(M)\end{tabular}} & \makebox[0.04\textwidth][c]{\begin{tabular}[c]{@{}c@{}}F0\\ RMSE\end{tabular}} & \makebox[0.04\textwidth][c]{\begin{tabular}[c]{@{}c@{}}Dur\\ RMSE\end{tabular}} & MOS \\ \hline
CpopSing &22k & 137.5 & 28.5 & 6.6 & 2.97$\pm$0.12 \\
VISinger &22k & 36.5 & 33.7 & 3.6 & 3.46$\pm$0.13 \\
VISinger~2 &22k & \textbf{25.7} & \textbf{26.0} & 2.8 & 3.69 $\pm$0.15 \\\hline
RefineSinger &44k & 36.0 & 39.1 & 2.8 & 2.85$\pm$0.10 \\
VISinger~2 &44k & \textbf{25.7} & 26.7 & \textbf{2.7} & \textbf{3.81$\pm$0.14} \\ \hline
Recording &22k & - & - & - & 4.22$\pm$0.12 \\
Recording &44k & - & - & - & 4.32$\pm$0.11 \\ \bottomrule
\end{tabular}
\label{MOS}
\vspace{-10pt}
\end{table}
\vspace{-6pt}
\subsection{Experimental Results}
\label{sec3:Experimental}
\vspace{-6pt}
We performed a mean opinion score (MOS) test for the above systems and randomly selected 30 segments from the test set for subjective listening, and ten listeners attended the test. The objective metrics, including F0 Root Mean Square Error~(F0-RMSE) and duration Root Mean Square Error~(dur-RMSE), are calculated to evaluate the performance of different systems. The results are summarized in Table~\ref{MOS}.
To evaluate the performance of the proposed VISinger~2 in a general SVS scenario, we first compared VISinger~2 with CpopSing and VISinger at the 22.05kHz sampling rate. As shown in Table~\ref{MOS}, VISinger~2 and VISinger perform significantly better than CpopSing in the MOS test, demonstrating the superiority of the end-to-end model in the general SVS scenario. Meanwhile, the MOS score of VISinger~2 is higher than that of VISinger by about 0.23, indicating the effectiveness of our design in a general SVS scenario. To further validate the performance of VISinger~2 in high-sampling-rate SVS scenarios, we compared VISinger~2 with RefineSinger at the 44.1kHz sampling rate. The evaluation results listed in Table~\ref{MOS} show that VISinger~2 surpasses RefineSinger in MOS score by 33.6\% and has a MOS improvement of about 0.15 compared to the 22.05kHz version of VISinger~2. This improvement shows that VISinger~2 is capable of modelling SVS at high sampling rates, enabling high-fidelity singing voice generation. Note that, for fairness, CpopSing and VISinger did not participate in the 44.1kHz comparison, as they are not designed for high-sampling-rate SVS. Similar to the MOS results, VISinger~2 outperformed the other systems in terms of objective metrics, validating our assumptions again.
Another observation worth highlighting is that in addition to outperforming the other systems in MOS and objective metrics, VISinger~2 has the smallest number of parameters in all comparison systems at 25.7M. This result demonstrates the effectiveness of our proposed approach and its sufficiency to be applied in real-world scenarios.
\begin{figure}[h]
\centering
\includegraphics[width=0.98\linewidth]{exp_spec.pdf}
\vspace{-12pt}
\caption{
Visualization of synthesized waveform.
}
\vspace{-15pt}
\label{exp_spec}
\end{figure}
We further visualize the waveforms generated by VISinger~2 in Fig.~\ref{exp_spec} to illustrate the role of the DSP synthesizer. As shown in Fig.~\ref{exp_spec}, the periodic and aperiodic components are generated by the harmonic synthesizer and the noise synthesizer, respectively. The generated periodic and aperiodic components are added to obtain the DSP waveform~$y_{DSP}$. We can also see that the waveform finally generated by HiFi-GAN is guided by the DSP waveform, which serves as its conditional input.
\vspace{-13pt}
\begin{table}[H]
\centering
\caption{Ablation study results in terms of subjective mean opinion score~(MOS).}
\begin{tabular}{lcc}
\hline
\multicolumn{1}{c}{Model} &\begin{tabular}[c]{@{}c@{}}Sample\\ Rate\end{tabular} & MOS \\ \hline
Recording &44k & 4.47$\pm$0.09 \\
VISinger~2&44k & 3.96$\pm$0.11 \\
\ \ -auxiliary mel-spectrum&44k & 3.85$\pm$0.12 \\
\ \ -DSP synthesizer &44k & 3.02$\pm$0.13 \\ \hline
\end{tabular}\vspace{-7pt}
\label{MOS_ab}
\end{table}
\vspace{-12pt}
\subsection{Ablation study}
\label{sec3:Ablation study}
\vspace{-3pt}
To validate the effectiveness of each contribution, we conduct an ablation study in which we remove the DSP synthesizer and the auxiliary mel-spectrum feature, respectively. The results are summarized in Table~\ref{MOS_ab}. They show that the model's performance degrades significantly when the DSP synthesizer is removed, indicating that the DSP synthesizer plays an essential role in solving the text-to-phase and glitch problems. Meanwhile, when the auxiliary mel-spectrum feature is removed, the model's performance degrades only slightly, indicating that the auxiliary mel-spectrum can further alleviate the text-to-phase problem, because a complete mel-spectrum guides the prediction of the prior $z$.
\vspace{-2pt}
\section{Conclusions}
\label{sec:Conclusions}
\vspace{-6pt}
In this work, we have updated our previous end-to-end singing voice synthesis system VISinger to its new version, VISinger~2. Specifically, we solved the text-to-phase problem and the glitch artefact problem, and upgraded the sampling rate from 24KHz to 44.1KHz for high-fidelity singing generation. These new contributions were achieved by incorporating a differentiable digital signal processing (DDSP) synthesizer into the VISinger decoder. In this way, the posterior encoder extracts a latent representation without phase information, which keeps the prior encoder from having to model a text-to-phase mapping. To avoid glitch artefacts, we modified the decoder to accept the waveforms generated by the DSP synthesizer as a condition for producing the singing voice. Our experimental results show that, with fewer model parameters, VISinger~2 substantially outperforms CpopSing, VISinger and RefineSinger.
\vfill\pagebreak
\bibliographystyle{IEEE}
|
1,108,101,562,921 | arxiv | \section{Introduction}
The Anti-de Sitter (AdS)/Conformal Field Theory (CFT) correspondence \cite{Malda,Gubser,Witten} asserts that certain quantities, like correlation functions of fields of a CFT living on the conformal boundary of $AdS$, can be obtained by calculating purely geometrical quantities in the higher dimensional bulk spacetime, which is a solution of a classical theory of gravity. One such quantity of interest is the entanglement entropy of a subregion $A$ in the boundary CFT. Following a proposal by Ryu and Takayanagi (RT)~\cite{Ryu1,Ryu2}, later generalized by Hubeny, Rangamani and Takayanagi (HRT)~\cite{Hubeny}, this quantity can be obtained holographically by calculating the area of a spacelike co-dimension two `extremal surface' $(\gamma_A)$ in the bulk spacetime,
\begin{gather}\label{HEE}
S_A ={Area(\gamma_A)\over 4 G_N},
\end{gather}
(where $G_N$ is Newton's constant) and is dubbed the Holographic Entanglement Entropy (HEE). By an extremal surface one refers to the following notion: for asymptotically $AdS$ spacetimes in $d+1$ dimensions the surface $\gamma_A$ is $(d-1)$ dimensional and is obtained by extremizing the area functional,
\begin{gather}
Area=\int d^{d-1}\sigma\sqrt{h},
\end{gather}
where $\sigma$'s are the intrinsic coordinates and $h_{ab}$ is the induced metric.
If this were also a minimum of the area functional, which is the case in static geometries, then, according to the holographic entanglement entropy literature, it would be called a minimal surface. For static geometries the timelike Killing vector ($\partial_t$, say) is hypersurface orthogonal in the bulk geometry. It can then be shown that the extremal surface must lie on a $t=constant$ slice and is in fact minimal. Hence the proposal reduces to finding a minimal surface on a constant time slice; the proposal initially put forward by RT was precisely this. However, for non-static cases, where the timelike Killing vector is not hypersurface orthogonal, or for dynamical geometries, where there is no timelike Killing vector, $\gamma_A$ is no longer minimal; the RT proposal therefore fails and one has to resort to the more general HRT proposal. {\it(In terms of nomenclature, in the mathematics literature a minimal surface refers to just a critical point of the area functional and may not correspond to a minimum of the functional \cite{anciaux}. This is particularly the case in manifolds endowed with a semi-Riemannian metric. We will stick to the latter nomenclature and use extremal and minimal interchangeably. Hence when we say minimal surfaces we actually mean the extremal surfaces of HRT.)} The equation obtained by extremizing the functional turns out to be nothing but the condition that the trace of the extrinsic curvature of the surface vanishes. This condition, however, yields nonlinear equations of motion for the embedding functions. It therefore becomes difficult to solve these equations unless the background geometry is highly symmetric. Consequently, though these equations can be solved exactly for pure $AdS$, it becomes difficult to solve them even for backgrounds like the boosted black brane or Kerr-AdS. One therefore treats these backgrounds as perturbations over $AdS$ near the asymptotic boundary. This immediately yields linear equations, as the procedure involves a linearization of the minimal surface equation.
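For concreteness, writing the surface through embedding functions $X^{\mu}(\sigma^{a})$ with induced metric $h_{ab}=g_{\mu\nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu}$, the extremization of the area functional yields
\begin{gather}
\frac{1}{\sqrt{h}}\,\partial_{a}\!\left(\sqrt{h}\,h^{ab}\partial_{b}X^{\mu}\right)+h^{ab}\,\Gamma^{\mu}_{\nu\rho}\,\partial_{a}X^{\nu}\partial_{b}X^{\rho}=0,
\end{gather}
which is precisely the statement that the trace of the extrinsic curvature vanishes along each of the two normal directions of the co-dimension two surface. The term involving the Christoffel symbols $\Gamma^{\mu}_{\nu\rho}$ of the bulk metric makes the nonlinearity in the embedding functions explicit.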
The change in HEE between $AdS$ and excitations over it can then be calculated by considering variation of the area functional which incorporates the changes due to the change in the extremal surface $\gamma_A$ and the perturbation of the bulk metric.
At first order the contribution comes from the metric perturbation alone, while the change of the embedding of the extremal surface does not contribute \cite{Lashkari,Jyotirmoy,Nozaki}. However, at second order both the first order change in the embeddings and the second order metric perturbations contribute \cite{He, He:2013rsa,Guo:2013aca,Lashkari:2015hha, Kim:2015rvu}. In a previous work \cite{Ghosh:2016fop} the authors proposed a way to calculate the contributions to second order variations coming from the changes in the embedding, in $2+1$ dimensions. This was achieved by studying geodesic deviations between geodesics in the rotating BTZ black hole (seen as a perturbation over pure $AdS$) and in pure $AdS_3$. These deviations were obtained as solutions of a ``generalized geodesic deviation equation''. In this paper we shall generalize this to arbitrary dimensions. In order to do so one has to reproduce the above notion, but now for minimal surfaces. Simplified cases of this deformation problem can be found in \cite{Fursaev:2010ix}.
The study of minimal surfaces in Riemannian geometries has been extensively carried out in the mathematics literature \cite{minimal, anciaux}. In the entanglement entropy literature the Plateau problem for minimal surfaces has been studied in \cite{Fursaev:2007sg}. It is known that for surfaces embedded in a given Riemannian space the area functional of the embedded surface is stationary, that is its first variation vanishes, when the embedded surface is minimal. Likewise, when the second variation is equated to zero it gives rise to the Jacobi equation for minimal surfaces \cite{simons}. The interpretation of the solutions of the Jacobi equation is the following: they give the deviation between a minimal surface and a neighbouring minimal surface. In the physics literature the Jacobi equation has been studied in the context of relativistic membranes \cite{Capovilla:1994bs} and spiky strings on a flat background \cite{Bhattacharya:2016ixc}. However, this equation is relevant only when the metric of the ambient space is fixed.
In the context of the present work one needs to modify this notion. Note that in our case one needs to study deviations between two surfaces which are minimal in two different spacetimes. The spacetimes are however related by a perturbation and are not completely arbitrary. To begin with, one has to ensure that all of the results obtained are manifestly gauge invariant, and therefore one has to be careful and precise in defining perturbations in the spirit of a covariant perturbation theory. We therefore adopt the notion introduced in \cite{Stewart:1974uz} in the context of gravity. A priori, taking a cue from the results obtained for geodesics, one expects the Jacobi equation to be modified by the appearance of an inhomogeneous term. This indeed turns out to be the case, as will be shown later. We also obtain an expression for the change in the area functional, in arbitrary dimensions, up to second order.
Having obtained an equation that properly mimics the situation at hand, one needs to demonstrate that the equation can indeed be solved, for the prescription to be of any relevance. We therefore solve this equation in the $3+1$ dimensional case for two choices of the boundary subsystem: 1) a spherical subsystem and 2) a thin strip subsystem. We do this for boosted black brane like perturbations over $AdS_4$. Using the solutions of the inhomogeneous Jacobi equation we obtain the change in HEE between $AdS_4$ and boosted black brane like perturbations over it.
\section{Notations and conventions}\label{N&C}
Consider a $d+1$ dimensional spacetime $(\mathcal M,{g})$ and another $d+1$ dimensional spacetime $(\mathcal M',g')$ which is diffeomorphic to $\mathcal M$. That is, there is a differentiable map $\Phi:\mathcal M\rightarrow \mathcal M'$ which is however not an isometry. We will call $(\mathcal M',g')$ a perturbation over $(\mathcal M,g)$ if $\accentset{(1)}{P}=\Phi_*g'-g$ is a small perturbation over $g$. Consider a surface $\mathcal S$ isometrically embedded in $\mathcal M$, given by the function $f:\mathcal S\rightarrow \mathcal M$. It is implied that $f$ is continuous and differentiable onto its image. In a local coordinate chart $x^\mu$ on $\mathcal M$ and $\tau^a$ on $\mathcal S$ the embedding can be represented by the embedding functions $x^\mu\circ f\circ (\tau^a)^{-1}$, which can simply be written as $x^{\mu}(\tau^a)$. The induced metric on $\mathcal S$ is the pullback of the metric $g$ under the map $f$, given by $h=f_*~g$. Again, in the local coordinates this can be written as $h_{ab}=g(\partial_a,\partial_b)=\frac{\partial x^\mu}{\partial\tau^a}\frac{\partial x^\nu}{\partial\tau^b}g(\partial_\mu,\partial_\nu)$, where the quantity $\frac{\partial x^\mu}{\partial\tau^a}\partial_\mu$ is the push forward of the purely tangential vector field $\partial_a$ to $\mathcal M$. `$h_{ab}$' is the first fundamental form on $\mathcal S$. To define the second fundamental form one needs a connection or covariant derivative on $\mathcal M$. The covariant derivative is a map $\nabla: T\mathcal M \otimes T\mathcal M\rightarrow T\mathcal M$; for two vector fields $W,Z~\in ~T\mathcal M$ it is denoted by $\nabla_WZ$ and is an element of $T\mathcal M$. Now suppose $x\in\mathcal S$. One can decompose the tangent space at the point $x$ into the tangent space of $\mathcal S$ and the space of normal vectors as $T_x\mathcal M=T_x\mathcal S\oplus T_x^{\perp}\mathcal S$. One then defines the tangent bundle and normal bundle of $\mathcal S$ as $\bigcup_xT_x\mathcal S$ and $\bigcup_xT_x^\perp\mathcal S$ respectively. One can similarly define a covariant derivative on $\mathcal S$, denoted by $D: T\mathcal S \otimes T\mathcal S\rightarrow T\mathcal S$. Let $X,Y~\in T\mathcal S$. Then the Gauss decomposition allows us to write,
\begin{gather}
\nabla_XY=D_XY+K(X,Y),
\end{gather}
where $D_XY$ is purely tangential and $K(X,Y)$ is a vector in the normal bundle and is the extrinsic curvature or the second fundamental form. The metric compatibility of $\nabla$ in this notation is written as $\nabla_Wg(V,U)=g(\nabla_WU,V)+g(U,\nabla_WV)$. The metric compatibility of $\nabla$ with $g$ will imply metric compatibility of $D$ with $h$, by virtue of the above equation. One defines a connection $\nabla^{\perp}_XN^\perp$ in the normal bundle as $\nabla^\perp:T\mathcal S\otimes T^\perp \mathcal S\rightarrow T^\perp \mathcal S$, where $X\in T\mathcal S$ and $N^\perp\in T^\perp\mathcal S $. Then the shape operator $W_{N^\perp}(X)$ is defined as,
\begin{gather}
\nabla_XN^\perp=\nabla_X^\perp N^\perp-W_{N^\perp}(X).
\end{gather}
The shape operator and the extrinsic curvatures are related by the Weingarten equation,
\begin{gather}
g(W_{N^\perp}(X),Y)=g(N^\perp,K(X,Y)),
\end{gather}
where $X,Y\in T\mathcal S$ and $N^\perp\in T^\perp\mathcal S$. The Riemann tensor is a map $R:T\mathcal M \otimes T\mathcal M \otimes T\mathcal M\rightarrow T\mathcal M$ and is defined as,
\begin{gather}
R(W,U)V\equiv[\nabla_W,\nabla_U]V-\nabla_{[W,U]}V
\end{gather}
Similarly one can define an intrinsic Riemann tensor by,
\begin{gather}
\mathcal R(X,Y)Z\equiv[D_X,D_Y]Z-D_{[X,Y]}Z
\end{gather}
We write down the equations of Gauss and Codazzi, in this notation. Let $X,Y,Z,W\in T\mathcal S$ and $N^\perp\in T^\perp\mathcal S$. Then the Gauss equation is given as,
\begin{gather}
g(R(X,Y)Z,W)=g(\mathcal R(X,Y)Z,W)-g(K(X,Z),K(Y,W))+g(K(X,W),K(Y,Z)),
\end{gather}
and the Codazzi equation as,
\begin{gather}
g(R(X,Y)N^\perp,Z)=g((\nabla_YK)(X,Z),N^\perp)-g((\nabla_XK)(Y,Z),N^\perp)
\end{gather}
Now we go over to notation involving perturbations. In the presence of perturbations a variation will be assumed to have two contributions: one which is a flow along a vector $N\in T\mathcal M$, obtained by taking a covariant derivative $\nabla_N$ along $N$, and another variation $\delta_g$ which is purely due to metric perturbations. Since we will be doing all the calculations in a coordinate chart in the unperturbed spacetime, let us define certain quantities on $\mathcal M$ arising due to the perturbations, i.e. due to the difference between the two metrics $g$ and $\Phi_*~g'$. The metric perturbation will be given by,
\begin{gather}
(\delta_g~g)(\partial_\mu,\partial_\nu)\equiv\bigg[\Phi_*~g'-{g}\bigg](\partial_\mu,\partial_\nu)=\accentset{(1)}{P}(\partial_\mu,\partial_\nu),
\end{gather}
where $\accentset{(1)}{P}$ is a symmetric bilinear form on $\mathcal M$. Note that $\delta_g$ only acts on the metric and does not change the vector fields $\partial_\mu$. Now suppose there is a covariant derivative $\nabla'$ in $\mathcal M'$ compatible with $g'$, then for $X,Y\in T\mathcal M$,
\begin{gather}
C(X,Y)\equiv\delta_g\bigg(\nabla_XY\bigg)=\tilde\nabla_XY-\nabla_XY,
\end{gather}
where $\tilde\nabla=\Phi^*\nabla'$ is the pullback connection on $\mathcal M$. Note that $C(X,Y)$ is a vector field on $\mathcal M$. When written in coordinates it has exactly the same form as the $\accentset{(1)}{C}^\mu_{\nu\rho}$ used in \cite{Ghosh:2016fop}. Since we will not be dealing with perturbations of further higher order, we have dropped the superscript $\accentset{(1)}{~}$.
We are now in a position to derive the inhomogeneous Jacobi equation for minimal surfaces. To display the parallel with \cite{Ghosh:2016fop}, a rederivation of the inhomogeneous Jacobi equation for geodesics, in this notation, is given in Appendix \ref{geosection}.
\section{Derivation of the Inhomogeneous Jacobi equation for surfaces}\label{DOJE}
In the previous section we considered $(\mathcal M',g')$ to be a perturbation over $(\mathcal M, g)$. Let us consider a one parameter family of such perturbed spacetimes $(\mathcal M_\lambda, g_\lambda)$ and a one parameter family of diffeomorphisms, not necessarily isometric, $\Phi_\lambda:\mathcal M\rightarrow \mathcal M_\lambda$, such that $\mathcal M_0$ corresponds to the unperturbed spacetime and $\Phi_0$ is the identity map. Let $\mathcal S_\lambda$ be a family of co-dimension two minimal surfaces in $(\mathcal M_\lambda,g_\lambda)$, i.e. the trace of their extrinsic curvatures vanishes. The surfaces can be parametrized by the embedding functions $f^\mu_\lambda (\tau^a)$, which allows one to write the tracelessness condition as $h^{ab}_\lambda K_{(\lambda)}(\partial_a,\partial_b)=0$. One might think that the coordinates $\tau^a$ may be different for different $\mathcal S_\lambda$, but one can always adjust the functions $f^\mu_\lambda$ such that the surfaces are coordinatised by the same intrinsic coordinates. Let us construct a family of immersed submanifolds $\tilde {\mathcal S}_\lambda$ in $\mathcal M_0$, given by the embedding functions $F^\mu_\lambda$ such that $\Phi_\lambda\circ F^\mu_\lambda=f^\mu_\lambda$. Let the deviation vector between $F_0^\mu$ and the neighbouring surface be denoted by $N$. Note that $N$ can always be taken to be normal to $\tilde{\mathcal {S}}_0$, as any tangential deviation only results in a reparametrization of the intrinsic coordinates $\tau ^a$ and does not change the area of the surface. This statement is however not obvious in our case, where we have metric perturbations. In this regard we take a cue from the calculation done in the case of geodesics \cite{Ghosh:2016fop}. Since we have already removed the freedom of intrinsic coordinate reparametrization by adjusting the $f_\lambda$'s, it is quite legitimate to take normal variations only. Moreover, since we will ultimately be interested in the change in area, it is sufficient for us to take normal variations only. Further, $N$ can always be chosen such that it commutes with the vectors $\partial_a$ tangent to the submanifold, i.e. $[N,\partial_a]=0~\forall~a$.
The condition that the $\mathcal S_\lambda$'s are minimal in $(\mathcal M_\lambda,g_\lambda)$ then reduces to a condition on $N$ in $\mathcal M_0$. At each order of the variation, the conditions are essentially inhomogeneous linear differential equations that $N$ must satisfy. The equation one obtains at linear order is the one we will be interested in, since its solutions provide the linear deformation of the minimal surface that we are seeking. As is evident, the equation can be derived by equating the more general variation $\delta_N=\nabla_N+\delta_g$, discussed in section \ref{N&C}, of the trace of the extrinsic curvature to zero, i.e.
\begin{equation}\label{bs}
\delta_N H_\lambda=h^{ab}_\lambda(\delta_N (\nabla_{(\lambda)\partial_a}\partial_b)^\perp)+(\delta_Nh^{ab}_{\lambda})K_{\lambda}(\partial_a,\partial_b)=0.
\end{equation}
We will drop the $\lambda$ subscript from here on, as the above variations will be calculated around the unperturbed surface, i.e. at $\lambda=0$. While dropping the $\lambda$'s surely makes the expressions look cleaner, one has to make sure that the minimal surface equation is used only after the derivatives have been computed. Let us first compute the first term of the above expression, which involves the normal component of the covariant derivative.
\begin{eqnarray}\label{losq}
h^{ab}\delta_N (\nabla_{\partial_a}\partial_b)^\perp
=h^{ab}\Biggl(\nabla_N(\nabla_{\partial_a}\partial_b)+\delta_g(\nabla_{\partial_a}\partial_b)-\nabla_N(\nabla_{\partial_a}\partial_b)^T-\delta_g (\nabla_{\partial_a}\partial_b)^T\Biggr)\nonumber\\
=h^{ab}\Biggl(\nabla_{\partial_a}\nabla_{\partial_b}N+R(N,\partial_a)\partial_b+C(\partial_a,\partial_b)-\nabla_N(\nabla_{\partial_a}\partial_b)^T-\delta_g (\nabla_{\partial_a}\partial_b)^T\Biggr)
\end{eqnarray}
The action of the variation $\delta_N$ on any quantity $Q$ on $\mathcal M_0$ is taken to be of the form $\delta_N(Q)=\nabla_N(Q)+\delta_g(Q)$. This notation for the variation has been adopted for convenience of calculation; that it reproduces the correct result can be seen from the derivation of the inhomogeneous Jacobi equation for geodesics obtained by adopting this notation (appendix \ref{geosection}). The action of $\delta_g$ is precisely on the space of sections of a tensor bundle in $\mathcal M_\lambda$. If we represent a flow on $\mathcal M_0$ and $\delta_g$ by two parameters, then a priori these two parameters are completely independent of each other, but for the perturbation theory to work one needs them to be equal. How the parameter of the flow $\nabla_N$ can be related to the parameter of the variation $\delta_g$ is a mathematical issue whose resolution we leave for future work. Adopting the above, one obtains,
\begin{equation}\label{cs}
(\delta_Nh^{ab})K(\partial_a,\partial_b)=2h^{ab}K(\partial_a,W_{N}(\partial_b))-h^{ac}h^{bd}K(\partial_a,\partial_b)\accentset{(1)}{P}(\partial_c,\partial_d)
\end{equation}
Substituting \eqref{losq},\eqref{cs} in \eqref{bs} we get
\begin{gather}\label{nst}
\delta_N H=h^{ab}\Biggl(\nabla_{\partial_a}\nabla_{\partial_b}N+R(N,\partial_a)\partial_b+C(\partial_a,\partial_b)-\nabla_N(\nabla_{\partial_a}\partial_b)^T-\delta_g (\nabla_{\partial_a}\partial_b)^T\Biggr)\\\notag+2h^{ab}K(\partial_a,W_{N}(\partial_b))-h^{ac}h^{bd}K(\partial_a,\partial_b)\accentset{(1)}{P}(\partial_c,\partial_d).
\end{gather}
A similar exercise with the term $ h^{ab}\delta_N (\nabla_{\partial_a}\partial_b)^T$ yields the following expression,
\begin{gather}\label{ct}
h^{ab}\Biggl[(\nabla_{(\nabla_{\partial_a}\partial_b)^T}N)^{\perp}+(\nabla_{\partial_a}\nabla_{\partial_b}N+R(N,\partial_a)\partial_b+C(\partial_a,\partial_b))^T+h^{cd}\accentset{(1)}{P}(K(\partial_a,\partial_b),\partial_c)\partial_d\Biggr].
\end{gather}
Substituting \eqref{ct} in \eqref{nst}, we get a complete expression for $\delta_NH$,
\begin{gather}\label{gost}
\delta_NH=h^{ab}\Biggl((\nabla_{\partial_a}\nabla_{\partial_b}N+R(N,\partial_a)\partial_b+C(\partial_a,\partial_b))^{\perp}-(\nabla_{(\nabla_{\partial_a}\partial_b)^T}N)^{\perp}\Biggr)-h^{cd}\accentset{(1)}{P}(H,\partial_c)\partial_d\\\notag
+2h^{ab}K(\partial_a,W_{N}(\partial_b))-h^{ac}h^{bd}K(\partial_a,\partial_b)\accentset{(1)}{P}(\partial_c,\partial_d)
\end{gather}
Noting that $(\nabla_{\partial_a}\nabla_{\partial_b}N)^\perp=-K(\partial_a,W_{N}(\partial_b))+\nabla^{\perp}_{\partial_a}\nabla^{\perp}_{\partial_b}N$, the above equation, along with the minimality condition $H=0$, can be recast in the following form, which is closer in form to the expressions known in the literature of minimal surfaces.
\begin{gather}\label{bang}
\delta_N H
=\Delta^{\perp}N+Ric(N)+A(N)+C^\perp-\tilde{H},
\end{gather}
where we have defined $\Delta^{\perp}N$ to be the Laplacian on the normal bundle, given by $h^{ab}\Bigl(\nabla^{\perp}_{\partial_a}\nabla^{\perp}_{\partial_b}N-\nabla^{\perp}_{(\nabla_{\partial_a}\partial_b)^T}N\Bigr)$, while $h^{ab}\bigl(R(N,\partial_a)\partial_b\bigr)^{\perp}$ has been denoted by $Ric(N)$. $A(N)=h^{ab}K(\partial_a,W_{N}(\partial_b))$ is the Simons operator, whereas $C^\perp$ is defined as $C^\perp=h^{ab}C(\partial_a,\partial_b)^\perp$ and $\tilde{H}=\accentset{(1)}{P}^{ab}K(\partial_{a},\partial_{b})$.
Thus identifying the Jacobi/stability operator $(\mathcal L)$ for minimal surfaces as
\begin{gather}
\mathcal{L}N=\Delta^{\perp}N+Ric(N)+A(N),
\end{gather}
we can rewrite \eqref{bang} as
\begin{gather}\label{devi}
\mathcal{L}N=-C^\perp+\tilde{H}.
\end{gather}
This is the inhomogeneous Jacobi equation. The solutions of this equation will provide us with the deformation of a minimal surface under a perturbation of the ambient spacetime. The inhomogeneous terms in the above equation involve the perturbation of the metric and are the only terms in the equation that do so. If there were no perturbation the equation would reduce to the one describing the deviation of a minimal surface to another minimal surface in the same spacetime $(\mathcal M_0,g_0)$. We will find solutions of this equation for specific cases and substitute the result in an area variation formula, which we derive in the next section.
\section{Variation of the Area functional}
According to the Hubeny, Rangamani, Takayanagi (HRT) proposal the area of a codimension two spacelike extremal surface $(\gamma_A)$ in $AdS_{d+1}$, whose boundary coincides with the boundary of the subsystem $A$, gives the entanglement entropy of this subsystem. Our goal therefore is to obtain the change in area of a minimal surface up to second order, with the extra constraint that the boundary of the surface remains unaltered, i.e. the deviations vanish at the boundary. At second order we will encounter terms which involve the deviation of the embedding functions itself. It is here that we have to use the solutions of the inhomogeneous Jacobi equation. The first variation of the area of the minimal surface is given by,
\begin{gather}\label{c}
\delta_NA=
\int d^{n}\tau~{\frac{\sqrt{h}}{2}}h^{ab}\delta_N h_{ab}= -\int d^{n}\tau~\sqrt{h}g(N,H)+{1\over 2}\int d^{n}\tau~\sqrt{h} h^{ab}\accentset{(1)}{P}(\partial_{a},\partial_{b})+\text{Surface terms}.
\end{gather}
If the perturbations are set to zero then we get back the known expression for first variation of area. In the presence of perturbations the on-shell expression can be obtained by setting ($H=0$).
\begin{gather}
\delta_NA={1\over 2}\int d^{n}\tau~\sqrt{h} h^{ab}\accentset{(1)}{P}(\partial_{a},\partial_{b})
\end{gather}
The second variation of area is given by
\begin{eqnarray}\label{svar}
\delta^{(2)}_NA=-\int d^{n}\tau~\delta_N(\sqrt{h}g(N,H))+{1\over 2}\int d^{n}\tau~\delta_N(\sqrt{h} h^{ab}\accentset{(1)}{P}(\partial_{a},\partial_{b}))+\text{Surface terms}
\end{eqnarray}
Note that since $[N,\partial_a]=0$ for all $a$, the variation of the surface term is again a surface term. From the results of the previous section \ref{DOJE}, the first term in the above expression can be written in terms of the stability operator. Simplifying the second term requires a bit of algebra. Note that $\delta_N(\sqrt{h}h^{ab}\accentset{(1)}{P}(\partial_a,\partial_b))$ has the following expression,
\begin{eqnarray}\label{ben}
\sqrt{h}h^{ab}\accentset{(1)}{P}(\partial_a,\partial_b)\left(-g(N,H)+{1\over 2}h^{cd}\accentset{(1)}{P}(\partial_{c},\partial_{d})\right)+2\sqrt{h}h^{ac}h^{bd}g(N,K(\partial_c,\partial_d))\accentset{(1)}{P}(\partial_a,\partial_b)\nonumber\\
-\sqrt{h}h^{ac}h^{bd}\accentset{(1)}{P}(\partial_c,\partial_d)\accentset{(1)}{P}(\partial_a,\partial_b)+\sqrt{h}h^{ab}\left[2\accentset{(1)}{P}(\nabla_{\partial_a}N,\partial_b)+2g(C(\partial_a,N),\partial_b)+\accentset{(2)}{P}(\partial_a,\partial_b)\right]
\end{eqnarray}
substituting the expression in (\ref{ben}) in (\ref{svar}) and using the conditions $H=0, \delta_NH=0$, one arrives at the following final expression for the second variation of the area functional \footnote{where we have used the following two expressions,
\begin{gather}
(\nabla_{\partial_a}P)(N,\partial_b)=g(C(\partial_a,\partial_b),N)+g(C(\partial_a,N),\partial_b)\notag
\end{gather}
\begin{gather}
\nabla_{\partial_a}[\sqrt{h}h^{ab}P(N,\partial_a)]=\sqrt{h}h^{ab}\nabla_{\partial_a}[P(N,\partial_b)]-\sqrt{h}h^{ab}P(N,(\nabla_{\partial_a}\partial_b)^T)\notag
\end{gather}.},
\begin{eqnarray}\label{svar2}
\delta^{(2)}_NA&=&{1\over 4}\int d^{n}\tau~\sqrt{h}h^{ab}\accentset{(1)}{P}(\partial_a,\partial_b)h^{cd}\accentset{(1)}{P}(\partial_{c},\partial_{d})\notag\\
&&+\int d^{n}\tau~\sqrt{h}h^{ac}h^{bd}g(N,K(\partial_c,\partial_d))\accentset{(1)}{P}(\partial_a,\partial_b)-{1\over 2}\int d^{n}\tau~\sqrt{h}h^{ac}h^{bd}\accentset{(1)}{P}(\partial_c,\partial_d)\accentset{(1)}{P}(\partial_a,\partial_b)\notag\\
&&+\int d^{n}\tau~\sqrt{h}h^{ab}{1\over 2}\accentset{(2)}{P}(\partial_a,\partial_b)-\int d^{n}\tau~\sqrt{h}h^{ab}g(C(\partial_a,\partial_b),N)+\text{Surface terms},
\end{eqnarray}
The appearance of surface terms in the above expression is not very crucial, at least in the context of our current work. Since the boundary subsystem is kept fixed while the bulk metric is being perturbed, the boundary conditions on the deviation vector imply that it vanishes at the boundary. Thus the change in area will have no contribution from the boundary terms. If we started with a more general deviation vector which also had components tangent to the immersed surface, then the only modification of the above expression would have been through the appearance of more boundary terms. The bulk contribution would still have arisen from normal variations only. This will be shown in full rigor in a later work\footnote{A.Ghosh and R.Mishra, work in progress.}, where we will primarily use these boundary terms to find the change of entanglement entropy due to deformations of the subsystem itself.
\section{Brief outline of steps involved in obtaining Area variation upto second order}
Our goal is to provide a formalism to calculate the change in the area of an extremal surface under changes of the embedding and perturbations of the metric. For the sake of brevity, all our calculations will be done in $3+1$ dimensions, but they can be easily generalized to higher dimensions. In this section we provide a brief outline of this formalism.
1) Our first task is to take an asymptotically $AdS$ metric (to be considered as a perturbation over $AdS$) and identify the first and second order metric perturbations. In our case this is achieved by writing the boosted AdS black brane metric in the Fefferman-Graham coordinates, keeping terms up to second order (appendix \ref{Pert}). From the first order metric perturbation $\accentset{(1)}{P}_{\mu\nu}$ one can calculate the $(1,2)$ tensor.
\begin{gather}
{C}^{\mu}_{\nu \rho}=\frac{1}{2}{g}^{\mu\sigma}\left(\partial_\nu \accentset{(1)}{P}_{\rho\sigma}+\partial_\rho \accentset{(1)}{P}_{\nu \sigma}-\partial_\sigma \accentset{(1)}{P}_{\nu \rho}\right)-\frac{1}{2}\accentset{(1)}{P}^{\mu \sigma}\left(\partial_\nu{g}_{\rho \sigma}+\partial_\rho{g}_{\nu \sigma}-\partial_\sigma{g}_{\nu \rho}\right),
\end{gather}
where $g_{\mu\nu}$ is the unperturbed $AdS_4$ metric. The tensor defined is nothing but $C(X,Y)$ written in a coordinate system, i.e. $C(\partial_\nu,\partial_\rho)=C^{\mu}_{\nu\rho}\partial_\mu$.
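For concreteness, this tensor can be generated mechanically with a computer algebra system. The following SymPy fragment is a sketch of ours and not part of the original computation; the single $\accentset{(1)}{P}_{tt}$ component with placeholder profile \texttt{p(z)} is an assumption for illustration, to be replaced by the actual perturbation of appendix \ref{Pert}.
\begin{verbatim}
# Illustrative sketch: C^mu_{nu rho} for the pure AdS_4 background and a
# sample first order perturbation P1 (here only P1_{tt}, profile p(z)).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]

g = sp.diag(-1, 1, 1, 1)/z**2          # unperturbed AdS_4 metric
ginv = g.inv()

p = sp.Function('p')(z)                # placeholder profile
P1 = sp.zeros(4, 4)
P1[0, 0] = -p/z**2
P1up = ginv*P1*ginv                    # both indices raised with g

def C(m, n, r):
    """C^m_{n r} exactly as defined in the text."""
    return sum(
        sp.Rational(1, 2)*ginv[m, s]*(sp.diff(P1[r, s], X[n])
                                      + sp.diff(P1[n, s], X[r])
                                      - sp.diff(P1[n, r], X[s]))
        - sp.Rational(1, 2)*P1up[m, s]*(sp.diff(g[r, s], X[n])
                                        + sp.diff(g[n, s], X[r])
                                        - sp.diff(g[n, r], X[s]))
        for s in range(4))

print(sp.simplify(C(3, 0, 0)))         # e.g. the C^z_{tt} component
\end{verbatim}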
2) Next we choose a free boundary extremal surface in $AdS_4$ \cite{Fonda:2014cca}. We will consider two cases: A) the half sphere in $AdS_4$, which is the minimal surface corresponding to a circular disk like subsystem, and B) the minimal surface corresponding to a thin strip boundary subsystem. With these choices and the choice of the perturbed metric $\accentset{(1)}{P}_{\mu\nu}$, we can now solve the inhomogeneous Jacobi equation \eqref{devi} and obtain the deviation vector ($N$).
3) The first and second order changes in the area can be obtained by substituting the values of the deviation vector ($N$), the first order metric perturbation $(\accentset{(1)}{P}_{\mu\nu})$ and the second order quantities ($\accentset{(2)}{P}_{\mu\nu}, {C}^{\mu}_{\nu \rho}$) in the expressions \eqref{svar2}, \eqref{c} and integrating. From here the total change in area up to second order can be obtained as,
\begin{gather}
\Delta A =\Delta^{(1)}A+\frac{1}{2}\Delta^{(2)}A
\end{gather}
In the present paper we have considered asymptotically AdS spacetimes, but this formalism can be easily applied to the asymptotically flat case as well. Here we have considered first order deviations of the extremal surface and second order metric perturbations to calculate the change in area up to second order. To calculate the change in area up to third order one needs to consider second order deviations of the extremal surface and third order metric perturbations. The second order deviation can be obtained by extending the inhomogeneous Jacobi equation to second order; the form of the second order inhomogeneous Jacobi equation for geodesics can be found in \cite{Ghosh:2016fop}. The third order metric perturbation can be obtained by keeping third order terms in the asymptotic (Fefferman-Graham) metric.
\section{Solutions of the inhomogeneous Jacobi equations and change in area}
Our choice of the asymptotic metric to be considered as a perturbation over $AdS_4$ is the boosted AdS black brane metric written in the Fefferman-Graham coordinates up to second order. The CFT state dual to this bulk geometry is a thermal plasma which is uniformly boosted along a certain direction and is characterized by a temperature $T$ and a boost $\beta$. This choice of a stationary spacetime is made to elucidate that our formalism can be applied to both static and non-static spacetimes, and yields the expected results in the non-static case. The metric of $AdS_4$ in Poincar\'e coordinates reads
\begin{gather}
ds^{2}= {-dt^2+dx^2+dy^2+dz^2\over z^2}
\end{gather}
for simplicity we have set the radius of AdS to one. We will now solve the inhomogeneous Jacobi equation and obtain an expression for the change in area for two choices of boundary subsystem.
\subsection{Circular disk subsystem}
In the case where the boundary subsystem is a circular disk of radius $\mathscr R$, it is known that the minimal surface in $AdS_{d+1}$ is a $d-1$ dimensional hemisphere. The embedding of such a surface in $AdS_4$ is given by the following embedding functions \cite{Bhattacharya:2013bna, Fonda:2014cca},
\begin{gather}\label{embedsphere}
x=\mathscr R\sin{\theta}\cos{\phi}+X,~~y=\mathscr R\sin{\theta}\sin{\phi}+Y,~~z=\mathscr R\cos{\theta},~~t=constant.
\end{gather}
The coordinates $\theta, \phi$ are the coordinates intrinsic to the surface and have ranges, $0\le\theta\le\frac{\pi}{2}$ and $0\le\phi<2\pi$. As is evident from the above expressions in eq.(\ref{embedsphere}), the surface of intersection of the half sphere with the $AdS_4$ boundary is at $\theta=\frac{\pi}{2}$. The intrinsic metric can be calculated via a pullback of the metric on the full space time and is given as,
\begin{gather}
ds^2_{induced}=h_{ab}~dx^a dx^b={d\theta^2 +\sin^2{\theta} d\phi^2\over\cos^2{\theta}}
\end{gather}
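As a quick sanity check (ours, not part of the original derivation), this pullback can be verified symbolically; \texttt{X0}, \texttt{Y0} stand for the constants $X$, $Y$ of the embedding.
\begin{verbatim}
# Pull back the AdS_4 metric along the hemisphere embedding and verify
# the induced metric quoted above, diag(sec^2 theta, tan^2 theta).
import sympy as sp

th, ph, R, X0, Y0 = sp.symbols('theta phi R X_0 Y_0', positive=True)

emb = [sp.Integer(0),                      # t = const
       R*sp.sin(th)*sp.cos(ph) + X0,       # x
       R*sp.sin(th)*sp.sin(ph) + Y0,       # y
       R*sp.cos(th)]                       # z
g = sp.diag(-1, 1, 1, 1)/emb[3]**2         # AdS_4 metric on the surface
tau = [th, ph]

h = sp.Matrix(2, 2, lambda a, b: sum(
    g[m, n]*sp.diff(emb[m], tau[a])*sp.diff(emb[n], tau[b])
    for m in range(4) for n in range(4)))

print(sp.simplify(h))   # diag(1/cos(theta)**2, tan(theta)**2)
\end{verbatim}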
To facilitate our calculation we will construct a local basis adapted to this surface. To start with, we first construct a local tangent basis. As is apparent from the expression for the induced metric, the tangent basis vectors are,
\begin{gather}
e_{2}=\cos{\theta}\partial_{\theta},~~e_{3}=\cot{\theta}\partial_{\phi}.
\end{gather}
Since the surface is purely spacelike, this set provides the spacelike basis vectors for the full spacetime. The set of basis vectors spanning the normal bundle provides the other two basis vectors. To obtain them we first lift the tangent vectors to the spacetime by using the embedding functions, and then use the orthogonality relations. As a matter of convention we mark the timelike normal as $e_0$ and the spacelike normal as $e_1$.
\begin{gather}
e_{0}=z\partial_{t},~~e_{1}={z(x-X)\over \mathscr R}\partial_{x}+{z(y-Y)\over \mathscr R}\partial_{y}+{z^2\over \mathscr R}\partial_{z}
\end{gather}
To completely specify the embedding one also needs to find the extrinsic curvatures and the intrinsic connection. To do so we need to find the covariant derivatives between the tangent vectors. They turn out to be,
\begin{gather}\label{ec}
\nabla_{e_2}e_2 =0,~~\nabla_{e_3}e_3 =-\text {cosec~}{\theta}~e_{2},~~\nabla_{e_3}e_2 =\text {cosec~}{\theta}~e_{3}
\end{gather}
which gives the following for the intrinsic connection and the extrinsic curvature.
\begin{gather}
D_{e_2}e_{2}=0,~~D_{e_3}e_{3}=-\text {cosec~}{\theta}~e_{2},~~D_{e_2}e_{3}=0,~~D_{e_3}e_{2}=\text {cosec~}{\theta}~e_{3}\notag\\
K(e_2 ,e_2)=0,~~K(e_3 ,e_3)=0,~~K(e_2 ,e_3)=0,~~K(e_3 ,e_2)=0
\end{gather}
The vanishing of the extrinsic curvature implies that the surface is totally geodesic, i.e. any curve that is a geodesic on the surface is also a geodesic of the full spacetime. Recall that the Jacobi equation involves the connection in the normal bundle $\nabla^\perp$, which can be found by calculating the covariant derivative of a normal vector along a tangent vector.
\begin{gather}\label{nb}
\nabla_{e_2}e_0=0,~~\nabla_{e_3}e_0=0,~~\nabla_{e_2}e_1=0,~~\nabla_{e_3}e_1=0
\end{gather}
From this one can read off the normal connection $\nabla^\perp$, using the Weingarten map. The procedure involves expanding the normal connection as $\nabla^\perp_{e_a}e_{A}=\beta_A^B(e_a)e_{B}$ ($A,~B$ denote indices for basis vectors in the normal bundle) and yields,
\begin{gather}
\nabla^{\perp}_{e_2}e_0=\beta_{0}^{0}(e_2)e_0 +\beta_{0}^{1}(e_2)e_1 =0,~~\nabla^{\perp}_{e_3}e_0=\beta_{0}^{0}(e_3)e_0 +\beta_{0}^{1}(e_3)e_1 =0\notag\\
\nabla^{\perp}_{e_2}e_1=\beta_{1}^{0}(e_2)e_0 +\beta_{1}^{1}(e_2)e_1 =0,~~\nabla^{\perp}_{e_3}e_1=\beta_{1}^{0}(e_3)e_0 +\beta_{1}^{1}(e_3)e_1 =0.
\end{gather}
The vanishing of the $\beta's$ is equivalent to saying that the normal bundle is flat. Using the above results, calculating the left hand side of the Jacobi equation is just a matter of algebra. We expand the deviation vector in the normal basis as $\alpha^A~e_A$ and find the following equations for the $\alpha^A$.
\begin{gather}
\cos^2{\theta}\partial^{2}_{\theta}\alpha^A+\cos^{2}{\theta}\cot{\theta}\partial_{\theta}\alpha^A+\cot^{2}{\theta}\partial^{2}_{\phi}\alpha^A-2\alpha^A=F^A,
\end{gather}
where $F^A$ has been defined for compactness of the above expression and is given by $F^A= e^A_\mu\big( C^{\perp\mu}+\tilde H^{\mu}\big)$. Note that in this case both the normal projections yield one and the same equation; the source of this symmetry can be traced back to the symmetry of the embedding surface itself. Before proceeding to find solutions of the above equation, we need to analyze the homogeneous equation. In other words, we will impose the boundary condition that the deviation vector is zero at the boundary and check whether this implies that the only solution of the `homogeneous' piece of the above equation is the trivial solution. As we will see, this knowledge will be helpful in our effort to obtain solutions of the inhomogeneous equations. The homogeneous equation can be solved by the method of separation of variables, $\alpha^A(\theta,\phi)=\Theta^A(\theta)~\Phi^A(\phi)$. The equations then become ordinary differential equations.
\begin{gather}
{d^2\Theta^A\over d\theta^2}+\cot \theta{d\Theta^A\over d\theta}-(2\text {sec}^2 \theta+m^2 \text {cosec~}^2~\theta)\Theta^A=0
\end{gather}
and the $\phi$ equation is,
\begin{gather}
{d^{2}\Phi^A\over d\phi^2}+m^2\Phi^A=0
\end{gather}
For the $\phi$ equation the boundary condition is of course the periodic one, $\Phi^A(\phi+2\pi)=\Phi^A(\phi)$, which restricts the values of $m$ to integers only. The most general solution of the $\theta$ equation is given by,
\begin{gather}
\Theta=C_1~\cos^2{\theta}(\sin{\theta})^m~
_2F_1\Big(1+\frac{m}{2},\frac{3}{2}+\frac{m}{2};m+1;\sin^2{\theta}\Big)\notag\\
+C_2~\cos^2{\theta}(\sin{\theta})^{-m}~
_2F_1\Big(1-\frac{m}{2},\frac{3}{2}-\frac{m}{2};-m+1;\sin^2{\theta}\Big)
\end{gather}
Imposing the boundary condition $\Theta=0$ at $\theta=\frac{\pi}{2}$ and demanding that the solution be regular at $\theta=0$, one concludes that $C_1=C_2=0$. To check this, assume $m$ to be positive (similar arguments hold for $m$ negative). Note that at $\theta=0$ the second solution diverges: since $_2F_1\Big(1-\frac{m}{2},\frac{3}{2}-\frac{m}{2};-m+1;0\Big)=1$, the $(\sin{\theta})^{-m}$ factor makes it blow up. This implies $C_2$ must be set to zero. At $\theta=\frac{\pi}{2}$ the first solution diverges. This can be argued in the following way. Note that $\lim_{z\rightarrow 1^-}\frac{_2F_1\Big(a,b;c;z\Big)}{(1-z)^{c-a-b}}=\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}$ for $\Re(c-a-b)<0$. Writing the first solution as,
\begin{eqnarray}
\frac{z^{\frac{m}{2}}}{(1-z)^{\frac{1}{2}}}\frac{_2F_1\Big(1+\frac{m}{2},\frac{3}{2}+\frac{m}{2};m+1;z\Big)}{(1-z)^{-\frac{3}{2}}},
\end{eqnarray}
one realizes that the solution is divergent at $\theta=\frac{\pi}{2}$. Hence $C_1$ has to be set to zero. As expected, with these boundary conditions the homogeneous equation admits only the trivial solution.
Now we will solve the inhomogeneous equation. By substituting $C\equiv\left(\frac{1}{3}+\beta^2\gamma^2\right)\frac{1}{z_0^3}$, $D\equiv\left(\frac{1}{3}\right)\frac{1}{z_0^3}$, $B\equiv\beta\gamma^2\frac{1}{z_0^3}$, and writing $\mathcal R^3\equiv\frac{\mathscr R^3}{z_0^3}$, the inhomogeneous equation for $e_1$ turns out to be,
\begin{gather}
\cos^2{\theta}\partial^{2}_{\theta}\alpha^1+\cos^{2}{\theta}\cot{\theta}\partial_{\theta}\alpha^1+\cot^{2}{\theta}\partial^{2}_{\phi}\alpha^1-2\alpha^1=\mathcal R^3\cos^4\theta\bigg(\frac{2}{3}+\beta^2\gamma^2\bigg)+\frac{5\mathcal R^3\sin^2\theta\cos^4\theta}{6}\notag\\
+\frac{5\mathcal R^3\beta^2\gamma^2\sin^2\theta\cos^4\theta}{4}+\frac{5\mathcal R^3\beta^2\gamma^2\sin^2\theta\cos^4\theta\cos2\phi}{4},
\end{gather}
and that for $e_0$ reads,
\begin{gather}
\cos^2{\theta}\partial^{2}_{\theta}\alpha^0+\cos^{2}{\theta}\cot{\theta}\partial_{\theta}\alpha^0+\cot^{2}{\theta}\partial^{2}_{\phi}\alpha^0-2\alpha^0=3\beta\gamma^2\mathcal R^3\cos^4\theta\sin\theta\cos\phi
\end{gather}
Let us consider the $e_1$ equation first. Note that since the equation is linear one can find the solutions for the individual terms in the inhomogeneous piece separately. Let us therefore consider the terms containing no function of $\phi$.
\begin{gather}
\partial^{2}_{\theta}\alpha^1+\cot{\theta}\partial_{\theta}\alpha^1+\text {cosec~}^{2}{\theta}\partial^{2}_{\phi}\alpha^1-2~\text {sec}^2\theta~\alpha^1=\mathcal R^3\cos^2\theta\bigg(\frac{2}{3}+\beta^2\gamma^2\bigg)+\frac{5\mathcal R^3\sin^2\theta\cos^2\theta}{6}\notag\\
+\frac{5\mathcal R^3\sin^2\theta\cos^2\theta\,\beta^2\gamma^2}{4}
\end{gather}
Owing to the fact that the right hand side of this equation contains no function of $\phi$, the only non-trivial solution to this equation will come from $m=0$. This can be understood by taking a trial solution of the form $\sum_{m}\big(g_m(\theta)e^{im\phi}+g_{-m}(\theta)e^{-im\phi}\big)$. If one now lists the equations for the individual $m$'s, then only the $m=0$ equation will have an inhomogeneous term on the right hand side, while the other equations will all be homogeneous. But we have already shown that the solutions of the homogeneous equations are trivial. Therefore we only need to solve the $m=0$ equation, which reads,
\begin{gather}
\frac{d^{2}\Theta^1}{d{\theta}^2}+\cot{\theta}\frac{d\Theta^1}{d\theta}-2~\text {sec}^2\theta~\Theta^1=\mathcal R^3\cos^2\theta\bigg(\frac{2}{3}+\beta^2\gamma^2\bigg)+\frac{5\mathcal R^3\sin^2\theta\cos^2\theta}{6}
+\frac{5\mathcal R^3\sin^2\theta\cos^2\theta\,\beta^2\gamma^2}{4}
\end{gather}
The solution to this equation with the conditions that it is zero at $\theta=\frac{\pi}{2}$ and regular at $\theta=0$ is given by,
\begin{gather}
\Theta^1=\frac{1}{288} \mathcal R^3 \cos ^2\theta\bigg(3 \beta ^2 \gamma ^2+2\bigg) \bigg(3 \cos 2\theta-23\bigg)
\end{gather}
The other equation, containing $\cos2\phi$, is equivalent to the $\theta$ equation for $m=2$.
\begin{gather}
\partial^{2}_{\theta}\Theta^1+\cot{\theta}\partial_{\theta}\Theta^1-4~\text {cosec~}^{2}{\theta}~\Theta^1-2~\text {sec}^2\theta~\Theta^1=\frac{5\mathcal R^3\beta^2\gamma^2\sin^2\theta\cos^2\theta}{4},
\end{gather}
The solution to this equation with conditions as above yields,
\begin{gather}
\Theta^1=-\frac{1}{64} \mathcal R^3 \beta^2 \gamma^2 \bigg(\sin{2\theta}\bigg)^2
\end{gather}
The full solution is then,
\begin{gather}
\alpha^1=\frac{1}{288} \mathcal R^3 \cos ^2\theta\bigg(3 \beta ^2 \gamma ^2+2\bigg) \bigg(3 \cos 2\theta-23\bigg)-\frac{1}{64} \mathcal R^3 \beta^2 \gamma^2 \bigg(\sin{2\theta}\bigg)^2\cos2\phi
\end{gather}
Now we go over to the $e_0$ equation. By similar arguments, one concludes that the only contribution to the solution will come from the $m=1$ term. Therefore, the equation becomes,
\begin{gather}
\partial^{2}_{\theta}\alpha^0+\cot{\theta}\partial_{\theta}\alpha^0-\text {cosec~}^{2}{\theta}~\alpha^0-2~\text {sec}^2\theta~\alpha^0=3\beta\gamma^2\mathcal R^3\cos^2\theta\sin\theta
\end{gather}
Along with the usual boundary conditions, the solution to this equation is,
\begin{gather}
\alpha^0=-\frac{1}{4} \beta \gamma ^2 \mathcal R^3 \sin \theta \cos ^2\theta\cos{\phi}
\end{gather}
The very fact that the solution of the above $e_0$ equation is non-trivial shows that the perturbed minimal surface ceases to lie on a constant $t$ slice, as was initially the case for the unperturbed minimal surface in the $AdS_4$ background. One can also check that setting $\beta=0$, which gives the static case of an AdS black brane, makes $\alpha^0$ vanish.
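The three radial solutions above can also be verified mechanically. The following SymPy sketch is our own cross-check, not part of the original text; the symbols \texttt{R3}, \texttt{bg2} and \texttt{bg} stand for $\mathcal R^3$, $\beta^2\gamma^2$ and $\beta\gamma^2$ respectively.
\begin{verbatim}
# Verify that the quoted solutions satisfy the radial Jacobi ODEs for the
# hemisphere, for the azimuthal modes m = 0, 2 (alpha^1) and m = 1 (alpha^0).
import sympy as sp

th, R3, bg2, bg = sp.symbols('theta R3 bg2 bg')

def L(f, m):
    return (sp.diff(f, th, 2) + sp.cot(th)*sp.diff(f, th)
            - (2/sp.cos(th)**2 + m**2/sp.sin(th)**2)*f)

# m = 0 piece of alpha^1
f0 = sp.Rational(1, 288)*R3*sp.cos(th)**2*(3*bg2 + 2)*(3*sp.cos(2*th) - 23)
r0 = (R3*sp.cos(th)**2*(sp.Rational(2, 3) + bg2)
      + sp.Rational(5, 6)*R3*sp.sin(th)**2*sp.cos(th)**2
      + sp.Rational(5, 4)*R3*bg2*sp.sin(th)**2*sp.cos(th)**2)
print(sp.simplify(L(f0, 0) - r0))                                  # -> 0

# m = 2 piece of alpha^1
f2 = -sp.Rational(1, 64)*R3*bg2*sp.sin(2*th)**2
r2 = sp.Rational(5, 4)*R3*bg2*sp.sin(th)**2*sp.cos(th)**2
print(sp.simplify(L(f2, 2) - r2))                                  # -> 0

# m = 1 piece of alpha^0
f1 = -sp.Rational(1, 4)*bg*R3*sp.sin(th)*sp.cos(th)**2
print(sp.simplify(L(f1, 1) - 3*bg*R3*sp.cos(th)**2*sp.sin(th)))    # -> 0
\end{verbatim}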
We are now in a position to calculate the change in area. We first calculate the first order change in the area. As is known, at this order there is no contribution from deviations of the minimal surface itself, and therefore at this order the change must match that obtained in \cite{Blanco:2013joa}. The first order change in HEE ($S$) for the spherical entangling surface can be extracted from eq.(\ref{c}) and is given by,
\begin{gather}
\Delta^{(1)}S={1\over 4G_N}\Delta^{(1)}A={1\over 8G_N} \int d^{d-1}\tau~\sqrt{h}h^{ab}\accentset{(1)}{P}(\partial_{a},\partial_{b})
=\frac{1}{32G_N} \pi \mathscr R^3\left(3 \beta ^2 \gamma ^2+2\right)\frac{1}{z_0^3}
\end{gather}
The second order variation has contributions from various terms. The full expression is given by eqn.(\ref{svar2}),
\begin{gather}\label{areachange2ndorder}
\Delta^{(2)}A=\int d^{d-1}\tau~\sqrt{h} \Bigl(h^{ab}h^{cd}\accentset{(1)}{P}(\partial_b,\partial_d)g(N^{\perp},K(\partial_a,\partial_c))-h^{ab}g(C(\partial_a,\partial_b),N^{\perp})\Bigr)\notag\\
+\int d^{d-1}\tau~\sqrt{h}\Bigl[{h^{ab}\over 2}\accentset{(2)}{P}(\partial_{a},\partial_{b})-{1\over 2}h^{ac}h^{bd}\accentset{(1)}{P}(\partial_{a},\partial_{b})\accentset{(1)}{P}(\partial_{c},\partial_{d})+{1\over 4}h^{ab}h^{cd}\accentset{(1)}{P}(\partial_{c},\partial_{d})\accentset{(1)}{P}(\partial_{a},\partial_{b})\Bigr]
\end{gather}
Let us analyze the above equation. The last three terms in eq.(\ref{areachange2ndorder}) are the terms coming purely from the bulk metric perturbations. The first and the second term arise from the change in the embedding function itself. The $N^\perp$ in the above equation therefore has to be substituted with the solutions of the Jacobi equation obtained before, and then the integrals calculated. We therefore enumerate the results one by one. Consider the last three terms in the above expression, which do not involve the deviation vector.
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{h^{ab}\over 2}\accentset{(2)}{P}(\partial_{a},\partial_{b})
=-\frac{1}{105} \pi \mathcal R^6 \left(6 \beta ^2 \gamma ^2-1\right)
\end{gather}
The next term, a product of two metric perturbations, gives,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{1\over 2}h^{ac}h^{bd}\accentset{(1)}{P}(\partial_{a},\partial_{b})\accentset{(1)}{P}(\partial_{c},\partial_{d})
=\frac{2 \pi \mathcal R^6 \left(216 \beta ^4 \gamma ^4+147 \beta ^2 \gamma ^2+49\right)}{2835}
\end{gather}
Finally the other term containing a product of two perturbations evaluates to,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{1\over 4}h^{ab}h^{cd}\accentset{(1)}{P}(\partial_{c},\partial_{d})\accentset{(1)}{P}(\partial_{a},\partial_{b})=\frac{2 \pi \mathcal R^6 \left(108 \beta ^4 \gamma ^4+141 \beta ^2 \gamma ^2+47\right)}{2835}
\end{gather}
Note that the contribution from the first term is zero, owing to the fact that the extrinsic curvature $K(\partial_a,\partial_b)$ is zero in this case of a spherical boundary subsystem. As we will see later, this term does give a non-zero contribution for the strip subsystem. While calculating the second term, the $N^\perp$ contained in it has to be substituted with the solutions of the inhomogeneous stability equation. After substitution one obtains,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h} h^{ab}g(C(\partial_a,\partial_b),N^{\perp})
= \frac{\pi \mathcal R^6 \left(459 \beta ^4 \gamma ^4+\beta ^2 \left(81 \gamma ^4+597 \gamma ^2\right)+199\right)}{1890}
\end{gather}
The total second order change in HEE is then given by,
\begin{gather}\label{sed}
\Delta^{(2)}S={1\over 4G_N} \Delta^{(2)}A=-{\pi \mathscr R^6\over 4G_N}\frac{\left(1809 \beta ^4 \gamma ^4+3 \beta ^2 \left(81 \gamma ^2+713\right) \gamma ^2+551\right)}{5670}\frac{1}{z_0^6}
\end{gather}
This expression gives the second order change of HEE. Positivity of relative entropy between two states in the CFT demands that
\begin{gather*}
\Delta H\geq\Delta S
\end{gather*}
where $H$ is the modular Hamiltonian for the spherical entangling surface, given in terms of the boundary stress tensor. One can check that the equality is satisfied at first order \cite{Blanco:2013joa}. As the modular Hamiltonian remains unchanged at second order, positivity of relative entropy demands that $\Delta^{(2)}S\leq 0$. Our result, eq.(\ref{sed}), is in agreement with this observation.
The full expression for change of HEE is then given by
\begin{gather}
\Delta S=\Delta^{(1)}S+\frac{1}{2}\Delta^{(2)}S\notag\\
=\frac{1}{32G_N} \pi \mathscr R^3\left(3 \beta ^2 \gamma ^2+2\right)\frac{1}{z_0^3}-{\pi \mathscr R^6\over 8G_N}\frac{\left(1809 \beta ^4 \gamma ^4+3 \beta ^2 \left(81 \gamma ^2+713\right) \gamma ^2+551\right)}{5670}\frac{1}{z_0^6}
\end{gather}
The above expression gives the net change in HEE for the spherical entangling surface, up to second order, over the pure $AdS$ (ground state) value.
\subsection{Thin Strip subsystem}
We now consider a two dimensional strip-like subsystem on the $AdS_4$ boundary. The subsystem is given by the region $[-L,L]\times[-\frac{l}{2},\frac{l}{2}]$ of the $x-y$ plane, where $L\gg l$. The minimal surface corresponding to such a subsystem \cite{Fonda:2014cca} is characterized by the following embedding functions,
\begin{gather}
x=\lambda,~~y(\theta)=-z_* E\left({(\pi-2\theta)\over 4}\left.\right|2\right),~~z(\theta)=z_*\sqrt{\sin{\theta}},
\end{gather}
where $z_{*}$ is the turning point of the minimal surface in $AdS_4$ and $E(\alpha|m)$ is the incomplete elliptic integral of the second kind. Note that due to the condition $L\gg l$ the effects of the sides of the minimal surface can be neglected; the embedding functions clearly reflect this approximation. In intrinsic coordinates the metric takes the form
\begin{gather}
{ds^2}_{induced}={{z_*}^2d\theta^2+4\sin{\theta}d\lambda^2\over 4 z_*^2\sin^2{\theta}},
\end{gather}
the range of the coordinates being $0\le\theta\le\pi$ and $-L\le\lambda\le L$. Further, the turning point $z_{*}$ can be written in terms of the width $l$ of the subsystem as $z_*=\frac{\Gamma(\frac{1}{4})l}{2\sqrt{\pi}\Gamma(\frac{3}{4})}$. We also need to calculate the extrinsic curvature and the connection in the normal bundle. We again use a local tetrad adapted to the surface. The two spacelike basis vectors are chosen such that they are tangent to the embedded surface. In intrinsic coordinates, they have the form,
\begin{gather}
e_2=2\sin{\theta}\partial_{\theta},~~e_3=z_*\sqrt{\sin{\theta}}\partial_{\lambda}
\end{gather}
These are lifted to the full spacetime coordinates, and then by using the orthogonality relations one can construct the basis vectors which span the normal bundle.
\begin{gather}
e_1=z(\sin{\theta}\partial_z-\cos{\theta}\partial_y),~~e_0=z\partial_t
\end{gather}
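Before proceeding, the embedding data can be cross-checked symbolically; the sketch below is ours (\texttt{zs} stands for $z_*$, and SymPy knows the derivative of the incomplete elliptic integral).
\begin{verbatim}
# Verify dy/dtheta = (zs/2) sqrt(sin theta) and the induced component
# h_{theta theta} = 1/(4 sin^2 theta) for the strip embedding.
import sympy as sp

th, zs = sp.symbols('theta z_s', positive=True)

y = -zs*sp.elliptic_e((sp.pi - 2*th)/4, 2)
z = zs*sp.sqrt(sp.sin(th))

dy = sp.diff(y, th)
e1 = dy**2 - zs**2*sp.sin(th)/4
print(sp.simplify(e1), sp.N(e1.subs({th: 0.9, zs: 1})))   # 0, ~0

htt = (dy**2 + sp.diff(z, th)**2)/z**2
e2 = htt - 1/(4*sp.sin(th)**2)
print(sp.simplify(e2), sp.N(e2.subs({th: 0.9, zs: 1})))   # 0, ~0
\end{verbatim}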
The covariant derivatives of the normal vectors are given by,
\begin{gather}
\nabla_{e_2}e_1=\sin{\theta}~e_2,~~\nabla_{e_3}e_1=-\sin{\theta}~e_3,~~\nabla_{e_2}e_0=0,~~\nabla_{e_3}e_0=0
\end{gather}
From these one can read off the Weingarten maps and therefore the extrinsic curvatures,
\begin{gather}
W_{e_1}(e_2)=-\sin{\theta}~e_2,~~W_{e_1}(e_3)=\sin{\theta}~e_3,~~W_{e_0}(e_2)=0,~~W_{e_0}(e_3)=0
\end{gather}
We are now in a position to calculate the left hand side of the Jacobi equation. We expand the deviation vector as $\alpha^A~e_A$ and then, by using the above expressions, we get,
\begin{gather}
4\sin^2{\theta}~\partial^2_\theta\alpha^1+2\sin{\theta}\cos{\theta}~\partial_{\theta}\alpha^1+{z_*}^2\sin{\theta}~\partial^2_\lambda\alpha^1-2\cos^2{\theta}~\alpha^1=F^1\notag\\
4\sin^2{\theta}~\partial^2_\theta\alpha^0+2\sin{\theta}\cos{\theta}~\partial_{\theta}\alpha^0+{z_*}^2\sin{\theta}~\partial^2_\lambda\alpha^0-2\alpha^0=F^0
\end{gather}
As before, we first analyze the homogeneous equations by solving them using separation of variables.
\begin{gather}
{d^2\Theta^{1}\over d\theta^2}+\frac{1}{2}\cot{\theta}{d\Theta^{1}\over d\theta}-\bigg(\frac{1}{2}\cot^2{\theta}+\frac{k^2}{4\sin\theta}\bigg)\Theta^{1}=0\notag\\
{d^2\Theta^{0}\over d\theta^2}+\frac{1}{2}\cot{\theta}{d\Theta^{0}\over d\theta}-\bigg(\frac{1}{2}\text {cosec~}^2{\theta}+\frac{k^2}{4\sin\theta}\bigg)\Theta^{0}=0\notag\\
{d^{2}\Phi^{(0,1)}\over d\lambda^2}+\bigg({k\over z_*}\bigg)^2\Phi^{(0,1)}=0
\end{gather}
The solution to the $\theta$ part is given in terms of the generalized Heun's function,
and can be shown to yield trivial solutions under the boundary conditions assumed. We will now solve the inhomogeneous Jacobi equation for the strip subsystem for two separate cases:
1. Strip along `$x$', boost along `$x$': In this case we consider the width of the strip to be along the $y$ direction and the length along the $x$ direction on the boundary of $AdS_4$. The inhomogeneous term for the Jacobi equation in this case is calculated for the asymptotic boosted AdS black brane geometry (appendix \ref{Pert}), where the boost is along the $x$ direction.
2. Strip along `$x$', boost along `$y$': In this case the direction of the strip remains unchanged, but the inhomogeneous term is now calculated for the same geometry with the boost along the $y$ direction.
Changing the boost direction results in different deformations of the minimal surface. In the first case the surface remains on the same constant time ($t$) slice while in the second case there is a deviation of the surface along the time direction.
\subsubsection{\underline{Strip along `x' boost along `x'}}\label{SAXBAX}
In this case the $e_0$ equation turns out to be trivial, i.e. the inhomogeneous term vanishes in the $e_0$ equation; hence the surface remains on the same time slice. The $e_1$ equation is however non-trivial. Note that since the right hand side is not a function of $\lambda$, only the $k=0$ solution will be non-trivial, and the equation can be recast into,
\begin{gather}
{d^2\Theta^{1}\over d\theta^2}+\frac{1}{2}\cot{\theta}{d\Theta^{1}\over d\theta}-\bigg(\frac{1}{2}\cot^2{\theta}\bigg)\Theta^{1}=\frac{1}{4}\left(3D+\frac{3C}{2}\right)z_*^3\left(\sin{\theta}\right)^{\frac{1}{2}}-\frac{7}{8}Dz_*^3\left(\sin{\theta}\right)^{\frac{5}{2}},
\end{gather}
where the expressions for $C,D$ can be found in appendix \ref{Pert}. The homogeneous solutions of this equation are,
\begin{gather}
\Theta^1(\theta)
= \frac{C_1\cos\theta}{\sqrt{\sin\theta}}+C_2\sin{\theta}~~_2F_1\bigg(\frac{1}{4}, 1; \frac{1}{2}, \cos^2\theta\bigg),
\end{gather}
and the Wronskian is $W(\theta)=e^{-\frac{1}{2}\int\cot(\theta)d\theta}=\frac{1}{\sqrt{\sin{\theta}}}$.
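Both claimed homogeneous solutions can be checked directly; the following numerical sketch is ours (SymPy's \texttt{hyper} evaluates $_2F_1$ through mpmath).
\begin{verbatim}
# Check that cos(th)/sqrt(sin(th)) and sin(th)*2F1(1/4,1;1/2;cos^2 th)
# both annihilate the k = 0 homogeneous operator, here at th = 0.9.
import sympy as sp

th = sp.symbols('theta')

def L(f):
    return (sp.diff(f, th, 2) + sp.cot(th)/2*sp.diff(f, th)
            - sp.cot(th)**2/2*f)

y1 = sp.cos(th)/sp.sqrt(sp.sin(th))
y2 = sp.sin(th)*sp.hyper((sp.Rational(1, 4), 1), (sp.Rational(1, 2),),
                         sp.cos(th)**2)

for sol in (y1, y2):
    print(sp.N(L(sol).subs(th, 0.9)))   # both ~ 0
\end{verbatim}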
The full solution is then $\Theta^1_c+\Theta^1_p$.
\begin{gather}
\Theta^1=\frac{C_1\cos(\theta)}{\sqrt{\sin(\theta)}}+C_2\sin{\theta}~~_2F_1\bigg(\frac{1}{4}, 1; \frac{1}{2}, \cos^2\theta\bigg)\notag\\
-\frac{\cos(\theta)}{\sqrt{\sin(\theta)}}\int_{}^{\theta}\left[\frac{1}{4}\left(3D+\frac{3C}{2}\right)z_*^3\left(\sin{\theta'}\right)^{\frac{1}{2}}-\frac{7}{8}Dz_*^3\left(\sin{\theta'}\right)^{\frac{5}{2}}\right](\sin{\theta'})^{\frac{3}{2}}~~_2F_1\bigg(\frac{1}{4}, 1; \frac{1}{2}, \cos^2\theta'\bigg)d\theta'\notag\\
+\sin{\theta}~~_2F_1\bigg(\frac{1}{4}, 1; \frac{1}{2}, \cos^2\theta\bigg)\int_{}^{\theta}\left[\frac{1}{4}\left(3D+\frac{3C}{2}\right)z_*^3\left(\sin{\theta'}\right)^{\frac{1}{2}}-\frac{7}{8}Dz_*^3\left(\sin{\theta'}\right)^{\frac{5}{2}}\right]\cos(\theta')d\theta'
\end{gather}
It is not possible to get an analytical form for the integral involving the hypergeometric function. However, since certain definite integrals of hypergeometric functions are known, the final integral giving the change in area can still be obtained by integrating by parts.
To evaluate the integration constants we impose the boundary condition $\Theta=0$ at $\theta=0$ and $\theta=\pi$. On demanding these, the values of the constants turn out to be $C_1=\frac{\pi z_*^3}{16} \left(2 C+D\right)$ and $C_2=-\frac{\Gamma \left(\frac{1}{4}\right)^2 z_*^3\left(2C+D\right)}{16 \sqrt{2 \pi }}$.
We now go over to the integrals needed for the change of area. Before calculating the terms involving the deviation vector, we first evaluate the ones involving the metric perturbations only. The first order change in HEE is,
\begin{gather}
\Delta^{(1)}S={1\over 4G_N}\Delta^{(1)}A={1\over 8G_N} \int d^{d-1}\tau~\sqrt{h}h^{ab}\accentset{(1)}{P}(\partial_{a},\partial_{b})
=\frac{2L}{32 G_N} \pi z_*^2 (2 C+D)\notag\\=\frac{2L\times l^2}{4G_Nz_0^3}\frac{ (1+2\beta^2\gamma^2)\Gamma \left(\frac{1}{4}\right)^2}{32~ \Gamma \left(\frac{3}{4}\right)^2},
\end{gather}
which again matches the results obtained in \cite{Mishra:2015cpa,Mishra:2016yor}. As before, the last three terms in the second variation formula are,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{h^{ab}\over 2}\accentset{(2)}{P}(\partial_{a},\partial_{b})=\frac{2L\times\pi ^{3/2} z_*^5 (7 C'+5 D')}{21 \sqrt{2} \Gamma \left(\frac{3}{4}\right)^2}
\end{gather}
The next term which involves the product of perturbations is,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{1\over 2}h^{ac}h^{bd}\accentset{(1)}{P}(\partial_{a},\partial_{b})\accentset{(1)}{P}(\partial_{c},\partial_{d})=\frac{2L\times z_*^5 K\left(\frac{1}{2}\right) \left(77 C^2+45 D^2\right)}{231 \sqrt{2}}
\end{gather}
Finally we have the term
\begin{gather}
\int d^{d-1}\tau~\sqrt{h}{1\over 4}h^{ab}h^{cd}\accentset{(1)}{P}(\partial_{c},\partial_{d})\accentset{(1)}{P}(\partial_{a},\partial_{b})=\frac{2L\times\sqrt{\pi } z_*^5 \Gamma \left(\frac{5}{4}\right) \left(77 C^2+110 C D+45 D^2\right)}{462 \Gamma \left(\frac{3}{4}\right)}
\end{gather}
Now we go over to the other integrals. Consider the term,
\begin{gather}
\int d^{d-1}\tau~\sqrt{h} \Bigl(h^{ab}h^{cd}\accentset{(1)}{P}(\partial_b,\partial_d)g(N^{\perp},K(\partial_a,\partial_c))-h^{ab}g(C(\partial_a,\partial_b),N^{\perp})\Bigr)\notag\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\int_{-L}^{L}\int_{0}^{\pi}\frac{1}{2 z_*\sin ^{\frac{3}{2}}\theta}\bigg[ z_*^3 \left(\frac{3 C}{2}+3 D\right) \sin^{\frac{5}{2}} \theta-\frac{7}{2} z_*^3 D \sin ^{\frac{9}{2}}\theta\bigg]\Theta^1~d\theta~d\lambda
\end{gather}
Note that $\Theta^1$ contains two terms: one that does not have an analytical form and one that does. Let us write these as $\Theta^1=-\frac{\cos(\theta)}{\sqrt{\sin(\theta)}}\int_{0}^{\theta}f(\theta')~d\theta'+G(\theta)+\Theta^1_c(\theta)$. Therefore the above integral becomes,
\begin{eqnarray}
\int_{-L}^{L}\int_{0}^{\pi}&&\frac{1}{2 z_*\sin ^{\frac{3}{2}}\theta}\bigg[ z_*^3 \left(\frac{3 C}{2}+3 D\right) \sin^{\frac{5}{2}} \theta-\frac{7}{2} z_*^3 D \sin ^{\frac{9}{2}}\theta\bigg]\\\nonumber
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\Bigg(-\frac{\cos(\theta)}{\sqrt{\sin(\theta)}}\int_{0}^{\theta}f(\theta')~d\theta'+G(\theta)+\Theta^1_c\Bigg)d\theta~d\lambda
\end{eqnarray}
Note that the integral involving $G(\theta)$ can be evaluated easily and gives,
\begin{gather}
\frac{2L\times\sqrt{\pi } z_*^5 \Gamma \left(\frac{9}{4}\right) \left(77 C^2+110 C D+29 D^2\right)}{352 \Gamma \left(\frac{11}{4}\right)}
\end{gather}
The complementary part of the solution gives,
\begin{gather}
-\frac{2L}{64} \sqrt{\frac{\pi }{2}} z_*^5 \Gamma \left(\frac{1}{4}\right)^2 (2 C+D)^2
\end{gather}
The other integral is of the form $\int_{0}^{\pi} \left(g(\theta)\int_0^\theta f(\theta')d\theta'\right)d\theta$ and can be evaluated by parts,
\begin{gather}
\int_{0}^{\pi}g(x)\int_{0}^{x}f(x')dx'dx=\left[\left(\int_{0}^{x}f(x')dx'\right)\left(\int g(x)dx\right)\right]_{0}^{\pi}-\int_{0}^{\pi}f(x)\int g(x')dx'dx
\end{gather}
The first term in the above expression does not contribute, while the second term reproduces the number obtained for $G(\theta)$.
The total variation $\Delta^{(2)}S$ is then given as,
\begin{gather}
\Delta^{(2)}S={1\over 4G_N} \Delta^{(2)}A=\frac{2L\times l^5 }{z_0^6}\frac{\Gamma \left(\frac{1}{4}\right)^5}{\Gamma \left(\frac{3}{4}\right)^7}\frac{ \left(-84 (\pi -1) \beta ^4 \gamma ^4+28 (4-3 \pi ) \beta ^2 \gamma ^2 +(48-21 \pi)\right)}{21504\times 4 G_N\sqrt{2} \pi}
\end{gather}
This expression gives the second order change of HEE. As in the case of the circular disk, positivity of relative entropy demands that $\Delta^{(2)}A\leq 0$. This can be checked through a plot of $\Delta^{(2)}A$ against $\beta$ (see Fig.~1): the whole expression is negative already at $\beta=0$ and monotonically decreasing in $\beta$. The change $\Delta S$, or the plot, cannot however be trusted for too large values of $\beta$, since for large $\beta$ one needs to add further higher order corrections.
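For the reader who does not want to reproduce the plot, the bracket in the expression above can be evaluated directly; the following quick numerical check is ours, with $\gamma^2=1/(1-\beta^2)$.
\begin{verbatim}
# Evaluate the beta-dependent bracket of Delta^(2)S for the strip
# (case 1): it is negative at beta = 0 and decreases with beta.
from math import pi

def bracket(beta):
    b2g2 = beta**2/(1.0 - beta**2)     # beta^2 gamma^2
    return -84*(pi - 1)*b2g2**2 + 28*(4 - 3*pi)*b2g2 + (48 - 21*pi)

for beta in (0.0, 0.2, 0.4, 0.6):
    print(beta, bracket(beta))         # all negative, monotonically decreasing
\end{verbatim}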
\begin{figure}[h]
\centering
\begin{overpic}[width=10cm]{Thinstripcase1.eps}
\put(-11,10){\small$\Delta^{(2)}A$}
\put(80,60){\small$\beta\rightarrow$}
\end{overpic}
\caption{\it Plot of $\Delta^{(2)}A~~\text{vs}~~\beta$ for strip along $x$, boost along $x$}
\end{figure}
The full expression for change of HEE is then given by
\begin{gather}
\Delta S=\Delta^{(1)}S+\frac{1}{2}\Delta^{(2)}S\notag\\
=\frac{2L\times l^2}{4G_N z_0^3}\frac{ (1+2\beta^2\gamma^2)\Gamma \left(\frac{1}{4}\right)^2}{32~ \Gamma \left(\frac{3}{4}\right)^2}+\frac{2L\times l^5 }{2\times z_0^6}\frac{\Gamma \left(\frac{1}{4}\right)^5}{\Gamma \left(\frac{3}{4}\right)^7}\frac{ \left(-84 (\pi -1) \beta ^4 \gamma ^4+28 (4-3 \pi ) \beta ^2 \gamma ^2+(48-21 \pi) \right)}{4 G_N\times 21504~\sqrt{2} \pi}
\end{gather}
The above expression gives the net change in HEE for the strip entangling surface, up to second order, over the pure $AdS$ (ground state) value.
\subsubsection{\underline{Strip along `x' boost along `y'}}
In this case all the integrals for $e_1$ are the same as those of the previous case, with $C,~D$ replaced by $\tilde C,~\tilde D$ and $C',~D'$ replaced by $\tilde C',~\tilde D'$ (see appendix \ref{Pert}). However, in this case the inhomogeneous part of the $e_0$ equation is non-trivial; hence the extremal surface does not remain on the same time slice. The equation is,
\begin{gather}
4\sin^2{\theta}~\partial^2_\theta\alpha^0+2\sin{\theta}\cos{\theta}~\partial_{\theta}\alpha^0+{z_*}^2\sin{\theta}~\partial^2_\lambda\alpha^0-2\alpha^0=-3z_*^3(\sin{\theta})^{\frac{5}{2}}\tilde B\cos{\theta},
\end{gather}
which following the previous arguments reduces to solving only the equation,
\begin{gather}
{d^2\Theta^0\over d\theta^2}+\frac{1}{2}\cot{\theta}{d\Theta^0\over d\theta}-\frac{1}{2}\text {cosec~}^2\theta\Theta^0=-\frac{3}{4}z_*^3(\sin{\theta})^{\frac{1}{2}}\tilde B\cos{\theta}
\end{gather}
The solution of this can be obtained in a straightforward manner, and we therefore do not have to resort to the efforts made in the previous section. The full solution turns out to be of the form,
\begin{gather}
\Theta^0=\frac{-\tilde B z_*^3 \theta}{4\sqrt{\sin\theta}}+\frac{\tilde B z_*^3 \sin2\theta}{8\sqrt{\sin\theta}}-\frac{2 C_1 E\bigg(\left.\frac{1}{4} (\pi -2 \theta)\right|2\bigg)}{\sqrt{\sin (\theta)}}+\frac{C_2}{\sqrt{\sin (\theta)}}
\end{gather}
Imposing the conditions $\Theta=0$ at $\theta=0$ and $\theta=\pi$ fixes $C_1$ and $C_2$ to,
\begin{gather}
C_1=\frac{\pi \tilde B z_*^3}{8 \sqrt{2} \left(2 E\left(\frac{1}{2}\right)-K\left(\frac{1}{2}\right)\right)},~~C_2=\frac{1}{8} \pi \tilde B z_*^3,
\end{gather}
where $K(\alpha)$ and $E(\alpha)$ are the complete elliptic integrals of the first and second kind, respectively. The contributions coming from the component $\alpha^1$ of the deviation vector turn out to be the same as those in the previous section with $C,~D$ replaced by $\tilde C,~\tilde D$ and $C',~D'$ replaced by $\tilde C',~\tilde D'$. The only other contribution different from the previous case comes from $-Tr(C)$ for the component $\alpha^0$ of the deviation vector and evaluates to,
\begin{gather}
\frac{2L ~\pi ^{3/2} (21 \pi -80) B^2 z_{*}^5}{336 \sqrt{2} \Gamma \left(\frac{3}{4}\right)^2}
\end{gather}
The total variation $\Delta^{(2)}S$, excluding the previous term, is then given by,
\begin{gather}
\Delta^{(2)}S=\frac{2L\times l^5 }{4 G_N z_0^6}~~\frac{\Gamma \left(\frac{1}{4}\right)^5}{\Gamma \left(\frac{3}{4}\right)^7}\left(\frac{(20-21 \pi ) \beta ^4 \gamma ^4+2 (40-21 \pi ) \beta ^2 \gamma ^2+2 (21 \pi -80) \beta \gamma ^4+(48-21 \pi) }{21504 \sqrt{2} \pi }\right)
\end{gather}
As in the previous case, $\Delta^{(2)}A\leq 0$. This can be checked by plotting $\Delta^{(2)}A$ against $\beta$ (see Fig-2). It is negative and monotonically decreasing as a function of $\beta$. It is important to note that the boost-independent term in the expression for $\Delta^{(2)}S$ is the same in both cases. Setting the boost to zero makes both cases identical to the AdS black brane geometry.
\begin{figure}[h]
\centering
\begin{overpic}[width=10cm]{Thinstripcase2.eps}
\put(-11,10){\small$\Delta^{(2)}A$}
\put(80,60){\small$\beta\rightarrow$}
\end{overpic}
\caption{\it Plot of $\Delta^{(2)}A~~\text{vs}~~\beta$ for strip along $x$ boost along $y$}
\end{figure}
The first order change in HEE is given by
\begin{gather}
\Delta^{(1)}S={1\over 4G_N}\Delta^{(1)}A=\frac{2L\times l^2}{4 G_N z_0^3}\frac{ (1+\beta^2\gamma^2)\Gamma \left(\frac{1}{4}\right)^2}{32~ \Gamma \left(\frac{3}{4}\right)^2}
\end{gather}
Thus the full expression for change in HEE is then given by
\begin{gather}
\Delta S=\Delta^{(1)}S+\frac{1}{2}\Delta^{(2)}S\notag\\
=\frac{2L\times l^2}{4 G_N z_0^3}\frac{ (1+\beta^2\gamma^2)\Gamma \left(\frac{1}{4}\right)^2}{32~ \Gamma \left(\frac{3}{4}\right)^2}\notag\\+\frac{2L\times l^5 }{8 G_N z_0^6}~~\frac{\Gamma \left(\frac{1}{4}\right)^5}{\Gamma \left(\frac{3}{4}\right)^7}\left(\frac{(20-21 \pi ) \beta ^4 \gamma ^4+2 (40-21 \pi ) \beta ^2 \gamma ^2+2 (21 \pi -80) \beta \gamma ^4+(48-21 \pi) }{21504 \sqrt{2} \pi }\right)
\end{gather}
The above expression gives the net change in HEE for the strip entangling surface, up to second order, over the pure $AdS$ (ground state) value.
\section{Issues of Gauge dependence}
The $\Phi_\lambda$'s in section \ref{DOJE} are called the identification maps. They encode the information about how points in the perturbed and the unperturbed spacetimes are to be identified. The notion of gauge transformation can be shown to arise from different choices of the $\Phi_\lambda$'s. It is evident that the identification maps can be so chosen that the location of the perturbed minimal surface in the unperturbed spacetime is the same as that of the unperturbed minimal surface. This is precisely the interpretation of the Hollands-Wald gauge \cite{Hollands:2012sf} used in \cite{Lashkari:2015hha,Faulkner:2017tkh,Beach:2016ocq}. It seems that this can in general be done at any order of perturbation, not just at the linear order. Further, it may seem that such a choice of gauge renders the inhomogeneous term in the Jacobi equation trivial and therefore irrelevant. We must emphasize that this is not the case. In order to find the Hollands-Wald gauge (at linear order) one has to solve a linear second order differential equation, which is precisely the inhomogeneous Jacobi equation. This has also been pointed out in \cite{Mosk:2017vsz}. Therefore choosing the Hollands-Wald gauge does not trivialize the problem of finding the change in area. However, the Hollands-Wald gauge may well be a convenient choice if one tries to find identities satisfied by the higher order perturbations of the area functional, or relations between gauge independent quantities like the `Fisher information' and the canonical energy \cite{Lashkari:2015hha}.
Having discussed this, it is reasonable to state that the inhomogeneous equation is gauge covariant. In other words, any gauge transformation of the metric perturbation can be absorbed in a shift of the deviation vector itself. This conclusion follows plausibly from the following lemma due to \cite{Stewart:1974uz}. The linear perturbation $Q_1$ of a quantity $Q_0$ on $(M, g)$ is gauge invariant if and only if one of the following holds:
(i) $Q_0$ vanishes,
(ii) $Q_0$ is a constant scalar,
(iii) $Q_0$ is a constant linear combination of products of Kronecker deltas.
In our case $Q_0$ is the mean curvature $(H)$ of the extremal surface in the background spacetime and hence vanishes identically. However, there is a subtle issue in applying the above lemma to our case: the quantities $Q$ defined in the lemma are globally defined, while $H$ is defined locally on a codimension-two surface. The expression for the second variation of the area functional is, however, invariant under different choices of $\Phi_{\lambda}:M\rightarrow M_{\lambda}$.
\section{Conclusion}
A few comments about higher order perturbations are in order. As is usual with any perturbation theory, the homogeneous part of the second order perturbation equation is the same as that of the Jacobi equation. However, the inhomogeneous term now depends both on second order perturbations and on first order deviations. Note that the second order deviation vector $M$ (say) can always be taken to commute with $N$, owing to the fact that they represent independent variations. Since the normal bundle is two dimensional, one can have at most two mutually commuting directions. Hence it seems that the perturbation series will terminate at second order, and the complete change of entanglement entropy can be obtained by exponentiating the change up to second order. This is however speculative and requires further investigation. We have presented a systematic approach to obtain the change in HEE up to second order. For simplicity we have calculated this in $4$ dimensions, but the approach remains unchanged in higher dimensions. The inhomogeneous Jacobi equation and the second variation of the area functional presented here can be applied to non-$AdS$ geometries as well. In fact, the Jacobi operator simplifies in the asymptotically flat case. We have seen that the second order change receives contributions from first order changes in the embeddings and from the second order change of the bulk metric. In this approach the nature of the flow of the extremal surface can be understood by looking at the components of the deviation vector. Further, having obtained the second variation, one can check whether more general entropy bounds \cite{Bekenstein:1980jp,Hod:2000ju,Bekenstein:1999cf,Park:2015hcz} hold, or whether it bears any relation to geometric inequalities \cite{Dain:2011mv} in general.
Note added: While this manuscript was being completed, a relevant work \cite{Mosk:2017vsz}, which takes an iterative approach to calculating the higher order change in HEE, appeared. The expression used there for the second variation of the area is similar to the one obtained by us. A specific choice of gauge (the Hollands-Wald gauge) was made to obtain certain results for the spherical subsystem. In our work, however, we use the Fefferman-Graham gauge and explicitly solve the inhomogeneous equation for both spherical and strip subsystems, with boosted black brane like perturbations, and evaluate the area variation up to second order.
\section{Acknowledgements}
The authors would like to thank Amit Ghosh, Lucas Ambrozio and Harvendra Singh for discussions. AG acknowledges Sudipta Sarkar for helpful discussions and for pointing out reference \cite{Mosk:2017vsz} on the very day it appeared on arXiv. AG also acknowledges Atul Dixit for his course on special functions, and the hospitality at SINP, where part of this work was done. Most of the integrals and solutions of the differential equations were obtained using Maple \cite{maple} and Mathematica \cite{mathematica}. Tensor calculations were done with the GRTensor package \cite{grtensor} on a Maple platform. AG is supported by SERB, Government of India, through the NPDF grant (PDF/2017/000533).
\section{Introduction}
Literature analysis plays an important role in scientific and technological innovation. It enables researchers to comprehensively understand the achievements in related fields, including the latest advances and future development trends. How to make it easier and faster for researchers to extract key relevant information and discover useful knowledge from literature has started to receive attention in recent years. \par
With the rapid development of science and technology, the number of scientific papers in the literature has grown exponentially. Traditional literature analysis methods apply manual retrieval and statistical analysis to such a huge number of papers, which is rather time-consuming and labor-intensive. This has consequently necessitated the application of text mining and related technologies for automated processing in literature analysis. \par
Literature analysis includes abstract analysis \cite{Tshitoyan2019Unsupervised}, keyword extraction \cite{Gollapalli2017Incorporating}, the analysis of cooperation relationships among authors \cite{Liao2018A}, etc. Conventional literature analysis efforts mainly focus on analyzing topics, authors, abstracts, keywords, references, etc., rather than the main content of papers. Yet, we observe that the methods and datasets involved in papers in many domains, such as science, computing and engineering, are also important in the scientific literature, as they provide readers with key entities reflecting the methods and datasets used in the papers. For example, in the sentence \textit{``As shown later, GRUs gave better results than LSTMs in our settings"}, the method entities are \textit{``GRUs"} and \textit{``LSTMs"}, and in another sentence, \textit{``Finally we report results on CoNLL NER datasets"}, \textit{``CoNLL NER"} is labeled as a dataset entity. \par
As a relatively new research problem in literature analysis, method and dataset mining can effectively complement conventional literature analysis and reveal the development trends of methods and datasets and their complex relationships. For example, scientometrics based on method and dataset mining in the scientific literature provides a supplement to existing techniques based on mining the metadata of research papers, such as authors and keywords. It can also support more accurate algorithm and/or dataset recommendations if the relationships between existing method and dataset entities are well established. There has been a relatively small body of reported research on method and dataset mining in scientific papers up till now. Kovacevic et al. \cite{Kovacevic2012Mining} and Houngbo et al. \cite{Houngbo2012Method} adopt a CRF structure to extract entities that contain methods or other semantic entities. Zha et al. \cite{Zha2019Mining} propose a cross-sentence attention network for a comparative relation model to extract algorithms and algorithm relationships from text for mining an algorithm roadmap. \par
The primary challenge of method and dataset mining in literature lies in the accurate extraction of the method and dataset entities, where NLP and text mining techniques are the major means to be applied. In recent years, deep neural network methods have become increasingly popular in text mining, as they can generate dense vector representations of sentences without handcrafted features, which significantly streamlines text learning and analytic tasks \cite{Zhang2017Coupling}. Works from several fields have reported using deep learning methods to extract data from literature in chemistry \cite{Luo2018An}, biology \cite{Li2018Recognizing} and medicine \cite{Ji2018A}. However, these works do not pay much attention to method and dataset entities, and their methods cannot directly solve the task in question. Our task involves more fine-grained entity extraction, focusing specifically on method and dataset mining in the scientific literature. \par
In this paper, we propose a novel model, called MDER (Method and Dataset Entity Recognition), for method and dataset mining in literature. The model utilizes rule embedding and adopts a parallel structure of CNN and Bi-LSTM with the self-attention mechanism. To facilitate the training and testing of MDER, datasets are generated from four research areas of computer science, including NLP, CV, Data Mining and AI. By conducting comprehensive experiments on our model, we obtain interesting findings that can provide good guidance for future practical applications of our model. \par
The main contributions of this paper are summarized as follows: \par
\begin{itemize}
\item We propose MDER, a novel entity recognition framework that incorporates the rule embedding technique and a CNN-BiLSTM-Attention-CRF structure for mining method and dataset entities in scientific literature. MDER is a semantic-based deep learning extraction model which can capture the semantic relationships between words within sentences. It incorporates the advantages of multiple components and makes the learning more effective: rule embedding reduces the learning burden of the model, CNN and BiLSTM help capture structural information according to the current context, and the self-attention mechanism pays more attention to the important words related to the target entities in sentences.
\item We evaluate MDER on datasets from multiple different research areas of computer science, and the model shows great transfer learning capacity among datasets from different areas. In particular, the model trained on the mixed dataset has the best transferability and generalization performance. Our model also outperforms the state-of-the-art approaches. The ablation experiment demonstrates the effectiveness of each building module of our model in collectively contributing to the good recognition performance of method and dataset entities in literature; \par
\item Through data augmentation, MDER is capable of effectively dealing with the scenarios where the number of training paper samples is limited, making our model more robust;
\item A long-term literature analysis using MDER to extract entities from PAKDD papers published from 2009 to 2019 shows that our model is effective in analyzing the intrinsic relationships among different methods and datasets and their development trends over a long time span.
\end{itemize}
\section{Related Work}
In this section, we will review related works on literature analysis and named entity recognition.
\subsection{Literature Analysis}
Literature analysis and mining refer to the automatic processing and analysis of information from a large number of literature documents. Academic literature, such as published scientific papers, has unique structural characteristics and differs from other types of literature (news, blogs, web pages, etc.). An academic paper is composed of title, abstract, keywords, main body, and references, where the main body usually includes introduction, related work, methods, experiments and conclusions. Early research focuses on mining the bibliographic information (title, authors, references, etc.) of academic literature to study the subject content \cite{Kim2010Semeval, Tan2016Acemap}, such as keyword extraction for topics, heat analysis of topics, subject classification, author nationality distribution, etc. At the same time, plenty of works have been conducted on abstract keyword analysis, citation relationship analysis, and so on \cite{Gollapalli2017Incorporating, Qazvinian2010Citation, Tan2016Acemap}. Although those approaches seem to work well in summarizing academic papers by investigating bibliographic content, they may miss a large amount of valuable knowledge hidden in the body of the papers.\par
Knowledge extraction aims to extract various kinds of knowledge information (called knowledge elements) by understanding, recognizing, and screening the knowledge contained in the literature. Knowledge elements are the basic units and structural elements that make up knowledge; they supplement the literature metadata (i.e. title, author, abstract, keywords, etc.) and describe the structure of the document. They generally characterize the literature in terms of words, phrases, concepts, and terms, such as research categories, methods, data, indicators, index values, etc. In general, there are four types of knowledge extraction methods: (1) manual annotation-based extraction methods \cite{Augenstein2017Semeval}; (2) pattern-matching based rule extraction methods \cite{Singh2017App}; (3) ontology-based statistical extraction methods \cite{Lin2017Disorder, Okamoto2017Applying}; and (4) deep learning extraction methods \cite{Wagstaff2018Mars, Basaldella2018Bidirectional}. The first three types of methods, which rely on feature engineering, are labor-intensive and reflect the coarse semantic granularity of topics and terms. They are applied to construct domain ontologies and reveal a domain development overview, but cannot provide fine-grained and refined services. Kovacevic et al. \cite{Kovacevic2012Mining} use CRF to identify the four semantic categories of Task, Method, Resource/Feature and Implementation. Houngbo et al. \cite{Houngbo2012Method} adopt a rule-based technique and CRF to extract method terms, such as algorithm, technique, analysis, approach and method, from a biomedical corpus. The last category of methods, employing deep learning frameworks, provides insight into the latent semantics of the literature context. These semantic-based deep learning extraction methods expand from literature metadata, topic and term extraction to semantic annotation of natural language. Zha et al. \cite{Zha2019Mining} propose a cross-sentence attention network for a comparative relation model to extract algorithms and algorithm relationships from scientific publications for mining an algorithm roadmap.\par
In this paper, we mainly focus on the methods and datasets appearing in the experimental sections of research publications and use deep learning models to recognize methods and datasets.
\subsection{Named Entity Recognition}
The research work on named entity recognition (NER) \cite{Grishman1996Message}, a fundamental research topic in natural language processing (NLP), has been developed for more than two decades. The NER task aims to recognize named referential items in text and classify them into pre-defined, meaningful categories such as names of people, places, and organizations \cite{Yadav2018A}. In specific fields, various types of entities are defined in the respective domains. Early studies on NER mainly utilize handcrafted rules with dictionaries and statistical methods. The rule-based methods \cite{Kim2000A,Sekine2004Definition} use rules built manually by linguistic experts to determine the entity type according to the matching degree between rules and entities. The statistical methods \cite{Kravalova2009Czech} mine and analyze the linguistic information contained in the training corpus with statistical models and extract features from the training corpus, including word features, context features, and semantic features. These methods are not only unable to cover all linguistic scenarios but also time-consuming and labor-intensive.\par
In recent years, deep neural network models have received more and more attention in text mining because they can learn dense feature representations from raw sequences, instead of using handcrafted features manually engineered by experts \cite{Lample2016Neural, Yang2018Design}. The feature vectors are low-dimensional word representations with rich semantic information. For example, a deep network model, BiLSTM-CRF, was introduced to solve sequence labeling problems for word-level vector representations \cite{Huang2015Bidirectional}. Since then, the BiLSTM structure and the CRF structure have been extensively used in the NER task \cite{Dong2016Character}. In Kim's paper \cite{Kim2016Character-aware}, a convolutional neural network (CNN) was employed over characters to form character representations. Compared with word-based models, character-based models \cite{Li2017Leveraging, Yang2017Neural} perform better because they can better handle input sequences containing unusual characters or out-of-vocabulary words \cite{Strubell2017Fast, Chen2019GRN}. CNN and BiLSTM are usually used to extract character-level morphological information (such as the prefix or suffix of a word). Ling et al. \cite{Ling2015Finding} build vector representations of words by joining character representations obtained through a BiLSTM. Chiu et al. \cite{Chiu2016Named} and Ma et al. \cite{Ma2016End} concatenate word embeddings and character embeddings, the latter obtained through a CNN structure, as the input vector representations of the model. Moreover, the attention mechanism can enhance the word representation by concentrating on the key parts of a sentence \cite{Zukovgregoric2017Neural, Luo2018An}. However, the models proposed in these works are relatively simple, using only the embeddings of characters and words or the attention mechanism. Usually, these models connect CNN and LSTM in series, which might lose some information in the transmission process, and they are not very effective in capturing the connections between words and the dependencies between tags. In our paper, CNN and LSTM are used in parallel, which allows the output representation to retain more information, and the multiple components are integrated by adding self-attention and CRF, which better characterize the connections between different words and the dependencies between tags. \par
There have been also a few surveys which focus on NER systems for specific domains and languages, including biomedical NER \cite{Gridach2017Character, Wei2019Named}, Chinese clinical NER \cite{Wang2019Incorporating, Li2019An} and chemical NER \cite{Korvigo2018Putting}.
\section{Our Model}
In this section, we will elaborate on the model we propose for recognizing methods and datasets from scientific literature. We start this section by first introducing the necessary notational conventions and the formulation of our research problem as follows.\par
Given a sentence $\{w_1, w_2, \cdots, w_n\}$, we can define its character-level sequence as $\{c_{11},c_{12},\cdots,c_{ij},\cdots$ $,c_{nm_n}\}$, where $w_i$ represents the $i$-th word, $n$ is the length of the sentence, $c_{ij}$ denotes the $j$-th character of the $i$-th word and $c_{nm_n}$ denotes the $m_n$-th character of the $n$-th word. For the sake of simplicity, we rewrite the character-level input sequence of the given sentence as $\{x_1,x_2,\cdots,x_i,\cdots,x_m\}$, where $x_i$ is the $i$-th character of the sequence with a length of $m$. Our aim is to identify the tags (e.g. $\emph{M}$, $\emph{D}$, or others) of all characters of each word \cite{Dong2016Character, Akbik2018Contextual}. To perform the task, we construct an entity recognition classifier that recognizes the entities in the sentence. We note that NER was first proposed as a word-level tagging problem, and most existing datasets use word-level tags to denote named entity phrases. However, as mentioned before, we treat a sentence as a sequence of characters in this paper and our model is designed to predict the tag of each character. Therefore, we choose to use character-level tags.
As a major advantage, character-level tagging can, to a large extent, avoid the appearance of new words when predicting character tags. We apply the BIO (Begin, Inside, Outside) tagging scheme to characters. Let $L=\{B-M, I-M,B-D,I-D,O,padding\}$ denote the tagging set; ``B-M" and ``B-D" denote the tags of the beginning character of a method entity and a dataset entity, respectively, while ``I-M" and ``I-D" denote the tags of an inside character of a method entity and a dataset entity, respectively. If a character subsequence in a sentence constitutes a named entity (NE) phrase, each character in that subsequence receives a tag composed of the position indicator (B, I) and the NE type (M, D). Otherwise, characters are assigned the outside tag (O). For example, the tag sequence of the method entity ``LSTM" is ``B-M, I-M, I-M, I-M". The goal of our work is essentially to map each sentence into a sequence of character-level tags. As a comparison, Figure \ref{Figure: 1 example} shows the difference between word-level and character-level tags for an example sentence.\par
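To make the tagging scheme concrete, the following minimal Python sketch (an illustration only; the function name and toy inputs are our assumptions, and delimiters between words are not shown) expands word-level NE tags into the character-level BIO tags described above:
\begin{verbatim}
# Expand word-level NE tags into character-level BIO tags.
def char_bio_tags(words, word_tags):
    tags = []
    for word, t in zip(words, word_tags):
        if t == "O":
            tags.extend(["O"] * len(word))   # non-entity word
        else:
            tags.append("B-" + t)            # first character of the entity
            tags.extend(["I-" + t] * (len(word) - 1))
    return tags

print(char_bio_tags(["LSTM"], ["M"]))
# ['B-M', 'I-M', 'I-M', 'I-M']
\end{verbatim}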
\begin{figure}
\centering
\includegraphics[scale=.64]{example.pdf}
\caption{An example sentence with word level and character level NER tags.}
\label{Figure: 1 example}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.25]{MDERmodel.pdf}
\caption{The architecture of our MDER model.}
\label{Figure:2 MDER model}
\end{figure}
Our model combines the rule-based technique and a new deep neural network structure. The overall architecture of the proposed model is illustrated in Figure \ref{Figure:2 MDER model}. It consists of the input embedding layer, the BiLSTM-CNN layer, the self-attention mechanism layer and the CRF output layer. Rule embedding can reduce the learning burden of the model and make the learning more efficient. CNN is a useful supplement to BiLSTM which helps capture the structural information according to the current context. The self-attention mechanism pays more attention to the important words related to the target entities in sentences and interactive information between different words. CRF considers the correlations between tags in neighborhoods and decodes the best tag chain jointly. Our model incorporates the advantages of multiple components and makes the learning more effective. Next, we will explain all components in the model sequentially from the input to the output.
\subsection{Input Embedding Layer}
The input embedding layer contains both the rule embedding and character embedding. The rule embedding is based on the concept of user-specified blacklist and whitelist which reflect the regular pattern of special entities in the specific scientific domain in question, while the character embedding directly transforms the words in literature by mapping each character to a high-dimensional vector space.
\subsubsection{Rule Embedding}
We construct one blacklist containing some general words and two whitelists of entities. The tags of the characters in the blacklist and whitelists are regarded as additional supervised information which helps train the model parameters. The whitelist of methods contains some common method entities, such as ``SVM", whose character tags are B-M, I-M and I-M, respectively. The whitelist of datasets contains some known dataset entities, such as ``Wiki", whose tagging sequence is \{B-D, I-D, I-D, I-D\}. The blacklist contains some general words such as ``the", with character tags \{O, O, O\}. The characters of words that do not belong to the blacklist or whitelists are set to unknown. \par
For rule embedding, each character adopts the aforementioned blacklist and whitelist tagging method. Then, each $x_i$ is represented as $x_i^r=e^r(x_i)$, where $e^r\in \mathbb{R}^{d_r\times m}$ denotes a rule embedding lookup table matrix obtained through model learning, and $x_i^r\in \mathbb{R}^{d_r\times 1}$. $d_r$ is the dimension of the rule embedding of each character.
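As an illustration, the rule tagging can be sketched as a simple per-word lookup (the lists below are toy examples; the actual blacklist and whitelists are user-specified):
\begin{verbatim}
METHOD_WHITELIST = {"SVM", "LSTM"}   # toy method whitelist
DATASET_WHITELIST = {"Wiki"}         # toy dataset whitelist
BLACKLIST = {"the", "a", "of"}       # toy blacklist of general words

def rule_tags(word):
    if word in METHOD_WHITELIST:
        return ["B-M"] + ["I-M"] * (len(word) - 1)
    if word in DATASET_WHITELIST:
        return ["B-D"] + ["I-D"] * (len(word) - 1)
    if word in BLACKLIST:
        return ["O"] * len(word)
    return ["UNK"] * len(word)       # characters of unlisted words

print(rule_tags("SVM"))   # ['B-M', 'I-M', 'I-M']
print(rule_tags("the"))   # ['O', 'O', 'O']
\end{verbatim}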
\subsubsection{Character Embedding}
For character $x_i$, its character embedding is represented as $x_i^c:=e^c(x_i)$, obtained by searching the character embedding lookup table matrix $e^c\in \mathbb{R}^{d_c\times m}$, where $x_i^c\in \mathbb{R}^{d_c\times 1}$ and $d_c$ is the dimension of the character embedding of each character.\par
The final feature representation of a character is the concatenation of its character embedding and its rule embedding as $x'_i=[x_i^r; x_i^c]$. For an input sentence $[x'_1, x'_2, \cdots, x'_m]$, each character representation is an embedding vector $x'_i\in \mathbb{R}^{(d_r+d_c)\times 1}$, where $d_r+d_c$ is the dimension of the final character vectors. \par
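The whole input embedding layer can be sketched in a few lines of PyTorch (our choice of framework for illustration only; our model is implemented in TensorFlow, and the toy vocabulary sizes are assumptions, while $d_r=40$ and $d_c=200$ follow the settings reported in Section 4.2):
\begin{verbatim}
import torch
import torch.nn as nn

n_chars, n_rule_tags, d_r, d_c = 128, 6, 40, 200  # toy vocabulary sizes
rule_emb = nn.Embedding(n_rule_tags, d_r)  # learned lookup table e^r
char_emb = nn.Embedding(n_chars, d_c)      # learned lookup table e^c

char_ids = torch.randint(0, n_chars, (1, 600))      # one padded sentence
rule_ids = torch.randint(0, n_rule_tags, (1, 600))  # rule tag per character
x = torch.cat([rule_emb(rule_ids), char_emb(char_ids)], dim=-1)
print(x.shape)  # (1, 600, 240) == (batch, m, d_r + d_c)
\end{verbatim}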
\subsection{BiLSTM-CNN Layer}
In the subsequent embedding layer, character representations $[x'_1, x'_2, \cdots, x'_m]$ are fed into both a BiLSTM structure and a CNN structure. For the BiLSTM-CNN layer, the input is the character sequence of one sentence as $\{x_1,x_2, \cdots, x_i,$ $\ldots, x_m\}$.
\subsubsection{Bidirectional Long Short-Term Memory (BiLSTM)}
Bidirectional Long Short-Term Memory (BiLSTM) \cite{Graves2005Framewise} has been shown to be powerful in capturing the dependencies of the input sequence. We employ a two-layer BiLSTM on top of the embedding layer. The output of the first layer can capture more syntactic information, while that of the second layer can learn more semantic information. \par
In our model, the forward pass of a unidirectional LSTM \cite{Jozefowicz2015An, Hochreiter1997Long} at the $t$-th input character is calculated as follows: \par
\begin{align}\label{1-6}
i_t&=\sigma(W_i[h_{t-1}; x_t]+b_i) \\
f_t&=\sigma(W_f[h_{t-1}; x_t]+b_f) \\
o_t&=\sigma(W_o[h_{t-1}; x_t]+b_o) \\
\tilde{c_t}&=\tanh (W_c[h_{t-1}; x_t]+b_c) \\
c_t&=f_t*c_{t-1}+i_t*(\tilde{c_t}) \\
h_t&=o_t*\tanh (c_t)
\end{align}
where $W_i$, $W_f$, $W_o$ and $W_c$ are the parameter matrices to be learned, $b_i$, $b_f$, $b_o$ and $b_c$ are the bias parameters, $\sigma$ is the logistic sigmoid activation function, tanh is the hyperbolic tangent function, $\ast$ denotes element-wise multiplication, $x_t$ is the input at the current time $t$, $h_{t-1}$ is the output of the hidden layer at timestamp $t-1$ and $h_t$ is the output of the hidden layer at the current time $t$. $i_t$, $f_t$ and $o_t$ are the input gate, forget gate and output gate, respectively, $c_t$ and $c_{t-1}$ are the current cell state and the cell state at the previous moment, respectively, and $\widetilde{c_t}$ is the temporary cell state at the current time $t$. \par
After feeding the character representations to the first layer of the two-layer BiLSTM, the BiLSTM units generate the forward hidden states $\{\overrightarrow{h_1^1}, \ldots, \overrightarrow{h_m^1}\}$ and the backward hidden states $\{\overleftarrow{h_1^1}, \ldots,\overleftarrow{h_m^1}\}$, where $\overrightarrow{h_t^1}, \overleftarrow{h_t^1} \in \mathbb{R}^{d_h\times1}$. By concatenating the two hidden states, the first layer of the BiLSTM outputs the intermediate result $h_t^1=[\overrightarrow{h_t^1};\overleftarrow{h_t^1}]$, which is also the input of the second layer of the BiLSTM at timestamp $t$. $\{h_1^1, \ldots,h_m^1\}$ go through the same operations as above to produce the final output of the second BiLSTM layer. Finally, the forward hidden state $\overrightarrow{h_t^2}$ and the backward hidden state $\overleftarrow{h_t^2}$ are concatenated as the final output representation of the two-layer BiLSTM: \par
\begin{align}\label{7,8}
h_t&=[\overrightarrow{h_t^2};\overleftarrow{h_t^2}] \\
H&=\{h_1, \ldots,h_m\}
\end{align}
where $h_t\in \mathbb{R}^{2d_h\times 1}$, $H\in \mathbb{R}^{2d_h\times m}$ and $d_h$ denotes the number of hidden units.
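A minimal PyTorch sketch of this encoder (again an illustration only; variable names are ours) is:
\begin{verbatim}
import torch
import torch.nn as nn

d_in, d_h = 240, 200   # input dim d_r + d_c, hidden units per direction
bilstm = nn.LSTM(input_size=d_in, hidden_size=d_h, num_layers=2,
                 bidirectional=True, batch_first=True)

x = torch.randn(1, 600, d_in)  # (batch, m, d_r + d_c)
H, _ = bilstm(x)               # forward/backward states concatenated
print(H.shape)                 # (1, 600, 400) == (batch, m, 2 * d_h)
\end{verbatim}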
\subsubsection{Convolutional Neural Network (CNN)}
Previous works have shown that Convolutional Neural Network (CNN) is an effective approach to extract character-level morphological information from characters of words \cite{Santos2014Learning, Chiu2016Named}. Therefore, we use CNN in our work to capture this structural information according to the current context and encode characters into neural representations.
We use \emph{k} convolution kernels with size $p\times q$ and the convolution stride $s\times t$ to execute a convolution operation on inputs $[x'_1, x'_2, \ldots, x'_m]$ and obtain $k$ feature maps. Then, the feature maps are operated by the Rectified Linear Unit (ReLU) activation function. The ReLU function is computed as follows: \par
\begin{align}\label{9}
relu(e)=
\begin{cases}
0, & \text{if $e\leq 0$}\\
e, & \text{if $e> 0$}
\end{cases}
\end{align}
where $e$ is an element of one feature map. Each element of a feature map is fed into Formula \ref{9} to produce a new feature map. Then, by applying max pooling to the new feature maps $M^i$, we obtain the flattened representation of each new feature map as
\begin{align}\label{10}
c^i={MaxPooling}_{t=1,\cdots,m}(M^i)
\end{align}
where $c^i\in \mathbb{R}^m, i=1,2,\cdots, k$. \par
Finally, we stack $k$ feature vectors together. The output vector $h'_t \in \mathbb{R}^{d_{cnn}\times 1}$ of CNN corresponding to each character is obtained by concatenating the same position of feature representations $\{c^1, c^2,\cdots, c^k\}$. In this way, we obtain the final representations $\{h'_1, h'_2,\cdots, h'_m \}$ of CNN layer. Note that here $d_{cnn}$ is equal to $k$. \par
The output vector $h_t$ of BiLSTM and the output vector $h'_t$ of CNN are concatenated as the input of the next layer as
\begin{align}\label{11, 12}
g_t&=[h_t;h'_t]\\
G&=\{g_1,\cdots,g_m\}
\end{align}
where $g_t\in \mathbb{R}^{(2d_h+d_{cnn})\times 1}$ and $G\in \mathbb{R}^{(2d_h+d_{cnn})\times m}$.
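One plausible reading of this CNN branch can be sketched as follows (an illustration only; in particular, pooling over the stride-reduced embedding axis so that each character keeps a $k$-dimensional feature is our assumption):
\begin{verbatim}
import torch
import torch.nn as nn

k, m, d = 30, 600, 240                  # k kernels, sentence length, emb dim
conv = nn.Conv2d(1, k, kernel_size=(1, 1), stride=(1, 2))

x = torch.randn(1, 1, m, d)             # (batch, channel, m, d)
feats = torch.relu(conv(x))             # (1, k, m, d // 2) after stride 1x2
h_cnn = feats.max(dim=-1).values        # max pooling over embedding axis
h_cnn = h_cnn.permute(0, 2, 1)          # (1, m, k): one h'_t per character
print(h_cnn.shape)                      # (1, 600, 30), with d_cnn == k
\end{verbatim}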
\subsection{Self-attention Layer}
To capture the interactive information between the context and the character, we employ a self-attention mechanism \cite{Vaswani2017Attention, Tan2018Deep} in our model. The self-attention mechanism can capture the long-range dependencies between tokens and the contextual information in a sequence. It selectively pays more attention to important information by assigning it higher weights, while giving lower weights to other information. We now describe how we obtain the attention vector representation. We define:\par
\begin{align}\label{13-15}
Q&=G^TW^Q\\
K&=G^TW^K\\
V&=G^T
\end{align}
where $W^Q, W^K\in \mathbb{R}^{(2d_h+d_{cnn})\times d_Q}$ are parameters to be learned during training. For each character representation in the sequence, we take the dot product of its linear transformation vector and the linear transformation vector of every character representation in the sequence to calculate un-normalized attention weights. We apply the softmax to the weights to produce the normalized attention weight matrix $\boldsymbol{\alpha}$, where $\boldsymbol{\alpha}\in \mathbb{R}^{m\times m}$ and softmax(·) is a column-wise normalizing function. The standard self-attention mechanism is computed as follows: \par
\begin{align}\label{16}
\boldsymbol{\alpha}=softmax(QK^T)
\end{align}\par
Then, we use the attention weights $\boldsymbol{\alpha}$ to create a weighted sum across all output vector of the BiLSTM-CNN layers for attention vector representation $h_i^a$:
\begin{align}\label{17-19}
&H^a=Attention(Q,K,V)=\boldsymbol{\alpha}V\\
&i.e.\ h_i^a=\sum_{j=1}^{m}{\alpha_{ij}V_j}\\
&\sum_{j=1}^{m}\alpha_{ij}=1, \forall i=1, 2, \ldots, m
\end{align}
where $h_i^a$ denotes the $i$-th component of $H^a$, and $h_i^a\in \mathbb{R}^{1\times (2d_h+d_{cnn})}$. The attention weight $\alpha_{ij}$ indicates how much attention position $i$ pays to position $j$. $H^a=(h_1^a, \cdots, h_m^a)$ is the set of attention vector representations, each of which captures information from the whole sentence.
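The attention computation above amounts to the following short sketch (toy random tensors; here $d_g=2d_h+d_{cnn}$):
\begin{verbatim}
import torch

m, d_g, d_q = 600, 430, 400           # d_g = 2 * d_h + d_cnn
G_T = torch.randn(1, m, d_g)          # BiLSTM-CNN outputs for one sentence
W_Q = torch.randn(d_g, d_q)           # learned projection W^Q
W_K = torch.randn(d_g, d_q)           # learned projection W^K

Q, K, V = G_T @ W_Q, G_T @ W_K, G_T
alpha = torch.softmax(Q @ K.transpose(1, 2), dim=-1)  # each row sums to 1
H_a = alpha @ V                       # attention representations (1, m, d_g)
print(H_a.shape)
\end{verbatim}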
\subsection{Linear Layer}
Next, a linear layer is added on top of the attention layer to produce the probability score of each character. The final output representation of the self-attention layer is computed by a fully connected layer which maps $h_i^a$ into the tagging space of $|L|$ classes:
\begin{align}\label{20}
Z=H^aW^a+b^a
\end{align}
where $Z, b^a\in \mathbb{R}^{m\times |L|}$ and $W^a\in \mathbb{R}^{(2d_h+d_{cnn})\times |L|}$. $b^a$ is the bias matrix, $W^a$ is the transformation matrix and $|L|$ is the size of the tagging set ($|L|=6$).
\subsection{CRF Output Layer}
A conditional random field (CRF) \cite{Lafferty2001Conditional} is a random field globally conditioned on the observation sequence and has been widely used in feature-based supervised learning approaches. Many deep learning based NER models use a CRF layer as the tag decoder because of its ability to consider the correlations between tags in neighborhoods and to decode the best tag chain jointly \cite{Zheng2017Joint, Strubell2017Fast}. Hence, we also use a CRF module to jointly decode tag sequences to extract entities. We consider $Z$ to be the input sequence scores generated by the self-attention layer. Therefore, for an input sentence (at the character level) $x=\{x_1,x_2,\cdots,x_m\}$, the probability score of the $i$-th character being assigned the $j$-th tag is given by $Z_{ij}$, where $i=1,2,\cdots,m$ and $j=1,2,\cdots,|L|$. We define the target tag sequence as $y=\{y_1,y_2,\cdots,y_m\}$, where $y_i\in \{B-M,I-M,B-D,I-D,O,padding\}$. Then, we let $A$ be the tag transition matrix, where $A_{ls}$ is the transition score from tag $l$ to tag $s$, with $l, s\in L$. The decoding score of a prediction $y=\{y_1,y_2,\cdots, y_m\}$ is computed as:
\begin{align}\label{21}
score(x,y)=\sum_{i=1}^{m}{Z_{i,y_i}+\sum_{i=0}^{m}A_{y_i,y_{i+1}}}
\end{align}
where $Z_{i,y_i}$ is the score of tag $y_i$ for character $x_i$, and $A_{y_i,y_{i+1}}$ corresponds to the transition score of tag $y_i$ to tag $y_{i+1}$. For each $y$, we use a softmax function to calculate the conditional probability over all possible tag sequences as follows: \par
\begin{align}\label{22}
p(y|x)=\frac{exp(score(x,y))}{\sum_{\tilde{y}\in Y_x}exp(score(x,\tilde{y}))}
\end{align}
where $Y_x$ is the set of the possible tag sequences for given $x$. \par
During the training stage, we train the model parameters by maximizing the log-likelihood probability of correct tag sequence on the training set $\{(x,y^x)\}$, where $y^x$ is the true tag sequence for input sentence $x$. The calculation formula is defined as follows: \par
\begin{align}\label{23}
ln(p(y^x|x))=score(x,y^x)-log(\sum_{\tilde{y}\in Y_x}exp(score(x,\tilde{y})))
\end{align}\par
Finally, during tag sequence prediction, we derive the optimal $y^*$ that maximizes the decoding score and take it as the predicted tag sequence, as shown in the following equation: \par
\begin{align}\label{24}
y^*={argmax}_{\tilde{y}\in Y_x}score(x,\tilde{y})
\end{align} \par
Consequently, in the final decoding stage we obtain the tag sequence $y^*$ with the highest score among all tag sequences using the dynamic programming Viterbi algorithm, which is commonly used to solve optimal path problems.
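For completeness, a minimal NumPy sketch of this Viterbi decoding (an illustration with random toy scores) is:
\begin{verbatim}
import numpy as np

def viterbi(Z, A):
    # Z: (m, |L|) per-character tag scores; A: (|L|, |L|) transition scores
    m, L = Z.shape
    score = Z[0].copy()                # best score ending in each tag
    back = np.zeros((m, L), dtype=int)
    for i in range(1, m):
        cand = score[:, None] + A + Z[i][None, :]  # (from_tag, to_tag)
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for i in range(m - 1, 0, -1):      # backtrack the best chain
        path.append(int(back[i][path[-1]]))
    return path[::-1]

Z, A = np.random.randn(10, 6), np.random.randn(6, 6)
print(viterbi(Z, A))   # best tag index sequence for a toy sentence
\end{verbatim}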
\section{Experimental Evaluation}
In this section, we experimentally evaluate the proposed model under different datasets against multiple evaluation metrics.
\subsection{Experimental Data Construction}
Existing entity recognition datasets, either publicly available or used by other existing works, mostly concern the recognition of proper nouns such as places, organizations, people's names, chemical or medical entities, etc., which are not suitable for training and evaluating our model. Hence, we need to construct new datasets to cater for the special needs of this research. In order to study the practicability of the proposed model in recognizing method and dataset entities in the scientific literature of different domains, we first construct four new datasets based on papers published in top-tier flagship conferences in four popular research areas of computer science, namely NLP, computer vision, data mining and artificial intelligence. They are, respectively, the Annual Meeting of the Association for Computational Linguistics (ACL), the International Conference on Computer Vision and Pattern Recognition (CVPR), the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD) and the AAAI Conference on Artificial Intelligence (AAAI). We downloaded 50 papers from the official websites of ACL \footnote{https://www.aclweb.org/anthology/events/acl-2019/}, CVPR \footnote{http://openaccess.thecvf.com/CVPR2019.py}, SIGKDD \footnote{https://www.kdd.org/kdd2019/accepted-papers\#} and AAAI \footnote{https://www.aaai.org/Library/AAAI/aaai19contents.php}, respectively. \par
The construction of each of the four area-specific datasets takes three steps: 1) segmentation of the sentences in the experimental sections of the papers, 2) sentence preprocessing and 3) manual tagging of entities. Firstly, we convert the PDF version of a paper into its TXT version, use rule matching to extract the paragraphs of the experimental section of the paper, and cut the paragraphs into sentences by punctuation (e.g. \textit{``."}, \textit{``?"}, etc.). Secondly, we correct the spelling mistakes generated during the format conversion. The same number of sentences is randomly selected from each of the four areas to generate the four datasets. Because the minimum number of sentences collected among the four areas is 2,800, the number of sentences in each of the four datasets is kept at 2,800 to facilitate the experiments. \par
Thirdly, we recruit six graduate students from our institute to manually tag these sentences following a standard process: each pair of students annotated the same sentences, and a tag was considered correct only if both students produced the same tagging result. As a result, each dataset ends up with 2,800 annotated sentences. The details of the four datasets are shown in Table \ref{Table: 1 datasets}. \par
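The sentence segmentation used in the first step can be sketched as a simple regular-expression split (an illustration only; the rule matching that locates the experimental section is more involved and domain-specific):
\begin{verbatim}
import re

def split_sentences(paragraph):
    # split on whitespace that follows sentence-ending punctuation
    parts = re.split(r'(?<=[.?!])\s+', paragraph.strip())
    return [s for s in parts if s]

text = "We evaluate on CoNLL. Does CNN help? Yes, it does."
print(split_sentences(text))
# ['We evaluate on CoNLL.', 'Does CNN help?', 'Yes, it does.']
\end{verbatim}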
\begin{table}[htbp]
\centering
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{lccccc}
\toprule
Data&\#sentence&\#sentence&\#method&\#dataset&\#entities \\
& &with entities&entities&entities& \\
\midrule
\multirow{1}{*}{ACL} &2,800&1,322&1,725&626&2,351 \\
\cmidrule(lr){1-6}
\multirow{1}{*}{CVPR} &2,800&1,301&1,403&827&2,230 \\
\cmidrule(lr){1-6}
\multirow{1}{*}{SIGKDD} &2,800&1,376&2,017&497&2,514 \\
\cmidrule(lr){1-6}
\multirow{1}{*}{AAAI} &2,800&1,453&2,072&611&2,683 \\
\bottomrule
\end{tabular}
\caption{\label{Table: 1 datasets} The details of the experimental data sets.}
\end{table}
In addition to the four area-specific datasets, we also construct a mixed dataset based on them. The mixed dataset is constructed by randomly selecting 700 sentences from each of the four datasets (ACL, CVPR, SIGKDD and AAAI) and combining them to form 2,800 sentences. \par
All the five datasets help us better study the cross-area transferability of our model, while at the same time ensuring the consistency of the experiments when evaluating our model in different areas.
\subsection{Experimental Settings}
Each data corpus is randomly divided into two parts with a proportion of 4:1, where four folds are used for training and one fold is used for testing. One eighth of the samples in the training data are used as validation data. In other words, each dataset is split into training, validation and test sets with the ratio of 7:1:2. Since each dataset has 2,800 annotated sentences, the sizes of the corresponding training, validation and test sets are 1,960, 280 and 560, respectively. \par
In our experiments, we set reasonable values for the various parameters used. For each input sentence, the maximum length of its character sequence is set to 600, the dimension of the rule embedding is 40 and the character embedding dimension is 200. To reduce the feature dimension, save computational overhead and enhance the nonlinear expressiveness of the model, the CNN component of the proposed model uses 30 convolution kernels of size $1\times 1$ with convolution stride $1\times 2$ (we tested other choices as well, but this set of values performed best). Thus, $d_{cnn}$ and $k$ are both 30. The BiLSTM module contains two layers, with each layer having 200 hidden units. In the attention layer, the second dimension $d_Q$ of the matrix $W^Q$ is 400. During the training stage, the batch size is set to 16 and we minimize the loss function using the Adam optimizer \cite{Kingma2014Adam}. The dropout rate \cite{Srivastava2014Dropout} and the learning rate are set to 0.5 and 0.001, respectively. Our model is implemented using Tensorflow. \footnote{https://www.tensorflow.org/} \par
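For readability, the hyperparameters above can be collected into a single configuration (a convenience sketch, not our actual training script; the key names are ours):
\begin{verbatim}
CONFIG = {
    "max_char_len": 600,
    "rule_emb_dim": 40, "char_emb_dim": 200,
    "cnn_kernels": 30, "cnn_kernel_size": (1, 1), "cnn_stride": (1, 2),
    "bilstm_layers": 2, "bilstm_hidden": 200,
    "attention_d_q": 400,
    "batch_size": 16, "optimizer": "adam",
    "dropout": 0.5, "learning_rate": 0.001,
    "split_ratio": (7, 1, 2),  # train : validation : test
}
\end{verbatim}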
To evaluate the performance of MDER, we employ three types of evaluation metrics, Precision (P), Recall (R) and F1-score (F1). The F1 is the harmonic mean of P and R, which can provide a more balanced evaluation of the performance of the model. Specifically, Precision, Recall and F1-score are defined as follows: \par
\begin{align}\label{24-25}
&Precision=\frac{\#\ correct\ entities}{\# identified\ entities} \\
&Recall=\frac{\#\ correct\ entities}{\#\ annotated\ entities} \\
&F1=\frac{2\times Precision\times Recall}{Precision+Recall}
\end{align}
where $\#\ identified\ entities$ denotes the number of entities that are predicted, $\#\ correct\ entities$ denotes the number of entities that are correctly predicted, and $\#\ annotated\ entities$ denotes the number of entities in the dataset corpus. Note that an entity is considered correctly predicted only if the label of every character of the entity is correctly predicted.
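Entity-level scoring can thus be sketched as follows (an illustration only; entities are represented here as exact spans with their types):
\begin{verbatim}
def prf1(gold, pred):
    # gold, pred: sets of (sentence_id, start, end, type) tuples
    correct = len(gold & pred)       # exact span and type match
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(0, 3, 7, "M"), (0, 12, 16, "D")}
pred = {(0, 3, 7, "M"), (0, 12, 15, "D")}  # D span is off by one
print(prf1(gold, pred))   # (0.5, 0.5, 0.5)
\end{verbatim}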
\subsection{Effectiveness Evaluation and Analysis}
Using the F1 measure, we study the effectiveness of MDER when trained on the training samples and tested on the test samples of the five different datasets. This experiment helps evaluate the transfer learning performance of MDER, i.e., the performance of applying MDER trained on the data of one area to the data of another. For MDER, we conduct the experiment three times on each dataset and report the average test results. The mixed dataset is sampled multiple times from the other four datasets to repeatedly train the model. Table \ref{Table: 2 Experimental result} shows the F1-measure results of our models trained on the different datasets. The result in the $i$-th row and the $j$-th column of the table is obtained when MDER uses the $i$-th dataset as the training and validation set, and the $j$-th dataset as the test set. For example, the value of 0.6893 in the first row and the fourth column means that the model trained on the ACL dataset achieves an F1 score of 0.6893 on the test samples of the AAAI dataset. Here, 70\% of the ACL dataset is used as the training set, 10\% of the ACL dataset is used as the validation set, and 20\% of the AAAI dataset is used as the test set. In this way, we obtain a total of 25 test results, as shown in Table \ref{Table: 2 Experimental result}. We also calculate the average and standard deviation of the test results of each model and present them in the last two columns of Table \ref{Table: 2 Experimental result}. Furthermore, Figure \ref{Figure: 3 Box-plot} depicts the F1 values of each model on the five test sets as a boxplot.\par
\begin{table}[!htbp]
\centering
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}*{F1}}& \multicolumn{5}{c|}{Test Data} &\multicolumn{2}{c|}{Statistics}\\
\cline{3-9}
\multicolumn{2}{|c|}{}&ACL&CVPR&SIGKDD&AAAI&Mixed&mean&std\\
\hline
\multirow{4}*{\makecell[c]{Traning\\Data}}&ACL&0.7243&0.6020&0.6573&0.6893&0.6987&0.6743&0.0420\\
\cline{2-9}
&CVPR&0.5770&0.6937&0.6373&0.6477&0.6633&0.6438&0.0384\\
\cline{2-9}
&SIGKDD&0.5870&0.5913&0.7270&0.6688&0.6740&0.6496&0.0534\\
\cline{2-9}
&AAAI&0.6253&0.5737&0.6897&0.7310&0.6903&0.6620&0.0556\\
\cline{2-9}
&Mixed&0.6507&0.6757&0.6903&0.6923&0.6963&0.6811&0.0167\\
\hline
\end{tabular}
\caption{\label{Table: 2 Experimental result} The results of our model on different train/test datasets.}
\end{table}
\begin{figure}[H]
\centering \includegraphics[scale=0.38]{Box-plot.jpg}
\caption{The Box-plot of F1 values of each model.}
\label{Figure: 3 Box-plot}
\end{figure}
From the results presented in Table \ref{Table: 2 Experimental result} and Figure \ref{Figure: 3 Box-plot}, we can obtain a rich set of interesting findings as follows: \par
1) We can see from Table \ref{Table: 2 Experimental result} that the diagonal cells of the table have the highest F1 scores in each row of the result matrix. This means the model achieves the best prediction results when the training and test sets are from the same area. This result is intuitive and easy to understand, since both sets are sampled from the same domain and therefore share very similar features in terms of entity types and syntactic and semantic information; \par
2) For each row in Table \ref{Table: 2 Experimental result}, the second highest F1 value appears in the column of the mixed dataset. This is because one quarter of the training sentences of the mixed dataset come from each of the four area-specific datasets, thereby providing a fair amount of area-specific information for training the model; \par
3) The third highest F1 value in each row is in the column of the AAAI dataset, because AAAI covers a broad range of sub-fields of artificial intelligence, including NLP, computer vision and data mining, even though it is not as representative as the mixed dataset we constructed; \par
4) The model trained on the mixed dataset has the highest average F1 value and the smallest variance compared with the models trained on the other four datasets. This suggests that the model trained on the mixed dataset features the best entity recognition and generalization performance across the areas under study. The models trained on the four area-specific datasets tend to be inferior in performance due to the more limited area-specific entity features learned from their training sets. This also demonstrates the salient advantage of creating the mixed dataset: it contributes to a more robust model that excels in transfer learning when mining methods and datasets from literature across multiple areas.
\subsection{Comparative Study}
\begin{table}[htbp]
\centering
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{lccccccc}
\toprule
&Precision&Recall&F1 \\
\midrule
\multirow{1}{*}{Baseline1} &0.660&0.562&0.6071 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{Baseline2} &0.665&0.569&0.6133 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{Baseline3} &0.698&0.597&0.6436 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{MDER} &$\textbf{0.7623}$&$\textbf{0.6420}$&$\textbf{0.6963}$ \\
\bottomrule
\end{tabular}
\caption{\label{Table: 3 Comparative result} Our model and other model on the mixed dataset.}
\end{table}
To the best of our knowledge, there is very limited research on the problem of method and dataset mining from literature. To evaluate the performance of our proposed model through a comparative study, we select several popular named-entity recognition (NER) models as baselines for performance comparison, which include:\par
\begin{itemize}
\item Kuru et al. \cite{Kuru2016Charner} use a BiLSTM + softmax network for CharNER (called Baseline1);
\item Huang et al. \cite{Huang2015Bidirectional} use a BiLSTM + CRF model for sequence tagging. The BiLSTM + CRF structure is widely used as a baseline model for the sequence labeling work (called Baseline2);
\item Gregoric et al. \cite{Zukovgregoric2017Neural} construct a BiLSTM + Self-Attention + CRF network for NER (called Baseline3).
\end{itemize}
Our model and the three baseline models are trained and tested three times on the mixed dataset and the results are averaged. To evaluate the performance of the models, we employ the precision, recall and F1-score metrics. Table \ref{Table: 3 Comparative result} shows the experimental results of the comparative study. As depicted in its last row, our model offers significantly better performance in all three measurements compared with the three baseline models.
\subsection{Ablation Study (The Effect of Different Modules)}
Since our proposed model consists of several key building modules (i.e., rule embedding, CNN, two-layer BiLSTM, self-attention and CRF), we design a series of model variants to further verify the effectiveness of each building module. In this section, we train MDER with each of those building modules removed in turn in the ablation experiment. The details of the involved model variants, called MDER w/o rule, MDER w/o CNN, MDER w/o self-attention and MDER w/o CRF, respectively, are presented as follows: \par
\begin{itemize}
\item MDER w/o rule: The MDER model without the rule embedding module;
\item MDER w/o CNN: The MDER model without the CNN module;
\item MDER w/o self-attention: The MDER model without the self-attention module.
\item MDER w/o CRF: The MDER model without the CRF module.
\end{itemize}
\begin{table}[htbp]
\centering
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{lccccccc}
\toprule
&Precision&Recall&F1 \\
\midrule
\multirow{1}{*}{w/o rule} &0.7247&0.6157&0.6657 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{w/o CNN} &0.7157&0.6080&0.6574 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{w/o self-attention} &0.7047&0.5957&0.6455 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{w/o CRF} &0.6721&0.5704&0.6171 \\
\cmidrule(lr){1-4}
\multirow{1}{*}{Complete MDER} &$\textbf{0.7623}$&$\textbf{0.6420}$&$\textbf{0.6963}$ \\
\bottomrule
\end{tabular}
\caption{\label{Table: 4 Ablation result} The effect of different module of our model on the mixed dataset.}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=.46]{with_o_four.jpg}
\caption{The performance drop of the four model variants relative to the full MDER model on the mixed dataset.}
\label{Figure: 4 histogram}
\end{figure}\par
We train and test all these model variants on the mixed dataset three times and compare them with the complete MDER model. Table \ref{Table: 4 Ablation result} shows the entity recognition performance of all these models. Unsurprisingly, every model variant with a key component removed achieves inferior performance in all three measurements compared with the full MDER model. Figure \ref{Figure: 4 histogram} shows the performance drop (in percentage) of the four model variants compared to the full MDER model. It is easy to see from the histogram that the w/o CRF and w/o self-attention models suffer the largest declines, which suggests that these two modules have the greatest impact on our model. The third is the w/o CNN model, while the w/o rule model has the slightest performance drop, showing that its influence on our model is the least among the model variants. \par
Despite their varying degrees of importance, all four key building modules collectively contribute to the good performance of our model: rule embedding helps reduce the learning burden of the model and makes the learning more efficient; CNN captures structural information according to the current context, which is a useful supplement to BiLSTM; the self-attention component focuses on the important words related to the entities in the context based on the interactive information between different words; and the widely used CRF better characterizes the dependencies between tags.
\subsection{The Effect of Data Augmentation}
In this section, we explore the impact of different dataset sizes on the performance of MDER. Given the limited amount of real data available for the four datasets, we resort to data augmentation to increase the dataset size in an efficient and economical way, so that we can study how it impacts the performance of our model. To conduct data augmentation, based on the original 2,800 sentences in each dataset, we generate synthetic samples through entity substitution and add them to the dataset. For each dataset, we have one glossary for method entities and another for dataset entities. Entities in sentences within the dataset are randomly replaced with other entities of the same type based on the corresponding glossary. \par
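The substitution step can be sketched as follows (the glossaries shown are toy examples; the real glossaries are built per dataset):
\begin{verbatim}
import random

GLOSSARY = {"M": ["SVM", "CRF", "BiLSTM"], "D": ["Wiki", "CoNLL NER"]}

def augment(tokens, spans):
    # spans: (start, end, type) over the token list; replace right-to-left
    # so earlier span indices stay valid when entity lengths change
    out = list(tokens)
    for s, e, t in sorted(spans, reverse=True):
        out[s:e] = random.choice(GLOSSARY[t]).split()
    return out

print(augment(["We", "train", "SVM", "on", "Wiki"],
              [(2, 3, "M"), (4, 5, "D")]))
\end{verbatim}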
\begin{figure}[H]
\centering
\includegraphics[scale=.48]{data_augmentation.jpg}
\caption{The results of our model for different dataset sizes.}
\label{Figure: 5 augmentation}
\end{figure}
Figure \ref{Figure: 5 augmentation} shows the variation of performance under different proportions of data augmentation on the five datasets, where the five lines represent the F1 scores of the proposed model after augmentation is applied to the five datasets. The starting F1 value of each line corresponds to the diagonal value in Table 2, as both the training and test sets are from the same area. To maintain fairness and consistency, the model on each line is tested on the same testing set, i.e., the 20\% of the original 2,800 sentences, for each dataset. In the experiment, we increase the amount of augmented data in 50\% increments for each dataset. As a result, the size of the final dataset after augmentation is 1.5, 2, 2.5, 3, 3.5 and 4 times the size of the original dataset, respectively. From Section 4.2, we know the ratio of the training set to the test set is 7/2 = 3.5 on the original dataset (which is the starting value of the x axis of Figures \ref{Figure: 5 augmentation} and \ref{Figure: 6 training}). Hence, the final ratio of the size of the training set against the test set is 5.25, 7, 8.75, 10.5, 12.25 and 14, respectively, after data augmentation. For the augmentation of the mixed dataset, we still randomly sample from the four area-specific datasets as we do in Section 4.1. \par
\begin{figure}[H]
\centering
\includegraphics[scale=.48]{training_time.jpg}
\caption{Training time of the models as the amount of training data increases.}
\label{Figure: 6 training}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=.48]{test_time.jpg}
\caption{Test time of the models as the size of the test set increases.}
\label{Figure: 7 test}
\end{figure}
As shown in Figure \ref{Figure: 5 augmentation}, the general trend of each line is the same. When the amount of training data increases at the beginning, the F1 value increases by a large margin, because a larger training dataset substantially improves the performance of our model at this stage. However, after the ratio exceeds 7, the increase gradually flattens even when more training data are added, meaning that the benefits of additional training data start to diminish. \par
Figure \ref{Figure: 6 training} and Figure \ref{Figure: 7 test} characterize the execution time of our model in the training and testing stages, respectively. Both figures show an approximately linear increase of the execution time of our model as the training and testing data sets grow in size, but the lines in Figure \ref{Figure: 6 training} feature a higher slope than those in Figure \ref{Figure: 7 test}, indicating that training is more computationally expensive than testing for our model. In other words, once our model has been trained, applying it to new papers for method and dataset mining is more efficient. Considering both the F1 values and the execution time presented in Figures \ref{Figure: 5 augmentation}, \ref{Figure: 6 training} and \ref{Figure: 7 test}, we can see that when the data ratio goes beyond 7, the linear increase of the training time no longer justifies the increasingly negligible gain in recognition performance.
\subsection{Long-term Literature Mining}
\begin{table}[htbp]
\centering
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{lccccc}
\toprule
&ACL&CVPR&SIGKDD&AAAI&ALL \\
\midrule
\multirow{1}{*}{Precision} &76\% &74\% &82\% &77\% &80\% \\
\bottomrule
\end{tabular}
\caption{\label{Table: 5 five models} Precision of the five models on the sampled PAKDD entities.}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=.46]{3_years_network.jpg}
\caption{The method network with edge weights $>2$ in 2009, 2014 and 2019.}
\label{Figure: 8 network}
\end{figure}\par
\begin{figure}
\centering
\includegraphics[scale=.23]{betweenness_histogram-small.jpg}
\caption{The histogram of the betweenness centrality of the top ten methods from 2009 to 2019.}
\label{Figure: 9 betweenness}
\end{figure}
In this section, we analyze the long-term development (over 10 years) of the method and dataset entities involved in PAKDD publications to discover useful patterns and insights. The general idea is to train our model using the five different datasets (which leads to five different models) and apply them individually to extract the method and dataset entities appearing in PAKDD publications. The test dataset for this case study is composed of the sentences collected from the experimental section of 1,226 PAKDD conference papers published from 2009 to 2019. \par
Because the test data of PAKDD are unlabeled and it would be very time-consuming to label all of them to obtain a ground truth, it is difficult to accurately quantify the Precision, Recall and F1-score of our model on the PAKDD dataset. As an approximate alternative, we adopt a sampling strategy to evaluate only the precision of each model. Specifically, we randomly select a fixed number of predicted method and dataset entities (e.g., 50 of each) and manually validate the correctness of the entity tagging (which is very efficient thanks to the typically small number of recognized entities). If, for example, a total of 90 out of the 100 sampled entities have been labeled correctly by our model, then the precision is 90\%. Please note that we do not intentionally tune our models to achieve a high precision in this experiment; the models are trained in the normal way, which aims for a good balance between precision and recall in the first place. The precision results of the five models are shown in Table \ref{Table: 5 five models}; we employ the sampling strategy multiple times and average the resulting Precision values. Again, because the entities in the PAKDD dataset are not labeled, Recall and the F1 measure cannot be quantified even with the sampling strategy, so they are not reported here.\par
The result shows that the recognition performance of the model trained on the SIGKDD dataset is the best, which is quite easy to understand as both PAKDD and SIGKDD are conferences in the field of data mining. Therefore, we use the recognition results produced by the model trained on the SIGKDD dataset to further construct complex network graphs and the histogram of the betweenness centrality (Figure \ref{Figure: 9 betweenness}). These visualization results can better show the popular methods used in different areas of computer science in the 10-year period from 2009 to 2019 and how they evolved over time. \par
The complex network graphs are built for visualizing the extracted method entities every five years starting from 2009 (Figure \ref{Figure: 8 network}). An edge in the graphs indicates that the two methods connected by the edge appear in the same paper, and the edge weight is the number of papers containing both methods. To facilitate the drawing of the graphs, we manually delete the inconsistent entities in the result and unify the naming of method entities in the form of lowercase abbreviations. Figure \ref{Figure: 8 network} (a)-(c) show the partial graph results whose edge weights are greater than 2. The color of the nodes, indicating the category of methods, is supplied through manual classification.\par
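A sketch of this graph construction with \texttt{networkx}, assuming the per-paper method entities have already been extracted and normalized (the toy input below is purely illustrative):
\begin{verbatim}
from itertools import combinations
import networkx as nx

# paper id -> set of normalized method names (illustrative input)
papers = {
    "p1": {"svm", "dt", "knn"},
    "p2": {"svm", "dt"},
    "p3": {"svm", "dt", "nb"},
}

G = nx.Graph()
for methods in papers.values():
    # each co-occurring pair in one paper adds 1 to the edge weight
    for u, v in combinations(sorted(methods), 2):
        w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)

# keep only edges with weight > 2, as in Figure 8 (a)-(c)
strong = G.edge_subgraph((u, v) for u, v, d in G.edges(data=True)
                         if d["weight"] > 2)
\end{verbatim}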
Figure \ref{Figure: 8 network} (a) shows that the shallow classification algorithms, such as support vector machine (SVM), decision tree (DT), $k$-nearest neighbor (KNN), naive Bayes (NB) and Random Forest (RF), often appeared together in many PAKDD papers in 2009. The clustering algorithms also received a lot of attention, and researchers preferred hierarchical clustering and k-means methods. Figure \ref{Figure: 8 network} (b) shows that the shallow classification algorithms were still widely used in 2014, such as SVM, DT, NB and Logistic Regression (LR), as well as variants of SVM such as LIBSVM and TSVM. Recommendation methods were also popular, such as Collaborative Filtering (CF) and Probabilistic Matrix Factorization (PMF). Figure \ref{Figure: 8 network} (c) demonstrates that deep learning models dominate the landscape in 2019. For example, Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) usually co-occur. Some text representation methods such as skip-gram and node2vec also became popular. Meanwhile, the shallow machine learning models, such as SVM, DT, KNN and LR, are still being extensively used, even though they are not as dominant as a decade ago.\par
Figure \ref{Figure: 9 betweenness} shows the top ten methods based on the betweenness centrality analysis in the complex network graph for each year. The betweenness centrality indicates the importance of a node in the graph by counting the number of shortest paths between each pair of nodes that pass through this node; the larger this value is, the more important the node. From the figure, we can see that SVM has the largest betweenness centrality for seven years, indicating that it is closely related to other methods and has been commonly used as a baseline comparison model in many studies during this period. From 2009 to 2016, the shallow machine learning methods dominated the landscape, but deep learning became increasingly popular from 2017 and started to dominate the landscape in the area of data mining: LSTM rose from the fourth position in 2018 to the first one in 2019, followed by CNN in the second position. \par
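Continuing the graph $G$ from the earlier sketch, the betweenness ranking can be computed directly with \texttt{networkx}; since the text does not specify whether weighted shortest paths were used, the variant below treats inverse co-occurrence counts as distances (an assumption):
\begin{verbatim}
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]  # frequent co-occurrence = short path

bc = nx.betweenness_centrality(G, weight="distance")
top10 = sorted(bc.items(), key=lambda kv: kv[1], reverse=True)[:10]
\end{verbatim}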
We also analyze the patterns exhibited by the dataset entities extracted from the PAKDD papers. Specifically, we study the number of papers using the same dataset entity. According to our statistics, in 2009 classic machine learning datasets appeared frequently, such as UCI, Wine, Iris, 20newsgroup and WebKB. In 2019, however, text and image datasets common in deep learning were popular, such as SemEval, Stanford CoreNLP, Twitter, MNIST, ILSVRC and YouTube Faces. This is clearly consistent with the insights we obtained from the method mining regarding the general research development trajectory of computer science in the past decade.
\section{Conclusion and Future Research Directions}
In this paper, we study the problem of method and dataset mining in scientific papers via a semantic-based deep learning extraction model, called MDER. Our model, which combines the rule embedding technique with a CNN-BiLSTM-Attention-CRF structure, achieves competitive performance on extracting method and dataset entities from computer science papers. Through comprehensive experimental studies, we find that our model has great transfer learning ability and generalization performance on datasets from different domains, especially on the mixed dataset. The ablation experiments indicate that the modules of our model are complementary and collectively contribute to its good recognition performance. By applying the trained MDER model to PAKDD papers published during 2009-2019, we can effectively mine and analyze the relationships among different methods and datasets and their development trajectories and trends over a long time span. Based on the success of this case study, we believe that our model is generic enough to be applied to mine method and dataset entities from other domains as well. \par
In the future, we will study additional interesting patterns around the recognized entities, such as mining the opinions and sentiments of authors regarding different methods to achieve automatic performance assessment of those methods. We are also interested in building models that can automatically classify the entities to produce, for example, categories of different methods, and offer personalized method and dataset recommendations. Finally, we plan to apply our model to mining method and dataset entities from literature in other appropriate domains, such as science and engineering, to further enhance its impact on literature mining in those domains.
\printbibliography{}
\typeout{get arXiv to do 4 passes: Label(s) may have changed. Rerun}
\end{document}
\endinput
\section{Introduction}
\label{s.intro}
Since its inception in the 1960s, Computed Tomography (CT) has enjoyed huge success in medical imaging. It is characterized by a specific acquisition and reconstruction process, in which a set of X-ray projections is first acquired for varying positions of the source and the detector, with the X-rays from the source typically forming a narrow fan beam. Subsequently, this projection data is processed by a reconstruction algorithm yielding either a two-dimensional slice or a three-dimensional volume. One of the more recent variants of Computed Tomography is Cone Beam Computed Tomography (CBCT), where X-rays from the source diverge in a wider cone-shaped beam. Both the source and the detector in CBCT typically follow circular orbits around the isocenter, and the detector is a large flat panel array. CBCT is widely used in the clinic nowadays in dentistry \citep{dawood2009}, interventional radiology \citep{floridi2014} and image-guided radiation therapy \citep{jaffray2002}, in certain cases replacing classical CT.
CBCT reconstruction, however, is a hard problem. Firstly, it is known \citep{maas2010, tuy1983} that the data completeness condition for exact reconstruction of the whole volume is not satisfied for circular source/detector orbits. CBCT also inherits the imaging artifacts of classical CT such as streaking due to photon starvation in highly attenuated areas, which becomes particularly pronounced for repeated lower dose CBCT scans, and beam hardening. Furthermore, scatter-induced artifacts become more prominent due to the large panel size. These issues result in generally poor Hounsfield unit calibration, which is a serious limitation for applications in radiotherapy, where one would ideally use a daily CBCT scan for treatment plan adjustment without registration to a prior CT scan \citep{sonke2019}. This necessitates, along with other applications, the ongoing research on CBCT reconstruction.
In recent years, reconstruction methods based on deep learning have attracted a lot of interest in the community and demonstrated very promising results in public reconstruction challenges. For example, in the recent MRI reconstruction challenges \citep{fastmri2020, beauferris2020} deep learning methods have strongly outperformed the classical baselines. Generally speaking, any medical image reconstruction task can be viewed as an abstract inverse problem for a suitable forward operator, and different approaches have been proposed in the literature for solving such problems with deep learning \citep{schoenlieb2019}. We will only consider supervised reconstruction methods in this paper, but we would like to mention that unsupervised methods have also been developed.
One of the possible ways to apply deep learning to CT or CBCT reconstruction problems is to use a neural network as a learned post-processing operator for a classical reconstruction method such as filtered back-projection (FBP). This strategy was investigated in a number of publications, e.g., \citep{unser2017}, where it has been demonstrated that such learned post-processing can increase the reconstruction quality. At the same time, the neural network in this case does not have direct access to the raw data, thus it can fail to recover from some of the artifacts introduced by the classical reconstruction algorithm.
A rich family of alternative methods is given by \textit{learned iterative schemes}. Such schemes are often inspired by classical iterative schemes, combining knowledge of the forward operator and its adjoint with neural networks that complement these schemes by, e.g., filtering noise in the update term. A particularly important example of such schemes for two-dimensional CT is the Learned Primal-Dual (LPD) algorithm \citep{Adler2017b}, which was inspired by the Primal-Dual Hybrid Gradient method \citep{pdhg2011}. In LPD, computations are performed by \textit{primal blocks} in the image domain and by \textit{dual blocks} in the projection domain, where each block is a small residual convolutional neural network, and the blocks are connected by projection and backprojection operators, enabling end-to-end training. Such an architecture allows efficient noise filtering, since raw projection data is provided to the dual blocks. Extensions of LPD to other modalities have been proposed as well: e.g., DBToR \citep{teuwen2021} has shown good results in two-dimensional Digital Breast Tomosynthesis, and XPDNet has performed very well on two-dimensional MRI reconstruction in \citep{ramzi2020a}.
Unfortunately, LPD does not scale to a three-dimensional modality such as CBCT due to memory limitations. Indeed, for a $256 \times 256 \times 256$ float tensor, a single convolution layer with $96$ features would already require 12 GB of memory to perform backpropagation. This makes it impossible to train LPD at clinically relevant resolutions. Increasing the complexity of the primal/dual blocks beyond simple residual Convolutional Neural Networks would increase memory requirements even further. $\partial $U-Net, an alternative, simpler scheme that does not operate in the projection space, was proposed in \citep{hauptmann2020}, where the memory footprint was reduced using a multiscale approach and reconstructions obtained by primal blocks at different scales are merged together by a U-Net. However, this method still does not allow training at clinically relevant resolutions, and the expressive power of this scheme is reduced due to the absence of dual blocks and the conservative filter counts that were necessary to reduce the memory footprint. As for unrolled primal-dual schemes like LPD, they have not yet been shown to work in 3D at clinically relevant resolution and projection count, and it is the goal of this work to introduce such a scheme.
\section{Our contribution}
\label{s.contrib}
The key results of this work are:
\begin{itemize}
\item We develop LIRE, a practical framework for deep learning-based CBCT reconstruction with clinically-relevant resolution and projection count using a learned iterative scheme that can be trained end-to-end on current consumer GPUs with 24 GB VRAM. Our framework is comprised of a learned primal-dual iterative scheme with residual primal-dual blocks and a particular set of essential memory optimization techniques that are embedded in the algorithm.
\item Two models are trained from scratch for clinical CBCT geometries with small and large field-of-view respectively, where the large field-of-view is accomplished via an offset of the detector panel. We train the models on thorax CT scans with $256^3$ voxels ($2$ mm voxel pitch), using a $256 \times 256$ detector panel ($1.6$ mm pixel pitch) and either $400$ or $720$ projections.
\item We demonstrate superiority of our method to analytical, iterative and deep learning baselines on the test set of thorax CT scans for both field-of-view settings.
\item We demonstrate better out-of-distribution generalization of our method compared to a deep learning baseline for both geometries on a test data set of head \& neck CT scans, where our method improves upon analytical and iterative baselines as well.
\item We show additionally, as a proof of concept, that using NVIDIA A100 Tensor Core GPUs with 80 GB VRAM our model can be easily fine-tuned on thorax CT scans with $512^3$ voxels at a native $1$ mm voxel pitch, $512 \times 512$ detector panel and $720$ projections. We compare the fine-tuned model with a deep learning baseline, observing superiority of our method.
\end{itemize}
The major novelties of this work are:
\begin{itemize}
\item Our unrolled primal-dual iterative scheme is reversible. To compute the gradients, only the final latent vectors and the sequence of reconstructions returned by the algorithm are needed. This allows the use of longer schemes but, on its own, does not permit the use of complex primal-dual blocks with 3D data due to memory limitations.
\item We additionally rely on the local nature of Convolutional Neural Networks (U-net included) and perform patch-wise computations inside the primal-dual blocks during both training and evaluation. During backpropagation, weight gradients received from different patches are summed, giving correct global gradients for network weights.
\item Thanks to these novelties, we are able to use 3D U-nets with high filter count inside the primal blocks. Conceptually, our framework allows to use U-nets in both primal and dual blocks, which can be important for scatter correction but also when applying this framework to other modalities such as MRI.
\item We provide the network with an auxiliary scalar tensor which has the same shape as the reconstructed volume. In this tensor, intensity of a given voxel contains information about the percentage of projections for which the corresponding spatial location is visible. The network is trained to reconstruct the intensities of all voxels which are visible in at least one projection, which results in a larger field-of-view than FBP.
\end{itemize}
The minor novelties of this work are:
\begin{itemize}
\item Compared to LPD, our scheme maintains a separate variable for the current iterate of the reconstruction. The algorithm returns all intermediate reconstructions, and the training loss is computed as the sum of the reconstruction losses of these intermediate approximations. This way we aim to benefit from better supervision and also to reduce potential gradient vanishing effects. Additionally, the algorithm can be stopped early during inference.
\item Compared to LPD, we provide the Landweber update term \citep{Kaipio2005} to the primal blocks. This update term plays a similar role to the gradient of the data log-likelihood in Recurrent Inference Machines \citep{lonning2019}.
\end{itemize}
\section{Materials and methods}
\label{s.matmethods}
\subsection{Tomography and inverse problems}
\label{ss.tomo}
CBCT reconstruction can be viewed as an inverse problem. Let $x: z \mapsto x(z)$ be a function specifying the attenuation coefficient for every point $z \in \Omega_X$ in the spatial domain $\Omega_X \subset \mathbb R^3$. The circular source rotation orbit is parametrized as a curve $\gamma: [0,1] \to \mathbb R^3$. Detector position and orientation are specified as a family of planes $\Omega_Y: t \mapsto \Omega_Y(t)$ for $t \in [0,1]$, where each such plane is canonically identified with $\mathbb R^2$. The line going from the source position $\gamma(t)$ at time step $t \in [0, 1]$ to the detector element $u \in \Omega_Y(t)$ is denoted by $L_{t, u}$. The \textit{cone-beam transform operator}, or simply the \textit{projection operator}, is then defined as
\begin{equation}
\mathcal P(x)(t, u) = \int_{L_{t, u}} x(z) dz,
\end{equation}
therefore, $\mathcal P$ is a linear operator mapping functions defined on $\Omega_X$ to functions defined on $[0,1] \times \mathbb R^2$. Hermitian\footnote{For suitably defined $L^2$ function spaces.} adjoint $\mathcal P^*$ of $\mathcal P$ is called the \textit{backprojection operator}.
Noisy CBCT data acquisition process can then be modeled as
\begin{equation}\label{eq.noisemodel}
y = \text{\texttt{Poisson}}(I_0 \cdot e^{-\mathcal P x}),
\end{equation}
where $I_0$ is the unattenuated X-ray photon count. The inverse problem of CBCT reconstruction is then to determine the tissue attenuation coefficients $x$ given the noisy projection data $y$.
\subsection{Data}
\label{ss.data}
In this work we simulate two common clinical acquisition geometries for a Linac-integrated CBCT scanner from Elekta \citep{letourneau2005}: a small field-of-view setting and a medium field-of-view setting, which we will refer to as the `small FOV setting' and the `large FOV setting', respectively. For both settings, the source-isocenter distance is $1000$ mm and the isocenter-detector plane distance is $536$ mm. For the small FOV setting, the source-isocenter ray passes through the center of the detector, while for the large FOV setting the detector is offset by $115$ mm to the side in the direction of rotation. A square detector panel with a side of $409.6$ mm and a $256 \times 256$ pixel array was used for the main experiments, while for the additional proof-of-concept study we switched to a $512 \times 512$ pixel array. The projection counts were $400$ and $720$ for the small FOV and the large FOV setting, respectively, for the main experiments. The source moves over a $200$ degree arc for the small FOV setting, while for the large FOV setting the source moves over a full circle.
To train and evaluate our models, we collected two diagnostic CT datasets from the institutional archive: a dataset of 424 thorax CT scans with isotropic spacing of $1$ mm, and a dataset of 79 head \& neck CT scans with anisotropic spacing of between $0.9$ mm and $1.0$ mm in the axial plane and between $1.0$ mm and $1.6$ mm in the perpendicular direction. Both datasets had axial slices of $512 \times 512$ voxels. For the main experiments, all data was downsampled by a factor of $2$, resulting in volumes with $256^3$ voxels. For the additional proof-of-concept study, we did not apply any resampling.
The thorax CT dataset was used to train, validate and test the models, while the additional head \& neck dataset was used exclusively for testing the models on out-of-distribution data. The thorax CT dataset was partitioned into a training set of 260 scans, a validation set of 22 scans and a test set of 142 scans.
To simulate noisy projection data for the CT scans, Hounsfield units were converted into attenuation coefficients using $\mu = 0.2 \ \textrm{cm}^{-1}$ as the water linear attenuation coefficient. Attenuated projection data was corrupted by Poisson noise with $I_0 = 30000$ photons in Eq. \eqref{eq.noisemodel}.
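A NumPy sketch of this simulation pipeline, with a generic \texttt{project} callable standing in for the cone-beam transform; the conversion follows the standard definition of the Hounsfield scale, and the handling of fully attenuated rays is our own implementation choice:
\begin{verbatim}
import numpy as np

MU_WATER = 0.2   # cm^-1, water linear attenuation coefficient
I0 = 30000       # unattenuated photon count

def hu_to_mu(volume_hu):
    # HU = 1000 * (mu - mu_water) / mu_water
    # =>  mu = mu_water * (1 + HU / 1000)
    return np.clip(MU_WATER * (1.0 + volume_hu / 1000.0), 0.0, None)

def simulate_noisy_projections(volume_hu, project, voxel_size_cm):
    mu = hu_to_mu(volume_hu)
    line_integrals = project(mu) * voxel_size_cm  # dimensionless
    photons = np.random.poisson(I0 * np.exp(-line_integrals))
    photons = np.maximum(photons, 1)              # avoid log(0)
    return -np.log(photons / I0)                  # log-transformed data
\end{verbatim}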
\subsection{Baseline methods}
\label{ss.baselines}
For the main experiments in this work we considered the following baselines: FBP \citep{feldkamp84}, PDHG \citep{pdhg2011} with TV regularization, and U-net with FBP input.
We used the ODL \citep{odl2017} implementations of FBP and PDHG. We chose a Hann filter with $0.9$ frequency modulation for the FBP baseline by testing different combinations of filter and frequency on the training set. Parker weighting was used for FBP reconstruction with the small FOV. For FBP reconstruction with the large FOV setting it is important to take into account that the central cylindrical region of the volume is measured by twice as many projections as the rest of the FOV; this results in a reconstruction artifact in the form of a bright ring around the center of axial volume slices. To reduce this effect, one solution is to smoothly reduce the intensity of projections in the detector region which captures twice as much data, as we go from the detector center to the detector edge. We do so by multiplying all projections by the following weighting factor \citep{microct} after the FBP filtering and before the backprojection:
\begin{equation*}
\omega(s) = \begin{cases}
1 & -\Delta \leq s \leq -\Theta \\
\frac 1 2 \left( - \sin \left( \frac{\pi \arctan (s / D)}{2 \arctan(\Theta/ D)} \right) + 1\right) & -\Theta \leq s \leq \Theta \\
0 & \Theta \leq s \leq \Delta
\end{cases}
\end{equation*}
In this formula, $s$ is the signed distance between a detector pixel and the projection of the rotation axis onto the detector plane, taken with the `minus' sign if the pixel is closer to the detector center than to the edge and with the `plus' sign otherwise; $D$ is the size of the detector, and $\Theta = 0.289 D$ is a parameter which we chose experimentally for our geometry to obtain uniform reconstructions without the ring artifacts.
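A NumPy sketch of this weighting, assuming the signed distances $s$ have been precomputed for all detector pixels:
\begin{verbatim}
import numpy as np

def redundancy_weight(s, D, theta_ratio=0.289):
    """Smooth down-weighting of the doubly-measured detector region."""
    theta = theta_ratio * D
    w = np.zeros_like(s, dtype=np.float64)
    w[s <= -theta] = 1.0
    mid = np.abs(s) < theta
    w[mid] = 0.5 * (1.0 - np.sin(np.pi * np.arctan(s[mid] / D)
                                 / (2.0 * np.arctan(theta / D))))
    return w   # w stays 0 for s >= theta
\end{verbatim}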
For the PDHG baseline, we used $600$ iterations with $0.25$ weight for the TV regularization term. The parameters of PDHG were obtained via tuning on the train set as well.
Finally, as the main deep learning baseline we implemented a 3D U-net for post-processing the FBP output. We used a U-net with $3$ downsampling layers, valid convolutions and $64$ base filters, similar to \citep{cicek2016} but without Instance or Batch normalization layers; PReLU activation functions were used. As input for the U-net, we provided the FBP reconstruction and the field-of-view tensor $V_f$ defined later in Section \ref{ss.liretrain}. Two U-nets, one for the small FOV and one for the large FOV, were trained for the main experiments on downsampled data using the same augmentation strategy as LIRE and the same loss function as LIRE (see Alg.\ \ref{alg:liretrain}), except for the reconstruction loss over the partial field of view, since FBP quality is very poor in this region. The U-nets were trained to reconstruct $128 \times 128 \times 128$ patches due to memory limitations. The Adam optimizer \citep{kingma2014} was employed with an initial learning rate of $0.0001$ and a plateau scheduler with linear warm-up and 10 epoch patience. The best-performing model on the validation set was used for testing. A separate U-net with a similar architecture was trained for the proof-of-concept study on high-resolution data; similar to LIRE, the $L^1$ reconstruction loss was minimized in this experiment. One NVIDIA Quadro RTX 8000 was used for training the U-nets. For the main experiments on downsampled data, it takes U-net+FBP approximately 10 seconds to reconstruct volumes for both geometries. PDHG takes 14 minutes to reconstruct a small FOV volume and 18 minutes to reconstruct a large FOV volume\footnote{It should be noted that the ODL implementation of PDHG performs a lot of data transfer between CPU and GPU, which hurts the performance.}.
Evaluation of LIRE and the baseline methods was performed using PSNR and SSIM metrics restricted to the full field of view region, where $V_f > 0$.
\section{LIRE}
\subsection{LIRE architecture and implementation}
\label{ss.arch}
\begin{algorithm}
\centering
\begin{algorithmic}[1]
\Procedure{\texttt{reconstruct}}{$y, \mathcal P, \mathcal P^*, \theta, V$}
\State $x_0 \gets \mathcal P^*(y)$ \Comment{Normalized backprojection init}
\State $I \gets []$ \Comment{Initialize output list}
\State $f \gets x_0^{\otimes 8} \in X^{8}$\Comment{Initialize primal vector}
\State $h \gets y^{\otimes 8} \in U^{8}$\Comment{Initialize dual vector}
\For{$i \gets 1, \dots, 8$}
\State $d_1, d_2 \gets \text{\texttt{Splt}}(h)$ \Comment{Split dual channels}
\State $p_1, p_2 \gets \text{\texttt{Splt}}(f)$ \Comment{Split prime channels}
\State $p_{\text{op}} \gets \mathcal P([p_2, x_{i-1}]^{\oplus})$ \Comment{Project $p_2$ and $x_{i-1}$}
\State $d_2 \gets d_2 + \Gamma_{\theta_i^d}([p_{\text{op}}, d_1, y]^{\oplus})$ \Comment{Upd. $d_2$}
\State $b_{\text{op}} \gets \mathcal P^*(d_2)$ \Comment{Backproject $d_2$}
\State $\text{\textit{LW}} \gets \mathcal P^* (\mathcal P (x_{i-1}) - y)$ \Comment{Landweber term}
\State $p_2 \gets p_2 + \Lambda_{\theta_i^p}([b_{\text{op}}, p_1, x_{i-1}, \text{\textit{LW}}, V]^{\oplus})$ \Comment{Upd. $p_2$}
\State $h \gets [d_1, d_2]^{\oplus}$ \Comment{Combine new dual}
\State $f \gets [p_1, p_2]^{\oplus}$ \Comment{Combine new primal}
\State $x_i \gets x_{i-1} + \text{\texttt{Conv3d}}(f, \theta_i^o)$ \Comment{Update $x_{i-1}$}
\State $I \gets I + [x_i]$ \Comment{Append $x_i$ to output list}
\State $h \gets \text{\texttt{Perm}}(h, {\theta_i^m})$ \Comment{Permute dual channels w. $\theta_i^m$}
\State $f \gets \text{\texttt{Perm}}(f, {\theta_i^m})$ \Comment{Permute prim. channels w. $\theta_i^m$}
\EndFor
\State \textbf{return} $I$
\EndProcedure
\end{algorithmic}
\caption{LIRE.}
\label{alg:liremain}
\end{algorithm}
\begin{algorithm}
\centering
\begin{algorithmic}[1]
\Procedure{\texttt{loss}}{$x, y, V_f, V_s$}
\State $L_1 \gets \| x - y \|_{V_f, 1}$ \Comment{$L^1$ loss in full FOV}
\State $L_2 \gets \| x - y \|_{V_s, 1}$ \Comment{$L^1$ loss in part. FOV}
\State $S_1 \gets 1.0 - \text{\texttt{SSIM}}_{V_f}(x, y)$ \Comment{1-SSIM, full FOV}
\State $S_2 \gets 1.0 - \text{\texttt{SSIM}}_{V_s}(x, y)$ \Comment{1-SSIM, part. FOV}
\State \textbf{return} $L_1 + \alpha_1 S_1 + \alpha_2 (L_2 + \alpha_1 S_2)$
\EndProcedure
\For{$j \gets 1, \dots, N_{\text{iter}}$}
\State $x \sim \mathcal{D}_{\text{train}}$\Comment{Sample train volume}
\State $\Delta \sim \mathcal N(0, 100) \in \mathbb R^3$ \Comment{Sample offset w.r.t. scan center}
\State $\delta \gets x.\text{\texttt{spacing}}$ \Comment{Get spacing of volume $x$}
\State $\mathcal P, \mathcal P^* \gets \mathcal P_{\Delta, \delta}, \mathcal P_{\Delta, \delta}^*$ \Comment{Define projector, backprojector}
\State $\overline{\mathcal P}, \overline{\mathcal P}^* \gets \mathcal P / \| \mathcal P\|, \mathcal P^* / \| \mathcal P\|$ \Comment{Normalize operators}
\State $y \gets \text{\texttt{Poisson}}(I_0 \cdot e^{-\mathcal P(x)})$ \Comment{Noisy projections}
\State $\overline y \gets -\text{\texttt{ln}}(y) / \| \mathcal P\| $\Comment{Normalized log-transform}
\State $V_f \gets \text{\texttt{FullFOV}}(\mathcal P)$ \Comment{Compute full FOV}
\State $V_p \gets \text{\texttt{PartialFOV}}(\mathcal P)$ \Comment{Compute partial FOV}
\State $V_s \gets V_p \setminus V_f$ \Comment{Incomplete FOV mask}
\State $I \gets \text{\texttt{RECONSTRUCT}}(\overline y, \overline{\mathcal P}, \overline{\mathcal P}^*, \theta, V_f)$ \Comment{Reconstruct}
\State $\text{loss} \gets 0$ \Comment{Initialize loss tensor}
\For{$z \gets I[1], \dots, I[8]$} \Comment{Loop over iterates}
\State $\text{loss} \gets \text{loss} + \text{\texttt{LOSS}}(x, z, V_f, V_s)$ \Comment{Increment loss}
\EndFor
\State $\text{compute gradients of loss w.r.t. $\theta$, update $\theta$}$
\EndFor
\end{algorithmic}
\caption{Training of LIRE.}
\label{alg:liretrain}
\end{algorithm}
LIRE is a data-driven algorithm, where a learned iterative scheme is unrolled and the parameters of this scheme are jointly optimized to minimize expected reconstruction loss over the training dataset. The choice of a particular scheme will, naturally, affect both the performance and the required resources such as GPU memory to train such a scheme.
When designing LIRE, we took inspiration from the Learned Primal-Dual (LPD) reconstruction algorithm \citep{Adler2017b}. The main disadvantage of LPD, however, is that it does not scale well to 3D reconstruction problems such as cone-beam CT. We drastically reduce the memory footprint of the LIRE algorithm compared to vanilla LPD and at the same time improve its expressive power by using more complex primal and dual blocks. In order to reduce the memory footprint, we designed LIRE around two main principles: reversibility of the network as a whole and patch-wise computation for local operations. We briefly describe these two concepts below. Furthermore, for the additional proof-of-concept experiment on high-resolution data, we implemented a CPU-GPU memory streaming mechanism, which keeps the entire primal/dual vectors in CPU memory and only sends the channels required for computing the primal/dual updates to the GPU.
Reversible residual neural networks were originally introduced in \citep{revnet2017}. In a reversible residual network layer, the input tensor $z$ is split into tensors $z_1, z_2$ along the channel dimension. The output $w$ of the layer is then defined by combining $z_1$ and $z_2 + \Lambda(z_1)$ back along the channel dimension, where $\Lambda$ is a Convolutional Neural Network. Since the input $z$ can be uniquely restored\footnote{Up to numerical errors, which are typically negligible in practice for small neural networks.} from the output $w$, it is not essential to store intermediate features of $\Lambda$ prior to the actual computation of gradients for $\Lambda$. The main observation behind patch-wise computation is that for neural networks which are composed solely of local operators such as valid convolutions, activation functions, upsampling and downsampling layers, it is possible to partition the input tensor $z$ into patches $z_i, i=1,\dots,k$ along the spatial dimensions and compute the output patch-by-patch. In general, for every $i$ it is also necessary to enlarge the patch $z_i$ by adding all tensor elements $\partial z_i \subset z$ within a certain distance of $z_i$ in order to account for the receptive field of the network when computing features for locations inside $z_i$. For the special case of image classification CNNs that do not involve U-net type architectures or non-local operations, such a patch-wise computation strategy was used in \citep{pinckaers2019}.
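A minimal PyTorch sketch of such a reversible residual layer (illustrative only; LIRE's actual blocks are the primal/dual networks described below):
\begin{verbatim}
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """w = [z1, z2 + F(z1)]: the input is recoverable from the
    output, so intermediate activations need not be stored."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.f = nn.Sequential(
            nn.Conv3d(half, half, 3, padding=1), nn.LeakyReLU(),
            nn.Conv3d(half, half, 3, padding=1))

    def forward(self, z):
        z1, z2 = torch.chunk(z, 2, dim=1)
        return torch.cat([z1, z2 + self.f(z1)], dim=1)

    def inverse(self, w):
        w1, w2 = torch.chunk(w, 2, dim=1)
        return torch.cat([w1, w2 - self.f(w1)], dim=1)
\end{verbatim}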
The reconstruction algorithm is given by the function \texttt{RECONSTRUCT}($y, \mathcal P, \mathcal P^*, \theta, V$) in Algorithm~\ref{alg:liremain}. Here $y$ is the log-transformed and scaled projection data, $\mathcal P$ and $\mathcal P^*$ are the normalized projection and backprojection operators respectively, $\theta$ is a parameter vector and $V$ is an auxiliary single-channel image space tensor with the same dimensions as the reconstructed volume which we will define later in Section~\ref{ss.liretrain}. Parameters $\theta$ are partitioned into 4 parameter groups, where $\{ \theta_i^p \}_{i=1}^8$ are the primal block parameters, $\{ \theta_i^d \}_{i=1}^8$ are the dual block parameters, $\{ \theta_i^o \}_{i=1}^8$ are the output convolution parameters and $\{ \theta_i^m \}_{i=1}^8$ are the permutation parameters.
We clarify the notation first. We write $[z_1, z_2, \dots, z_k]^{\oplus}$ to denote the channel-wise concatenation of tensors $z_1, z_2, \dots, z_k$ which are assumed to have the same spatial and batch dimensions. Function $\text{\texttt{Splt}}(z)$ splits tensor $z$ with $2n$ channels into tensors $z_1, z_2$ which get the first $n$ feature maps of $z$ and the last $n$ feature maps of $z$ respectively. Function $\text{\texttt{Perm}}(z, {\theta_i^m})$ permutes tensor $z$ with $n$ channels along the channel dimension with a permutation $\theta_i^m \in \text{\texttt{Sym}}(n)$.
In LIRE we use 8 primal/dual iterations (8 primal and 8 dual blocks) with both primal and dual latent vectors having 8 channels. Backprojected data without FBP filtering is used to initialize the reconstruction $x_0$. The initial primal vector is defined by stacking 8 copies of $x_0$, and the initial dual vector is defined by stacking 8 copies of $y$. At the beginning of each iteration $i = 1,\dots,8$, we split the primal and the dual latent vectors along the channel dimension. First we update the dual latent vector in Line 10 of Alg.~\ref{alg:liremain} using dual block $\Gamma_{\theta_i^d}$ comprised of 3 layers of $3 \times 3 \times 3$ convolutions with 96, 96 and 4 filters respectively and LeakyReLU activation after the first and the second convolution layer.
To update the primal block, we compute the Landweber term in Line 12 of Alg.~\ref{alg:liremain}, which plays a similar role as the gradient log-likelihood term in Recurrent Inference Machines in \citep{lonning2019}. We update the primal latent vector in Line 13 of Alg.~\ref{alg:liremain} using primal block $\Lambda_{\theta_i^p}$. Primal block $\Lambda_{\theta_i^p}$ is a U-net with a single downsampling layer, $3 \times 3 \times 3$ valid convolutions with 96 filters in the first double-convolution block, 192 filters in the bottleneck and LeakyReLU activation after all but the last convolution layer. We use average pooling with $2 \times 2 \times 2$ kernel in the encoder and nearest upsampling in the decoder layer.
The primal and dual updates are computed patch-wise, which is possible thanks to the locality of $\Gamma_{\theta_i^d}$ and $\Lambda_{\theta_i^p}$; during the backward pass, weight gradients obtained from different patches are summed to obtain the global weight gradients. New primal and dual vectors are combined in Lines 14-15. Reconstruction $x_{i-1}$ is updated in Line 16, where $\text{\texttt{Conv3d}}$ is a $1 \times 1 \times 1$ convolution with parameters $\theta_i^o$, and we append the new reconstruction $x_i$ to the output list in Line 17. Finally, we permute the channels of the primal and dual latent vectors using the same permutation $\theta_i^m$ in Lines 18-19. For every $i$, the permutation $\theta_i^m$ is some fixed permutation of $[1,2,\dots,8]$ which is randomly initialized during model initialization and stored as a model parameter; we require that $\theta_i^m$ mixes the first and the second half of $[1,2,\dots,8]$.
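For readability, a simplified PyTorch rendering of one full iteration of Alg.~\ref{alg:liremain} is sketched below; here \texttt{P}, \texttt{PT}, \texttt{dual\_net}, \texttt{primal\_net} and \texttt{out\_conv} stand in for the channel-wise projector, the backprojector, the dual block, the primal U-net and the $1 \times 1 \times 1$ output convolution, and the patch-wise evaluation is omitted:
\begin{verbatim}
import torch

def lire_iteration(x, f, h, y, V, P, PT,
                   dual_net, primal_net, out_conv, perm):
    d1, d2 = torch.chunk(h, 2, dim=1)      # split dual channels
    p1, p2 = torch.chunk(f, 2, dim=1)      # split primal channels
    p_op = P(torch.cat([p2, x], dim=1))    # project p2 and current x
    d2 = d2 + dual_net(torch.cat([p_op, d1, y], dim=1))
    b_op = PT(d2)                          # backproject updated dual
    lw = PT(P(x) - y)                      # Landweber term
    p2 = p2 + primal_net(torch.cat([b_op, p1, x, lw, V], dim=1))
    h = torch.cat([d1, d2], dim=1)         # combine new dual
    f = torch.cat([p1, p2], dim=1)         # combine new primal
    x = x + out_conv(f)                    # 1x1x1 output convolution
    return x, f[:, perm], h[:, perm]       # permute latent channels
\end{verbatim}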
The algorithm was implemented as a `black box' C++/CUDA extension for PyTorch \citep{pytorch} in order to maximize speed and memory efficiency. Firstly, we implemented the projection and the backprojection operators for CBCT geometry as a CUDA extension for PyTorch. Since both operators are linear and the backprojection operator is the Hermitian adjoint of the projection operator, this is sufficient to enable gradient backpropagation. In the projector code, we followed the same approach as ASTRA Toolbox \citep{astra2016} and PYRO-NN \citep{syben2019} by using texture memory and trilinear interpolation when sampling attenuation values along the source-detector rays. Adjointness of the operators was tested by checking the definition of Hermitian adjoint
\begin{equation*}
\langle \mathcal P x, y \rangle = \langle x, \mathcal P^* y\rangle
\end{equation*}
for random positive test functions (=tensors) $x, y$. The LIRE network itself was then built as a C++/CUDA extension for PyTorch by implementing \textit{both} forward and backward passes, since automatic differentiation is not available inside C++/CUDA extensions. PyTorch automatic differentiation was still used to compute the gradients of the loss for the output tensors $x_1, \dots, x_8 \in I$, but the subsequent computation of the gradients of the parameters $\theta$ was performed by LIRE in the backward pass. Correctness of the gradient computations for LIRE parameters was verified by computing numerical directional derivatives for random directions inside the parameter space and comparing these with the analytical directional derivatives computed using gradients from LIRE.
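A sketch of this adjointness test, where \texttt{P} and \texttt{PT} are placeholders for the implemented projector and backprojector:
\begin{verbatim}
import torch

def check_adjointness(P, PT, vol_shape, proj_shape, tol=1e-3):
    """Verify <P x, y> ~ <x, P* y> for random positive tensors."""
    x = torch.rand(vol_shape)
    y = torch.rand(proj_shape)
    lhs = torch.sum(P(x) * y)
    rhs = torch.sum(x * PT(y))
    rel_err = torch.abs(lhs - rhs) / torch.abs(lhs)
    assert rel_err < tol, f"adjointness violated: {rel_err:.2e}"
\end{verbatim}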
\subsection{LIRE training details}
\label{ss.liretrain}
\begin{table*}[t]
\caption{Test results on thorax CT and head \& neck CT at 2 mm voxel pitch (best result in bold)}
\label{tab:comparison-lung}
\centering
\begin{tabular}{|l|l l|l l|l|l|l|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Thorax CT} & \multicolumn{2}{c|}{H\&N CT} & \multirow{2}{*}{Weight count} & \multirow{2}{*}{Optimizer steps} & \multirow{2}{*}{Batch size}\\
\cline{2-5}
& PSNR & SSIM & PSNR & SSIM & & & \\
\hline
FBP (small FOV) & $15.289$ & $0.572$ & $27.647$ & $0.721$ & - & - & -\\
TV (small FOV) & $27.370$ & $0.771$ & $33.903$ & $0.866$ & - & - & -\\
U-Net (small FOV) & $32.405$ & $0.803$ & $36.433$ & $0.879$ & 23341k & 284640 & 1\\
LIRE (small FOV) & $\mathbf{33.345}$ & $\mathbf{0.885}$ & $\mathbf{37.941}$ & $\mathbf{0.971}$ & 24497k & 6798 & 8\\
\hline
FBP (large FOV) & $20.051$ & $0.662$ & $22.396$ & $0.711$ & - & - & -\\
TV (large FOV) & $29.237$ & $0.793$ & $37.862$ & $0.945$ & - & - & -\\
U-Net (large FOV) & $34.297$ & $0.849$ & $37.064$ & $0.885$ & 23341k & 287040 & 1\\
LIRE (large FOV) & $\mathbf{34.432}$ & $\mathbf{0.903}$ & $\mathbf{40.113}$ & $\mathbf{0.982}$ & 24497k & 6600 & 8\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{Test results on thorax CT at 1 mm voxel pitch (best result in bold)}
\label{tab:comparison-lung-high}
\centering
\begin{tabular}{|l|l l|l|l|l|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Thorax CT} & \multirow{2}{*}{Weight count} & \multirow{2}{*}{Optimizer steps} & \multirow{2}{*}{Batch size}\\
\cline{2-3}
& PSNR & SSIM & & & \\
\hline
U-Net (large FOV) & $33.773$ & $0.848$ & 23341k & 62400 (cold start) & 1\\
LIRE (large FOV) & $\mathbf{35.784}$ & $\mathbf{0.881}$ & 24497k & 1560 (warm start) & 2\\
\hline
\end{tabular}
\end{table*}
We provide the training procedure for LIRE in Algorithm~\ref{alg:liretrain}. The training is supervised, and the training set of CT volumes is denoted by $\mathcal D_{\text{train}}$. We elaborate on the training procedure below.
A CT volume is repeatedly sampled from the training dataset in Line 9 of Alg.~\ref{alg:liretrain}. During the sampling, augmentations that flip the patient left-right and top-bottom are randomly applied, both with probability $50 \%$. We sample a random offset for the rotation center w.r.t. the center of the CT volume from an isotropic Gaussian distribution with $0$ mean and a standard deviation of $100$ mm in Line 10. Choosing a random offset can be viewed as an additional type of augmentation; furthermore, in practice the isocenter in radiotherapy will be located close to a tumor. We define projection and backprojection operators for the CBCT projection geometry with the given volume spacing and center offset in Line 12, and in Line 13 we compute normalized versions of these operators. The operator norm is estimated numerically using the power method with three iterations \citep{boyd1974}. Synthetic noisy projection data is computed in Line 14 (see Eq. \ref{eq.noisemodel}). This noisy projection data is log-transformed and scaled in Line 15. In general, for a realistic CBCT geometry the field of view does not necessarily contain the scanned object completely. When comparing reconstruction metrics it is also important to compute these metrics inside an appropriately defined field of view only, since having a large part of the volume set to $0$ outside the corresponding field of view would yield over-optimistic reconstruction metrics. We define the full field of view tensor $V_f$ and the partial field of view tensor $V_p$ in Lines 16 and 17 respectively; both are scalar tensors with the same dimensions as the volume that we want to reconstruct. For the projection geometry with the small FOV setting, the full field of view tensor is constructed as
\begin{equation*}
V_f(p) = \begin{cases}
1 & p \textrm{ is seen from all projection angles} \\
0 & \textrm{otherwise,}
\end{cases}
\end{equation*}
while for the projection geometry with large FOV setting the full field of view tensor is constructed as
\begin{equation*}
V_f(p) = \begin{cases}
1 & p \textrm{ is seen from all projection angles} \\
0.5 & p \textrm{ is seen from half of the proj. angles} \\
0 & \textrm{otherwise.}
\end{cases}
\end{equation*}
We chose to use different values ($1.0$ and $0.5$) above to mark the voxels seen from all the projection angles and the voxels which are seen from only half of the angles, however, we expect that the exact numerical values used in these masks are not important. For both small and large field of view settings, the partial field of view is defined as
\begin{equation*}
V_p(p) = \begin{cases}
1 & p \textrm{ is seen from at least one angle} \\
0 & \textrm{otherwise.}
\end{cases}
\end{equation*}
In particular, this definition of $V_p$ implies that in the central axial plane all voxels are marked as `partially visible'. In Line 18, we define $V_s$ as the tensor which equals $1$ on the set of all voxels $p$ s.t. $V_p(p) > 0, V_f(p) = 0$ and zero elsewhere. In Line 19, we call the main reconstruction procedure, providing the log-transformed normalized projection data, the normalized versions of the projection and backprojection operators, the collection of weights $\theta$ and the auxiliary tensor $V_f$. $V_f$ helps the network to deal with the non-homogeneous nature of the reconstruction artifacts.
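The operator normalization in Line 13 of Alg.~\ref{alg:liretrain} relies on the power-method estimate of $\| \mathcal P \|$ mentioned above; a minimal sketch with the same three iterations:
\begin{verbatim}
import torch

def estimate_operator_norm(P, PT, vol_shape, n_iter=3):
    """Power method on P*P; ||P|| is the square root of
    its largest eigenvalue."""
    v = torch.rand(vol_shape)
    for _ in range(n_iter):
        v = PT(P(v))
        v = v / torch.linalg.vector_norm(v)
    return torch.sqrt(torch.linalg.vector_norm(PT(P(v))))
\end{verbatim}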
The reconstruction algorithm returns a list $I = [z_1, z_2, \dots, z_8]$ of reconstructions, which are obtained after performing $1, 2, \dots, 8$ reconstruction steps respectively. We sum the reconstruction losses over all $z \in I$ in Line 22. The loss computation takes place in the $\texttt{LOSS}$ function in Alg.~\ref{alg:liretrain}. We sum losses over the full field of view region, where $V_f > 0$, and the partial field of view region, where $V_s > 0$. We compute the loss for the partial field of view to ensure that the network can provide at least an approximate reconstruction in this region. A linear combination of the $L^1$ loss and the Structural Similarity loss is computed for both regions. We used $\alpha_1 = 0.1$ for both field of view settings. $\alpha_2$ was set to $0.1$ initially and then reduced to $0.01$ after the first learning rate decay step.
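A sketch of this masked loss; the \texttt{masked\_ssim} below uses global statistics over the masked region, a simplification of the windowed SSIM actually used:
\begin{verbatim}
import torch

def masked_l1(x, y, mask):
    m = mask.float()
    return torch.sum(torch.abs(x - y) * m) / torch.clamp(m.sum(), min=1.0)

def masked_ssim(x, y, mask, c1=1e-4, c2=9e-4):
    # global SSIM statistics over the masked region (simplification)
    m = mask.float()
    n = torch.clamp(m.sum(), min=1.0)
    mx, my = (x * m).sum() / n, (y * m).sum() / n
    vx = (((x - mx) ** 2) * m).sum() / n
    vy = (((y - my) ** 2) * m).sum() / n
    cxy = ((x - mx) * (y - my) * m).sum() / n
    return ((2 * mx * my + c1) * (2 * cxy + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def lire_loss(x, target, V_f, V_s, alpha1=0.1, alpha2=0.1):
    # L^1 + alpha1*(1 - SSIM) over the full FOV plus the same
    # combination over the partial FOV, scaled by alpha2 (cf. Alg. 2)
    l_full = (masked_l1(x, target, V_f > 0)
              + alpha1 * (1.0 - masked_ssim(x, target, V_f > 0)))
    l_part = (masked_l1(x, target, V_s > 0)
              + alpha1 * (1.0 - masked_ssim(x, target, V_s > 0)))
    return l_full + alpha2 * l_part
\end{verbatim}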
We trained two versions of LIRE for the main experiments, one for the small FOV setting and one for the large FOV setting. LIRE was trained to reconstruct complete volumes. For the internal patch-based computations inside LIRE we set the patch size to $128 \times 128 \times 128$, resulting in roughly $30$ GB VRAM usage per single volume. Reducing the patch size to $32 \times 32 \times 32$ decreases the usage to roughly $20$ GB VRAM per single volume. Eight NVIDIA Quadro RTX 8000 GPUs with 48 GB VRAM were used for training LIRE in distributed data parallel mode. We used the Adam optimizer \citep{kingma2014} with an initial learning rate of $0.0001$ and a plateau scheduler with linear warm-up and 10 epoch patience. At the end of each epoch the models were evaluated, and the best model was picked for testing. Training was stopped when we did not observe improvement for more than 15 epochs. For the additional proof-of-concept study on high-resolution data, we performed a warm start from LIRE trained on downsampled lung CT scans with the large FOV setting. Two NVIDIA A100 Tensor Core GPUs with 80 GB VRAM inside a virtual machine on the NVIDIA TryEGX Platform were used. We employed the Adam optimizer with an initial learning rate of $0.000025$ and linear warm-up; the model was fine-tuned for 12 epochs without any learning rate decay. The $L^1$ reconstruction loss was used during fine-tuning due to the higher memory costs associated with the SSIM loss. LIRE evaluation was performed in the full field of view region, where $V_f > 0$, using PSNR and SSIM metrics.
During the inference on the downsampled data with 2 mm voxel pitch and $256 \times 256$ detector panel, it takes LIRE approximately 104 seconds to reconstruct a single volume on a single Quadro RTX 8000 for the small FOV setting and approximately 115 seconds to reconstruct a volume for the large FOV setting. On high-resolution data with 1 mm voxel pitch, $512 \times 512$ detector panel and large FOV setting, it takes LIRE approximately 15 minutes to reconstruct a single volume on Quadro RTX 8000, and 6 minutes if A100 is used instead. Faster inference on A100 can be attributed to higher memory bandwidth and other hardware architectural improvements.
\section{Results}
\subsection{Main experiments}
\label{s.results}
In Table \ref{tab:comparison-lung} we summarize the results for the test set of thorax CT scans and the out-of-distribution test set of head \& neck CT scans. We provide thorax CT axial slices for the small FOV in Figure \ref{fig:axial-small}, thorax CT coronal slices for the small FOV in Figure \ref{fig:cor-small}, thorax CT axial slices for the large FOV in Figure \ref{fig:axial-large}, thorax CT coronal slices for the large FOV in Figure \ref{fig:cor-large}, head \& neck CT axial slices for the large FOV in Figure \ref{fig:axial-large-ext} and head \& neck CT coronal slices for the large FOV in Figure \ref{fig:cor-large-ext}. The slices were taken from randomly chosen test volumes. We see that our method outperforms the classical and deep learning baselines in all cases, including the out-of-distribution test set. Compared to U-net+FBP, most notable are the improvement in SSIM, ranging from $+0.05$ to $+0.08$ on thorax CT data, and a much larger field-of-view, since LIRE is not constrained by the data sufficiency region of FBP. The PSNR improvement over the U-net is $+0.9$ dB for the small FOV and $+0.13$ dB for the large FOV. On the out-of-distribution test set, we observe better generalization of LIRE compared to U-net+FBP in the form of an increased PSNR and SSIM gap between LIRE and U-net, even though LIRE has a slightly higher parameter count, suggesting that primal-dual schemes with shallow U-nets generalize better than a single deep U-net. Visual inspection of thorax CT slices shows better visibility of lung fissures in LIRE reconstructions compared to the baselines. In head \& neck CT slices, we observe that the U-net loses spatial resolution and introduces a strong `shadow' in the neck region. LIRE yields the best reconstructions on the head \& neck CT set due to better handling of photon noise compared to the iterative method, but in the low-noise neck region we observe that the methods are quite close in visual image quality.
Additionally, we measured the performance of LIRE and PDHG on the test set of thorax CT data for the small FOV setting in the region where $V_f = 0, V_p = 1$, consisting of the voxels in the partial field of view which do not belong to the full field of view. This way we obtained mean PSNR of $16.938$ and mean SSIM of $0.233$ for PDHG, whereas for LIRE mean PSNR was $28.156$ and mean SSIM was $0.795$.
The results of the proof-of-concept high-resolution experiment for the test set of thorax CT scans are summarized in Table \ref{tab:comparison-lung-high}. We only provide a comparison with U-net+FBP, since it is the best performing baseline method on the downsampled data. We provide high-resolution thorax CT axial slices for the large FOV in Figure \ref{fig:axial-large-hr} and high-resolution thorax CT coronal slices for the large FOV in Figure \ref{fig:coronal-large-hr}. In this proof-of-concept experiment, LIRE still outperforms U-net+FBP. Similar to our experiments on downsampled data, visual inspection of the high-resolution thorax CT slices shows better visibility of lung fissures in LIRE reconstructions compared to the U-net.
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_gt_axial_95_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_lire_axial_95_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{1600209177fbc9_unet_axial_95_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_fbp_axial_95_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_pdhg_axial_95_b.pdf}%
\caption{(a) Axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/small FOV, (c) U-net/small FOV, (d) FBP/small FOV, and (e) PDHG/small FOV.}
\label{fig:axial-small}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_gt_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_lire_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{1600209177fbc9_unet_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_fbp_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600209177fbc9_pdhg_coronal_129_b.pdf}%
\caption{(a) Coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/small FOV, (c) U-net/small FOV, (d) FBP/small FOV, and (e) PDHG/small FOV.}
\label{fig:cor-small}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_gt_axial_97_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_lire_axial_97_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{1600247942cf6c_unet_axial_97_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_fbp_axial_97_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_pdhg_axial_97_b.pdf}%
\caption{(a) Axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:axial-large}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_gt_coronal_165_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_lire_coronal_165_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{1600247942cf6c_unet_coronal_165_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_fbp_coronal_165_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{1600247942cf6c_pdhg_coronal_165_b.pdf}%
\caption{(a) Coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:cor-large}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_gt_axial_117_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_lire_axial_117_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{141_unet_axial_117_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_fbp_axial_117_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_pdhg_axial_117_b.pdf}%
\caption{(a) Axial slice of Head \& Neck CT with HU range=(-1000, 1000), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:axial-large-ext}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_gt_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_lire_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth, ]{141_unet_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_fbp_coronal_129_b.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.3\linewidth]{141_pdhg_coronal_129_b.pdf}%
\caption{(a) Coronal slice of Head \& Neck CT with HU range=(-1000, 1000), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:cor-large-ext}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_gt_axial_176_b_hr.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.49\linewidth]{1600247942cf6c_lire_axial_176_b_hr.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.49\linewidth, ]{1600247942cf6c_unet_axial_176_b_hr.pdf}%
\caption{(a) High resolution axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV.}
\label{fig:axial-large-hr}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_gt_coronal_257_b_hr.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.49\linewidth]{1600247942cf6c_lire_coronal_257_b_hr.pdf}%
\hfil
\subfloat[]{\includegraphics[ width=0.49\linewidth, ]{1600247942cf6c_unet_coronal_257_b_hr.pdf}%
\caption{(a) High resolution coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV.}
\label{fig:coronal-large-hr}
\end{figure*}
\clearpage
\section{Discussion}\label{sec:discussion}
\label{s.discussion}
We have presented LIRE, a practical algorithm for deep leaning-based CBCT reconstruction with clinically-relevant resolution and projection count using a learned primal-dual scheme that can be trained end-to-end on current consumer GPUs with 24 GB VRAM. We have shown that our method outperforms the classical and deep learning baselines on the test set of thorax CT scans and the out-of-distribution test set of head \& neck CT scans, where we additionally observe better generalization of our method compared to the U-net baseline. In particular, the photon noise in highly attenuated areas is handled very well, which indicates that LIRE can potentially help to lower the dose of CBCT scans. For the small field of view setting, our method is able to reconstruct certain anatomy details outside the full field of view much better than the iterative baseline, which can be interesting for applications in radiotherapy, e.g., by allowing for a better registration of the planning CT scan to the CBCT reconstruction.
This work has certain limitations. Firstly, we do not take scatter artifacts into account. Feasibility of supervised scatter correction with deep learning was demonstrated in e.g. \citep{dse_spie}, and such method can be in principle combined with our learned primal-dual scheme and trained end-to-end. Secondly, we do not correct for possible motion artifacts in thorax CBCT due to breathing or heartbeat. Thirdly, our metrics do not directly imply suitability of our method for radiotherapy planning; a proper Monte Carlo dose simulation would be required to test that.
\section{Acknowledgements}
\label{s.acks}
We would like to thank NVIDIA Corporation for providing us with the access to A100 virtual machine instances and for supporting us throughout these experiments. In particular, we would like to thank Joe Cullen from NVIDIA for enabling this collaboration.
\bibliographystyle{abbrvnat
\section{Introduction}
\label{s.intro}
Since its inception in the 1960s, Computed Tomography (CT) has enjoyed huge success in medical imaging. It is characterized by a specific acquisition and reconstruction process, in which a set of X-ray projections is first acquired for varying positions of the source and the detector and where X-rays from the source typically form a narrow fan beam. Subsequently, this projection data is processed by a reconstruction algorithm yielding either a two-dimensional slice or a three-dimensional volume. One of the more recent variants of Computed Tomography is the Cone Beam Computed Tomography (CBCT), where X-rays from the source diverge in a wider cone-shaped beam. Both the source and the detector in CBCT typically follow circular orbits around the isocenter, and the detector is a large flat panel array. CBCT is widely used in the clinic nowadays in dentistry \citep{dawood2009}, interventional radiology \citep{floridi2014} and image-guided radiation therapy \citep{jaffray2002}, in certain cases replacing the classical CT.
CBCT reconstruction, however, is a hard problem. Firstly, it is known \citep{maas2010, tuy1983} that the data completeness condition for exact reconstruction of the whole volume is not satisfied for circular source/detector orbits. CBCT also inherits the imaging artifacts of classical CT such as streaking due to photon starvation in highly attenuated areas, which becomes particularly pronounced for repeated lower dose CBCT scans, and beam hardening. Furthermore, scatter-induced artifacts become more prominent due to the large panel size. These issues result in generally poor Hounsfield unit calibration, which is a serious limitation for applications in radiotherapy, where one would ideally use a daily CBCT scan for treatment plan adjustment without registration to a prior CT scan \citep{sonke2019}. This necessitates, along with other applications, the ongoing research on CBCT reconstruction.
In recent years, reconstruction methods based on deep learning have attracted a lot of interest in the community and demonstrated very promising results in public reconstruction challenges. For example, in the recent MRI reconstruction challenges \citep{fastmri2020, beauferris2020} deep learning methods have strongly outperformed the classical baselines. Generally speaking, any medical image reconstruction task can be viewed as an abstract inverse problem for a suitable forward operator, and different approaches have been proposed in the literature for solving such problems with deep learning \citep{schoenlieb2019}. We will only consider supervised reconstruction methods in this paper, but we would like to mention that unsupervised methods have also been developed.
One of the possible ways to apply deep learning to CT or CBCT reconstruction problems is to use a neural network as a learned post-processing operator for a classical reconstruction method such as filtered back-projection (FBP). This strategy was investigated in a number of publications, e.g., \citep{unser2017}, where it has been demonstrated that such learned post-processing can increase the reconstruction quality. At the same time, the neural network in this case does not have direct access to the raw data, thus it can fail to recover from some of the artifacts introduced by the classical reconstruction algorithm.
A rich family of alternative methods is given by \textit{learned iterative schemes}. Such schemes are often inspired by classical iterative schemes, combining the knowledge of the forward operator and its adjoint with neural networks that complement these schemes by e.g. filtering noise in the update term. A particularly important example of such schemes for two-dimensional CT is the Learned Primal-Dual (LPD) algorithm \citep{Adler2017b}, which was inspired by the Primal-Dual Hybrid Gradient method \citep{pdhg2011}. In LPD, computations are performed by \textit{primal blocks} in the image domain and by \textit{dual blocks} in the projection domain, where each block is a small residual convolutional neural network, and the blocks are connected by projection and backprojection operators, enabling end-to-end training. Such an architecture allows noise to be filtered efficiently, since raw projection data is provided to the dual blocks. Extensions of LPD to other modalities have been proposed as well, e.g., DBToR \citep{teuwen2021} has shown good results in two-dimensional Digital Breast Tomosynthesis and XPDNet has performed very well on two-dimensional MRI reconstruction in \citep{ramzi2020a}.
Unfortunately, LPD does not scale to a three-dimensional modality such as CBCT due to memory limitations. Indeed, for a $256 \times 256 \times 256$ float tensor, a single convolution layer with $96$ features would already require 12 GB of memory to perform a backpropagation. This makes it impossible to train LPD on clinically relevant resolutions. Increasing the complexity of the primal/dual blocks beyond simple residual Convolutional Neural Networks would increase memory requirements even further. $\partial $U-Net, which is an alternative, simpler scheme that does not operate in the projection space, was proposed in \citep{hauptmann2020}, where the memory footprint was reduced using a multiscale approach, and reconstructions obtained by primal blocks at different scales are merged together by a U-Net. However, this method still does not allow training at clinically relevant resolutions, and the expressive power of this scheme is reduced due to the absence of dual blocks and the conservative filter counts that were necessary to reduce the memory footprint. As for unrolled primal-dual schemes like LPD, they have not yet been shown to work in 3D for clinically relevant resolution and projection count, and it is the goal of this work to introduce such a scheme.
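To make the memory bottleneck concrete, the estimate above can be reproduced with a few lines of arithmetic. The sketch below is illustrative only; in particular, the factor of two, accounting for storing both the activations and a gradient tensor of the same shape, is our assumption about how such an estimate is obtained.
\begin{verbatim}
# Back-of-the-envelope check of the memory claim above.
voxels = 256 ** 3                  # 16,777,216 spatial locations
channels = 96                      # feature maps of one convolution
bytes_per_float = 4                # float32
activations = voxels * channels * bytes_per_float
# Backpropagation needs the stored activations plus a gradient
# tensor of the same shape (our assumption for this estimate).
backprop_bytes = 2 * activations
print(backprop_bytes / 2 ** 30)    # ~12.0 GiB
\end{verbatim}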
\section{Our contribution}
\label{s.contrib}
The key results of this work are:
\begin{itemize}
\item We develop LIRE, a practical framework for deep learning-based CBCT reconstruction with clinically-relevant resolution and projection count using a learned iterative scheme that can be trained end-to-end on current consumer GPUs with 24 GB VRAM. Our framework is comprised of a learned primal-dual iterative scheme with residual primal-dual blocks, and a particular set of essential memory optimization techniques that are embedded in the algorithm.
\item Two models are trained from scratch for clinical CBCT geometries with small and large field-of-view respectively, where the large field-of-view is accomplished via an offset of the detector panel. We train the models on thorax CT scans with $256^3$ voxels ($2$ mm voxel pitch), using a $256 \times 256$ detector panel ($1.6$ mm pixel pitch) and either $400$ or $720$ projections.
\item We demonstrate superiority of our method to analytical, iterative and deep learning baselines on the test set of thorax CT scans for both field-of-view settings.
\item We demonstrate better out-of-distribution generalization of our method compared to a deep learning baseline for both geometries on a test data set of head \& neck CT scans, where our method improves upon analytical and iterative baselines as well.
\item We show additionally, as a proof of concept, that using NVIDIA A100 Tensor Core GPUs with 80 GB VRAM our model can be easily fine-tuned on thorax CT scans with $512^3$ voxels at a native $1$ mm voxel pitch, $512 \times 512$ detector panel and $720$ projections. We compare the fine-tuned model with a deep learning baseline, observing superiority of our method.
\end{itemize}
The major novelties of this work are:
\begin{itemize}
\item Our unrolled primal-dual iterative scheme is reversible. To compute the gradients, only the final latent vectors and the sequence of reconstructions returned by the algorithm are needed. This allows longer schemes to be used but, on its own, does not allow complex primal-dual blocks to be used with 3D data due to memory limitations.
\item We additionally rely on the local nature of Convolutional Neural Networks (U-net included) and perform patch-wise computations inside the primal-dual blocks during both training and evaluation. During backpropagation, weight gradients received from different patches are summed, giving correct global gradients for network weights.
\item Thanks to these novelties, we are able to use 3D U-nets with high filter count inside the primal blocks. Conceptually, our framework allows U-nets to be used in both primal and dual blocks, which can be important for scatter correction but also when applying this framework to other modalities such as MRI.
\item We provide the network with an auxiliary scalar tensor which has the same shape as the reconstructed volume. In this tensor, intensity of a given voxel contains information about the percentage of projections for which the corresponding spatial location is visible. The network is trained to reconstruct the intensities of all voxels which are visible in at least one projection, which results in a larger field-of-view than FBP.
\end{itemize}
The minor novelties of this work are:
\begin{itemize}
\item Compared to LPD, our scheme maintains a separate variable for the current iteration of reconstruction. The algorithm returns all resulting intermediate reconstructions; the training loss is then computed as the sum of the reconstruction losses of these intermediate approximations. This way we aim to benefit from better supervision and also reduce potential gradient vanishing effects. Additionally, the algorithm can be stopped early during inference.
\item Compared to LPD, we provide a Landweber update term \citep{Kaipio2005} to the primal blocks. This update term plays a similar role to the gradient of the data log-likelihood in Recurrent Inference Machines in \citep{lonning2019}.
\end{itemize}
\section{Materials and methods}
\label{s.matmethods}
\subsection{Tomography and inverse problems}
\label{ss.tomo}
CBCT reconstruction can be viewed as an inverse problem. Let $x: z \mapsto x(z)$ be a function specifying the attenuation coefficient for every point $z \in \Omega_X$ in the spatial domain $\Omega_X \subset \mathbb R^3$. The circular source rotation orbit is parametrized as a curve $\gamma: [0,1] \to \mathbb R^3$. Detector position and orientation are specified as a family of planes $\Omega_Y: t \mapsto \Omega_Y(t)$ for $t \in [0,1]$, where each such plane is canonically identified with $\mathbb R^2$. The line going from the source position $\gamma(t)$ at time step $t \in [0, 1]$ to the detector element $u \in \Omega_Y(t)$ is denoted by $L_{t, u}$. The \textit{cone-beam transform operator}, or simply the \textit{projection operator}, is then defined as
\begin{equation}
\mathcal P(x)(t, u) = \int_{L_{t, u}} x(z) dz,
\end{equation}
therefore, $\mathcal P$ is a linear operator mapping functions defined on $\Omega_X$ to functions defined on $[0,1] \times \mathbb R^2$. Hermitian\footnote{For suitably defined $L^2$ function spaces.} adjoint $\mathcal P^*$ of $\mathcal P$ is called the \textit{backprojection operator}.
The noisy CBCT data acquisition process can then be modeled as
\begin{equation}\label{eq.noisemodel}
y = \text{\texttt{Poisson}}(I_0 \cdot e^{-\mathcal P x}),
\end{equation}
where $I_0$ is the unattenuated X-ray photon count. The inverse problem of CBCT reconstruction is then to determine the tissue attenuation coefficients $x$ given the noisy projection data $y$.
\subsection{Data}
\label{ss.data}
In this work we simulate two common clinical acquisition geometries for a Linac-integrated CBCT scanner from Elekta \citep{letourneau2005}: a small field-of-view setting and a medium field-of-view setting, which we will refer to as the `small FOV setting' and the `large FOV setting'. For both settings, the source-isocenter distance is $1000$ mm and the isocenter-detector plane distance is $536$ mm. For the small FOV setting, the source-isocenter ray passes through the center of the detector, while for the large FOV setting the detector is offset by $115$ mm to the side in the direction of rotation. A square detector panel with a side of $409.6$ mm and a $256 \times 256$ pixel array was used for the main experiments, while for the additional proof-of-concept study we switched to a $512 \times 512$ pixel array. The projection counts were $400$ and $720$ for the small FOV and the large FOV setting respectively for the main experiments. The source moves over a $200$ degree arc for the small FOV setting, and for the large FOV setting the source moves over a full circle.
To train and evaluate our models, we collected two diagnostic CT datasets from the institutional archive: a dataset of 424 thorax CT scans with isotropic spacing of $1$ mm, and a dataset of 79 head \& neck CT scans with anisotropic spacing of between $0.9$ mm and $1.0$ mm for the axial plane and between $1.0$ mm and $1.6$ mm for the perpendicular direction. Both datasets had axial slices of $512 \times 512$ voxels. For the main experiments, all data was downsampled by a factor of $2$, resulting in volumes with $256^3$ voxels. For the additional proof-of-concept study, we did not apply any resampling.
The thorax CT dataset was used to train, validate and test the models, while the additional head \& neck dataset was used exclusively for testing the models on out-of-distribution data. The thorax CT dataset was partitioned into a training set of 260 scans, a validation set of 22 scans and a test set of 142 scans.
To simulate noisy projection data for the CT scans, Hounsfield units were converted into attenuation coefficients using $\mu = 0.2 \ \textrm{cm}^{-1}$ as the water linear attenuation coefficient. Attenuated projection data was corrupted by Poisson noise with $I_0 = 30000$ photons in Eq. \eqref{eq.noisemodel}.
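For illustration, this simulation pipeline can be sketched as follows. The helper \texttt{project} is a hypothetical stand-in for the cone-beam projector $\mathcal P$ (returning line integrals in cm), and the Hounsfield-to-attenuation conversion is the standard inversion of the Hounsfield unit definition, which the text does not spell out explicitly.
\begin{verbatim}
import numpy as np

MU_WATER = 0.2   # water linear attenuation coefficient, 1/cm
I0 = 30000       # unattenuated photon count per detector pixel

def hu_to_mu(hu):
    # Invert the HU definition: mu = mu_water * (1 + HU / 1000).
    return MU_WATER * (1.0 + hu / 1000.0)

def simulate_noisy_projections(ct_hu, project, seed=0):
    # Acquisition model above: y = Poisson(I0 * exp(-P x)).
    rng = np.random.default_rng(seed)
    x = hu_to_mu(ct_hu)
    return rng.poisson(I0 * np.exp(-project(x)))
\end{verbatim}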
\subsection{Baseline methods}
\label{ss.baselines}
For the main experiments in this work we considered the following baselines: FBP \citep{feldkamp84}, PDHG \citep{pdhg2011} with TV regularization, and U-net with FBP input.
We used the ODL \citep{odl2017} implementations of FBP and PDHG. We chose a Hann filter with $0.9$ frequency modulation for the FBP baseline by testing different combinations of filter and frequency on the training set. Parker weighting was used for FBP reconstruction with the small FOV. For FBP reconstruction with the large FOV setting it is important to take into account that the central cylindrical region of the volume is measured by twice as many projections as the rest of the FOV, which results in a reconstruction artifact in the form of a bright ring around the center of axial volume slices. To reduce this effect, one solution is to smoothly reduce the intensity of projections in the detector region that captures the redundant data, transitioning from the detector center to the detector edge. We do so by multiplying all projections by the following weighting factor \citep{microct} after the FBP filtering and before the backprojection:
\begin{equation*}
\omega(s) = \begin{cases}
1 & -\Delta \leq s \leq -\Theta \\
\frac 1 2 \left( - \sin \left( \frac{\pi \arctan (s / D)}{2 \arctan(\Theta/ D)} \right) + 1\right) & -\Theta \leq s \leq \Theta \\
0 & \Theta \leq s \leq \Delta
\end{cases}
\end{equation*}
In this formula, $s$ is the signed distance between a detector pixel and the projection of the rotation axis onto the detector plane, which is taken with the `minus' sign if we are closer to the detector center than to the edge and with the `plus' sign otherwise, $D$ is the size of the detector, $\Delta$ is the value of $|s|$ at the detector edge, and $\Theta = 0.289 D$ is a parameter which we chose experimentally for our geometry to obtain uniform reconstructions without the ring artifacts.
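A direct transcription of this weighting into code reads as follows; this is a sketch only, with \texttt{Delta} denoting the value of $|s|$ at the detector edge as above.
\begin{verbatim}
import numpy as np

def redundancy_weight(s, D, Delta):
    # s: signed distance to the projected rotation axis (negative
    # towards the detector centre), D: detector size.
    theta = 0.289 * D
    if -Delta <= s <= -theta:
        return 1.0
    if -theta < s < theta:
        return 0.5 * (1.0 - np.sin(np.pi * np.arctan(s / D)
                                   / (2.0 * np.arctan(theta / D))))
    return 0.0  # theta <= s <= Delta
\end{verbatim}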
For the PDHG baseline, we used $600$ iterations with $0.25$ weight for the TV regularization term. The parameters of PDHG were obtained via tuning on the train set as well.
Finally, as a main deep learning baseline we implemented a 3D U-net for post-processing the FBP output. We used a U-net with $3$ downsampling layers, valid convolutions and $64$ base filters, similar to \citep{cicek2016}, but without Instance or Batch normalization layers. PReLU activation functions were used. As input for the U-net, we provided the FBP reconstruction and the field-of-view tensor $V_f$ defined later in Section \ref{ss.liretrain}. Two U-nets, one for small FOV and one for large FOV, were trained for the main experiments on downsampled data using the same augmentation strategy as LIRE and the same loss function as LIRE (see Alg.\ \ref{alg:liretrain}), except for the reconstruction loss over the partial field of view, since FBP quality is very poor in this region. U-nets were trained to reconstruct $128 \times 128 \times 128$ patches due to memory limitations. The Adam optimizer \citep{kingma2014} was employed with an initial learning rate of $0.0001$ and a plateau scheduler with linear warm-up and 10 epoch patience. The best-performing model on the validation set was used for testing. A separate U-net with similar architecture was trained for the proof-of-concept study on high-resolution data; similar to LIRE, $L^1$ reconstruction loss was minimized in this experiment. One NVIDIA Quadro RTX 8000 was used for training the U-nets. For the main experiments on downsampled data, it takes U-net+FBP approximately 10 seconds to reconstruct volumes for both geometries. PDHG takes 14 minutes to reconstruct a small FOV volume and 18 minutes to reconstruct a large FOV volume\footnote{It should be noted that the ODL implementation of PDHG performs a lot of data transfer between CPU and GPU, which hurts the performance.}.
Evaluation of LIRE and the baseline methods was performed using PSNR and SSIM metrics restricted to the full field of view region, where $V_f > 0$.
\section{LIRE}
\subsection{LIRE architecture and implementation}
\label{ss.arch}
\begin{algorithm}
\centering
\begin{algorithmic}[1]
\Procedure{\texttt{reconstruct}}{$y, \mathcal P, \mathcal P^*, \theta, V$}
\State $x_0 \gets \mathcal P^*(y)$ \Comment{Normalized backprojection init}
\State $I \gets []$ \Comment{Initialize output list}
\State $f \gets x_0^{\otimes 8} \in X^{8}$\Comment{Initialize primal vector}
\State $h \gets y^{\otimes 8} \in U^{8}$\Comment{Initialize dual vector}
\For{$i \gets 1, \dots, 8$}
\State $d_1, d_2 \gets \text{\texttt{Splt}}(h)$ \Comment{Split dual channels}
\State $p_1, p_2 \gets \text{\texttt{Splt}}(f)$ \Comment{Split prime channels}
\State $p_{\text{op}} \gets \mathcal P([p_2, x_{i-1}]^{\oplus})$ \Comment{Project $p_2$ and $x_{i-1}$}
\State $d_2 \gets d_2 + \Gamma_{\theta_i^d}([p_{\text{op}}, d_1, y]^{\oplus})$ \Comment{Upd. $d_2$}
\State $b_{\text{op}} \gets \mathcal P^*(d_2)$ \Comment{Backproject $d_2$}
\State $\text{\textit{LW}} \gets \mathcal P^* (\mathcal P (x_{i-1}) - y)$ \Comment{Landweber term}
\State $p_2 \gets p_2 + \Lambda_{\theta_i^p}([b_{\text{op}}, p_1, x_{i-1}, \text{\textit{LW}}, V]^{\oplus})$ \Comment{Upd. $p_2$}
\State $h \gets [d_1, d_2]^{\oplus}$ \Comment{Combine new dual}
\State $f \gets [p_1, p_2]^{\oplus}$ \Comment{Combine new primal}
\State $x_i \gets x_{i-1} + \text{\texttt{Conv3d}}(f, \theta_i^o)$ \Comment{Update $x_{i-1}$}
\State $I \gets I + [x_i]$ \Comment{Append $x_i$ to output list}
\State $h \gets \text{\texttt{Perm}}(h, {\theta_i^m})$ \Comment{Permute dual channels w. $\theta_i^m$}
\State $f \gets \text{\texttt{Perm}}(f, {\theta_i^m})$ \Comment{Permute prim. channels w. $\theta_i^m$}
\EndFor
\State \textbf{return} $I$
\EndProcedure
\end{algorithmic}
\caption{LIRE.}
\label{alg:liremain}
\end{algorithm}
\begin{algorithm}
\centering
\begin{algorithmic}[1]
\Procedure{\texttt{loss}}{$x, y, V_f, V_s$}
\State $L_1 \gets \| x - y \|_{V_f, 1}$ \Comment{$L^1$ loss in full FOV}
\State $L_2 \gets \| x - y \|_{V_s, 1}$ \Comment{$L^1$ loss in part. FOV}
\State $S_1 \gets 1.0 - \text{\texttt{SSIM}}_{V_f}(x, y)$ \Comment{1-SSIM, full FOV}
\State $S_2 \gets 1.0 - \text{\texttt{SSIM}}_{V_s}(x, y)$ \Comment{1-SSIM, part. FOV}
\State \textbf{return} $L_1 + \alpha_1 S_1 + \alpha_2 (L_2 + \alpha_1 S_2)$
\EndProcedure
\For{$j \gets 1, \dots, N_{\text{iter}}$}
\State $x \sim \mathcal{D}_{\text{train}}$\Comment{Sample train volume}
\State $\Delta \sim \mathcal N(0, 100) \in \mathbb R^3$ \Comment{Sample offset w.r.t. scan center}
\State $\delta \gets x.\text{\texttt{spacing}}$ \Comment{Get spacing of volume $x$}
\State $\mathcal P, \mathcal P^* \gets \mathcal P_{\Delta, \delta}, \mathcal P_{\Delta, \delta}^*$ \Comment{Define projector, backprojector}
\State $\overline{\mathcal P}, \overline{\mathcal P}^* \gets \mathcal P / \| \mathcal P\|, \mathcal P^* / \| \mathcal P\|$ \Comment{Normalize operators}
\State $y \gets \text{\texttt{Poisson}}(I_0 \cdot e^{-\mathcal P(x)})$ \Comment{Noisy projections}
\State $\overline y \gets -\text{\texttt{ln}}(y) / \| \mathcal P\| $\Comment{Normalized log-transform}
\State $V_f \gets \text{\texttt{FullFOV}}(\mathcal P)$ \Comment{Compute full FOV}
\State $V_p \gets \text{\texttt{PartialFOV}}(\mathcal P)$ \Comment{Compute partial FOV}
\State $V_s \gets V_p \setminus V_f$ \Comment{Incomplete FOV mask}
\State $I \gets \text{\texttt{RECONSTRUCT}}(\overline y, \overline{\mathcal P}, \overline{\mathcal P}^*, \theta, V_f)$ \Comment{Reconstruct}
\State $\text{loss} \gets 0$ \Comment{Initialize loss tensor}
\For{$z \gets I[1], \dots, I[8]$} \Comment{Loop over iterates}
\State $\text{loss} \gets \text{loss} + \text{\texttt{LOSS}}(x, z, V_f, V_s)$ \Comment{Increment loss}
\EndFor
\State $\text{compute gradients of loss w.r.t. $\theta$, update $\theta$}$
\EndFor
\end{algorithmic}
\caption{Training of LIRE.}
\label{alg:liretrain}
\end{algorithm}
LIRE is a data-driven algorithm, where a learned iterative scheme is unrolled and the parameters of this scheme are jointly optimized to minimize expected reconstruction loss over the training dataset. The choice of a particular scheme will, naturally, affect both the performance and the required resources such as GPU memory to train such a scheme.
When designing LIRE, we took inspiration from the Learned Primal-Dual (LPD) reconstruction algorithm \citep{Adler2017b}. The main disadvantage of LPD, however, is that it does not scale well to 3D reconstruction problems such as Cone Beam CT. We drastically reduce the memory footprint of the LIRE algorithm compared to vanilla LPD and at the same time improve its expressive power by using more complex primal and dual blocks. In order to reduce the memory footprint, we designed LIRE around two main principles: reversibility for the network as a whole and patch-wise computations for local operations. We briefly describe these two concepts below. Furthermore, for the additional proof-of-concept experiment on high resolution data, we implemented a CPU-GPU memory streaming mechanism, which would keep entire primal/dual vectors in CPU memory and only send the channels required for computing the primal/dual updates to the GPU.
Reversible residual neural networks were originally introduced in \citep{revnet2017}. In a reversible residual network layer, the input tensor $z$ is split into tensors $z_1, z_2$ along the channel dimension. The output $w$ of the layer is then defined by combining $z_1$ and $z_2 + \Lambda(z_1)$ back along the channel dimension, where $\Lambda$ is a Convolutional Neural Network. Since the input $z$ can be uniquely restored\footnote{Up to numerical errors, which are typically negligible in practice for small neural networks.} from the output $w$, it is not essential to store intermediate features of $\Lambda$ prior to the actual computation of gradients for $\Lambda$. The main observation behind patch-wise computations is that for neural networks which are composed solely of local operators such as valid convolutions, activation functions, upsampling and downsampling layers, it is possible to partition the input tensor $z$ into patches $z_i, i=1,\dots,k$ along the spatial dimensions and compute the output patch-by-patch. In general, for every $i$ it is also necessary to enlarge the patch $z_i$ by adding all tensor elements $\partial z_i \subset z$ within a certain distance of $z_i$ in order to account for the receptive field of the network when computing features for locations inside $z_i$. For the special case of image classification CNNs that do not involve U-net type architectures or non-local operations, such a patch-wise computation strategy was used in \citep{pinckaers2019}.
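As a minimal illustration of the reversible computation rule described above, consider the following sketch; the module \texttt{body} stands in for the residual CNN $\Lambda$ and is assumed to preserve the tensor shape, and the actual LIRE scheme additionally interleaves projection operators between such blocks.
\begin{verbatim}
import torch

class ReversibleBlock(torch.nn.Module):
    # w = concat(z1, z2 + body(z1)): the input is recoverable from
    # the output, so intermediate activations of `body` need not be
    # kept in memory until the backward pass.
    def __init__(self, body):
        super().__init__()
        self.body = body

    def forward(self, z):
        z1, z2 = torch.chunk(z, 2, dim=1)   # split channels
        return torch.cat([z1, z2 + self.body(z1)], dim=1)

    def inverse(self, w):
        w1, w2 = torch.chunk(w, 2, dim=1)
        return torch.cat([w1, w2 - self.body(w1)], dim=1)
\end{verbatim}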
The reconstruction algorithm is given by the function \texttt{RECONSTRUCT}($y, \mathcal P, \mathcal P^*, \theta, V$) in Algorithm~\ref{alg:liremain}. Here $y$ is the log-transformed and scaled projection data, $\mathcal P$ and $\mathcal P^*$ are the normalized projection and backprojection operators respectively, $\theta$ is a parameter vector and $V$ is an auxiliary single-channel image space tensor with the same dimensions as the reconstructed volume which we will define later in Section~\ref{ss.liretrain}. Parameters $\theta$ are partitioned into 4 parameter groups, where $\{ \theta_i^p \}_{i=1}^8$ are the primal block parameters, $\{ \theta_i^d \}_{i=1}^8$ are the dual block parameters, $\{ \theta_i^o \}_{i=1}^8$ are the output convolution parameters and $\{ \theta_i^m \}_{i=1}^8$ are the permutation parameters.
We clarify the notation first. We write $[z_1, z_2, \dots, z_k]^{\oplus}$ to denote the channel-wise concatenation of tensors $z_1, z_2, \dots, z_k$ which are assumed to have the same spatial and batch dimensions. Function $\text{\texttt{Splt}}(z)$ splits tensor $z$ with $2n$ channels into tensors $z_1, z_2$ which get the first $n$ feature maps of $z$ and the last $n$ feature maps of $z$ respectively. Function $\text{\texttt{Perm}}(z, {\theta_i^m})$ permutes tensor $z$ with $n$ channels along the channel dimension with a permutation $\theta_i^m \in \text{\texttt{Sym}}(n)$.
In LIRE we use 8 primal/dual iterations (8 primal and 8 dual blocks) with both primal and dual latent vectors having 8 channels. Backprojected data without FBP filtering is used to initialize the reconstruction $x_0$. The initial primal vector is defined by stacking 8 copies of $x_0$, and the initial dual vector is defined by stacking 8 copies of $y$. At the beginning of each iteration $i = 1,\dots,8$, we split the primal and the dual latent vectors along the channel dimension. First we update the dual latent vector in Line 10 of Alg.~\ref{alg:liremain} using dual block $\Gamma_{\theta_i^d}$ comprised of 3 layers of $3 \times 3 \times 3$ convolutions with 96, 96 and 4 filters respectively and LeakyReLU activation after the first and the second convolution layer.
To update the primal block, we compute the Landweber term in Line 12 of Alg.~\ref{alg:liremain}, which plays a similar role as the gradient log-likelihood term in Recurrent Inference Machines in \citep{lonning2019}. We update the primal latent vector in Line 13 of Alg.~\ref{alg:liremain} using primal block $\Lambda_{\theta_i^p}$. Primal block $\Lambda_{\theta_i^p}$ is a U-net with a single downsampling layer, $3 \times 3 \times 3$ valid convolutions with 96 filters in the first double-convolution block, 192 filters in the bottleneck and LeakyReLU activation after all but the last convolution layer. We use average pooling with $2 \times 2 \times 2$ kernel in the encoder and nearest upsampling in the decoder layer.
Primal and dual updates are computed patch-wise, which is possible thanks to the locality of $\Gamma_{\theta_i^d}$ and $\Lambda_{\theta_i^p}$; during the backward pass, weight gradients obtained from different patches are summed to obtain the global weight gradients. New primal and dual vectors are combined in Lines 14-15. Reconstruction $x_{i-1}$ is updated in Line 16, where $\text{\texttt{Conv3d}}$ is a $1 \times 1 \times 1$ convolution with parameters $\theta_i^o$, and we append the new reconstruction $x_i$ to the output list in Line 17. Finally, we permute the channels of primal and dual latent vectors using the same permutation $\theta_i^m$ in Lines 18-19. For every $i$, the permutation $\theta_i^m$ is some fixed permutation of $[1,2,\dots,8]$ which is randomly initialized during model initialization and stored as a model parameter; we require that $\theta_i^m$ mixes the first and the second half of $[1,2,\dots,8]$.
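The patch-wise evaluation can be illustrated with the following simplified sketch, which tiles the volume along a single spatial axis and assumes a shape-preserving local network; the real implementation additionally handles the bookkeeping for valid convolutions and tiles along all three axes.
\begin{verbatim}
import torch

def apply_slabwise(net, x, slab=128, margin=8):
    # x: (N, C, D, H, W). Each slab is enlarged by a halo covering
    # the receptive field of `net`; the halo is cropped from the
    # output before stitching. Under autograd, weight gradients
    # from all slabs accumulate into the same parameters, yielding
    # the global weight gradients.
    D = x.shape[2]
    out = []
    for a in range(0, D, slab):
        lo, hi = max(a - margin, 0), min(a + slab + margin, D)
        y = net(x[:, :, lo:hi])
        n = min(slab, D - a)
        out.append(y[:, :, a - lo:a - lo + n])
    return torch.cat(out, dim=2)
\end{verbatim}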
The algorithm was implemented as a `black box' C++/CUDA extension for PyTorch \citep{pytorch} in order to maximize speed and memory efficiency. Firstly, we implemented the projection and the backprojection operators for CBCT geometry as a CUDA extension for PyTorch. Since both operators are linear and the backprojection operator is the Hermitian adjoint of the projection operator, this is sufficient to enable gradient backpropagation. In the projector code, we followed the same approach as ASTRA Toolbox \citep{astra2016} and PYRO-NN \citep{syben2019} by using texture memory and trilinear interpolation when sampling attenuation values along the source-detector rays. Adjointness of the operators was tested by checking the definition of Hermitian adjoint
\begin{equation*}
\langle \mathcal P x, y \rangle = \langle x, \mathcal P^* y\rangle
\end{equation*}
for random positive test functions (=tensors) $x, y$. The LIRE network itself was then built as a C++/CUDA extension for PyTorch by implementing \textit{both} forward and backward passes since automatic differentiation is not available inside C++/CUDA extensions. PyTorch automatic differentiation was still used to compute the gradients of the loss for the output tensors $x_1, \dots, x_8 \in I$, but the subsequent computation of the gradients of the parameters $\theta$ was performed by LIRE in the backward pass. Correctness of the gradient computations for LIRE parameters was verified by computing numerical directional derivatives for random directions inside the parameter space and comparing this with the analytical directional derivatives computed using gradients from LIRE.
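In code, this dot-product test takes roughly the following form, where \texttt{P} and \texttt{P\_adj} stand in for the CUDA projector and backprojector:
\begin{verbatim}
import torch

def adjointness_gap(P, P_adj, vol_shape, proj_shape, seed=0):
    # <P x, y> should equal <x, P* y> up to floating point error.
    g = torch.Generator().manual_seed(seed)
    x = torch.rand(vol_shape, generator=g)   # positive test volume
    y = torch.rand(proj_shape, generator=g)  # positive projections
    lhs = torch.sum(P(x) * y)
    rhs = torch.sum(x * P_adj(y))
    return torch.abs(lhs - rhs) / torch.abs(lhs)
\end{verbatim}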
\subsection{LIRE training details}
\label{ss.liretrain}
\begin{table*}[t]
\caption{Test results on thorax CT and head \& neck CT at 2 mm voxel pitch (best result in bold)}
\label{tab:comparison-lung}
\centering
\begin{tabular}{|l|l l|l l|l|l|l|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Thorax CT} & \multicolumn{2}{c|}{H\&N CT} & \multirow{2}{*}{Weight count} & \multirow{2}{*}{Optimizer steps} & \multirow{2}{*}{Batch size}\\
\cline{2-5}
& PSNR & SSIM & PSNR & SSIM & & & \\
\hline
FBP (small FOV) & $15.289$ & $0.572$ & $27.647$ & $0.721$ & - & - & -\\
TV (small FOV) & $27.370$ & $0.771$ & $33.903$ & $0.866$ & - & - & -\\
U-Net (small FOV) & $32.405$ & $0.803$ & $36.433$ & $0.879$ & 23341k & 284640 & 1\\
LIRE (small FOV) & $\mathbf{33.345}$ & $\mathbf{0.885}$ & $\mathbf{37.941}$ & $\mathbf{0.971}$ & 24497k & 6798 & 8\\
\hline
FBP (large FOV) & $20.051$ & $0.662$ & $22.396$ & $0.711$ & - & - & -\\
TV (large FOV) & $29.237$ & $0.793$ & $37.862$ & $0.945$ & - & - & -\\
U-Net (large FOV) & $34.297$ & $0.849$ & $37.064$ & $0.885$ & 23341k & 287040 & 1\\
LIRE (large FOV) & $\mathbf{34.432}$ & $\mathbf{0.903}$ & $\mathbf{40.113}$ & $\mathbf{0.982}$ & 24497k & 6600 & 8\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{Test results on thorax CT at 1 mm voxel pitch (best result in bold)}
\label{tab:comparison-lung-high}
\centering
\begin{tabular}{|l|l l|l|l|l|}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Thorax CT} & \multirow{2}{*}{Weight count} & \multirow{2}{*}{Optimizer steps} & \multirow{2}{*}{Batch size}\\
\cline{2-3}
& PSNR & SSIM & & & \\
\hline
U-Net (large FOV) & $33.773$ & $0.848$ & 23341k & 62400 (cold start) & 1\\
LIRE (large FOV) & $\mathbf{35.784}$ & $\mathbf{0.881}$ & 24497k & 1560 (warm start) & 2\\
\hline
\end{tabular}
\end{table*}
We provide the training procedure for LIRE in Algorithm~\ref{alg:liretrain}. The training is supervised, and the training set of CT volumes is denoted by $\mathcal D_{\text{train}}$. We elaborate on the training procedure below.
A CT volume is repeatedly sampled from the training dataset in Line 9 of Alg.~\ref{alg:liretrain}. During the sampling, augmentations that flip the patient left-right and top-bottom are randomly applied, both with probability $50 \%$. We sample a random offset for the rotation center w.r.t. the center of the CT volume from an isotropic Gaussian distribution with $0$ mean and a standard deviation of $100$ mm in Line 10. Choosing a random offset can be viewed as an additional type of augmentation; furthermore, in practice the isocenter in radiotherapy will be located close to a tumor. We define projection and backprojection operators for the CBCT projection geometry with given volume spacing and center offset in Line 12, and in Line 13 we compute normalized versions of these operators. The operator norm is estimated numerically using the power method with three iterations \citep{boyd1974}. Synthetic noisy projection data is computed in Line 14 (see Eq. \ref{eq.noisemodel}). This noisy projection data is log-transformed and scaled in Line 15. In general, for a realistic CBCT geometry the field of view does not necessarily contain the scanned object completely. When comparing reconstruction metrics it is also important to compute these metrics inside an appropriately defined field of view only, since having a large part of the volume set to $0$ outside the corresponding field of view would yield over-optimistic reconstruction metrics. We define the full field of view tensor $V_f$ and the partial field of view tensor $V_p$ in Lines 16 and 17 respectively; both of these are scalar tensors having the same dimensions as the volume that we want to reconstruct. For the projection geometry with the small FOV setting, the full field of view tensor is constructed as
\begin{equation*}
V_f(p) = \begin{cases}
1 & p \textrm{ is seen from all projection angles} \\
0 & \textrm{otherwise,}
\end{cases}
\end{equation*}
while for the projection geometry with large FOV setting the full field of view tensor is constructed as
\begin{equation*}
V_f(p) = \begin{cases}
1 & p \textrm{ is seen from all projection angles} \\
0.5 & p \textrm{ is seen from half of the proj. angles} \\
0 & \textrm{otherwise.}
\end{cases}
\end{equation*}
We chose to use different values ($1.0$ and $0.5$) above to mark the voxels seen from all the projection angles and the voxels which are seen from only half of the angles, however, we expect that the exact numerical values used in these masks are not important. For both small and large field of view settings, the partial field of view is defined as
\begin{equation*}
V_p(p) = \begin{cases}
1 & p \textrm{ is seen from at least one angle} \\
0 & \textrm{otherwise.}
\end{cases}
\end{equation*}
In particular, this definition of $V_p$ implies that in the central axial plane all voxels are marked as `partially visible'. In Line 18, we define $V_s$ as the tensor which equals $1$ on the set of all voxels $p$ s.t. $V_p(p) > 0, V_f(p) = 0$ and zero elsewhere. In Line 19, we call the main reconstruction procedure, providing log-transformed normalized projection data, normalized versions of the projection and backprojection operators, the collection of weights $\theta$ and the auxiliary tensor $V_f$. $V_f$ helps the network to deal with the non-homogeneous nature of the reconstruction artifacts.
The reconstruction algorithm returns a list $I = [z_1, z_2, \dots, z_8]$ of reconstructions, which are obtained after performing $1, 2, \dots, 8$ reconstruction steps respectively. We sum the reconstruction losses over all $z \in I$ in Line 22. Loss computation takes place in the $\texttt{LOSS}$ function in Alg.~\ref{alg:liretrain}. We sum losses over the full field of view region, where $V_f > 0$, and the partial field of view region, where $V_s > 0$. We compute the loss for partial field of view to ensure that the network can provide at least an approximate reconstruction in this region. A linear combination of $L^1$ loss and Structural Similarity Loss is computed for both regions. We used $\alpha_1 = 0.1$ for both field of view settings. $\alpha_2$ was set to $0.1$ initially and then reduced to $0.01$ after first learning rate decay step.
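A sketch of this loss computation is given below. The normalization by the mask size and the signature of the masked SSIM routine are our assumptions rather than details taken from the implementation.
\begin{verbatim}
import torch

def lire_loss(x, target, V_f, V_s, masked_ssim, a1=0.1, a2=0.1):
    # L^1 and (1 - SSIM) over the full-FOV mask V_f and the
    # incomplete-FOV mask V_s, combined as in the LOSS procedure.
    m_f, m_s = (V_f > 0).float(), (V_s > 0).float()
    l1_f = (torch.abs(x - target) * m_f).sum() / m_f.sum()
    l1_s = (torch.abs(x - target) * m_s).sum() / m_s.sum()
    s_f = 1.0 - masked_ssim(x, target, m_f)
    s_s = 1.0 - masked_ssim(x, target, m_s)
    return l1_f + a1 * s_f + a2 * (l1_s + a1 * s_s)
\end{verbatim}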
We trained two versions of LIRE for the main experiments, one for the small FOV setting and one for the large FOV setting. LIRE was trained to reconstruct complete volumes. For the internal patch-based computations inside LIRE we set the patch size to $128 \times 128 \times 128$, resulting in roughly $30$ GB VRAM usage per single volume. Reducing the patch size to $32 \times 32 \times 32$ decreased the usage to roughly $20$ GB VRAM per single volume. Eight NVIDIA Quadro RTX 8000 GPUs with 48 GB VRAM were used for training LIRE in distributed data parallel mode. We used the Adam optimizer \citep{kingma2014} with an initial learning rate of $0.0001$ and a plateau scheduler with linear warm-up and 10 epoch patience. At the end of each epoch the models were evaluated, and the best model was picked for testing. Training was stopped when we did not observe improvement for more than 15 epochs. For the additional proof-of-concept study on high-resolution data, we performed a warm start from LIRE trained on downsampled lung CT scans with the large FOV setting. Two NVIDIA A100 Tensor Core GPUs with 80 GB VRAM inside a virtual machine on NVIDIA TryEGX Platform were used. We employed the Adam optimizer with an initial learning rate of $0.000025$ and linear warm-up; the model was fine-tuned for 12 epochs without any learning rate decay. $L^1$ reconstruction loss was used during fine-tuning, due to higher memory costs associated with SSIM loss. LIRE evaluation was performed in the full field of view region, where $V_f > 0$, using PSNR and SSIM metrics.
During the inference on the downsampled data with 2 mm voxel pitch and $256 \times 256$ detector panel, it takes LIRE approximately 104 seconds to reconstruct a single volume on a single Quadro RTX 8000 for the small FOV setting and approximately 115 seconds to reconstruct a volume for the large FOV setting. On high-resolution data with 1 mm voxel pitch, $512 \times 512$ detector panel and large FOV setting, it takes LIRE approximately 15 minutes to reconstruct a single volume on Quadro RTX 8000, and 6 minutes if A100 is used instead. Faster inference on A100 can be attributed to higher memory bandwidth and other hardware architectural improvements.
\section{Results}
\subsection{Main experiments}
\label{s.results}
In Table \ref{tab:comparison-lung} we summarize the results for the test set of thorax CT scans and the out-of-distribution test set of head \& neck CT scans. We provide thorax CT axial slices for small FOV in Figure \ref{fig:axial-small}, thorax CT coronal slices for small FOV in Figure \ref{fig:cor-small}, thorax CT axial slices for large FOV in Figure \ref{fig:axial-large}, thorax CT coronal slices for large FOV in Figure \ref{fig:cor-large}, head \& neck CT axial slices for large FOV in Figure \ref{fig:axial-large-ext} and head \& neck CT coronal slices for large FOV in Figure \ref{fig:cor-large-ext}. The slices were taken from randomly chosen test volumes. We see that our method outperforms the classical and deep learning baselines in all cases, including the out-of-distribution test set. Compared to U-net+FBP, the most notable improvements are in SSIM, ranging from $+0.05$ to $+0.08$ on thorax CT data, and in the much larger field-of-view, since LIRE is not constrained by the data sufficiency region of FBP. PSNR improvement over the U-net is $+0.9$ dB for small FOV and $+0.13$ dB for large FOV. On the out-of-distribution test set, we observe better generalization of LIRE compared to U-net+FBP in the form of an increased PSNR and SSIM gap between LIRE and U-net, even though LIRE has a slightly higher parameter count, suggesting that primal-dual schemes with shallow U-nets generalize better than a single deep U-net. Visual inspection of thorax CT slices shows better visibility of lung fissures in LIRE reconstructions compared to the baselines. In head \& neck CT slices, we observe that U-net loses spatial resolution and introduces a strong `shadow' in the neck region. LIRE yields the best reconstructions on the head \& neck CT set due to better handling of photon noise compared to the iterative method, but in the low-noise neck region we observe that the methods are quite close in visual image quality.
Additionally, we measured the performance of LIRE and PDHG on the test set of thorax CT data for the small FOV setting in the region where $V_f = 0, V_p = 1$, consisting of the voxels in the partial field of view which do not belong to the full field of view. This way we obtained mean PSNR of $16.938$ and mean SSIM of $0.233$ for PDHG, whereas for LIRE mean PSNR was $28.156$ and mean SSIM was $0.795$.
The results of the proof-of-concept high-resolution experiment for the test set of thorax CT scans are summarized in Table \ref{tab:comparison-lung-high}. We only provide a comparison with the U-net+FBP, since it is the best performing baseline method on the downsampled data. We provide high-resolution thorax CT axial slices for large FOV in Figure \ref{fig:axial-large-hr} and high-resolution thorax CT coronal slices for large FOV in Figure \ref{fig:coronal-large-hr}. In these proof-of-concept experiments, LIRE still outperforms U-net+FBP. Similar to our experiments on downsampled data, visual inspection of the high-resolution thorax CT slices shows better visibility of lung fissures in LIRE reconstructions compared to the U-net.
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_gt_axial_95_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_lire_axial_95_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_unet_axial_95_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_fbp_axial_95_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_pdhg_axial_95_b.pdf}}%
\caption{(a) Axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/small FOV, (c) U-net/small FOV, (d) FBP/small FOV, and (e) PDHG/small FOV.}
\label{fig:axial-small}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_gt_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_lire_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_unet_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_fbp_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600209177fbc9_pdhg_coronal_129_b.pdf}}%
\caption{(a) Coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/small FOV, (c) U-net/small FOV, (d) FBP/small FOV, and (e) PDHG/small FOV.}
\label{fig:cor-small}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_gt_axial_97_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_lire_axial_97_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_unet_axial_97_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_fbp_axial_97_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_pdhg_axial_97_b.pdf}}%
\caption{(a) Axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:axial-large}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_gt_coronal_165_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_lire_coronal_165_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_unet_coronal_165_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_fbp_coronal_165_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{1600247942cf6c_pdhg_coronal_165_b.pdf}}%
\caption{(a) Coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:cor-large}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_gt_axial_117_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_lire_axial_117_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_unet_axial_117_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_fbp_axial_117_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_pdhg_axial_117_b.pdf}}%
\caption{(a) Axial slice of Head \& Neck CT with HU range=(-1000, 1000), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:axial-large-ext}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_gt_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_lire_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_unet_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_fbp_coronal_129_b.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.3\linewidth]{141_pdhg_coronal_129_b.pdf}}%
\caption{(a) Coronal slice of Head \& Neck CT with HU range=(-1000, 1000), (b) LIRE/large FOV, (c) U-net/large FOV, (d) FBP/large FOV, and (e) PDHG/large FOV.}
\label{fig:cor-large-ext}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_gt_axial_176_b_hr.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_lire_axial_176_b_hr.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_unet_axial_176_b_hr.pdf}}%
\caption{(a) High resolution axial slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV.}
\label{fig:axial-large-hr}
\end{figure*}
\clearpage
\begin{figure*}[!ht]
\centering
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_gt_coronal_257_b_hr.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_lire_coronal_257_b_hr.pdf}}%
\hfil
\subfloat[]{\includegraphics[width=0.49\linewidth]{1600247942cf6c_unet_coronal_257_b_hr.pdf}}%
\caption{(a) High resolution coronal slice of Thorax CT with HU range=(-1000, 800), (b) LIRE/large FOV, (c) U-net/large FOV.}
\label{fig:coronal-large-hr}
\end{figure*}
\clearpage
\section{Discussion}\label{sec:discussion}
\label{s.discussion}
We have presented LIRE, a practical algorithm for deep learning-based CBCT reconstruction with clinically-relevant resolution and projection count using a learned primal-dual scheme that can be trained end-to-end on current consumer GPUs with 24 GB VRAM. We have shown that our method outperforms the classical and deep learning baselines on the test set of thorax CT scans and the out-of-distribution test set of head \& neck CT scans, where we additionally observe better generalization of our method compared to the U-net baseline. In particular, the photon noise in highly attenuated areas is handled very well, which indicates that LIRE can potentially help to lower the dose of CBCT scans. For the small field of view setting, our method is able to reconstruct certain anatomy details outside the full field of view much better than the iterative baseline, which can be interesting for applications in radiotherapy, e.g., by allowing for a better registration of the planning CT scan to the CBCT reconstruction.
This work has certain limitations. Firstly, we do not take scatter artifacts into account. Feasibility of supervised scatter correction with deep learning was demonstrated in e.g. \citep{dse_spie}, and such a method can in principle be combined with our learned primal-dual scheme and trained end-to-end. Secondly, we do not correct for possible motion artifacts in thorax CBCT due to breathing or heartbeat. Thirdly, our metrics do not directly imply suitability of our method for radiotherapy planning; a proper Monte Carlo dose simulation would be required to test that.
\section{Acknowledgements}
\label{s.acks}
We would like to thank NVIDIA Corporation for providing us with the access to A100 virtual machine instances and for supporting us throughout these experiments. In particular, we would like to thank Joe Cullen from NVIDIA for enabling this collaboration.
\bibliographystyle{abbrvnat}
|
1,108,101,562,924 | arxiv | \section{Introduction}
\label{sec:intro}
Recently, the context dependent (CD) deep neural network (DNN) hidden Markov model (HMM) (CD-DNN-HMM) has become the dominant framework for acoustic modeling in speech recognition (e.g. \cite{AmDBN}\cite{DnnForAm}\cite{CdDnn1}\cite{CdDnn2}). However, given that speech is an inherently dynamic process, some researchers have pointed out that recurrent neural networks (RNNs) can be considered alternative models for acoustic modeling \cite{SRWDRNN}. The cyclic connections in RNNs exploit a self-learnt amount of temporal context, which makes RNNs better suited for sequence modeling tasks. Unfortunately, in practice, conventional RNNs are hard to train properly due to the vanishing gradient and exploding gradient problems described in \cite{LRNN}. To address these problems, \cite{LSTM1} proposed an elegant RNN architecture called long short-term memory (LSTM).
LSTMs and conventional RNNs have been successfully used for many sequence labeling and sequence prediction tasks. In language modeling, RNNs were used as generative models over word sequences,
and remarkable improvements were achieved \cite{RNNLM} over the standard n-gram models. For handwriting recognition, LSTM networks have been applied for a long time \cite{handwriting}, where bidirectional LSTM (BLSTM) networks trained with connectionist temporal classification (CTC) \cite{CTC} have been demonstrated to perform better than HMM-based systems. In speech synthesis, the BLSTM network has also been applied and a notable improvement was obtained \cite{LSTM.TTS}. For language identification, an LSTM-based approach was proposed in \cite{LSTM.LID} and compared with i-vector and DNN systems, and better performance was achieved. Recently, LSTM networks have also been introduced for the phoneme recognition task \cite{SRWDRNN}, the robust speech recognition task \cite{RobustASR.LSTM}, and large vocabulary speech recognition tasks \cite{HSRWDBLSTM}\cite{LSTMRNNAM}\cite{LSTMLVSP}, and have shown state-of-the-art performance. Subsequently, the sequence discriminative training of LSTM networks was investigated in \cite{SeqDis.LSTM}, and a significant gain was obtained.
In research on acoustic modeling, depth can make feed-forward neural networks more expressive. LSTMs and conventional RNNs are inherently deep in time, since they can be expressed as a composition of multiple nonlinear layers when unfolded in time. This paper explores the depth of LSTMs, defined here as the depth in space. Building on earlier research on constructing deep RNNs \cite{HTCDRNN}, in this work, possible approaches to extending LSTM networks into deep ones are explored, and various deep LSTM networks are empirically evaluated and compared on a large vocabulary Mandarin Chinese conversational telephone speech recognition task. Although deep LSTM networks have already attracted much attention, this paper summarizes the approaches to constructing deep LSTM networks from different perspectives and suggests alternative architectures that can yield comparable performance.
\section{Constructing LSTM based deep RNNs}
\subsection{The conventional LSTM architecture}
Given an input sequence $x=(x_1,x_2,\ldots,x_T)$, a conventional RNN computes the hidden vector sequence $h=(h_1,h_2,\ldots,h_T)$ and output vector sequence $y=(y_1,y_2,\ldots, y_T)$ from $t=1$ to $T$ as follows:
\begin{align}\label{RNN}
&h_t=\mathcal{H}(W_{xh}x_{t}+W_{hh}h_{t-1}+b_{h})\\
&y_t=W_{hy}h_{t}+b_{y}
\end{align}
where $W$ denotes weight matrices, $b$ denotes bias vectors and $\mathcal{H}(\cdot)$ is the recurrent hidden layer function.
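For concreteness, this recurrence can be sketched in a few lines of NumPy; $\mathcal{H}$ is taken to be $\tanh$ here, which is a common choice for conventional RNNs.
\begin{verbatim}
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h, W_hy, b_y):
    # xs: list of input vectors x_t; returns the outputs y_t.
    h = np.zeros(W_hh.shape[0])
    ys = []
    for x_t in xs:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)   # Eq. (1)
        ys.append(W_hy @ h + b_y)                  # Eq. (2)
    return ys
\end{verbatim}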
\begin{figure}[htb]
\centerline{\includegraphics[width=65mm]{Figure0}}
\caption{\label{Figure0} {The architecture of a LSTM network with one memory block, where green lines are time-delayed connections.}}
\label{spprod}
\end{figure}
In the LSTM architecture, the recurrent hidden layer consists of a set of recurrently connected subnets known as ``memory blocks''. Each memory block contains one or more self-connected memory cells and three multiplicative gates to control the flow of information. In each LSTM cell, the flow of information into and out of the cell is guarded by the learned input and output gates.
Later, in order to provide a way for the cells to reset themselves, the forget gate was added \cite{LSTM.Forget.Gate}. In addition, the modern LSTM architecture contains peephole weights connecting the gates to the memory cell, which improve the LSTM's ability to learn tasks that require precise timing and counting of the internal states \cite{LSTM.Peephole.Weights}. As illustrated in Fig.~\ref{Figure0}, the recurrent hidden layer function $\mathcal{H}$ for this version of LSTM networks is implemented as following:
\begin{align}\label{LSTM}
&i_t=\sigma(W_{xi}x_t+W_{hi}h_{t-1}+W_{ci}c_{t-1}+b_{i})\\
&f_t=\sigma(W_{xf}x_t+W_{hf}h_{t-1}+W_{cf}c_{t-1}+b_{f})\\
&a_t=\tau(W_{xc}x_t+W_{hc}h_{t-1}+b_{c})\\
&c_t=f_{t}c_{t-1}+i_{t}a_{t}\\
&o_t=\sigma(W_{xo}x_t+W_{ho}h_{t-1}+W_{co}c_{t}+b_{o})\\
&h_t=o_{t}\theta(c_t)
\end{align}
where $\sigma$ is the logistic sigmoid function, and $i$, $f$, $o$, $a$ and $c$ are the input gate, forget gate, output gate, cell input activation, and cell state vectors respectively, all of which are the same size as the hidden vector $h$. $W_{ci}$, $W_{cf}$ and $W_{co}$ are diagonal weight matrices for the peephole connections. $\tau$ and $\theta$ are the cell input and cell output non-linear activation functions, in this paper $\tanh$.
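As an illustrative sketch (not the actual training code), a single time step of the recurrence above can be written in NumPy; since the peephole matrices are diagonal, they are represented as vectors acting elementwise:
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    # One LSTM step; p is a dict of weights/biases. The peephole
    # weights w_ci, w_cf, w_co are vectors (diagonal matrices).
    i = sigmoid(p['W_xi'] @ x_t + p['W_hi'] @ h_prev
                + p['w_ci'] * c_prev + p['b_i'])        # input gate
    f = sigmoid(p['W_xf'] @ x_t + p['W_hf'] @ h_prev
                + p['w_cf'] * c_prev + p['b_f'])        # forget gate
    a = np.tanh(p['W_xc'] @ x_t + p['W_hc'] @ h_prev
                + p['b_c'])                             # cell input
    c = f * c_prev + i * a                              # cell state
    o = sigmoid(p['W_xo'] @ x_t + p['W_ho'] @ h_prev
                + p['w_co'] * c + p['b_o'])             # output gate
    h = o * np.tanh(c)                                  # block output
    return h, c
\end{verbatim}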
\subsection{Deep LSTM networks}
A number of theoretical results support that a deep, hierarchical model can be more efficient at representing some functions than a shallow one \cite{LearningDeepArchAI}. This paper is focused on constructing deep LSTM networks.
In \cite{HTCDRNN}, the architecture of conventional RNNs is carefully analyzed, showing that an RNN can be deepened from three points: (1) the input-to-hidden function, (2) the hidden-to-hidden transition and (3) the hidden-to-output function. In this paper, starting from these three points and from the stacked LSTMs, several novel architectures that extend LSTM networks to deep ones are introduced as follows. For convenience, a simplified illustration of the LSTM is first shown in Fig.~\ref{Figure1}(a).
\begin{figure}[htb]
\centerline{\includegraphics[width=75mm]{Figure1}}
\caption{\label{Figure1} {Illustrations of different strategies for constructing LSTM based deep RNNs. (a) a conventional LSTM; (b) a LSTM with input projection; (c) a LSTM with output projection; (d) a LSTM with deep input-to-hidden function; (e) a LSTM with deep hidden-to-output function; (f) stacked LSTMs.}}
\end{figure}
\subsubsection{Deep hidden-to-hidden transition}
In \cite{HTCDRNN}, an RNN with deep transition is discussed as a way to increase the depth of the hidden-to-hidden transition. From this idea, two architectures can be obtained, as illustrated in Fig.~\ref{Figure1}(b) and Fig.~\ref{Figure1}(c). In detail, in the architecture shown in Fig.~\ref{Figure1}(b), a multiple-layer transformation is added before the cell input activation,
which means that the calculation of $a_t$ in equation (5) is changed to:
\begin{align}\label{LSTM-IP}
&a_{0,t}=\phi_{0}(W_{0,x}x_t+W_{0,h}h_{t-1}+b_{0})\\
&a_t=\phi_{L}(W_{L}\phi_{L-1}(\ldots\phi_{1}(W_{1}a_{0,t}+b_{1}))+b_{L})
\end{align}
where $\phi$ is the activation function. In this paper, this architecture is called the \emph{LSTM with input projection layer} (LSTM-IP for short).
Another architecture, shown in Fig.~\ref{Figure1}(c), has a separate hidden layer after the LSTM memory blocks, and the output activation $h_t$ is replaced by $p_t$:
\begin{equation}\label{RNN2}
p_t=\phi_{L}(W_{L}\phi_{L-1}(\ldots\phi_{0}(W_{0}h_{t} + b_{0}))+b_{L})
\end{equation}
We call this architecture the \emph{LSTM with output projection layer} (LSTM-OP for short). This architecture was in fact proposed earlier in \cite{LSTMRNNAM} to address the computational complexity of learning a LSTM network, where it is called the LSTM projected. From the perspective of this paper, it is considered a way to increase the depth of the hidden-to-hidden transition, although it may additionally be beneficial in tackling the computational complexity issue.
Note that, in the LSTM-OP architecture, linear activation units can be used in the projection layer, as suggested in \cite{LSTMRNNAM}. By contrast, non-linear activation units (e.g. $\tanh$) must be used in the projection layer of the LSTM-IP.
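To make the two variants concrete, the following sketch (one projection layer each, with hypothetical parameter names) shows where each projection enters the computation; note the non-linear activation in the LSTM-IP projection and the linear one allowed in the LSTM-OP projection:
\begin{verbatim}
import numpy as np

def lstm_ip_cell_input(x_t, h_prev, p):
    # LSTM-IP: the cell input a_t passes through a projection
    # stack before entering the cell (one layer shown); the
    # projection must be non-linear.
    a0 = np.tanh(p['W_0x'] @ x_t + p['W_0h'] @ h_prev + p['b_0'])
    return np.tanh(p['W_1'] @ a0 + p['b_1'])

def lstm_op_output(h_t, p):
    # LSTM-OP: the block output h_t goes through a projection
    # layer; a linear activation is sufficient here.
    return p['W_p'] @ h_t + p['b_p']
\end{verbatim}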
\subsubsection{Deep input-to-hidden function}
A typical way to make the input-to-hidden function deep is to use higher-level representations from DNNs as the input for RNNs. It was reported in \cite{SCUTHFEFDNN} that better phoneme recognition performance can be achieved by applying this strategy to RNNs.
Those previous studies were based on conventional RNNs; in this research, the method is adopted for constructing deep LSTM networks, as illustrated in Fig.~\ref{Figure1}(d), and applied to a large vocabulary speech recognition task.
\subsubsection{Deep hidden-to-output function}
It has been argued that a deep hidden-to-output function can be useful to disentangle the factors of variation in the hidden state \cite{HTCDRNN}. Based on this view, we construct a deep LSTM network, shown in Fig.~\ref{Figure1}(e), by adding intermediate layers between the output of the LSTM and the softmax layer.
\subsubsection{Stack of LSTMs}
Perhaps the most straightforward way to construct a deep LSTM network is to stack multiple LSTM layers on top of each other.
Specifically, the output $h_t$ of the lower LSTM layer is the input $x_t$ of the upper LSTM layer. Stacked LSTM networks can combine multiple levels of representation with flexible use of long-range context, and were introduced for acoustic modeling in speech recognition in \cite{SRWDRNN}, which showed that a significant performance improvement can be obtained compared with a shallow network.
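A sketch of this composition, reusing the single-step function \texttt{lstm\_step} from the sketch above (the per-layer size \texttt{n\_hidden} is an assumed entry in each parameter dictionary):
\begin{verbatim}
import numpy as np

def stacked_lstm_forward(x, layers):
    # x: (T, n_in) input sequence; layers: list of per-layer
    # parameter dicts. The hidden sequence of layer l is the
    # input sequence of layer l+1.
    seq = x
    for p in layers:
        h = np.zeros(p['n_hidden'])
        c = np.zeros(p['n_hidden'])
        outs = []
        for t in range(seq.shape[0]):
            h, c = lstm_step(seq[t], h, c, p)
            outs.append(h)
        seq = np.stack(outs)
    return seq
\end{verbatim}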
\section{GPU Implementation}
We implement the LSTM network training on multi-GPU devices. In the training procedure, the truncated back-propagation through time (BPTT) learning algorithm \cite{BPTT} is adopted. Each sentence in the training set is split into subsequences of equal length $T_{bptt}$ (e.g. 15 frames). As illustrated in Fig.~\ref{Figure2}, two adjacent subsequences share $T_{overlap}$ overlapping frames (e.g. 5 frames). The gradients are computed for each subsequence and back-propagated to its start.
For computational efficiency, one GPU operates in parallel on $N$ (e.g. 20) subsequences from different utterances at a time. After the GPU has updated the parameters of the LSTM networks, it continues with the next $N$ subsequences of these utterances.
Besides, in order to train these networks on multi-GPU devices, asynchronous stochastic gradient descent (ASGD) \cite{ASGD1}\cite{ASGD2} is adopted.
In our experiments, it took about two days to train a shallow conventional LSTM network with 750 cells on a 150-hour speech corpus using four GPU devices, where training a LSTM layer took around two to five times as long as training a fully-connected feed-forward hidden layer.
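The subsequence layout used for truncated BPTT can be sketched as follows (an illustration, with frame counts matching the examples above):
\begin{verbatim}
def split_into_subsequences(utterance, t_bptt=15, t_overlap=5):
    # Split one utterance (a sequence of frames) into subsequences
    # of length t_bptt, with adjacent subsequences sharing
    # t_overlap frames.
    step = t_bptt - t_overlap
    subsequences = []
    for start in range(0, max(1, len(utterance) - t_overlap), step):
        subsequences.append(utterance[start:start + t_bptt])
    return subsequences
\end{verbatim}
With $T_{bptt}=15$ and $T_{overlap}=5$, consecutive subsequences advance by 10 frames, so frames near a subsequence boundary are still seen with some left context.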
\begin{figure}[htb]
\centerline{\includegraphics[width=80mm]{Figure2}}
\caption{\label{Figure2} {Illustration of GPU implementation.}}
\label{spprod}
\end{figure}
\section{Experiments}
We evaluate these LSTM networks on a large vocabulary speech recognition task - the HKUST Mandarin Chinese conversational telephone speech recognition \cite{HKUST}. The corpus (LDC2005S15, LDC2005T32) was collected and transcribed by the Hong Kong University of Science and Technology (HKUST); it contains 150 hours of speech, with 873 calls in the training set and 24 calls in the development set. In our experiments, around 10 hours of speech were randomly selected from the training set and used as the validation set for network training, while the original development set of the corpus was used as the speech recognition test set, which was not used in the training or hyper-parameter determination procedures.
\subsection{Experimental setup}
The speech in the dataset is represented with 25ms frames of Mel-scale log-filterbank coefficients (including the energy value), along with their first and second temporal derivatives. In the experiments, the feed-forward DNNs used concatenated features, produced by concatenating the current frame with 5 frames of left and right context. For the inputs of the LSTM networks, however, only the current frame (no context) was used.
A trigram language model, estimated on all the transcriptions of the acoustic model training set, was used in all the experiments. We use a hybrid approach \cite{CdDnn2}\cite{LSTMRNNAM} for acoustic modeling with LSTM networks or DNNs, in which the neural networks' outputs are converted to pseudo likelihoods serving as the state output probabilities in the HMM framework. All the networks were trained on the alignments generated by a well-trained GMM-HMM system with 3304 tied context-dependent HMM states (realignment by DNNs was not performed), and only the cross-entropy objective function was used for all networks.
For network training, the learning rate was decreased exponentially. The initial and final learning rates were set specific to each network architecture for stable convergence. In the experiments, the initial learning rates ranged from 0.0005 to 0.002, and each final learning rate was set to one-tenth of the corresponding initial one. In the training procedure of the LSTM networks, the strategy introduced in \cite{RNN.difficulty} was applied to scale down the gradients. Besides, since information from future frames helps the LSTM networks make better decisions for the current frame, we also delayed the output HMM state labels by 3 frames.
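As an illustration of these two details, the following sketch (not the actual training code; the threshold value is arbitrary) shows a global-norm rescaling of the gradients, in the spirit of the cited strategy, together with the label delay:
\begin{verbatim}
import numpy as np

def scale_down_gradients(grads, max_norm=1.0):
    # Rescale the list of gradient arrays when their global norm
    # exceeds a threshold (threshold value is illustrative).
    norm = np.sqrt(sum(np.sum(g * g) for g in grads))
    if norm > max_norm:
        grads = [g * (max_norm / norm) for g in grads]
    return grads

def delay_targets(frames, labels, delay=3):
    # Pair the frame at time t with the HMM state label of time
    # t - delay, so the network sees `delay` future frames before
    # committing to a decision for a given label.
    return frames[delay:], labels[:-delay]
\end{verbatim}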
\subsection{Experimental results}
Firstly, the baseline performance is summarized in Table~\ref{table0}.
For training the Subspace GMM \cite{SubspaceGMM}, KALDI toolkit \cite{KALDI} was used.
All the DNNs in the experiments had 4 hidden layers. Each layer in the ``ReLU DNN'' model had 2000 ReLU units \cite{ReLU}.
Each layer in the ``PNorm DNN'' model had 800 pnorm units \cite{Pnorm}, where the hyper-parameter $p$ is set to 2,
and the group size is set to 8.
The ``Conv DNN'' model had two convolutional layers (along with max-pooling) and three ReLU layers.
The character error rates (CER) of the baseline GMM-HMM and DNN-HMM systems are comparable with those reported in \cite{HKUST.Result1}\cite{HKUST.Result2}\cite{HKUST.Result3}.
\begin{table}[th]
\caption{\label{table0} {Speech recognition results of baseline systems on the HKUST Mandarin Chinese conversational telephone speech recognition task.}}
\vspace{2mm}
\centerline{
\begin{tabular}{|l|c|}
\hline
Model Descriptions & CER(\%) \\
\hline \hline
GMM & 48.68 \\
\hline
Subspace GMM & 44.29 \\
\hline
ReLU DNN & 38.42 \\
\hline
PNorm DNN & 38.01 \\
\hline
Conv DNN & 37.13 \\
\hline
\end{tabular}}
\end{table}
Experiments were conducted to evaluate these deep LSTM networks shown in Fig.~{\ref{Figure1}}.
In the training procedure of these LSTM networks, $T_{bptt}$ was fixed at 15 and $T_{overlap}$ at 5.
Four GPUs were used, and each GPU operated in parallel on $20$ subsequences at a time.
The LSTM-IP network in the experiment had 750 LSTM cells and a non-linear activation projection layer with 2000 $tanh$ units.
The LSTM-OP network in the experiment had 2000 LSTM cells and a linear activation projection layer with 750 nodes.
In order to construct a LSTM network with a deep input-to-hidden function, we put a LSTM layer on top of three feed-forward intermediate layers, each with 2000 ReLU units. This network is indicated as ``3-layer ReLU + LSTM'' in Table~\ref{table1}.
Similarly, we trained a model indicated as ``2-layer Conv + 2-layer ReLU + LSTM''.
For the deep hidden-to-output function, a LSTM network, indicated as ``LSTM + 3-layer ReLU'' in Table~\ref{table1}, was constructed by adding three feed-forward intermediate hidden layers, each with 2000 ReLU units, on top of the LSTM layer.
The stacked LSTMs network was also evaluated; in it, three conventional LSTM layers were stacked, each with 750 LSTM cells. These three networks were trained using the discriminative pre-training algorithm \cite{DiscriminativePretraining}.
Concretely, in the training procedure of ``3-layer ReLU + LSTM'', the three ReLU hidden layers were first pre-trained, then the original output softmax layer was replaced by a new randomly initialized LSTM layer along with a new output softmax layer, and finally the whole network was jointly optimized.
\begin{table}[th]
\caption{\label{table1} {Speech recognition results of different strategies of constructing deep LSTM networks.}}
\vspace{2mm}
\centerline{
\begin{tabular}{|l|c|}
\hline
Model Descriptions & CER(\%) \\
\hline \hline
LSTM & 40.28 \\
\hline
LSTM-IP & 39.09 \\
\hline
LSTM-OP & 35.92 \\
\hline
3-layer ReLU + LSTM & 37.31 \\
\hline
2-layer Conv + 2-layer ReLU + LSTM & 36.66 \\
\hline
LSTM + 3-layer ReLU & 37.16 \\
\hline
Stack of LSTM (3-layer) & \bf{35.91} \\
\hline
\end{tabular}}
\end{table}
Comparing the results listed in Table~\ref{table1} with the baseline, the performance of the 1-layer conventional LSTM network is even worse than that of the feed-forward DNNs. By making the hidden-to-hidden transition deep, clear performance improvements can be obtained, especially with the LSTM-OP. Besides, the performance can also be improved by making the input-to-hidden and hidden-to-output functions deep. It should be noted that the LSTM-OP yields performance comparable to the stacked LSTMs, in line with the conclusion of \cite{LSTMRNNAM}.
It is possible to design and train deeper variants of LSTM networks that combine the different methods in Fig.~\ref{Figure1}. For instance, a stacked LSTM-OPs network may be constructed by combining the deep hidden-to-hidden transition with the stack of LSTMs. Combining different methods in Fig.~\ref{Figure1} is a potential way to further improve the performance. Thus, experiments were conducted to evaluate selected combinations of these methods for constructing deep LSTM networks, where each hidden layer had the same configuration as in the experiments described above. The results are listed in Table~\ref{table2}; the best performance is obtained by combining the LSTM-OP with a deep hidden-to-output function.
\begin{table}[th]
\caption{\label{table2} {Speech recognition results of selected combinations for constructing deep LSTM networks.}}
\vspace{2mm}
\centerline{
\begin{tabular}{|l|c|}
\hline
Model Descriptions & CER(\%) \\
\hline \hline
3-layer ReLU + LSTM-OP & 36.73 \\
\hline
2-layer Conv + 2-layer ReLU + LSTM-OP & 36.15 \\
\hline
LSTM-OP + 3-layer ReLU & \bf{34.65} \\
\hline
Stack of LSTM-IP (3-layer) & 35.00 \\
\hline
Stack of LSTM-OP (3-layer) & 34.84 \\
\hline
\end{tabular}}
\end{table}
From the results in Table~\ref{table2}, we find that the performance can be further improved by stacking LSTM-IPs or LSTM-OPs. However, the network with a LSTM-OP layer on top of three feed-forward intermediate layers yielded worse performance than the plain LSTM-OP network, which requires further investigation. Notably, the network with three fully-connected hidden layers on top of the LSTM-OP layer yielded the best performance, and required less computation than the stacked LSTM-OPs network in both training and testing.
These experimental results reveal that LSTM networks benefit from depth. Compared with the shallow LSTM network, a 13.98\% relative CER reduction is obtained. Compared with the feed-forward DNNs, the deep LSTM networks reduce the CER from 38.01\% to 34.65\%, an 8.87\% relative CER reduction.
\section{Discussion and Conclusions}
In this paper, we have explored novel approaches to constructing long short-term memory (LSTM) based deep recurrent neural networks (RNNs), which have been shown to give state-of-the-art performance for acoustic modeling on several speech recognition tasks. Inspired by the discussion of how to construct deep RNNs in \cite{HTCDRNN}, several alternative architectures for deep LSTM networks were constructed from three points: (1) the input-to-hidden function, (2) the hidden-to-hidden transition and (3) the hidden-to-output function. Furthermore, some deeper variants of LSTMs were designed by combining different points.
In this work, the LSTM network training was implemented on multi-GPU devices with the truncated BPTT learning algorithm, and the experiments showed that LSTM RNNs can also be trained quickly on GPU devices.
We empirically evaluated various deep LSTM networks on a large vocabulary Mandarin Chinese conversational telephone speech recognition task. The experiments revealed that deep LSTM architectures outperform the standard shallow LSTM networks and DNNs. Besides, the LSTM-OP followed by three feed-forward intermediate layers outperformed the stacked LSTM-OPs.
However, we believe that this work is just a preliminary study on how to construct deep LSTM networks, and much remains to be done on LSTM architectures. Other architectures will be explored and evaluated in our future work, such as a LSTM-IP network with three non-linear activation projection layers, a stacked LSTMs network followed by multiple feed-forward intermediate layers, a LSTM network with both input and output projection layers, and deep architectures with the maxout-unit-improved LSTM layer \cite{MaxoutLSTM}.
\section{Acknowledgements}
\label{sec:Acknowledgement}
The work was supported in part by the National Basic Research Program (2013CB329304), the Research Special Fund for Public Welfare Industry of Health (201202001), and the National Natural Science Foundation (No. 61121002, No. 91120001).
\ninept
\bibliographystyle{IEEEbib}
\section{Introduction}
Astronomical survey science is entering a decade which will witness a surfeit of new optical imaging data that will transform astronomical and cosmological research. The new datasets obtained from upcoming ground-based and space-based observing facilities will image vast areas of the sky at unprecedented depths. Among the most important of these will be the Legacy Survey of Space and Time (LSST) at the Vera Rubin Observatory \cite{0912.0201}, which will complement not only the already existing datasets from the Dark Energy Survey (DES, \cite{1708.01530}) and Hyper Suprime-Cam (HSC, \cite{1809.09148}), but also the data from upcoming space-based surveys like Euclid \cite{1001.0061}. Due to their large fields of view, most of these surveys will not be able to employ adaptive optics, resulting in an arcsecond-sized point spread function (PSF). At the depths of these surveys, this will result in blending that affects over half of the galaxies in the survey (for example, the authors of \cite{2005.12039} estimate that approximately 60\% of the objects in the HSC are affected by this problem).
Blending, especially if undetected, can introduce serious systematic errors in the survey analysis. These include potentially catastrophically wrong inference of fluxes, leading to biased photometric redshifts, biased estimates of shear and local object density dependent systematics that can interact with galaxy and cluster detections. The main approach to solving the blending problem is two-pronged. On one hand, there is the realization that source separation (deblending) will never be perfect and we should focus on understanding its properties and its effects on the analysis. This is done through simulations and artificial source injections into real data. On the other hand, the better the deblender we have, the smaller the corrections we need to do to make our analysis unbiased. We therefore need both good deblenders and good schemes for understanding their imperfections.
Deblending is in principle a well defined problem. The basic model is that the images of individual galaxies are combined on the projected plane, assuming perfect transparency (i.e. intensities add), and then observed through the atmosphere and telescope with known PSF and noise properties. The most sophisticated deblenders on the market combine machine-learning approaches for setting priors on galaxy shapes, and physical modelling for things that we can model explicitly, like the noise and the PSF \cite{1802.10157,1912.03980}.
Several machine-learning based methods have been proposed recently to grapple with the galaxy deblending problem. The authors of \cite{reiman2019deblending} designed a branched deblender with generative adversarial networks, which can deblend images with two overlapping galaxies. \cite{boucaud2020photometry} developed a framework for measuring the photometry of blended galaxies as well as performing segmentation with a standard convolutional neural network (CNN) and a U-Net. In 2020, \cite{2005.12039} introduced an algorithm in which a Variational Auto-Encoder (VAE)-like neural network was used for galaxy deblending. Most current neural-network based galaxy deblenders require the galaxy to be located at the center of the image in order to signal to the network which galaxy to recover, which is often impractical and prevents the model from being used iteratively, i.e., it degrades the model's performance on the image after the first galaxy is removed. Besides, the number of galaxies in the blended image is always fixed for the previous methods, and to the best of our knowledge, there is no neural-network based galaxy deblending framework that can work on images with an arbitrary and unknown number of galaxies. In this paper, we propose an innovative deblending framework with a galaxy deblender and a classifier that deblends galaxies from a blended image iteratively, without any prior information about the number of galaxies. In addition, since the deblender recovers galaxies based on their luminosity, our framework places no constraint on the positions of the galaxies. Nevertheless, like other machine-learning approaches to deblending, our work remains exploratory, as we aim to better understand the applicability of neural networks to astronomical image analysis.
This paper is organized as follows: in Section~\ref{sec:NN_Training}, we introduce the architecture of our framework, including the deblender and the classifier, and the experimental settings used to train the model. In Section~\ref{sec:Result}, we present the experimental results and a comparison with the industry standard deblending method, Source Extractor (SExtractor, \cite{B&A1996}). The discussion and conclusions follow in Section~\ref{sec:Conclusion}.
\section{Neural Network Architecture and Training} \label{sec:NN_Training}
\newcommand{\emph{deblender}}{\emph{deblender}}
\newcommand{\emph{classifier}}{\emph{classifier}}
\subsection{Proposed Framework} \label{fwdesc}
Our goal is to deblend galaxies from astronomical images with an arbitrary and unknown number of overlapping galaxies. The proposed framework consists of two components: a \emph{deblender}, which isolates the image corresponding to a single galaxy from an astronomical image, and a \emph{classifier}, which counts how many galaxies remain in the image. The \emph{deblender}~and \emph{classifier}\ are then used iteratively to separate the scene into its constituent galaxy images. This is illustrated in Figure \ref{fig:framework} and represented in the following meta-code:
\begin{verbatim}
while True:
    num_galaxies = classifier(image)
    if num_galaxies == 0:
        break
    deblended_galaxy = deblender(image)
    image -= deblended_galaxy
\end{verbatim}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figs/Framework.png}
\caption{Framework.}
\label{fig:framework}
\end{figure}
Specifically, given a noisy blended image with multiple galaxies, the deblender takes it as input and outputs a noiseless image with a single galaxy, as it was trained to do. This single-galaxy image is then subtracted from the input of the deblender to obtain a residual image, which is another noisy blended image. The classifier is used to determine whether there are further galaxies in the residual image, and if there are, the process is repeated until no galaxy is left in the residual image.
In the ideal case, the classifier detects one fewer galaxy at each step and the process stops when there are no galaxies left. We call such deblends \emph{High quality} deblends. In a non-ideal case, one of the two following scenarios typically plays out. Either the number of galaxies in the image does not decrease by one per iteration, but the process still eventually comes to a halt with zero galaxies detected in the final image; we refer to such results as \emph{Medium quality} deblends. Or the process gets stuck in an infinite loop where the \emph{classifier}\ maintains that there are more than zero galaxies in the image, but the deblender fails to locate them. In that case, if the classifier predicts the same non-zero number of galaxies in the residual image for three consecutive iterations, we terminate the iteration and call the results \emph{Low quality} deblends.
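The bookkeeping behind these quality labels can be sketched as follows (an illustration, where \texttt{deblender} and \texttt{classifier} are assumed callables, and stalling is detected via three consecutive identical counts as described above):
\begin{verbatim}
def run_and_grade(image, deblender, classifier):
    # Iterate the deblender/classifier loop and grade the result.
    counts = [classifier(image)]
    galaxies = []
    stuck = False
    while counts[-1] > 0:
        # terminate if the same non-zero count repeats three times
        if len(counts) >= 3 and counts[-1] == counts[-2] == counts[-3]:
            stuck = True
            break
        galaxy = deblender(image)
        galaxies.append(galaxy)
        image = image - galaxy
        counts.append(classifier(image))
    if stuck:
        quality = 'low'
    elif all(a - b == 1 for a, b in zip(counts, counts[1:])):
        quality = 'high'    # one galaxy removed per step, down to zero
    else:
        quality = 'medium'  # reached zero, but not one per step
    return galaxies, quality
\end{verbatim}
Note that the quality label is a property of the run itself and requires no ground truth.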
The next subsection describes the architectures of our two main components.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figs/RDN_arch.pdf}
\caption{The architecture of RDN.}
\label{fig:rdn}
\end{figure}
\subsection{Deblender}
Given an astronomical image with multiple overlapped galaxies, the deblender aims to deblend one galaxy from it.
A Residual Dense Network (RDN)~\cite{zhang2018residual} is trained as the deblender in this framework. RDN shows superior performance in image super-resolution~\cite{zhang2018residual} and image restoration~\cite{zhang2020residual}. For the deblending task, the RDN will take noisy blended images with multiple galaxies as input and the output will be a noiseless image containing the brightest galaxy.
Figure~\ref{fig:rdn} shows the architecture of the RDN, starting with a shallow feature extraction net (SFTNet), which includes two convolutional layers that extract features from the input. The SFTNet is followed by the main component of the RDN, the residual dense blocks (RDBs). Each RDB contains $C$ convolutional layers with ReLU activation functions. The layers in an RDB are densely connected, meaning that the feature maps from all the preceding layers of the current RDB, as well as the output of the preceding RDB, form the input of the current convolutional layer. The feature maps from the convolutional layers of the current RDB and the output of the preceding RDB are also concatenated as a local feature fusion, followed by a $1\times1$ convolutional layer. At the end of an RDB there is a local residual step, the addition of the output from the previous RDB. After $D$ RDBs, where $D$ denotes the number of RDBs, comes a global feature fusion where, similarly to the local feature fusion, the features from all the RDBs are concatenated and then passed to a $1\times1$ convolutional layer. The last step before the up-sampling net is global residual learning, i.e., an addition between the current features and the features from the first convolutional layer of the SFTNet. The efficient sub-pixel convolutional neural network (ESPCN) followed by a convolutional layer forms the up-sampling net. Since the RDN in our framework is used for deblending, the output image has the same size as the input image. During training, the RDN takes noisy blended images as input and predicts noiseless images with a single galaxy. The $\ell_1$-norm is used to compute the difference between the output of the RDN and the ground truth; thus the loss function for the RDN is written as:
\begin{equation}
l^\theta_{\rm RDN}(I, I_{gt}) = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H} \left|f(I, \theta)_{x,y}-(I_{gt})_{x,y} \right| \label{eq:RDN}
\end{equation}
where $I$ is the input noisy blended image and $f(I, \theta)$ is the output of the RDN, with $\theta$ denoting the parameters of the model and $W$ and $H$ the image width and height. $I_{gt}$ denotes the ground truth image, a noiseless image containing the brightest galaxy.
In Figure \ref{fig:rdn} we show a schematic representation of the \emph{deblender}\ network.
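As an illustration of the dense connectivity, local feature fusion and local residual described above, a single RDB can be sketched in PyTorch as follows (an interpretation with illustrative channel sizes, not the code used for the experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class RDB(nn.Module):
    # One residual dense block: C densely connected conv layers,
    # a 1x1 local feature fusion, and a local residual connection.
    def __init__(self, channels=64, growth=64, C=8):
        super().__init__()
        self.convs = nn.ModuleList()
        for c in range(C):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels + c * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # local feature fusion over all concatenated feature maps
        self.fusion = nn.Conv2d(channels + C * growth, channels, 1)

    def forward(self, x):
        # x is the output of the preceding RDB (or the SFTNet)
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))
\end{verbatim}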
\subsection{Classifier}
For the classifier portion of the deblending network, we use a VGG-16 network~\cite{simonyan2014very}. VGG-16 contains 13 convolutional layers and 3 fully connected layers. In our framework, it is modified to output four classes, i.e., the classifier can tell whether an image contains 0, 1, 2 or 3 galaxies.
In the first phase of training, the classifier is trained on images with 0 to 3 galaxies. During the second phase of the training process, where the deblender and the classifier are trained jointly, the deblended images from the deblender are also passed through the classifier to check that the RDN has deblended only one galaxy from the blended image; this loss is also used to update the RDN's parameters. Thus, the training set of the classifier contains both noiseless and noisy images with 0 to 3 galaxies. The cross-entropy (CE) loss is used to train the classifier parameterized by $\phi$, as in Eq.~\ref{eq:CE}, where $y$ represents the one-hot ground-truth label.
\begin{equation}
l^\phi_{CE}(I) = -y^{\rm T}\cdot\log(g(I, \phi)) \label{eq:CE}
\end{equation}
\subsection{Training \& Test Data}
In this work, we use BlendingToolKit \footnote{\url{https://github.com/LSSTDESC/BlendingToolKit}} in order to simulate the galaxy images and to generate the blended images. BlendingToolkit is a complete framework whose functionalities include generation of images with blended objects as well as providing measurements of the deblending and detection performance. It relies on the industry standard \texttt{galsim} \cite{2015A&C....10..121R} packages to make the actual renderings of the galaxies and is built on top of the \texttt{WeakLensingDeblending} \footnote{\url{https://github.com/LSSTDESC/WeakLensingDeblending}} package.
We first generate noiseless images of single galaxies, which can be considered as the pre-blended ground truth images. We employ the LSST DM galaxy catalog with a span of 1 square deg (\texttt{OneDegSq.fits}, supplied with the BlendingToolKit). We use the default options, which impose a magnitude cut on the i-band, $i<25.3$. The noiseless blended images are then generated by a pixelwise summation of these single-galaxy images. Additional random Poisson noise is added to each image to obtain the noisy blended images corresponding to the LSST 10-year depth. We selected the $g$, $r$ and $i$ bands from the resultant images and converted them into RGB images. The original dimension of the generated images is (120, 120, 3); we crop each image to (80, 80, 3) from the center. The pixel values of the blended images are normalized to $[0, 1]$ before entering the framework, and the pre-blended images are scaled to $[-1, 1]$ following~\cite{reiman2019deblending}.
Unlike other machine-learning based deblending methods, where one of the galaxies has to be located at the center of the image~\cite{reiman2019deblending, boucaud2020photometry}, there is no constraint on the positions of the galaxies for our deblender, because we train it to output the object with the highest luminosity. This is also what allows our framework to work iteratively, as does the use of a pixelwise sum when creating the blended images. During training, the noisy blended images are treated as the input, with the noiseless image of the brighter galaxy as the pre-blended ground truth for the deblender. Our goal is to train a deblender that can deblend and denoise at the same time, and that always recovers the brightest galaxy from the blended images. The classifier is trained with $0$--$3$ galaxies, and for each of the four classes half of the training data are noiseless images while the other half are noisy. This is essential in order to train a classifier that can count the galaxies in both noiseless and noisy images.
We generated 50,000 2-galaxy blended images with BlendingToolKit as the training set for the deblender. The test set contains 1000 blended images with 2 galaxies each. There is also a second test set with 1000 3-galaxy blended images, used to demonstrate the generalization of our framework. For the classifier, we generated blended images containing 0, 1, 2 and 3 galaxies, with 100,000 images per class (400,000 in total), half of them noiseless and the other half noisy. The test set for the classifier contains 2000 images per class, 8000 in total, with the same ratio of noiseless to noisy images. The classifier thus outputs one of 4 classes, indicating the number of galaxies in the input image.
\subsection{Training procedure}
The training process is divided into two phases. In the first phase, the deblender and the classifier are pre-trained separately until they perform reasonably. They are then fine-tuned jointly to boost the performance of both while working in the iterative setting. Specifically, we expect that the classifier will force the deblender to output only one galaxy, while the deblender will provide more intermediate images that push the classifier towards better discrimination ability.
During the pre-training of the RDN, the noisy two-galaxy blended images are used as the input and the noiseless images with the brighter single galaxy as the ground truth, with Eq.~\ref{eq:RDN} as the objective function. A model trained in this way has the ability not only to deblend the brighter galaxy, but also to denoise simultaneously. Meanwhile, in the first training phase, the classifier is trained on the dataset containing both noisy and noiseless blended images with 0, 1, 2 and 3 galaxies. This pre-training phase makes the classifier able to classify both noisy and noiseless images and to determine the end of the whole workflow, and gives the RDN its basic deblending ability. Both networks are then used together during the second phase of the training.
During the second phase, the deblender and the classifier are trained jointly. The framework operates as designed, i.e., the deblender is run twice because the input contains two galaxies. The blended image, denoted $I_{{\rm blend}}$, is passed through the deblender to get the deblended image $I_{{\rm deblend}-1}$, which is compared with the ground truth $I_{{\rm gt}-1}$ to build the $\ell_1$-norm loss $l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})$. The first residual image $I_{{\rm res}-1}$ is the difference between $I_{{\rm blend}}$ and $I_{{\rm deblend}-1}$. The residual images are generally fainter than the blended images, and are therefore normalized by dividing by their maximum pixel value. The deblender then predicts the second galaxy $I_{{\rm deblend}-2}$ from $I_{{\rm res}-1}$ (scaling the result back), and another residual image $I_{{\rm res}-2}$ is calculated. For simplicity, we use $\mathbb{S}=\{I_{{\rm blend}}, I_{{\rm deblend}-1}, I_{{\rm deblend}-2}, I_{{\rm res}-1}, I_{{\rm res}-2}, I_{{\rm gt}-1}, I_{{\rm gt}-2}\}$ to denote all the images. In this phase, the classifier is optimized on all the available images in $\mathbb{S}$, as formulated in Eq.~\ref{eq:VGG-2}. Since $\mathbb{S}$ is class-imbalanced, with more 1-galaxy images than 0- and 2-galaxy images, $\alpha_I$ is used as a weight: for 1-galaxy images $\alpha_I=0.2$, otherwise $\alpha_I=1$. The deblender, in addition to $l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})$, is updated with a loss from the classifier's prediction on the deblended images. The objective functions for the second phase are formulated in Eq.~\ref{eq:RDN-2}, where $\lambda$ is a trade-off coefficient.
\begin{equation}
l^\phi_{phase-2} = \sum_{I \in \mathbb{S}} \alpha_I l_{CE}(I) \label{eq:VGG-2}
\end{equation}
\begin{equation}
l^\theta_{phase-2} = l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})+\lambda l_{CE}(I_{{\rm deblend}}) \label{eq:RDN-2}
\end{equation}
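One pass of this phase can be sketched as follows (illustrative; \texttt{classifier\_ce} is an assumed helper returning the cross-entropy of the classifier for an image and a galaxy-count label, and \texttt{deblender} is an assumed callable):
\begin{verbatim}
import numpy as np

def l1_loss(pred, target):
    return np.mean(np.abs(pred - target))

def phase2_losses(I_blend, I_gt1, I_gt2, deblender,
                  classifier_ce, lam=1e-4):
    # One second-phase pass for a 2-galaxy blend (a sketch).
    I_deb1 = deblender(I_blend)
    I_res1 = I_blend - I_deb1
    scale = I_res1.max()          # residuals are fainter: renormalize
    I_deb2 = deblender(I_res1 / scale) * scale
    I_res2 = I_res1 - I_deb2
    # deblender objective: l1 term plus classifier penalty on the
    # deblended image, weighted by the trade-off coefficient lambda
    loss_deb = l1_loss(I_deb1, I_gt1) + lam * classifier_ce(I_deb1, 1)
    # classifier objective over all images in S, with the
    # class-balance weights alpha_I (0.2 for 1-galaxy images)
    S = [(I_blend, 2), (I_deb1, 1), (I_deb2, 1),
         (I_res1, 1), (I_res2, 0), (I_gt1, 1), (I_gt2, 1)]
    loss_cls = sum((0.2 if n == 1 else 1.0) * classifier_ce(I, n)
                   for I, n in S)
    return loss_deb, loss_cls
\end{verbatim}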
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend2-18.png}
\includegraphics[width=\linewidth]{iterative_results/blend2-39.png}
\captionsetup{justification=raggedright}
\caption{Iterative results for 2-galaxy blended images. We show two examples. In each example, the top row shows the noisy image as it progresses through the iterative process, with the input image on the left and the remaining noise image (after the galaxies were subtracted) on the right. The middle row shows the galaxy images that were isolated from the input image by the \emph{deblender}. The bottom row shows the ground truth, which can be visually compared with the deblended images above. All the images are plotted with inverted fluxes to improve the contrast (i.e. white corresponds to zero flux). The number in the corner of each image is the number of remaining galaxies as determined by the classifier.
}
\label{fig:iterative_blend2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend2-22.png}
\includegraphics[width=\linewidth]{iterative_results/blend2-328.png}
\caption{As Fig. \ref{fig:iterative_blend2}, but for medium and low quality samples.}
\label{fig:iterative_blend2_medium_and_low}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend3-11.png}
\includegraphics[width=\linewidth]{iterative_results/blend3-21.png}
\caption{As Fig. \ref{fig:iterative_blend2}, but for 3 galaxy blends.}
\label{fig:iterative_blend3}
\end{figure}
\section{Results} \label{sec:Result}
\subsection{Experimental settings}
\subsubsection{Implementation details}
The RDN used in our framework contains $D=16$ RDBs with $C=8$ convolutional layers in each block. During the first phase of the training process, the learning rate for the RDN is $10^{-4}$, the batch size is 128, and it is trained for 150 epochs. For the classifier, the initial learning rate is $0.1$, the batch size is 200, and it is trained for 200 epochs. In the second phase, the deblender and the classifier are updated jointly, with the learning rate decayed to $10^{-5}$ for the former and $10^{-6}$ for the latter. After some experimentation, we chose the trade-off coefficient $\lambda=10^{-4}$. The batch size is 8 due to GPU memory limits, and the framework is trained for 10 epochs in this phase.
\subsubsection{Evaluation}
We trained the framework on 2-galaxy blended images and tested the trained model on both 2-galaxy and 3-galaxy blended images. Both test sets contain 1000 noisy blended images.
We start by showing some example results in Figures \ref{fig:iterative_blend2} and \ref{fig:iterative_blend3}. These show the typical output of a high-quality deblend. The network uses both morphological and color information to isolate images of individual galaxies, which helps it recover images even in difficult cases.
\begin{table}
\centering
\caption{\label{tab:iterative} Iterative test on 2-galaxy and 3-galaxy blended images.}
\begin{tabular}{c|cc}
\hline \hline
& 2-galaxy & 3-galaxy \\
\hline
High-quality & 77.3\% & 50.4\% \\
Medium-quality & 15.3\% & 36.9\% \\
Low-quality & 7.4\% & 12.7\% \\
\hline \hline
\end{tabular}
\end{table}
In Table~\ref{tab:iterative} we present the fraction of deblends in each quality category (see Section \ref{fwdesc}) for the 2-galaxy and 3-galaxy problems. Note that this only indicates how well the scheme thinks it is doing, rather than how well it is actually doing. With this caveat, the process shows relatively good results, with a large fraction of nominally high-quality deblends.
\begin{table*}[th]
\centering
\begin{tabular}{c|cc|cc|cc|cc}
\hline \hline
& \multicolumn{4}{c|}{2-galaxy problem} & \multicolumn{4}{c}{3-galaxy problem} \\
\hline
& \multicolumn{2}{c|}{PSNR} & \multicolumn{2}{c|}{SSIM} & \multicolumn{2}{c|}{PSNR} & \multicolumn{2}{c}{SSIM} \\
\hline
& Mean & Median & Mean & Median & Mean & Median & Mean & Median \\
\hline
Deblended 1 & 56.51 & 57.93 & 0.9973 & 0.9997 & 54.66 & 57.19 & 0.9954 & 0.9997 \\
Deblended 2 & 58.18 & 59.02 & 0.9967 & 0.9997 & 55.96 & 58.77 & 0.9929 & 0.9996 \\
Deblended 3 & & & & & 57.71 & 59.29 & 0.9956 & 0.9995 \\
\hline \hline
\end{tabular}
\caption{\label{tab:bigtable} PSNR(dB) and SSIM for 2-galaxy and 3-galaxy deblending problems.}
\end{table*}
To quantify the quality of the deblends, we start with standard image analysis metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are used to evaluate the quality of the deblended images from the deblender when compared with the ground truth.
PSNR is a peak-error measure of reconstruction quality in image compression. It computes, in decibels, the logarithm of the ratio between the squared maximum pixel value (MAX) of the ground truth and the mean squared error (MSE) between the test image and the ground truth. In our experiment, the ground truth consists of noiseless images with a single galaxy, and the test images are the corresponding deblended images from the RDN. PSNR is formulated as Eq.~\ref{eq:PSNR}:
\begin{equation}
{\rm PSNR(dB)} = 20\cdot \log_{10}({\rm MAX})-10\cdot \log_{10}({\rm MSE}) \label{eq:PSNR}
\end{equation}
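A direct NumPy transcription of this definition:
\begin{verbatim}
import numpy as np

def psnr(test_image, ground_truth):
    # PSNR in dB: MAX is the peak pixel value of the ground truth,
    # MSE the mean squared error between the two images.
    mse = np.mean((test_image - ground_truth) ** 2)
    peak = ground_truth.max()
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)
\end{verbatim}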
SSIM~\cite{wang2004image}, the structural similarity index, is commonly used to evaluate the similarity between two images using the means $\mu_x$ and $\mu_y$, the variances $\sigma_x^2$ and $\sigma_y^2$, and the covariance $\sigma_{xy}$. In Eq.~\ref{eq:SSIM}, $c_1=(k_1L)^2$ and $c_2=(k_2L)^2$ are two small constants that avoid instability when the denominator is weak. We use $k_1=0.01$ and $k_2=0.03$ by default, and $L$ is the dynamic range of the pixel values.
\begin{equation}
{\rm SSIM} = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)} \label{eq:SSIM}
\end{equation}
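A single-window NumPy transcription of this expression (the windowed averaging used by common SSIM implementations is omitted for brevity):
\begin{verbatim}
import numpy as np

def ssim(x, y, k1=0.01, k2=0.03, L=1.0):
    # Global (single-window) SSIM following the equation above.
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
\end{verbatim}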
Results are summarized in Table \ref{tab:bigtable}. We observe a few trends. First, the PSNR of the second deblended galaxy is higher than that of the first. This somewhat counter-intuitive result comes from the normalization of the PSNR, since the second deblend is fainter than the first one. In other words, the PSNR is not telling us that the second deblended galaxy is better deblended, only that its quality is less degraded than expected given how much fainter it is. This interpretation is confirmed by the SSIM values, which are higher for the first deblend, but only marginally. In both cases we see that the 3-galaxy problem is harder than the 2-galaxy problem. We also see that the median values are systematically above the mean values, telling us that the mean is pulled down by a small number of catastrophic outliers. To study these results in language more relevant to the astronomy community, we turn to the recovery of the fluxes and moments, described in the next section.
\subsection{Flux and image moment recovery using RDN}
\label{sec:flux-image-moment}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=8.5cm]{plots/flux_x.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the comparison of the fluxes recovered from the RDN and the true fluxes for the \texttt{x} galaxies. The red line is the linear regression fit to the data and the dashed black line is an arbitrary $y=x$ fit. The shading in green denotes a $95\%$ confidence interval to the linear fit.}
\label{fl_xcomp_SE}
\end{figure}
An important measure of the effectiveness of any algorithm such as ours is the accurate recovery of physical parameters from the input image. To that end, we compare the fluxes and the second-order image moments (which represent the equivalent ellipse of the image and, by extension, its shape and orientation) recovered by the RDN against those of the truth images. For ease of reference, the brighter galaxy in each field is referred to as the `x' galaxy and the fainter one as the `y' galaxy.
Figure \ref{fl_xcomp_SE} shows a comparison of the fluxes of the \texttt{x} galaxies as recovered by the RDN against the truth images for the test set. The larger panel on the left shows the true fluxes on the X-axis and the RDN fluxes on the y-axis. The solid red line is a linear regression fit to the data, and the dashed black line is an arbitrary $y=x$ fit. The light green shading represents a $95\%$ confidence interval to the linear fit. The three smaller panels on the right side show the same fits to the high, medium and low quality blends which are detailed in Section \ref{fwdesc}. The overlapping histograms in the bottom left panel show the density distributions of the RDN and true fluxes. Figure \ref{fl_ycomp_SE} is a similar representation for the fainter, \texttt{y} galaxies.
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=8.5cm]{plots/flux_y.png}
\captionsetup{justification=raggedright}
\caption{As Fig. \ref{fl_xcomp_SE}, but for the \texttt{y} galaxies. }
\label{fl_ycomp_SE}
\end{figure}
As is evident from Figure \ref{fl_xcomp_SE}, the RDN is very efficient in recovering the fluxes for the brighter galaxies in our fields, more or less uniformly so for the high, medium and low quality images.
The effectiveness of the RDN for this group of images can be further observed from the overlapping histograms of the true (red) and recovered (green) fluxes in the bottom left panel. Figure \ref{fl_ycomp_SE} shows the same plots for the fainter, \texttt{y} galaxies. As expected, the lower quality assignments are associated with a higher scatter between true and measured fluxes, although for the brighter galaxy there is no measurable difference between the medium and low quality bins (given the size of our test catalog). For the fainter galaxies, however, the low-quality deblends are essentially uncorrelated with the true values. We find that, except for secondary objects in the low-quality bin, our RDN based estimator is essentially unbiased in recovering flux values.
Similarly to the fluxes, the shapes and orientations of the galaxies, represented by their second-order image moments, are crucial characteristics that we aim to recover with the RDN, in particular with the weak lensing application in mind. Figure \ref{debtr_mu11} shows a comparison of the moment $\mu_{11}$ for the truth images and the RDN-recovered images. The bigger panels show the true moment on the x-axis and the RDN-recovered moment on the y-axis for the \texttt{x} and \texttt{y} galaxies. The blue line is an arbitrary $y=x$ fit. The smaller panels on the bottom show the same plot for the high, medium and low quality images. Results for the other component of the second-order moment, $\mu_{20}-\mu_{02}$, look qualitatively the same. In an ideal deblender, the results would lie on the $y=x$ line, but we notice the appearance of a curious cross, which we comment on below.
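For reference, flux-weighted second-order central moments can be computed as in the following sketch (a single-band, background-subtracted image is assumed; the exact convention used for the figures may differ):
\begin{verbatim}
import numpy as np

def second_moments(image):
    # Flux-weighted second-order central moments; mu_11 and
    # mu_20 - mu_02 describe the orientation and ellipticity of
    # the equivalent ellipse.
    ys, xs = np.indices(image.shape)
    flux = image.sum()
    xbar = (image * xs).sum() / flux
    ybar = (image * ys).sum() / flux
    mu_20 = (image * (xs - xbar) ** 2).sum() / flux
    mu_02 = (image * (ys - ybar) ** 2).sum() / flux
    mu_11 = (image * (xs - xbar) * (ys - ybar)).sum() / flux
    return mu_11, mu_20 - mu_02
\end{verbatim}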
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/debtr_mu11_both_test-2.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the comparison of true $\mu_{11}$ and RDN recovered $\mu_{11}$ for the brighter (\texttt{x}) and the fainter (\texttt{y}) galaxies on the large top plots. The smaller second row plots show the same split by quality bin. The blue line is a $y=x$ fit. }
\label{debtr_mu11}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{plots/comp_mom_xy.png}
\caption{Plot showing the second order image moments for \texttt{x} and \texttt{y} ground truth images in green, with the RDN recovered moments superimposed in blue. The true and recovered moments exhibit the same general distribution while some scatter is observed at the edges of the plot. }
\label{mom_comp_xy}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{plots/true_mu2_hist.pdf}
\captionsetup{justification=raggedright}
\caption{Plot showing the histogram of the absolute value of the second moment $|\mu_2|$ for the true images. The true simulated galaxies belong to two categories: they are either round ($|\mu_2|=0$) or not. The histogram is truncated; a small number of galaxies have considerably higher $|\mu_2|$.}
\label{fig:hist}
\end{figure}
Figure \ref{mom_comp_xy} shows a comparison of the second order moments ($\mu_{11}$, $\mu_{20} -\mu_{02}$) plotted against each other for both the ground truth \texttt{x} and \texttt{y} galaxies (green) with the moments recovered by the RDN superimposed (blue). We see no evidence of any preferred axis in the recovered distribution of these quantities.
This leads to a curious result. The ensemble properties of the RDN-recovered moments are correctly distributed. However, the per-galaxy results have structure that goes beyond the normal scatter around the truth. In particular, the cross seen in Figure \ref{debtr_mu11} implies that galaxies with a significant $\mu_{11}$ sometimes end up having zero recovered $\mu_{11}$, but also vice-versa: galaxies with zero true $\mu_{11}$ occasionally end up having a significant recovered $\mu_{11}$.
One plausible explanation for this curious result is as follows. In Figure \ref{fig:hist} we plot the histogram of the true value of $|\mu_2| = \sqrt{(\mu_{20}-\mu_{02})^2 + (2\mu_{11})^2}$, i.e. the rotationally invariant measure of the non-circularity of the image. This histogram shows a distinct peak around the $|\mu_2|=0$ region. The simulated galaxies fall into two classes: those that are approximately round and those which are not. The RDN seems to have learned this and tries to classify the galaxy into one of the two categories before reconstructing its image. The arms of the cross correspond to these failures: one for non-circular galaxies reconstructed as circular ones, and the other for circular galaxies reconstructed as non-circular ones.
\subsection{Comparison with SExtractor}
\label{sec:comp-with-sextr}
\begin{figure}
\captionsetup{justification=raggedright}
\includegraphics[width=8.5cm,height=8.5cm]{plots/comp_rdnvssex_sex_flux.png}
\caption{Plot showing the comparison of flux recovery from the RDN and SExtractor.The top left and right panels show the true fluxes on the X-axis and the RDN and SExtractor fluxes on the y-axis respectively for \texttt{x} galaxies. The red line is a linear regression fit with a $95\%$ confidence interval represented by light green shading. The dashed black line is the $y=x$ line. The bottom left and right panels are the same representations for \texttt{y} galaxies. }
\label{rdnsexcomp_flux}
\end{figure}
In this section, we compare our deblending strategy with what is widely considered the industry standard, Source Extractor (SExtractor, \cite{B&A1996}), which has been the baseline detection, deblending and image extraction software in astronomy for over two decades. It returns a set of user-specified parameters, chosen from its more extensive default parameter file, following a configuration defined by the user. In this work, we have used the Python implementation of SExtractor, \textit{sep} \cite{sep2015}, with the settings employed for DES\footnote{\url{https://github.com/esheldon/sxdes}}.
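For reference, a minimal \textit{sep} measurement run looks as follows (the detection threshold and aperture radius here are illustrative, not the DES settings):
\begin{verbatim}
import numpy as np
import sep

def extract_objects(image):
    # Detect sources on a co-added 2D image and measure circular-
    # aperture fluxes (a sketch).
    data = np.ascontiguousarray(image, dtype=np.float64)
    bkg = sep.Background(data)          # spatially varying background
    data_sub = data - bkg.back()
    objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)
    flux, fluxerr, flag = sep.sum_circle(
        data_sub, objects['x'], objects['y'], 3.0, err=bkg.globalrms)
    return objects, flux
\end{verbatim}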
\begin{figure}
\includegraphics[width=\linewidth]{plots/comp_smom_xy.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the second order image moments for \texttt{x} and \texttt{y} ground truth images in green, with the SExtractor recovered moments superimposed in blue. While the distribution is similar to that in Figure \ref{mom_comp_xy}, the SExtractor recovered moments exhibit much larger scatter than the RDN recovered moments. }
\label{mom_sex_xy}
\end{figure}
We note that this is a fundamentally unfair comparison for several reasons. First, the RDN has been trained on this particular set of galaxies and thus internally employs a correctly tuned prior for the distribution of morphologies, fluxes and ellipticities, while SExtractor employs a general algorithm and is thus intrinsically more robust. Moreover, following the DES configuration, SExtractor is run on the co-added image (R+G+B) and thus misses the color information. Nevertheless, it is an appropriate sanity check of the kind of improvement that can be brought about by employing more sophisticated methods.
Figure \ref{rdnsexcomp_flux} shows the comparison of the flux recovery by the RDN and SExtractor for both the \texttt{x} and \texttt{y} galaxies. The top left and right panels feature the RDN recovered and SExtractor recovered fluxes on the y-axis and the true fluxes on the X-axis for the \texttt{x} galaxies. The bottom left and right panels are the same representations for the \texttt{y} galaxies. The red line is a linear regression fit to the data, with the light green shading representing a $95\%$ confidence interval. The dashed black line shows $y=x$. As is evident from the plots, the RDN does a better job of recovering the object fluxes for both the \texttt{x} and \texttt{y} galaxies. SExtractor has a tendency to put a disproportionate amount of flux into the first detected objects, and hence the \texttt{x} images are biased high, while the \texttt{y} images are biased low.
Measuring moments for SExtractor images is considerably more difficult due to the presence of noise in them, and the results are overly noisy. Correctly subtracting the noise contribution in a fair manner goes beyond the scope of this paper.
\subsection{Failure modes}
In this section we investigate the most common failure modes in the two approaches by cherry-picking cases, in which one method performs correctly while the other fails.
We manually looked at all the cases where the RDN detects just a single galaxy. In a majority of those, the secondary object was very faint and overlapped almost perfectly with the primary, so the deblending was suspended because the \emph{classifier}\ determined that there were no further objects. However, for $12$ of the $1000$ objects in the test set, the deblended \texttt{x} galaxy clearly shows two objects; examples are shown in Figure \ref{rdnfail_1}. These cases are trivial for the SExtractor approach. A visual inspection of these images showed that, in $7$ of these fields, both objects are very similar in shape and, in $5$ fields, they have similar brightnesses. It stands to reason that the RDN has difficulty deblending objects with similar structural parameters, most likely because it cannot decide which is the ``brighter'' object, especially in the presence of noise. In fact, when presented with such examples in training, it simply pays a large loss penalty whenever it starts with the ``wrong'' object, without a clear rule for picking one of the two objects when noise hides the identity of the brighter one. We discuss this further in Section \ref{sec:Conclusion}.
\begin{figure*}
\includegraphics[width=\textwidth]{plots/combims-rdnfail.png}
\captionsetup{justification=raggedright}
\caption{Instances where the RDN fails, while SExtractor succeeds in detecting and deblending the objects in the field. On the left is the original blend, in the centre is the output of the RDN and on the right is the blend with the SExtractor detections superimposed in red.}
\label{rdnfail_1}
\end{figure*}
When SExtractor is run on the test set, it returns photometric parameters for $979$ \texttt{x} galaxies and $729$ \texttt{y} galaxies (the \texttt{x} counterparts of all \texttt{y} galaxies are present in the \texttt{x} set). As seen in Figure \ref{rdnfail_1}, this includes $9$ instances where the RDN fails to deblend the galaxies in the field. Overall, however, there are more instances where the RDN successfully deblends the \texttt{x} and \texttt{y} galaxies while SExtractor either detects neither galaxy, or detects only the \texttt{x} galaxy but not the \texttt{y} galaxy; a few examples are illustrated in Figure \ref{sexfail_1}. We note that these are traditional blend merges, where the deblender merges two distinct objects into a single one. Since the human eye is quite good at detecting these, we could possibly improve upon this by fine-tuning the SExtractor settings, potentially at the cost of artificially shredding objects with substructure. These are also the examples where color information is most helpful.
\begin{figure*}
\includegraphics[width=\textwidth]{plots/combims-sexfail.png}
\captionsetup{justification=raggedright}
\caption{Instances where SExtractor fails to detect and deblend, while the RDN succeeds. On the far left is the original blend, the RDN result is represented in the two central panels for the \texttt{x} and \texttt{y} galaxies, and the blend with the SExtractor detection superimposed in red is on the far right panel.}
\label{sexfail_1}
\end{figure*}
\section{Summary and Conclusions} \label{sec:Conclusion}
In this work we present a new approach to deblending using a Residual Dense Network, which was trained on blended galaxy images with a realistic PSF and noise levels, and which has performed decently in deblending and recovering object fluxes and shapes. Compared to previous works, our set up does not assume that the object to be deblended is located at the center of the image. We also do not need to assume in advance, the number of objects to be deblended. By using two neural networks, one to perform deblending and one to determine the number of remaining objects in the field, we can classify the quality of deblends. We have shown that deblends that are designed to be higher quality have indeed, less noisy fluxes and shapes.
In the most current deblending networks found in literature, the network is trained to deblend the central galaxy. Given that centering in the presence of noise presents its own problems, we have trained our network to deblend the brightest galaxy remaining in the image. This works very well, where the choice of the brightest is clear, but leads to a particular failure mode where the network cannot decide where to start, when two objects are approximately equally bright (see Figure \ref{rdnfail_1}). This might be a generic feature of any kind of deblender that proceeds by peeling off one galaxy at a time. For example, even if we choose the galaxy closer to the center, there would likely be an equivalent failure mode when two galaxies are equally close to the center. The same thing could happen for other initial assumptions that attempt to distinguish between the galaxies. There are several approaches to solve this. One possibility would be to use a symmetrized loss function that would train the network to deblend either galaxy. Another possibility would be to have a more sophisticated network that deblends multiple galaxies at once, perhaps again with an appropriately symmetrized loss function.
We have found that our method outperforms the industry standard SExtractor on flux recovery, subject to strong caveats mentioned in the Section \ref{sec:comp-with-sextr}. However, we found that the RDN recovered shapes are either very good or catastrophically wrong. The catastrophical failures result in the cross-effect discussed in Section \ref{sec:flux-image-moment}. One possible explanation for this involves bimodality in the shapes of the truth objects and the RDN internally assigning the image to the wrong class. It also serves as a warning for use of a deeply non-linear deblender based on NN for work requiring precision shapes, such as weak lensing. At the same time, such work could lead to potentially interesting new work on galaxy-star separation and morphological classification of galaxies.
Similarly to the majority of other neural network approaches, our deblender cannot deal optimally with a PSF that is variable and is different for the different input channels (for e.g. for different bands), nor can it deal with a pixel mask. This is a significant deficiency that is usually swept under the rug under the assumption that the network can be retrained for a set of different PSF. However, in practice, we have a different PSF for different bands and with the typical astronomical observations in five or six bands, the number of possible PSF combinations becomes unmanageable. The PSF shape in each band thus needs to be part of the input and network training. We are leaving these important aspects for future work.
One of the main advantages of most ML approaches to deblending is that they also automatically denoise and deconvolve the image and can in principle fill in the missing information in case of pixel masks. For a sufficiently sophisticated neural network, this would trivialize many typical operations such as flux and shear estimation on the resulting truth images. However, it would also require us to have a more advanced way of propagation of both statistical and systematic uncertainties.
This work has demonstrated the feasibility of using RDN in astronomical image analysis and has highlighted some of the unresolved issues with the current machine learning approaches to the deblending problem in the case of realistic astronomical images.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research through the SciDAC grant ``Accelerating HEP Science: Inference and Machine Learning at Extreme Scales''. We thank our collaborators for useful input.
\section{Introduction}
Astronomical survey science is entering a decade which will witness a surfeit of new optical imaging data that will transform astronomical and cosmological research. The new datasets obtained from upcoming ground-based and space-based observing facilities will image vast areas of the sky at unprecedented depths. Among the most important of these will be the Legacy Survey of Space and Time (LSST) from the Vera Rubin Observatory \cite{0912.0201}, which will complement not only the already existing datasets from the Dark Energy Survey (DES, \cite{1708.01530}) and Hyper-Suprime Cam (HSC, \cite{1809.09148}), but also the data from upcoming space-based surveys like Euclid \cite{1001.0061}. Due to their large fields of view, most of these surveys will not be able to employ adaptive optics, resulting in an arcsec-sized point spread function (PSF). At the depths of these surveys, this will result in blending affecting over half of the galaxies in the survey (for example, in \cite{2005.12039} the authors estimate that approximately 60\% of the objects in the HSC are affected by this problem).
Blending, especially if undetected, can introduce serious systematic errors in the survey analysis. These include potentially catastrophically wrong inference of fluxes, leading to biased photometric redshifts, biased estimates of shear, and local object-density dependent systematics that can interact with galaxy and cluster detections. The main approach to solving the blending problem is two-pronged. On the one hand, there is the realization that source separation (deblending) will never be perfect and we should focus on understanding its properties and its effects on the analysis. This is done through simulations and artificial source injections into real data. On the other hand, the better the deblender we have, the smaller the corrections we need to make to keep our analysis unbiased. We therefore need both good deblenders and good schemes for understanding their imperfections.
Deblending is in principle a well defined problem. The basic model is that the images of individual galaxies are combined on the projected plane, assuming perfect transparency (i.e. intensities add), and then observed through the atmosphere and telescope with known PSF and noise properties. The most sophisticated deblenders on the market combine machine-learning approaches for setting priors on galaxy shapes, and physical modelling for things that we can model explicitly, like the noise and the PSF \cite{1802.10157,1912.03980}.
Several machine-learning based methods have been proposed recently in order to grapple with the galaxy deblending problem. \cite{reiman2019deblending} designed a branched deblender with generative adversarial networks, which can deblend images with two overlapped galaxies. \cite{boucaud2020photometry} developed a framework for measuring the photometry of blended galaxies, as well as performing segmentation, with a standard convolutional neural network (CNN) and a U-Net. In 2020, \cite{2005.12039} introduced an algorithm where a Variational Auto-Encoder (VAE)-like neural network was used for galaxy deblending. Most of the current generation of neural-network based galaxy deblenders require the galaxy to be located at the center of the image in order to give the network an indication of which galaxy to recover. This is often impractical and prevents the model from being used iteratively, i.e., it will degrade the model's performance on the image after the first galaxy is removed. Moreover, for these methods the number of galaxies in the blended image is always fixed, and to the best of our knowledge, there is no neural-network based galaxy deblending framework that can work on images with an arbitrary and unknown number of galaxies. In this paper, we propose a new deblending framework consisting of a galaxy deblender and a classifier, which can deblend galaxies from a blended image iteratively without any prior information about the number of galaxies. In addition, since the deblender recovers galaxies based on their luminosity, our framework places no constraint on the positions of the galaxies. Nevertheless, like other machine-learning approaches to deblending, our work remains exploratory, as we aim to better understand the applicability of neural networks to astronomical image analysis.
This paper is organized as follows: in Section~\ref{sec:NN_Training}, we introduce the architecture of our framework, including the deblender and the classifier, and the experimental settings used to train the model. In Section~\ref{sec:Result}, we present the experimental results and a comparison with the industry-standard deblending method, Source Extractor (SExtractor, \cite{B&A1996}). The discussion and conclusions follow in Section~\ref{sec:Conclusion}.
\section{Neural Network Architecture and Training} \label{sec:NN_Training}
\newcommand{\emph{deblender}}{\emph{deblender}}
\newcommand{\emph{classifier}}{\emph{classifier}}
\subsection{Proposed Framework} \label{fwdesc}
Our goal is to deblend galaxies from astronomical images with an arbitrary and unknown number of overlapped galaxies. The proposed framework consists of two components: a \emph{deblender}, which isolates the image corresponding to a single galaxy from an astronomical image, and a \emph{classifier}, which counts how many galaxies remain in the image. The \emph{deblender}~and \emph{classifier}\ are then used iteratively to separate the scene into its constituent galaxy images. This is illustrated in Figure \ref{fig:framework} and represented in the following meta-code:
\begin{verbatim}
while True:
    num_galaxies = classifier(image)
    if num_galaxies == 0:
        break
    deblended_galaxy = deblender(image)
    image -= deblended_galaxy
\end{verbatim}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figs/Framework.png}
\caption{Schematic of the proposed iterative deblending framework: the \emph{classifier}\ counts the galaxies remaining in the image and the \emph{deblender}\ removes them one at a time.}
\label{fig:framework}
\end{figure}
Specifically, given a noisy blended image with multiple galaxies, the deblender takes it as input and outputs a noiseless image with a single galaxy, as it was trained to do. This single-galaxy image is then subtracted from the input of the deblender to get the residual image, which is another noisy blended image. The classifier is used to determine whether there are further galaxies in the residual image, and if there are, this process is repeated until no galaxy is left in the residual image.
In the ideal case, the classifier detects one fewer galaxy at each step and the process stops when there are no more galaxies left. We call such deblends \emph{High quality} deblends. In a non-ideal case, one of the two following scenarios typically plays out. In the first, the number of galaxies in the image does not decrease by one galaxy per iteration, but the process still eventually comes to a halt with zero galaxies detected in the final image. When this happens, we refer to the results as \emph{Medium quality} deblends. In the second, the process gets stuck in an infinite loop, where the \emph{classifier}\ maintains that there are more than zero galaxies in the image but the deblender fails to locate them. In that case, if the classifier predicts the same non-zero number of galaxies in the residual image for three consecutive iterations, we terminate the iteration and call the results \emph{Low quality} deblends.
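For concreteness, the loop and the quality grading just described can be sketched as follows; \texttt{deblender} and \texttt{classifier} stand in for the two trained networks, and the bookkeeping details are illustrative rather than a verbatim copy of our implementation.
\begin{verbatim}
MAX_REPEATS = 3  # stuck-count termination threshold

def run_deblending(image, deblender, classifier):
    galaxies, counts = [], []
    while True:
        n = classifier(image)
        counts.append(n)
        if n == 0:
            break
        # low quality: count stuck at the same non-zero
        # value for three consecutive iterations
        if (len(counts) >= MAX_REPEATS
                and len(set(counts[-MAX_REPEATS:])) == 1):
            return galaxies, "low"
        galaxy = deblender(image)
        galaxies.append(galaxy)
        image = image - galaxy
    # high quality: count dropped by one per iteration
    ideal = list(range(counts[0], -1, -1))
    return galaxies, ("high" if counts == ideal else "medium")
\end{verbatim}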
The next subsection describes the architectures of our two main components.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figs/RDN_arch.pdf}
\caption{The architecture of RDN.}
\label{fig:rdn}
\end{figure}
\subsection{Deblender}
Given an astronomical image with multiple overlapped galaxies, the deblender aims to deblend one galaxy from it.
A Residual Dense Network (RDN)~\cite{zhang2018residual} is trained as the deblender in this framework. RDN shows superior performance in image super-resolution~\cite{zhang2018residual} and image restoration~\cite{zhang2020residual}. For the deblending task, the RDN will take noisy blended images with multiple galaxies as input and the output will be a noiseless image containing the brightest galaxy.
Figure~\ref{fig:rdn} shows the architecture of the RDN, starting with a shallow feature extraction net (SFTNet), which includes two convolutional layers that extract features from the input. The SFTNet is followed by the main component of the RDN, namely the residual dense blocks (RDBs). Each RDB contains $C$ convolutional layers with a ReLU activation function. The layers in an RDB are densely connected, which means that the feature maps from all the preceding layers of the current RDB, as well as the output from the preceding RDB, form the input of the current convolutional layer. The feature maps from the convolutional layers in the current RDB and the preceding RDB are also concatenated as a local feature fusion, followed by a $1\times1$ convolutional layer. At the end of an RDB, there is a local residual step: the addition of the output from the previous RDB. Following the $D$ RDBs, where $D$ denotes the number of RDBs, is a global feature fusion where, similarly to the local feature fusion, the features from all the RDBs are concatenated and then passed to a $1\times1$ convolutional layer. The last step before the up-sampling net is global residual learning, referring to an addition between the current features and the features from the first convolutional layer in the SFTNet. The efficient sub-pixel convolutional neural network (ESPCN) followed by a convolutional layer forms the up-sampling net. Since the RDN in our framework is used for deblending, the output image has the same size as the input image. During training, the RDN takes noisy blended images as input and predicts noiseless images with a single galaxy. The $\ell_1$-norm is used to compute the difference between the output of the RDN and the ground truth, so the loss function for the RDN is written as:
\begin{equation}
l^\theta_{\rm RDN}(I, I_{gt}) = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H} \left|f(I, \theta)_{x,y}-(I_{gt})_{x,y} \right| \label{eq:RDN}
\end{equation}
where $I$ is the input noisy blended image and $f(I, \theta)$ is the output of the RDN, with $\theta$ referring to the parameters of the model. $I_{gt}$ denotes the ground truth image, which is a noiseless image containing the brightest galaxy.
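To make the dense connectivity concrete, the following sketch shows one possible PyTorch implementation of a single RDB with its local feature fusion and local residual connection; the channel width and growth rate are illustrative choices, not necessarily the exact values of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class RDB(nn.Module):
    # one residual dense block: C densely connected conv
    # layers, 1x1 local feature fusion, local residual
    def __init__(self, channels=64, growth=32, C=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + c * growth, growth,
                          kernel_size=3, padding=1),
                nn.ReLU(inplace=True))
            for c in range(C))
        self.fuse = nn.Conv2d(channels + C * growth,
                              channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))

# Eq. (1); the mean reduction supplies the 1/(WH) factor
l1_loss = nn.L1Loss()
\end{verbatim}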
\subsection{Classifier}
For the classifier portion of the deblending framework, we use a VGG-16 network~\cite{simonyan2014very}. VGG-16 contains 13 convolutional layers and 3 fully connected layers. In our framework, it is modified to output four classes, i.e., the classifier can tell whether an image contains 0, 1, 2 or 3 galaxies.
In the first phase of training, the classifier is trained on images with 0 to 3 galaxies. During the second phase of the training process, where the deblender and the classifier are trained jointly, the deblended images from the deblender are also passed through the classifier to check whether the RDN has deblended exactly one galaxy from the blended image. This loss is also used to update the RDN's parameters. Thus, the training set of the classifier contains both noiseless and noisy images with 0 to 3 galaxies. The cross-entropy (CE) loss is used to train the classifier, parameterized by $\phi$, as in Eq.~\ref{eq:CE}, where $y$ represents the one-hot ground truth label.
\begin{equation}
l^\phi_{CE}(I) = -y^{\rm T}\cdot\log(g(I, \phi)) \label{eq:CE}
\end{equation}
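A minimal sketch of this component, using the stock \texttt{torchvision} VGG-16 as a stand-in for our implementation, could look as follows (note that PyTorch's cross-entropy takes integer class labels rather than the one-hot vectors of Eq.~\ref{eq:CE}):
\begin{verbatim}
import torch.nn as nn
from torchvision.models import vgg16

# untrained VGG-16 whose final fully connected layer
# outputs 4 classes: 0, 1, 2 or 3 galaxies
classifier = vgg16(num_classes=4)

# Eq. (3); expects integer labels in {0, 1, 2, 3}
criterion = nn.CrossEntropyLoss()
\end{verbatim}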
\subsection{Training \& Test Data}
In this work, we use BlendingToolKit \footnote{\url{https://github.com/LSSTDESC/BlendingToolKit}} to simulate the galaxy images and to generate the blended images. BlendingToolKit is a complete framework whose functionalities include the generation of images with blended objects, as well as measurements of deblending and detection performance. It relies on the industry-standard \texttt{galsim} \cite{2015A&C....10..121R} package to make the actual renderings of the galaxies and is built on top of the \texttt{WeakLensingDeblending} \footnote{\url{https://github.com/LSSTDESC/WeakLensingDeblending}} package.
We first generate noiseless images of single galaxies, which serve as the pre-blended ground truth images. We employ the LSST DM galaxy catalog with a span of 1 square deg (\texttt{OneDegSq.fits}, supplied with the BlendingToolKit). We use the default options, which impose a magnitude cut on the i-band, $i<25.3$. The noiseless blended images are then generated by a pixelwise summation of these single-galaxy images. Additional random Poisson noise is added to each image to obtain the noisy blended images corresponding to the LSST 10-year depth. We select the $g$, $r$ and $i$ bands from the resulting images and convert them into RGB images. The original dimension of the generated images is (120, 120, 3); we crop each image to (80, 80, 3) around the center. The pixel values of the blended images are normalized to $[0, 1]$ before entering the framework, and the pre-blended images are scaled to $[-1, 1]$, following~\cite{reiman2019deblending}.
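The two scalings are simple affine maps; a minimal sketch (assuming a per-image min/max normalization, which is one possible convention) is:
\begin{verbatim}
import numpy as np

def normalize_blend(img):
    # noisy blended image -> [0, 1] (network input)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def scale_truth(img):
    # pre-blended ground truth -> [-1, 1]
    lo, hi = img.min(), img.max()
    return 2.0 * (img - lo) / (hi - lo) - 1.0
\end{verbatim}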
Unlike other machine-learning based deblending methods, where one of the galaxies has to be located at the center of the image~\cite{reiman2019deblending, boucaud2020photometry}, there is no constraint on the positions of the galaxies for our deblender, because we train it to output the object with the highest luminosity. This is also the reason why our framework can work iteratively; the use of summation when creating the blended images likewise makes the iterative process possible. During training, the noisy blended images are treated as the input and the noiseless image with the brightest galaxy as the pre-blended ground truth for the deblender. Our goal is to train a deblender that deblends and denoises at the same time and always recovers the brightest galaxy from the blended images. The classifier is trained on images with $0$--$3$ galaxies and, for each of the four classes, half of the training data are noiseless images while the other half are noisy. This is essential in order to train a classifier that can count the galaxies in both noiseless and noisy images.
We generate 50,000 2-galaxy blended images with BlendingToolKit as the training set for the deblender. The test set contains 1000 blended images with 2 galaxies each. There is also a second test set of 1000 3-galaxy blended images, used to demonstrate the generalization of our framework. For the classifier, we generate blended images containing 0, 1, 2 and 3 galaxies, with 100,000 images per class (400,000 in total), half of which are noiseless and the other half noisy. The test set for the classifier contains 2000 images per class, 8000 in total, with the same ratio of noiseless to noisy images.
\subsection{Training procedure}
The training process can be divided into two phases. In the first phase, the deblender and the classifier are pre-trained separately until they perform reasonably. Then they are fine-tuned jointly to boost the performance of both, while working in the iterative setting. Specifically, we expect that the classifier will force the deblender to output only one galaxy, while the deblender will provide more intermediate images to motivate the classifier to have better discrimination ability.
During the pre-training of the RDN, the noisy two-galaxy blended images are used as the input and the noiseless images with the brighter single galaxy as the ground truth. Eq.~\ref{eq:RDN} is the objective function during this process. A model trained in this way has the ability not only to deblend the brighter galaxy, but also to denoise simultaneously. Meanwhile, in the first training phase the classifier is trained on the dataset containing both noisy and noiseless blended images with 0, 1, 2 and 3 galaxies. This pre-training phase enables the classifier to count galaxies in both noisy and noiseless images and thereby to determine the end of the whole workflow. The two networks are then used together during the second phase of the training.
During the second phase, the deblender and the classifier are trained jointly. The framework operates as designed, i.e., the deblender is run twice because the input contains two galaxies. The blended image, denoted $I_{{\rm blend}}$, is passed through the deblender to get the deblended image $I_{{\rm deblend}-1}$, which is compared with the ground truth $I_{{\rm gt}-1}$ to build the $\ell_1$-norm loss $l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})$. The first residual image $I_{{\rm res}-1}$ is the difference between $I_{{\rm blend}}$ and $I_{{\rm deblend}-1}$. Generally, the residual images are fainter than the blended images and are therefore normalized by dividing by their maximum pixel values. The deblender then predicts, and scales back, the second galaxy $I_{{\rm deblend}-2}$ from $I_{{\rm res}-1}$, and another residual image $I_{{\rm res}-2}$ is calculated. For simplicity, we use $\mathbb{S}=\{I_{{\rm blend}}, I_{{\rm deblend}-1}, I_{{\rm deblend}-2}, I_{{\rm res}-1}, I_{{\rm res}-2}, I_{{\rm gt}-1}, I_{{\rm gt}-2}\}$ to denote all the images. In this phase, the classifier is optimized using all the available images in $\mathbb{S}$, as formulated in Eq.~\ref{eq:VGG-2}. Since $\mathbb{S}$ is class-imbalanced, with more 1-galaxy images than 0- and 2-galaxy images, $\alpha_I$ is used as a weight: for 1-galaxy images $\alpha_I=0.2$, otherwise $\alpha_I=1$. The deblender, in addition to $l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})$, is updated based on a loss from the classifier's prediction on the deblended images. The objective function for the second phase is given by Eq.~\ref{eq:RDN-2}, where $\lambda$ is a trade-off coefficient.
\begin{equation}
l^\phi_{phase-2} = \sum_{I \in \mathbb{S}} \alpha_I l_{CE}(I) \label{eq:VGG-2}
\end{equation}
\begin{equation}
l^\theta_{phase-2} = l_{\rm RDN}(I_{{\rm deblend}-1}, I_{{\rm gt}-1})+\lambda l_{CE}(I_{{\rm deblend}}) \label{eq:RDN-2}
\end{equation}
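A simplified sketch of the deblender's update in this phase (Eq.~\ref{eq:RDN-2}) is shown below; for brevity it omits the residual-image branch, the classifier update of Eq.~\ref{eq:VGG-2}, and the optimizer steps, all of which are part of the full procedure.
\begin{verbatim}
import torch

def deblender_phase2_loss(blend, gt1, deblender,
                          classifier, l1, ce, lam=1e-4):
    # l1 = nn.L1Loss(), ce = nn.CrossEntropyLoss()
    deb1 = deblender(blend)
    loss_rdn = l1(deb1, gt1)
    # the classifier should see exactly one galaxy
    # in the deblended image (class label 1)
    labels = torch.ones(deb1.shape[0], dtype=torch.long,
                        device=deb1.device)
    return loss_rdn + lam * ce(classifier(deb1), labels)
\end{verbatim}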
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend2-18.png}
\includegraphics[width=\linewidth]{iterative_results/blend2-39.png}
\captionsetup{justification=raggedright}
\caption{Iterative results for 2-galaxy blended images. We show two examples. In each example, the top row shows the noisy image as it progresses through the iterative process, with the input image on the left and the remaining noise image (after the galaxies were subtracted) on the right. The middle row shows the galaxy images that were isolated from the input image by the \emph{deblender}. The bottom row shows the ground truth, which can be visually compared with the deblended images above. All the images are plotted with inverse fluxes to improve the contrast (i.e. white corresponds to zero flux). The number at the corner of each image represents the number of remaining galaxies as determined by the classifier.
}
\label{fig:iterative_blend2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend2-22.png}
\includegraphics[width=\linewidth]{iterative_results/blend2-328.png}
\caption{As Fig. \ref{fig:iterative_blend2}, but for medium and low quality samples.}
\label{fig:iterative_blend2_medium_and_low}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{iterative_results/blend3-11.png}
\includegraphics[width=\linewidth]{iterative_results/blend3-21.png}
\caption{As Fig. \ref{fig:iterative_blend2}, but for 3 galaxy blends.}
\label{fig:iterative_blend3}
\end{figure}
\section{Results} \label{sec:Result}
\subsection{Experimental settings}
\subsubsection{Implementation details}
The RDN used in our framework contains $D=16$ RDBs with $C=8$ convolutional layers in each block. During the first phase of the training process, the learning rate for the RDN is $10^{-4}$, the batch size is 128, and it is trained for 150 epochs. For the classifier, the initial learning rate is $0.1$, the batch size is 200, and it is trained for 200 epochs. In the second phase, the deblender and the classifier are updated jointly, with the learning rate decayed to $10^{-5}$ for the former and $10^{-6}$ for the latter. After some experimentation, we chose the trade-off coefficient $\lambda=10^{-4}$. The batch size is 8, due to GPU memory limits, and the framework is trained for 10 epochs in this phase.
\subsubsection{Evaluation}
We trained the framework on 2-galaxy blended images and tested the trained model on both 2-galaxy and 3-galaxy blended images. Both test sets contain 1000 noisy blended images.
We start by showing some example results in Figures \ref{fig:iterative_blend2} and \ref{fig:iterative_blend3}. These results show the typical output of a high-quality deblend. The network uses both morphological and color information to isolate the images of individual galaxies, which helps it recover images even in difficult cases.
\begin{table}
\centering
\caption{\label{tab:iterative} Iterative test on 2-galaxy and 3-galaxy blended images.}
\begin{tabular}{c|cc}
\hline \hline
& 2-galaxy & 3-galaxy \\
\hline
High-quality & 77.3\% & 50.4\% \\
Medium-quality & 15.3\% & 36.9\% \\
Low-quality & 7.4\% & 12.7\% \\
\hline \hline
\end{tabular}
\end{table}
In Table~\ref{tab:iterative} we present the fraction of deblends sorted by the quality category (see Section \ref{fwdesc}) for 2-galaxy and 3-galaxy problems. Note that this only indicates how well the scheme thinks it is doing, rather than how well it is actually doing. With this caveat, the process shows relatively good results with a large fraction of nominally high-quality deblends.
\begin{table*}
\centering
\begin{tabular}{c|cc|cc|cc|cc}
\hline \hline
& \multicolumn{4}{c|}{2-galaxy problem} & \multicolumn{4}{c}{3-galaxy problem} \\
\hline
& \multicolumn{2}{c|}{PSNR} & \multicolumn{2}{c|}{SSIM} & \multicolumn{2}{c|}{PSNR} & \multicolumn{2}{c}{SSIM} \\
\hline
& Mean & Median & Mean & Median & Mean & Median & Mean & Median \\
\hline
Deblended 1 & 56.51 & 57.93 & 0.9973 & 0.9997 & 54.66 & 57.19 & 0.9954 & 0.9997 \\
Deblended 2 & 58.18 & 59.02 & 0.9967 & 0.9997 & 55.96 & 58.77 & 0.9929 & 0.9996 \\
Deblended 3 & & & & & 57.71 & 59.29 & 0.9956 & 0.9995 \\
\hline \hline
\end{tabular}
\caption{\label{tab:bigtable} PSNR(dB) and SSIM for 2-galaxy and 3-galaxy deblending problems.}
\end{table*}
To quantify the quality of the deblends, we start with some standard image analysis metrics. We applied the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as metrics to evaluate the quality of deblended images from the deblender when compared with the ground truth.
PSNR represents the peak error and is a standard measure of reconstruction quality in image compression. It is computed, in decibels, from the logarithm of the ratio between the maximum pixel value (MAX) of the ground truth and the mean squared error (MSE) between the test image and the ground truth. In our experiment, the ground truth consists of noiseless images with a single galaxy, and the test images are the corresponding deblended images from the RDN. PSNR is given by Eq.~\ref{eq:PSNR}:
\begin{equation}
{\rm PSNR(dB)} = 20\cdot \log_{10}({\rm MAX})-10\cdot \log_{10}({\rm MSE}) \label{eq:PSNR}
\end{equation}
SSIM~\cite{wang2004image}, known as the structural similarity index, is commonly used to evaluate the similarity between two images using the means $\mu_x$ and $\mu_y$, the variances $\sigma_x$ and $\sigma_y$ and the covariance $\sigma_{xy}$. In Eq.~\ref{eq:SSIM}, $c_1=(k_1L)^2$ and $c_2=(k_2L)^2$ are two small constants to avoid the instability with a weak denominator. We use $k_1=0.01$, $k_2=0.03$ by default and $L$ is the dynamic range of pixel values.
\begin{equation}
{\rm SSIM} = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)} \label{eq:SSIM}
\end{equation}
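Both metrics are available in standard libraries; a sketch using recent \texttt{scikit-image} (taking the dynamic range from the ground-truth image, one reasonable convention) is:
\begin{verbatim}
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity)

def evaluate_pair(truth, deblended):
    # PSNR (Eq. 6) and SSIM (Eq. 7) for one
    # ground-truth / deblended image pair
    rng = truth.max() - truth.min()   # dynamic range L
    psnr = peak_signal_noise_ratio(truth, deblended,
                                   data_range=rng)
    ssim = structural_similarity(truth, deblended,
                                 data_range=rng,
                                 channel_axis=-1)  # RGB
    return psnr, ssim
\end{verbatim}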
Results are summarized in Table \ref{tab:bigtable}. We observe a few trends here. First, the PSNR of the second deblended galaxy is higher than that of the first. This somewhat counter-intuitive result comes from the normalization of the PSNR, since the second deblend is fainter than the first one. In other words, the PSNR is not telling us that the second deblended galaxy is better deblended, only that its quality is less degraded than expected given how much fainter it is. This interpretation is confirmed by the SSIM values, which are higher for the first deblend, but only marginally. In both cases we see that the 3-galaxy problem is more difficult than the 2-galaxy problem. We also see that the median values are systematically above the mean values, telling us that the mean is pulled down by a small number of catastrophic outliers. To study these results in language that is more relevant to the astronomy community, we turn to the recovery of the fluxes and moments, described in the next section.
\subsection{Flux and image moment recovery using RDN}
\label{sec:flux-image-moment}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=8.5cm]{plots/flux_x.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the comparison of the fluxes recovered from the RDN and the true fluxes for the \texttt{x} galaxies. The red line is the linear regression fit to the data and the dashed black line is the $y=x$ line. The shading in green denotes a $95\%$ confidence interval for the linear fit.}
\label{fl_xcomp_SE}
\end{figure}
An important measure of the effectiveness of any algorithm such as ours is the accurate recovery of physical parameters from the input image. To that end, we compare the fluxes and the second-order image moments (which represent the equivalent ellipse of the image and, by extension, its shape and orientation) recovered by the RDN against those of the truth images. For ease of reference, the brighter galaxy in each field is referred to as the \texttt{x} galaxy and the fainter one as the \texttt{y} galaxy.
Figure \ref{fl_xcomp_SE} shows a comparison of the fluxes of the \texttt{x} galaxies as recovered by the RDN against the truth images for the test set. The larger panel on the left shows the true fluxes on the $x$-axis and the RDN fluxes on the $y$-axis. The solid red line is a linear regression fit to the data, and the dashed black line is the $y=x$ line. The light green shading represents a $95\%$ confidence interval for the linear fit. The three smaller panels on the right show the same fits for the high, medium and low quality deblends, which are detailed in Section \ref{fwdesc}. The overlapping histograms in the bottom left panel show the density distributions of the RDN and true fluxes. Figure \ref{fl_ycomp_SE} is a similar representation for the fainter, \texttt{y} galaxies.
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=8.5cm]{plots/flux_y.png}
\captionsetup{justification=raggedright}
\caption{As Fig. \ref{fl_xcomp_SE}, but for the \texttt{y} galaxies. }
\label{fl_ycomp_SE}
\end{figure}
As is evident from Figure \ref{fl_xcomp_SE}, the RDN is very efficient in recovering the fluxes for the brighter galaxies in our fields, more or less uniformly so for the high, medium and low quality images.
The effectiveness of the RDN for this group of images can be further observed from the overlapping histograms of the true (red) and recovered (green) fluxes in the bottom left panel. Figure \ref{fl_ycomp_SE} shows the same plots for the fainter, \texttt{y} galaxies. As expected, the lower quality assignments are associated with a higher scatter between real and measured fluxes, although for the brighter galaxy, there is no measurable difference between the medium and low quality marks (given the size of our test catalog). For the fainter galaxies, however, the low quality deblends are essentially uncorrelated with the true values. We find that except for secondary objects in the low-quality bin, our RDN based estimator is essentially unbiased in recovering flux values.
Similarly to the fluxes, the shapes and orientations of the galaxies, represented by their second-order image moments, are crucial characteristics that we aim to recover using the RDN, in particular with the weak lensing application in mind. Figure \ref{debtr_mu11} shows a comparison of the moment $\mu_{11}$ for the truth images and the RDN-recovered images. The larger panels show the true moment on the $x$-axis and the RDN-recovered moment on the $y$-axis for the \texttt{x} and \texttt{y} galaxies. The blue line is the $y=x$ line. The smaller panels on the bottom show the same plot for the high, medium and low quality images. Results for the other component of the second-order moment, $\mu_{20}-\mu_{02}$, look qualitatively the same. For an ideal deblender, the results would lie on the $y=x$ line, but we notice the appearance of a curious cross, which we comment on below.
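For reference, the quantities used here are the flux-weighted central second moments of the image; a minimal sketch of how such moments can be computed for a single-band image (the exact weighting in our pipeline may differ in detail) is:
\begin{verbatim}
import numpy as np

def second_moments(img):
    # flux-weighted central second moments
    y, x = np.indices(img.shape)
    f = img.sum()
    xc = (x * img).sum() / f
    yc = (y * img).sum() / f
    mu20 = ((x - xc)**2 * img).sum() / f
    mu02 = ((y - yc)**2 * img).sum() / f
    mu11 = ((x - xc) * (y - yc) * img).sum() / f
    return mu20, mu02, mu11
\end{verbatim}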
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/debtr_mu11_both_test-2.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the comparison of true $\mu_{11}$ and RDN-recovered $\mu_{11}$ for the brighter (\texttt{x}) and the fainter (\texttt{y}) galaxies in the large top panels. The smaller panels in the second row show the same data split by quality bin. The blue line is the $y=x$ line. }
\label{debtr_mu11}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{plots/comp_mom_xy.png}
\caption{Plot showing the second order image moments for \texttt{x} and \texttt{y} ground truth images in green, with the RDN recovered moments superimposed in blue. The true and recovered moments exhibit the same general distribution while some scatter is observed at the edges of the plot. }
\label{mom_comp_xy}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{plots/true_mu2_hist.pdf}
\captionsetup{justification=raggedright}
\caption{Plot showing the histogram of the absolute value of the second moment $|\mu_2|$ for the true images. The true simulated galaxies belong to two categories: they are either round ($|\mu_2|=0$) or not. The histogram is truncated; a small number of galaxies have considerably higher $|\mu_2|$.}
\label{fig:hist}
\end{figure}
Figure \ref{mom_comp_xy} shows a comparison of the second order moments ($\mu_{11}$, $\mu_{20} -\mu_{02}$) plotted against each other for both the ground truth \texttt{x} and \texttt{y} galaxies (green) with the moments recovered by the RDN superimposed (blue). We see no evidence of any preferred axis in the recovered distribution of these quantities.
This leads to a curious result. The ensemble properties of the RDN-recovered moments are correctly distributed. However, the per-galaxy results have structure that goes beyond the normal scatter around the truth values. In particular, the cross seen in Figure \ref{debtr_mu11} implies that galaxies with a significant $\mu_{11}$ sometimes end up having zero recovered $\mu_{11}$, but also vice versa: galaxies with zero true $\mu_{11}$ occasionally end up having a significant recovered $\mu_{11}$.
One plausible explanation for this curious result is as follows. In Figure \ref{fig:hist} we plot the histogram of the truth value of $|\mu_2| = \sqrt{(\mu_{20}-\mu_{02})^2 + (2\mu_{11})^2}$, i.e. the rotationally invariant measure of the non-circularity of the image. This histogram shows a distinct peak around $|\mu_2|=0$. The simulated galaxies fall into two classes: those that are approximately round and those that are not. The RDN seems to have learned this and tries to classify the galaxy into one of the two categories before reconstructing its image. The arms of the cross correspond to these failures: one for the non-circular galaxies reconstructed as circular ones and the other for the circular galaxies reconstructed as non-circular ones.
\subsection{Comparison with SExtractor}
\label{sec:comp-with-sextr}
\begin{figure}
\captionsetup{justification=raggedright}
\includegraphics[width=8.5cm,height=8.5cm]{plots/comp_rdnvssex_sex_flux.png}
\caption{Plot showing the comparison of flux recovery from the RDN and SExtractor. The top left and right panels show the true fluxes on the $x$-axis and the RDN and SExtractor fluxes, respectively, on the $y$-axis for the \texttt{x} galaxies. The red line is a linear regression fit with a $95\%$ confidence interval represented by light green shading. The dashed black line is the $y=x$ line. The bottom left and right panels are the same representations for the \texttt{y} galaxies. }
\label{rdnsexcomp_flux}
\end{figure}
In this section, we compare our deblending strategy with what is widely considered the industry standard, Source Extractor (SExtractor, \cite{B&A1996}), which has been the baseline detection, deblending and image extraction software in astronomy for over two decades. It returns a set of user-specified parameters, chosen from a more extensive default parameter file, following the configuration defined by the user. In this work, we have used the Python implementation of SExtractor, \textit{sep} \cite{sep2015}, with the settings employed for DES\footnote{\url{https://github.com/esheldon/sxdes}}.
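For orientation, a minimal \textit{sep} detection run looks like the following sketch; here \texttt{coadd} is an assumed input array holding the co-added (R+G+B) image, and the threshold and aperture values are illustrative rather than the exact DES settings referenced above.
\begin{verbatim}
import numpy as np
import sep

data = np.ascontiguousarray(coadd.astype(np.float32))
bkg = sep.Background(data)        # background model
data_sub = data - bkg.back()      # subtract background
# detect at 1.5 sigma above the global background RMS
objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)
flux, fluxerr, flag = sep.sum_circle(
    data_sub, objects['x'], objects['y'], 3.0,
    err=bkg.globalrms)
\end{verbatim}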
\begin{figure}
\includegraphics[width=\linewidth]{plots/comp_smom_xy.png}
\captionsetup{justification=raggedright}
\caption{Plot showing the second order image moments for \texttt{x} and \texttt{y} ground truth images in green, with the SExtractor recovered moments superimposed in blue. While the distribution is similar to that in Figure \ref{mom_comp_xy}, the SExtractor recovered moments exhibit much larger scatter than the RDN recovered moments. }
\label{mom_sex_xy}
\end{figure}
We note that this is a fundamentally unfair comparison for several reasons. First, the RDN has been trained on this particular set of galaxies and thus internally employs a correctly tuned prior for the distribution of morphologies, fluxes and ellipticities, while SExtractor employs a general algorithm and is thus intrinsically more robust. Moreover, following the DES configuration, SExtractor is run on the co-added image (R+G+B) and thus misses the color information. Nevertheless, it is an appropriate sanity check to measure what kind of improvements can be brought about by employing more sophisticated methods.
Figure \ref{rdnsexcomp_flux} shows the comparison of the flux recovery by the RDN and SExtractor for both the \texttt{x} and \texttt{y} galaxies. The top left and right panels feature the RDN recovered and SExtractor recovered fluxes on the y-axis and the true fluxes on the X-axis for the \texttt{x} galaxies. The bottom left and right panels are the same representations for the \texttt{y} galaxies. The red line is a linear regression fit to the data, with the light green shading representing a $95\%$ confidence interval. The dashed black line shows $y=x$. As is evident from the plots, the RDN does a better job of recovering the object fluxes for both the \texttt{x} and \texttt{y} galaxies. SExtractor has a tendency to put a disproportionate amount of flux into the first detected objects, and hence the \texttt{x} images are biased high, while the \texttt{y} images are biased low.
Measuring moments for the SExtractor images is considerably more difficult due to the presence of noise in them, and the results are overly noisy. Correctly subtracting the noise contribution in a fair manner goes beyond the scope of this paper.
\subsection{Failure modes}
In this section we investigate the most common failure modes of the two approaches by cherry-picking cases in which one method performs correctly while the other fails.
We manually inspected all the cases where the RDN detects just a single galaxy. In the majority of those, the secondary object was very faint and overlapped almost perfectly with the primary, leading to the deblending being suspended because the \emph{classifier}\ determined that there were no further objects. However, for $12$ out of the $1000$ objects in the test set, the deblended \texttt{x} image clearly shows two objects, examples of which are shown in Figure \ref{rdnfail_1}. These cases are trivial for the SExtractor approach. A visual inspection of all these images showed that in $7$ of these fields the two objects are very similar in shape, and in $5$ fields they have similar brightnesses. It stands to reason that the RDN has some difficulty in deblending objects with similar structural parameters, most likely because it cannot decide which is the ``brighter'' object, especially in the presence of noise. In fact, when presented with such examples in training, it simply pays a large loss penalty whenever it starts with the ``wrong'' object, without a clear rule about how to pick one of the two objects if the noise hides the identity of the brighter one. We will discuss this further in Section \ref{sec:Conclusion}.
\begin{figure*}
\includegraphics[width=\textwidth]{plots/combims-rdnfail.png}
\captionsetup{justification=raggedright}
\caption{Instances where the RDN fails, while SExtractor succeeds in detecting and deblending the objects in the field. On the left is the original blend, in the centre is the output of the RDN and on the right is the blend with the SExtractor detections superimposed in red.}
\label{rdnfail_1}
\end{figure*}
When SExtractor is run on the test set, it returns photometric parameters for $979$ \texttt{x} galaxies and $729$ \texttt{y} galaxies (the \texttt{x} counterparts of all \texttt{y} galaxies are present in the \texttt{x} set). As seen in Figure \ref{rdnfail_1}, this includes $9$ instances where the RDN fails to deblend the different galaxies in the field. However, overall, there are more instances where the RDN successfully deblends the \texttt{x} and \texttt{y} galaxies while SExtractor either detects neither galaxy, or detects the \texttt{x} galaxy but not the \texttt{y} galaxy; a few examples are illustrated in Figure \ref{sexfail_1}. We note that these are traditional blend merges, where the deblender merges two distinct objects into a single one. Since the human eye is quite good at detecting these, we could possibly improve upon this by fine-tuning the SExtractor settings, potentially at the cost of artificially shredding objects with substructure. These are also the examples where color information is most helpful.
\begin{figure*}
\includegraphics[width=\textwidth]{plots/combims-sexfail.png}
\captionsetup{justification=raggedright}
\caption{Instances where SExtractor fails to detect and deblend, while the RDN succeeds. On the far left is the original blend, the RDN result is represented in the two central panels for the \texttt{x} and \texttt{y} galaxies, and the blend with the SExtractor detection superimposed in red is on the far right panel.}
\label{sexfail_1}
\end{figure*}
\section{Summary and Conclusions} \label{sec:Conclusion}
In this work we present a new approach to deblending using a Residual Dense Network, which was trained on blended galaxy images with a realistic PSF and noise levels, and which performs decently in deblending and recovering object fluxes and shapes. Compared to previous works, our setup does not assume that the object to be deblended is located at the center of the image. We also do not need to assume the number of objects to be deblended in advance. By using two neural networks, one to perform the deblending and one to determine the number of remaining objects in the field, we can classify the quality of the deblends. We have shown that deblends that are classified as higher quality do indeed have less noisy fluxes and shapes.
In most current deblending networks found in the literature, the network is trained to deblend the central galaxy. Given that centering in the presence of noise presents its own problems, we have trained our network to deblend the brightest galaxy remaining in the image. This works very well where the choice of the brightest object is clear, but leads to a particular failure mode where the network cannot decide where to start when two objects are approximately equally bright (see Figure \ref{rdnfail_1}). This might be a generic feature of any kind of deblender that proceeds by peeling off one galaxy at a time. For example, even if we chose the galaxy closer to the center, there would likely be an equivalent failure mode when two galaxies are equally close to the center. The same thing could happen for other initial assumptions that attempt to distinguish between the galaxies. There are several approaches to solving this. One possibility would be to use a symmetrized loss function that would train the network to deblend either galaxy. Another possibility would be to have a more sophisticated network that deblends multiple galaxies at once, perhaps again with an appropriately symmetrized loss function.
We have found that our method outperforms the industry-standard SExtractor on flux recovery, subject to the strong caveats mentioned in Section \ref{sec:comp-with-sextr}. However, we found that the RDN-recovered shapes are either very good or catastrophically wrong. The catastrophic failures result in the cross-effect discussed in Section \ref{sec:flux-image-moment}. One possible explanation for this involves bimodality in the shapes of the truth objects, with the RDN internally assigning the image to the wrong class. It also serves as a warning about using a deeply non-linear NN-based deblender for work requiring precision shapes, such as weak lensing. At the same time, such behavior could lead to potentially interesting new work on galaxy-star separation and morphological classification of galaxies.
Similarly to the majority of other neural network approaches, our deblender cannot deal optimally with a PSF that is variable and differs between the input channels (e.g. between bands), nor can it deal with a pixel mask. This is a significant deficiency that is usually swept under the rug under the assumption that the network can be retrained for a set of different PSFs. However, in practice we have a different PSF in each band, and with typical astronomical observations in five or six bands, the number of possible PSF combinations becomes unmanageable. The PSF shape in each band thus needs to be part of the input and network training. We leave these important aspects for future work.
One of the main advantages of most ML approaches to deblending is that they also automatically denoise and deconvolve the image and can, in principle, fill in the missing information in the case of pixel masks. For a sufficiently sophisticated neural network, this would trivialize many typical operations, such as flux and shear estimation, on the resulting truth images. However, it would also require a more advanced way of propagating both statistical and systematic uncertainties.
This work has demonstrated the feasibility of using RDN in astronomical image analysis and has highlighted some of the unresolved issues with the current machine learning approaches to the deblending problem in the case of realistic astronomical images.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research through the SciDAC grant ``Accelerating HEP Science: Inference and Machine Learning at Extreme Scales''. We thank our collaborators for useful input.
|
1,108,101,562,926 | arxiv | \section{\label{sec:Intro}Introduction}
Low-dimensional quantum spin systems are excellent examples to explore the physics of strongly interacting
quantum many-body systems.~\cite{diep} In addition to the inherent quantum nature of the interacting elements (for example
localized spins with S=1/2), these systems provide an array of choices where the effects of competing interactions,
non-equivalent nearest neighbor bonds, and frustration on quantum fluctuations of the long range order parameter and on
quantum phase transitions at $T=0$K (no thermal fluctuations) can be explored. Although extensive studies using
different theoretical approaches and different spin models have been done over the last several
decades, we will first discuss two simple models relevant to our present work.~\cite{mila,subir1}
They are (i) two-dimensional (2D) antiferromagnetic S=1/2 Heisenberg
model on a square lattice with nearest (NN) and next nearest neighbor (NNN)
antiferromagnetic interactions ($J_1,J_2$) [Model I] and (ii) one-dimensional (1D) spin chain consisting of alternating
S$_1$=1 and S$_2$=1/2 spins interacting
antiferromagnetically [Model II]. The classical ground state (GS) of model I in a certain ($J_1, J_2$) domain is a long-range ordered (LRO) antiferromagnet,
whereas that of model II is a long-range ordered ferrimagnet. Quantum spin fluctuations (QSF) dramatically affect the physical properties of
these systems, which we review briefly after first introducing a new model [Model III] below.
We propose a novel 2D Heisenberg model at $T=0$K consisting of only S=1/2 spins, which combines the essential features of the two models above: extreme anisotropy of the NN bonds (some ferro and some antiferro) and frustration. The classical ground state (discussed in detail later in the paper) is a four-sublattice ferrimagnet in a certain region of parameter space. Our focus in this paper is on the stability of this ferrimagnetic ground state and the effect of QSFs at $T=0$K on the long-range ordered sublattice magnetizations.
\section{\label{sec:Models}Review of Models I and II}
\subsection{Model I} The classical ground state of the 2D S=1/2 ($J_1, J_2$) Heisenberg model on a square lattice depends on the frustration parameter $\eta=J_2/J_1$.\cite{diep,mila} For $\eta <0.5$, the GS is a N\'{e}el state with ordering wave vector ($\pi,\pi$), similar to the unfrustrated case, whereas for $\eta>0.5$ the GS is the degenerate columnar antiferromagnetic (CAF) state with ordering wave vectors ($\pi,0$) and ($0,\pi$). There is a first-order phase transition from the N\'{e}el state to the CAF state at $\eta=0.5$. The effects of QSF on this phase transition and on other properties of this model have been investigated using a large number of methods.~\cite{anderson,harris71,igar93,majumdar10,richter,bishop,sandvik01,isaev,syro,kivelson} Here we review the main results obtained within linear spin wave theory (LSWT). The sublattice magnetization $m$ is reduced by QSF from its classical value of 0.5 to 0.303 at $\eta=0$ and then decreases monotonically with increasing $\eta$, approaching zero at the first critical point $\eta_{c1}=0.38$. Similarly, $m=0.37$ at $\eta=1$ and steadily decreases to zero at the second critical point $\eta_{c2}=0.49$ as $\eta$ is lowered. LSWT clearly indicates that QSF effects are enhanced in the presence of frustration. It also suggests that in the region $\eta_{c1} <\eta<\eta_{c2}$ the classical GSs are not stable. The nature of the GS (e.g. spin-liquid, valence bond) and the low-energy excitations in this region have been extensively studied during the past several years and continue to be of great current interest.
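The quoted LSWT numbers are straightforward to reproduce numerically. The following minimal sketch evaluates the standard N\'{e}el-phase LSWT expression $m=S+1/2-\frac{1}{N}\sum_{\bf k} A_{\bf k}/\big(2\sqrt{A_{\bf k}^2-B_{\bf k}^2}\big)$, with $A_{\bf k}=1-\eta(1-\cos k_x \cos k_y)$ and $B_{\bf k}=(\cos k_x+\cos k_y)/2$, on a finite momentum grid; the specific conventions are assumptions of this illustration.
\begin{verbatim}
import numpy as np

def neel_magnetization(eta, n=400):
    # quarter-BZ grid, offset to avoid the k=0 point
    k = (np.arange(n) + 0.5) * np.pi / n
    kx, ky = np.meshgrid(k, k)
    gam = 0.5 * (np.cos(kx) + np.cos(ky))
    A = 1.0 - eta * (1.0 - np.cos(kx) * np.cos(ky))
    w = np.sqrt(A**2 - gam**2)
    S = 0.5
    return S + 0.5 - np.mean(A / (2.0 * w))

print(neel_magnetization(0.0))    # ~0.303
print(neel_magnetization(0.35))   # small; m -> 0 near eta_c1
\end{verbatim}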
\subsection{Model II} The second model deals with ferrimagnets. Ferrimagnets are somewhere between ferromagnets and
antiferromagnets.~\cite{mikeska,ivanova,Kolezhuk1,brehmer,pati1,pati2,NBIvanov0,Kolezhuk2,shoji1,shoji2,NBIvanov1,NBIvanov3}
It is well known that for a 1D quantum S=1/2 ferromagnet, the ground state is long-range ordered and QSFs do not reduce the classical value of $m$. In contrast, in a 1D quantum S=1/2 antiferromagnet (AF), QSFs completely destroy the classical LRO. The question of what happens for ferrimagnets drew considerable interest in the late 90's, and several interesting works were done using a simple isotropic NN antiferromagnetic Heisenberg model with two types of spins, ${\rm S}_1=1$ and ${\rm S}_2 = 1/2$, in a magnetic unit cell (MUC).~\cite{Kolezhuk2, brehmer, pati1, NBIvanov0, NBIvanov1, NBIvanov2, NBIvanov3,shoji1,shoji2} Following Refs.~[\onlinecite{brehmer}] and [\onlinecite{Kolezhuk2}], we discuss some of the interesting physical properties of this 1D system and point out how our proposed 2D model differs from it.
The Hamiltonian of the 1D system is given by
\be
{\cal H}=\sum_n\left[ J\big({\bf S}_{1n}\cdot {\bf S}_{2n}+{\bf S}_{2n}\cdot {\bf S}_{1n+1}\big)
-hS_{Tn}^z\right],
\ee
where ${\bf S}_{1n}$ and ${\bf S}_{2n}$ are spin-1 and spin-1/2 operators, respectively, in the $n^{\rm th}$ unit cell (UC), $h=g\mu_B H$ is the effective field (with $g$ the gyromagnetic ratio, $\mu_B$ the Bohr magneton, and $H$ the external magnetic field), and $S_{Tn}^z=S_{1n}^z+S_{2n}^z$. According to the Lieb-Mattis theorem~\cite{Lieb}, for $H = 0$ the GS is long-range ordered, as the system has total spin $S_T=N/2$, where $N$ is the number of UCs; classically, $\langle S_{1n}^z \rangle=1$ and $\langle S_{2n}^z \rangle=0.5$ in the GS. The problem of computing the excitations is well suited to the LSWT approach. Since the elementary magnetic cell consists of two spins, LSWT predicts
two types of magnons: a gapless ``acoustic'' or ``ferromagnetic'' branch with $S_{T}^z=N/2-1$, and a
gapped ``optical'' or ``antiferromagnetic'' branch with $S_{T}^z=N/2+1$. The optical magnon gap for this model
has been numerically found to be $\Delta_{\rm opt}=1.759J$.~\cite{shoji2}
An intriguing property of this 1D quantum ferrimagnet is that when one turns on the magnetic field $H$, the acoustic branch opens up a gap while the optical gap decreases; at a critical value of the field, $H_{c1}$, this gap vanishes and the system enters a quantum spin liquid (QSL) phase, where the GS is dominated by QSFs with spinon-like excitations.~\cite{Kolezhuk2,brehmer,pati1} With a further increase in the field strength, this QSL phase goes into a saturated ferromagnetic phase.
Brehmer et al.~[\onlinecite{brehmer}] calculated the sublattice magnetization of the S=1 sublattice ($m_A$) and found it to be $(1-\tau)$ with $\tau \approx 0.305$. The sublattice magnetization of the S=1/2 sublattice can be calculated using their method and is found to be $m_B=-0.5+\tau$. The ordered moment of the S=1/2 sublattice is thus reduced by a factor of $\sim 2.5$ by QSF. There are two important points worth noting here: (1) the total (ferromagnetic) magnetization per magnetic unit cell is $m_A+m_B=0.5$, the classical value, and (2) the QSF reduction of the S=1/2 sublattice is larger than in the 2D S=1/2 Heisenberg model on a square lattice, where the corresponding reduction is $\sim 0.2$. Point (1) is consistent with the fact that the ferromagnetic long-range order is not affected by QSF. Also, $m_A$ and $m_B$ are independent of the magnetic field.
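The value of $\tau$ is easy to reproduce numerically from the LSWT treatment of the mixed-spin chain. The following minimal sketch assumes the standard two-boson result $\tau=\int\frac{dk}{2\pi}\big[(S_1+S_2)/2\sqrt{(S_1+S_2)^2-4S_1S_2\cos^2(k/2)}-1/2\big]$; the dispersion convention is an assumption of this illustration.
\begin{verbatim}
import numpy as np

S1, S2 = 1.0, 0.5
k = np.linspace(-np.pi, np.pi, 20001)
gam = np.cos(k / 2.0)
integrand = ((S1 + S2)
    / (2.0 * np.sqrt((S1 + S2)**2
                     - 4.0 * S1 * S2 * gam**2))
    - 0.5)
tau = np.trapz(integrand, k) / (2.0 * np.pi)
print(tau)         # ~0.305
print(1.0 - tau)   # m_A; and m_B = -0.5 + tau
\end{verbatim}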
\section{Model III} As mentioned at the beginning, we introduce a new 2D Heisenberg model which incorporates different aspects of the two models discussed above: anisotropic bonds and frustration. Also, instead of two types of spins and a single exchange parameter, our model consists of only S=1/2 spins interacting with Heisenberg exchange couplings of different signs (both ferro and antiferro). The unit cell consists of four types of spins, which we denote as ${\bf S}^{(\mu)} \;(\mu=1..4)$; it is a Bravais lattice. The lattice vectors for the four spins in a rectangular lattice with parameters ($a,b$) along the $x$ and $y$ directions are given by ${\bf R}_{i\mu}=i_x a{\bf \hat x}+i_y b{\bf \hat y}+{\bm \tau}_{\mu}$, where ${\bm \tau}_1=(0,0), {\bm \tau}_2=(0,b/2), {\bm \tau}_3=(a/2,b/4)$ and ${\bm \tau}_4=(a/2,3b/4)$ (see Fig.~\ref{fig:CrMWstruc1}). As we will show, the ground state is ferrimagnetic in a certain range of exchange-parameter space. Three spins combine to form the S=3/2 sublattice. In contrast to the 1D S=(3/2,1/2) model, where the magnitudes of the spins in each sublattice are fixed, in our model the S=3/2 sublattice can undergo amplitude fluctuations. In fact, the present model was inspired by recent inelastic neutron scattering experiments on a quasi-2D spin system containing Cu$^{2+}$ ions, Cu$_2$(OH)$_3$Br.~\cite{Ke} However, in this system the effect of the orbital ordering of the active magnetic orbitals, driven by the ordering of the Br$^-$ ions, on the exchange parameters is such that the ground state is an antiferromagnet with eight spins per unit cell.
The Heisenberg spin Hamiltonian (${\cal H}$) for model III is divided into two parts, intra-chain (${\cal H}_1$) and
inter-chain (${\cal H}_2$):
\be
{\cal H}={\cal H}_1 + {\cal H}_2,
\label{fullH}
\ee
where
\begin{subequations}
\label{ham}
\bea
{\cal H}_1 &=& -J_1\sum_i \Big[{\bf S}_i^{(1)}\cdot {\bf S}_i^{(2)}+
\frac 1{2}\Big({\bf S}_i^{(1)}\cdot {\bf S}_{i-b\hat y}^{(2)}+ {\bf S}_i^{(2)}\cdot {\bf S}_{i+b\hat y}^{(1)}\Big) \Big] \non \\
&+& J_2\sum_i \Big[{\bf S}_i^{(3)}\cdot {\bf S}_i^{(4)}+
\frac 1{2}\Big({\bf S}_i^{(3)}\cdot {\bf S}_{i-b\hat y}^{(4)}+ {\bf S}_i^{(4)}\cdot {\bf S}_{i+b\hat y}^{(3)}\Big) \Big],\non \\
\\
{\cal H}_2 &=& \half J_3\sum_i \Big({\bf S}_i^{(1)}+{\bf S}_i^{(2)}\Big)
\cdot \Big({\bf S}_i^{(3)}+{\bf S}_{i-a\hat x}^{(3)}\Big)\non \\
&+& \half J_4 \sum_i \Big[{\bf S}_i^{(1)}\cdot \Big({\bf S}_{i-b\hat y}^{(4)}+{\bf S}_{i-a\hat x-b\hat y}^{(4)}\Big)\non \\
&+&{\bf S}_i^{(2)}\cdot \Big({\bf S}_i^{(4)}+{\bf S}_{i-a\hat x}^{(4)}\Big)\Big]\non \\
&+& \half J_3 \sum_i {\bf S}_i^{(3)}\cdot \Big({\bf S}_i^{(1)}+{\bf S}_{i+a\hat x}^{(1)}+{\bf S}_i^{(2)}
+{\bf S}_{i+a\hat x}^{(2)}\Big) \non \\
&+& \half J_4 \sum_i{\bf S}_i^{(4)}\cdot \Big({\bf S}_i^{(2)}+{\bf S}_{i+a\hat x}^{(2)}+{\bf S}_{i+b\hat y}^{(1)}
+{\bf S}_{i+a\hat x+b\hat y}^{(1)}\Big).\non \\
\eea
\end{subequations}
All exchange parameters $J_\mu$ are positive (see Fig.~\ref{fig:CrMWstruc1} for an illustration of the
long range ordered ferrimagnetic state). We refer to this model as the ($J_1, J_2, J_3, J_4$) model.
\begin{figure}[httb]
\centering
\includegraphics[width=2.0in,clip]{spinstruct.eps}
\caption{\label{fig:CrMWstruc1}(Color online)
Classical ferrimagnetic ground state
of the 2D F-AF chain model. The basic magnetic unit cell comprises three up-spins
${\bf S}_1, {\bf S}_2, {\bf S}_4$ and one down-spin ${\bf S}_3$. The interaction strengths
$J_1, J_2, J_3$ are all positive and $J_4$ is the frustrated bond.}
\end{figure}
{\em Classical Ground State:} The basic model consists of alternating 1D ferro (strength $J_1=1$) and
antiferromagnetic (strength $J_2=\eta_2 J_1$) S=1/2 chains (along the $y$-axis). The nearest chains interact with
interaction strengths $J_3$ ($= \eta_3 J_1$) and $J_4$ ($= \eta_4 J_1$) which are
antiferromagnetic. Before discussing the excitations and quantum spin fluctuations, we first consider the
ground state
of our model when the spins are treated classically (mean field state). With $J_3=J_4=0$, the ground state ($G_0$)
with broken global symmetry consists of decoupled alternating ordered F chains ({\bf S}$_1$ and {\bf S}$_2$ spins) and
AF chains ({\bf S}$_3$ and {\bf S}$_4$ spins). Due to the time reversal symmetry, the F chains can be either up or
down (chosen arbitrarily) and the AF chains can be in one of the two N\'{e}el states. The degeneracy of $G_0$ is $2^{2M}$,
where $M$ is the number of F (or AF) chains. For $J_3>0$ and $J_4=0$, if we fix the orientation of one F chain, the nearest
two AF chain orientations are fixed by the $J_3$ bond. The neighboring F chain orientations are then fixed. In this way, we
have the exact ground state $G$ as each bond takes its minimum energy value. When $\eta_3>0$ and $\eta_4<0$ (ferromagnetic),
the system is not frustrated and the classical GS is
a collinear ferrimagnetic state as shown in Fig.~\ref{fig:CrMWstruc1}.
However, for $\eta_3>0$ and $\eta_4>0$, spin ${\bf S}_4$ is frustrated. For
weak frustration, i.e. $\eta_4\ll\eta_3$, $G$ is most likely the exact ground state, and with increasing frustration ($J_4$)
the system will undergo a phase transition to a new state which may or may not be long range ordered. One approach to attack the problem
is to use the generalized Luttinger-Tisza method [\onlinecite{LT}] first proposed by Lyons and Kaplan.\cite{LK} It turns out that
for our Bravais lattice with four-spin/unit cell system the calculations are quite difficult. So in the absence of the
knowledge of the exact ground state for large $J_4$, we have used a different approach. We study the local stability of $G$
with increasing strength of the frustrating bond ($J_4$). As we will show later, depending on the strength of $J_2/J_1$, there
is a critical value of $J_4/J_3$ where the ground state $G$ is no longer locally stable. Thus in our current analysis of the
phase diagram and excitations of the model using spin-wave theory we use $G$ as the ground state.
\section{Spin-wave Theory}
It is well-known that spin-wave theory is best suited to treat the dynamics of long range-ordered states in quantum
spin systems with large spin $S$. In the leading order (linear spin wave theory - LSWT), the excitations are magnons.
When magnon-magnon interaction effects are negligible (for example for $S\gg 1/2$ and in three dimensions), LSWT provides a
very good description of the quantum spin fluctuation effects, one example being the reduction of the ordered moment in
Heisenberg quantum antiferromagnets. However, for S=1/2 systems in 2D, magnon-magnon interactions are not negligible
and one must incorporate higher order spin (1/S) corrections to treat the system.\cite{igar92,igar93,igar05,majumdar10} Even for these systems,
LSWT provides
qualitatively correct physics. For example, for 2D Heisenberg spin systems with nearest neighbor (NN) antiferromagnetic (AF)
coupling [($J_1,J_2$) model with no frustration i.e. $J_2=0$] on a square lattice, the ordered moment (average sublattice spin $\langle S_z \rangle$) reduces due to
QSF from 0.5 to 0.303 as given by LSWT.\cite{anderson,harris71} When one includes the higher order magnon-magnon interaction
effects using (1/S) expansion theory $\langle S_z\rangle = 0.307$,\cite{igar05,majumdar10} indicating that LSWT is very reasonable.
For the general ($J_1,J_2$) model, the effect of frustration is much more subtle. Frustration tends to destabilize long range order.
With
increase in the strength of frustration, $\langle S_z \rangle = 0$ at a critical value of $J_2= J_{2c}$. LSWT
gives $J_{2c}=0.38$ whereas including the magnon-magnon interaction one finds $J_{2c}=0.41$,\cite{igar93,majumdar10} again indicating the
reasonableness of LSWT in providing a measure of the QSF induced reduction of the magnetization $M$.
In a recent work (Ref.~[\onlinecite{syro}]) results for this model were obtained using a four-spin
bond operator technique, where it is found that $\langle S_z \rangle = 0.301$ for $J_2=0$ and $J_{2c} = 0.36$, which are close
to the LSWT results. We should mention here that all these methods fail in the spin disordered state, i.e. when
$J_2 > J_{2c}$.
In view of the above discussion, we opted to use LSWT to analyze the effect of QSF on the average magnetic moment and the
critical strength of the frustration where the ordered moments vanish. Unlike the ($J_1,J_2$) model (two sublattices with the same
value of the ordered moment), our 2D frustrated ($J_1, J_2, J_3, J_4$) model has a 4-sublattice structure as shown below,
and different sublattice moments are affected differently by QSF.
For our analysis we only
consider the parameter space $(\eta_2,\eta_3,\eta_4)$ of the Hamiltonian ${\cal H}$ [Eq.~\eqref{fullH}] where the
GS is stable and is the long range ordered collinear ferrimagnetic state.
The spin Hamiltonian in Eq.~\eqref{ham} is mapped onto a Hamiltonian of interacting bosons by expressing the spin operators in
terms of bosonic creation
and annihilation operators $a^\dag, a$ for the three ``up'' spins (spins 1, 2, and 4) and $b^\dag, b$ for the one ``down'' spin (spin 3)
using the standard Holstein-Primakoff representation~\cite{HP}
\begin{eqnarray}
S_{in}^{+} &\approx& \sqrt{2S}a_{in},\;
S_{in}^{-} \approx \sqrt{2S}a_{in}^{\dag},\;
S_{in}^{z} = S-a^{\dag}_{in} a_{in} \quad (i=1,2,4), \label{hol1} \non \\
S_{jn}^{+} &\approx& \sqrt{2S}b_{jn}^\dag,\;
S_{jn}^{-} \approx \sqrt{2S}b_{jn}, \;
S_{jn}^{z} = -S+b^\dag_{jn}b_{jn} \quad (j=3), \label{hol2}\non
\label{holstein}
\end{eqnarray}
and expand the Hamiltonian [Eq.~\eqref{ham}] perturbatively in powers of $1/S$, keeping terms only up to
quadratic order. The resulting quadratic Hamiltonian is given as:
\be
{\cal H}=E_{\rm cl}+{\cal H}_0 + \cdots,
\ee
where
\be
E_{\rm cl} = -2J_1NS^2\big[1+\eta_2+2(\eta_3-\eta_4)\big]
\label{classH}
\ee
is the classical GS energy and
\bea
&&{\cal H}_{0}= 2SJ_1\sum_{{\bf k}\in {\rm BZ}}\Big[(1+\eta_3-\eta_4)\Big(a^{(1)\dag}\k a\k^{(1)}+
a^{(2)\dag}\k a\k^{(2)}\Big)\non \\
&&-\gamma_y\Big(a^{(1)}\k a\k^{(2)\dag}+ a^{(1)\dag}\k a\k^{(2)} \Big)
+ (\eta_2-2\eta_4)a^{(4)\dag}\k a\k^{(4)} \non \\
&&+ (\eta_2+2\eta_3)b^{(3)\dag}\kk b\kk^{(3)}
+\eta_2\gamma_y\Big(b^{(3)\dag}\kk a\k^{(4)\dag}+ b^{(3)}\kk a\k^{(4)} \Big) \non \\
&&+\eta_3\gamma_x\Big(e^{ik_yb/4}a^{(1)\dag}\k b\kk^{(3)\dag}+ e^{-ik_yb/4}a^{(2)\dag}\k b\kk^{(3)\dag} +h.c.\Big)\non \\
&&+\eta_4 \gamma_x\Big(e^{-ik_yb/4}a^{(1)\dag}\k a\k^{(4)}+ e^{ik_yb/4}a^{(2)\dag}\k a\k^{(4)} +h.c.\Big)
\Big]\label{H0}
\eea
with $\gamma_x=\cos (k_xa/2)$ and $\gamma_y=\cos (k_y b/2)$.
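The classical energy Eq.~\eqref{classH} can be checked by counting bonds per unit cell in the collinear state of
Fig.~\ref{fig:CrMWstruc1}: the F chain contributes $-2J_1S^2$, the AF chain $-2J_2S^2$, the four unfrustrated
$J_3$ bonds $-4J_3S^2$, and the four frustrated $J_4$ bonds $+4J_4S^2$, which indeed sum to
$-2J_1S^2[1+\eta_2+2(\eta_3-\eta_4)]$ per unit cell.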
In the absence of inter-chain coupling ($\eta_3=\eta_4=0$) the magnon spectrum can be obtained using the standard
Bogoliubov transformations.~\cite{Bogo} We find
four modes for each $k_y\;(-\pi/b <k_y<\pi/b)$ independent of $k_x\;(-\pi/a<k_x<\pi/a)$: two from the F-chains ($\alpha$-branches)
and two from the AF-chains (one $\alpha$ and one $\beta$). The quadratic Hamiltonian takes the following form:
\begin{eqnarray}
{\cal H}_0 &=& \sum_{{\bf k}\in {\rm BZ}} \Big[\epsilon\k^{(1)} \alpha^{(1)\dag}\k\alpha\k^{(1)}+
\epsilon\k^{(2)} \alpha^{(2)\dag}\k\alpha\k^{(2)} \non \\
&+& \epsilon\k^{(3)} \Big(\alpha^{(4)\dag}\k\alpha\k^{(4)}+
\beta^{(3)\dag}\kk \beta\kk^{(3)}\Big) \Big]
+ \sum_{{\bf k}\in {\rm BZ}} \Big(\epsilon\k^{(3)}-2J_1S \Big),\non \\
\label{H0special}
\end{eqnarray}
where
\begin{subequations}
\label{spcase}
\begin{eqnarray}
&&\epsilon\k^{(1,2)}= 2J_1S[1 \mp \gamma_y], \\
&&\epsilon\k^{(3)}= 2J_2S\sqrt{1 - \gamma_y^2}=2J_2S|\sin (k_y b/2)|.
\end{eqnarray}
\end{subequations}
The last term in Eq.~\eqref{H0special} gives the LSWT correction to the classical ground state energy $E_{\rm cl}$
in Eq.~\eqref{classH} for the special case $\eta_3=\eta_4=0$.
With inter-chain coupling (i.e. $\eta_3, \eta_4>0$), we have not been able to find an analytical Bogoliubov transformation that
maps the bosonic spin operators to Bogoliubov quasiparticle operators and
diagonalizes the Hamiltonian ${\cal H}_0$ [Eq.~\eqref{H0}]. For the special case $k_x=\pi/a$, i.e. $\gamma_x=0$,
we use the equation of motion method (see Appendix \ref{EMM}) and obtain analytical solutions for the
magnon dispersion which are:
\begin{subequations}
\bea
&&\epsilon\k^{(1,2)}=2J_1S\big[(1+\eta_3-\eta_4) \pm \gamma_y\big], \label{dispgx0F} \\
&&\epsilon\k^{(3,4)}=2J_1 S\big\vert(\eta_3+\eta_4) \pm \sqrt{(\eta_3-\eta_4+\eta_2)^2-\eta_2^2\gamma_y^2}\big \vert.\non \\
\label{dispgx0AF}
\eea
\label{anaeqs}
\end{subequations}
When $\eta_3=\eta_4=0$ the above dispersions reduce to Eq.~\eqref{spcase} as expected.
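Indeed, setting $\eta_3=\eta_4=0$ in Eq.~\eqref{dispgx0AF} gives
$\epsilon\k^{(3,4)}=2J_1S\,\eta_2\sqrt{1-\gamma_y^2}=2J_2S\,\vert\sin(k_yb/2)\vert$, so the two AF branches become
degenerate, while Eq.~\eqref{dispgx0F} reproduces the two F branches.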
For the general case we use an elegant method developed by Colpa to obtain both the eigenenergies
(magnon dispersions) and eigenvectors (required for the calculation of the magnetization).\cite{Colpa,TL} First we write the Hamiltonian [Eq.~\eqref{H0}] in a symmetrized form involving an $8 \times 8$ matrix:
\begin{eqnarray}
{\cal H}_{0} &=& J_1S\sum_{{\bf k}\in {\rm BZ}} \sum_{i=1}^8 X\k^{(i)\dag}h\k X\k^{(i)} \non \\
&-& 2J_1SN\left[1+\eta_2+2(\eta_3-\eta_4)\right],
\end{eqnarray}
with the vector of boson operators \\
$X\k=[a\k^{(1)}, a\k^{(2)}, a\k^{(4)}, b\k^{(3)}, a\k^{(1)\dag}, a\k^{(2)\dag}, a\k^{(4)\dag}, b\k^{(3)\dag}]$.
The hermitian matrix $h\k$ is:
\be
h\k=
\begin{bmatrix} A_1 & -B_1 & C_2^* & 0 & 0 & 0 &0 &C_1 \\
-B_1 & A_1 & C_2 & 0 & 0 & 0 &0 & C_1^* \\
C_2 & C_2^* & A_2 & 0 & 0 & 0 &0 & B_2 \\
0 & 0 & 0 &A_3 & C_1 & C_1^* & B_2 & 0\\
0 & 0 & 0 &C_1^* & A_1 & -B_1 & C_2 & 0 \\
0 & 0 & 0 &C_1 & -B_1 & A_1 & C_2^* & 0 \\
0 & 0 & 0 &B_2 & C_2^* & C_2 & A_2 & 0 \\
C_1^* & C_1 & B_2 & 0 & 0 & 0 & 0 & A_3
\end{bmatrix},
\label{hkmatrix}
\ee
where the constants are given in Eqs.~\eqref{coeffs}.
The Cholesky decomposition is applied to $h\k$ to find the complex matrix $K$ that fulfills the condition $h\k=K^{\dag}K$.
However, the Cholesky decomposition only works if the matrix $h\k$ is positive definite (i.e. the eigenvalues are all
positive).~\cite{Colpa} In case the spectrum of the Hamiltonian ${\cal H}_0$ contains zero modes, one can add a small positive
value to the diagonal of $h\k$ to make the matrix positive ``definite''.
We find that the criterion for the Cholesky decomposition to work for all ${\bf k}$ is
$\eta_4 \le \eta_2\eta_3/(\eta_2+2\eta_3)$. As an example, with $\eta_2=3.0, \eta_3=0.4$,
$\eta_4 \le \eta_{4c}$, where $\eta_{4c}=0.316$. If $\eta_4>\eta_{4c}$ the matrix $h\k$ is not positive
definite and the procedure fails. As we discuss later, this is precisely the same condition for the stability of
the ferrimagnetic state.
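Explicitly, for these parameter values $\eta_{4c}=\eta_2\eta_3/(\eta_2+2\eta_3)=(3.0\times 0.4)/(3.0+0.8)=1.2/3.8\simeq 0.316$.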
After obtaining the matrix $K$, we solve the eigenvalue problem of the
hermitian matrix $KgK^{\dag}$, where $g$ is a diagonal paraunitary
matrix with elements $g_{ii}={\rm diag}(1, 1, 1, 1, -1, -1, -1, -1)$. The resulting eigenvectors are then
arranged in such a way that the first four diagonal elements of the diagonalized $L=U^\dag KgK^\dag U$ matrix are positive and
the last four elements are negative. The first four positive diagonal elements correspond to the magnon dispersions.
To calculate the sublattice magnetization $m_i$ we first construct the diagonal matrix, $E=gL$ and then find
the transformation matrix $T$, which relates the boson modes $X\k$ with the Bogoliubov modes $\alpha\k$ via $X\k=T\alpha\k$.
The matrix $T$ is calculated using~\cite{TL}:
$
T=K^{-1}UE^{1/2}.
$
$m_{i=1,2,4}$ of spins ${\bf S}_1, {\bf S}_2, {\bf S}_4$ are positive but $m_3$ for spin ${\bf S}_3$ is negative.
So we calculate the magnitude of $m_i$ ($i=1,\dots,4$) for each of the four sublattices using
\be
\vert m_i \vert =0.5-\vert \tau_i \vert,
\ee
where $\tau_i$ is the reduction caused by QSFs:
\be
\vert \tau_i \vert=\frac 1{N} \sum_{{\bf k}\in {\rm BZ}}\Big\{T\k {\cal D} T\k^\dag\Big\}_{i+4,i+4}.
\ee
${\cal D}$ is a diagonal matrix with $[0, 0, 0, 0, 1, 1, 1, 1]$ as the diagonal elements. We reiterate that the parameters
$\eta_2, \eta_3, \eta_4$ are chosen in such a way that the condition for the Cholesky decomposition is
satisfied, i.e. $\eta_4 \le \eta_{4c}$.
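As an illustration of the above procedure, the following minimal sketch (our own, written in Python
with NumPy; the function names, array layout, and use of the {\tt numpy.linalg} routines are our
choices and are not taken from Refs.~[\onlinecite{Colpa,TL}]) carries out the steps for a generic
positive-definite $2n\times 2n$ bosonic Hamiltonian matrix:
\begin{verbatim}
import numpy as np

def colpa(hk):
    # Diagonalize a positive-definite 2n x 2n bosonic Hamiltonian
    # matrix hk by Colpa's method; return the n magnon energies and
    # the transformation matrix T with X_k = T alpha_k.
    m = hk.shape[0]; n = m // 2
    # numpy returns lower-triangular L with hk = L L^dag,
    # so K = L^dag satisfies hk = K^dag K.
    K = np.linalg.cholesky(hk).conj().T
    g = np.diag(np.r_[np.ones(n), -np.ones(n)])
    w, U = np.linalg.eigh(K @ g @ K.conj().T)
    # Reorder so the first n diagonal entries of U^dag K g K^dag U
    # are positive (descending) and the last n are their partners.
    desc = np.argsort(w)[::-1]
    order = np.r_[desc[:n], desc[m-1:n-1:-1]]
    U = U[:, order]
    E = g @ np.diag(w[order])             # all-positive diagonal matrix
    T = np.linalg.inv(K) @ U @ np.sqrt(E)
    return np.diag(E)[:n], T

def qsf_reduction(T):
    # Contribution of one k point to tau_i: the last n diagonal
    # elements of T D T^dag with D = diag(0,...,0,1,...,1).
    n = T.shape[0] // 2
    D = np.diag(np.r_[np.zeros(n), np.ones(n)])
    return np.real(np.diag(T @ D @ T.conj().T))[n:]
\end{verbatim}
The reductions $\tau_i$ are then obtained by averaging the output of {\tt qsf\_reduction} over a
mesh of ${\bf k}$ points in the Brillouin zone.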
\section{\label{sec:results}Magnon Dispersion and Sublattice Magnetization}
\subsection{Magnon Dispersion}
Effects of the inter-chain interaction on the magnon dispersion are displayed in Fig.~\ref{fig:energydispcomp}(a-e),
where for illustration we have chosen $\eta_2=3, \eta_3=0.4$
and the frustration parameter $\eta_4$ is increased from 0.05 to 0.315. The dispersion along $k_y$ (along the chains)
is given for two values of $k_x$:
$k_x= 0$ (top two panels) and $k_x=\pi/a$ (bottom two panels). Also for comparison we give the dispersions for the
non-interacting chains ($\eta_3=\eta_4=0$). Later we will discuss the $k_x$ dependence
for some special modes. As expected,
there are four magnon modes for each ${\bf k}$. For the non-interacting chains, there are two F-magnon modes which
are split (the lower mode $\sim k_y^2$ for small $k_y$) and two AF-magnons which are degenerate ($\sim k_y$ for small $k_y$).
In the presence of couplings (discussed below) we will (loosely) refer to these four modes as two F and two AF modes.
\begin{figure}[httb]
\centering
\includegraphics[width=2.8in,clip]{Dispersionkx0J23J304eta4.eps}
\qquad
\includegraphics[width=2.8in,clip]{DispersionkxPiJ23J304eta4.eps}
\caption{Magnon dispersion of the ferrimagnetic state for A. $k_x=0$ and B. $k_x=\pi/a$ [Figs. (a-e)]
with $\eta_2=3.0, \eta_3=0.4$. The frustration
parameter $\eta_4$ is varied from 0.05 (small frustration) to $\eta_{4}=0.315$.
(f) Limiting case with no inter-chain coupling: the two AF-branches are degenerate, the F-branches
are gapped, and the lower F-branch vanishes at $k_y=2\pi/b\,(=0)$. Note that due to the $2\pi/b$ periodicity
the intervals $k_y\in[-\pi/b, \pi/b]$ and $k_y\in[\pi/b, 3\pi/b]$ are equivalent.}
\label{fig:energydispcomp}
\end{figure}
First we consider the case $k_x=\pi/a$ (bottom two panels) where the hybridization between the F and AF modes is absent (as $\gamma_x=0$) - so
the F and AF chains interact only through effective fields. In this limit, we find from Eq.~\eqref{dispgx0F} and
Eq.~\eqref{dispgx0AF} that the F-modes get rigidly shifted upwards by
$2J_1S(\eta_3-\eta_4)$, the two degenerate AF-modes
are split by $4J_1S(\eta_3+\eta_4)$, and both the modes $\sim k_y^2$. At $k_y=0$ the lower F-mode and the lower AF-mode
are gapped, $\Delta_{\rm F}(\pi/a,0)=2J_1S(\eta_3-\eta_4)$ and
$\Delta_{\rm AF}(\pi/a,0)=2J_1S[\sqrt{(\eta_2+\eta_3-\eta_4)^2-\eta_2^2}-(\eta_3+\eta_4)]$. When the
frustration parameter $\eta_4$ is increased towards $\eta_3$, there is a
critical value $\eta_{4c}=\eta_2\eta_3/(\eta_2+2\eta_3)<\eta_3$, where $\Delta_{\rm AF}(\pi/a,0)=0$
but $\Delta_{\rm F}(\pi/a,0) >0$. The ferrimagnetic GS becomes locally unstable and the system
transits to a new ground state (for the parameter values we have chosen, $\eta_{4c}=0.316$; this is also the point where
the Cholesky decomposition fails because the matrix $h_{\bf k}$ is not positive definite). This is similar to the field
induced quantum phase transition as a function of the external magnetic field for the 1D quantum
${\rm S}_1=1, {\rm S}_2=1/2$ model discussed in the introduction.\cite{brehmer,Kolezhuk2} Here the optic mode gap goes to zero at a critical
field and the system undergoes a quantum phase transition from a ferrimagnetic state to some other state. This
phase transition occurs in the range $\eta_3>\eta_4>\eta_{4c}$. Fig.~\ref{fig:phaseD} shows a schematic phase diagram in the
$(\eta_4/\eta_2,\eta_3/\eta_2)$ space.
We also note that for given $\eta_3$ and $\eta_4 \le \eta_3$, the strength of the exchange in the AF chains $\eta_2$
should be greater than a
critical value $\eta_{2c}=2\eta_3\eta_4/(\eta_3-\eta_4)$ for the ferrimagnetic state to be stable.
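This follows from inverting the stability criterion $\eta_4\le \eta_2\eta_3/(\eta_2+2\eta_3)$ for $\eta_2$. For example,
$\eta_3=0.4,\ \eta_4=0.1$ gives $\eta_{2c}=2(0.4)(0.1)/(0.4-0.1)\simeq 0.27$, and $\eta_4=0.2$ gives
$\eta_{2c}=2(0.4)(0.2)/(0.4-0.2)=0.80$, the values used in Sec.~\ref{sec:results} below.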
\begin{figure}[httb]
\centering
\includegraphics[width=2.0in,clip]{PhaseDia.eps}
\caption{Phase diagram of ${\cal H}$ [Eq~\eqref{fullH}]: normalized ${\tilde \eta_4}=\eta_4/\eta_2$ is plotted
against normalized ${\tilde \eta_3}=\eta_3/\eta_2$. The dashed lines are given by the equations
${\tilde \eta_4}={\tilde \eta_3}/(1+2{\tilde \eta_3})$ (lower one)
and ${\tilde \eta_3}={\tilde \eta_4}/(1+2{\tilde \eta_4})$ (upper one).
They are the boundaries of the stability of the ferrimagnetic
state. The solid thick line ${\tilde \eta_4}={\tilde \eta_3}$ is most likely a critical line.}
\label{fig:phaseD}
\end{figure}
For $k_x=0$, the picture is qualitatively similar, but with two fundamental differences resulting from hybridization between
ferro and antiferro chain excitations. First, the lower F-mode goes
to zero when $k_y \rightarrow 0$ as it should for the Goldstone mode. However the dispersion for large $k_y$ differs
qualitatively from the
non-interacting chains. Second, hybridization between the upper F-mode and the lower AF-mode opens up
a hybridization gap at a finite $k_y$, and the size of the gap increases with $\eta_4$. However, as in the $(k_x,k_y)=(\pi/a,0)$ case,
the gap $\Delta_{\rm AF}(\pi/a,0) \rightarrow 0$ as $\eta_4 \rightarrow \eta_{4c}$.
In fact $\Delta_{\rm AF}(k_x,0) \rightarrow 0$ for all values of $0\le k_x\le \pi/a$ for $\eta_4 \rightarrow \eta_{4c}$.
In Fig.~\ref{fig:gap}B we show the $k_x$ dependence of $\Delta_{\rm AF}(k_x,0)$ for three different values of the
frustration parameter $\eta_4$. Also we show in Fig.~\ref{fig:gap}A the $k_x$ dependence of $\Delta_{\rm F}(k_x,0)$.
This suggests that the chains become dynamically decoupled and since the decoupled AF chains are spin liquids without any long range
order, the system goes from an ordered state to a spin disordered state when $\eta_4>\eta_{4c}$. Exact calculations will tell
us about the precise nature of the ground state for $\eta_{4c} <\eta_4<\eta_3$.
\begin{figure}[httb]
\centering
\includegraphics[width=3.2in,clip]{Gap.eps}
\caption{Gaps for F-mode ($\Delta_{\rm F}$) and AF-mode ($\Delta_{\rm AF}$) with increase in $k_x$ for $k_y=0$ with
$\eta_2=3.0, \eta_3=0.3$ and three different values of $\eta_4=0.05, 0.2,$ and 0.315.
}
\label{fig:gap}
\end{figure}
\subsection{Sublattice Magnetization}
Following Colpa's method we have calculated the sublattice magnetizations $m_i$ for the four sites.
We have checked that the reductions in the four sublattice moments due to quantum
fluctuations sum to zero, $\sum_{i=1}^4 \tau_i = 0$, which results in a total
magnetic moment per unit cell equal to one, as expected. This is analogous to the result for the S$_1=1$, S$_2=1/2$ 1D quantum
ferrimagnetic state, for which the total magnetization per unit cell is equal to 0.5.
Next we discuss the effect of frustration on the quantum fluctuation induced reduction of the
long-range ordered moments for the four different spins of the unit cell. In
the absence of interchain coupling [Fig.~\ref{fig:CrMWstruc1}],
$m_1=\langle S_{1z} \rangle =m_2=\langle S_{2z} \rangle =0.5$ and
$m_3=\langle S_{3z} \rangle = -m_4=\langle S_{4z} \rangle =0$ (due to quantum spin fluctuation in 1D AF).
When we turn on $\eta_3$, its effect is to produce an ordering field at the ${\bf S}_3$ sites and order them in the direction
opposite to the F-chain spins. The intra AF chain interaction orders the ${\bf S}_4$ spins parallel to the F-chain spins,
resulting in a 2D ferrimagnetic ground state.
If $\eta_2 \ll \eta_3$ then the system will be more 2D, $m_1=m_2\cong 0.5$, and $m_3, m_4$ will be non-zero with the magnitude
of $m_3$ larger than $m_4$. On the other hand if $\eta_2 \gg \eta_3$, then intra-chain AF bonds will dominate,
making the AF chains nearly decoupled and the LRO in the AF chains will be small, $m_4 \approx -m_3 \ll 0.5$.
\begin{figure}[httb]
\centering
\includegraphics[width=2.0in,clip]{MagJ4.eps}
\caption{Magnitude of sublattice magnetizations, $m_i$, for $\eta_2=3.0, \eta_3=0.4$ as function of $\eta_4$.
Magnetizations for the two degenerate ($m_1=m_2$) ferro-modes (solid) corresponding to spins 1 and 2
slowly increase as $\eta_4$ is increased. Magnetizations $m_3, m_4$ for spins 3 (dashed) and 4 (solid)
decrease due to the increase in quantum fluctuations with increase in
$\eta_4$. The ferrimagnetic ground state is stable in the parameter space $(\eta_2, \eta_3, \eta_4)$
as long as $\eta_3>\eta_4$ and $\eta_4\leq \eta_{4c}$, where $\eta_{4c}=0.316$ for $\eta_2=3.0, \eta_3=0.4$.}
\label{fig:magJ4}
\end{figure}
In Fig.~\ref{fig:magJ4}, we show
how the ordered moments change
with the increasing strength of the frustrated bond $\eta_4$ for specific values of $\eta_2=3.0$ and $\eta_3=0.4$.
As $\eta_4$ approaches the critical value 0.316 the magnetization of the AF chain decreases but
remains finite ($|m_3| \sim 0.07, |m_4| \sim 0.06$) just before the quantum phase transition to the other ground state within LSWT.
This is in contrast to what happens in the
($J_1, J_2$) model, where as $J_2$ approaches $J_c$ ($J_{c1}$ from the N\'{e}el state and $J_{c2}$ from the CAF state), the
sublattice magnetization goes to zero.
Finally, in Fig.~\ref{fig:magJ2}(a-b), we show the $\eta_2$ dependence of the magnitudes of the four order parameters $m_i$ ($i=1..4$) for
$\eta_3=0.4$ for two fixed values of the frustrated interchain bond $\eta_4$. For our assumed collinear ferrimagnetic
ground state $\eta_3 > \eta_4$ and $\eta_2 > \eta_{2c}$. For $\eta_3=0.4, \eta_4=0.1$, the critical value is $\eta_{2c}=0.27$,
and for $\eta_4=0.2$, $\eta_{2c}=0.80$.
\begin{figure}[httb]
\centering
\includegraphics[width=2.0in,clip]{MagJ2-1.eps}
\qquad
\includegraphics[width=2.0in,clip]{MagJ2-2.eps}
\caption{(a-b) Magnitude of sublattice magnetizations, $m_i$ for $\eta_3=0.4$, $\eta_4=0.1$ (Fig. a) and $0.2$ (Fig. b)
as function of $\eta_2$. The ferrimagnetic ground state is stable for $\eta_2 \ge \eta_{2c}$ where $\eta_{2c}=0.27$
for $\eta_4=0.1$ and $\eta_{2c}=0.80$ for $\eta_4=0.2$.
Ferro modes (solid) corresponding to spins 1 and 2 are degenerate ($m_1=m_2$). Magnetizations $m_3, m_4$ for spins 3 (dashed)
and 4 (solid) decrease due to the increase in quantum fluctuations with increase in $\eta_2$.
}
\label{fig:magJ2}
\end{figure}
For small $\eta_2$, i.e. $\eta_2 \ll \eta_3$, $m_1=m_2=\vert m_3 \vert \sim 0.46$ and $m_4 \sim 0.38$, reductions from 0.5
by 8\% and 24\% respectively. The small antiferromagnetic coupling between spins of the AF-chain induces a relatively large value
of the moment at the site 4. When $\eta_2$ increases the QSF in the AF-chain reduces the moments at sites 3 and 4. Notice that site 3
still has a larger moment (in magnitude) than at site 4. For large $\eta_2$ values, say $\eta_2 \sim 6$, ferro chain spins
have moments $\sim 0.495$, whereas AF chain spins have moments of magnitude $\sim 0.14 (>0)$ due to small stabilizing
interchain coupling $\eta_3=0.4$ [Fig.~\ref{fig:magJ2}(a)]. Increasing the strength of the frustrated bond $\eta_4$ essentially
decouples the chains. For example, with $\eta_4=0.2$, at $\eta_2=6.0$ the ferro chains have moments close to 0.5 and the AF-chains
have moments of magnitude $\sim 0.08$ [Fig.~\ref{fig:magJ2}(b)].
For $\eta_2 < \eta_{2c}$, the system is most likely a spin liquid state without LRO.
\section{\label{sec:conclusions}Conclusions}
In summary, we have proposed a 2D frustrated Heisenberg model consisting of alternating 1D ferro ($J_1$)
and antiferro ($J_2$) chains which interact through alternating frustrated ($J_4$) and
unfrustrated ($J_3$) bonds. The ground state is a long range ordered ferrimagnetic state in a
certain region of the parameter space. Analysis using linear spin wave theory suggests that the system undergoes
a quantum phase transition to a quantum disordered phase with increasing strength of $\eta_4$, similar
to the classic 2D ($J_1,J_2$) model. However in contrast to the ($J_1, J_2$) model, the sublattice magnetizations of the AF chains do not vanish at the critical
value $\eta_{4c}$, similar to the 1D ${\rm S}_1=1, {\rm S}_2=1/2$ model of a quantum ferrimagnet. The exact nature of the
phase transition, the nature of the GS above $\eta_{4c}$, and whether the order parameter vanishes at the transition should be explored by other theoretical
and numerical techniques.
\section{Acknowledgment}
SDM would like to thank Dr. Xianglin Ke for stimulating discussions.
\newcommand{ \hfill (\thesection.\arabic{equation}\alph{letter})}{ \hfill (\thesection.\arabic{equation}\alph{letter})}
\newcommand{{(\Box+e^2\rho^2)}}{{(\Box+e^2\rho^2)}}
\newcommand{\eql}{\nonumber & \hfill (\thesection.\arabic{equation}\alph{letter}) \cr
\addtocounter{letter}{1}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\setcounter{letter}{1} \begin{eqnarray}}{\setcounter{letter}{1} \begin{eqnarray}}
\newcommand{\addtocounter{equation}{1} \end{eqnarray}}{\addtocounter{equation}{1} \end{eqnarray}}
\newcommand{\nonumber \\}{\nonumber \\}
\newcommand{\req}[1]{Eq.(\ref{#1})}
\newcommand{\reqs}[1]{Eqs.(\ref{#1})}
\newcommand{\,\,\,\,\hbox to 30pt{\rightarrowfill}{\,\,\,\,\hbox to 30pt{\rightarrowfill}
\,\,\,\,}
\newcommand{\,\,\,\hbox to 20pt{\rightarrowfill}{\,\,\,\hbox to 20pt{\rightarrowfill}
\,\,\,}
\newcommand{{1\over2}}{{1\over2}}
\newcommand{{\vec{x}}}{{\vec{x}}}
\newcommand{{\vec{y}}}{{\vec{y}}}
\newcommand{Z_{{FP}}}{Z_{{FP}}}
\newcommand{Z_{{F}}}{Z_{{F}}}
\newcommand{Z_{{R}}}{Z_{{R}}}
\newcommand{Z_{{OP}}}{Z_{{OP}}}
\newcommand{Z_{EKT}}{Z_{EKT}}
\newcommand{{\varphi^\dagger}}{{\varphi^\dagger}}
\centerline{\bf YANG-MILLS FLOW AND UNIFORMIZATION THEOREMS}
\bigskip
\begin{center}
S.\@ P.\@ Braham\\
{\it Center for Experimental and Constructive Mathematics\\
Simon Fraser University\\
Burnaby, BC, V5A 1S6 Canada\\}
\medskip
J.\@ Gegenberg\\
{\it Department of Mathematics and Statistics\\
University of New Brunswick\\
Fredericton, NB E3B 4A3 Canada}\\
\end{center}
\bigskip\noindent
{\bf Abstract}
\noindent
We consider a parabolic-like system of differential
equations involving geometrical quantities to examine uniformization
theorems for two- and three-dimensional closed orientable manifolds. We
find that in the two-dimensional case there is a simple gauge theoretic
flow for a connection built from a Riemannian structure, and that the
convergence of the flow to the fixed points is consistent with the
Poincar\'e Uniformization Theorem. We construct a similar system for the
three-dimensional
case. Here the connection is built from a Riemannian geometry,
an SO(3) connection and two other 1-form fields which take their values in the
SO(3) algebra. The flat connections include the eight homogeneous
geometries relevant to the three-dimensional uniformization theorem
conjectured by W. Thurston. The fixed points of the flow include, besides
the flat connections (and their local deformations), non-flat solutions of
the Yang-Mills equations. These latter ``instanton'' configurations may be
relevant to the fact that generic 3-manifolds do not admit one of the
homogeneous geometries, but may be decomposed into ``simple 3-manifolds''
which do.
\bigskip
\noindent
UNB Technical Report 97-01
\bigskip
\begin{center}
March, 1997
\end{center}
\clearpage
\section{Introduction}
The uniformization theorem in two dimensions is a powerful tool in geometry
and topology, with applications in physics. In essence, the theorem states
that the topology of a closed orientable two-dimensional manifold (a Riemann
surface) determines which geometries it admits. In particular, if the
manifold has handle number zero, it admits the spherical geometry and its
local deformations; handle number one admits the flat geometry and its local
deformations; and for handle number two or greater, the admissible geometry is
that of the hyperbolic plane and its local deformations. It is important that
one cannot deform any of these three geometries to obtain one of the other
two.
This theorem was proved finally around the beginning of the twentieth
century by H. Poincar\'e \cite{poincare}. It is a heroic proof, using
the most sophisticated mathematics of the day. Unfortunately, the
classical proof depended heavily on results in complex analysis, and the
generalization to three or higher dimensions was not obvious. In fact,
the three-dimensional analogue of this theorem was first consistently
formulated only in the late 1970's by W. Thurston and is called Thurston's
Geometrization Conjecture \cite{thurston78}. To date, although no
counterexamples have emerged and it has been shown to hold for very
large classes of manifolds, the conjecture remains unproved.
Recently, R. Hamilton and B. Chow have constructed a new proof of the
two-dimensional uniformization theorem using techniques not obviously
restricted to that number of dimensions \cite{ham,chow}. They consider a
one parameter family of Riemannian metrics $g_{\mu\nu}$
on an $n$-dimensional smooth
manifold $M_n$, with the ``flow" governed
by the Ricci curvature tensor $R_{\mu\nu}$:
\begin{equation}
{\partial g_{\mu\nu}\over\partial t}=-2R_{\mu\nu}+{2\over n}r g_{\mu\nu}.
\end{equation}
In the above $r$ is given by
\begin{equation}
r:=\left(\int_{M_n}d^nx\sqrt{g}\right)^{-1}\int_{M_n}d^nx\sqrt{g}R,
\end{equation}
where $d^nx\sqrt{g}$, with $g=\det\left(g_{\mu\nu}\right)$, is the volume
element on $M_n$. It is easy to show that ({\it i.})
$\partial r/\partial t=0$
along the flow; and ({\it ii.}) the fixed points of the
flow are the ``Einstein
metrics", satisfying $R_{\mu\nu}=(r/n)g_{\mu\nu}$. In two dimensions, the
Einstein metrics have constant curvature, and include all homogeneous
geometries; in three dimensions, the Einstein metrics include the
constant curvature,
but not all the homogeneous geometries; and finally, in four and
higher dimensions, the Einstein metrics do not have nearly so clear a
geometric significance as they do in lower dimensions.
The Hamilton/Chow proof of the uniformization theorem in two dimensions
analyzes the ``Ricci scalar flow"
\begin{equation}
{\partial R\over\partial t}=\Delta R+R(R-r),
\end{equation}
derived from the flow of the metric above. For the cases where $R$ is
non-positive, it is fairly straightforward to show that the flow converges
to the fixed points with non-positive constant curvature \cite{ham}.
It took a few more years to analyze the case of $R>0$, since this involves
a repulsive fixed point \cite{chow}.
Isenberg and Jackson \cite{isenberg} examined the Ricci flow in three
dimensions in order to shed light on Thurston's Geometrization
Conjecture \cite{thurston78}. Unlike the two-dimensional case, it is
not true in three dimensions that all closed orientable manifolds
admit one of the constant curvature geometries. Rather, one must
first canonically decompose the given manifold $M$ into its prime
pieces \cite{prime}, obtained by cutting along 2-spheres and gluing
3-balls onto the cuts until one or both of the pieces is homeomorphic to
the 3-ball; then cutting along incompressible $T^2$ until the pieces
are either Seifert fiber spaces or contain no embedded incompressible
$T^2$. Denoting the resulting manifolds by $M_i$, so that
$M=M_1\#M_2\#...$, Thurston then conjectures that the universal
covers $\tilde M_i$ of each $M_i$ admit one and only one of the {\it
eight} homogeneous geometries possible in three dimensions.
See Table~\ref{manifolds} for a list of these eight geometries.
\begin{table}
$$\begin{array}{|l|l|l|}
\hline
\mbox{Manifold} & \mbox{Isometry Group} &
\mbox{Metric}\\
\hline
S^3 & SO(4) & \cos^2{y}\,dx^2+dy^2+(dz-\sin{y} \,dx)^2 \\
E^3 & R^3\times SO(3) & dx^2+dy^2+dz^2 \\
H^3 & PSL(2,C) & dx^2+e^{2x}\left(dy^2+dz^2\right) \\
S^2\times E^1 & \left(Isom(S^2)\times Isom(E^1)\right)^+ & dx^2+dy^2+\sin^2{y}
\, dz^2 \\
H^2\times E^1 & \left(Isom(H^2)\times Isom(E^1)\right)^+ & dx^2+dy^2+e^{2x}dz^2 \\
\widetilde{SL(2,R)} & Isom(H^2)\times R &
\cosh^2{y} \,dx^2+dy^2+(dz+\sinh{y} \,dx)^2 \\
Nil & Isom(E^2)\times R & dx^2+dy^2+(dz-xdy)^2\\
Sol & Sol\times \left(Z_2\right)^2 & dx^2+e^{-2x}dy^2+e^{2x}dz^2 \\
\hline
\end{array}$$
\caption{The Eight Homogeneous Geometries}\label{manifolds}
\end{table}
The problem with the approach of Isenberg and Jackson is that on the one
hand, there does not seem to be sufficient structure in the Ricci
flow to cope with the necessity of decomposing the given manifold as
above; and on the other the fixed points are the
constant curvature geometries, a proper subset of the homogeneous
geometries. Hence, one must first cut and
paste the manifold, then flow the geometry on each piece; and one
must look in general for asymptotics, rather than convergence to
fixed points.
What we propose here is a one-parameter family of {\it connections}
whose flow converges to flat connections. In two dimensions, these
flat connections are equivalent to the constant curvature
geometries. This suggests the possibility of another proof of the
two-dimensional uniformization theorem. What is important here is
that this flow generalizes to three dimensions such that the fixed
points of the flow are the eight homogeneous geometries plus certain
``instanton" configurations, which describe ``necks" between
three-manifolds. We believe that this is a promising approach to
proving Thurston's Geometrization Conjecture.
In Section 2., motivated by the gauge-theoretic formulation of certain
two-dimensional gravity theories, we will recast the structure equations for
a constant curvature Riemannian {\it metric} into the form of a flatness
condition on an appropriate {\it connection}.
This sets the stage for
constructing a one-parameter family of connections -- the Yang-Mills
flow. In section 3., the properties of the Yang-Mills flow will be
explicated. We will show that the fixed points of the flow are
Yang-Mills connections, a subclass of which is the set of flat
connections, and that the latter describe Riemannian metrics of
constant curvature. In section 4., we analyze the behaviour of the
flow. For the cases of spherical
and flat Euclidean topologies, we may conclude that the flow
converges to the fixed points. We present arguments to show that
with initial conditions consistent with regular torsion-free
Riemannian geometries, the flow actually converges to the fixed
points associated with flat connections, and hence to constant
curvature Riemannian geometries. This leads us to believe that one
may prove the two-dimensional uniformization theorem using the
Yang-Mills flow. In section 5., we outline a numerical integration
of the Yang-Mills flow. Finally, in section 6., we will propose a
generalization of the Yang-Mills flow to three dimensions. We will
conclude by sketching the outline of the form that a proof of the
Thurston Geometrization Conjecture, using this three-dimensional
flow, might take.
\section{Gauge Theory Form of 2D Gravity}
One of the most fruitful areas of research in fundamental physics at the
moment begins with the reformulation of Einstein's theory of gravity -- the
general theory of relativity -- as a {\it gauge theory} of the (complexified)
rotation group SO(3). This approach,
pioneered by A. Ashtekar and his collaborators \cite{ash}, has resulted in
some
progress in constructing a quantum theory of gravity. This is in part due
to the fact that the constraints in the theory become polynomial
when expressed
in terms of the above connection and its conjugate momentum; and this in
turn has allowed for the construction of solutions of the constraints. Even
more striking results have been attained by reformulating lower dimensional
gravity theories as gauge theories. In particular, three-dimensional
general relativity can be formulated as a Chern-Simons gauge theory \cite{3d};
and some simple two-dimensional gravity theories can be expressed as
topological field theories of the so-called BF type \cite{it,lineal}.
This suggests that it might be useful to examine the flow of
connections constructed from the Riemannian geometry, rather than the
flow of the metrics themselves. For one thing, as in the above
theories of gravity, when expressed in connection form, we expect
that the partial differential equations that describe the flow will
be polynomial, unlike the Ricci flow, where the terms involving the
curvature tensor are non-polynomial in the metric. More importantly,
we note that the connection formulations of lower dimensional gravity
theories are topological field theories, and provide a more direct
route to the global issues that must be addressed.
The fixed points of the Ricci flow are the Einstein spaces. In two dimensions
these are just the spaces which have constant curvature Riemannian metrics:
$R=2k$.
About ten years ago, Jackiw and Teitelboim (separately) considered the above as
a toy model of gravity in two dimensions \cite{jt}. A few years later, at
least three groups constructed a gauge theory formulation \cite{it}.
In this formulation, the field equations of the theory required that a
certain connection over spacetime is flat; this in turn was equivalent to
the existence of a constant curvature (pseudo-)Riemannian geometry on the
spacetime. In the following, we will construct the connection which has this
property.
We now consider Riemannian geometry in the first-order Cartan formalism.
Instead of the metric tensor $g_{\mu\nu}$ we consider a
frame-field $e^a$, a set of two 1-form
fields on $M^2$. The indices $a,b,\ldots$ run over $1,2$. The metric and frame-fields are related by $g_{\mu\nu}=
\delta_{ab}e^a_\mu e^b_\nu$. Instead of the Christoffel symbols, we have
the spin-connection $\omega^a{}_b$, also a set of 1-form fields on $M^2$.
The spin-connection is skew-symmetric in the indices $a,b$, so in two
dimensions, there is only one algebraically independent component, which
we denote simply by $\omega$, defined by
$\omega^a{}_b=-\omega\epsilon^a{}_b$.
We can now define a
connection 1-form field $A$:
\begin{equation}
A:=e^a P_a + \omega J,\label{eq:conn}
\end{equation}
where $\{P_a,J\}$ generate the Lie algebra:
\begin{equation} \left[P_a,P_b\right]=k\epsilon_{ab} J;\,\,\, \left[J,P_a\right]=\epsilon_{ab}
\delta^{bc}P_c,\label{eq:alg}
\end{equation}
with $\delta^{ab}$ the Euclidean metric.
The algebra generated by $\{P_a,J\}$ is so(3) if $k=+1$,
iso(2) (the ``Poincare algebra") if $k=0$, or so(2,1) if
$k=-1$.
If the torsion
\begin{equation}
T^a:=de^a-\epsilon^a{}_b \omega\wedge e^b=0,
\end{equation}
then the connection $A$ determines a Riemannian geometry.
The curvature 2-form corresponding to the connection $A$ is given by:
\begin{eqnarray}
F(A):&=&dA+{1\over2} [A,A]\\ \nonumber \\
&=&T^a P_a +\left(d\omega+{k\over 2}\epsilon_{ab}e^a
\wedge e^b\right)J\\ \nonumber \\
&=&T^a P_a -{1\over2}\left(R-2k\right)v J\label{eq:curvature}.
\end{eqnarray}
In the above, $v={1\over2}\epsilon_{ab}e^a\wedge e^b$ is the volume element on the
manifold $M$ induced by the Riemannian metric.
If the connection $A$ is flat, then the curvature $F=0$, and hence
\begin{eqnarray}
T^a&=&0;\\
R&=&2k.
\end{eqnarray}
Hence, a flat connection $A$, with algebra given by $k$, determines a
Riemannian geometry with constant curvature $2k$, and vice-versa.
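As a concrete example (our own, not drawn from the cited literature), take $k=+1$ and the unit round sphere with
$e^1=d\theta$, $e^2=\sin\theta\, d\phi$ and $\omega=\cos\theta\, d\phi$. One finds $T^a=0$ and
$d\omega=-\sin\theta\, d\theta\wedge d\phi=-{1\over2}\epsilon_{ab}\,e^a\wedge e^b$, so that $F(A)=0$ by
\req{eq:curvature}, with $R=2k=2$.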
\section{Yang-Mills Flow}
In this section, we will describe a one-parameter family of connections, of
the form given in the previous section.
We start with a two-dimensional manifold $M$ and an admissible Riemannian
metric $\bar g_{\mu\nu}$
and a (not necessarily compatible) spin-connection
$\bar\omega$. We use this structure to define the
{\it duals}
of form fields, e.g.
\begin{equation}
*\left(b_\mu dx^\mu\right):=\bar g^{1/2}\epsilon_{\mu\nu}\bar
g^{\nu\sigma}b_\sigma
dx^\mu,
\end{equation}
where $\bar g$ is the
determinant of $\bar g_{\mu\nu}$. The algebra
\req{eq:alg} is characterized by the constant $k$. This is determined from
the topological structure of $M$ by the Euler number $\chi(M)$:
\begin{equation}
k:={\chi(M)\over |\chi(M)|},
\end{equation}
if $\chi(M)\neq 0$, and by
\begin{equation}
k=0,
\end{equation}
if $\chi(M)=0$.
In fact, $\chi(M)$ can be computed from
$\bar g_{\mu\nu}$ by
\begin{equation}
\chi(M)={1\over4\pi}\int_M d^2x \sqrt{\bar g}\bar
g^{\mu\nu} \bar R_{\mu\nu}
,\label{eq:euler}
\end{equation}
where $\bar R_{\mu\nu}$ is the Ricci tensor of $\bar g_{\mu\nu}$.
The Euler number $\chi(M)$ is related to
the handle number, or genus $h(M)$ of $M$ by $\chi(M)=2-2h(M)$.
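As a quick check of \req{eq:euler}: for the round 2-sphere of radius $a$ one has $\bar R=2/a^2$ and area
$4\pi a^2$, so $\chi(M)=(1/4\pi)(2/a^2)(4\pi a^2)=2$ independently of the radius, as befits a topological
invariant.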
We now flow the connection $A(t)$
given by
\begin{equation}
A(t)=e^a(t)P_a+\omega(t)J.
\end{equation}
The initial values are given by
\begin{equation}
A(0)=\bar e^a P_a+\bar\omega J.
\end{equation}
The differential equations that determine the flow are
\begin{equation}
{\partial A\over\partial t}=-*D_A*F(A).\label{2dflow}
\end{equation}
The dual $*$ and the Lie algebra are determined by the initial fields,
as discussed above. Hence $*$ and $\partial/\partial t$ commute.
The parabolic-like structure of the flow is displayed most transparently in
the equations for the flow of the {\it curvature}. Using \req{2dflow} and
the duality in two dimensions of 2-forms with 0-forms, we arrive at
\begin{equation}
{\partial f\over\partial t}=\Delta_A f,\label{fflow}
\end{equation}
where $f:=*F(A)$ is a Lie algebra valued 0-form equivalent to the curvature;
and $\Delta_A:=-\left(*D_A*D_A+D_A*D_A*\right)$ is the Laplacian with
respect to the connection $A(t)$.
The fixed points of the flow are the Yang-Mills connections $A_{ym}$, i.e.,
connections which satisfy the Yang-Mills equations:
\begin{equation}
-*D_{A_{ym}}*F(A_{ym})=0.\label{ym}
\end{equation}
There are two types of Yang-Mills connections. The first are flat connections,
i.e. $F(A)=0$. If the $e^a$ are non-degenerate, these connections are
equivalent to the constant curvature Riemannian geometries, as we discussed
in the last section. The second type of Yang-Mills connections are
``instantons," with $F(A)\neq 0$. In this case, the structure group of the
connection is reduced to a subgroup which commutes with $f:=*F$ \cite{ym}.
In order to prove a two-dimensional uniformization theorem, we must
establish the following:
\noindent {\bf Conjecture}: {\it From an initial connection corresponding to a
sufficiently smooth non-degenerate Riemannian geometry on a
2-manifold $M$ with Euler number $\chi(M)$, the Yang-Mills flow converges to
the
flat connection corresponding to a Riemannian geometry with constant curvature
having the same sign as $\chi(M)$.}
In the following two sections we will provide analytical and
numerical evidence for this conjecture.
\section{Convergence of the Yang-Mills Flow: Analytical Evidence}
There has been some discussion of the properties of Yang-Mills (and
related) flows by mathematicians and mathematical physicists
\cite{ymflow}. It is clear from this literature, in particular from
the thesis of Rade, that the flow exists and is unique for
short times, at least for the case of non-negative Euler number.
Unfortunately, the situation with regard to the question of the
convergence of flow as $t\to\infty$ is not clear at the moment. In
the following, we will discuss the question of convergence from an
analytic (but fairly heuristic) perspective. In the next section,
encouraging results from numerical treatment will be presented.
The Yang-Mills flow resembles the heat equation
\begin{equation}
{\partial\phi\over\partial t}=\Delta \phi,\label{heat}
\end{equation}
where $\Delta$ is the Laplacian operator with respect to some Riemannian
structure defined on the manifold $M$ upon which the field $\phi$ takes its
values. The existence/uniqueness for short times and the convergence as
$t\to\infty$ to the ``average" initial data is well-known \cite{heatflow}.
Indeed, it is easy to see that for initial data infinitesimally close to a
fixed point, the Yang-Mills flow is parabolic. In general the Yang-Mills
flow is polynomially non-linear.
We will now discuss the question of convergence for each of the cases
$k=0$ and $k=+1$. The $k=-1$ case is the least well-understood at the
moment, and is under investigation by the authors.
For the case $k=0$ the gauge group is ISO(2), which is a semi-direct
product of the Abelian group SO(2) with the two-parameter group of
translations. The Yang-Mills flow itself splits into an SO(2) piece which
depends only on the $A^2$-component of the ISO(2) connection:
\begin{equation}
{\partial A^2\over\partial t}=-*d*dA^2. \label{A2flow}
\end{equation}
Although this is not strictly parabolic, the flow of the corresponding
dual of the curvature component, $f^2:=*F^2=*dA^2$ is a parabolic system:
\begin{equation}
{\partial f^2\over\partial t}=\bar\Delta f^2, \label{f2flow}
\end{equation}
where $\bar\Delta$ is the Laplacian with respect to the initial Riemannian
geometry. Now the average of the initial curvature component,
\begin{equation}
\bar F^2:=\int_{M^2}dA^2=0,
\end{equation}
where the last equality follows from the fact that in this case $M^2$ has
Euler number zero. Since $f^2$ converges to a constant 0-form, it must
converge to zero everywhere on $M^2$.
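To spell out the standard spectral argument behind this: expanding
$f^2(x,t)=\sum_n c_n e^{-\lambda_n t}\varphi_n(x)$ in eigenfunctions of the initial Laplacian,
$\bar\Delta\varphi_n=-\lambda_n\varphi_n$ with $0=\lambda_0<\lambda_1\le\lambda_2\le\cdots$, every mode with
$\lambda_n>0$ decays exponentially, so $f^2$ tends to the constant mode $c_0\varphi_0$; the coefficient $c_0$
is proportional to the conserved integral $\int_{M^2}dA^2$, which vanishes here.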
What we have shown is that the manifold which is topologically $T^2$,
i.e. which has Euler number 0, admits a closed 1-form. This
is a sufficient condition for the manifold to admit a Riemannian
geometry with a compatible spin-connection of zero curvature. To
see this, consider the following: Let
$\omega$ be a closed 1-form.
But since
for $T^2$ the space of harmonic 1-forms is two-dimensional, there is
a harmonic 1-form $\beta$ such that $\omega\wedge\beta\neq 0$. We can
find a chart in which there is a function $p(x)$, such that
$\partial_\mu p(x)=\beta_\mu(x)$. Now define the 1-forms $e^0,e^1$ in this
chart by $e^0(x)=p(x)\omega(x), e^1(x)=-dp(x)$. In the chart, the
volume element $e^0(x)\wedge e^1(x)\neq 0$ by construction, and the
compatibility condition $de^a-\epsilon^a{}_b\omega\wedge e^b=0$ holds since
$d\omega=0$.
It remains to address the question of which initial conditions, if any,
determine flows which converge to
the instantons, with $F(A_\infty)\neq 0$.
In the $k=+1$ case, for which the gauge group is SO(3), we have the
results of Rade \cite{ymflow} wherein it is proved that for compact
simple gauge groups, {\it e.g.} SO(3), the Yang-Mills flow converges
with respect to the Sobolev norm $H^1$ to a Yang-Mills connection.
What remains here is the question of the instantons, as well as
whether the flow converges under stronger smoothness
requirements.\footnote{It is easy to show that, analytically, a round sphere
with arbitrary radius
will exponentially converge to the sphere with ``correct'' radius
$\sqrt 2$, {\it i.e.} so that $R=2$.}
\section{Convergence of the Yang-Mills Flow:
Experimental Differential Geometry}
The system of coupled partial differential equations
\req{2dflow} is composed of
polynomially {\it nonlinear} PDEs, and is therefore potentially quite
complicated.
However, the right-hand side of \req{2dflow}
contains second derivatives in
the spatial variables and is therefore much like a diffusive system with
strange convective terms.
If we
effectively restrict consideration to
the very high frequency
components of \req{2dflow} by only retaining terms that
contain the highest number of spatial derivatives, we find that
\begin{equation}
\frac{\partial A}{\partial t} \approx \delta d A,\label{highf}
\end{equation}
where $\delta = -*d*$. The right-hand side of
\req{highf} is not the Laplacian on $M$ defined by
the initial connection, which would be given by
$\Delta = \delta d + d \delta$. Thus, the system does not, even in this
approximation, represent an exact diffusive evolution. However, under
the same conditions for which \req{highf} is valid, we have
\begin{equation}
\frac{\partial f}{\partial t} \approx \Delta f,\label{fhighf}
\end{equation}
which {\it is} a diffusive equation. It is therefore highly probable
that high frequency spatial perturbations in the curvature
and torsion induced on
$M$ by $A(t)$ are rapidly damped as $t \rightarrow \infty$. We
would therefore expect
a breakdown of our conjectured
convergence
behaviour
of the flow
only
if the low frequency modes do not have the appropriate
evolution. These modes are dominated by the nonlinear coupling in
\req{2dflow}.
It seems highly probable that such a system
will not lend itself to easy analytic study. Rather, before
attempting such an analysis, it seems appropriate that we should
verify the conjectured behaviour as best we can. To do this, we
resort to techniques of {\it experimental mathematics},
wherein we
view the conjectured convergence behaviour as a {\it hypothesis}.
We will seek experimental evidence for or against the hypothesis.
We can then study the following two questions:
\begin{enumerate}
\item Does $A$ evolve smoothly under \req{2dflow} from smooth initial
data?
\item Does $A$ converge to a fixed point in an appropriate space\footnote{
In general, we expect convergence within an appropriately normed space,
which we would have to describe. However, since the computer experiments
use only a fixed number of mesh points
to represent the manifold, all norms are effectively equivalent for purposes
of this section.}
of connections over $M$?
\end{enumerate}
\noindent
If both questions have affirmative answers, we would also like to answer
the concomitant question:
\begin{enumerate}\setcounter{enumi}{2}
\item
Does $F(A) \rightarrow 0$ as $t \rightarrow \infty$?
\end{enumerate}
Should the answers to these questions turn out to be yes, for a reasonably
wide set of initial conditions and topologies, we would have hope that
further
study of the flow method ought to be useful in understanding uniformization
theorems. We might further hope that our numerical ``experiments'' would
enable us to observe useful properties of
\req{2dflow} that help suggest ways to prove our conjecture.
To the contrary, should
we find a numerical counterexample to our conjecture, we
could rapidly verify that the conjecture would be false. This is the heart
of the experimental mathematics concept.
Given that we conjecture \req{2dflow} to be diffusive and parabolic,
we must first choose a numerical method appropriate to such equations.
It is then an important part of the experimental process to verify
the consistency of the chosen method.
The explicit forward Euler method is the most naive numerical integration
method appropriate for diffusive parabolic PDEs. It is nevertheless important to
choose the method carefully.
Suppose for the moment that the ``convection'' terms
were to dominate over the ``diffusion''
terms, producing dynamics for $A$ that are far more like hyperbolic
systems. Hyperbolic PDEs are generally unstable
when evolved via naive numerical methods, such as the forward Euler method.
Furthermore, it would seem
improbable that $A$ would converge if \req{2dflow} were primarily
hyperbolic in nature and if $M$ were compact.
In the Euler method, the iterated value of the connection at time
$t_{n+1}>t_n$
is taken to be
\begin{equation}
A_{n+1} = A_n - \kappa (*D_A f(A_n)),\label{FE}
\end{equation}
where $A_n$ is the value of $A$ at a time $t_n:=
t_0 + n \kappa$, $\kappa>0$, and where $A_i$ is
given on a mesh of discrete points
approximating
the manifold $M$. Such
an approximation is shown in Figure~\ref{exp71}, where an initial
connection is represented on a torus. The curvature is represented
by variations of the torus geometry, and the actual torus generated
is only a representation of the state of $A(t)$ and $f(t)$.
\begin{figure}[ht]
\epsfxsize=3.0in
\centerline{\epsfbox{exp7f1.ps}}
\caption{An initial toroidal mesh.}\label{exp71}
\end{figure}
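As a concrete illustration of this scheme, here is a minimal sketch (our own, in Python with
NumPy; it evolves the linear curvature flow \req{fflow} with the connection Laplacian replaced
by the flat five-point Laplacian on a uniform periodic mesh approximating $T^2$, and all names
and parameter values are illustrative rather than taken from our actual code):
\begin{verbatim}
import numpy as np

def lap(f, h):
    # Five-point Laplacian on a uniform periodic (toroidal) mesh.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0*f) / h**2

def euler_flow(f, h, kappa, steps):
    # Forward Euler iteration f_{n+1} = f_n + kappa * lap(f_n).
    for _ in range(steps):
        f = f + kappa * lap(f, h)
    return f

N = 64; h = 1.0 / N
rng = np.random.default_rng(0)
f0 = rng.standard_normal((N, N))
f0 -= f0.mean()                  # Euler number zero: zero-average curvature
kappa = 0.2 * h**2               # stable; instability sets in near h^2/4
f_end = euler_flow(f0, h, kappa, steps=20000)
print(np.abs(f_end).max())       # decays toward zero, i.e. F(A) -> 0
\end{verbatim}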
We wish
to compare \req{2dflow} to the evolution of a linear system of
the form
\begin{equation}
\frac{\partial u}{\partial t}=Ku(t),\label{genflow}
\end{equation}
where $u$ is a quantity given on $M$, and $K$ is a linear operator
acting on $u$. The corresponding forward Euler method is
\begin{equation}
u_{n+1} = u_n + \kappa K u_n,\label{GFE}
\end{equation}
where everything is expressed on a mesh over $M$.
It is well-known that
\req{GFE} fails to
produce a valid approximation to the solution of the hyperbolic equation
\req{genflow}: the numerically produced solution undergoes
rapid growth in modes that are high in spatial frequency. The approximation
is bad, independently of how small we take $\kappa$ to be. Diffusive
parabolic systems are different in behaviour: for $\kappa$ small enough, the
numerical solution obeys \req{genflow}.
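To see where such a threshold comes from, recall the standard von Neumann analysis for the model
problem \req{heat} in one dimension (a textbook computation, included here for orientation): the
forward Euler update multiplies a Fourier mode $e^{iqx}$ by the amplification factor
\begin{equation}
1-{4\kappa\over h^2}\sin^2\left({qh\over 2}\right),
\end{equation}
which lies in $[-1,1]$ for every $q$ only if $\kappa\le h^2/2$; on a two-dimensional mesh the bound
tightens to $\kappa\le h^2/4$. This is the origin of the $\kappa\sim h^2$ stability threshold
observed below.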
In view of this comparison, part of our experiment consists of identifying
whether \req{FE} produces a stable approximation (suggesting parabolic
behaviour) or an unstable one (suggesting hyperbolic behavour).
The experimental procedure is as follows:
the symbolic
system {\it Maple} is used to write part of the computer program,
converting \req{FE} into computer code representing evolution on an
appropriate
mesh with cell size $h$ and time-step $\kappa$. We then verify the
stability behaviour for $\kappa$ {\it vs}.~$h$. If we observe the
expected stability behaviour, we can then analyse the resulting evolution
to see if it is consistent with our conjecture.
Happily, \req{FE} does indeed display, for $k=0$,
the precise behaviour of a diffusive parabolic PDE. To be precise,
for a toroidal rectangular mesh,
the numerical stability is observed when $\kappa$ is smaller than a value
that is of order $h^2$, for a wide range of initial conditions. One
particular example of this can be seen in Figure~\ref{stabfig}.
\begin{figure}[h]
\vspace*{0.5in}
\epsfxsize=2.5in
\centerline{\epsfbox{stab.ps}}
\vspace*{0.1in}
\caption{Stability behaviour of the forward Euler method for the
flow equations. Line marks linear fit.}\label{stabfig}
\end{figure}
This is the anticipated
signature of a diffusive system, whence we have verified one part of
our conjecture, as discussed above.
Furthermore, we obtain evidence that, for the $k=0$ case, all three of our
earlier questions may be answered in the affirmative.
We find that
the numerical evolution of \req{2dflow} under \req{FE} produces
a convergent $A_n$, and that $F(A_n) \rightarrow 0$, as $n \rightarrow
\infty$. This is shown in the final state of the mesh; {\it cf.}
Figure~\ref{exp715}, also drawn on top of the representative torus.
\begin{figure}[ht]
\epsfxsize=2.0in
\centerline{\epsfbox{exp7f15.ps}}
\caption{The final configuration, after numerical evolution.}\label{exp715}
\end{figure}
We have therefore found support for our conjectured convergence
behaviour of solutions of \req{2dflow}, at the experimental
level, for $k=0$.
The conjecture can also be experimentally verified for the
$k=1$ (spherical topology) case. In the spherical case, we construct
an initial connection on a mesh with spherical topology, as shown in
Figure~\ref{sph3a}. The radius from the centre in this image denotes the
curvature radius at that point.
\begin{figure}[ht]
\epsfxsize=2.0in
\centerline{\epsfbox{sph3a.ps}}
\caption{An initial connection for a spherical topology.}\label{sph3a}
\end{figure}
\begin{figure}[hb]
\epsfxsize=2.0in
\centerline{\epsfbox{sph3b.ps}}
\caption{Final, unit sphere, result of evolution.}\label{sph3b}
\end{figure}
The flow equations can then be evolved; the connection flows
to one corresponding to a unit 2-sphere, as shown in Figure~\ref{sph3b}.
We thus have experimental confirmation of the conjecture for $k=0$
and $k=1$.
It remains to verify the conjecture for $k<0$. Unfortunately,
\req{FE} is not very useful in this case.
The fact that $\kappa$ must be smaller than something of order $h^2$
means that the equations cannot be evolved quickly when we wish to have
$h$ small. For complicated topologies that arise for $k<0$ (particularly
handlebodies, which do not arise in the other cases), we need many cells,
and $h$ {\it must} be small. Furthermore, it becomes difficult to control
the build-up of numerical noise with such a naive, conditionally
stable method. For $k<0$ ({\it and} the 3-D flow described
in the next section), more sophisticated techniques will be needed.
These techniques will have to deal with the fact that $M$ will
generally need to be covered by more than one coordinate patch and
more than one gauge patch, will have complicated topology, and
will exhibit even more nonlinear behaviour. The best practical algorithms seem to be those
using multigrid finite element methods. One of us (SPB)
is currently developing such techniques.
\clearpage
\section{The 3-D Flow}
The gauge-theoretic version of a two-dimensional theory of gravity
was the starting point for a new approach to proving the two-dimensional
uniformization theorem. We emphasize here that we have not completely
succeeded in constructing the proof; we have only demonstrated its
plausibility. Nevertheless, this suggests that it might be useful to
examine gauge-theoretic versions of 3-D gravity theories as possible
starting points for a proof of the 3-D uniformization theorem conjectured
by Thurston.
Such theories are well-known. Einstein gravity (with or without a
cosmological constant) can be formulated as a Chern-Simons gauge theory
with gauge group ISO(3) (if the cosmological constant is zero), or
SO(4) or SO(3,1) if the cosmological constant is positive or negative,
respectively \cite{3d}. This suggests that we consider a flow of the form
\begin{equation}
{\partial A\over\partial t}=*D_A*F(A) ,
\end{equation}
where $A$ is the connection 1-form on some 3-manifold for some gauge group
$G$. One recovers the Riemannian geometry from the connection 1-form via
a relation of the form:
\begin{equation}
A=\omega^aG_a +e^a F_a + ...,
\end{equation}
where the $G_a, F_a$ are generators of $G$, and the $e^a,\omega^a$ can be
interpreted as a frame field and spin-connection, respectively.
The $...$ indicates other terms in any additional generators of the
gauge group $G$. The idea
is to choose the gauge group $G$ so that the flat connections (which are a
subset of the
fixed points of the flow)
include at least the {\it eight} homogeneous geometries that occur in
Thurston's conjecture. This is not the case if one chooses one of the
groups (ISO(3), etc.) relevant to Einstein gravity.
The flat connections for these
groups determine frame-fields $e^a$ and compatible spin-connections
$\omega^a$ which have {\it constant curvature}.
There is a gauge group whose flat connections include the eight
homogeneous three-dimensional geometries. The group is the ``doubly
inhomogenized'' group IISO(3). This group is the semi-direct product of
the ``Poincare group'' ISO(3) with its Lie algebra. It is a twelve-parameter
non-compact group, whose Lie algebra is
\begin{eqnarray}
\left[F^a,G^b\right]&=&{1\over2}\epsilon^{abc}F_c;\,\,\,
\left[G^a,G^b\right]={1\over2}\epsilon^{abc}G_c; \\
\nonumber \\
\left[G^a,J^b\right]&=&{1\over2}\epsilon^{abc}J_c;\,\,\,
\left[G^a,K^b\right]={1\over2}\epsilon^{abc}K_c; \\
\nonumber \\
\left[J^a,K^b\right]&=&{1\over2}\epsilon^{abc}F_c\,\,\,. \label{pbs}
\end{eqnarray}
The remaining brackets vanish. The $G^a$ generate the SO(3)
subgroup, while the remaining generators $F^a,J^a,K^a$ behave like
generators of translations, except that the latter two do not
commute.
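As a quick consistency check (a numerical sketch we include here; it is not
needed for what follows), one can verify that the brackets \req{pbs} satisfy
the Jacobi identity by assembling the structure constants and contracting:
\begin{verbatim}
# Jacobi identity check for the IISO(3) brackets; basis ordering
# is (G^a, F^a, J^a, K^a), with [T_A, T_B] = f[A, B, C] T_C.
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

G, F, J, K = 0, 3, 6, 9
f = np.zeros((12, 12, 12))
for a in range(3):
    for b in range(3):
        for c in range(3):
            e = 0.5 * eps[a, b, c]
            f[G+a, G+b, G+c] = e                          # [G,G] ~ G
            f[F+a, G+b, F+c] = e; f[G+b, F+a, F+c] = -e   # [F,G] ~ F
            f[G+a, J+b, J+c] = e; f[J+b, G+a, J+c] = -e   # [G,J] ~ J
            f[G+a, K+b, K+c] = e; f[K+b, G+a, K+c] = -e   # [G,K] ~ K
            f[J+a, K+b, F+c] = e; f[K+b, J+a, F+c] = -e   # [J,K] ~ F

jac = (np.einsum('abd,dce->abce', f, f)
       + np.einsum('bcd,dae->abce', f, f)
       + np.einsum('cad,dbe->abce', f, f))
print(np.abs(jac).max())   # vanishes, up to round-off
\end{verbatim}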
It was shown in \cite{cargeg} that the Chern-Simons functional with this
gauge group is equivalent to a three-dimensional theory of gravity
interacting with topological matter. This is accomplished by constructing
the IISO(3) connection as follows:
\begin{equation}
A=\omega^a G_a +e^a F_a +B^a J_a +C^a K_a.
\end{equation}
If $A$ is flat, i.e. if $F(A)=0$, then by use of the algebra \req{pbs} it
follows that $\omega^a$ is a flat SO(3) connection, $B^a, C^a$ are
covariantly constant with respect to $\omega^a$ and $e^a$ satisfies:
\begin{equation}
D_\omega e^a+{1\over2}\epsilon^{abc}B_b\wedge C_c=0, \label{eqmot}
\end{equation}
where $D_\omega$ is the gauge covariant derivative with respect to the
connection $\omega^a$.
One may now construct a spin-connection which is compatible
with the frame-field $e^a$, and hence determines a Riemannian geometry.
In particular, one can show that if the $e^a$ are the frame-fields for
a homogeneous geometry, then there exist flat connections $\omega^a$ and
fields $B^a,C^a$ satisfying $D_\omega B^a=0, D_\omega C^a=0$ such that
$e^a$ satisfies \req{eqmot}. The explicit expressions are shown in
Tables~\ref{es} and~\ref{bcs},
where we have taken $\omega^a=0$ for simplicity.
\begin{table}[ht]
$$\begin{array}{|l|l|}
\hline
\mbox{3-manifold}& \left(e^1,e^2,e^3\right) \\
\hline
S^3 &\left(\cos{y}\, dx,dy,dz-\sin{y} \,dx\right) \\
E^3 & \left(dx,dy,dz\right)\\
H^3 & \left(dx,e^xdy,e^xdz\right)\\
S^2\times E^1& (dx,dy,\sin{y} \,dz)\\
H^2\times E^1 & \left(dx,dy,e^xdz\right)\\
\widetilde{SL(2,R)} & (\cosh{y} \,dx,dy,dz+\sinh{y} \,dx)\\
Nil& (dx-zdy,dy,dz)\\
Sol& \left(dx,e^{-x}dy,e^xdz\right)\\
\hline \end{array}$$
\caption{Frame Fields for Homogeneous Geometries}\label{es}
\end{table}
\begin{table}[ht]
$$\begin{array}{|l|l|l|}
\hline
\mbox{3-manifold}&
\left(B^1,B^2,B^3\right)&\left(C_1,C_2, C_3\right)\\
\hline
S^3 &\left(0,dx,0\right)&\left(d(2\sin{y}),
0,
d(2\cos{y})\right)
\\
E^3 &(0,0,0)&(0,0,0)\\
H^3 &\left(d\left(2e^x\right),0,0\right)&
(0,d(-z),dy)\\
S^2\times E^1& (d(-2\sin{y}),0,0)&(0,dz,0)\\
H^2\times E^1 & \left(d\left(-2e^x\right),0,0\right)&
(0,dz,0)\\
\widetilde{SL(2,R)} & (0,dx,0)&
(d(-2\sinh{y}),0,
d(2\cosh{y})) \\
Nil& (0,dy,0)& (0,0,d(-2z))\\
Sol &\left(d\left({z-y\over2}\right),d\left(e^x
\right),d\left(-e^{-x}\right)\right)& \left(d\left({z+y\over2}\right),d\left(
2e^x\right),
d\left(2e^{-x}\right)\right) \\
\hline \end{array}$$
\caption{$B$ and $C$ 1-Form Fields for Homogeneous Geometries}\label{bcs}
\end{table}
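As an illustrative check of these tables (a short verification we add here),
consider the $Nil$ entry with $\omega^a=0$, so that $D_\omega=d$ and the
conditions $D_\omega B^a=D_\omega C^a=0$ reduce to $dB^a=dC^a=0$, which hold
since the $B^a,C^a$ are exact. For the frame field,
\begin{equation}
de^1=d\left(dx-zdy\right)=dy\wedge dz,\qquad de^2=de^3=0,
\end{equation}
while the only non-vanishing interaction term in \req{eqmot} is
\begin{equation}
{1\over2}\epsilon^{1bc}B_b\wedge C_c={1\over2}\,dy\wedge d(-2z)
=-dy\wedge dz,
\end{equation}
so that \req{eqmot} is satisfied component by component.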
It must be remarked here that one may perform an IISO(3) gauge transformation
on the flat connection equivalent to the homogeneous geometries. The
connections are still flat, of course, but in general the $e^a$ are no
longer frame-fields for a homogeneous geometry. It is not clear at this
point how general the gauge-transformed $e^a$ are; though it was shown in
\cite{cargeg} that for the case of the 3-manifold topology $S^2 \times S^1$,
all admissible $e^a$ could be obtained from the ``trivial'' configuration
$\omega^a=e^a=B^a=C^a=0$ by a gauge transformation.
If it could be shown that an arbitrary IISO(3) connection on the 3-manifold
flowed to one of the flat connections, then it would follow that the
3-manifold would admit the {\it homogeneous} representative of the gauge
orbit. This is what we expect of a proof of the uniformization conjecture
for 3-manifolds.
In the two-dimensional Yang-Mills flow, the fixed points of the system are the
Yang-Mills connections. However, it seems to be the case that regular
initial conditions will flow to the appropriate subclass of flat connections.
The reason for this is that in two dimensions, the non-flat Yang-Mills
connections are reducible. This is not the case in three dimensions, since
the duals of the curvature 2-forms are 1-forms, and hence {\it not}
gauge parameters. Hence we conjecture that in three dimensions some
regular connections, built from non-degenerate Riemannian metrics, will flow
to instanton fixed points, not flat fixed points. This might be related to
a fundamental difference between the two- and three-dimensional cases.
Indeed, as we
discussed in the Introduction, in contrast to the two-dimensional
case, a given closed orientable 3-manifold does not in general admit a
homogeneous Riemannian geometry. Thurston's conjecture is that any 3-manifold
admits a canonical decomposition into the connected sum of
{\it simple manifolds}, i.e., prime manifolds
which either have no incompressible embedded $T^2$ or are Seifert fiber
spaces;
and each simple manifold in turn admits one and only one of the eight
homogeneous geometries. If the flow converged to fixed points which were
flat connections, then in general it would have to be singular when the
manifold was not simple.
However, consider the flow of the curvature as above.
The fixed points are the flat connections
$F(A)=0$ {\it and} the Yang-Mills instantons which satisfy
$*D_A*F(A)= 0, \,\, F(A)\neq 0$. It would be interesting to examine
IISO(3) connections on 3-manifolds to see if the Yang-Mills equations have
non-flat solutions when the manifold is simple. If it turns out that simple
3-manifolds admit flat solutions of the Yang-Mills equations, then the
next question to examine would be the structure of 3-manifolds which
consist of two simple manifolds joined by a ``neck''. If Thurston's
conjecture is correct, then the following scenario should hold:
the manifold-with-neck would presumably admit
an instanton
which was asymptotically flat at the ends of the neck. In general,
non-simple 3-manifolds would {\it not} admit
non-trivial flat IISO(3) connections. Work in this
direction is underway by the authors.
\section*{Acknowledgments}
The authors thank Eric Woolgar for many useful discussions and for
his initial collaboration in this work. They also thank Steve Boyer
and Jim Isenberg for helpful discussions.
\section{Introduction}
Today we believe that the final product of gravitational collapse is a Kerr-Newman black hole. This is the Carter-Israel conjecture and it is based on the following argument. First, there are some singularity theorems showing that in general relativity the collapsing matter produces space-time singularities~\cite{hawking}. These theorems do not say anything about the nature of the singularities. The second step is thus to assume the Weak Cosmic Censorship conjecture~\cite{penrose}, according to which space-time singularities must be hidden behind an event horizon. Surprisingly, in general relativity in 4 dimensions, the only possibility is the Kerr-Newman space-time~\cite{no-hair1,no-hair2,no-hair3,no-hair4}.
The Kerr-Newman metric has three parameters: the mass of the object, $M$, its spin, $J$, and its electric charge, $Q$. The spin $J$ can be replaced by the spin parameter $a = J/M$, or by the dimensionless spin parameter $a_* = J/M^2$. For macroscopic bodies, the electric charge is usually very small and can be neglected\footnote{For example, the electric charge is important for black holes with a mass $M \lesssim 10^{20}$~g in a ionized plasma~\cite{dolgov}. Such small black holes cannot be produced today in the Universe, but could have been produced in the early Universe, when the matter density was much higher.}. Hereafter, we will thus restrict our attention to Kerr space-times with $Q = 0$. A fundamental property of Kerr black holes is the Kerr bound $|a_*| \le 1$, which is the condition for the existence of the event horizon.
Observational evidence supporting the Carter-Israel conjecture is still quite weak~\cite{narayan}. Astronomical observations have led to the discovery of at least two classes of astrophysical black hole candidates\footnote{The existence of a third class of astrophysical black holes, intermediate mass objects with $M \sim 10^2 - 10^4$~$M_\odot$, is still quite controversial, because there are no reliable dynamical measurements of their masses.}: stellar-mass objects in X-ray binary systems ($M \sim 5 - 20$~$M_\odot$) and super-massive objects at the center of many galaxies ($M \sim 10^5 - 10^{10}$~$M_\odot$). All these objects are supposed to be Kerr black holes because they cannot be explained as anything else without introducing new physics. For example, stellar-mass black hole candidates in X-ray binary systems are too heavy to be neutron/quark stars for any reasonable matter equation of state~\cite{bh1a,bh1b}. The super-massive black hole candidate at the center of the Galaxy is too massive and compact to be a cluster of non-luminous bodies~\cite{bh2}. On the other hand, we do not have any observational evidence of the Kerr metric and of the existence of the event horizon.
New physics may instead be expected: it is indeed difficult to believe that space-time singularities can exist, even if behind an event horizon. So, the assumption of the Weak Cosmic Censorship conjecture seems to be a quite artificial trick, to exclude space-times in which new physics can be causally connected to distant observers. In other words, the conjecture would be motivated by our poor knowledge of the theory at high energy, but deviations from the Kerr space-times may be possible in Nature. Motivated by such a simple and general argument, one can consider the possibility that the Carter-Israel conjecture can be violated. Here, I will discuss the case of super-spinars, compact objects with $|a_*| > 1$. I will review the basic features of the accretion process onto super-spinars; more details on the subject can be found in the original papers~\cite{sim1, sim2, sim3}. Other observational properties of super-spinars were studied in~\cite{ss1,ss2,ss3}.
\section{Super-spinars}
The Weak Cosmic Censorship conjecture was originally motivated by the fact that space-times with naked singularities present several kinds of pathologies. One can, however, interpret such pathological features as signaling the need for new physics. For example, there is a connection between the existence of naked singularities and regions with closed time-like curves~\cite{nk-ctc}. The physical interpretation of space-times with causality violating regions has been recently investigated by some authors in Supergravity~\cite{sol1,sol2,sol3,sol4}. Here, one finds space-times which apparently cannot be ruled out as unphysical and where causality can be violated. The solution of this puzzle seems to lie in high-energy corrections of the theory: at least in some cases, one can expect that the space-time goes to a new phase and a domain wall forms. Across the domain wall, the metric is non-differentiable and the expected region with closed time-like curves arises from the naive continuation of the metric ignoring the domain wall. The latter can be made of very exotic stuff, like super-tubes~\cite{sol1,sol3,sol4} or fundamental strings~\cite{sol2}. It is also remarkable that we know several counterexamples which look physically reasonable and in which the collapsing matter starts from regular initial data and evolves into a naked singularity, see e.g. Refs.~\cite{cex1,cex2,cex3,cex4,cex5,cex6,cex7}.
The simplest object violating the Carter-Israel conjecture is probably the super-spinar~\cite{g-h}, a compact body with $|a_*| > 1$. In absence of a uniqueness theorem similar to the one for Kerr black holes in 4 dimensions, super-spinars may be quite complex objects, characterized by many parameters. Nevertheless, at first approximation one can still expect to be able to describe their gravitational field with the Kerr metric. The first two terms in a multipole moment expansion of the space-time correspond to the mass and the spin of the massive object, while its deformation is encoded in higher order moments, which are typically much less important. In other words, deviations from the Kerr metric are usually very small, as one can see in~\cite{tomi-sato}. If a compact object has $|a_*| > 1$, it cannot be a Kerr black hole. So, the Kerr bound can be used to test the Carter-Israel conjecture.
In absence of a complete theory of gravity, we have to take a phenomenological approach to study the astrophysical properties of super-spinars. They cannot be extremely compact, like a naked singularity, because otherwise they would be unstable due to the ergoregion instability~\cite{enrico}. It is probably reasonable to take their radius of order of their gravitational radius, which makes them more similar to a relativistic star made of very exotic matter than to a naked singularity.
Another issue is how super-spinars can be created. At present, it is not clear if it is possible to overspin an existing black hole~\cite{j-s}. If super-spinars can be the final product of the gravitational collapse of a star, then one should be able to obtain a super-spinning object from numerical simulations of a collapsing star. However, this is not an easy job and there are so many unknown ingredients (and unknown physics) that it is unlikely that we will get an answer in the near future. We can however notice that recent attempts to measure the spin of stellar-mass black hole candidates suggest that these objects can rotate very fast~\cite{mcclintock}\footnote{Let us notice that current estimates of black hole spin assume $|a_*| \le 1$ and cannot be used to say that these objects are not super-spinars. If we allow for any value of the spin parameter, we would obtain two estimates, one with $|a_*| \le 1$ and another with $|a_*| > 1$, because of the degeneracy discussed in~\cite{ss3}.}. Even though these measurements should be taken with great caution, the case of GRS~$1915+105$, whose spin parameter is estimated to be in the range $0.98 - 1$, is intriguing. Since the evolution of the spin parameter due to accretion should be negligible in this system, the estimated $a_*$ would reflect the initial spin of the object after its formation. One can thus think that the gravitational collapse of a star can produce a very rapidly rotating object. Since from the theory of stellar evolution we expect around $10^8$ stellar-mass black holes in the Galaxy, even a low probability of violating the bound $|a_*| \le 1$ may lead to a population of super-spinars in the Galaxy.
\section{Numerical study of the accretion process}
The study of the accretion process plays a fundamental role in the physics of compact objects, because it is the accretion process that determines how radiation is released by the accreting matter, and so what we can see from the compact object. It is thus not surprising that the accretion process onto Kerr black holes has been well studied in the literature and many research groups work on the subject~\cite{chak,font}. The quasi-steady-state configuration of adiabatic and spherically symmetric accretion onto a Schwarzschild black hole can be studied analytically~\cite{michel}. However, in general, a numerical approach is necessary. The first numerical hydrodynamics simulations of the accretion process onto black holes can be traced back to the work of Wilson in 1972~\cite{wilson}, and were then extended in~\cite{hsw1,hsw2}. After these works, the research was mainly devoted to the study of accretion from thick disks and tori, and to the study of the tori instabilities~\cite{h91,yokosawa,i-b,devilliers}.
In~\cite{sim1,sim2,sim3}, I studied numerically the accretion process in Kerr space-time with arbitrary values of the spin parameter $a_*$. I neglected the back-reaction of the fluid on the geometry of the space-time, as well as the increase in mass and the variation in spin of the central object due to accretion. Such an approximation is surely reasonable if we want to study a stellar-mass compact object in a binary system, because in this case the matter captured from the stellar companion is typically small in comparison with the total mass of the compact object. The results of these simulations should not be applied to long-term accretion onto a super-massive object at the center of a galaxy, where accretion makes the mass of the compact object increase by a few orders of magnitude from its original value. The accreting matter was modeled as a perfect fluid.
The master formulas are the equations of conservation of baryon number and of the fluid energy-momentum tensor
\begin{eqnarray}\label{eq-cons}
\nabla_\mu J^\mu = 0 \, , \quad
\nabla_\mu T^{\mu\nu} = 0 \, .
\end{eqnarray}
Here $J^\mu = \rho u^\mu$ and $T^{\mu\nu} = \rho h u^\mu u^\nu + p g^{\mu\nu}$, where $\rho$ is the rest-mass energy density, $p$ is the pressure, $u^\mu$ is the fluid 4-velocity, $h = 1 + \epsilon + p/\rho$ is the specific enthalpy, and $\epsilon$ is the specific internal energy density. In order to solve the system, an equation of state $p = p (\rho,\epsilon)$ must be specified.
The calculations were made with the relativistic hydrodynamics module of the publicly available code PLUTO~\cite{pluto1,pluto2}, properly modified for the case of curved space-time. I used the $3+1$ Eulerian formalism of Ibanez and collaborators~\cite{ibanez}. The line element of the space-time is written in the form
\begin{eqnarray}
ds^2 = - \left(\alpha^2 - \beta_i \beta^i\right) dt^2
+ 2 \beta_i dt dx^i + \gamma_{ij} dx^i dx^j \, ,
\end{eqnarray}
where $\alpha$ is the lapse function, $\beta^i$ is the shift vector, and $\gamma_{ij}$ is the 3-metric induced on each space-like slice. Here it is convenient to use two sets of variables. The {\it primitive variables} are
\begin{eqnarray}
{\bf V} = \left( \rho, v^i, p \right)^T
\end{eqnarray}
and are the quantities whose evolution and quasi-steady-state (if any) we want to determine. The hydrodynamical equations are instead solved in term of the {\it conserved variables}
\begin{eqnarray}
{\bf U} = \left( D, S_i, \tau \right)^T \, ,
\end{eqnarray}
which can be written in term of the primitive ones as $D = \rho W$, $S_i = \rho h W^2 v_i$, and $\tau = \rho h W^2 - D - p$, where $W$ is the Lorentz factor. The equations of conservation (\ref{eq-cons}) can now be written as
\begin{eqnarray}
\frac{1}{\sqrt{-g}} \left[
\frac{\partial}{\partial t}
\left( \sqrt{\gamma} \, {\bf U} \right)
+ \frac{\partial}{\partial x^i}
\left( \sqrt{-g} \, {\bf F}^i \right)
\right] = {\bf \mathcal S} \, ,
\end{eqnarray}
where ${\bf F}^i$ and ${\bf \mathcal S}$ are defined by
\begin{eqnarray}
{\bf F}^i &=& \Big(
D \left( v^i - \beta^i/\alpha \right) , \;
S_j \left(v^i - \beta^i/\alpha \right) + p \delta_j^i , \;
\tau \left(v^i - \beta^i/\alpha \right) + p v^i \;
\Big)^T \, , \\
{\bf \mathcal S} &=& \Big(
0 , \;
T^{\mu\nu} \left( \partial_\mu g_{\nu j}
- \Gamma^{\lambda}_{\mu\nu} g_{\lambda j} \right) , \;
\alpha \left( T^{\mu 0} \partial_\mu \ln\alpha
- T^{\mu\nu} \Gamma^{0}_{\mu\nu} \right) \;
\Big)^T \, .
\end{eqnarray}
\section{Results}
Roughly speaking, naked singularities are not hidden behind an event horizon because their gravitational field is too weak to trap light rays. Close to the expected naked singularity, the gravitational field may even be repulsive~\cite{rep1,rep2,rep3,rep4}. So, a quasi-steady-state accretion flow onto a naked singularity may be impossible: in some cases, the gas accumulates around the massive object, forming a high-density cloud that continues to grow~\cite{sim1, babichev}.
In~\cite{sim1,sim2,sim3}, I considered adiabatic and spherically symmetric (at large radii) accretion process onto Kerr black holes and Kerr super-spinars. The initial configuration is a static cloud of gas around the massive object and then the system evolves to find a quasi-steady-state configuration. Gas is injected into the computational domain from the outer boundary at a constant rate and isotropically. With this set-up, the two parameters determining the accretion process are the spin parameter, $a_*$, and the radius of the compact object, $R$.
One can distinguish three kinds of accretion~\cite{sim2,sim3}:
\begin{enumerate}
\item {\it Black hole accretion}. For black holes and super-spinars with $|a_*|$ moderately larger than 1, one finds the usual accretion process onto a compact object. For a given $R$, the increase in $|a_*|$ makes the accretion process more difficult: in the quasi-steady-state configuration, the velocity of the gas around the compact object is lower, while the density and the temperature are higher. The gravitational field indeed
becomes weaker for higher spin parameters, as one can easily understand by noticing that the radius of the event horizon of a black hole monotonically decreases with $a_*$. The difference, however, is very small and the exact value of the spin parameter does not significantly affect the process.
\item {\it Intermediate accretion}. As the spin parameter increases, the gravitational force around the super-spinar becomes weaker and even repulsive. Now the accretion process is significantly suppressed: the flow around the super-spinar becomes subsonic and the density and the temperature of the gas increase further.
\item {\it Super-spinar accretion}. For high values of the spin parameter, the accretion process is very different: matter accretes from the poles, while the repulsive gravitational field produces outflows around the equatorial plane (see Fig.~\ref{f0}). 3-dimensional simulations show that the production of outflows is a rather chaotic phenomenon, without the formation of a stable structure~\cite{sim3}.
\end{enumerate}
In Figs.~\ref{f1} and \ref{f2}, I have plotted the quantity $v^{(r)} = e^{(r)}_{i} v^i$, where $e^{(a)}_{b}$ is the tetrad of a locally non-rotating observer~\cite{bpt}. Figs.~\ref{f3} and \ref{f4} show the temperature profile around black holes and super-spinars. In these simulations, $R = 2.5$~$M$ and the gas has a polytropic index $\Gamma = 5/3$ (non-relativistic particles). However, the qualitative behavior of the accretion process does not depend on the gas equation of state. For $a_* = 0$, 1, and 1.5, we find a black hole-like accretion; for $a_* = 2$, an intermediate accretion; for $a_* = 2.9$, the accretion is of super-spinar type. When $a_* = 2.5$ and 2.8, the accretion is essentially of the second kind, but there is some very weak ejection of matter near the equatorial plane.
Unlike jets and outflows produced around black holes, here the outflows are produced by the repulsive gravitational force at a small distance from the super-spinar and are ejected around the equatorial plane. These outflows become more energetic for higher values of the spin parameter. In some circumstances, the amount of matter in the outflow is considerable, which can indeed significantly reduce the mass accretion rate. On the other hand, for lower values of the spin parameter, the outflows may not be energetic enough to be ejected at large radii and escape from the gravitational field of the object. In these cases, one finds a convective region around the super-spinar, where the ejected gas is pushed back by the accreting fluid. This possibility is shown in Fig.~\ref{f5}, for the case of a super-spinar with $a_* = 2.8$ and a gas made of relativistic particles ($\Gamma = 4/3$).
Since the repulsive gravitational force around super-spinars seems to be able to create collimated jets with high Lorentz factor, in~\cite{sim2} I put forward the possibility that long gamma-ray bursts might be explained by the formation of a super-spinar.
\begin{figure}[ht]
\includegraphics[width=17pc]{velvec.eps}
\hspace{2pc}
\begin{minipage}[b]{17pc}
\caption{\label{f0} Snapshot of the direction of the gas velocity around a super-spinar with $a_* = 3.0$, on a plane containing the axis of symmetry $z$. Here matter accretes from the poles, while the repulsive gravitational field produces outflows around the equatorial plane. The unit of length along the axes is $M$.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{17pc}
\includegraphics[width=17pc]{rvp25.ps}
\caption{\label{f1} Snapshot at $t = 500$~$M$ of the radial velocity as a function of the radial coordinate on the equatorial plane. $R = 2.5$~$M$, radial coordinate in units $M=1$.}
\end{minipage}
\hspace{2pc}
\begin{minipage}{17pc}
\includegraphics[width=17pc]{rvp25z.ps}
\caption{\label{f2} Snapshot at $t = 500$~$M$ of the radial velocity as a function of the radial coordinate along the $z$-axis. $R = 2.5$~$M$, radial coordinate in units $M=1$.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{17pc}
\includegraphics[width=17pc]{tp25.ps}
\caption{\label{f3} Snapshot at $t = 500$~$M$ of the temperature as a function of the radial coordinate on the equatorial plane. $R = 2.5$~$M$, temperature in GeV, radial coordinate in units $M=1$.}
\end{minipage}
\hspace{2pc}
\begin{minipage}{17pc}
\includegraphics[width=17pc]{tp25z.ps}
\caption{\label{f4} Snapshot at $t = 500$~$M$ of the temperature as a function of the radial coordinate along the $z$-axis. $R = 2.5$~$M$, temperature in GeV, radial coordinate in units $M=1$.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=17pc]{cloud.eps}
\hspace{2pc}
\begin{minipage}[b]{17pc}
\caption{\label{f5} Radial velocity of the gas around a super-spinar with $a_* = 2.8$ and $R = 2.5$~$M$.}
\end{minipage}
\end{figure}
\ack
I would like to thank Naoki Yoshida for careful reading of the first draft of this manuscript. This work was supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and by the JSPS Grant-in-Aid for Young Scientists (B) No. 22740147.
\section{Introduction}
The multiple antenna broadcast channel (BC) has recently been the subject of tremendous interest, primarily
due to the realization that such a channel can provide MIMO spatial multiplexing benefits without requiring
multiple antenna elements at the mobile devices \cite{Caire_IT03}. Indeed, it is now well known that dirty
paper coding (DPC) achieves the capacity region of the multiple antenna BC \cite{Weingarten_IT06}. However,
implementation of DPC requires significant additional complexity at both transmitter and receiver, and the
problem of finding practical dirty paper codes that approach the capacity limit is still unsolved.
On the other hand, linear precoding is a low complexity but sub-optimal transmission technique (with
complexity roughly equivalent to point-to-point MIMO) that is able to transmit the same number of data
streams as a DPC-based system. Linear precoding therefore achieves the same multiplexing gain (which
characterizes the slope of the capacity vs.~SNR curve) as DPC, but incurs an absolute rate/power offset
relative to DPC. The contribution of this work is the quantification of this rate/power offset.
In this work, we apply the high SNR affine approximation \cite{Shamai_IT01} to the sum rate capacity (DPC)
and to the linear precoding sum rate. Both approximations have the same slope (i.e., multiplexing gain), but
by characterizing the difference in the additive terms the rate/power offset between the two strategies is
determined. By averaging the per-channel realization rate offset over the iid Rayleigh fading distribution
we are able to derive very simple expressions for the average rate offset as a function of only the number of
transmit and receive antennas and users for systems in which the aggregate number of receive antennas is no
larger than the number of transmit antennas.
Note that previous work has analyzed the \textit{ratio} between the sum rate capacity and the linear
precoding sum rate \cite{Jindal_IT05_DPCvsTDMA}\cite{Shen_ISIT06}. In this work we alternatively study the
\textit{absolute difference} between these quantities, which appears to be a more meaningful metric precisely
because both strategies provide the same multiplexing gain.
In addition to sum rate, we also study weighted sum rate maximization (using DPC and linear precoding) and
provide simple expressions for the rate offsets in this scenario. One of the most interesting results is that
weighted sum rate (for either DPC or linear precoding) is maximized at asymptotically high SNR by
\textit{allocating power directly proportional to user weights.} A similar result was recently observed in
\cite{Lozano_Tulino_Verdu_ISSSTA} in the context of parallel single-user channels (e.g., for OFDMA systems).
Because the linear precoding strategies we study result in parallel channels, the result of
\cite{Lozano_Tulino_Verdu_ISSSTA} shows that it is asymptotically optimal to allocate power in direct
proportion to user weights whenever linear precoding is used. By showing that weighted sum rate maximization
when DPC is employed can also be simplified to power allocation over parallel channels, we are able to show
that the same strategy is also asymptotically optimal for DPC. To illustrate the utility of this simple yet
asymptotically optimal power allocation policy, we apply it to a system employing queue-based scheduling (at
finite SNR's) and see that it performs extremely close to the true optimal weighted sum rate maximization.
This paper is organized as follows: Section II presents the system model and Section III introduces the high
SNR capacity approximation from \cite{Shamai_IT01}. Section IV describes dirty paper coding and linear
precoding and derives simple expressions for their sum rates at high SNR, and in Section V the relative
rate/power offsets between DPC and linear precoding are computed. Section VI extends the analysis to weighted
sum rate maximization and considers a queue-based system with the weighted sum rate solution. We conclude in
Section VII.
\section{System Model}
We consider a $K$-user Gaussian MIMO BC in which the transmitter has $M$ antennas and each receiver has $N$
antennas with $M \geq KN$, i.e., the number of transmit antennas is no smaller than the aggregate number of
receive antennas. The received signal ${\bf y}_k$ of user $k$ is given by
\begin{equation} \label{eq:y_Hx_n}
{\bf y}_k = {\bf H}_k {\bf x}+{\bf n}_k, \qquad k=1,\cdots, K,
\end{equation}
where ${\bf H}_k(\in {\mathbb{C}}^{N \times M})$ is the channel gain matrix for user $k$, ${\bf x}$ is the
transmit signal vector having a power constraint ${\rm{tr}}(E[{\bf{xx}}^H ]) \le P$, and ${\bf n}_k$
$(k=1,\cdots,K)$ is complex Gaussian noise with unit variance per vector component (i.e., $E[{\bf n}_k{\bf
n}_k^H] = {\bf I}$). We assume that the transmitter has perfect knowledge of all channel matrices and each
receiver has perfect knowledge of its own channel matrix. For the sake of notation, the concatenation of the
channels is denoted by ${\bf H}^H = [{\bf H}_1^H \;{\bf H}_2^H\; \cdots \; {\bf H}_K^H](\in
\mathbb{C}^{M\times KN})$, which can be decomposed into row vectors as ${\bf H}^H = [{\bf h}_{1,1}^H \;{\bf
h}_{1,2}^H\; \cdots \; {\bf h}_{1,N}^H\; {\bf h}_{2,1}^H \;{\bf h}_{2,2}^H\; \cdots \; {\bf
h}_{2,N}^H\;\cdots\; {\bf h}_{K,N}^H]$, where ${\bf h}_{k,n} (\in \mathbb{C}^{1\times M})$ is the $n$th row
of ${\bf H}_k$. We develop rate offset expressions on a per realization basis as well as averaged over the
standard iid Rayleigh fading distribution, where the entries of ${\bf H}$ are iid complex Gaussian with unit
variance.
\emph{Notation}: Boldface letters denote matrix-vector quantities. The operations $\text{tr}(\cdot)$ and
$(\cdot)^H$ represent the trace and the Hermitian transpose of a matrix, respectively. The operations
$|\cdot|$ and $\|\cdot\|$ denote the determinant of a matrix and the Euclidean norm of a vector,
respectively. The operations $\E[\cdot]$ and $\I(\cdot)$ denote the expectation and the mutual information,
respectively.
\section{High SNR Approximation}
This section describes the key analytical tool used in the paper, namely the
affine approximation to capacity at high SNR developed by Shamai and Verd\'u
\cite{Shamai_IT01}.
At high SNR, the channel capacity $C(P)$ is well approximated by an affine function
of SNR ($P$):
\begin{eqnarray}
C(P) &=& \mathcal{S}_{\infty} \left(\log_2 P -\mathcal{L}_{\infty} \right) +o(1) \nonumber \\
&=& \mathcal{S}_{\infty} \left( \frac{P_{dB}}{3 dB} - \mathcal{L}_{\infty}\right) +o(1),
\label{eq:cap_affine}
\end{eqnarray}
where $\mathcal{S}_{\infty}$ represents the multiplexing gain,
$\mathcal{L}_{\infty}$ represents the power offset (in 3 dB units), and
the $o(1)$ term vanishes as $P \rightarrow \infty$.
The multiplexing gain $\mathcal{S}_{\infty}$
and the power offset $\mathcal{L}_{\infty}$ are defined as:
\begin{eqnarray}
\mathcal{S}_{\infty} &=& \lim_{P \rightarrow \infty}
\frac {C(P)}{\log_2(P)}, \\
\mathcal{L}_{\infty} &=& \lim_{P \rightarrow \infty}
\left( \log_2(P) - \frac{C(P)}{\mathcal{S}_{\infty}} \right).
\end{eqnarray}
This high SNR approximation is studied for point-to-point MIMO channels in
\cite{Lozano_IT05}. In this context the multiplexing gain $\mathcal{S}_{\infty}$ is well known to equal
the minimum of the number of transmit and receive antennas, and thus is
essentially independent of the fading environment. However, the power
offset $\mathcal{L}_{\infty}$ does depend on the actual fading
statistics (and possibly on the level of channel state information
available to the transmitter as well), and \cite{Lozano_IT05} provides
exact characterizations of these offset
terms for the most common fading models, such as iid Rayleigh fading,
spatially correlated fading, and Ricean (line-of-sight) fading.
Indeed, one of the key insights of \cite{Lozano_IT05} is the necessity to
consider these rate offset terms, because considering only the multiplexing
gain can lead to rather erroneous conclusions, e.g., spatial correlation
does not affect MIMO systems at high SNR.
In a similar vein, in this work we utilize the high SNR approximation to quantify the difference between
optimal dirty paper coding and simpler linear precoding in an iid Rayleigh fading environment. The
multiplexing gain is easily seen to be the same for both strategies, but a non-negligible difference exists
between the rate offsets. By investigating the differential offsets between these two strategies, we are
able to very precisely quantify the throughput degradation that results from using linear precoding rather
than the optimal DPC strategy in spatially white fading\footnote{Although we do not pursue this avenue in the
present publication, it would also be interesting to investigate the DPC-linear precoding offset in other
fading environments, e.g., Ricean and spatially correlated fading. However, one must be careful with respect
to channel models because some point-to-point MIMO models do not necessarily extend well to the MIMO
broadcast channel. For example, in point-to-point channels spatial correlation captures the effect of sparse
scattering at the transmitter and/or receiver and is a function of the angle-of-arrival. In a broadcast
channel, the angle-of-arrival is typically different for every receiver because they generally are not
physically co-located; as a result, using the same correlation matrix for all receivers is not well motivated
in this context.}.
Although the high SNR approximation is exact only at asymptotically high SNR, it is seen to provide very
accurate results for a wide range of SNR values, e.g., on the order of 5 dB and higher. Because multi-user
MIMO systems generally provide a substantial advantage over point-to-point systems (e.g., TDMA-based systems)
at moderate and high SNR's, the approximation is accurate in the range of interest.
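As a concrete illustration (a numerical sketch with arbitrary parameters,
added here for the reader), $\mathcal{S}_{\infty}$ and
$\mathcal{L}_{\infty}$ can be estimated from ergodic capacity samples at two
high SNR points:
\begin{verbatim}
# Estimate S_inf and L_inf of an iid Rayleigh M x N point-to-point
# channel (CSIR, uniform power) from samples at 30 and 40 dB.
import numpy as np

def ergodic_cap(M, N, P_dB, trials=2000):
    P, c = 10**(P_dB / 10.0), 0.0
    for _ in range(trials):
        H = (np.random.randn(N, M) + 1j*np.random.randn(N, M))/np.sqrt(2)
        c += np.linalg.slogdet(np.eye(N) + (P/M)*H @ H.conj().T)[1]
    return c / trials / np.log(2)          # bps/Hz

c30, c40 = ergodic_cap(4, 2, 30), ergodic_cap(4, 2, 40)
S_inf = (c40 - c30) / np.log2(10.0)        # slope over the 10 dB gap
L_inf = 3*np.log2(10.0) - c30 / S_inf      # log2(P) - C(P)/S_inf at 30 dB
print(S_inf, L_inf)                        # S_inf close to min(M,N) = 2
\end{verbatim}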
\section{Dirty Paper Coding vs.~Linear Precoding}
In this section we compute the affine approximation to the dirty paper coding sum rate and the linear
precoding sum rate using the high SNR approximation.
\subsection{Dirty Paper Coding}
Dirty paper coding (DPC) is a pre-coding technique that allows for pre-cancellation of interference at the
transmitter. Costa introduced DPC while considering an AWGN channel with additive interference known
non-causally at the transmitter but not at the receiver \cite{Costa_IT83}. DPC was applied to the MIMO
broadcast channel, where it can be used to pre-cancel multi-user interference, by Caire and Shamai and was
shown to achieve the sum capacity of the 2-user, $M>1$, $N=1$ MIMO broadcast channel \cite{Caire_IT03}. The
optimality of DPC was later extended to the sum capacity of the MIMO broadcast channel with an arbitrary
number of users and antennas \cite{Vishwanath_IT03}\cite{Viswanath_IT03}\cite{Yu_IT04_SumCapBC}, and more
recently has been extended to the full capacity region \cite{Weingarten_IT06}.
We now describe the transmit signal when DPC is utilized. Let ${\bf
s}_k (\in \mathbb{C}^{N\times 1})$ be the $N$-dimensional vector of
data symbols intended for user $k$ and ${\bf V}_k (\in
\mathbb{C}^{M\times N})$ be its precoding matrix. Then the transmit
signal vector ${\bf x}$ can (roughly) be represented as
\begin{equation} \label{eq:dirty_paper_sum}
{\bf x} = {\bf V}_1 {\bf s}_1 \oplus ({\bf V}_2 {\bf s}_2 \oplus \cdots \oplus ({\bf V}_{K-2} {\bf s}_{K-2} \oplus ({\bf V}_{K-1} {\bf s}_{K-1} \oplus {\bf V}_K {\bf s}_K))\cdots),
\end{equation}
where $\oplus$ represents the non-linear \emph{dirty paper sum}. Here we have assumed, without loss of
generality, that the encoding process is performed in descending numerical order. Dirty-paper decoding at the
$k$-th receiver results in cancellation of ${\bf V}_{k+1} {\bf s}_{k+1}, \ldots, {\bf V}_{K} {\bf s}_{K}$,
and thus the effective received signal at user $k$ is:
\begin{equation} \label{eq:dpc_y}
\tilde{\bf y}_k = {\bf H}_k {\bf V}_k {\bf s}_k + \sum_{j=1}^{k-1} {\bf H}_k {\bf V}_j {\bf s}_j + {\bf n}_k,
\end{equation}
where the second term is the multi-user interference that is not
cancelled by DPC. If the ${\bf s}_k$ are chosen Gaussian, user
$k$'s rate is given by:
\begin{eqnarray}
R_k
&=& \log_2 \frac{\left|{\bf I}+{\bf H}_k\left(\sum_{j=1}^k {\boldsymbol \Sigma}_j \right) {\bf H}_k^H\right|}{\left|{\bf I}+{\bf H}_k\left(\sum_{j=1}^{k-1} {\boldsymbol \Sigma}_j \right) {\bf
H}_k^H\right|},
\end{eqnarray}
where ${\boldsymbol \Sigma}_j = {\bf V}_j E[{\bf s}_j {\bf s}_j^H] {\bf V}_j^H$ denotes the transmit
covariance matrix of user $j$. Since DPC is optimal, the sum capacity of the MIMO BC can be expressed as:
\begin{equation} \label{eq-sum_cap_dpc}
{\cal C}_\text{DPC}({\bf H},P) = \max_{\sum_k \textrm{tr}({\boldsymbol \Sigma}_k)\le P}
\sum_{k=1}^K \log_2 \frac{\left|{\bf I}+{\bf H}_k\left(\sum_{j=1}^k {\boldsymbol \Sigma}_j \right) {\bf H}_k^H\right|}{\left|{\bf I}+{\bf H}_k\left(\sum_{j=1}^{k-1} {\boldsymbol \Sigma}_j \right) {\bf
H}_k^H\right|},
\end{equation}
where the maximization is performed over the transmit covariance
matrices ${\boldsymbol \Sigma}_1, {\boldsymbol \Sigma}_2, \cdots,
{\boldsymbol \Sigma}_K$.
The duality of the MIMO broadcast channel and the MIMO multiple
access channel (MAC) allows the sum capacity to alternatively be
written as \cite{Vishwanath_IT03}:
\begin{equation} \label{eq:sum_cap}
{\cal C}_\text{DPC}({\bf H},P) = \max_{\sum_k \textrm{tr}({\bf Q}_k)\leq P} \log_2
\left|{\bf I}+\sum_{k=1}^K {\bf H}_k^H{\bf Q}_k {\bf H}_k
\right|,
\end{equation}
where ${\bf Q}_k$ represent the $N \times N$ transmit covariance
matrices in the dual MAC.
No closed-form solution to either (\ref{eq-sum_cap_dpc}) or to
(\ref{eq:sum_cap}) (which is a convex problem) is known to exist,
but it has been shown that ${\cal C}_\text{DPC}({\bf H},P)$
converges (absolutely) to the capacity of the point-to-point MIMO
channel with transfer matrix ${\bf H}$ whenever $M \geq KN$:
\begin{theorem}[Theorem 3 in \cite{Caire_IT03}] \label{thm:sum_rate_dpc}
When $M\ge KN$ and ${\bf H}$ has full row rank,
\begin{equation}
\lim_{P\to\infty} \left[{\cal C}_\text{DPC}({\bf H},P)
-\log_2\left|{\bf I}+ \frac{P}{KN} {\bf H}^H {\bf H} \right| \right]=0.
\end{equation}
\end{theorem}
We are now able to make a few important observations regarding the
optimal covariance matrices at high SNR. Since
\begin{equation} \label{eq:sum_cap_mod}
\log_2 \left|{\bf I}+ \frac{P}{KN} \sum_{k=1}^K {\bf H}_k^H {\bf H}_k
\right| = \log_2\left|{\bf I}+ \frac{P}{KN} {\bf H}^H {\bf H}
\right|,
\end{equation}
choosing each of the dual MAC covariance matrices as ${\bf Q}_k = \frac{P}{KN} {\bf I}$ in \eqref{eq:sum_cap}
achieves sum capacity at asymptotically high SNR. Thus, uniform power allocation across the $KN$ antennas in
the dual MAC is asymptotically optimal. It is also possible to determine the optimal form of the downlink
covariance matrices ${\boldsymbol \Sigma}_1, \ldots, {\boldsymbol \Sigma}_K$, or equivalently of the downlink
precoding matrices ${\bf V}_1, \ldots, {\bf V}_K$. When $N=1$, Theorem 3 of \cite{Caire_IT03} shows that a
technique referred to as \emph{zero-forcing DPC} asymptotically achieves sum capacity. Zero-forcing DPC,
which is implemented via the QR-decomposition of the concatenated channel matrix ${\bf H}$ in
\cite{Caire_IT03}, implies that the precoding matrices ${\bf V}_1, \ldots, {\bf V}_K$ are chosen to
completely eliminate multi-user interference, and thus to satisfy ${\bf H}_k {\bf V}_j = 0$ for all $j < k$.
Because DPC eliminates some of the multi-user interference terms, ${\bf V}_K$ has no constraint on it, ${\bf
V}_{K-1}$ must be orthogonal to ${\bf H}_K$, ${\bf V}_{K-2}$ must be orthogonal to ${\bf H}_{K-1}$ and ${\bf H}_K$, and
so forth. If multi-user interference is eliminated, then the system decouples into $K$ parallel channels, and
simply using equal power allocation across all of the channels is asymptotically optimal due to the well
known fact that waterfilling over parallel channels provides no advantage at high SNR.
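The zero-forcing DPC construction is compact enough to sketch directly (an
illustration we add, with arbitrary parameters; note that here user 1 is
encoded \emph{first}, i.e., the reverse of the ordering in
\eqref{eq:dirty_paper_sum}):
\begin{verbatim}
# ZF-DPC beams for N = 1 via a QR decomposition: with H^H = QR,
# the beams v_j = q_j satisfy h_k q_j = 0 for j > k, so user k sees
# interference only from beams j < k, which DPC pre-cancels when
# user 1 is encoded first.
import numpy as np

M, K = 4, 4
H = (np.random.randn(K, M) + 1j*np.random.randn(K, M)) / np.sqrt(2)

Q, R = np.linalg.qr(H.conj().T)    # reduced QR: Q is M x K
eff = H @ Q                        # equals R^H: lower triangular
print(np.round(np.abs(eff), 3))    # zeros above the diagonal
\end{verbatim}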
As a result of Theorem \ref{thm:sum_rate_dpc}, an affine approximation for the sum rate can be found as:
\begin{equation}
{\cal C}_\text{DPC}({\bf H},P) \cong KN\log_2 P -KN\log_2 KN+
\log_2\left|{\bf H}{\bf H}^H \right|,
\label{eq:sum_rate_dpc_appr}
\end{equation}
where $\cong$ refers to equivalence in the limit (i.e., the difference between both sides converges to zero
as $P\to\infty$). Since the MIMO broadcast and the $M \times KN$ point-to-point MIMO channel are equivalent
at high SNR, the high SNR results developed in \cite{Lozano_IT05} directly apply to the sum capacity of the
MIMO broadcast channel. It is important to be careful regarding the ordering of the equivalent
point-to-point MIMO channel: due to the assumption that $M \geq KN$, the MIMO broadcast is equivalent to the
$M \times KN$ MIMO channel \textit{with CSI at the transmitter}, which is equivalent to the $KN \times M$
MIMO channel \textit{with or without CSI at the transmitter}. When $M > KN$, the level of CSI at the
transmitter affects the rate offset of the $M \times KN$ point-to-point MIMO channel. Finally, notice that
the high SNR sum rate capacity only depends on the product of $K$ and $N$ and not on their specific values;
this is not the case for linear precoding.
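The accuracy of the affine approximation \eqref{eq:sum_rate_dpc_appr} is
easy to check numerically (a sketch with arbitrary illustrative parameters):
\begin{verbatim}
# Equal-power dual-MAC sum rate vs. the affine approximation, for
# one iid Rayleigh realization at P = 30 dB.
import numpy as np

M, K, N = 4, 4, 1
P = 10**(30 / 10.0)
H = (np.random.randn(K*N, M) + 1j*np.random.randn(K*N, M))/np.sqrt(2)

exact = np.linalg.slogdet(np.eye(M)
        + (P/(K*N)) * H.conj().T @ H)[1] / np.log(2)
affine = (K*N*np.log2(P) - K*N*np.log2(K*N)
          + np.linalg.slogdet(H @ H.conj().T)[1] / np.log(2))
print(exact, affine)               # nearly identical at high SNR
\end{verbatim}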
\subsection{Linear Precoding}
Linear precoding is a low-complexity, albeit sub-optimal, alternative to DPC.
When linear precoding is used, the transmit signal vector ${\bf x}$ is a
linear function of the symbols
${\bf s}_k (\in \mathbb{C}^{N\times 1}),\;\;k=1,\cdots,K$:
\begin{equation}
{\bf x}=\sum_{k=1}^K {\bf V}_k {\bf s}_k,
\end{equation}
where ${\bf V}_k (\in\mathbb{C}^{M\times N})$ is the precoding matrix for user $k$. This expression
illustrates linear precoding's complexity advantage: if DPC is used, the transmit signal is formed by
performing dirty-paper sums, which are complex non-linear operations, whereas linear precoding requires only
standard linear operations. The resulting received signal for user $k$ is given by
\begin{equation}
{\bf y}_k = {\bf H}_k {\bf V}_k {\bf s}_k + \sum_{j\ne k}{\bf H}_k {\bf V}_j {\bf s}_j + {\bf n}_k,
\label{eq:y_precoding}
\end{equation}
where the second term in \eqref{eq:y_precoding} represents the
multi-user interference. If single-user detection and Gaussian
signalling are used, the achievable rate of user $k$ is:
\begin{equation}
R_k = \I ({\bf s}_k ; {\bf y}_k)
= \log_2 \frac{\left|{\bf I}+{\bf H}_k\left(\sum_{j =1}^{K} {\boldsymbol \Sigma}_j \right) {\bf H}_k^H\right|}
{\left|{\bf I}+{\bf H}_k\left(\sum_{j \ne k} {\boldsymbol \Sigma}_j \right) {\bf
H}_k^H\right|}.
\end{equation}
Since DPC is not used, each user is subject to multi-user
interference from every other user's signal. As a result, the
precoding matrices must satisfy very stringent conditions in order
to eliminate multi-user interference. Note that eliminating
multi-user interference is desirable at high SNR in order to prevent
interference-limited behavior.
In this paper we consider two linear precoding schemes that
eliminate multi-user interference when $M\ge KN$: zero-forcing (ZF)
and block diagonalization (BD). The precoding matrices $\{{\bf
V}_j\}_{j=1}^K$ for BD are chosen such that for all $j (\ne k) \in
[1, K]$,
\begin{equation}
{\bf H}_k {\bf V}_j = {\bf O},
\end{equation}
while those for ZF are chosen so that
\begin{eqnarray}
&{\bf h}_{k,n} {\bf v}_{j,l} = 0, &\quad \forall j (\ne k) \in [1, K], \;\; \forall n, l\in [1, N],\\
&{\bf h}_{k,n} {\bf v}_{k,l} = 0, &\quad \forall l (\ne n) \in [1, N],
\end{eqnarray}
where ${\bf v}_{j,l}$ denotes the $l$th column vector of ${\bf V}_j$. Consequently, performing ZF in a system
with $K$ users with $N(>1)$ antennas is equivalent to performing ZF in a channel with $KN$ single antenna
receivers. Note that ${\bf H}$ having full row rank is sufficient to ensure ZF and BD precoding matrices
exist. In iid Rayleigh fading ${\bf H}$ has full row rank with probability one.
\subsubsection{Zero-forcing}
When ZF is employed, there is no multi-user or inter-antenna interference. Then the received signal at the
$n$th antenna of user $k$ is given by
\begin{equation}
y_{k,n} = {\bf h}_{k,n} {\bf v}_{k,n} {s}_{k,n} + n_{k,n}, \qquad n=1, \cdots, N,
\end{equation}
where $s_{k,n}$ and $n_{k,n}$ denote $n$th component of ${\bf s}_k$ and ${\bf n}_k$, respectively. Thus, ZF
converts the system into $KN$ parallel channels with effective channel $g_{k,n} ={\bf h}_{k,n} {\bf
v}_{k,n}$. Sum rate is maximized by optimizing power allocation across these parallel channels:
\begin{equation} \label{eq:c_sum_zf}
{\cal C}_\text{ZF}({\bf H},P)= \max_{\sum_k\sum_n P_{k,n}\le P}
\sum_{k=1}^{K} \sum_{n=1}^N \log_2 \left(1+P_{k,n}|g_{k,n}|^2 \right).
\end{equation}
Since the optimum power allocation policy converges to uniform power at asymptotically high SNR
\cite{Jindal_ISIT05}, we have:
\begin{equation}
{\cal C}_\text{ZF}({\bf H},P) \cong KN\log_2 P -KN\log_2 KN+ \log_2\prod_{k=1}^K\prod_{n=1}^N |g_{k,n} |^2.
\label{eq:sum_rate_zf_appr}
\end{equation}
This approximation is identical to that for DPC in (\ref{eq:sum_rate_dpc_appr}) except for the final constant
term.
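A numerical sketch of the zero-forcing sum rate with uniform power loading
(arbitrary illustrative parameters; the beams are the normalized columns of
the pseudo-inverse of ${\bf H}$):
\begin{verbatim}
# ZF sum rate with P_{k,n} = P/(KN) and unit-norm zero-forcing
# beams taken from the pseudo-inverse of H.
import numpy as np

M, K, N = 6, 6, 1
P = 10**(20 / 10.0)
H = (np.random.randn(K*N, M) + 1j*np.random.randn(K*N, M))/np.sqrt(2)

V = np.linalg.pinv(H)                  # zero-forcing directions
V = V / np.linalg.norm(V, axis=0)      # unit-power beams
g2 = np.abs(np.diag(H @ V))**2         # effective gains |g_{k,n}|^2
print(np.sum(np.log2(1 + (P/(K*N)) * g2)))
\end{verbatim}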
\subsubsection{Block Diagonalization}
When BD is employed, there is no multi-user interference because the precoding matrix for BD is chosen to be
${\bf H}_k {\bf V}_j = {\bf O}$ for $k\ne j$. Then the received signal for user $k$ is given by
\begin{equation}
{\bf y}_k = {\bf H}_k {\bf V}_k {\bf s}_k + {\bf n}_k.
\end{equation}
Thus, BD converts the system into $K$ parallel MIMO channels with effective channel matrices ${\bf G}_k={\bf
H}_k{\bf V}_k$, $k=1,\cdots,K$. The BD sum rate is given by \cite{Choi_WIRELESS04}\cite{Spencer_SP04}
\begin{equation} \label{eq:sum_bd}
{\cal C}_\text{BD} ({\bf H},P) = \max\limits_{{\bf Q}_k : \sum\limits_{k=1}^K \text{tr}\{{\bf Q}_k\} \le P}\;\;\;\sum_{k=1}^K \log_2 \left|{\bf I}+ {\bf G}_k{\bf Q}_k{\bf
G}_k^H \right|,
\end{equation}
and the optimal rate is achieved asymptotically by uniform power allocation at high SNR since the channel can
be decomposed into parallel channels. Hence, the sum rate is asymptotically given by
\begin{equation}
{\cal C}_\text{BD} ({\bf H},P) \cong KN \log_2 P - KN\log_2 KN + \log_2 \prod_{k=1}^K |{\bf G}_k^H{\bf
G}_k|. \label{eq:sum_rate_bd_appr}
\end{equation}
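The BD precoding matrices can likewise be sketched via the null space of
the other users' stacked channels, computed from an SVD (an illustration we
add, with arbitrary parameters):
\begin{verbatim}
# BD precoders: V_k spans the null space of the other users'
# channels, so that H_j V_k = 0 for all j != k.
import numpy as np

M, K, N = 6, 3, 2
H = (np.random.randn(K, N, M) + 1j*np.random.randn(K, N, M))/np.sqrt(2)

def bd_precoder(k):
    Hbar = np.vstack([H[j] for j in range(K) if j != k])  # (K-1)N x M
    Vh = np.linalg.svd(Hbar)[2]
    return Vh[(K-1)*N:].conj().T[:, :N]    # null-space basis, M x N

for k in range(K):
    Vk = bd_precoder(k)
    leak = max(np.abs(H[j] @ Vk).max() for j in range(K) if j != k)
    print(k, leak)                         # ~0: interference nulled
\end{verbatim}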
\subsection{Equivalent MIMO Interpretation}
\label{sec-equiv_mimo}
Due to the properties of iid Rayleigh fading,
systems employing either zero-forcing or block diagonalization are equivalent to parallel point-to-point MIMO
channels, as shown in \cite{Choi_WIRELESS04}. When ZF is used, the precoding vector for each receive antenna
(i.e., each row of the concatenated channel matrix ${\bf H}$) must be chosen orthogonal to the other $KN-1$
rows of ${\bf H}$. Due to the isotropic nature of iid Rayleigh fading, this orthogonality constraint
consumes $KN-1$ degrees of freedom at the transmitter, and reduces the channel from the $1 \times M$ vector
${\bf h}_{k,n}$ to a $1 \times (M - KN + 1)$ Gaussian vector. As a result, the effective channel norm
$|g_{k,n}|^2$ of each parallel channel is chi-squared with $2(M-KN+1)$ degrees of freedom (denoted
$\chi^2_{2(M-KN+1)}$). Therefore, a ZF-based system with uniform power loading is exactly equivalent (in
terms of ergodic throughput) to $KN$ parallel $(M-KN+1) \times 1$ MIMO channels (with CSIT).
When BD is used, the orthogonality constraint consumes $(K-1)N$ degrees of freedom. This reduces the channel
matrix ${\bf H}_k$, which is originally $N \times M$, to a $N \times (M - (K-1)N)$ complex Gaussian matrix.
As a result, the $N \times N$ matrix ${\bf G}_k^H {\bf G}_k$ is Wishart with $M-(K-1)N$ degrees of freedom,
and therefore a BD-based system is equivalent to $K$ parallel $(M- (K-1)N) \times N$ parallel MIMO channels
(with CSIT).
Finally, when DPC is used, the MIMO broadcast channel is equivalent to the $M \times KN$ point-to-point MIMO
channel, where $M \geq KN$ and CSIT is again assumed. Note that a MIMO channel of this dimension can be
interpreted as a series of parallel channels as well: in this case, the $M \times KN$ channel is equivalent
to $M \times 1$, $(M - 1) \times 1$, \ldots, $(M - KN + 1) \times 1$ channels in parallel \cite{Foschini_Gans}.
For all three cases, the MIMO equivalence is exact when uniform
power loading is used with ZF ($P_{k,n} = \frac{P}{KN}$ for all $k,n$ in
(\ref{eq:c_sum_zf})), BD
(${\bf Q}_k = \frac{P}{KN} {\bf I}$ for all $k$ in (\ref{eq:sum_bd})),
and DPC ( ${\bf Q}_k = \frac{P}{KN} {\bf I}$ for all $k$ in (\ref{eq:sum_cap})).
If optimal power allocation is performed, for either ZF, BD, or DPC,
the MIMO broadcast systems can achieve a larger ergodic throughput
than the MIMO equivalent at finite SNR.
However, because waterfilling provides a vanishing benefit as SNR is increased,
this advantage disappears at asymptotically high SNR.
The equivalent MIMO channels are summarized in Table
\ref{tbl:summary_sum_rates} and illustrated in
Fig.~\ref{fig:summary_sum_rates} for $M=7$, $N=2$, $K=3$. In this
case ZF is equivalent to 6 parallel $2 \times 1$ channels, BD is
equivalent to 3 parallel $3 \times 2$ channels, and DPC is
equivalent to a $7 \times 6$ channel. The absolute difference in
throughput at asymptotically high SNR is indeed due to the difference
in the degrees of freedom in the available parallel channels, as
made precise in the following section.
\begin{table}
\centering
\caption{Sum rates at high SNR and their equivalent MIMO interpretation}
\label{tbl:summary_sum_rates}
\begin{tabular}{|c|c|c|}
\hline
& ${\cal C}({\bf H},P)$ & MIMO Interpretation \\
\hline \hline
DPC & $\log\left|\frac{P}{KN}{\bf H}{\bf H}^H\right|$ & one $M\times KN$ \\
\hline
BD & $\sum_{k=1}^K \log \left| \frac{P}{KN} {\bf G}_k^H {\bf G}_k\right|$ & $K$ parallel $(M-(K-1)N)\times N$ \\
\hline
ZF & $\sum_{k=1}^{K} \sum_{n=1}^N \log\left(\frac{P}{KN} |g_{k,n}|^2\right)$ & $KN$ parallel $(M-KN+1)\times 1$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{bc_equivalent}
\caption{The broadcast channel with $M=7$, $N=2$, and $K=3$ can be interpreted in terms of its sum rate as (a) $7\times 6$ point-to-point MIMO channel when
DPC is employed, (b) 3 parallel $3\times 2$ MIMO channels when BD is employed, and (c) 6 parallel $2\times 1$ MISO channels when ZF
is employed.}
\label{fig:summary_sum_rates}
\end{figure}
Our analysis is limited to channels in which $M \geq KN$. If $M < KN$, i.e., there are strictly less
transmit antennas than aggregate receive antennas, then no MIMO equivalent channel exists for either DPC or
linear precoding. The sum capacity (DPC) is smaller than the capacity of the $M \times KN$ (forward)
cooperative channel (in which CSIT is not required at high SNR), but is larger than the capacity of the
reverse $KN \times M$ cooperative channel without CSIT. Zero forcing and block diagonalization are clearly
only feasible when the number of data streams is no greater than $M$. Thus, if there are more than $M$
receive antennas, some form of selection (of users and possibly of the number of data streams per receiver)
must be performed. As a result of these complications, it does not appear that the high SNR framework will
yield closed-form solutions for either DPC or linear precoding when $M < KN$.
\section{Sum Rate Analysis}
This section quantifies the sum rate degradation incurred by linear precoding relative to DPC. In terms of
the high SNR approximation, this rate offset is essentially equal to the difference between the $\mathcal{L}_{\infty}$ terms
for DPC and linear precoding.
\subsection{DPC vs.~ZF}
We define the rate loss as the asymptotic (in SNR) difference between the sum rate capacity and the zero
forcing sum rate:
\begin{equation}
\beta_\text{DPC-ZF}({\bf H}) \triangleq \lim_{P\to\infty} \left[{\cal C}_\text{DPC}({\bf
H},P) - {\cal C}_\text{ZF}({\bf H},P)\right].
\end{equation}
Since each of the capacity curves has a slope of $\frac{KN}{3}$ in
units of bps/Hz/dB, this rate offset
(i.e., the vertical offset between capacity vs. SNR curves) can be
immediately translated into a power offset (i.e., a horizontal
offset): $\Delta_\text{DPC-ZF}({\bf
H})=\frac{3}{KN}\beta_\text{DPC-ZF}({\bf H})$ dB. Because
$\Delta_\text{DPC-ZF}$ is in dB units, we clearly have
$\Delta_\text{DPC-ZF}({\bf H})=3({\cal
L}_\infty^\text{ZF}({\bf H})-{\cal L}_\infty^\text{DPC}({\bf H}))$, which implies
\begin{eqnarray}
\mathcal{L}_{\infty}^\text{ZF} &=& \mathcal{L}_{\infty}^\text{DPC} + \frac{1}{3} \Delta_\text{DPC-ZF} \\
&=& \mathcal{L}_{\infty}^\text{DPC} + \frac{1}{KN} \beta_\text{DPC-ZF} \label{eq-linf_zf}
\end{eqnarray}
From the affine approximation to DPC and ZF sum rate found in \eqref{eq:sum_rate_dpc_appr} and
\eqref{eq:sum_rate_zf_appr}, the rate loss incurred by ZF is:
\begin{equation}
\beta_\text{DPC-ZF}({\bf H})= \log_2 \frac{|{\bf H}^H{\bf H}|}{\prod_{k=1}^{K}\prod_{n=1}^N |g_{k,n}|^2}.
\end{equation}
While the above metric is the rate loss per realization, we are more interested in the average rate offset
across the fading distribution:
\begin{eqnarray} \label{eq-dpc-zf_offset_defn}
\bar{\beta}_\text{DPC-ZF} = \E_{\bf H} \left[\beta_\text{DPC-ZF}({\bf H}) \right],
\end{eqnarray}
which allows a comparison of ergodic (over the fading distribution) throughput. Likewise, the average power
offset is denoted as $\bar{\Delta}_\text{DPC-ZF}$ and can be immediately calculated in the same fashion.
Under iid Rayleigh fading, the matrix ${\bf H}{\bf H}^H$ is complex Wishart with $M$ degrees of freedom while
the $|g_{k,n}|^2$ are identically $\chi^2_{2(M-KN+1)}$, as explained in Section \ref{sec-equiv_mimo}.
The key to computing the average offset is the following closed form
expression for the expectation of the log determinant of a Wishart matrix:
\begin{lemma}[Theorem 2.11 of \cite{Tulino_Book04}] \label{lem-wishart}
If $m \times m$ matrix ${\bf H}{\bf H}^H$ is complex Wishart distributed with $n$ ($\geq m$) degrees of
freedom (d.o.f), then:
\begin{equation}
\E\Big[\log_e \left|{\bf H}{\bf H}^H \right|\Big] = \sum_{l=0}^{m-1}\psi(n-l),
\end{equation}
where $\psi(\cdot)$ is Euler's digamma function, which satisfies
\begin{eqnarray}
\psi(m) = \psi(1) + \sum_{l=1}^{m-1} \frac{1}{l}
\end{eqnarray}
for positive integers $m$ and $\psi(1)\approx -0.577215$.
\end{lemma}
This result can be directly applied to chi-squared random variables by noting that a $1 \times 1$ complex
Wishart matrix with $n$ degrees of freedom is $\chi^2_{2n}$:
\begin{equation}
\E[\log_e \chi^2_{2n}] = \psi(n).
\end{equation}
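For instance, for $n=3$ the recursion gives $\psi(3) = \psi(1) + 1 + \frac{1}{2} \approx 0.9228$, so a $\chi^2_{6}$ random variable (e.g., the squared norm of a 3-dimensional iid complex Gaussian vector) satisfies $\E[\log_e \chi^2_{6}] \approx 0.9228$.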
Using Lemma \ref{lem-wishart} we can compute the average rate offset
in closed form:
\begin{theorem} \label{thm:exp_beta_zf}
The expected loss in Rayleigh fading due to zero-forcing is
given by
\begin{equation}
\boxed{\bar{\beta}_\text{DPC-ZF}(M,KN) =\log_2 e \sum_{j=1}^{KN-1} \frac{j}{M-j}\quad \text{(bps/Hz)}. }\label{eq:E_beta_zf}
\end{equation}
\end{theorem}
\vspace{5pt}
\begin{proof}
Since ${\bf H}{\bf H}^H$ is $KN \times KN$ Wishart with $M$ d.o.f, and $|g_{k,n}|^2$ is $\chi^2_{2(M-KN+1)}$,
Lemma 1 applied to (\ref{eq-dpc-zf_offset_defn}) gives:
\begin{eqnarray}
\bar{\beta}_\text{DPC-ZF} &=&\E\left[\log_e |{\bf H}^H{\bf H}|
\right] - KN \cdot \E\left[\log_e |g_{1,1}|^2\right] \\
&=& \log_2 e \left[ \left(\sum_{l=0}^{KN-1}\psi(M-l)\right)-KN \psi(M-KN+1) \right] \label{eq-dpc_zf_proof1}
\end{eqnarray}
By expanding the digamma function and performing the algebraic manipulations shown in Appendix
\ref{sec:pf_thm:exp_beta_zf}, the form \eqref{eq:E_beta_zf} can be reached.
\end{proof}
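As a quick illustration of Theorem \ref{thm:exp_beta_zf}, consider $M=KN=5$:
\begin{equation*}
\bar{\beta}_\text{DPC-ZF}(5,5) = \log_2 e \left(\frac{1}{4} + \frac{2}{3} + \frac{3}{2} + \frac{4}{1}\right) \approx 9.26 \quad \text{(bps/Hz)},
\end{equation*}
which corresponds to the power offset $\bar{\Delta}_\text{DPC-ZF} = \frac{3}{5}\cdot 9.26 \approx 5.55$ dB; these are exactly the values used in the numerical example of Fig.~\ref{fig:dpc_zf_linf_55_105}.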
Using this result we easily get an expression for the rate offset $\mathcal{L}_{\infty}^\text{ZF}(M,KN)$ by plugging into
\eqref{eq-linf_zf}
\begin{eqnarray}
\mathcal{L}_{\infty}^\text{ZF}(M,KN) &=& \mathcal{L}_{\infty}^\text{DPC}(M,KN) + \frac{1}{KN}
\bar{\beta}_\text{DPC-ZF}(M,KN) \\
&=& \mathcal{L}_{\infty}^\text{MIMO}(KN,M) + \frac{\log_2 e}{KN} \sum_{j=1}^{KN-1} \frac{j}{M-j},
\end{eqnarray}
where $\mathcal{L}_{\infty}^\text{MIMO}(KN,M)$ is the rate offset of a $KN$ transmit antenna, $M$ receive antenna MIMO
channel in iid Rayleigh fading, which is defined in Proposition 1 of \cite{Lozano_IT05}.
When the total number of receive antennas is equal to $M$ (i.e., $M=KN$),
ZF incurs a rather large loss relative to DPC that can be approximated as:
\begin{equation} \label{eq:beta_zf_M_K_equal}
\bar{\beta}_\text{DPC-ZF} (M) \approx M\log_2 M \quad \text{(bps/Hz)}
\end{equation}
in the sense that the ratio of both sides converges to one as $M$ grows large
(see Appendix \ref{sec:pf_eq:beta_zf_M_K_equal} for the proof).
In this scenario, the ZF sum
rate is associated with the capacity of $M$ parallel $1 \times 1$ (SISO)
channels while the DPC sum rate is associated with a $M\times M$
MIMO channel. This corresponds to a power offset of $3\log_2 M$ (dB),
which is very significant when $M$ is large.
Note that the approximation $3\log_2 M$ (dB) overstates the power penalty by 1 to 1.5 dB for reasonable values of
$M$ ($<20$), but does capture the growth rate. Such a large penalty is not surprising, since the use of
zero-forcing requires inverting the $M\times M$ matrix $\bf H$, which is poorly conditioned with high
probability when $M$ is large.
We can also consider the asymptotic ZF penalty when the number of transmit
antennas is much larger than the number of receive antennas.
If the number of users and transmit antennas are taken to infinity at a
fixed ratio according to $M=\alpha KN$ with $KN\to\infty$ for some $\alpha >1$,
then the power offset between DPC and ZF converges to a constant:
\begin{theorem} \label{thm:asymp_zf_penalty}
For $M=\alpha KN$ with $\alpha >1$, $KN\to \infty$, the asymptotic power penalty
for ZF is given by
\begin{equation} \label{eq:penalty_zf}
\bar{\Delta}_\text{DPC-ZF}(\alpha) = -3 \left(\log_2 e + \alpha \log_2 \left(1-\frac{1}{\alpha} \right) \right)\qquad\text{(dB)}.
\end{equation}
\end{theorem}
\begin{proof}
See Appendix \ref{sec:pf_thm_asymp_zf_penalty}.
\end{proof}
This power offset is incurred due to the fact that the DPC sum rate increases according to a $KN\times \alpha
KN$ MIMO channel capacity while the ZF sum rate increases according to $KN$ parallel $(\alpha-1)KN \times 1$
MISO channels. For example, if $\alpha=2$, or the number of transmit antennas is double the number of
receivers, the zero-forcing penalty is no larger than 1.67 dB, and monotonic convergence to this asymptote is
observed. Thus for large systems, ZF is a viable low-complexity alternative to DPC if the number of transmit
antennas can be made suitably large. A similar conclusion was drawn in \cite{Hochwald_Allerton02} where the
ratio of the rates achievable with ZF relative to the sum capacity is studied. Note that using ZF on the MIMO
downlink channel is identical to using a decorrelating receiver on the multiple antenna uplink channel or in
a randomly spread CDMA system; as a result Theorem \ref{thm:asymp_zf_penalty} is identical to the asymptotic
performance of the decorrelating CDMA receiver given in Eq.~(152) of \cite{Shamai_IT01}.
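As a quick check, plugging $\alpha=2$ into \eqref{eq:penalty_zf} gives $\bar{\Delta}_\text{DPC-ZF}(2) = -3\left(\log_2 e + 2\log_2 \tfrac{1}{2}\right) = -3(1.4427-2) \approx 1.67$ dB, as stated.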
Figure \ref{fig:dpc_zf_linf_55_105} plots the ZF and DPC throughputs for two five receiver systems. In a
five-transmit-antenna/five-receiver system ($M=K=5, N=1$), Theorem \ref{thm:exp_beta_zf} gives a throughput
penalty of 9.26 bps/Hz, which is equivalent to a power penalty of 5.55 dB (whereas the approximation in
\eqref{eq:beta_zf_M_K_equal} gives 6.97 dB). Although this penalty is exact only in the asymptote, the figure
shows that it gives accurate results throughout the entire SNR range. Throughput curves for a $(M=10, K=5,
N=1)$ system are also shown. The ZF power penalty is only 1.26 dB, which is reasonably close to the
asymptotic penalty of 1.67 dB given by Theorem \ref{thm:asymp_zf_penalty} for $\alpha=2$. Increasing the
number of transmit antennas from 5 to 10 shifts the sum capacity curve by 5.59 dB, but improves the
performance of ZF by 9.88 dB.
This is because ZF benefits both from the improvement in the ${\cal L}_\infty$ term of the sum capacity
and from the significantly decreased ZF penalty due to the increased
number of transmit antennas (5.55 dB to 1.26 dB).
Thus adding transmit antennas has the dual benefit of increasing the
performance of DPC as well as reducing the penalty of
using low-complexity ZF.
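The offsets quoted in this example follow directly from the closed-form expressions of Theorems \ref{thm:exp_beta_zf} and \ref{thm:asymp_zf_penalty}. The following short Python sketch (an illustration of ours, not part of the analysis) evaluates them and reproduces the numbers above:
\begin{verbatim}
import numpy as np

def beta_dpc_zf(M, KN):
    # Theorem 1: average rate loss of ZF relative to DPC (bps/Hz)
    return np.log2(np.e) * sum(j / (M - j) for j in range(1, KN))

def delta_dpc_zf(M, KN):
    # corresponding average power offset in dB
    return 3.0 / KN * beta_dpc_zf(M, KN)

print(beta_dpc_zf(5, 5))     # ~9.26 bps/Hz  (M = K = 5, N = 1)
print(delta_dpc_zf(5, 5))    # ~5.55 dB
print(delta_dpc_zf(10, 5))   # ~1.26 dB      (M = 10, K = 5, N = 1)

alpha = 2.0                  # asymptote of Theorem 2 for M = 2KN
print(-3 * (np.log2(np.e) + alpha * np.log2(1 - 1 / alpha)))  # ~1.67 dB
\end{verbatim}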
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{dpc_zf_linf_55_105}
\caption{DPC vs.~zero-forcing at high SNR}
\label{fig:dpc_zf_linf_55_105}
\end{figure}
\subsection{DPC vs.~BD}
We similarly define the rate loss between DPC and BD as:
\begin{equation}
\beta_\text{DPC-BD}({\bf H}) \triangleq \lim_{P\to\infty} \left[{\cal C}_{\rm DPC}({\bf H},P)-{\cal C}_{\rm BD}({\bf H},P)\right],
\end{equation}
and denote the expected loss as $\bar{\beta}_\text{DPC-BD}\triangleq \E_{{\bf H}}[\beta_\text{DPC-BD}({\bf
H})]$. Similar to the analysis for ZF, we can calculate the loss terms for a fixed channel and also average
over Rayleigh fading. In order to compute the average rate loss, we use the fact
that the BD sum rate is asymptotically equal to the capacity of $K$
parallel $N\times (M-(K-1)N)$ iid Rayleigh MIMO channels \cite{Choi_WIRELESS04}.
\begin{theorem} \label{thm:exp_beta_bd}
The expected loss in Rayleigh fading due to block
diagonalization is given by
\begin{equation}
\boxed{
\bar{\beta}_\text{DPC-BD} (M, K, N) = (\log_2 e)\left(
\sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\sum_{i=kN+1}^{(K-1)N}\frac{1}{M-n-i}
\right) \;\;\; \text{\rm (bps/Hz)}.} \label{eq:E_beta_bd}
\end{equation}
\end{theorem}
\begin{proof}
See Appendix \ref{sec:pf_thm:exp_beta_bd}.
\end{proof}
Eq.~\eqref{eq:E_beta_bd} simplifies to \eqref{eq:E_beta_zf} when $N=1$; i.e., zero-forcing is a special case
of block diagonalization. If the number of transmit antennas $M$ is kept fixed but $N$ is increased and $K$
is decreased such that $KN$ is constant, i.e., the number of antennas per receiver is increased but the
aggregate number of receive antennas is kept constant, then the rate offset decreases. In the degenerate case
$M=N$ and $K=1$ the channel becomes a point-to-point MIMO channel and the offset is indeed zero. Using the
same procedure as for ZF, we can easily get an expression for the rate offset $\mathcal{L}_{\infty}^\text{BD}(M,K,N)$
by plugging into (\ref{eq-linf_zf}):
\begin{eqnarray}
\mathcal{L}_{\infty}^\text{BD}(M,K,N) &=& \mathcal{L}_{\infty}^\text{MIMO}(KN,M) + \frac{1}{KN} \bar{\beta}_\text{DPC-BD}(M,K,N).
\end{eqnarray}
Although it is difficult to obtain insight directly from
Theorem \ref{thm:exp_beta_bd}, it is much more instructive to consider the
offset between BD ($K$ receivers with $N$ antennas each) and
ZF (equivalent to $KN$ receivers with 1 antenna each).
\begin{theorem} \label{thm:diff_beta}
If $M=\alpha KN$ with $N>1$ and $\alpha > 1$,
the expected throughput gain of BD relative to ZF is:
\begin{eqnarray*}
\bar{\beta}_\text{BD-ZF} &\triangleq& \bar{\beta}_\text{DPC-ZF}(M,KN) -
\bar{\beta}_\text{DPC-BD}(M,K,N) \nonumber \\
&=& (\log_2 e)K\sum_{j=1}^{N-1} \frac{(N-j)}{(\alpha-1)KN+j} \quad \text{\rm (bps/Hz)}
\label{eq:bd_zf_diff} \\
&=& \frac{3 \log_2 e}{N} \sum_{j=1}^{N-1} \frac{(N-j)}{(\alpha-1)KN+j} \quad \text{\rm (dB)}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
See Appendix \ref{sec:pf_thm:diff_beta}.
\end{proof}
A direct corollary of this is an expression for the expected power offset when $M=KN$:
\begin{equation}
\boxed{
\bar{\Delta}_\text{BD-ZF}(N) = \frac{3(\log_2 e)}{N}\sum_{j=1}^{N-1}\frac{N-j}{j}\quad \text{\rm (dB)}.}
\label{eq:bd_zf_offset_M_KN_equal}
\end{equation}
Note that this expression only depends on the number of receive antennas per receiver and is independent of
the number of users, i.e., of the system size. For example, consider two system configurations: (i)
$\frac{M}{2}$ receivers each have two antennas, and (ii) $M$ receivers each have one antenna. Equation
\eqref{eq:bd_zf_offset_M_KN_equal} indicates that the power advantage of using BD in the $N=2$ system is
$\bar{\Delta}_\text{BD-ZF}=2.1640$ (dB) relative to performing ZF. Since this offset is independent of $M$,
it is the same for $M=4$ and $K=4, N=1$ vs.~$K=2, N=2$ systems as well as for $M=6$ and $K=6, N=1$ vs.~$K=3,
N=2$ systems. To illustrate the utility of the asymptotic rate offsets, sum rates are plotted in
Fig.~\ref{fig:beta_bd_zf_offset} for systems with $M=12$ and $N=3$, $K=4$, and $N=2$, $K=6$. Notice that the
asymptotic offsets provide insight at even moderate SNR levels (e.g., 10 dB). When $M=12,\; N=3,\; K=4$,
$\bar{\beta}_\text{BD-ZF}=14.4270$ (bps/Hz) and $\bar{\Delta}_\text{BD-ZF} = 3.6067$ (dB) while the numerical
values are 14.6 (bps/Hz) and 3.65 (dB), respectively.
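These reported values are direct evaluations of the formulas above: for $N=2$, \eqref{eq:bd_zf_offset_M_KN_equal} gives $\bar{\Delta}_\text{BD-ZF}(2) = \frac{3\log_2 e}{2}\cdot\frac{2-1}{1} \approx 2.1640$ dB, and for $M=12$, $N=3$, $K=4$ it gives $\bar{\Delta}_\text{BD-ZF}(3) = (\log_2 e)\left(\frac{2}{1}+\frac{1}{2}\right) \approx 3.6067$ dB, i.e., $\bar{\beta}_\text{BD-ZF} = \frac{KN}{3}\bar{\Delta}_\text{BD-ZF} \approx 14.4270$ bps/Hz.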
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{beta_bd_zf_offset_M12_1207}
\caption{Comparison of \eqref{eq:E_beta_bd} and \eqref{eq:bd_zf_offset_M_KN_equal} with simulated rate losses and power
offsets}
\label{fig:beta_bd_zf_offset}
\end{figure}
\subsection{Unequal Average SNRs}
Thus far we have assumed that the channel gains have the same strength for all users.
However, near-far effects in a typical wireless broadcast channel lead to
asymmetric channel gains. In this subsection, we consider the effect of asymmetric channel gains, i.e., unequal
average SNRs, and reformulate the rate offsets \eqref{eq:E_beta_zf} and \eqref{eq:E_beta_bd}.
We assume that the channel gain of each user can be decomposed into
\begin{equation} \label{eq:H_with_gamma}
{\bf H}_k = \sqrt{\gamma_k} \tilde{\bf H}_k,\quad k=1,\cdots,K,
\end{equation}
where $\gamma_k$ denotes the average SNR of user $k$. The elements of $\tilde{\bf H}_k$ have Gaussian
distribution with mean zero and unit variance. Notice that quantities with a tilde $\tilde{(\cdot)}$ are
derived under this zero-mean, unit-variance Gaussian assumption. Then the channel model \eqref{eq:y_Hx_n} is changed to
\begin{equation}
{\bf y}_k = \sqrt{\gamma_k} \tilde{\bf H}_k{\bf x}_k + {\bf n}_k.
\end{equation}
In the preceding discussion, we have used the fact that the uniform power allocation is asymptotically
optimal for DPC at high SNR. It is important to note that the uniform power allocation is still
asymptotically optimal even when the users' SNRs are asymmetric. Since ${\bf
H}=(\text{diag}(\sqrt{\gamma_1},\cdots,\sqrt{\gamma_K})\otimes {\bf I}_{N\times N}) \tilde{\bf H}$ where
$\tilde{\bf H}^H = [\tilde{\bf H}_1^H \;\tilde{\bf H}_2^H\; \cdots \; \tilde{\bf H}_K^H]$ and $\otimes$
denotes the Kronecker product, the aggregate channel ${\bf H}$ is full rank with $M\ge KN$. Thus, Theorem
\ref{thm:sum_rate_dpc} holds. When ZF or BD is used, the effective channels are simply multiplied by the
corresponding $\sqrt{\gamma_k}$.
From \eqref{eq:sum_cap}, \eqref{eq:c_sum_zf}, and \eqref{eq:sum_bd} with uniform power allocation, we can
derive the sum rates for asymmetric channel gains as follows
\begin{eqnarray}
{\cal C}_\text{DPC}({\bf H},P) &\cong& {\cal C}_\text{DPC}(\tilde{\bf H},P) + N \sum_{k=1}^K \log_2 \gamma_k, \label{eq:C_dpc_gamma}\\
{\cal C}_\text{ZF}({\bf H},P) &\cong& {\cal C}_\text{ZF}(\tilde{\bf H},P) + N \sum_{k=1}^K \log_2 \gamma_k,\label{eq:C_zf_gamma}\\
{\cal C}_\text{BD}({\bf H},P) &\cong& {\cal C}_\text{BD}(\tilde{\bf H},P) + N \sum_{k=1}^K \log_2 \gamma_k, \label{eq:C_bd_gamma}
\end{eqnarray}
where ${\cal C}_\text{DPC}(\tilde{\bf H},P)$, ${\cal C}_\text{ZF}(\tilde{\bf H},P)$, and ${\cal
C}_\text{BD}(\tilde{\bf H},P)$ are the sum rates under the symmetric channel gain scenario. The derivation of
\eqref{eq:C_dpc_gamma}, \eqref{eq:C_zf_gamma}, and \eqref{eq:C_bd_gamma} can be found in Appendix
\ref{sec:derivation_gamma}. As a result, it is easy to see that the DPC-ZF and DPC-BD offsets are unaffected
by $\gamma_1, \cdots, \gamma_K$:
\begin{theorem}
The expected loss in Rayleigh fading when each user has a different average SNR is identical to the loss when
all users have the same average SNR at high SNR. That is,
\begin{equation}
\bar{\beta}_\text{DPC-ZF}(M,KN) =\log_2 e \sum_{j=1}^{KN-1} \frac{j}{M-j}\quad \text{(bps/Hz)}, \label{eq:E_beta_zf_gamma}
\end{equation}
and
\begin{equation}
\bar{\beta}_\text{DPC-BD} (M, K, N) = (\log_2 e)\left(
\sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\sum_{i=kN+1}^{(K-1)N}\frac{1}{M-n-i}
\right) \;\;\; \text{\rm (bps/Hz)}, \label{eq:E_beta_bd_gamma}
\end{equation}
which are identical with \eqref{eq:E_beta_zf} and \eqref{eq:E_beta_bd}, respectively.
\end{theorem}
Fig.~\ref{fig:unequal_snr_avg_diff} illustrates that the difference between the sum rates achieved with optimal
power allocation and with uniform power allocation tends to zero as the power grows, both for DPC and ZF. Unlike
the symmetric channel gain case, more transmit power is required to make the difference sufficiently small.
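Note that for the parameters of Fig.~\ref{fig:unequal_snr_avg_diff}, the relations \eqref{eq:C_dpc_gamma}--\eqref{eq:C_bd_gamma} predict that each sum rate curve is shifted relative to the symmetric-gain case by $N\sum_{k=1}^{4}\log_2\gamma_k = \log_2(0.1\cdot 0.5\cdot 1\cdot 2) = \log_2 0.1 \approx -3.32$ bps/Hz, while the DPC-ZF rate offset itself is unchanged.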
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{unequal_snr_avg}
\caption{Average sum rates with the optimal power allocation and the uniform power allocation
when $M=4$, $N=1$, $K=4$, with unequal SNR: $\gamma_1=0.1$, $\gamma_2=0.5$, $\gamma_3=1$, $\gamma_4=2$.}
\label{fig:unequal_snr_avg_diff}
\end{figure}
\section{Weighted Sum Rate Analysis}
In this section we generalize the rate offset analysis to weighted
sum rate. We first consider single antenna receivers ($N=1$), and then discuss the extension to $N>1$ at the
end of this section. Fig.~\ref{fig:cap_region} illustrates the capacity region (DPC) and the ZF achievable
region for a particular 2 user channel at 30 dB. While the sum rate is the point where the negative slope of
the boundary of the rate region is 1, the weighted sum rate corresponds to points where the slope of the boundary
is specified by the particular choice of user weights. The sum rate offset quantifies the difference between
the sum rate points of both regions; the weighted sum rate offset is intended to describe the offset for the
other portions of the rate region.
We first show that allocating power in proportion to user weights is
asymptotically optimal for either DPC or ZF, and then use
this result to compute the associated rate offsets.
Then, we show the utility of our simple power allocation
policy via application to queue-based scheduling.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{cap_region_P1000}
\caption{Achievable regions of DPC and ZF for $M=2$, $N=1$, and $K=2$, with ${\bf h}_1 = [1\;\;0.5]$, ${\bf h}_2 = [0.5\;\;1]$ at SNR=30 (dB).}
\label{fig:cap_region}
\end{figure}
\subsection{Asymptotically Optimal Power Allocation}
Without loss of generality, we assume user weights are in descending order: $\mu_1 \ge \mu_2 \ge \cdots \ge
\mu_K\ge 0$ with $\sum_{k=1}^K \mu_k =1$. The maximum weighted sum rate problem (DPC), which is defined as
the maximum of $\sum_{k=1}^K \mu_k R_k$ over the capacity region, can be written in
terms of the dual MAC as:
\begin{equation} \label{eq:C_diff_mu_modified}
{\cal C}_{\rm DPC}({\boldsymbol \mu}, {\bf H}, P)=\max_{\sum_{k=1}^K P_k \le P}
\sum_{k=1}^{K} \mu_k \log_2\left(1+P_k{\bf h}_k({\bf A}^{(k-1)})^{-1} {\bf h}_k^H \right),
\end{equation}
where ${\bf A}^{(k-1)} = {\bf I}+\sum_{j=1}^{k-1} P_j {\bf h}_j^H{\bf h}_j$ for $k\ge 1$ and ${\bf A}^{(0)} =
{\bf I}$. Since $N=1$, each channel is a row vector and is written as ${\bf h}_k$.
Notice that the uplink decoding is done in order of increasing
weight, i.e., user $K$ does not get the benefit of any interference cancellation
while user $1$'s signal benefits from full interference cancellation
and is thus detected in the presence of only noise.
The following lemma shows that if we limit ourselves to
linear power allocation policies of the form $P_k = \alpha_k P$,
then the objective function in (\ref{eq:C_diff_mu_modified})
can be decoupled at high SNR:
\begin{lemma} \label{lem:wt_sum_rate_simple_form}
If $M\ge K$, then for any $\alpha_k > 0$, $k=1,\cdots,K$ with $\sum_{k=1}^K \alpha_k =1$,
\begin{equation}
\lim_{P\to\infty}\Bigg[\sum_{k=1}^{K} \mu_k \log_2\left(1+\alpha_k P{\bf h}_k({\bf A}^{(k-1)})^{-1} {\bf h}_k^H \right)
- \sum_{k=1}^{K} \mu_k \log_2\left(1+\alpha_k P\|{\bf f}_k\|^2\right) \Bigg]=0,
\end{equation}
where ${\bf f}_k$ is the projection of ${\bf h}_k$ onto the nullspace of $\{{\bf h}_1, \cdots, {\bf
h}_{k-1}\}$.
\end{lemma}
\begin{proof}
Lemma \ref{lem:decoupling} in Appendix \ref{sec:decoupling_lemma} shows that
${\bf h}_k({\bf A}^{(k-1)})^{-1} {\bf h}_k^H \rightarrow \|{\bf f}_k\|^2$ as $P \rightarrow \infty$. By the
continuity of $\log(\cdot)$ and the fact that $P \rightarrow \infty$, we get the result.
\end{proof}
Once the weighted sum rate maximization has been decoupled into the problem of maximizing weighted sum rate
over parallel single-user channels, we can use the result of \cite{Lozano_Tulino_Verdu_ISSSTA} to show that
the optimal power allocation is of the form $P_k^* = \mu_k P + O(1)$.
\begin{theorem} \label{thm:wt_sum_rate_pow_policy}
When $M\ge K$, allocating power according to
\begin{equation}
\boxed{P_k = \mu_k P,\qquad k=1,\cdots,K
\label{eq:wt_sum_rate_pow_policy}}
\end{equation}
asymptotically achieves the optimal solution to \eqref{eq:C_diff_mu_modified} at high SNR.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:wt_sum_rate_simple_form}, the following optimization will yield an
asymptotically optimal solution
(albeit with a weak restriction on allowable power policies):
\begin{equation} \label{eq:wt_sum_rate}
\max_{P_k :\; \sum_{k=1}^{K} P_k\le P}
\sum_{k=1}^{K} \mu_k\log_2 \left(1+P_k\|{\bf f}_k\|^2 \right).
\end{equation}
The optimal power policy for a more general version of this problem, in which there are more than $K$
parallel channels and each user can occupy multiple channels,
is solved in \cite{Lozano_Tulino_Verdu_ISSSTA}. We need only
consider this simplified version, and it is easily checked (via KKT conditions) that the solution to
\eqref{eq:wt_sum_rate} is:
\begin{equation} \label{eq-opt_power_weighted}
P_k^* = \mu_k P + \mu_k \left(\sum_i \frac{1}{\|{\bf f}_i\|^2} \right) -\frac{1}{\|{\bf f}_k\|^2},
\quad \text{for}\;\; k=1,\cdots,K,
\end{equation}
when $P$ is sufficiently large to allow all users to have non-zero power.
Therefore, at high SNR we have
\begin{equation}
P_k^* = \mu_k P + O(1) ,\qquad k=1,\cdots,K.
\end{equation}
Since the $O(1)$ power term leads to a vanishing rate, we have the result.
\end{proof}
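To illustrate how little is lost by dropping the $O(1)$ term, the following Python sketch (our own toy example, not part of the original development) compares the exact allocation \eqref{eq-opt_power_weighted} with the proportional rule $P_k = \mu_k P$ on the decoupled objective \eqref{eq:wt_sum_rate} for one random $M=4$, $K=2$ channel:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 2
mu = np.array([0.6, 0.4])
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# ||f_k||^2: squared norm of h_k projected onto the nullspace of h_1,...,h_{k-1}
q = H[0] / np.linalg.norm(H[0])
f2 = np.array([np.linalg.norm(H[0])**2,
               np.linalg.norm(H[1] - (q.conj() @ H[1]) * q)**2])

P = 10.0**3                                       # total power (30 dB)
P_exact = mu * P + mu * np.sum(1 / f2) - 1 / f2   # KKT solution
P_prop = mu * P                                   # proportional rule

for Pk, name in [(P_exact, "exact"), (P_prop, "mu_k * P")]:
    print(name, np.sum(mu * np.log2(1 + Pk * f2)))
\end{verbatim}
The two weighted sum rates agree closely, in line with the numerical comparison discussed next.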
Theorem \ref{thm:wt_sum_rate_pow_policy} generalizes the fact that uniform power allocation achieves the
maximum sum rate asymptotically at high SNR. That is, for the sum rate problem the weights are the same
(i.e., $\mu_1 = \cdots = \mu_K = 1/K$), thus the uniform power policy is asymptotically optimal.
In Fig.~\ref{fig:diff_mu_high_snr_appr_ex_error_avg} the difference between the true weighted sum rate
(\ref{eq:C_diff_mu_modified}) and the weighted sum rate achieved using $P_k = \mu_k P$ is plotted as a
function of SNR. This difference is averaged over iid Rayleigh channel realizations for a $(M=4, K=2, N=1)$
system with $\mu_1=0.6$ and $\mu_2=0.4$. The approximate power allocation is seen to give a weighted sum
rate that is extremely close to the optimum even at very low SNR values.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{diff_mu_high_snr_appr_ex_error_avg_1207}
\caption{Averaged weighted sum rate difference between the exact solution \eqref{eq:C_diff_mu_modified} and the asymptotic solution \eqref{eq:wt_sum_rate_pow_policy} when $\mu_1=0.6$ and $\mu_2=0.4$ for
Rayleigh fading channel.}
\label{fig:diff_mu_high_snr_appr_ex_error_avg}
\end{figure}
Meanwhile, the weighted sum rate by ZF is given by
\begin{equation} \label{eq:wt_sum_rate_zf}
{\cal C}_\text{ZF}({\boldsymbol \mu},{\bf H},P)=\max_{P_k :\; \sum_{k=1}^{K} P_k\le P}
\sum_{k=1}^{K} \mu_k\log_2 \left(1+P_k\|{\bf g}_k\|^2 \right),
\end{equation}
where ${\bf g}_k$ is the projection of ${\bf h}_k$ onto the null space of $\{{\bf h}_1, \cdots, {\bf
h}_{k-1}, {\bf h}_{k+1},\cdots, {\bf h}_K\}$.
The result of \cite{Lozano_Tulino_Verdu_ISSSTA} directly applies here,
and therefore the power allocation policy in \eqref{eq:wt_sum_rate_pow_policy} is
also the asymptotic solution to \eqref{eq:wt_sum_rate_zf}.
\subsection{Rate Loss}
Using the asymptotically optimal power allocation policy of (\ref{eq:wt_sum_rate_pow_policy}), the weighted
sum rates of DPC and ZF can be expressed as
\begin{eqnarray}
{\cal C}_\text{DPC}({\boldsymbol \mu},{\bf H},P) &\cong& \sum_{k=1}^K \mu_k \log\left(1+\mu_k P \|{\bf f}_k\|^2 \right),\\
{\cal C}_\text{ZF}({\boldsymbol \mu},{\bf H},P) &\cong& \sum_{k=1}^K \mu_k \log\left(1+\mu_k P \|{\bf g}_k\|^2
\right).
\end{eqnarray}
Thus, the rate offset per realization is given by
\begin{equation}
\beta_\text{DPC-ZF}({\boldsymbol \mu},{\bf H})
\cong \sum_{k=1}^K\mu_k \log\frac{ \|{\bf f}_k\|^2}{ \|{\bf g}_k\|^2}.
\end{equation}
In Rayleigh fading, the distributions of $\|{\bf f}_k\|^2$ and $\|{\bf g}_k\|^2$ are $\chi^2_{2(M-k+1)}$ and
$\chi^2_{2(M-K+1)}$, respectively. Therefore, the expected rate loss is given by
\begin{equation} \label{eq-offset_weighted}
\bar{\beta}_\text{DPC-ZF}({\boldsymbol \mu}, M, K) \cong
(\log_2 e)\sum_{k=1}^K\mu_k \left(\sum_{j=M-K+1}^{M-k}\frac{1}{j}\right).
\end{equation}
It is straightforward to check that the rate offset is minimized at the sum rate, i.e., when
$\mu_1=\cdots=\mu_K=\frac{1}{K}$. If we let $\zeta_k = \sum_{j=M-K+1}^{M-k}\frac{1}{j}$, then
$\zeta_1>\zeta_2>\cdots>\zeta_K$ and $\bar{\beta}_\text{DPC-ZF} = (\log_2 e) \sum_{k=1}^K \mu_k \zeta_k$.
Since $\{\mu_k\}$ is constrained by $\mu_1\ge \cdots \ge \mu_K$, $\sum_{k=1}^K\mu_k=1$, and $\mu_k\ge 0$
$(1\le k \le K)$, $\bar{\beta}_\text{DPC-ZF}$ achieves its minimum at $\mu_1=\cdots=\mu_K=\frac{1}{K}$ for a
given $\{\zeta_k\}$.
\subsection{Application to Queue-based Scheduling}
Queue-based scheduling, introduced by the seminal work of Tassiulas and Ephremides \cite{Tassiulas_AC92}, is
one application in which it is necessary to repeatedly maximize the weighted sum rate for different user
weights.
\begin{figure}
\centering
\includegraphics[width=0.60\textwidth]{highsnr_w_queue}
\caption{MIMO BC with queues}
\label{fig:highsnr_w_queue}
\end{figure}
Fig.~\ref{fig:highsnr_w_queue} illustrates a queue-based scheduling system for two users. Data for the users
arrive at rates $\lambda_1$ and $\lambda_2$, which are generally assumed to be unknown. During each time
slot, the transmitter chooses the rate vector that maximizes the weighted sum rate over the instantaneous
rate region with weights equal to the current queue sizes. If the queue lengths are denoted as $q_1(t)$ and
$q_2(t)$, then the transmitter solves the following optimization during each time slot:
\begin{equation}
\max_{{\bf R}\in {\cal C}({\bf H},P)} q_1(t) R_1 + q_2(t) R_2,
\label{eq:opt_wgt_queue}
\end{equation}
and such a policy stabilizes any rate vector in the ergodic capacity region.
Although the weighted sum rate maximization problem for DPC stated in equation (\ref{eq:C_diff_mu_modified})
is convex, it still requires considerable complexity and could be difficult to perform on a slot-by-slot
basis. An alternative is to use the approximate power allocation policy from
(\ref{eq:wt_sum_rate_pow_policy}) during each time slot:
\begin{eqnarray}
P_k &=& \frac{q_k(t)}{q_1(t)+q_2(t)} P, \label{eq:queue_pow_alloc_K2_P1}
\end{eqnarray}
where the ordering of the queues determines the dual MAC decoding order (the user with the larger queue is decoded last).
Although we do not yet have any analytical results on the performance of the asymptotically optimal power
policy, numerical results indicate that such a policy performs nearly as well as actually maximizing weighted
sum rate. Ongoing work is investigating whether the approximate strategy is stabilizing for this system.
In Fig.~\ref{fig:queue_length_vs_lambda} average queue length is plotted versus the sum arrival rate for an
$M=4$, $K=2$ channel at 10 dB, for both the exact weighted sum rate maximization as well as the
approximation. Both schemes are seen to perform nearly identically, and the approximate algorithm appears to
stabilize the system in this scenario, although this is only empirical evidence.
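As a rough illustration, the following Python sketch (our own toy implementation with assumed arrival rates, using ZF instead of DPC for simplicity) simulates the queue-proportional power rule of (\ref{eq:queue_pow_alloc_K2_P1}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, K, P, T = 4, 2, 10.0, 5000        # total power 10 dB (linear scale)
lam = np.array([1.5, 1.5])           # assumed arrival rates in bits/slot
q = np.zeros(K)
totals = []

for t in range(T):
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    # effective ZF channel gains: ||g_k||^2 = 1 / [(H H^H)^{-1}]_{kk}
    g2 = 1.0 / np.real(np.diag(np.linalg.inv(H @ H.conj().T)))
    # power split proportional to the current queue lengths
    w = q / q.sum() if q.sum() > 0 else np.full(K, 1.0 / K)
    R = np.log2(1 + w * P * g2)      # bits served in this slot
    q = np.maximum(q + lam - R, 0.0)
    totals.append(q.sum())

print(np.mean(totals[T // 2:]))      # time-averaged total queue length
\end{verbatim}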
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{queue_1218}
\caption{Average queue length for symmetric arrival.}
\label{fig:queue_length_vs_lambda}
\end{figure}
\subsection{Extension to $N>1$}
Similar to \eqref{eq:C_diff_mu_modified}, the weighted sum rate by DPC can be written as:
\begin{equation} \label{eq:C_diff_mu_modified_N}
{\cal C}_{\rm DPC}({\boldsymbol \mu}, {\bf H}, P)=\max_{\sum_{k=1}^K \text{tr}({\bf Q}_k) \le P}
\sum_{k=1}^{K} \mu_k \log_2 \frac{|{\bf A}^{(k)}|}{|{\bf A}^{(k-1)}|},
\end{equation}
where ${\bf A}^{(k)}= {\bf I} + \sum_{j=1}^k {\bf H}_j^H {\bf Q}_j {\bf H}_j$ for $k\ge 1$ and ${\bf
A}^{(0)}= {\bf I}$. From the construction of ${\bf A}^{(k)}$,
\begin{equation*}
\frac{|{\bf A}^{(k)}|}{|{\bf A}^{(k-1)}|} = \left|{\bf I}+{\bf Q}_k{\bf H}_k ({\bf A}^{(k-1)})^{-1} {\bf H}_k^H\right|.
\end{equation*}
Hence, \eqref{eq:C_diff_mu_modified_N} can be written as
\begin{equation} \label{eq:C_diff_mu_modified1_N}
{\cal C}_{\rm DPC}({\boldsymbol \mu}, {\bf H}, P)=\max_{\sum_{k=1}^K \text{tr}({\bf Q}_k) \le P}
\sum_{k=1}^K \mu_k \log_2 \left|{\bf I}+{\bf Q}_k{\bf H}_k ({\bf A}^{(k-1)})^{-1} {\bf H}_k^H\right|.
\end{equation}
With the decoupling lemma (see Appendix \ref{sec:decoupling_lemma}), the above optimization
\eqref{eq:C_diff_mu_modified1_N} can be solved asymptotically like the case of $N=1$:
\begin{theorem} \label{thm:C_diff_mu_N}
At high SNR, the optimization in \eqref{eq:C_diff_mu_modified_N} is asymptotically achieved when
\begin{equation} \label{eq:opt_sol_C_diff_mu_N}
\boxed{{\bf Q}_k = \frac{\mu_k P}{N} {\bf I}, \qquad k=1,\cdots, K.}
\end{equation}
\end{theorem}
\begin{proof}
See Appendix \ref{sec:pf_thm:C_diff_mu_N}.
\end{proof}
Similarly, the weighted sum rate of BD is given by
\begin{equation} \label{eq:wt_sum_rate_bd}
{\cal C}_\text{BD}({\boldsymbol \mu},{\bf H},P)=\max_{{\bf Q}_k \; : \;\sum_{k=1}^{K} \text{tr}({\bf Q}_k)\le P}
\sum_{k=1}^{K} \mu_k\log_2 \left|{\bf I}+{\bf Q}_k {\bf G}_k {\bf G}_k^H\right|,
\end{equation}
where ${\bf G}_k$ is the projection of ${\bf H}_k$ onto the null space of $\{{\bf H}_1, \cdots, {\bf
H}_{k-1}, {\bf H}_{k+1},\cdots, {\bf H}_K\}$. Likewise, the optimization \eqref{eq:wt_sum_rate_bd} is the
same as the optimizations \eqref{eq:wt_sum_N_sep1} and \eqref{eq:wt_sum_N_sep2} except that ${\bf F}_k$ is
replaced by ${\bf G}_k$, which does not affect the form of the asymptotic solution. Thus, the power allocation
policy in \eqref{eq:opt_sol_C_diff_mu_N} is also the asymptotic solution to \eqref{eq:wt_sum_rate_bd}.
\subsection{More Users Than Antennas}
Although it is asymptotically optimal to allocate power in proportion to user weights when $M \geq KN$, this
is not the case when $M < KN$. Indeed, such a strategy can easily be checked to be sub-optimal even for a
single antenna broadcast channel with more than one user, as considered in
\cite{Li_IT01}\cite{Tse_unpublished}. Allocating power directly proportional to user weights or allocating
all power to only the user with the largest weight yields, for many single antenna broadcast channels, a
weighted sum rate that is a bounded distance away from the true optimal weighted sum rate.
Although neither of these strategies is asymptotically optimal, numerical
results do show that these approximations achieve rates that are extremely
close to optimum. In general, there are two different reasonable
power approximations. The first is to simply choose $P_k = \mu_k P$.
However, when $K > M$, this results in sending many more data streams than
there are spatial dimensions, which is not particularly intuitive.
An alternative strategy is to allocate power to the users with the
$M$ largest weights, but again in proportion to their weights.
Fig.~\ref{fig:ergordic_wt_sum} illustrates the ergodic weighted sum rates vs SNR for a $K=3, M=2, N=1$ system
in which $\mu_1=0.5$, $\mu_2=0.3$, and $\mu_3=0.2$, averaged over Rayleigh fading. The true weighted sum rate
is compared to the first strategy, where $P_k = \mu_k P$, and to the second strategy, where only users 1 and
2 are allocated power according to: $P_1=\frac{\mu_1}{\mu_1+\mu_2} P$, $P_2=\frac{\mu_2}{\mu_1+\mu_2} P$, and
$P_3=0$. Both approximations are a non-zero distance away from the optimum, but the rate loss is seen to be
extremely small.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{ergordic_wt_sum_K3M2}
\caption{Ergodic weighted sum rates by DPC and by approximations when $M=2$, $N=1$, $K=3$, with $\mu_1=0.5$, $\mu_2=0.3$, and $\mu_3=0.2$.}
\label{fig:ergordic_wt_sum}
\end{figure}
\section{Conclusion}
We have investigated the difference between the throughputs achieved by dirty paper coding (DPC) relative to
those achieved with linear precoding strategies by utilizing the affine approximation to high SNR and
computing the exact throughput/power offsets at asymptotically high SNR for MIMO broadcast channels in which
the number of transmit antennas is no smaller than the total number of receive antennas. Simple expressions
in terms of the number of transmit and receive antennas are provided for the average rate/power offset in a
spatially white Rayleigh fading environment. When the aggregate number of receive antennas is equal to or
slightly less than the number of transmit antennas, linear precoding incurs a rather significant penalty
relative to DPC, but this penalty is much smaller when the number of transmit antennas is large relative to
the number of receive antennas.
Furthermore, we generalized our analysis to weighted sum rate and quantified the asymptotic rate/power
offsets for this scenario as well. One of the most interesting aspects of this extension is the finding that
allocating power directly proportional to user weights is asymptotically optimal for DPC at high SNR.
This result is an extension of a similar result for parallel channels found in
\cite{Lozano_Tulino_Verdu_ISSSTA}, and this simple yet asymptotically optimal power policy may prove to be
useful in other settings such as opportunistic scheduling.
\section*{Acknowledgment}
The authors would like to thank Angel Lozano for his comments, and in particular for suggesting the unequal
average SNR model.
\appendices
\section{Proof of Theorem \ref{thm:exp_beta_zf}}
\label{sec:pf_thm:exp_beta_zf}
Starting from (\ref{eq-dpc_zf_proof1}) and utilizing
$\psi(m) = \psi(1) + \sum_{l=1}^{m-1} \frac{1}{l}$ we have:
\begin{eqnarray*}
\frac{1}{\log_2 e}\,\bar{\beta}_\text{DPC-ZF}
&=& \left( \sum_{l=0}^{KN-1}\psi(M-l) \right) - KN \psi(M-KN+1) \\
&=& \sum_{l=0}^{KN-1} \left(\sum_{j=1}^{M-l-1}\frac{1}{j}-\sum_{j=1}^{M-KN}\frac{1}{j} \right)\\
&=& \sum_{l=0}^{KN-1} \left(\sum_{j=M-KN+1}^{M-1}\frac{1}{j} \right)\\
&=& \sum_{j=1}^{KN-1} \frac{KN-j}{M-KN+j}\\
&=& \sum_{j=1}^{KN-1} \frac{j}{M-j}.
\end{eqnarray*}
\section{Derivation of \eqref{eq:beta_zf_M_K_equal}}
\label{sec:pf_eq:beta_zf_M_K_equal}
When $M=KN$,
\begin{equation}
\frac{1}{\log_2 e}\,\bar{\beta}_\text{DPC-ZF}(M,M)
= \sum_{j=1}^{KN-1}\frac{j}{M-j}=\sum_{j=1}^{M-1}\sum_{i=1}^{M-j}\frac{1}{i}.
\end{equation}
If we let $S_M$ denote this expected rate loss (in nats) with $M$ antennas, we have:
\begin{equation}
S_{M+1}-S_M = \sum_{i=1}^M\frac{1}{i} \le 1+\log M , \quad \text{for}\;\; M\ge 1 ,
\end{equation}
since $\log_e M = \int_1^M \frac{1}{x} dx \ge \sum_{i=2}^M \frac{1}{i}$ because $\frac{1}{x}$ is a decreasing
function. If we let $f(M)\triangleq M\log_e M$, $f'(M)=1+\log_e M$, which is an increasing function of $M$,
and thus $f(M+1)\ge f(M)+1+\log_e M$. Since $S_{M+1}-S_M\le 1+\log M$ and $f(1)=S_1=0$, $S_M\le M\log M$ for
all $M\ge 1$.
Now we show that $S_M$ converges to $M\log M$. We do this by showing that $S_M\ge \theta M \log_e M$ for any
$0<\theta<1$ for all $M$ larger than some $M_0$. First notice that $\log_e M \le \sum_{i=1}^{M-1} \frac{1}{i}
\le \sum_{i=1}^M \frac{1}{i}$ by the definition of the $\log(\cdot)$ function. Thus,
\begin{equation}
S_{M+1}-S_M = \sum_{i=1}^M \frac{1}{i} \ge \log_e M.
\end{equation}
Let $g(M)\triangleq \theta M \log M$ for some $0<\theta<1$. Then $g'(M)=\theta+\theta \log_e M$, which is an
increasing function of $M$. Thus $g(M+1)\le g(M)+g'(M+1)=g(M)+\theta+\theta \log_e(M+1)$. Therefore we have
\begin{equation}
g(M+1)-S_{M+1} \le \left(g(M)-S_M\right)
+\theta+\theta\log(M+1)-\log M.
\end{equation}
Notice that the term $\theta+\theta\log(M+1)-\log M$ is a monotonically decreasing function that goes to
$-\infty$. Thus, the gap $g(M)-S_M$ must eventually become negative and tend to $-\infty$, i.e., $S_M\ge
g(M)$ for sufficiently large $M$. As a consequence of this, $\lim_{M\to\infty}\frac{S_M}{\theta M \log_e
M}\ge 1$, or $\lim_{M\to\infty}\frac{S_M}{M\log_e M}\ge \theta$ for any $\theta<1$. Since $\frac{S_M}{M\log_e
M}$ is bounded above by 1, it must converge; i.e.,
\begin{equation}
\lim_{M\to\infty}\frac{S_M}{M\log_e M}=1
\end{equation}
as desired.
\section{Proof of Theorem \ref{thm:asymp_zf_penalty}}
\label{sec:pf_thm_asymp_zf_penalty}
From Theorem \ref{thm:exp_beta_zf}, if $M=\alpha KN$, the expected power
offset, which is now a function of $\alpha$ and $KN$, can be expressed as:
\begin{eqnarray*}
\bar{\Delta}_\text{DPC-ZF}(\alpha,KN)&=& \frac{3\log_2 e}{KN}\sum_{j=1}^{KN-1}\frac{j}{M-j}, \qquad M=\alpha KN \\
&=&3\log_2 e \sum_{j=1}^{KN-1}\frac{\frac{j}{KN}}{\alpha-\frac{j}{KN}}\frac{1}{KN}
\end{eqnarray*}
Let us define a function $f(x)$ as
\begin{equation*}
f(x) = \frac{x}{\alpha-x},\qquad x\in [0,1],\quad\alpha > 1.
\end{equation*}
Then $\bar{\Delta}_\text{DPC-ZF}$ can be expressed as
\begin{equation*}
\bar{\Delta}_\text{DPC-ZF}(\alpha,KN)
=3\log_2 e\sum_{j=1}^{KN-1}f\left(\frac{j}{KN}\right)\frac{1}{KN},
\end{equation*}
which is a Riemann sum; i.e., as $KN\to\infty$,
\begin{equation*}
\lim_{KN\to\infty} \bar{\Delta}_\text{DPC-ZF}(\alpha,KN)
=3\log_2 e\int_{0}^{1}f(x)dx=3\log_2 e\int_{0}^{1}\frac{x}{\alpha-x}dx.
\end{equation*}
Evaluating the integral via $\int_{0}^{1}\frac{x}{\alpha-x}\,dx = \big[-x-\alpha\log_e(\alpha-x)\big]_0^1 = -1-\alpha\log_e\frac{\alpha-1}{\alpha}$, we obtain
\begin{eqnarray}
\bar{\Delta}_\text{DPC-ZF}(\alpha)=
\lim_{KN\to\infty} \bar{\Delta}_\text{DPC-ZF}(\alpha,KN)
&=& 3\log_2 e\left(-1-\alpha\log_e\frac{\alpha-1}{\alpha} \right) \nonumber\\
&=& -3 \left(\log_2 e+\alpha \log_2 \left(1-\frac{1}{\alpha} \right) \right) \nonumber.
\end{eqnarray}
\section{Proof of Theorem \ref{thm:exp_beta_bd}}
\label{sec:pf_thm:exp_beta_bd}
From \eqref{eq:sum_rate_dpc_appr} and \eqref{eq:sum_rate_bd_appr},
$\bar{\beta}_\text{DPC-BD}$ is given by:
\begin{equation*}
\bar{\beta}_\text{DPC-BD}
= \E[\log_2 \left|{\bf H}^H {\bf H} \right|]-K \E\left[\log_2 \left| {\bf G}_k^H {\bf G}_k\right| \right]
\end{equation*}
where ${\bf G}_k^H {\bf G}_k$ is Wishart with $M-(K-1)N$ degrees of freedom. Applying Lemma \ref{lem-wishart}
and expanding the digamma function we have:
\begin{eqnarray*}
\bar{\beta}_\text{DPC-BD} &=&\log e \left[ \sum_{l=0}^{KN-1}\psi(M-l)-K \sum_{n=0}^{N-1} \psi(M-(K-1)N-n) \right]\\
&=& \log e \sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\left[\psi(M-n-kN)- \psi(M-n-(K-1)N) \right]\\
&=& \log e \sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\left[\sum_{j=1}^{M-n-kN-1}\frac{1}{j}- \sum_{j=1}^{M-n-(K-1)N-1}\frac{1}{j} \right]\\
&=& \log e \sum_{k=0}^{K-1}\sum_{n=0}^{N-1}\sum_{i=kN+1}^{(K-1)N}\frac{1}{M-n-i}.
\end{eqnarray*}
\section{Proof of Theorem \ref{thm:diff_beta}}
\label{sec:pf_thm:diff_beta}
From \eqref{eq:E_beta_zf} and \eqref{eq:E_beta_bd} it is known
that the $\bar{\beta}_\text{DPC-BD}$ and $\bar{\beta}_\text{DPC-ZF}$ are given by
\begin{eqnarray*}
\bar{\beta}_\text{DPC-BD} &=&\log e \left[ \sum_{l=0}^{KN-1}\psi(M-l)-K \sum_{n=0}^{N-1} \psi(M-(K-1)N-n) \right],\\
\bar{\beta}_\text{DPC-ZF} &=&\log e \left[ \sum_{l=0}^{KN-1}\psi(M-l)-KN \psi(M-KN+1)\right].
\end{eqnarray*}
From the assumption, $M=\alpha KN$ ($\alpha>1$) with $N>1$,
\begin{eqnarray*}
\frac{1}{\log e}\left(\E[\beta_\text{DPC-ZF}]-\E[\beta_\text{DPC-BD}]\right)
&=& K\left(\sum_{n=0}^{N-1} \psi(M-(K-1)N-n)\right)-KN \psi(M-KN+1) \\
&=& K\sum_{n=2}^{N} \left[\psi(M-KN+n)- \psi(M-KN+1)\right] \\
&=& K\sum_{n=2}^{N} \sum_{j=M-KN+1}^{M-KN+n-1}\frac{1}{j} \\
&=& K\sum_{i=1}^{N-1} \frac{N-i}{M-KN+i}\\
&=& \sum_{i=1}^{N-1} \frac{K(N-i)}{(\alpha-1)KN+i}
\end{eqnarray*}
\section{Derivation of \eqref{eq:C_dpc_gamma}, \eqref{eq:C_zf_gamma}, and \eqref{eq:C_bd_gamma}}
\label{sec:derivation_gamma}
From \eqref{eq:sum_cap} with the uniform power allocation, we have
\begin{equation*}
{\cal C}_\text{DPC}({\bf H},P) \cong \log_2 \left|{\bf I}+\frac{P}{KN}\tilde{\bf H}^H{\boldsymbol \Gamma}\tilde{\bf H}
\right|,
\end{equation*}
where ${\boldsymbol \Gamma} = \text{diag}(\gamma_1,\cdots,\gamma_K )\otimes {\bf I}_{N\times N}$. By $|{\bf
I}+{\bf A}{\bf B}|=|{\bf I}+{\bf B}{\bf A}|$,
\begin{eqnarray*}
{\cal C}_\text{DPC}({\bf H},P) &\cong& \log_2 \left|{\bf I}+\frac{P}{KN}{\boldsymbol \Gamma}\tilde{\bf H}\tilde{\bf H}^H
\right|\\
&=& KN\log_2 P + \log_2 \left|\frac{1}{P}{\bf I}+\frac{1}{KN}{\boldsymbol \Gamma}\tilde{\bf H}\tilde{\bf H}^H \right|\\
&\cong& KN\log_2 P + \log_2 \left|\frac{1}{KN}{\boldsymbol \Gamma}\tilde{\bf H}\tilde{\bf H}^H \right|\\
&=& KN\log_2 P -KN\log_2 KN +\log_2 \left|{\boldsymbol \Gamma} \right| + \log_2 \left|\tilde{\bf H}\tilde{\bf H}^H \right|\\
&\cong& {\cal C}_\text{DPC}(\tilde{\bf H},P) + N \sum_{k=1}^K \log_2 \gamma_k
\end{eqnarray*}
Since the zero-forcing vector ${\bf v}_{k,n}$ for ${\bf h}_{k,n}$ is identical to the zero-forcing vector
$\tilde{\bf v}_{k,n}$ for $\tilde{\bf h}_{k,n}$, the effective channel gain is given by
\begin{equation}
g_{k,n}={\bf h}_{k,n}{\bf v}_{k,n} = \sqrt{\gamma_k}\tilde{g}_{k,n},
\end{equation}
where $\tilde{g}_{k,n} = \tilde{\bf h}_{k,n}\tilde{\bf v}_{k,n} $. Thus the ZF sum rate \eqref{eq:c_sum_zf}
can be modified as
\begin{eqnarray*}
{\cal C}_\text{ZF}({\bf H},P)
&\cong& \sum_{k=1}^K \sum_{n=1}^N \log_2 \left(1+\frac{P}{KN} \gamma_k |\tilde{g}_{k,n}|^2\right)\\
&=& KN\log_2 P+ \sum_{k=1}^K \sum_{n=1}^N \log_2 \left(\frac{1}{P}+\frac{1}{KN} \gamma_k |\tilde{g}_{k,n}|^2\right)\\
&\cong& KN\log_2 P+ \sum_{k=1}^K \sum_{n=1}^N \log_2 \left(\frac{1}{KN} \gamma_k |\tilde{g}_{k,n}|^2\right)\\
&\cong& {\cal C}_\text{ZF}(\tilde{\bf H},P) + N\sum_{k=1}^K \log_2 \gamma_k
\end{eqnarray*}
Likewise, for BD, $\tilde{\bf V}_k = {\bf V}_k$ leads to
\begin{equation}
{\bf G}_k = {\bf H}_k {\bf V}_k = \sqrt{\gamma_k} \tilde{\bf G}_k,
\end{equation}
where $\tilde{\bf G}_k =\tilde{\bf H}_k \tilde{\bf V}_k$. Thus, the BD sum rate in \eqref{eq:sum_bd} is
modified to
\begin{eqnarray*}
{\cal C}_\text{BD}({\bf H},P)
&\cong& \sum_{k=1}^K \log_2 \left|{\bf I}+\frac{P}{KN} \gamma_k \tilde{\bf G}_k^H \tilde{\bf G}_k\right|\\
&=& KN\log_2 P+ \sum_{k=1}^K \log_2 \left|\frac{1}{P}{\bf I}+\frac{1}{KN} \gamma_k \tilde{\bf G}_k^H \tilde{\bf G}_k\right|\\
&\cong& KN\log_2 P+ \sum_{k=1}^K \log_2 \left|\frac{1}{KN} \gamma_k \tilde{\bf G}_k^H \tilde{\bf G}_k\right|\\
&\cong& {\cal C}_\text{BD}(\tilde{\bf H},P) + N\sum_{k=1}^K\log_2 \gamma_k
\end{eqnarray*}
\section{Decoupling Lemma}
\label{sec:decoupling_lemma}
\begin{lemma} \label{lem:decoupling}
Let $\{{\bf H}_j\}_{j=1}^{K} (\in \mathbb{C}^{N\times M})$ be a set of $K$-user MIMO broadcast channel matrices with $M\ge KN$.
If\; ${\bf F}_k$ ($k=1,\cdots, K$) is the projection of\; ${\bf H}_k$ onto the nullspace of $\{{\bf H}_j\}_{j=1}^{k-1}$ (i.e., ${\bf F}_k={\bf H}_k {\bf P}^\perp$ where
${\bf P}^\perp$ denotes the orthogonal projector onto the nullspace of $\{{\bf H}_j\}_{j=1}^{k-1}$), then
\begin{equation}
\lim_{P\to\infty}\left[{\bf H}_k ({\bf A}^{(k-1)})^{-1} {\bf H}_k^H - {\bf F}_k {\bf F}_k^H \right]=0,\qquad k=1,\cdots, K,
\end{equation}
where ${\bf A}^{(k)}= {\bf I} + \sum_{j=1}^k {\bf H}_j^H {\bf Q}_j {\bf H}_j$ for $k\ge 1$ and ${\bf A}^{(0)}= {\bf I}$.
\end{lemma}
\begin{proof}
If we let the eigenvector matrix and eigenvalues of $\sum_{j=1}^{k-1} {\bf H}_j^H {\bf Q}_j {\bf H}_j$ be
${\bf U}$ and $\lambda_1, \cdots, \lambda_{(k-1)N}$ with $\lambda_j > 0$, then
\begin{equation*}
({\bf A}^{(k-1)})^{-1/2} = {\bf U}{\boldsymbol \Lambda}{\bf U}^H,
\end{equation*}
where
\begin{equation*}
{\boldsymbol \Lambda} =\text{diag}\left(\frac{1}{\sqrt{1+\lambda_1}}, \cdots, \frac{1}{\sqrt{1+\lambda_{(k-1)N}}},1,\cdots,1 \right).
\end{equation*}
As $P$ goes to infinity, the eigenvalues $\lambda_j$ tend to infinity. Thus, the first $(k-1)N$ diagonal entries of ${\boldsymbol
\Lambda}$ converge to 0. The eigenvectors corresponding to the unit eigenvalues span the nullspace of $\{{\bf
H}_j\}_{j=1}^{k-1}$; i.e.,
\begin{equation*}
\lim_{P\to\infty} \left[{\bf H}_k({\bf A}^{(k-1)})^{-1/2} - {\bf F}_k \right] =0.
\end{equation*}
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:C_diff_mu_N}}
\label{sec:pf_thm:C_diff_mu_N}
With the decoupling lemma in Appendix \ref{sec:decoupling_lemma}, the optimization
\eqref{eq:C_diff_mu_modified1_N} can be decomposed into the two optimizations at high SNR:
\begin{equation}
{\cal C}_{\rm DPC}({\boldsymbol \mu}, {\bf H}, P)\cong \max_{\sum_{k=1}^K P_k \le P}
\sum_{k=1}^K \mu_k \xi_k(P_k),
\label{eq:wt_sum_N_sep1}
\end{equation}
where
\begin{equation}
\xi_k(P_k) = \max_{\text{tr}({\bf Q}_k)=P_k} \log_2 \left|{\bf I}+{\bf Q}_k{\bf F}_k{\bf F}_k^H \right|.
\label{eq:wt_sum_N_sep2}
\end{equation}
At high SNR, Eq.~\eqref{eq:wt_sum_N_sep2} can be asymptotically expressed as an affine approximation
\cite{Shamai_IT01}:
\begin{equation}
\xi_k(P_k) = {\cal S}_{\infty,k} (\log_2 P_k - {\cal L}_{\infty,k}) + o(1),
\end{equation}
where ${\cal S}_{\infty,k}$ and ${\cal L}_{\infty,k}$ are the multiplexing gain and power
offset, respectively. Hence, the optimization \eqref{eq:wt_sum_N_sep1} is asymptotically equivalent to solving the following:
\begin{equation*}
\max_{\sum_{k=1}^K P_k \le P}\sum_{k=1}^K \mu_k \log_2 P_k.
\end{equation*}
This leads to the optimal $P_k=\mu_k P$. Furthermore, by Theorem 3 of \cite{Caire_IT03}, the optimal power
allocation is asymptotically achieved by
\begin{equation*}
{\bf Q}_k =\frac{\mu_k P}{N}{\bf I},\qquad k=1,\cdots, K.
\end{equation*}
\bibliographystyle{ieeetran}
\section{Conclusion and Outlook}\label{Section Conclusion}
Motivated by the nestedness issue of Multilevel Subset Simulation, we implement Multilevel Sequential Importance Sampling to estimate the probability of rare events. We assume that the underlying limit state function depends on a discretization parameter $\ell$. MLSIS samples a sequence of non-zero density functions that are adaptively chosen such that each pair of subsequent densities are only slightly different. Therefore, nestedness is not an issue for MLSIS. We combine the smoothing approach of the indicator function in \cite{Papaioannou16} and the multilevel idea in \cite{Latz18}. This yields a two-fold adaptive algorithm which combines tempering and bridging sequences in a clever way to reduce computational costs. Moreover, we apply the level dependent dimension approach of \cite{Ullmann15} to reduce variances between consecutive accuracy levels of the limit state function. This leads to more tempering updates on coarse levels and reduces computational costs. Another contribution of our work is the von Mises Fisher Nakagami distribution as a proposal density in an independent Markov chain Monte Carlo algorithm. This leads to an efficient MCMC algorithm even in high dimensions.
\\In numerical experiments in 1D and 2D space, we show that MLSIS has a lower computational cost than SIS at any given error tolerance. For both experiments, MLSIS with the von Mises Fisher Nakagami distribution leads to lower computational cost than Multilevel Subset Simulation for the same accuracy. However, MLSIS with adaptive conditional sampling achieves lower computational cost than Multilevel Subset Simulation only for the 2D experiment. The results also show that applying the von Mises Fisher Nakagami distribution as a proposal density in the MCMC algorithm reduces the bias and coefficient of variation of the MLSIS estimator compared to applying adaptive conditional sampling as the MCMC algorithm.
\\The bridging approach can also handle more general assumptions on the approximation sequence of the limit state function. For instance, the approximation sequence can arise within a multi-fidelity setting. Therein, bridging is applied to transfer samples between a low-fidelity model and a high-fidelity model.
\\Instead of using SIS to shift samples to the failure region, we plan to apply the Ensemble Kalman Filter for inverse problems as a particle-based estimator for the probability of failure. In this case, the reliability problem is formulated as an inverse problem.
\section{Background}\label{Section SIS}
\subsection{Problem Setting}
Consider the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a random variable $U:\Omega\rightarrow\mathbb{R}^n$. Following \cite{Kiureghian86,Hohenbichler81}, it is assumed, without loss of generality, that $U$ is distributed according to the $n$-variate standard normal distribution with density function $\varphi_n$. If a non-Gaussian random variable $\tilde{U}$ is used, an isoprobabilistic transformation $U=T(\tilde{U})$ is applied. Failure is defined in terms of a limit state function (LSF) $G:\mathbb{R}^n\rightarrow\mathbb{R}$: failure occurs for all $\omega\in\Omega$ with $G(U(\omega))\le 0$. In many applications, the LSF $G$ is not analytically given. We can only evaluate an approximation $G_{\ell}$, where $\ell$ represents the discretization level. Increasing $\ell$ leads to a more accurate approximation. In the numerical examples presented in this paper, $G_{\ell}$ requires the solution of a PDE and $\ell$ specifies the mesh size of an FEM approximation. The \emph{probability of failure} is defined as the measure of the \emph{failure domain} $A:= \{\omega\in\Omega: G(U(\omega))\le 0\}$, which is expressed as
\begin{align}
P_{f} := \mathbb{P}[A] = \mathbb{P}[G(U)\le0] = \int_{G(u) \le 0} \varphi_n(u) \mathrm{d}u.\label{probability of failure}
\end{align}
Using $G_{\ell}$ instead of $G$ in (\ref{probability of failure}) gives the approximation $P_{f,\ell}$, which includes numerical errors due to approximating the exact LSF $G$. Convergence is expected as the level $\ell$ increases, i.e., as the finite element mesh size decreases.
\\ The probability of failure can be estimated by crude Monte Carlo sampling \cite{Fishman96}. By evaluating $G_{\ell}$ on the discretization level $\ell$ for $N\in\mathbb{N}$ independent samples distributed according to $\varphi_n$, we obtain the (single-level) Monte Carlo estimator $\hat{P}_{f,\ell}^{\mathrm{MC}}$ for $P_{f,\ell}$
\begin{align}
\hat{P}_{f,\ell}^{\mathrm{MC}} = \frac{1}{N} \sum_{k=1}^N I\left(G_{\ell}(u_k)\le 0\right),\label{MC esimator}
\end{align}
where $I$ denotes the indicator function; i.e., $I(\mathrm{true}) = 1$ and $I(\mathrm{false}) = 0$. $\hat{P}_{f,\ell}^{\mathrm{MC}}$ is an unbiased estimator and easy to implement. Since the coefficient of variation of $\hat{P}_{f,\ell}^{\mathrm{MC}}$ is equal to $\sqrt{(1-P_{f,\ell})/(N P_{f,\ell})}$, see \cite{Papaioannou16}, and hence grows without bound as $P_{f,\ell}$ decreases, a large number of samples is required if $P_{f,\ell}$ is small and a small coefficient of variation is to be achieved. Hence, the computational costs become prohibitive if $G_{\ell}$ is expensive to evaluate. This makes crude Monte Carlo sampling impractical for the estimation of rare failure probabilities.
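To fix ideas, the estimator (\ref{MC esimator}) can be implemented in a few lines. The following Python sketch uses an assumed toy LSF whose failure probability is known analytically:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def mc_failure_prob(G, n, N, seed=0):
    """Crude Monte Carlo estimate of P[G(U) <= 0] with U ~ N(0, I_n)."""
    u = np.random.default_rng(seed).standard_normal((N, n))
    return np.mean(G(u) <= 0.0)

# toy linear LSF: failure iff sum_i u_i >= 3.5 * sqrt(n), so P_f = Phi(-3.5)
G = lambda u: 3.5 * np.sqrt(u.shape[1]) - u.sum(axis=1)
print(mc_failure_prob(G, n=10, N=10**6), norm.cdf(-3.5))
\end{verbatim}
With $P_f \approx 2.3\cdot 10^{-4}$, even $N=10^6$ samples yield a coefficient of variation of about $7\%$, which illustrates the cost issue discussed above.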
\subsection{Subset Simulation and Multilevel Subset Simulation}\label{SuS and MLSuS}
SuS and MLSuS are alternative approaches where the failure probability is estimated by a product of conditional probabilities. Consider the sequence of domains $B_0, B_1,\dots,B_S$, where $B_S = A$ is the failure domain. In both approaches, the sequence is constructed such that
\begin{align}
P(B_j\mid B_{j-1})= \hat{p}_0 \in(0,1),\label{SuS parameter}
\end{align}
where $\hat{p}_0$ is chosen to ensure that samples of $B_j$ can be easily generated from samples of $B_{j-1}$ \cite{Au01}. In SuS, the sequence of domains is nested, i.e., $B_{j}\subset B_{j-1}$ for $j=1,\dots,S$, since the discretization level is fixed. Hence, the SuS estimator is given as
\begin{align*}
\hat{P}_{f,\ell}^{\mathrm{SuS}} := \hat{P}_{B_1} \prod_{j=2}^S \hat{P}_{B_j\mid B_{j-1}},
\end{align*}
where $\hat{P}_{B_j\mid B_{j-1}}$ is an estimator for $P(B_j\mid B_{j-1})$. It has been shown in \cite{Papaioannou16} that SuS is a special case of SIS, where the IS densities $p_{j,\ell}$ are chosen as the optimal IS density with respect to the domain $B_j$. In MLSuS \cite{Ullmann15}, the sequence of domains is no longer nested, since consecutive domains are defined with respect to different LSFs $G_{\ell}$ whenever the discretization level is updated. To overcome this problem, the conditional probability $P(B_{j-1}\mid B_{j})$ has to be estimated. This leads to the MLSuS estimator
\begin{align}
\hat{P}_{f,\ell}^{\mathrm{MLSuS}} := \hat{P}_{B_1} \prod_{j=2}^S \frac{\hat{P}_{B_{j}\mid B_{j-1}}}{\hat{P}_{B_{j-1}\mid B_{j}}}.\label{MLSuS estimator}
\end{align}
Moreover, samples which are taken as seeds in the MCMC step are not distributed according to the target distribution. Therefore, a burn-in is required. Both issues increase the computational costs. Note that if the domains $B_j$ for $j=1,\dots,S$ were nested, then the denominator in (\ref{MLSuS estimator}) would be equal to one and no estimator for the denominator would be required. To increase the denominator in (\ref{MLSuS estimator}), the authors in \cite{Ullmann15} apply a level dependent parameter dimension. This reduces the variance between two consecutive levels and makes the MLSuS algorithm more robust. In Section \ref{level dependent dim} of this work, we consider a level dependent parameter dimension for the MLSIS algorithm to reduce the variance between consecutive levels.
\subsection{Importance Sampling}
IS is a variance reduction technique \cite{Agapiou17,Rubinstein16}, where the integral in (\ref{probability of failure}) is calculated with respect to a certain \emph{IS density} $p_{\ell}$. If $p_{\ell}$ takes on large values in the failure domain, many samples following $p_{\ell}$ represent failure events. Therefore, fewer samples are required to estimate the probability of failure accurately. Following \cite{Papaioannou18}, the failure probability $P_{f,\ell}$ is expressed as
\begin{align*}
P_{f,\ell} = \int_{\mathbb{R}^n} I\left(G_{\ell}(u)\le 0\right) w_{\ell}(u)p_{\ell}(u)\mathrm{d}u = \mathbb{E}_{p_{\ell}}[I\left(G_{\ell}(u)\le 0\right)w_{\ell}(u)],
\end{align*}
where the \emph{importance weight} is defined as $w_{\ell}(u) := {\varphi_n(u)}/{p_{\ell}(u)}$. Again, crude Monte Carlo sampling is applied, which yields the estimator
\begin{align*}
\hat{P}_{f,\ell}^{\mathrm{IS}} = \frac{1}{N} \sum_{k=1}^{N} I\left(G_{\ell}(u_k)\le 0\right)w_{\ell}(u_k),
\end{align*}
where the samples $\{u_k\}_{k=1}^{N}$ are distributed according to the IS density $p_{\ell}$. $\hat{P}_{f,\ell}^{\mathrm{IS}}$ is an unbiased estimator for $P_{f,\ell}$ if the support of $p_{\ell}$ contains the failure domain $A_{\ell}:= \{\omega\in\Omega: G_{\ell}(U(\omega))\le 0\}$. The \emph{optimal IS density} is given by
\begin{align}
p_{\mathrm{opt},\ell}(u) := \frac{1}{P_{f,\ell}}I(G_{\ell}(u)\le 0) \varphi_n(u),\label{optimal IS density}
\end{align}
which leads to a zero-variance estimator. Since $P_{f,\ell}$ and $A_{\ell}$ are unknown, $p_{\mathrm{opt},\ell}$ cannot be used in practice. In contrast, SIS achieves an approximation of $p_{\mathrm{opt},\ell}$ by approximating the optimal IS distribution in a sequential manner while starting from a known \emph{prior density} $p_0$.
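As a one-dimensional illustration (our own toy example, with the IS density simply centred at the most likely failure point), consider $G(u) = 3.5 - u$, for which $P_f = \Phi(-3.5)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta, N = 3.5, 10**4
G = lambda u: beta - u                    # failure iff u >= beta

u = rng.normal(beta, 1.0, N)              # IS density p = N(beta, 1)
w = norm.pdf(u) / norm.pdf(u, loc=beta)   # importance weights phi(u)/p(u)
print(np.mean((G(u) <= 0.0) * w), norm.cdf(-beta))
\end{verbatim}
With only $10^4$ samples, the IS estimate matches $\Phi(-3.5)$ to within a few percent, whereas crude Monte Carlo would require orders of magnitude more samples.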
\subsection{Sequential Importance Sampling}
According to \cite{Papaioannou16}, the sequence of IS densities is determined from a smooth approximation of the indicator function. The \emph{cumulative distribution function} (cdf) of the standard normal distribution is one possibility to approximate the indicator function. For $G_{\ell}(u)\neq 0$ we achieve pointwise convergence
\begin{align*}
I(G_{\ell}(u)\le 0) = \lim_{\sigma \downarrow 0} \Phi\left(-\frac{G_{\ell}(u)}{\sigma}\right),
\end{align*}
while for $G_{\ell}(u)=0$ it holds for all $\sigma>0$ that $\Phi\left(-{G_{\ell}(u)}/{\sigma}\right)= 1/2 \neq I(G_{\ell}(u)\le 0)$, as visualized in Figure \ref{indicator approximation}. Further approximation functions are examined in \cite{Lacaze15}, together with a sensitivity analysis.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.25]{indicator_approximation.pdf}
\caption[Indicator approximation]{Approximation of the indicator function $I(G_{\ell}(u)\le 0)$ by the cdf of the standard normal distribution $\Phi\left(-{G_{\ell}(u)}/{\sigma}\right)$.}
\label{indicator approximation}
\end{figure}
\\With the preceding consideration, the sequence of IS densities $\{p_{j,\ell}: j=0,\dots,N_T\}$ is defined as
\begin{align*}
p_{j,\ell}(u) &:= \frac{1}{P_{j,\ell}}\Phi\left(-\frac{G_{\ell}(u)}{\sigma_j}\right) \varphi_n(u) = \frac{1}{P_{j,\ell}}\eta_{j,\ell}(u), \text{ for } j=1,\dots, N_T,
\\ p_0(u) &:= \varphi_n(u),
\end{align*}
where $\infty > \sigma_1 > \cdots >\sigma_{N_T} > 0$ is a strictly decreasing sequence of temperatures or bandwidths and $P_{j,\ell}$ is a normalizing constant such that $p_{j,\ell}$ is a well-defined density function. The term `temperatures' and their use is motivated by the temperature of the Boltzmann distribution \cite[Chapter VIII]{Gibbs02}. The number $N_T$ of tempering steps is a priori unknown; it is the number of steps required to approximate the optimal IS density sufficiently accurately. Applying the IS approach, $P_{j,\ell}$ is determined by sampling from the density $p_{j-1,\ell}$:
\begin{align}
P_{j,\ell} = \int_{\mathbb{R}^n} \eta_{j,\ell}(u) \mathrm{d}u = P_{j-1,\ell} \int_{\mathbb{R}^n} w_{j,\ell}(u) p_{j-1,\ell}(u) \mathrm{d}u = P_{j-1,\ell} \mathbb{E}_{p_{j-1,\ell}}[w_{j,\ell}(u)], \label{eq P_j}
\end{align}
where $w_{j,\ell}(u) := {\eta_{j,\ell}(u)}/{\eta_{j-1,\ell}(u)}$. Hence, the ratio of consecutive normalizing constants $S_{j,\ell}= {P_{j,\ell}}/{P_{j-1,\ell}}$ is estimated by
\begin{align}
\hat{S}_{j,\ell} := \hat{\mathrm{E}}_{p_{j-1,\ell}} [w_{j,\ell}(u)] = \frac{1}{N}\sum_{k=1}^{N} w_{j,\ell}(u_k),\label{S tempering}
\end{align}
where the samples $\{u_k\}_{k=1}^{N}$ are distributed according to $p_{j-1,\ell}$. Using the definition of $\eta_{j,\ell}$ and $\eta_{j-1,\ell}$, the weights $w_{j,\ell}(u_k)$ for $k=1,\dots,N$ are given by
\begin{align}
w_{j,\ell}(u_k) &= \frac{\Phi\left(-{G_{\ell}(u_k)}/{\sigma_j}\right)}{\Phi\left(-{G_{\ell}(u_k)}/{\sigma_{j-1}}\right)}, \text{ for } j>1,\label{weights}
\\w_{1,\ell}(u_k) &= \Phi\left(-{G_{\ell}(u_k)}/{\sigma_1}\right).\notag
\end{align}
To obtain an accurate estimator $\hat{S}_{j,\ell}$, the parameters $\sigma_j$ are adaptively determined such that consecutive densities differ only slightly. This goal is achieved by requiring that the coefficient of variation of the weights $w_{j,\ell}$ is close to the target value $\delta_{\mathrm{target}}$, which is specified by the user. This leads to the following minimization problem
\begin{align}
\sigma_j = \underset{\sigma \in (0,\sigma_{j-1})}{\mathrm{argmin}} \big\Vert \delta_{w_{j,\ell}} - \delta_{\mathrm{target}}\big\Vert_2^2,\label{min pbl}
\end{align}
where $\delta_{w_{j,\ell}}$ is the \emph{coefficient of variation} of the weights (\ref{weights}). This adaptive procedure is similar to the adaptive tempering in \cite{Beskos13,Latz18} and is equivalent to requiring that the \emph{effective sample size} takes a target value \cite{Latz18}. Note that the solution of the minimization problem in (\ref{min pbl}) does not require further evaluations of the LSF. Hence, its costs are negligible compared to the overall computational costs. The tempering iteration is finished if the coefficient of variation $\delta_{w_{\mathrm{opt},\ell}}$ of the weights with respect to the optimal IS density
\begin{align}
w_{\mathrm{opt},\ell}(u_k) := I(G_{\ell}(u_k)\le 0)\frac{\varphi_n(u_k)}{\eta_{j,\ell}(u_k)}\label{weights optimal density}
\end{align}
is smaller than $\delta_{\mathrm{target}}$ and, hence, the optimal IS density is approximated sufficiently well. According to \cite{Papaioannou16}, the SIS estimator of the probability of failure is defined as follows
\begin{align}
\hat{P}_{f,\ell}^{\mathrm{SIS}} = \left(\prod_{j=1}^{N_T} \hat{S}_{j,\ell} \right)\frac{1}{N}\sum_{k=1}^{N} w_{\mathrm{opt},\ell}(u_k),\label{SIS estimator}
\end{align}
where the weights $w_{\mathrm{opt},\ell}$ are defined in (\ref{weights optimal density}) with $j=N_T$. The sum over the weights $w_{\mathrm{opt},\ell}(u_k)$ in (\ref{SIS estimator}) represents the last tempering step from the IS density $p_{N_T,\ell}$ to the optimal IS density $p_{\mathrm{opt},\ell}$ given in (\ref{optimal IS density}). It corresponds to an estimator of the ratio $P_{f,\ell}/P_{N_T,\ell}$ since $P_{f,\ell}$ is the normalizing constant of the optimal IS density.
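\\To make the adaptive choice of the temperatures concrete, the following Python sketch solves the one-dimensional problem (\ref{min pbl}) with a bounded scalar solver; the function names and the solver choice are ours. Only the cached LSF values $g_k = G_{\ell}(u_k)$ enter, in agreement with the observation that (\ref{min pbl}) requires no further LSF evaluations.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def cov_weights(g, sigma, sigma_prev):
    """CoV of the tempering weights; constant factors cancel in the CoV,
    so the case j = 1 (sigma_prev = np.inf) is covered as well."""
    log_w = norm.logcdf(-g / sigma) - norm.logcdf(-g / sigma_prev)
    w = np.exp(log_w - np.max(log_w))
    return np.std(w) / np.mean(w)

def next_sigma(g, sigma_prev, delta_target):
    """Solve the scalar problem for sigma_j on (0, sigma_prev); for
    sigma_prev = np.inf a heuristic finite upper bound is used."""
    upper = sigma_prev if np.isfinite(sigma_prev) else 10.0 * np.max(np.abs(g))
    obj = lambda s: (cov_weights(g, s, sigma_prev) - delta_target) ** 2
    return minimize_scalar(obj, bounds=(1e-8, upper), method="bounded").x
\end{verbatim}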
\\During the iteration, MCMC sampling is applied to transfer samples distributed according to $p_{j-1,\ell}$ to samples distributed according to $p_{j,\ell}$ for $j=1,\dots,N_T$. Section \ref{Section MCMC} explains MCMC sampling in more detail. Algorithm \ref{Tempering alg} summarizes the procedure of one tempering step for sampling from $p_{j,\ell}$ and estimating $S_{j,\ell}$ starting from samples from $p_{j-1,\ell}$.
\begin{remark}
We remark that nestedness, which is a prerequisite for SuS, is not an issue for SIS. This is because the intermediate sampling densities are smooth approximations of the optimal IS density and their support is the whole outcome space. The proximity of two consecutive densities is ensured by (\ref{min pbl}). This property of SIS motivates the development of MLSIS in the following section. We note that MLSuS does not satisfy nestedness, which leads to the denominators in the estimator (\ref{MLSuS estimator}).
\end{remark}
\begin{algorithm}
\caption{Tempering algorithm ($N$ samples from $p_{j-1,\ell}$, $\sigma_{j-1}$, $\delta_{\mathrm{target}}$, $G_{\ell}$)}
\label{Tempering alg}
\begin{algorithmic}[1]
\STATE determine $\sigma_j$ from the optimization problem (\ref{min pbl})
\STATE evaluate the weights $w_{j,\ell}$ as in (\ref{weights}) for the current set of samples
\STATE evaluate the estimator $\hat{S}_{j,\ell}$ as in (\ref{S tempering})
\STATE re-sample the samples of $p_{j-1,\ell}$ based on their weights $w_{j,\ell}$
\STATE move the samples with MCMC to generate $N$ samples from the density $p_{j,\ell}$
\RETURN $N$ samples from $p_{j,\ell}$, $\sigma_j$, $\hat{S}_{j,\ell}$
\end{algorithmic}
\end{algorithm}
\section{Introduction}\label{Sec: Introduction}
Estimating the probability of rare events is crucial in reliability analysis and risk management and arises in applications in many fields. For instance, the authors in \cite{Agarwal17} examine rare events arising in financial risk settings, while \cite{Morio15} studies the probability of collision between space debris and satellites. In planning a radioactive waste repository \cite{Cornaton08,Noseck08}, one is interested in the probability that radioactive particles leave the repository and pollute the groundwater over a long time horizon. The particle flow can be simulated by a finite element method (FEM) \cite{Braess07} approximation of the groundwater flow and transport equation. Since the subsurface properties of the whole domain of interest are uncertain or only measurable at finitely many points, the soil is modelled as a random field. The particle transport has to be simulated for various realizations of the random field to estimate the probability that the radioactive particles come back to the human environment, which is a rare event.
\\All applications have in common that the probabilities of the events are small $(<10^{-4})$ and that the \emph{limit state function} (LSF) is based on a computationally demanding model which depends on the discretization of the domain. If the discretization level is high, i.e., the mesh size is small, the FEM approximation is accurate but also cost intensive. These issues complicate the estimation of the probability of failure.
\\Before we introduce our novel approach, we give a brief overview of existing algorithms. On the one hand, there are deterministic approximation methods, such as the \emph{first} and \emph{second order reliability method} (FORM, SORM) \cite{Melchers18}, which aim at approximating the domain of parameters which lead to failure events. On the other hand, there are sampling based methods, which approximate the probability of failure events. Unlike approximation methods, sampling approaches are based on sample estimates and are usually more robust with respect to the complexity of the LSF. Since our novel approach is a sampling method, we focus on this category and give a broader overview.
\\\emph{Monte Carlo sampling} \cite{Fishman96,Rubinstein16} can be easily applied to estimate the probability of failure and yields an unbiased estimator. However, due to the mentioned issues of rare event settings, the Monte Carlo estimator becomes intractable, since hardly any sample contributes to the rare event and each sample requires a cost intensive function evaluation. Therefore, variance reduction techniques have been developed to reduce the number of samples for obtaining an accurate estimate. For instance, the idea of \emph{Multilevel Splitting} \cite{Botev12,Glasserman99} and \emph{Subset Simulation} (SuS) \cite{Au01,Au14} is to decompose the rare event into a sequence of nested events. This enables expressing the probability of the rare event as a product of conditional probabilities of more frequent events. These methods require sampling from a sequence of probability density functions which is achieved with \emph{Markov chain Monte Carlo} (MCMC) methods \cite{Papaioannou15,Wang19}.
\\\emph{Importance sampling} (IS) methods employ an alternative sampling density which, if chosen properly, can considerably reduce the variance of the standard Monte Carlo estimator \cite{Kahn53}. The optimal choice of the sampling density is the density of the input variables conditional on the failure domain. However, direct sampling from the optimal density is not feasible, because the location of the failure domain is unknown prior to performing the simulation. As in Multilevel Splitting or SuS, a sequential approach can be applied to approximate the optimal IS density. This leads to \emph{Sequential Importance Sampling} (SIS) \cite{Papaioannou18,Papaioannou16} or \emph{Sequential Monte Carlo} (SMC) \cite{Cerou12} for the estimation of rare events. In our novel approach, we consider an adaptive methodology similar to adaptive SMC \cite{Beskos13,Doucet09,Jasra11}. Another approach to estimate the optimal sampling density sequentially is the \emph{Cross-entropy} method \cite{Geyer19}, where the sampling density minimizes the Kullback-Leibler divergence to the optimal density within a family of parametrized densities. IS can also be applied to a hyperplane that is perpendicular to an important direction, a method known as \emph{line sampling} \cite{Angelis15,Koutsourelakis04,Rackwitz01}.
\\The previous algorithms have the drawback that all evaluations have to be performed with respect to the same LSF. The evaluation of the LSF could require the solution of a discretized PDE, which depends on the mesh size of the computational domain. Since computational costs increase with decreasing mesh size, we wish to construct a method wherein the discretized PDE is solved on fine meshes only for very few realizations. Therefore, we apply a multilevel approach that uses a hierarchy of discretization levels. The authors in \cite{Elfverson16} use the telescoping sum approach of \cite{Giles15} to estimate the probability of failure. Applying the multilevel idea to the previously described methods gives \emph{Multilevel Subset Simulation} (MLSuS) \cite{Ullmann15} and \emph{Multilevel Sequential Monte Carlo} \cite{Beskos17,Moral17}. Moreover, a multi-fidelity approach combined with the cross-entropy method is investigated in \cite{Peherstorfer18}. Furthermore, the work in \cite{Latz18} develops the \emph{Multilevel Sequential$^2$ Monte Carlo} ($\mathrm{MLS}^2\mathrm{MC}$) estimator, which is a twofold sequential algorithm for Bayesian inverse problems.
\\In this paper, we consider SuS and SIS as well as their multilevel versions. In more detail, an MCMC algorithm \cite{Cotter13,Hastings70} is applied within SuS to gradually shift samples into consecutive domains, which are defined by the sequence of nested events. By the \emph{nestedness property} \cite{Papaioannou15}, the simulated Markov chains do not require a burn-in period, since seeds are already distributed approximately according to the target distribution. Therefore, SuS is an efficient but slightly biased estimator \cite{Au01}. The MLSuS method, given in \cite{Ullmann15}, employs a hierarchy of discretization levels and enables the usage of coarse grid function evaluations. MLSuS saves significant computational costs compared to SuS if the failure domains between discretization levels are still nested. However, nestedness is no longer guaranteed in the multilevel setting since the sequence of consecutive domains is based on LSFs with different accuracies. Therefore, a second MCMC step has to be performed. Additionally, a burn-in period is proposed since seeds are no longer distributed (approximately) according to the target distribution. Both issues increase the computational costs of the MLSuS estimator; and thus decrease its efficiency. However, a level dependent parameter dimension can be applied to reduce variances between two accuracy levels of the LSF and approximately satisfy the nestedness property.
\\The nestedness issue of MLSuS is our main motivation to implement the $\mathrm{MLS}^2\mathrm{MC}$ algorithm for rare event estimation. Nestedness is not an issue for $\mathrm{MLS}^2\mathrm{MC}$; the method samples a sequence of non-zero densities with IS and chooses each IS density to be close to each target density in the sequence. The idea of the $\mathrm{MLS}^2\mathrm{MC}$ method is combined with the SIS approach and yields a \emph{Multilevel Sequential Importance Sampling} (MLSIS) estimator for rare events. Note that both MLSIS as well as MLSuS are not based on the telescoping sum approach. To achieve an even more efficient algorithm, we apply the level dependent parameter dimension approach of \cite{Ullmann15}. As SIS, the MLSIS method requires an MCMC algorithm to shift samples into consecutive target distributions. We consider an independent MCMC sampler that uses the \emph{von Mises-Fisher Nakagami} (vMFN) distribution model fitted with the available weighted samples at each sampling level as the proposal distribution. The vMFN distribution is applied in \cite{Papaioannou19} as a parametrized family of probability distributions for the Cross-entropy method which yields an efficient algorithm even in high dimensions. Employing the vMFN distribution as a proposal density is another main contribution of our work.
\\The paper is structured as follows. In Section \ref{Section SIS}, the problem setting of estimating the probability of failure is defined and SIS as well as SuS are explained. The MLSIS estimator is described in Section \ref{Section MLSIS}. In Section \ref{Section MCMC}, two MCMC algorithms are studied which are applied within SIS and MLSIS. In Section \ref{chapter numerical experiments}, the studied estimators are applied to 1D and 2D test problems and the MLSIS estimator is compared with SIS as well as SuS and MLSuS. In Section \ref{Section Conclusion}, a summary of the discussion and an outlook are given.
\section{Markov Chain Monte Carlo}\label{Section MCMC}
The goal of SIS and MLSIS is to transform samples from the prior density $p_0 = \varphi_n$ into samples from the optimal IS density $p_{\mathrm{opt}}$. Thereby, a sequence of densities is defined which converges to the optimal one. MCMC is applied to move samples between consecutive densities of the tempering and bridging sequences.
\\Consider the tempering step from $p_{j-1,\ell}$ to $p_{j,\ell}$. The samples $\{u_k\}_{k=1}^{N}$ are distributed as $p_{j-1,\ell}$ and have to be transformed into samples that are distributed as $p_{j,\ell}$. To define the number of seeds of the MCMC algorithm, we choose a parameter
\begin{align}
c\in(0,1] \text{ such that } \frac{1}{c}\in\mathbb{N} \text{ and } c\cdot N \in \mathbb{N}.\label{seed paramter}
\end{align}
Then, $N_c:=c\cdot N$ seeds are randomly selected with replacement from the set $\{u_k\}_{k=1}^{N}$ according to their weights $\{w_{j,\ell}(u_k)\}_{k=1}^{N}$ given in (\ref{weights}); cf.\ Algorithm \ref{Tempering alg}. The set of seeds is denoted by $\{u_{k_i}\}_{i=1}^{N_c}$. In this procedure, which corresponds to \emph{multinomial resampling}, samples with high weights are copied multiple times and samples with low weights are discarded. There are also other resampling methods, such as \emph{stratified resampling} or \emph{systematic resampling}, which can be applied. A study on their convergence behaviour is given in \cite{Gerber2019}. The burn-in length is denoted by $N_b\in\mathbb{N}$. Starting with a seed $u_0\in \{u_{k_i}\}_{i=1}^{N_c}$, a Markov chain of length $N_b+1/c$ is simulated that has $p_{j,\ell}$ as its stationary distribution. The first $N_b$ states are discarded after the simulation. Algorithm \ref{MCMC alg} states the MCMC procedure, which employs the \emph{Metropolis-Hastings} sampler \cite{Hastings70,Metropolis53}. During the algorithm, a \emph{proposal} $\bar{u}$ is generated according to the \emph{proposal density} $q$. Moreover, the acceptance function $\alpha : \mathbb{R}^n\times \mathbb{R}^n\rightarrow [0,\infty)$ is given by
\begin{align*}
\alpha_T(u_0,\bar{u}) := \frac{\Phi\left(-{G_{\ell}(\bar{u})}/{\sigma_j}\right)\varphi_n(\bar{u})q(u_0\mid \bar{u})}{\Phi\left(-{G_{\ell}(u_0)}/{\sigma_j}\right)\varphi_n(u_0)q(\bar{u}\mid u_0)},
\end{align*}
which is the ratio of the target density $p_{j,\ell}$ with respect to the current state of the chain $u_0$ and candidate $\bar{u}$. For a bridging step, the seeds are selected from samples distributed according to $p_{j,\ell}^t$ and the target density is $p_{j,\ell}^{t+1}$. The weights are given by $\{w_{j,\ell}^t(u_k)\}_{k=1}^N$, see (\ref{bridging weights}), and the acceptance function $\alpha$ must be replaced by
\begin{align*}
\alpha_B(u_0,\bar{u}) = \frac{\Phi\left(-{G_{\ell+1}(\bar{u})}/{\sigma_j}\right)^{\beta_{t+1}}\Phi\left(-{G_{\ell}(\bar{u})}/{\sigma_j}\right)^{1-\beta_{t+1}}\varphi_n(\bar{u})q(u_0\mid \bar{u})}{\Phi\left(-{G_{\ell+1}(u_0)}/{\sigma_j}\right)^{\beta_{t+1}}\Phi\left(-{G_{\ell}(u_0)}/{\sigma_j}\right)^{1-\beta_{t+1}}\varphi_n(u_0)q(\bar{u}\mid u_0)}.
\end{align*}
\begin{algorithm}
\caption{MCMC algorithm ($u_0$, $q(\cdot\mid \cdot)$, $\alpha(\cdot, \cdot)$, $c$, $N_b$)}\label{MCMC alg}
\begin{algorithmic}[1]
\STATE $\mathrm{Chain} = \emptyset$
\WHILE{$i\le N_b + 1/c$}
\STATE Generate a candidate $\bar{u}$ from the proposal density $q(\cdot\mid u_0)$
\STATE Evaluate $\alpha(u_0,\bar{u})$
\STATE Accept the candidate $\bar{u}$ with probability $\min\{1,\alpha(u_0,\bar{u})\}$
\IF{$\bar{u}$ is accepted}
\STATE $u_0 \leftarrow \bar{u}$
\ENDIF
\STATE $\mathrm{Chain} \leftarrow \mathrm{Chain}\cup u_0$
\STATE $i\leftarrow i+1$
\ENDWHILE
\STATE Discard the first $N_b$ elements of $\mathrm{Chain}$
\RETURN simulation of Markov chain
\end{algorithmic}
\end{algorithm}
\begin{remark}
Since consecutive densities within SIS and MLSIS are constructed in a way that they are not too different and samples are weighted according to the target distribution, the burn-in length can be small or even negligible within SIS and MLSIS \cite{Papaioannou16}. Note that for SuS and MLSuS, the $N\cdot \hat{p}_0$ samples with the lowest LSF values are selected as seeds.
\end{remark}
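\\A minimal Python sketch of Algorithm \ref{MCMC alg} reads as follows; the proposal sampler \texttt{q\_sample} and the log-acceptance function \texttt{log\_alpha} are placeholders supplied by the caller, for instance the aCS quantities of the next subsection.
\begin{verbatim}
import numpy as np

def mh_chain(u0, q_sample, log_alpha, c, N_b, rng=np.random.default_rng()):
    """Metropolis-Hastings chain of length N_b + 1/c; the first N_b
    states are discarded as burn-in."""
    chain = []
    for _ in range(N_b + int(round(1.0 / c))):
        u_bar = q_sample(u0)                    # candidate from q(.|u0)
        # accept with probability min(1, alpha(u0, u_bar))
        if np.log(rng.uniform()) < log_alpha(u0, u_bar):
            u0 = u_bar
        chain.append(np.copy(u0))
    return np.asarray(chain[N_b:])
\end{verbatim}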
\subsection{Adaptive conditional sampling}
The random walk Metropolis-Hastings algorithm \cite{Hastings70,Metropolis53} is a classical MCMC algorithm. However, random walk samplers suffer from the curse of dimensionality, i.e., the acceptance rate deteriorates in high dimensions, see \cite{Papaioannou15}. Since high dimensional parameter spaces are considered in the numerical experiments in Section \ref{chapter numerical experiments}, we employ \emph{adaptive conditional sampling} (aCS), where the chain correlation is adapted to ensure a high acceptance rate. aCS is a dependent MCMC algorithm, i.e., the proposal density depends on the current state $u_0$. More formally, the proposal $q$ is defined as the conditional multivariate normal density with mean vector $\rho u_0$ and covariance matrix $\Sigma=(1-\rho^2) I_n$, with $I_n$ denoting the identity matrix. During the iterations, $\rho\in [0,1]$ is adaptively adjusted such that the acceptance rate is around $44\%$ \cite{roberts2001}, which is optimal in terms of the minimum autocorrelation criterion. By the structure of the proposal, the acceptance functions read as
\begin{align*}
\alpha_T(u_0,\bar{u}) &= \frac{\Phi\left(-{G_{\ell}(\bar{u})}/{\sigma_j}\right)}{\Phi\left(-{G_{\ell}(u_0)}/{\sigma_j}\right)},
\\\alpha_B(u_0,\bar{u}) &= \frac{\Phi\left(-{G_{\ell+1}(\bar{u})}/{\sigma_j}\right)^{\beta_{t+1}}\Phi\left(-{G_{\ell}(\bar{u})}/{\sigma_j}\right)^{1-\beta_{t+1}}}{\Phi\left(-{G_{\ell+1}(u_0)}/{\sigma_j}\right)^{\beta_{t+1}}\Phi\left(-{G_{\ell}(u_0)}/{\sigma_j}\right)^{1-\beta_{t+1}}}.
\end{align*}
A more detailed description of the algorithm with the adaptive adjustment of the correlation parameter is given in \cite{Papaioannou15}. The aCS algorithm can be viewed as an adaptive version of the \emph{preconditioned Crank-Nicolson} (pCN) sampler \cite{Cotter13} tailored to application within SIS.
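\\For illustration, the aCS proposal and the simplified tempering acceptance ratio can be realized as follows and combined with the sketch of Algorithm \ref{MCMC alg} above. The adaptive adjustment of the correlation parameter is omitted; $\rho$ is treated as fixed in this sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def acs_proposal(rho, rng=np.random.default_rng()):
    """Conditional normal proposal: mean rho*u0, covariance (1-rho^2) I_n."""
    return lambda u0: (rho * u0
                       + np.sqrt(1.0 - rho**2) * rng.standard_normal(u0.shape))

def log_alpha_tempering(G, sigma_j):
    """log alpha_T; the prior and proposal densities cancel for aCS."""
    return lambda u0, u_bar: (norm.logcdf(-G(u_bar) / sigma_j)
                              - norm.logcdf(-G(u0) / sigma_j))
\end{verbatim}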
\subsection{Independent sampler with von Mises-Fisher Nakagami proposal distribution}
Since aCS is a dependent MCMC algorithm, the states of the chains are correlated, which can lead to a higher variance of the estimated ratio of normalizing constants $\hat{S}_{j,\ell}$ and $\hat{S}_{j,\ell}^t$ given in (\ref{S tempering}) and (\ref{S bridging}), respectively. This, in turn, leads to a higher variance of the estimated probability of failure (\ref{MLSIS estimator}). An independent MCMC algorithm overcomes this problem by using a proposal density that does not depend on the current state. The dependence on the current state enters in the acceptance probability. If the proposal density is chosen close to the target density, the acceptance probability will be close to one and the samples will be approximately independent. In the context of SIS and MLSIS, the available samples and corresponding weights of each previous density can be used to fit a distribution model to be used as proposal density in the MCMC step \cite{Chopin01,Papaioannou16}. For instance, \emph{Gaussian mixture} models can be used as a proposal density \cite{Papaioannou16}. A drawback of Gaussian densities in high dimensions is the concentration of measure around the hypersphere with norm equal to $\sqrt{n}$, see \cite{Katafygiotis08,Papaioannou19}. Therefore, only the direction of the samples is of importance. Furthermore, the Gaussian mixture model with $K$ densities has $K {n(n+3)}/{2}+(K-1)$ parameters, which have to be estimated. Both issues motivate the vMFN distribution. Therein, the direction is sampled from the \emph{von Mises-Fisher} (vMF) distribution \cite{Wang16} while the radius is sampled from the \emph{Nakagami} distribution, see \cite{Nakagami60}. The vMFN mixture model has only $K(n+3)+(K-1)$ parameters, which scales linearly in the dimension $n$. Note that for the Gaussian mixture, the number of parameters of the distribution model scales quadratically in the dimension $n$. To apply the vMFN distribution as the proposal density in Algorithm \ref{MCMC alg}, the parameters of the distribution model have to be fitted in advance.
\\It is assumed that all samples $u\in\mathbb{R}^n$ are given in their polar coordinate representation $u=r\cdot a$, where $r=\Vert u\Vert_2\in\mathbb{R}_+$ is the norm of $u$ and $a={u}/{\Vert u \Vert_2}\in\mathbb{R}^n$ its direction. For $u= r\cdot a$ the vMFN distribution with one mixture is defined as the product of the von Mises-Fisher and the Nakagami distribution, that is,
\begin{align*}
f_{\mathrm{vMFN}}(r,a\mid\nu,\kappa,s,\gamma) = f_{\mathrm{N}}(r\mid s,\gamma)\cdot f_{\mathrm{vMF}}(a\mid\nu,\kappa).
\end{align*}
The vMF distribution $f_{\mathrm{vMF}}$ defines the distribution of the direction on the $n$-dimen\-sional hypersphere $\mathbb{S}^{n-1}:=\{x\in\mathbb{R}^n: \Vert x \Vert_2 = 1\}$ and is given by
\begin{align*}
f_{\mathrm{vMF}}(a\mid\nu,\kappa) = \frac{\kappa^{n/2-1}}{(2\pi)^{n/2}\mathcal{I}_{n/2-1}(\kappa)}\exp(\kappa\nu^Ta),
\end{align*}
where $\nu\in\mathbb{S}^{n-1}$ is a mean direction and $\kappa\ge 0$ characterises the concentration around $\nu$. $\mathcal{I}_{n/2-1}$ denotes the \emph{modified Bessel function} of the first kind and order $n/2-1$ \cite[Chapter 9]{Abramowitz64}. In contrast, the Nakagami distribution $f_\mathrm{N}$ specifies the distribution of the radius and is defined by
\begin{align*}
f_{\mathrm{N}}(r\mid s,\gamma) := \frac{2s^s}{\Gamma(s)\gamma^s}r^{2s-1} \exp\left(-\frac{s}{\gamma}r^2\right),
\end{align*}
where $\Gamma(s)$ is the \emph{Gamma function}, $s\ge0.5$ is a shape parameter and $\gamma>0$ a spread parameter. Figure \ref{Fig: vMFN} shows an illustration of $f_{\mathrm{N}}$ and $f_{\mathrm{vMF}}$ for certain parameter values.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{vMFN.pdf}
\caption[vMFN]{Illustration of the Nakagami distribution (left) and von Mises-Fisher distribution (right) in two dimensions. The parameters are defined as $\nu= (0.6, 0.75)^T$, $\kappa = 11$, $s=12$ and $\gamma=8$.}
\label{Fig: vMFN}
\end{figure}
\begin{remark}
We have defined the vMFN distribution for the radius and direction $(r,a)$ on $[0,\infty)\times \mathbb{S}^{n-1}$. However, we actually approximate a distribution on $\mathbb{R}^n$, which defines the distribution of $u=r\cdot a\in\mathbb{R}^n$. By \cite[Theorem 1.101]{Klenke2014}, the distribution of $u\in\mathbb{R}^n$, which is the product distribution of $r$ and $a$, is given as
\begin{align*}
f_U(u) &= \int_0^{\infty} f_{\mathrm{N}}(r\mid s,\gamma)f_{\mathrm{vMF}}\left(\frac{u}{r}\mid \nu,\kappa\right)\frac{1}{r^n}\mathrm{d}r
\\&\propto\int_0^{\infty}r^{2s-1-n}\exp\left(-\frac{s}{\gamma}r^2+\frac{\kappa\nu^T u}{r}\right)\mathrm{d}r.
\end{align*}
Since we can easily separate $u$ into $r$ and $a$, we usually work with $f_{\mathrm{vMFN}}$ rather than $f_U$.
\end{remark}
To define the vMFN distribution as a proposal density for Algorithm \ref{MCMC alg}, the parameters $\nu,\kappa,s$ and $\gamma$ have to be fitted using the current set of samples $\{u_k=r_k\cdot a_k\}_{k=1}^{N}$ and their weights $\{w_k\}_{k=1}^{N}$, which are given by (\ref{weights}) or (\ref{bridging weights}) for a tempering or bridging step, respectively. The parameters are determined by maximizing the weighted log-likelihood
\begin{align*}
\max_{\nu, \kappa, s, \gamma} \sum_{k=1}^N w_k \ln(f_{\mathrm{vMFN}}(r_k,a_k\mid \nu, \kappa, s, \gamma)).
\end{align*}
Differentiating this expression with respect to the parameters and setting the derivatives equal to zero yields the optimal parameters for the fitting \cite{Papaioannou19}. However, for the concentration $\kappa$ and shape parameter $s$ we use an approximation since the derivatives require the solutions of non-linear equations which arise from the Gamma function and modified Bessel function \cite{Bouhlel15,Wang16}. The fitted mean direction $\hat{\nu}$ and concentration $\hat{\kappa}$ are given by
\begin{align}
\hat{\nu} = \frac{\sum_{k=1}^N w_k\cdot a_k}{\Vert \sum_{k=1}^N w_k\cdot a_k\Vert_2},\hspace{0.5cm} \hat{\kappa} = \frac{\chi\cdot n - \chi^3}{1-\chi^2},\text{ where } \chi = \min\left\lbrace\frac{\Vert \sum_{k=1}^N w_k\cdot a_k\Vert_2}{\sum_{k=1}^N w_k}, 0.95\right\rbrace.\label{vMFN nu calc}
\end{align}
The upper bound of $0.95$ in (\ref{vMFN nu calc}) is chosen to ensure numerical stability of the algorithm, since for $\chi\to 1$ the vMFN distribution degenerates to a point density \cite{Papaioannou19}. Moreover, for the Nakagami distribution, the fitted spread $\hat{\gamma}$ and shape parameter $\hat{s}$ are given by
\begin{align*}
\hat{\gamma} = \frac{\sum_{k=1}^N w_k \cdot r_k^2}{\sum_{k=1}^N w_k}, \hspace{0.5cm} \hat{s} = \frac{\hat{\gamma}^2}{\nu_4-\hat{\gamma}^2}, \text{ where } \nu_4 = \frac{\sum_{k=1}^N w_k\cdot r_k^4}{\sum_{k=1}^N w_k}.
\end{align*}
To apply Algorithm \ref{MCMC alg} with respect to the vMFN distribution, the proposal $q(\cdot \mid u_0)$ is replaced by $f_{\mathrm{vMFN}}(\cdot,\cdot,\hat{\nu},\hat{\kappa},\hat{s},\hat{\gamma})$ with the fitted parameters.
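\\The moment-based fit translates directly into code. The following Python sketch (the function name is ours) computes $\hat{\nu}$, $\hat{\kappa}$, $\hat{s}$ and $\hat{\gamma}$ for $K=1$ from the weighted samples:
\begin{verbatim}
import numpy as np

def fit_vmfn(u, w):
    """Weighted fit of the vMFN parameters from samples u (shape N x n)."""
    r = np.linalg.norm(u, axis=1)                  # radii r_k
    a = u / r[:, None]                             # directions a_k on S^{n-1}
    n = u.shape[1]
    wa = w @ a                                     # sum_k w_k a_k
    nu = wa / np.linalg.norm(wa)                   # mean direction
    chi = min(np.linalg.norm(wa) / np.sum(w), 0.95)  # capped for stability
    kappa = (chi * n - chi**3) / (1.0 - chi**2)    # concentration
    gamma = (w @ r**2) / np.sum(w)                 # Nakagami spread
    nu4 = (w @ r**4) / np.sum(w)
    s = gamma**2 / (nu4 - gamma**2)                # Nakagami shape
    return nu, kappa, s, gamma                     # (s >= 0.5 could be enforced)
\end{verbatim}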
\begin{remark}
If a mixture of vMFN distributions is considered with $K>1$ individual vMFN densities, the vMFN mixture distribution reads as
\begin{align*}
f_{\mathrm{vMFNM}}(r,a\mid\boldsymbol{\nu},\boldsymbol{\kappa},\boldsymbol{s},\boldsymbol{\gamma}) = \sum_{j=1}^K \pi_j f_{\mathrm{vMFN}}(r,a\mid\nu_j,\kappa_j,s_j,\gamma_j),
\end{align*}
where the weights $\pi_{j}$ represent the probability of each mode and $\sum_{j=1}^K \pi_j=1$. In this case, the assignment of the samples to the modes is unknown and has to be estimated in addition. Therefore, the required parameters cannot be estimated in one iteration. For instance, the \emph{Expectation-Maximization} algorithm \cite{McLachlan05} can be applied to estimate the parameters iteratively. The resulting formulas are given in \cite{Papaioannou19}. The usage of mixtures is motivated by multi-modal failure domains. In the numerical experiments in Section \ref{chapter numerical experiments}, only $K=1$ is considered.
\end{remark}
\subsection{MCMC for a level dependent dimension}
In the case that MLSIS or MLSuS is applied with a level dependent parameter dimension, the procedure of a level update has to be adjusted. Consider the level update from level $\ell$ to $\ell + 1$ and assume that the corresponding LSFs are defined as $G_\ell : \mathbb{R}^{n_{\ell}}\rightarrow \mathbb{R}$ and $G_{\ell+1} : \mathbb{R}^{n_{\ell+1}} \rightarrow \mathbb{R}$, respectively, where $n_{\ell} < n_{\ell+1}$. Before the first MCMC step of the level update is carried out, the weights $w_{j,\ell}^1(u_k)$ for $k=1,\dots,N$ (see (\ref{bridging weights})) have to be evaluated. However, this requires the evaluation of $G_{\ell+1}$ at the current samples $\{u_{k}\}_{k=1}^{N}$, which are defined on $\mathbb{R}^{n_{\ell}}$. At the beginning of MLSIS or MLSuS, the samples $u_k$ are initialized from the standard normal density $\varphi_{n_{1}}$. Therefore, it is natural to sample the missing dimensions $\Delta n_{\ell+1} = n_{\ell+1}-n_{\ell}$ from the standard normal distribution $\varphi_{\Delta n_{\ell+1}}$. Hence, for each $k=1,\dots, N$ we sample $\Delta n_{\ell+1}$ independent standard normal random variables $\psi_k\in\mathbb{R}^{\Delta n_{\ell+1}}$ and stack $u_k$ and $\psi_k$ together, i.e., $\tilde{u}_k = [u_k, \psi_k]\in\mathbb{R}^{n_{\ell+1}}$. In order to evaluate the weights $w_{j,\ell}^1(u_k)$, the LSF $G_{\ell+1}$ is evaluated at $\tilde{u}_k$ and $G_{\ell}$ at $u_k$. The seeds for the MCMC step are chosen based on these weights. Subsequently, Algorithm \ref{MCMC alg} is performed. Within the MCMC algorithm, a proposal $\bar{u}\in\mathbb{R}^{n_{\ell+1}}$ is sampled from $q(\cdot\mid u_0)$, which is suitable for the evaluations of $G_{\ell+1}$; for the LSF $G_{\ell}$, the first $n_{\ell}$ entries of $\bar{u}$ are taken as input. A sketch of this dimension lift is given below.
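\\A minimal sketch of the dimension lift, with the $N$ current samples collected row-wise in an array, could read:
\begin{verbatim}
import numpy as np

def pad_samples(u, n_next, rng=np.random.default_rng()):
    """Stack each sample u_k with Delta n fresh standard normal coordinates."""
    N, n_curr = u.shape
    psi = rng.standard_normal((N, n_next - n_curr))   # psi_k ~ N(0, I)
    return np.hstack([u, psi])                        # tilde u_k = [u_k, psi_k]
\end{verbatim}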
\section{Multilevel Sequential Importance Sampling}\label{Section MLSIS}
SIS and SuS have the drawback that all PDE solves are performed with the same discretization accuracy. This can lead to huge computational costs if the discretization level is high or the number of required tempering steps is large. Simply decreasing the level $\ell$ can lead to a bias in the estimated probability of failure, since the accuracy of the LSF decreases if the discretization level decreases. Therefore, the work in \cite{Latz18} develops the $\mathrm{MLS}^2\mathrm{MC}$ method, where computations are performed on a sequence of increasing discretization levels while achieving an improvement in terms of computational costs. Originally, this method has been developed for \emph{Bayesian inverse problems} \cite{Dashti2017}. In this section, we show how we can reformulate the $\mathrm{MLS}^2\mathrm{MC}$ method as an MLSIS estimator for the probability of failure.
\subsection{Bridging}
Consider the sequence of discretization levels $\ell\in\{1,\dots,L\}$, where $\ell=1$ represents the coarsest and $\ell=L\in\mathbb{N}$ the finest discretization level, i.e., the smallest element mesh size. Throughout this paper, it is assumed that the computational costs of evaluating $G_{\ell}$ are given by
\begin{align}
\mathrm{Cost}_{\ell} = \mathcal{O}(2^{-d(L-\ell)}),\label{Cost levels}
\end{align}
where $d\in\mathbb{N}$ is the dimension of the computational domain. In order to use a hierarchy of discretization levels, \emph{bridging} is applied to transfer samples following a distribution on a coarse grid to samples following a distribution on the next finer grid. The level update is defined as proposed in \cite{Koutsourelakis09}. The density $p_{j,\ell}$ of the coarse grid is transformed to the density $p_{j,\ell+1}$ of the next finer grid by the sequence
\begin{align}
p_{j,\ell}^{t}(u) := \frac{1}{P_{j,\ell}^{t}}\Phi\left(-\frac{G_{\ell+1}(u)}{\sigma_j}\right)^{\beta_{t}}\Phi\left(-\frac{G_{\ell}(u)}{\sigma_j}\right)^{1-\beta_{t}}\varphi_n(u),\label{eq bridging}
\end{align}
for $t=0,\dots,N_{B_{\ell}}$, where $0 = \beta_{0} < \beta_{1}< \cdots < \beta_{N_{B_{\ell}}}=1$, i.e., $p_{j,\ell}^0 = p_{j,\ell}$ and $p_{j,\ell}^{N_{B_{\ell}}}=p_{j,\ell+1}$. The number $N_{B_{\ell}}\in\mathbb{N}$ of intermediate bridging densities is a priori unknown. As in equation (\ref{eq P_j}), the quantity $P_{j,\ell}^t$ in (\ref{eq bridging}) can be calculated using samples distributed according to $p_{j,\ell}^{t-1}$. Similarly, the ratio of consecutive normalizing constants $S_{j,\ell}^t = P_{j,\ell}^t/P_{j,\ell}^{t-1}$ is estimated by
\begin{align}
\hat{S}_{j,\ell}^t:=\hat{\mathrm{E}}_{p_{j,\ell}^{t-1}} [w_{j,\ell}^t(u)] = \frac{1}{N}\sum_{k=1}^{N} w_{j,\ell}^t(u_k),\label{S bridging}
\end{align}
where the samples $\{u_k\}_{k=1}^N$ are distributed according to $p_{j,\ell}^{t-1}$ and the weights are given by
\begin{align}
w_{j,\ell}^t(u_k) := \frac{\Phi\left(-{G_{\ell+1}(u_k)}/{\sigma_j}\right)^{\beta_{t}}\Phi\left(-{G_{\ell}(u_k)}/{\sigma_j}\right)^{1-\beta_{t}}}{\Phi\left(-{G_{\ell+1}(u_k)}/{\sigma_j}\right)^{\beta_{t-1}}\Phi\left(-{G_{\ell}(u_k)}/{\sigma_j}\right)^{1-\beta_{t-1}}},\label{bridging weights}
\end{align}
for $k=1,\dots,N$. The bridging temperatures $\beta_t$ are adaptively determined by solving the minimization problem
\begin{align}
\beta_t = \underset{{\beta \in (\beta_{t-1}, 1]}}{\mathrm{argmin}} \big\Vert \delta_{w_{j,\ell}^t} - \delta_{\mathrm{target}} \big\Vert_2^2,\label{min bridging}
\end{align}
where $\delta_{w_{j,\ell}^t}$ is the coefficient of variation of the weights. As in \cite{Latz18}, we set the target coefficient of variation within the bridging steps to the same value as in the tempering steps. Within one level update, the bridging sequence is finished if $\beta_t = 1$ holds. Note that each level update requires a sequence of bridging densities and tempering is not performed during level updates. As in the tempering steps, MCMC sampling is applied to transfer samples between two consecutive bridging densities. By combining all estimators $\hat{S}$ of the tempering and bridging sequences given in (\ref{S tempering}) and (\ref{S bridging}), respectively, the MLSIS estimator for the probability of failure is given as
\begin{align}
\hat{P}_{f}^{\mathrm{MLSIS}} = \left(\prod_{j=1}^{N_T} \prod_{\ell=1}^{L} \prod_{t=1}^{N_{B_{\ell}}}\hat{S}_{j,\ell}^t \right)\frac{1}{N}\sum_{k=1}^N w_{\mathrm{opt},L}(u_k),\label{MLSIS estimator}
\end{align}
where the weights $w_{\mathrm{opt},L}$ are defined in (\ref{weights optimal density}) with $j=N_T$ and represent the last tempering step from the IS density $p_{N_T,L}$ to the optimal IS density $p_{\mathrm{opt},L}$ given in (\ref{optimal IS density}). Algorithm \ref{Briging alg} summarizes the procedure of one level update.
\begin{algorithm}
\caption{Bridging algorithm ($N$ samples from $p_{j,\ell}$, $\sigma_{j}$, $\delta_{\mathrm{target}}$, $G_{\ell}$, $G_{\ell+1}$)}\label{Briging alg}
\begin{algorithmic}[1]
\STATE $t\leftarrow 0$
\STATE $\beta_t \leftarrow 0$
\WHILE{$\beta_t <1$}
\STATE $t\leftarrow t+1$
\STATE determine $\beta_{t}$ from the optimization problem (\ref{min bridging})
\STATE evaluate the weights $w_{j,\ell}^t$ as in (\ref{bridging weights}) for the current set of samples
\STATE evaluate the estimator $\hat{S}_{j,\ell}^t$ as in (\ref{S bridging})
\STATE re-sample the samples of $p_{j,\ell}^{t-1}$ based on their weights $w_{j,\ell}^t$
\STATE move the samples with MCMC to generate $N$ samples from the density $p_{j,\ell}^t$
\ENDWHILE
\RETURN $N$ samples from $p_{j,\ell+1}$, $\hat{S}_{j,\ell}^t$
\end{algorithmic}
\end{algorithm}
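\\Analogously to the temperature selection, the exponent $\beta_t$ in (\ref{min bridging}) can be determined with a bounded scalar solver, using that the bridging weights (\ref{bridging weights}) reduce to $\left(\Phi\left(-G_{\ell+1}/\sigma_j\right)/\Phi\left(-G_{\ell}/\sigma_j\right)\right)^{\beta_t-\beta_{t-1}}$ and only cached LSF values enter. A Python sketch (function names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def next_beta(g_fine, g_coarse, sigma_j, beta_prev, delta_target):
    """Solve the bridging problem for beta_t on (beta_prev, 1]."""
    log_ratio = norm.logcdf(-g_fine / sigma_j) - norm.logcdf(-g_coarse / sigma_j)

    def cov(beta):
        log_w = (beta - beta_prev) * log_ratio
        w = np.exp(log_w - np.max(log_w))
        return np.std(w) / np.mean(w)

    if cov(1.0) <= delta_target:          # one step reaches the next level
        return 1.0
    obj = lambda b: (cov(b) - delta_target) ** 2
    return minimize_scalar(obj, bounds=(beta_prev + 1e-8, 1.0),
                           method="bounded").x
\end{verbatim}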
\subsection{Update scheme}\label{Update scheme}
The crucial part of the MLSIS method is to combine the adaptive tempering and bridging sequences and to provide a heuristic for deciding when to perform bridging or tempering. Initially, the samples $\{u_k\}_{k=1}^N$ are distributed according to the $n$-variate standard normal distribution $\varphi_n$, i.e., $\sigma_0 = \infty$. The LSF is evaluated on the coarsest discretization level $\ell=1$. Tempering is always performed in the first step in order to determine $\sigma_1$ and thereby the first approximation of the indicator function. The tempering is finished if the coefficient of variation $\delta_{w_{\mathrm{opt},\ell}}$ of the weights with respect to the optimal IS density (\ref{weights optimal density}) is smaller than $\delta_{\mathrm{target}}$. The bridging is finished if the highest discretization level $\ell = L$ is reached. The combination of tempering and bridging determines the costs and accuracy of the method. The authors in \cite{Latz18} analyse the efficiency of different decision schemes, which leads to the following approach. The scheme should perform as many tempering steps as possible on small discretization levels, while level updates are performed if the discrepancy between evaluations of two consecutive levels is too large. To measure this discrepancy, a small subset of samples $\{u_{j_k}\}_{k=1}^{N_s}$ with $N_s<N$ is randomly selected without replacement. A level update is performed for this subset through one bridging step and the resulting coefficient of variation $\delta_{w^{N_s}}$ of the weights
\begin{align*}
w_{j,\ell}^{N_s}(u_{j_k}) = \frac{\Phi\left(-{G_{\ell+1}(u_{j_k})}/{\sigma_j}\right)}{\Phi\left(-{G_{\ell}(u_{j_k})}/{\sigma_j}\right)}, \text{ for } k=1,\dots,N_s,
\end{align*}
is estimated. Depending on the estimated value $\delta_{w^{N_s}}$, two cases occur:
\begin{itemize}
\item[1)] either $\delta_{w^{N_s}}> \delta_{\mathrm{target}}$: bridging is performed since the accuracy is low, i.e., the difference between the levels is large;
\item[2)] or $\delta_{w^{N_s}} \leq \delta_{\mathrm{target}}$: tempering is performed since the accuracy is high, i.e., the difference between the levels is small.
\end{itemize}
If case 1) occurs, the evaluations with respect to $G_{\ell+1}$ can be stored and reused in the bridging step, so the invested costs are not wasted. In case 2), by contrast, these evaluations are no longer required and the invested costs are wasted. Calculating $\delta_{w^{N_s}}$ for the sample subset is redundant if tempering has already finished; then, bridging is always performed to reach the final discretization level. Moreover, as proposed in \cite{Latz18}, tempering is performed after each level update if the tempering has not already finished. In this case, calculating $\delta_{w^{N_s}}$ is redundant, too. Note that $\delta_{w_{\mathrm{opt},\ell}}$ has to be calculated after each tempering and level update to decide if tempering is finished. Finally, the MLSIS method terminates if both tempering and bridging are finished. The procedure is described in Algorithm \ref{MLSIS alg}; a sketch of the decision rule follows below.
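\\The decision rule itself can be sketched as follows, where \texttt{g\_fine} and \texttt{g\_coarse} denote the cached LSF values $G_{\ell+1}(u_{j_k})$ and $G_{\ell}(u_{j_k})$ on the random subset:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def decide_next_step(g_fine, g_coarse, sigma_j, delta_target):
    """Estimate the CoV of the one-step level-update weights on the subset."""
    log_w = norm.logcdf(-g_fine / sigma_j) - norm.logcdf(-g_coarse / sigma_j)
    w = np.exp(log_w - np.max(log_w))
    delta = np.std(w) / np.mean(w)
    return "bridging" if delta > delta_target else "tempering"
\end{verbatim}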
\begin{remark}
We note that, according to \cite{Latz18}, the finest discretization level $L$ can be chosen adaptively based on the coefficient of variation $\delta_{w^{N_s}}$ between two consecutive discretization levels. Bridging is finished if $\delta_{w^{N_s}}$ is smaller than a given bound which is much smaller than $\delta_{\mathrm{target}}$.
\end{remark}
\begin{algorithm}
\caption{MLSIS algorithm ($N$, $n$, $L$, $\delta_{\mathrm{target}}$, $N_s$, $G_{\ell}$)}\label{MLSIS alg}
\begin{algorithmic}[1]
\STATE Generate $N$ samples from the $n$-variate standard normal distribution $\varphi_n$
\STATE $\ell \leftarrow 1$
\STATE Perform Tempering
\WHILE{Tempering is not finished \textbf{or} Bridging is not finished}
\IF{Tempering is finished}
\STATE Perform Bridging
\STATE $\ell \leftarrow \ell + 1$
\ELSIF{Bridging is finished \textbf{or} last step was a Bridging step}
\STATE Perform Tempering
\ELSE
\STATE Perform Bridging in one step with a random subset of $N_s$ samples
\STATE Calculate $\delta_{w^{N_s}}$
\IF{$\delta_{w^{N_s}}<\delta_{\mathrm{target}}$}
\STATE Perform Tempering
\ELSE
\STATE Perform Bridging
\STATE $\ell \leftarrow \ell + 1$
\ENDIF
\ENDIF
\STATE Calculate $\delta_{w_{\mathrm{opt},\ell}}$
\IF{$\delta_{w_{\mathrm{opt},\ell}}\leq\delta_{\mathrm{target}}$}
\STATE Tempering is finished
\ENDIF
\IF{$\ell=L$}
\STATE Bridging is finished
\ENDIF
\ENDWHILE
\RETURN Probability of failure estimate
\end{algorithmic}
\end{algorithm}
\subsection{Level dependent dimension}\label{level dependent dim}
As mentioned in Section \ref{SuS and MLSuS}, the nestedness problem of MLSuS motivates \cite{Ullmann15} to study a level dependent parameter dimension. This approach can also be applied in MLSIS to reduce the variance between level updates and, hence, increase the number of tempering updates on coarse grids. For this purpose, it is assumed that the LSF $G$ depends on a random field that is approximated by a truncated \emph{Karhunen-Lo\`{e}ve} (KL) expansion. This setting occurs in many relevant applications as well as in the numerical experiments presented in Section \ref{chapter numerical experiments}. Since high order KL terms are highly oscillatory, they cannot be accurately discretized on coarse grids, which leads to noisy evaluations and higher variances. By reducing the number of KL terms on coarse grids, the variance between consecutive LSF evaluations is reduced. Therefore, the coefficient of variation $\delta_{w^{N_s}}$ is smaller and case 2) in Section \ref{Update scheme} is more likely. Hence, more tempering steps are performed on small discretization levels, which decreases the computational costs of MLSIS.
\section{Numerical experiments}\label{chapter numerical experiments}
In the following examples, all probability of failure estimates are obtained with respect to the same, finest discretization level, i.e., the multilevel methods iterate until this level is reached and the single-level estimators are based on this level. Therefore, the reported errors contain only sampling errors, while discretization errors are not included.
\subsection{1D diffusion equation}
We begin with Example 2 in \cite{Ullmann15}, which considers the diffusion equation in the one-dimensional domain $D=[0,1]$. In particular, the corresponding boundary value problem with random coefficient is given by
\begin{align}
-\frac{\partial}{\partial x}\left(a(x, \omega)\frac{\partial}{\partial x} v(x,\omega)\right) &= 1 \text{ for } 0\le x \le 1,\label{1D diffusion}
\\ \text{such that} \hspace{0.2cm} v(0,\omega) &= 0 \text{ and } v'(1,\omega) = 0,\notag
\end{align}
for almost every (a.e.) $\omega\in\Omega$. Failure is defined as the event that the solution $v$ is larger than $0.535$ at $x=1$, i.e., $G(\omega):= 0.535 - v(1,\omega)\le 0$. The solution $v$ is approximated by a piecewise linear, continuous FEM approximation $v_h$ on a uniform grid with mesh size $h>0$. Hence, the approximated LSF is given by $G_{\ell}(\omega) = 0.535-v_{h_\ell}(1,\omega)$, where $\ell\in\mathbb{N}$ defines the discretization level. By crude Monte Carlo sampling (\ref{MC esimator}) with $N=10^7$ samples on a grid with mesh size $h={1}/{512}$, the probability of failure is estimated to be $P_f = 1.524\cdot 10^{-4}$. In the following, this value is referred to as the reference solution. Figure \ref{histogram} shows the mean of $10^5$ realizations of solutions $v_h(\cdot,\omega)$ plus/minus the standard deviation for $h={1}/{512}$. Additionally, the respective histogram of their LSF values is presented. We see that very few realizations are larger than $0.535$ at $x=1$.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{histogram.pdf}
\caption[1D histogram]{Mean over $10^5$ realizations of solutions $v_h(\cdot,\omega)$ plus/minus the standard deviation (left). Histogram of the respective LSF values $G_h(\omega)$ for $h={1}/{512}$ (right). Note that the probabilities are shown on a log-scale.}
\label{histogram}
\end{figure}
\\The coefficient function $a(x,\omega)=\exp(Z(x,\omega))$ in (\ref{1D diffusion}) is a log-normal random field with constant mean function $\mathrm{E}[a(x,\cdot)] = 1$ and standard deviation $\mathrm{Std}[a(x,\cdot)] = 0.1$. That is, $Z$ is a Gaussian random field with constant mean function $\mu_Z = \log(\mathrm{E}[a(x,\cdot)]) - {\zeta_Z^2}/{2}$ and variance $\zeta_Z^2 = \log\left(({\mathrm{Std}[a(x,\cdot)]^2 + \mathrm{E}[a(x,\cdot)]^2})/{\mathrm{E}[a(x,\cdot)]^2}\right).$ Moreover, $Z$ has an exponential type covariance function which is given by $c(x,y) = \zeta_Z^2\exp\left(-{\vert x -y\vert}/{\lambda}\right)$, where $\lambda = 0.01$ denotes the correlation length. The infinite-dimensional log-normal random field $a$ is discretized by the truncated KL expansion of $Z$
\begin{align*}
Z(x,\omega) = \mu_Z + \zeta_Z \sum_{m=1}^M \sqrt{\nu_m}\theta_m(x) U_m(\omega),
\end{align*}
where $(\nu_m, \theta_m)$ are the KL eigenpairs and $\{U_m\}_{m=1}^M$ are independent standard normal Gaussian random variables. The eigenpairs can be analytically calculated as explained in \cite[p. 26ff]{Ghanem91}.
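\\For illustration, the following Python sketch draws a realization of $Z$ from the truncated KL expansion. Instead of the analytic eigenpairs used in our experiments, it approximates them numerically via a Nystr\"om-type discretization of the covariance operator on the grid; this numerical approximation is an assumption of the sketch and not the procedure of \cite{Ghanem91}.
\begin{verbatim}
import numpy as np

def kl_realization(x, M, lam, mu_Z, zeta_Z, rng=np.random.default_rng()):
    """Draw Z(x) = mu_Z + zeta_Z * sum_m sqrt(nu_m) theta_m(x) U_m with
    numerically approximated eigenpairs of the correlation kernel."""
    h = x[1] - x[0]                                     # uniform grid spacing
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / lam)  # correlation matrix
    vals, vecs = np.linalg.eigh(h * C)                  # discrete eigenproblem
    idx = np.argsort(vals)[::-1][:M]                    # M largest eigenvalues
    nu, theta = vals[idx], vecs[:, idx] / np.sqrt(h)    # L2-normalized modes
    U = rng.standard_normal(M)                          # i.i.d. N(0,1)
    return mu_Z + zeta_Z * (theta @ (np.sqrt(nu) * U))

# example: one realization on the finest grid of the 1D problem
# x = np.linspace(0.0, 1.0, 513); Z = kl_realization(x, 150, 0.01, mu_Z, zeta_Z)
\end{verbatim}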
\\The probability of failure is estimated by SIS, MLSIS, SuS and MLSuS. For all methods, the estimation is performed for $N=250, 500, 1000, 2000$ samples, and $N_s = 0.1\cdot N$ samples are considered for the small sample subset that decides whether bridging or tempering is performed in the update scheme of Section \ref{Update scheme}. For each parameter setting, the estimation is repeated $100$ times. For the multilevel methods, the sequence of mesh sizes is $h_\ell = 2^{-\ell-1}$ for $\ell=1,\dots,8$, i.e., the coarsest mesh size is $h_1={1}/{4}$ and the finest $h_8={1}/{512}$. If a level dependent dimension is considered, the parameter dimensions of the KL expansions are $n_1=10$, $n_2=20$, $n_3=40$, $n_4=80$ and $n_5=n_6=n_7=n_8=150$, as proposed in \cite{Ullmann15}. For a fixed parameter dimension, the dimension is $n=150$ for all discretization levels. This captures $87\%$ of the variability of $\log(a)$ \cite{Ullmann15}. SIS and MLSIS are performed for the target coefficients of variation $\delta_{\mathrm{target}}=0.25$ and $\delta_{\mathrm{target}}=0.50$, which enter in (\ref{min pbl}) and (\ref{min bridging}). aCS and the independent sampler with the vMFN distribution and one mixture are considered as the MCMC methods without a burn-in. The parameter $c$ that defines the number of seeds of the MCMC algorithm in (\ref{seed paramter}) is $c=0.1$ or $c=1$. For SuS and MLSuS, aCS is considered as the MCMC method without a burn-in and the parameter $\hat{p}_0$ in (\ref{SuS parameter}) is $\hat{p}_0 =0.1$ or $\hat{p}_0 =0.25$.
\subsubsection{Results}
Figure \ref{1D-Diffusion: probability of failure + standard deviation} shows the estimated mean probability of failure by SIS and MLSIS plus/minus its standard deviation. The estimates of the means are in accordance with the reference solution for all settings. As expected, the bias and standard deviation decrease with an increasing number of samples. Furthermore, the standard deviation is smaller for a smaller target coefficient of variation. We observe that sampling from the vMFN distribution with independent MCMC yields a smaller bias and smaller standard deviation than applying aCS. Additionally, we observe that $c=0.1$ also yields a smaller standard deviation than $c=1$. Comparing the MLSIS results with the SIS results for $\delta_{\mathrm{target}}=0.50$, we see that SIS reaches a smaller standard deviation. For $\delta_{\mathrm{target}}=0.25$ the results are similar. For MLSIS and $\delta_{\mathrm{target}}=0.50$, a level dependent parameter dimension leads to a higher standard deviation than a fixed parameter dimension. However, for $\delta_{\mathrm{target}}=0.25$, the results are similar. We summarize that $\delta_{\mathrm{target}}=0.25$ yields a similar bias and standard deviation for all settings; only the choice of the MCMC algorithm has a larger influence on the standard deviation in this setting.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{1D_Pf_Std.pdf}
\caption[a]{Estimated probability of failure by SIS and MLSIS averaged over $100$ runs for $250, 500, 1000$ and $2000$ samples and $\delta_{\mathrm{target}}\in\{0.25, 0.50\}$. The coloured areas show the standard deviation of the estimates. The black lines show the reference estimate by Monte Carlo sampling. 1st row: aCS, $c=1.0$; 2nd row: aCS, $c=0.1$; 3rd row: vMFN, $c=1.0$; 4th row: vMFN, $c=0.1$; 1st column: SIS; 2nd column: MLSIS with level dependent dimension; 3rd column: MLSIS without level dependent dimension.}
\label{1D-Diffusion: probability of failure + standard deviation}
\end{figure}
\\Figure \ref{1D-Diffusion: RMSE and Costs} shows the \emph{relative root mean square error} (RMSE) on the horizontal axis and the computational costs on the vertical axis of the SIS and MLSIS estimators. The relative RMSE is defined as
\begin{align*}
\mathrm{relRMSE} := \frac{\left(\mathbb{E}\Bigl\lbrack\left(\hat{P}_f-P_f\right)^2\Bigr\rbrack\right)^{\frac{1}{2}}}{P_f},
\end{align*}
where $\hat{P}_f$ denotes the estimated probability of failure. The costs are calculated based on the formula given in (\ref{Cost levels}) for $L=8$ and $d=1$. SIS and MLSIS yield a similar range of the relative RMSE, but the computational costs are lower for MLSIS. Comparing the computational costs shown in Figure \ref{1D-Diffusion: RMSE and Costs}, we can save around $61\%$ of the computational costs if we apply MLSIS for the estimation. This demonstrates the main goal of the MLSIS algorithm: saving computational costs by employing a hierarchy of discretization levels. Furthermore, we observe that sampling from the vMFN distribution yields a lower relative RMSE than applying aCS. In case of $\delta_{\mathrm{target}}=0.25$, a level dependent dimension yields a smaller relative RMSE and lower computational costs than a fixed parameter dimension. This was expected since variances between level updates are smaller and, therefore, more tempering steps are performed on coarse levels. However, MLSIS with a level dependent dimension, $\delta_{\mathrm{target}}=0.50$ and sampling from the vMFN distribution yields a higher relative RMSE than applying a fixed parameter dimension at the same computational cost.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{1D_RMSE_Cost_SIS_MLSIS.pdf}
\caption[1D-Diffusion: RMSE and Costs]{Computational costs and relative RMSE of SIS and MLSIS averaged over $100$ runs for $250, 500, 1000$ and $2000$ samples and $\delta_{\mathrm{target}}\in\{0.25, 0.50\}$. 1st column: aCS, $c=0.1$; 2nd column: vMFN, $c=0.1$.}
\label{1D-Diffusion: RMSE and Costs}
\end{figure}
\\Figure \ref{1D-Diffusion: Comparison RMSE and Costs} shows the relative RMSE and computational costs of SIS, MLSIS, SuS and MLSuS. We observe that SuS yields the same relative RMSE as SIS with aCS; however, SuS requires lower computational costs. If we consider SIS with vMFN, the relative RMSE is smaller compared to SuS, but the computational costs are higher for SIS. For the multilevel methods with a level dependent dimension, we observe that MLSuS and MLSIS with aCS yield a similar relative RMSE, but MLSIS requires higher computational costs. However, the savings with MLSuS are smaller compared to the single-level estimators. MLSIS with vMFN and $\delta_{\mathrm{target}}=0.25$ yields a much smaller relative RMSE than all other estimators, and computational costs can be saved compared to MLSuS. These results are similar for the multilevel methods without a level dependent dimension. In this case, we can observe that MLSuS with $\hat{p}_0=0.1$ yields a large relative RMSE, which is due to the nestedness issue of MLSuS.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{1D_RMSE_Cost_SIS_MLSIS_SuS_MLSuS.pdf}
\caption[1D-Diffusion: Comparison RMSE and Costs]{Computational costs and relative RMSE of SIS, MLSIS, SuS and MLSuS averaged over $100$ runs for $250, 500, 1000$ and $2000$ samples. SIS and MLSIS are considered with aCS and vMFN, $c=0.1$ and $\delta_{\mathrm{target}}\in\{0.25, 0.50\}$. SuS and MLSuS are considered with aCS and $\hat{p}_0 =0.1$ or $\hat{p}_0 =0.25$. 1st column: single level; 2nd column: multi-level with level dependent dimension; 3rd column: multi-level without level dependent dimension.}
\label{1D-Diffusion: Comparison RMSE and Costs}
\end{figure}
\subsection{2D flow cell}
We consider the two-dimensional application in \cite[Section 6.1]{Ullmann15}, which is a simplified setting of the rare event arising in planning a radioactive waste repository, see Section \ref{Sec: Introduction}. The probability of failure is based on the travel time of a particle within a two-dimensional flow cell. Therein, the following PDE system has to be satisfied in the unit square domain $D = (0,1) \times (0,1)$
\begin{align*}
q(x,\omega) &= -a(x,\omega)\nabla v(x,\omega), \text{ for } x\in D,
\\ \nabla \cdot q(x,\omega) &= 0, \text{ for } x\in D ,
\end{align*}
for a.e. $\omega\in\Omega$, where $q$ is the Darcy velocity, $v$ is the hydrostatic pressure and $a$ is the permeability of the porous medium, which is modelled as a log-normal random field. More precisely, $Z=\log(a)$ is a Gaussian random field with mean $\mu_Z=0$ and constant variance $\zeta_Z^2 = 1$. Moreover, $Z$ has an exponential type covariance function
\begin{align*}
c(x,y) = \zeta_Z^2\exp\left(-\frac{\Vert x -y\Vert_1}{\lambda}\right),
\end{align*}
where $\lambda = 0.5$ denotes the correlation length. Again, the random field $Z$ is discretized by its KL expansion. The PDE system is coupled with the following boundary conditions
\begin{align}
\nu \cdot q(x,\omega) &= 0 \text{ for } x\in (0,1) \times \{0,1\},\label{2d bc 1}
\\ v(x,\omega) &= 1 \text{ for } x\in \{0\} \times (0,1),\label{2d bc 2}
\\ v(x,\omega) &= 0 \text{ for } x\in \{1\} \times (0,1),\label{2d bc 3}
\end{align}
for a.e. $\omega\in\Omega$, where $\nu$ denotes the outward unit normal on the boundary. Equation (\ref{2d bc 1}) imposes that there is no flow across the horizontal boundaries, while (\ref{2d bc 2}) and (\ref{2d bc 3}) impose inflow at the western boundary and outflow at the eastern boundary, respectively. The Darcy velocity $q$ is discretized by lowest order Raviart-Thomas mixed finite elements, see \cite{Raviart77}. The pressure $v$ is discretized by piecewise constant elements. The grid is determined by the mesh size $h$ and consists of $2\cdot {1}/{h^2}$ uniform triangles.
\\The failure event is based on the time that a particle requires to travel from the initial point $x_0 = (0, 0.5)^T$ to any other point on the boundary $\partial D$. Given the Darcy velocity $q_{h_{\ell}}(x,\omega)$, the particle path $x(t,\omega)$ has to satisfy the following ordinary differential equation
\begin{align*}
\frac{\partial}{\partial t}x(t,\omega) = q_{h_{\ell}}(x(t,\omega),\omega), \text{ } x(0,\omega) = x_0.
\end{align*}
We approximate the particle path with the forward Euler discretization
\begin{align*}
x_{h_{\ell}}(t+\Delta t,\omega) = x_{h_{\ell}}(t,\omega) + \Delta t\, q_{h_{\ell}}(x_{h_{\ell}}(t,\omega),\omega),\text{ where } \Delta t = \frac{h_{\ell}}{2\Vert q_{h_{\ell}}(x_{h_{\ell}}(t,\omega),\omega)\Vert_2}.
\end{align*}
The travel time $\tau_{h_{\ell}}(\omega)\in[0,\infty)$ is defined as
\begin{align*}
\tau_{h_{\ell}}(\omega) = \min\left\lbrace t>0:\ x_{h_{\ell}}(t,\omega) \in \partial D\right\rbrace.
\end{align*}
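A Python sketch of the particle tracking reads as follows; the velocity evaluator \texttt{q} (an interpolation of the FEM Darcy velocity $q_{h_\ell}$) and the cap \texttt{t\_max} on the simulated time horizon are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def travel_time(q, h, x0=np.array([0.0, 0.5]), t_max=1.0):
    """Forward Euler particle tracking until the path leaves D = [0,1]^2."""
    x, t = np.copy(x0), 0.0
    while t < t_max:
        v = q(x)                               # Darcy velocity q_h(x)
        dt = h / (2.0 * np.linalg.norm(v))     # step size as defined above
        x, t = x + dt * v, t + dt
        if not (0.0 <= x[0] <= 1.0 and 0.0 <= x[1] <= 1.0):
            return t                           # exit time tau_h
    return t_max                               # capped: no exit observed

# LSF of the failure event: G = travel_time(q, h) - tau_0 with tau_0 = 0.03
\end{verbatim}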
The approximation of the particle path differs from the procedure in \cite{Ullmann15} and, therefore, the estimated probability of failure differs slightly. Failure is defined as the event that $\tau_{h_{\ell}}$ is smaller than the threshold $\tau_0=0.03$. Hence, the respective LSF is defined as $G_{\ell}(\omega) := \tau_{h_{\ell}}(\omega)-\tau_0$. The reference solution of the probability of failure is $4.6730 \cdot 10^{-7}$; it is the estimated mean probability of failure over $100$ realizations of SuS with $N=10^4$ samples, mesh size $h={1}/{128}$, $\hat{p}_0=0.1$ and aCS as the MCMC method without burn-in. We note that SuS is a biased estimator; the relative bias scales as $\mathcal{O}(1/N)$ while the coefficient of variation scales as $\mathcal{O}(1/\sqrt{N})$ \cite{Au01}. The coefficient of variation of the $100$ probability of failure estimates is roughly $15\%$. Hence, we expect that the relative bias of the reference estimate is of order $10^{-2}$.
\\Figure \ref{flow_cell} shows a realization of a non-failure event and of a failure event. The figure displays the permeability $a$ and the respective solutions of the Darcy velocity $q_h$ for $h={1}/{128}$ and shows the particle paths which start at $x_0$ and their respective travel times.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.3]{flow_cell.pdf}
\caption[flow cell realization]{Realization of the permeability $a(\cdot,\omega)$ and the respective solution of the Darcy velocity $q_h(\cdot,\omega)$ and the particle path $x(t,\omega)$ for $h={1}/{128}$.}
\label{flow_cell}
\end{figure}
\\The probability of failure is estimated by SIS, MLSIS, SuS and MLSuS. For all methods, the estimation is performed for $N=250, 500, 1000$ samples and $N_s = 0.1\cdot N$ samples are considered for the small sample subset to decide if either bridging or tempering is performed in the update scheme of Section \ref{Update scheme}. For each parameter setting, the estimation is repeated $100$ times. For the multilevel methods, the sequence of mesh sizes is $h_\ell = 2^{-\ell-1}$ for $\ell=1,..,6$, i.e., the coarsest mesh size is $h_1={1}/{4}$ and the finest $h_6={1}/{128}$. The multilevel methods are applied with a level dependent dimension, where the parameter dimensions of the KL expansions are $n_1=10$, $n_2=20$, $n_3=40$, $n_4=80$ and $n_5=n_6=150$. SIS and MLSIS are performed for target coefficients of variation equal to $0.50$ and $1.00$. aCS and the vMFN distributions are considered as the MCMC methods without a burn-in. The parameter $c$ to define the number of seeds of the MCMC algorithm in (\ref{seed paramter}) is $c=0.1$. For SuS and MLSuS, aCS is considered as the MCMC method without a burn-in and the parameter $\hat{p}_0$ in (\ref{SuS parameter}) is $\hat{p}_0 =0.1$ or $\hat{p}_0 =0.25$.
\subsubsection{Results}
Figure \ref{2D-Diffusion: probability of failure + standard deviation} shows the estimated mean probability of failure calculated by SIS and MLSIS plus/minus its standard deviation. The estimates of the means are in accordance with the reference solution and the bias and standard deviation decrease with an increasing number of samples. The standard deviation is smaller for a smaller target coefficient of variation. As for the 1D problem, we observe that applying the independent sampler with the vMFN distribution yields a smaller standard deviation than applying aCS.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{2D_Pf_Std.pdf}
\caption[a]{Estimated probability of failure by SIS and MLSIS averaged over $100$ runs for $250, 500$ and $1000$ samples and $\delta_{\mathrm{target}}\in\{0.50, 1.00\}$. The coloured areas show the standard deviation of the estimates. The black lines show the reference estimate by Monte Carlo sampling. 1st row: aCS, $c=0.1$; 2nd row: vMFN, $c=0.1$; 1st column: SIS; 2nd column: MLSIS with level dependent dimension.}
\label{2D-Diffusion: probability of failure + standard deviation}
\end{figure}
\\Figure \ref{2D-Diffusion: RMSE and Costs} shows the relative RMSE on the horizontal axis and the computational costs on the vertical axis for the SIS and MLSIS estimators. The costs are calculated based on the formula given in (\ref{Cost levels}) for $L=6$ and $d=2$. Again, SIS and MLSIS yield the same range of the relative RMSE but the computational costs are lower for MLSIS. Considering the computational costs shown in Figure \ref{2D-Diffusion: RMSE and Costs}, we can save around $61\%$ of the computational costs if we apply MLSIS for the estimation. This is the same level of savings as in the 1D problem. However, in the 2D problem, fewer level updates have to be performed than in the 1D problem setting. We expect that even more computational costs can be saved with MLSIS if we increase the highest discretization level.
\begin{figure}[htbp]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{2D_RMSE_Cost_SIS_MLSIS.pdf}
\caption[1D-Diffusion: RMSE and Costs]{Computational costs and relative RMSE of SIS and MLSIS averaged over $100$ runs for $250, 500$ and $1000$ samples and $\delta_{\mathrm{target}}\in\{0.50, 1.00\}$. 1st column: aCS, $c=0.1$; 2nd column: vMFN, $c=0.1$.}
\label{2D-Diffusion: RMSE and Costs}
\end{figure}
\\Figure \ref{2D-Diffusion: Comparison RMSE and Costs} shows the relative RMSE and computational costs of SIS, MLSIS, SuS and MLSuS. We observe that SuS yields the same relative RMSE as SIS with aCS. However, SuS requires lower computational costs. If we consider SIS with vMFN, the relative RMSE is smaller compared to SuS but the computational costs are higher for SIS. For the multilevel methods, we observe that MLSIS with $\delta_{\mathrm{target}}=1.00$ yields a smaller relative RMSE and requires lower computational costs than MLSuS. This observation holds for both MCMC algorithms. In the 1D problem, we only observe that MLSIS with sampling from the vMFN distribution yields a more efficient estimator than MLSuS. For SuS and MLSuS, $\hat{p}_0 =0.25$ yields higher computational costs and a slightly smaller relative RMSE than $\hat{p}_0 =0.1$, since more intermediate failure domains are considered for $\hat{p}_0 = 0.25$.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm,scale=0.23]{2D_RMSE_Cost_SIS_MLSIS_SuS_MLSuS.pdf}
\caption[1D-Diffusion: Comparison RMSE and Costs]{Computational costs and relative RMSE of SIS, MLSIS, SuS and MLSuS averaged over $100$ runs for $250, 500$ and $1000$ samples. SIS and MLSIS are considered with aCS and vMFN, $c=0.1$ and $\delta_{\mathrm{target}}\in\{0.50, 1.00\}$. SuS and MLSuS are considered with aCS and $\hat{p}_0 =0.1$ or $\hat{p}_0 =0.25$. 1st column: single level; 2nd column: multi-level with level dependent dimension.}
\label{2D-Diffusion: Comparison RMSE and Costs}
\end{figure} |
\section{Introduction}
\setcounter{equation}{0}
In this paper, we study the following equation
\begin{equation}\label{1.1}
-\triangle_{\mathbb{H}^n}u =2n^2u^{q} \quad \text{in} \quad \Omega,
\end{equation}
where $\Omega$ is a domain in the Heisenberg group $\mathbb{H}^n$, and $u$ is a smooth, nonnegative real function defined in $\Omega$,
while $\triangle_{\mathbb{H}^n}u=u_{\alpha\overline{\alpha}}+u_{\overline{\alpha}\alpha}$ is the Heisenberg Laplacian of $u$.
Let $Q=2n+2$ be the homogeneous dimension of $\mathbb{H}^n$. Denote $q_*=\frac{Q}{Q-2}$ and $q^*=\frac{Q+2}{Q-2}$. We will deduce
a Liouville type theorem for entire solutions and a pointwise estimate near an isolated singularity for solutions to (\ref{1.1}). Precisely, we have
\begin{theorem}\label{Thm1}
Let $\Omega=\mathbb{H}^n$ be the whole space and $1<q<q^*$, then the equation (\ref{1.1}) has no positive solution,
namely, any nonnegative entire solution of (\ref{1.1}) must be the trivial one.
\end{theorem}
\begin{theorem}\label{Thm2}
Let $\Omega=B_1(0)\backslash\{0\}$ be the punctured unit ball in $\mathbb{H}^n$ and $1<q<q^*$,
then any positive solution $u$ of (\ref{1.1}) satisfies:
\begin{equation}\label{1.2}
u(\xi)\leq C|\xi|^{\frac{-2}{q-1}} \quad \text{for} \,\, \xi\,\, \text{near}\,\, 0,
\end{equation}
with some positive constant $C$ depending only on $n$ and $q$.
\end{theorem}
The core of the proofs of theorems \ref{Thm1} and \ref{Thm2} is an integral estimate, which may be of independent interest.
In fact, we shall prove the following
\begin{theorem}\label{Thm3}
Let $1<q<q^*$, and let $B_{4r}(\xi_0)\subset \Omega$ be any ball centered at $\xi_0$ with radius $4r$.
Then any positive solution $u$ of (\ref{1.1}) satisfies:
\begin{equation}\label{1.3}
\int_{B_r(\xi_0)} u^{3q-q^*} \leq C\,r^{Q-2\times\frac{3q-q^*}{q-1}} ,
\end{equation}
with some positive constant $C$ depending only on $n$ and $q$.
\end{theorem}
For $1<q<q^*$, we see $Q-2\times\frac{3q-q^*}{q-1}<0$. So if $u$ is a positive solution of (\ref{1.1}) with $\Omega=\mathbb{H}^n$,
taking $r\rightarrow +\infty$ in (\ref{1.3}) we have
\begin{equation}\label{1.4}
\int_{\mathbb{H}^n} u^{3q-q^*} \leq 0.
\end{equation}
This contradiction directly yields the conclusion of theorem \ref{Thm1}.
We will prove theorem \ref{Thm2} by using (\ref{1.3})
combined with the Harnack inequality deduced by Capogna-Danielli-Garofalo (see Theorem 3.1 in \cite{CDG1993}).
The equation (\ref{1.1}) has been studied intensively by many authors over the decades. In fact, it arises from the CR Yamabe problem on $\mathbb{H}^n$.
Let $\mathbf{\Theta}$ be the standard contact form on $\mathbb{H}^n$. Consider another smooth contact form $\theta=u^{\frac{2}{n}}\mathbf{\Theta}$.
Then the pseudo-Hermitian scalar curvature associated to the Fefferman metric of ($\mathbb{H}^n$,$\theta$) is $R=4n(n+1)u^{q-q^*}$ when $u$ satisfies the equation (\ref{1.1}). In particular, for $q=q^*$, the pseudo-Hermitian scalar curvature $R$ is a constant, and the CR Yamabe problem is to find such a contact form $\theta$. Accordingly, for $q=q^*$, the equation (\ref{1.1}) is called the CR Yamabe equation. The constant $1+q^*=\frac{2Q}{Q-2}$ is the CR Sobolev embedding exponent and, for the Yamabe equation, there are nontrivial solutions of the form
\begin{equation}\label{1.5}
u(z,t)=C\big|t+\sqrt{-1}z\cdot \overline{z}+z\cdot \mu +\lambda\big|^{-n}
\end{equation}
for some $C>0$, $\lambda\in \mathbf{C}$, Im($\lambda$)$>|\mu|^2/4$, and $\mu\in \mathbf{C}^n$; these are also the only extremals of
the CR Yamabe functional, i.e., the CR Sobolev inequality, on $\mathbb{H}^n$. So our theorem \ref{Thm1} also confirms that $q^*$ is indeed critical.
The equation (\ref{1.1}) has attracted much attention since it arose in the CR Yamabe problem, which was initiated and studied by David Jerison and John M. Lee in their series of fundamental works (see \cite{JL1987}-\cite{JL1989}).
For compact, strictly pseudoconvex CR manifolds, the CR Yamabe problem was solved in the case not locally CR equivalent to the sphere $\mathbf{S}^{2n+1}$ by Jerison-Lee \cite{JL1989} for $n\geq 2 $ and Gamara \cite{Ga2001} for $n=1$, and in the case locally
CR equivalent to $\mathbf{S}^{2n+1}$ by Gamara-Yacoub \cite{GY2001} for all $n\geq 1$. The CR Yamabe problem on closed Einstein pseudohermitian manifolds was also studied by Wang \cite{Wang2015}.
On $\mathbb{H}^n$, the uniqueness of CR Yamabe solutions was also obtained by Jerison-Lee \cite{JL1988} for the case of finite volume, i.e., $u\in L^{\frac{2Q}{Q-2}}(\mathbb{H}^n)$, and by Garofalo-Vassilev \cite{GV2001} for the case of cylindrically symmetry on groups of Heisenberg type. For the subcritical case $1<q\leq q_*$, Birindelli-Dolcetta \cite{BDC1997} proved that the only nonnegative entire solution of (\ref{1.1}) is the trivial one, where they also showed that $q=q_*$ is sharp for the nonexistence of the inequality
\begin{equation}\label{1.6}
-\triangle_{\mathbb{H}^n}u \geq 2n^2u^{q} \quad \text{in} \quad \mathbb{H}^n.
\end{equation}
For the subcritical case $q_*<q<q^*$, the classification of solutions to the equation (\ref{1.1}) is still open, except
for some partial results, such as for solutions which are cylindrical or decay at infinity \cite{BP1999},
and for $n>1,\,1<q\leq q^*-\frac{1}{(Q-2)(Q-1)^2}$ \cite{Xu2009}.
There are analogous results in the Euclidean case. In the splendid paper \cite{GS1981}, B. Gidas and J. Spruck proved that,
for $1<q<\frac{n+2}{n-2}$, the following equation (\ref{1.7}) has no positive entire solution
in the n-dimension Euclidean space $\mathbb{R}^n$:
\begin{equation}\label{1.7}
-\triangle u = u^{q} .
\end{equation}
The method used by Gidas-Spruck \cite{GS1981} is the integral estimate, as here in our paper.
Later, Chen-Li [3] also obtained the same result by using the method of moving planes. Gidas-Spruck \cite{GS1981}
also gave a singularity estimate; precisely, they proved that for $\frac{n}{n-2}<q<\frac{n+2}{n-2}$,
any positive solution of (\ref{1.7}) in the punctured unit ball, with a nonremovable singularity at the origin, must satisfy
\begin{equation}\label{1.8}
|x|^{\frac{2}{q-1}}u(x)\rightarrow C_0 \quad \text{as} \,\, x\rightarrow 0.
\end{equation}
As in the Euclidean case, the Liouville type result in theorem \ref{Thm1} may be useful
in resolving the Dirichlet problem for the same equation via blow-up analysis.
To get the integral estimate (\ref{1.3}), there are usually two difficulties to be overcome in a noncompact domain.
One is to find a suitable identity, and the other is to estimate the ``tail'' terms that arise after integrating
by parts the identity multiplied by a suitable cut-off function. When they studied the CR Yamabe problem in their splendid
work \cite{JL1988}, Jerison and Lee found several remarkable identities with the help of a computer program.
The idea of Jerison-Lee \cite{JL1988} was originally due to Obata in his classic work \cite{Ob1971}. Roughly speaking,
the main idea is to find an identity expressing some suitable nonnegative terms (usually with associated geometric data)
in divergence form, and then to integrate both sides of the identity to obtain the desired results.
Unfortunately, the identity in the case of the Heisenberg group $\mathbb{H}^n$
given by Jerison-Lee (see (4.2) in \cite{JL1988}, for example) is of such a complicated form that
one must carry out a long and tedious computation to check it, and it can hardly be generalized.
Nevertheless, based on a new observation, we generalize Jerison-Lee's identity to a new form
and give a transparent proof, so that it can be used to deal with the subcritical case of the equation (\ref{1.1}).
The paper is organized as follows. In Section 2, we introduce some notation and prove a generalization of Jerison-Lee's identity.
Then, using this generalized identity, we prove theorem \ref{Thm3} in Section 3. The proof of theorem \ref{Thm2} is presented in Section 4.
\section{Generalization of Jerison-Lee's identity }
\setcounter{equation}{0}
\setcounter{theorem}{0}
In this section we discuss the generalization of a remarkable identity of Jerison-Lee \cite{JL1988} on the Heisenberg group $\mathbb{H}^{n}$.
We adopt the notation of \cite{JL1988}.
We shall first give a brief introduction to the Heisenberg group $\mathbb{H}^{n}$ and fix some notation.
We consider $\mathbb{H}^{n}$ as the set $\mathbb{C}^n\times \mathbb{R}$ with coordinates ($z,\,t$) and group law $\circ$:
$$(z,t)\circ(\xi,s)=\big( z+\xi,\, t+s+2\mathbf{Im}\, z^{\alpha}\overline{\xi}^{\alpha}\big)\quad
\text{for}\,\,(z,t),\,(\xi,s)\in \mathbb{C}^n\times \mathbb{R}, $$
\noindent where and in the sequel, repeated indices are summed from $1$ to $n$. The CR structure of $\mathbb{H}^{n}$ is given by the bundle $\mathcal{H}$ spanned by the left-invariant vector
fields $Z_{\alpha} = \partial/\partial z^{\alpha}+ \sqrt{-1}\overline{z}^{\alpha} \partial/\partial t$, $\alpha= 1, \cdots, n$.
The standard (left-invariant) contact form on $\mathbb{H}^{n}$ is
$\mathbf{\Theta}= dt + \sqrt{-1}(z^{\alpha}d\overline{z}^{\alpha} - \overline{z}^{\alpha}dz^{\alpha})$.
With respect to the standard holomorphic frame $\{Z_{\alpha}\}$ and dual admissible coframe $\{dz^{\alpha}\}$,
the Levi form is $h_{\alpha\overline{\beta}}= 2\delta_{\alpha\overline{\beta}} $.
Accordingly, for a smooth function $f$ on $\mathbb{H}^{n}$, denote its derivatives by
$f_{\alpha}= Z_{\alpha}f$, $f_{\alpha\overline{\beta}}= Z_{\overline{\beta}}(Z_{\alpha}f)$
, $f_0= \frac{\partial f}{\partial t}$, $f_{0\alpha}= Z_{\alpha}(\frac{\partial f}{\partial t})$, etc.
We would also indicate the derivatives of functions or vector fields with indices preceded by a comma, to avoid confusion.
Then we have the following commutative formulae:
$$ f_{\alpha\beta}-f_{\beta\alpha}=0,\quad
f_{\alpha\overline{\beta}}-f_{\overline{\beta}\alpha}=2\sqrt{-1}\delta_{\alpha\overline{\beta}}\,f_0,\quad
f_{0\alpha}-f_{\alpha0}=0,$$
$$f_{\alpha\beta 0}-f_{\alpha 0\beta}=0,\qquad
f_{\alpha\beta\overline{\gamma}}-f_{\alpha\overline{\gamma}\beta}
=2\sqrt{-1}\delta_{\beta\overline{\gamma}}\,f_{\alpha 0},\, \cdots.$$
We are now in a position to give the generalized identity for positive solutions of the equation (\ref{1.1}).
Let $u>0$ solve (\ref{1.1}). Set $e^f= u^{\frac{1}{n}}$ and $q=q^*+ \frac{p}{n}$; then $f$ satisfies the following equation
\begin{equation}\label{2.1}
\mathbf{Re} f_{\alpha\overline{\alpha}}=-n|\partial f|^2-ne^{(2+p)f},
\end{equation}
where $|\partial f|^2=f_{\alpha}f_{\overline{\alpha}}$.
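For the reader's convenience, we record the short computation behind (\ref{2.1}). Writing $u=e^{nf}$, we have $u_{\alpha}=nf_{\alpha}e^{nf}$ and, since $f$ is real,
\begin{align*}
\triangle_{\mathbb{H}^n}u = u_{\alpha\overline{\alpha}}+u_{\overline{\alpha}\alpha}
=\big(2n\,\mathbf{Re} f_{\alpha\overline{\alpha}}+2n^2|\partial f|^2\big)e^{nf}.
\end{align*}
Moreover, $2n^2u^{q}=2n^2e^{nqf}$ and $n(q-1)=n(q^*-1)+p=2+p$. Hence $-\triangle_{\mathbb{H}^n}u=2n^2u^{q}$ is equivalent to (\ref{2.1}).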
Define the tensors
\begin{equation}\label{2.2}
\begin{split}
D_{\alpha\beta} =& f_{\alpha\beta}-2f_{\alpha}f_{\beta}, \qquad\qquad\qquad D_{\alpha}=D_{\alpha\beta}f_{\overline{\beta}},\\
E_{\alpha\overline{\beta}} =& f_{\alpha\overline{\beta}}-\frac{1}{n}f_{\gamma\overline{\gamma}}\delta_{\alpha\overline{\beta}},
\qquad\quad \qquad E_{\alpha}=E_{\alpha\overline{\beta}}f_{\beta}, \\
G_{\alpha}=& \sqrt{-1}f_{0\alpha}-\sqrt{-1}f_0f_{\alpha}+e^{(2+p)f}f_{\alpha}+|\partial f|^2f_{\alpha}.
\end{split}
\end{equation}
Denote the function $g=|\partial f|^2+e^{(2+p)f}-\sqrt{-1}f_0$. Then we can rewrite the equation (\ref{2.1}) as
\begin{equation}\label{2.3}
f_{\alpha\overline{\alpha}}=-ng.
\end{equation}
\noindent Moreover, we observe that
\begin{equation}\label{2.4}
\begin{split}
\,& E_{\alpha\overline{\beta}} = f_{\alpha\overline{\beta}}+g\delta_{\alpha\overline{\beta}},\qquad\,\,\qquad
E_{\alpha}= f_{\alpha\overline{\beta}}f_{\beta}+gf_{\alpha},\\
\,& D_{\alpha}=f_{\alpha\beta}f_{\overline{\beta}}-2|\partial f|^2f_{\alpha},\qquad
G_{\alpha}=\sqrt{-1}f_{0\alpha}+gf_{\alpha},
\end{split}
\end{equation}
and by
\begin{equation}\label{2.5}
(|\partial f|^2)_{,\overline{\alpha}} =D_{\overline{\alpha}}+E_{\overline{\alpha}}+\overline{g}f_{\overline{\alpha}}-2f_{\overline{\alpha}}e^{(2+p)f},
\end{equation}
we find
\begin{equation}\label{2.6}
\begin{split}
g_{\overline{\alpha}} =& D_{\overline{\alpha}}+E_{\overline{\alpha}}+G_{\overline{\alpha}}+pf_{\overline{\alpha}}e^{(2+p)f}.
\end{split}
\end{equation}
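In detail, conjugating (\ref{2.4}) gives $G_{\overline{\alpha}}=-\sqrt{-1}f_{0\overline{\alpha}}+\overline{g}f_{\overline{\alpha}}$, and therefore
\begin{align*}
g_{\overline{\alpha}} &= (|\partial f|^2)_{,\overline{\alpha}} + (2+p)f_{\overline{\alpha}}e^{(2+p)f} - \sqrt{-1}f_{0\overline{\alpha}}\\
&= D_{\overline{\alpha}}+E_{\overline{\alpha}}+\overline{g}f_{\overline{\alpha}} - 2f_{\overline{\alpha}}e^{(2+p)f} + (2+p)f_{\overline{\alpha}}e^{(2+p)f} + G_{\overline{\alpha}}-\overline{g}f_{\overline{\alpha}}\\
&= D_{\overline{\alpha}}+E_{\overline{\alpha}}+G_{\overline{\alpha}}+pf_{\overline{\alpha}}e^{(2+p)f}.
\end{align*}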
In view of the above observations, we now give the crucial identity as follows
\begin{proposition}\label{Pro-1}
\begin{equation}\label{2.7}
\begin{split}
\,& \mathbf{Re}Z_{\overline{\alpha}}\Big{\{}e^{2(n-1)f}\Big[\big(g+3\sqrt{-1}f_0\big)E_{\alpha}\\
\,&\hspace{77pt} +\big(g- \sqrt{-1}f_0\big)D_{\alpha} -3\sqrt{-1}f_0 G_{\alpha}
-\frac{p}{4}f_{\alpha}|\partial f|^4 \Big]\Big{\}}\\
=&\, e^{(2n+p)f}\big(|E_{\alpha\overline{\beta}}|^2 +|D_{\alpha\beta}|^2\big)\\
\,& +e^{2(n-1)f}\big(|G_{\alpha}|^2+|G_{\alpha}+D_{\alpha}|^2 +|G_{\alpha}-E_{\alpha}|^2
+|D_{\alpha\beta}f_{\overline{\gamma}}+E_{\alpha\overline{\gamma}}f_{\beta}|^2\big)\\
\, & +e^{(2n-2)f}\mathbf{Re}\big(D_{\alpha}+E_{\alpha}\big) f_{\overline{\alpha}}\big( pe^{(2+p)f} -\frac{p}{2}|\partial f|^2 \big)\\
\,& -p(2n-1)|\partial f|^2e^{2(n+1+p)f}-\frac{p}{4}(7n-6)|\partial f|^4e^{(2n+p)f}\\
\,& -\frac{p}{4}n|\partial f|^6e^{2(n-1)f} -3np|f_0|^2e^{(2n+p)f}.
\end{split}
\end{equation}
\end{proposition}
\begin{remark}
Note that for $p=0$, then (\ref{2.7}) is exactly a remarkable identity found by Jerison and Lee (see (4.2) in \cite{JL1988}).
\end{remark}
\vspace{10pt}
$\mathbf{Proof\,\, of\,\, proposition\,\, \ref{Pro-1}}$ \qquad Denote
$$\mathcal{L}=\mathcal{L}_1+\mathcal{L}_2+\mathcal{L}_3+\mathcal{L}_4,$$
with
$$\quad\mathcal{L}_1=Z_{\overline{\alpha}}\Big{\{} \big(g+3\sqrt{-1}f_0\big)E_{\alpha}e^{2(n-1)f} \Big{\}},$$
$$\hspace{-2pt}\quad\mathcal{L}_2=Z_{\overline{\alpha}}\Big{\{} \big(g- \sqrt{-1}f_0\big)D_{\alpha}e^{2(n-1)f} \Big{\}},$$
$$\mathcal{L}_3=Z_{\overline{\alpha}}\Big{\{} -3\sqrt{-1}f_0 G_{\alpha}e^{2(n-1)f} \Big{\}},$$
$$\hspace{-10pt} \mathcal{L}_4=Z_{\overline{\alpha}}\Big{\{} -\frac{p}{4}f_{\alpha}|\partial f|^4e^{2(n-1)f} \Big{\}}.$$
First we compute $\mathcal{L}_3$. We have, by (\ref{2.4}) and the commutative formulae,
\begin{equation}\label{2.8}
\begin{split}
G_{\alpha,\overline{\alpha}}
=&\, \sqrt{-1}f_{0\alpha\overline{\alpha}}+g_{\overline{\alpha}}f_{\alpha} +gf_{\alpha\overline{\alpha}}\\
=&\, \sqrt{-1}f_{\alpha\overline{\alpha} 0}+g_{\overline{\alpha}}f_{\alpha} +g(f_{\overline{\alpha}\alpha}+2n\sqrt{-1}f_0)\\
=&\, f_{\alpha}g_{\overline{\alpha}}-n\sqrt{-1}g_{0}-n|g|^2+2n\sqrt{-1}f_0g.
\end{split}
\end{equation}
Therefore,
\begin{equation}\label{2.9}
\begin{split}
e^{-2(n-1)f}\mathcal{L}_3
=&\, e^{-2(n-1)f}Z_{\overline{\alpha}}\Big{\{} -3\sqrt{-1}f_0 G_{\alpha}e^{2(n-1)f} \Big{\}}\\
=&\, -3\sqrt{-1}f_{0}G_{\alpha,\overline{\alpha}}-3\sqrt{-1}f_{0\overline{\alpha}}G_{\alpha}
-6(n-1)\sqrt{-1}f_{0}f_{\overline{\alpha}}G_{\alpha}\\
=&\, -3\sqrt{-1}f_{0}\big( f_{\alpha}g_{\overline{\alpha}}-n\sqrt{-1}g_{0}-n|g|^2+2n\sqrt{-1}f_0g \big)\\
\,&\, +3\sqrt{-1}(G_{\overline{\alpha}}-\overline{g}f_{\overline{\alpha}})G_{\alpha} -6(n-1)\sqrt{-1}f_{0}f_{\overline{\alpha}}G_{\alpha}\\
=&\, 3|G_{\alpha}|^2 -3(\overline{g}+2(n-1)\sqrt{-1}f_0)f_{\overline{\alpha}}G_{\alpha}\\
\,&\, -3\sqrt{-1}f_0f_{\alpha}g_{\overline{\alpha}}-3nf_0g_{0}+3n\sqrt{-1}f_0|g|^2+6n|f_0|^2g.
\end{split}
\end{equation}
Next we compute $\mathcal{L}_1$. Also by (\ref{2.4}) and the commutative formulae,
\begin{equation}\label{2.10}
\begin{split}
E_{\alpha,\overline{\alpha}}
=&\, f_{\alpha\overline{\beta}\overline{\alpha}}f_{\beta}+f_{\alpha\overline{\beta}}f_{\beta\overline{\alpha}}
+g_{\overline{\alpha}}f_{\alpha} +gf_{\alpha\overline{\alpha}}\\
=&\, f_{\alpha\overline{\alpha}\overline{\beta}}f_{\beta}
+f_{\alpha\overline{\beta}}(f_{\overline{\alpha}\beta}+2\sqrt{-1}f_0\delta_{\beta\overline{\alpha}})
+g_{\overline{\alpha}}f_{\alpha} +gf_{\alpha\overline{\alpha}}\\
=&\, -ng_{\overline{\beta}}f_{\beta} +f_{\alpha\overline{\beta}}f_{\overline{\alpha}\beta}
+2\sqrt{-1}f_0f_{\alpha\overline{\alpha}}
+g_{\overline{\alpha}}f_{\alpha} +gf_{\alpha\overline{\alpha}}\\
=&\, (1-n)f_{\alpha}g_{\overline{\alpha}}
+(E_{\alpha\overline{\beta}}-g\delta_{\alpha\overline{\beta}})(E_{\overline{\alpha}\beta}-\overline{g}\delta_{\overline{\alpha}\beta})
-n|g|^2\\
=&\, |E_{\alpha\overline{\beta}}|^2+(1-n)f_{\alpha}g_{\overline{\alpha}}.
\end{split}
\end{equation}
Therefore,
\begin{equation}\label{2.11}
\begin{split}
e^{-2(n-1)f}\mathcal{L}_1
=&\, e^{-2(n-1)f}Z_{\overline{\alpha}}\Big{\{} \big(g+3\sqrt{-1}f_0\big)E_{\alpha}e^{2(n-1)f} \Big{\}}\\
=&\, \big(g+3\sqrt{-1}f_0\big)E_{\alpha,\overline{\alpha}}\\
\,&\, +\big(g_{\overline{\alpha}}+3\sqrt{-1}f_{0\overline{\alpha}}\big)E_{\alpha}
+2(n-1)\big(g+3\sqrt{-1}f_0\big)f_{\overline{\alpha}}E_{\alpha}\\
=&\, \big(g+3\sqrt{-1}f_0\big)\big( |E_{\alpha\overline{\beta}}|^2+(1-n)f_{\alpha}g_{\overline{\alpha}} \big)\\
\,&\, +g_{\overline{\alpha}}E_{\alpha}+3\big(-G_{\overline{\alpha}}+\overline{g}f_{\overline{\alpha}}\big)E_{\alpha}
+2(n-1)\big(g+3\sqrt{-1}f_0\big)f_{\overline{\alpha}}E_{\alpha}\\
=&\, \big(g+3\sqrt{-1}f_0\big)|E_{\alpha\overline{\beta}}|^2+\big( g_{\overline{\alpha}}-3G_{\overline{\alpha}}\big)E_{\alpha}\\
\,&\, +\big(3\overline{g}+2(n-1)(g+3\sqrt{-1}f_0)\big)f_{\overline{\alpha}}E_{\alpha}\\
\,&\, +(1-n)(g+3\sqrt{-1}f_0)f_{\alpha}g_{\overline{\alpha}}.
\end{split}
\end{equation}
Now we compute $\mathcal{L}_2$. Using the commutative formulae, we compute
\begin{equation}\label{2.12}
\begin{split}
f_{\alpha\beta\overline{\alpha}}
=&\, f_{\alpha\overline{\alpha}\beta}+2\sqrt{-1}f_{0\alpha}\delta_{\beta\overline{\alpha}}\\
=&\, (f_{\overline{\alpha}\alpha}+2n\sqrt{-1}f_0)_{,\beta}+2\sqrt{-1}f_{0\beta}\\
=&\, -n\overline{g}_{\beta}+2(n+1)(G_{\beta}-gf_{\beta})\\
=&\, 2(n+1)G_{\beta}-n\overline{g}_{\beta}-2(n+1)f_{\beta}g.
\end{split}
\end{equation}
By this and (\ref{2.4}), (\ref{2.5}) we deduce
\begin{equation}\label{2.13}
\begin{split}
D_{\alpha,\overline{\alpha}}
=&\, f_{\alpha\beta\overline{\alpha}}f_{\overline{\beta}}+f_{\alpha\beta}f_{\overline{\beta}\overline{\alpha}}
-2(|\partial f|^2)_{,\overline{\alpha}}f_{\alpha} -2|\partial f|^2f_{\alpha\overline{\alpha}}\\
=&\, \big( 2(n+1)G_{\beta}-n\overline{g}_{\beta}-2(n+1)f_{\beta}g \big)f_{\overline{\beta}}\\
\,&\, +(D_{\alpha\beta}+2f_{\alpha}f_{\beta})(D_{\overline{\alpha}\overline{\beta}}+2f_{\overline{\alpha}}f_{\overline{\beta}})\\
\,&\, -2\big( D_{\overline{\alpha}}+E_{\overline{\alpha}}+\overline{g}f_{\overline{\alpha}}-2f_{\overline{\alpha}}e^{(2+p)f} \big)f_{\alpha}
+2n|\partial f|^2g\\
=&\, |D_{\alpha\beta}|^2 +2f_{\overline{\alpha}}D_{\alpha} -2f_{\alpha}E_{\overline{\alpha}}
+2(n+1)f_{\overline{\alpha}}G_{\alpha} -nf_{\overline{\alpha}}\overline{g}_{\alpha}.
\end{split}
\end{equation}
Therefore,
\begin{equation}\label{2.14}
\begin{split}
\,& e^{-2(n-1)f}\mathcal{L}_2\\
=&\,\, e^{-2(n-1)f}Z_{\overline{\alpha}}\Big{\{} \big(g- \sqrt{-1}f_0\big)D_{\alpha}e^{2(n-1)f} \Big{\}}\\
=&\,\, \big(g- \sqrt{-1}f_0\big)D_{\alpha,\overline{\alpha}}\\
\,&\,\, +\big(g_{\overline{\alpha}}- \sqrt{-1}f_{0\overline{\alpha}}\big)D_{\alpha}
+2(n-1)\big(g- \sqrt{-1}f_0\big)f_{\overline{\alpha}}D_{\alpha}\\
=&\,\, \big(g- \sqrt{-1}f_0\big)\big( |D_{\alpha\beta}|^2 +2f_{\overline{\alpha}}D_{\alpha} -2f_{\alpha}E_{\overline{\alpha}}
+2(n+1)f_{\overline{\alpha}}G_{\alpha} -nf_{\overline{\alpha}}\overline{g}_{\alpha} \big)\\
\,&\,\, +\big(g_{\overline{\alpha}}+G_{\overline{\alpha}}-\overline{g}f_{\overline{\alpha}} \big)D_{\alpha}
+2(n-1)\big(g- \sqrt{-1}f_0\big)f_{\overline{\alpha}}D_{\alpha}\\
=&\,\, \big(g- \sqrt{-1}f_0\big)|D_{\alpha\beta}|^2+(g_{\overline{\alpha}}+G_{\overline{\alpha}})D_{\alpha}\\
\,&\,\, +(2ng-\overline{g}-2n\sqrt{-1}f_0)f_{\overline{\alpha}}D_{\alpha} -2(g-\sqrt{-1}f_0)f_{\alpha}E_{\overline{\alpha}}\\
\,&\,\, +2(n+1)(g-\sqrt{-1}f_0)f_{\overline{\alpha}}G_{\alpha} -n(g-\sqrt{-1}f_0)f_{\overline{\alpha}}\overline{g}_{\alpha} .
\end{split}
\end{equation}
Finally, for $\mathcal{L}_4 $, by (\ref{2.3}) and (\ref{2.5}), a direct computation shows
\begin{equation}\label{2.15}
\begin{split}
e^{-2(n-1)f}\mathcal{L}_4
=& e^{-2(n-1)f}Z_{\overline{\alpha}}\Big{\{} -\frac{p}{4}f_{\alpha}|\partial f|^4e^{2(n-1)f} \Big{\}}\\
=& -\frac{p}{2}\big( E_{\overline{\alpha}}+D_{\overline{\alpha}}\big)f_{\alpha}|\partial f|^2\\
\,&\,\, -\frac{p}{4}n|\partial f|^6 +\frac{p}{4}(n+2)|\partial f|^4e^{(2+p)f} -\frac{p}{4}(n+2)\sqrt{-1}f_0|\partial f|^4.
\end{split}
\end{equation}
By (\ref{2.9}), (\ref{2.11}), (\ref{2.14}) and (\ref{2.15}), noticing that
$f_{\alpha}E_{\overline{\alpha}}=f_{\overline{\alpha}}E_{\alpha} $ (which also implies that this quantity is real), we obtain
\begin{equation}\label{2.16}
\begin{split}
\,& e^{-2(n-1)f}(\mathcal{L}_1+\mathcal{L}_2+\mathcal{L}_3+\mathcal{L}_4)\\
=&\,\, \big(g- \sqrt{-1}f_0\big)|D_{\alpha\beta}|^2 +\big(g+3\sqrt{-1}f_0\big)|E_{\alpha\overline{\beta}}|^2 +3|G_{\alpha}|^2\\
\,&\,\, +(g_{\overline{\alpha}}+G_{\overline{\alpha}})D_{\alpha}\,
+\big( g_{\overline{\alpha}}-3G_{\overline{\alpha}}\big)E_{\alpha}\\
\,&\,\, +\Big( 2ng-\overline{g}-\frac{p}{2}|\partial f|^2-2n\sqrt{-1}f_0 \Big)f_{\overline{\alpha}}D_{\alpha}\\
\,&\,\, +\Big( 2(n-2)g+3\overline{g}-\frac{p}{2}|\partial f|^2+(6n-4)\sqrt{-1}f_0) \Big)f_{\overline{\alpha}}E_{\alpha}\\
\,&\,\, +\Big( 2(n+1)g-3\overline{g}-(8n-4)\sqrt{-1}f_0 \Big)f_{\overline{\alpha}}G_{\alpha}\\
\,&\,\, -\big( (n-1)g+3n\sqrt{-1}f_0 \big)f_{\alpha}g_{\overline{\alpha}}\,
-n(g-\sqrt{-1}f_0)f_{\overline{\alpha}}\overline{g}_{\alpha} -3nf_0g_0\\
\,&\,\, +3n\sqrt{-1}f_0|g|^2 +6n|f_0|^2g \\
\,&\,\, -\frac{p}{4}n|\partial f|^6 +\frac{p}{4}(n+2)|\partial f|^4e^{(2+p)f} -\frac{p}{4}(n+2)\sqrt{-1}f_0|\partial f|^4.
\end{split}
\end{equation}
Straightforward calculations show
\begin{equation}\label{2.17}
\begin{split}
g_0
=&\,\, \sqrt{-1}f_{\alpha}G_{\overline{\alpha}}-\sqrt{-1}f_{\overline{\alpha}}G_{\alpha}\\
\,&\,\, +2f_0|\partial f|^2 +(2+p)f_0e^{(2+p)f} -\sqrt{-1}f_{00}.
\end{split}
\end{equation}
By this and by virtue of (\ref{2.6}), we finally reach
\begin{equation}\label{2.18}
\begin{split}
\,& e^{-2(n-1)f}(\mathcal{L}_1+\mathcal{L}_2+\mathcal{L}_3+\mathcal{L}_4)\\
=&\,\, \big(g- \sqrt{-1}f_0\big)|D_{\alpha\beta}|^2 +\big(g+3\sqrt{-1}f_0\big)|E_{\alpha\overline{\beta}}|^2 +3|G_{\alpha}|^2\\
\,&\,\, +(D_{\overline{\alpha}}+E_{\overline{\alpha}}+2G_{\overline{\alpha}})D_{\alpha}
+\big( D_{\overline{\alpha}}+E_{\overline{\alpha}}-2G_{\overline{\alpha}}\big)E_{\alpha}\\
\,&\,\, +p\big( f_{\overline{\alpha}} -\frac{1}{2}f_{\overline{\alpha}}|\partial f|^2 \big)\big(D_{\alpha}+E_{\alpha}\big)\\
\,&\,\, +(n-1)\big( |\partial f|^2+e^{(2+p)f} \big)(f_{\overline{\alpha}}G_{\alpha}-f_{\alpha}G_{\overline{\alpha}})\\
\,&\,\, +(5n+1)\sqrt{-1}f_0(f_{\overline{\alpha}}G_{\alpha}+f_{\alpha}G_{\overline{\alpha}})\\
\,&\,\, -p(2n-1)|\partial f|^2e^{2(2+p)f}-\frac{p}{4}(7n-6)|\partial f|^4e^{(2+p)f}\\
\,&\,\, -\frac{p}{4}n|\partial f|^6 -3np|f_0|^2e^{(2+p)f}\\
\,&\,\, -\frac{p}{4}(n+2)\sqrt{-1}f_0|\partial f|^4-p\sqrt{-1}f_0|\partial f|^2e^{(2+p)f}\\
\,&\,\, +3n\sqrt{-1}f_0|g|^2 -6n\sqrt{-1}f_0|f_0|^2+3n\sqrt{-1}f_0f_{00}.
\end{split}
\end{equation}
From this one can check (\ref{2.7}) easily, which completes the proof of proposition \ref{Pro-1}. \qed
\section{Proof of Theorem \ref{Thm3} }
\setcounter{equation}{0}
\setcounter{theorem}{0}
$\mathbf{Proof\,\, of\,\, Theorem\,\, \ref{Thm3}}$\qquad Let $f$ satisfy the equation (\ref{2.3}) and hence the identity (\ref{2.7}).
Then, by $q=q^*+ \frac{p}{n}$, the subcritical range $1<q<q^*$ corresponds to $-2<p< 0$, and (\ref{1.3}) is equivalent to
\begin{equation}\label{3-1}
\int_{B_r(\xi_0)} e^{(2n+4+3p)f}
\lesssim \,\,C r^{2n+2-2\times\frac{2n+4+3p}{2+p}}.
\end{equation}
Note that (\ref{2.7}) can be rewritten as
\begin{equation}\label{3-2}
\begin{split}
\mathcal{M}= \mathbf{Re}Z_{\overline{\alpha}}\Big{\{} \Big[ &\big(D_{\alpha}+E_{\alpha})(|\partial f|^2+e^{(2+p)f}) \\
\,&-\sqrt{-1}f_0\big(2D_{\alpha}-2E_{\alpha}+3 G_{\alpha}\big)
-\frac{p}{4}f_{\alpha}|\partial f|^4 \Big]e^{2(n-1)f}\Big{\}},
\end{split}
\end{equation}
with
\begin{equation}\label{3-3}
\begin{split}
\mathcal{M}
=& \big(|E_{\alpha\overline{\beta}}|^2 +|D_{\alpha\beta}|^2\big)e^{(2n+p)f}
+\Big( |G_{\alpha}|^2+|D_{\alpha\beta}f_{\overline{\gamma}}+E_{\alpha\overline{\gamma}}f_{\beta}|^2\Big)e^{2(n-1)f}\\
\,& + s\Big(|G_{\alpha}+D_{\alpha}|^2 + |G_{\alpha}-E_{\alpha}|^2\Big)e^{2(n-1)f}\\
\,& +(1-s)\Big(|G_{\alpha}+D_{\alpha}|^2 + |G_{\alpha}-E_{\alpha}|^2\Big)e^{2(n-1)f}\\
\,& + p e^{2(n-1)f}\mathbf{Re}[f_{\overline{\alpha}}(G_{\alpha}+D_{\alpha})]\big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\\
\,& + p e^{2(n-1)f}\mathbf{Re}[f_{\overline{\alpha}}(E_{\alpha}-G_{\alpha})]\big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\\
\,& -p(2n-1)|\partial f|^2e^{2(n+1+p)f}-\frac{p}{4}(7n-6)|\partial f|^4e^{(2n+p)f}\\
\,& -\frac{p}{4}n|\partial f|^6e^{2(n-1)f} -3np|f_0|^2e^{(2n+p)f}.
\end{split}
\end{equation}
So for $-2<p < 0$, we shall choose suitable $0<s<1$ such that $\mathcal{M}\geq 0$.
Now we rewrite $\mathcal{M}$ as
\begin{equation}\label{3-3a}
\begin{split}
\mathcal{M}
=& \big(|E_{\alpha\overline{\beta}}|^2 +|D_{\alpha\beta}|^2\big)e^{(2n+p)f}
+\Big( |G_{\alpha}|^2+|D_{\alpha\beta}f_{\overline{\gamma}}+E_{\alpha\overline{\gamma}}f_{\beta}|^2\Big)e^{2(n-1)f}\\
\,& + s \Big(|G_{\alpha}+D_{\alpha}|^2 + |G_{\alpha}-E_{\alpha}|^2\Big)e^{2(n-1)f}\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s}(G_{\alpha}+D_{\alpha}) +\frac{p}{2\sqrt{1-s}} f_{\alpha} \big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s}(E_{\alpha}-G_{\alpha}) +\frac{p}{2\sqrt{1-s}} f_{\alpha} \big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,&- \frac{p^2}{2(1-s)}e^{2(n-1)f}|\partial f|^2\Big[ e^{2(2+p)f} + \frac{1}{4}|\partial f|^4 - e^{(2+p)f}|\partial f|^2 \Big]\\
\,& -p(2n-1)|\partial f|^2e^{2(n+1+p)f}-\frac{p}{4}(7n-6)|\partial f|^4e^{(2n+p)f}\\
\,& -\frac{p}{4}n|\partial f|^6e^{2(n-1)f} -3np|f_0|^2e^{(2n+p)f}.
\end{split}
\end{equation}
Then we treat the terms in the last three lines and get
\begin{equation}\label{3-3b}
\begin{split}
\mathcal{M}
=& \big(|E_{\alpha\overline{\beta}}|^2 +|D_{\alpha\beta}|^2\big)e^{(2n+p)f}
+\Big( |G_{\alpha}|^2+|D_{\alpha\beta}f_{\overline{\gamma}}+E_{\alpha\overline{\gamma}}f_{\beta}|^2\Big)e^{2(n-1)f}\\
\,& + s \Big(|G_{\alpha}+D_{\alpha}|^2 + |G_{\alpha}-E_{\alpha}|^2\Big)e^{2(n-1)f}\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s} (G_{\alpha}+D_{\alpha}) + \frac{p}{2\sqrt{1-s}} f_{\alpha} \big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s} (E_{\alpha}-G_{\alpha}) + \frac{p}{2\sqrt{1-s}} f_{\alpha} \big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,& -p\big[\frac{n}{4} + \frac{p}{8(1-s)}\big]|\partial f|^6e^{2(n-1)f}-\frac{p}{4}\big[7n-6 -\frac{2p}{1-s}\big] |\partial f|^4e^{(2n+p)f}\\
\,& -p\big[2n-1+\frac{p}{2(1-s)}\big]|\partial f|^2e^{2(n+1+p)f} -3np|f_0|^2e^{(2n+p)f}.
\end{split}
\end{equation}
Now we take $0<s=s_0= \frac{1}{2} +\frac{p}{4n}<1$, then
\begin{equation}\label{3-3c}
\begin{split}
\mathcal{M}
=& \big(|E_{\alpha\overline{\beta}}|^2 +|D_{\alpha\beta}|^2\big)e^{(2n+p)f}
+\Big( |G_{\alpha}|^2+|D_{\alpha\beta}f_{\overline{\gamma}}+E_{\alpha\overline{\gamma}}f_{\beta}|^2\Big)e^{2(n-1)f}\\
\,& + s_0 \Big(|G_{\alpha}+D_{\alpha}|^2 + |G_{\alpha}-E_{\alpha}|^2\Big)e^{2(n-1)f}\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s_0}(G_{\alpha}+D_{\alpha}) +\frac{p}{2\sqrt{1-s_0}} f_{\alpha}\big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,& +e^{2(n-1)f}\Big|\sqrt{1-s_0}(E_{\alpha}-G_{\alpha}) +\frac{p}{2\sqrt{1-s_0}} f_{\alpha}\big( e^{(2+p)f} -\frac{1}{2}|\partial f|^2 \big)\Big|^2\\
\,& -p\frac{n(2n+p)}{4(2n-p)}|\partial f|^6e^{2(n-1)f}-\frac{p}{4}\big[7n-6 -\frac{8np}{2n-p}\big] |\partial f|^4e^{(2n+p)f}\\
\,& -p\frac{4n^2-2n+p}{2n-p}|\partial f|^2e^{2(n+1+p)f} -3np|f_0|^2e^{(2n+p)f},
\end{split}
\end{equation}
and all the coefficients above are positive for $-2<p<0$, so that $\mathcal{M}\geq 0$.
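Indeed, the passage from (\ref{3-3b}) to (\ref{3-3c}) rests on the elementary identities, with $1-s_0=\frac{2n-p}{4n}$,
\begin{align*}
\frac{n}{4}+\frac{p}{8(1-s_0)} &= \frac{n}{4}+\frac{np}{2(2n-p)}=\frac{n(2n+p)}{4(2n-p)},\\
7n-6-\frac{2p}{1-s_0} &= 7n-6-\frac{8np}{2n-p},\qquad
2n-1+\frac{p}{2(1-s_0)} = \frac{4n^2-2n+p}{2n-p},
\end{align*}
and the positivity follows since, for $-2<p<0$ and $n\geq 1$, we have $2n+p>0$, $2n-p>0$ and $4n^2-2n+p>4n^2-2n-2=2(n-1)(2n+1)\geq 0$.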
Since $B_{4r}\subset \Omega$, we can take a real smooth cut-off function $\eta$ such that
\begin{equation}\label{3-4}
\begin{cases}
\eta\equiv 1 &in \,\,B_r,\\
0\leq\eta\leq1 &in \,\,B_{2r},\\
\eta\equiv 0 &in \,\,\Omega\backslash B_{2r},\\
|\partial \eta|\lesssim \frac{1}{r} &in \,\,\Omega,
\end{cases}
\end{equation}
\noindent where we use ``$\lesssim $'' and ``$\cong$'' in place of ``$\leq$'' and ``$=$'' respectively, in order to suppress some
positive constants independent of $r$ and $f$.
Take a real $s>0$ large enough. Multiplying both sides of (\ref{3-2}) by $\eta^s $ and integrating over $\Omega$, we have
\begin{equation}\label{3-5}
\begin{split}
\,& \int_{\Omega}\eta^s \mathcal{M}\\
=& \int_{\Omega}\eta^s\mathbf{Re}Z_{\overline{\alpha}}\Big{\{} \big[ \big(D_{\alpha}+E_{\alpha})(|\partial f|^2+e^{(2+p)f}) \\
\,&\hspace{68pt} -\sqrt{-1}f_0\big(2D_{\alpha}-2E_{\alpha}+3 G_{\alpha}\big)
-\frac{p}{4}f_{\alpha}|\partial f|^4 \big]e^{2(n-1)f}\Big{\}}.
\end{split}
\end{equation}
Integrating by parts and using (\ref{3-4}), we get
\begin{equation}\label{3-6}
\begin{split}
\,& \int_{\Omega}\eta^s \mathcal{M}\\
= & -s\int_{\Omega}\eta^{s-1}\mathbf{Re}\eta_{\overline{\alpha}}\Big{\{} \big[ \big(D_{\alpha}+E_{\alpha})(|\partial f|^2+e^{(2+p)f}) \\
\,&\hspace{99pt} -\sqrt{-1}f_0\big(2D_{\alpha}-2E_{\alpha}+3 G_{\alpha}\big)
-\frac{p}{4}f_{\alpha}|\partial f|^4 \big]e^{2(n-1)f}\Big{\}}\\
\lesssim & \frac{1}{r}\int_{\Omega}\eta^{s-1}\Big{\{} |D_{\alpha}+E_{\alpha}|(|\partial f|^2+e^{(2+p)f})e^{2(n-1)f} \\
\,& \hspace{55pt} +|f_0|\big|2D_{\alpha}-2E_{\alpha}+3 G_{\alpha}\big|e^{2(n-1)f}+|\partial f|^5e^{2(n-1)f}\Big{\}}
\end{split}
\end{equation}
Since
$$|D_{\alpha}+E_{\alpha}|\leq |D_{\alpha}+G_{\alpha}|+|E_{\alpha}-G_{\alpha}|,$$
$$\big|2D_{\alpha}-2E_{\alpha}+3 G_{\alpha}\big| \leq 2|D_{\alpha}+G_{\alpha}|+2|E_{\alpha}-G_{\alpha}|+|G_{\alpha}|,$$
\noindent using Young's inequality $ab\leq \epsilon a^2+\frac{C}{\epsilon}b^2$ in (\ref{3-6}), we obtain
\begin{equation}\label{3-7}
\begin{split}
\int_{\Omega}\eta^s \mathcal{M}
\lesssim & \epsilon \int_{\Omega}\eta^{s}\big( |D_{\alpha}+G_{\alpha}|^2+|E_{\alpha}-G_{\alpha}|^2+|G_{\alpha}|^2\big) e^{2(n-1)f} \\
\,& \hspace{13pt} +\frac{1}{\epsilon r^2} \int_{\Omega}\eta^{s-2}\big(|\partial f|^4+e^{2(2+p)f}+|f_0|^2\big)e^{2(n-1)f}\\
\,& \hspace{113pt} +\frac{1}{r} \int_{\Omega}\eta^{s-1}|\partial f|^5e^{2(n-1)f}.
\end{split}
\end{equation}
This implies, by taking $\epsilon$ small, that
\begin{equation}\label{3-8}
\begin{split}
\int_{\Omega}\eta^s \mathcal{M}
\lesssim & \frac{1}{r^2} \int_{\Omega}\eta^{s-2}\big(|\partial f|^4+e^{2(2+p)f}+|f_0|^2\big)e^{2(n-1)f}\\
\,& \hspace{86pt} +\frac{1}{r} \int_{\Omega}\eta^{s-1}|\partial f|^5e^{2(n-1)f}.
\end{split}
\end{equation}
To go forward, we need the following lemmas, which will be proved at the end of this section.
\begin{lemma}\label{lem-1}
\begin{equation}\label{3-9}
\begin{split}
\int_{\Omega}\eta^{s-2}|f_0|^2e^{2(n-1)f}
\lesssim &\, \epsilon r^2\int_{\Omega} \eta^s \mathcal{M}
+\int_{\Omega}\eta^{s-2}|\partial f|^4e^{2(n-1)f}\\
\,& +\int_{\Omega}\eta^{s-2}|\partial f|^2e^{(2n+p)f}
+\frac{1}{r^2}\int_{\Omega}\eta^{s-4}|\partial f|^2e^{2(n-1)f}.
\end{split}
\end{equation}
\end{lemma}
\begin{lemma}\label{lem-2}
\begin{equation}\label{3-10}
\int_{\Omega} \eta^s e^{(2n+4+3p)f}
\lesssim \, \int_{\Omega} \eta^s|\partial f|^2e^{2(n+1+p)f}
+\frac{1}{r^{2}}\int_{\Omega} \eta^{s-2} e^{(2n+2+2p)f} .
\end{equation}
\end{lemma}
\vspace{10pt}
Now, plugging (\ref{3-9}) into (\ref{3-8}) with small $\epsilon$, we get
\begin{equation}\label{3-11}
\begin{split}
\int_{\Omega}\eta^s \mathcal{M}
\lesssim &\,\frac{1}{r^{2}}\int_{\Omega}\eta^{s-2} e^{2(n+1+p)f}\\
\,& +\frac{1}{r^2}\int_{\Omega}\eta^{s-2}|\partial f|^4e^{2(n-1)f}
+\frac{1}{r^2}\int_{\Omega}\eta^{s-2}|\partial f|^2e^{(2n+p)f}\\
\,& +\frac{1}{r^4}\int_{\Omega}\eta^{s-4}|\partial f|^2e^{2(n-1)f}
+\frac{1}{r} \int_{\Omega}\eta^{s-1}|\partial f|^5e^{2(n-1)f}.
\end{split}
\end{equation}
For the last term above, using Young's inequality one gets
\begin{equation}\label{3-12}
\begin{split}
\frac{1}{r} \int_{\Omega}\eta^{s-1}|\partial f|^5e^{2(n-1)f}
\lesssim & \epsilon \int_{\Omega}\eta^{s }|\partial f|^6e^{2(n-1)f}
+\frac{1}{r^6}\int_{\Omega}\eta^{s-6} e^{2(n-1)f}.
\end{split}
\end{equation}
Similarly, one has
\begin{equation}\label{3-13}
\begin{split}
\frac{1}{r^2} \int_{\Omega}\eta^{s-2}|\partial f|^4e^{2(n-1)f}
\lesssim & \epsilon \int_{\Omega}\eta^{s }|\partial f|^6e^{2(n-1)f}
+\frac{1}{r^6}\int_{\Omega}\eta^{s-6} e^{2(n-1)f},
\end{split}
\end{equation}
\begin{equation}\label{3-14}
\begin{split}
\frac{1}{r^2} \int_{\Omega}\eta^{s-2}|\partial f|^2e^{(2n+p)f}
\lesssim & \epsilon \int_{\Omega}\eta^{s }|\partial f|^4e^{(2n+p)f}
+\frac{1}{r^4}\int_{\Omega}\eta^{s-4} e^{(2n+p)f},
\end{split}
\end{equation}
and
\begin{equation}\label{3-15}
\begin{split}
\frac{1}{r^4} \int_{\Omega}\eta^{s-4}|\partial f|^2e^{2(n-1)f}
\lesssim & \epsilon \int_{\Omega}\eta^{s }|\partial f|^6e^{2(n-1)f}
+\frac{1}{r^6}\int_{\Omega}\eta^{s-6} e^{2(n-1)f}.
\end{split}
\end{equation}
Inserting these into (\ref{3-11}) and taking $\epsilon$ small yields
\begin{equation}\label{3-16}
\begin{split}
\int_{\Omega}\eta^s \mathcal{M}
\lesssim & \frac{1}{r^{2}}\int_{\Omega}\eta^{s-2} e^{2(n+1+p)f}\\
\, & +\frac{1}{r^4}\int_{\Omega}\eta^{s-4} e^{(2n+p)f}+ \frac{1}{r^{6}}\int_{\Omega}\eta^{s-6} e^{2(n-1)f}.
\end{split}
\end{equation}
Combining this with (\ref{3-10}) we arrive at
\begin{equation}\label{3-17}
\begin{split}
\,& \int_{\Omega} \eta^s e^{(2n+4+3p)f}\\
\lesssim & \frac{1}{r^{2}}\int_{\Omega}\eta^{s-2} e^{2(n+1+p)f}\\
\, & +\frac{1}{r^4}\int_{\Omega}\eta^{s-4} e^{(2n+p)f}+ \frac{1}{r^{6}}\int_{\Omega}\eta^{s-6} e^{2(n-1)f} \\
\lesssim & \epsilon \int_{\Omega} \eta^s e^{(2n+4+3p)f}
+r^{-2\times\frac{2n+4+3p}{2+p}}\int_{\Omega} \eta^{s-2\times\frac{2n+4+3p}{2+p}} ,
\end{split}
\end{equation}
where in the last step, Young's inequality has been used three times with different exponent pairs.
Note that $0\leq\eta\leq 1$ in $\Omega$ and $\eta= 1$ in $B_r(\xi_0)\subset\Omega$.
Therefore, by choosing $s>0$ big enough and $\epsilon$ small, we finally obtain
\begin{equation}\label{3-18}
\int_{B_r(\xi_0)} e^{(2n+4+3p)f}
\lesssim \,\, r^{2n+2-2\times\frac{2n+4+3p}{2+p}} .
\end{equation}
This is (\ref{3-1}), and hence theorem \ref{Thm3} is proved.\qed
\vspace{20pt}
To complete this section, we now give the proofs of lemmas \ref{lem-1} and \ref{lem-2}.
\vspace{10pt}
$\mathbf{Proof\,\, of\,\, lemma\,\, \ref{lem-1} }$
Since $f$ satisfies the equation (\ref{2.3}), a straightforward calculation shows
\begin{equation}\label{3--17}
e^{-kf}\mathbf{Re}Z_{\overline{\alpha}}\Big( \sqrt{-1}f_0f_{\alpha}e^{kf} \Big)
= -\mathbf{Re}G_{\overline{\alpha}}f_{\alpha}-n|f_0|^2+|\partial f|^4+|\partial f|^2e^{(2+p)f}.
\end{equation}
Multiplying both sides of (\ref{3--17}) by $\eta^{s-2} e^{kf}$ with $k=2(n-1)$
and integrating over $\Omega$, we have
\begin{equation}\label{3.17}
\begin{split}
\,& \int_{\Omega}\eta^{s-2}\mathbf{Re}Z_{\overline{\alpha}}\Big( \sqrt{-1}f_0f_{\alpha}e^{2(n-1)f} \Big)\\
=&\, \int_{\Omega}\eta^{s-2}\Big(-\mathbf{Re}G_{\overline{\alpha}}f_{\alpha}
-n|f_0|^2+|\partial f|^4+|\partial f|^2e^{(2+p)f}\Big)e^{2(n-1)f}.
\end{split}
\end{equation}
Integrating by parts, using (\ref{3-4}) and rearranging the terms yields
\begin{equation}\label{3.18}
\begin{split}
n\int_{\Omega}\eta^{s-2}|f_0|^2e^{2(n-1)f}
=&\, \int_{\Omega}\eta^{s-2}\big(|\partial f|^4+|\partial f|^2e^{(2+p)f}\big)e^{2(n-1)f}\\
\,& \quad -\int_{\Omega}\eta^{s-2}\mathbf{Re}G_{\overline{\alpha}}f_{\alpha}e^{2(n-1)f}\\
\,& \quad +(s-2)\int_{\Omega}\eta^{s-3}\mathbf{Re}\eta_{\overline{\alpha}}\Big( \sqrt{-1}f_0f_{\alpha}e^{2(n-1)f} \Big)\\
\lesssim &\, \int_{\Omega}\eta^{s-2}\big(|\partial f|^4+|\partial f|^2e^{(2+p)f}\big)e^{2(n-1)f}\\
\,& \quad +\int_{\Omega}\eta^{s-2}|G_{\overline{\alpha}}||\partial f| e^{2(n-1)f}\\
\,& \quad +\frac{1}{r}\int_{\Omega}\eta^{s-3}|f_0||\partial f|e^{2(n-1)f}.
\end{split}
\end{equation}
For the above last two terms, Young's inequality implies
\begin{equation}\label{3.19}
\begin{split}
\,& \int_{\Omega}\eta^{s-2}|G_{\overline{\alpha}}||\partial f| e^{2(n-1)f}
+\frac{1}{r}\int_{\Omega}\eta^{s-3}|f_0||\partial f|e^{2(n-1)f}\\
\leq &\, \epsilon r^2\int_{\Omega}\eta^s |G_{\alpha}|^2e^{2(n-1)f}
+\epsilon\int_{\Omega}\eta^{s-2}|f_0|^2 e^{2(n-1)f}\\
\,& \hspace{106pt} +\frac{C}{\epsilon r^2}\int_{\Omega}\eta^{s-4}|\partial f|^2e^{2(n-1)f} .
\end{split}
\end{equation}
Substituting this into (\ref{3.18}) with small $\epsilon$, we get
\begin{equation}\label{3.20}
\begin{split}
\int_{\Omega}\eta^{s-2}|f_0|^2e^{2(n-1)f}
\lesssim &\, \epsilon r^2\int_{\Omega} \eta^s \mathcal{M}
+\int_{\Omega}\eta^{s-2}|\partial f|^4e^{2(n-1)f}\\
\,& +\int_{\Omega}\eta^{s-2}|\partial f|^2e^{(2n+p)f}
+\frac{1}{r^2}\int_{\Omega}\eta^{s-4}|\partial f|^2e^{2(n-1)f}.
\end{split}
\end{equation}
This is just (\ref{3-9}).\qed
\vspace{10pt}
$\mathbf{Proof\,\, of\,\, lemma\,\, \ref{lem-2} }$
Multiplying both sides of the equation (\ref{2.3}) by $-\eta^s e^{2(n+1+p)f}$
and integrating over $\Omega$, we have
\begin{equation}\label{3-21}
\begin{split}
n\int_{\Omega} \eta^s g e^{2(n+1+p)f}
=& -\int_{\Omega} \eta^s f_{\alpha\overline{\alpha}}e^{2(n+1+p)f}\\
=& 2(n+1+p)\int_{\Omega} \eta^s|\partial f|^2e^{2(n+1+p)f}\\
\,& +s\int_{\Omega} \eta^{s-1} f_{\alpha}\eta_{\overline{\alpha}}e^{2(n+1+p)f}.
\end{split}
\end{equation}
Using (\ref{3-4}) and rearranging the terms yields
\begin{equation}\label{3-22}
\begin{split}
\int_{\Omega} \eta^s e^{(2n+4+3p)f}
\lesssim & \int_{\Omega} \eta^s|\partial f|^2e^{2(n+1+p)f}+ \frac{1}{r}\int_{\Omega} \eta^{s-1} |\partial f|e^{2(n+1+p)f}\\
\lesssim & \int_{\Omega} \eta^s|\partial f|^2e^{2(n+1+p)f}+ \frac{1}{r^2}\int_{\Omega} \eta^{s-2} e^{2(n+1+p)f},
\end{split}
\end{equation}
where in the last step the Cauchy-Schwarz inequality has been used; this is (\ref{3-10}), as desired. \qed
\section{Proof of theorem \ref{Thm2} }
\setcounter{equation}{0}
\setcounter{theorem}{0}
Before the proof of theorem \ref{Thm2}, we give the following Harnack inequality,
which is a special case of that given by Capogna-Danielli-Garofalo (see Theorem 3.1 in \cite{CDG1993}),
\begin{lemma}\label{lem4.1}
Let $0\leq u\in C^2(\Omega)$ satisfy
\begin{equation}\label{4.1}
\triangle_{\mathbb{H}^{n}} u+ h(\xi) u=0 \quad \text{in}\quad \Omega,
\end{equation}
with $h(\xi)\in L^s_{loc}(\Omega)$ for some $s>\frac{Q}{2}$. Then there exist constants $C_0>0$, $r_0>0$,
such that for any $B_r(\xi)$, with $B_{4r}(\xi)\subset \Omega$, and $r<r_0$,
\begin{equation}\label{4.2}
\max_{B_r(\xi)}u\leq C_0 \min_{B_r(\xi)}u.
\end{equation}
\end{lemma}
$\mathbf{Proof\,\, of\,\, Theorem\,\, \ref{Thm2}}.$
\qquad
Rewrite the equation (\ref{1.1}) as
\begin{equation}\label{4.3}
\triangle_{\mathbb{H}^{n}} u+ h(\xi) u=0 \quad \text{in}\quad B_1(0)\backslash\{0\},
\end{equation}
with $h(\xi)=2n^2 u^{q-1} $. For any $\xi_0\in B_1\backslash\{0\}$, take $r=\frac{1}{4}|\xi_0|$.
Denote by $ \big|B_r(\xi_0)\big|$ the volume of the ball $B_r(\xi_0)$. Using the estimate (\ref{1.3}) we have
\begin{equation}\label{4.4}
\int_{B_r(\xi_0)}h^s= (2n^2)^s\int_{B_r(\xi_0)} u^{3q-q^*} \leq C\,r^{Q-2\times\frac{3q-q^*}{q-1}},
\end{equation}
with $s=\frac{3q-q^*}{q-1}>\frac{Q}{2}$ for $1<q<q^{*}$. This implies $h(\xi)\in L^s_{loc}(\Omega)$ for some $s>\frac{Q}{2}$
and hence $u$ satisfies the Harnack inequality (\ref{4.2}).
So for $|\xi_0|$ small enough, combining this Harnack inequality with (\ref{1.3}), we finally obtain
\begin{equation}\label{4.5}
\frac{1}{C}|\xi_0|^Q u(\xi_0)^{3q-q^*}\leq |B_r(\xi_0)| \Big[\frac{u(\xi_0)}{C_0}\Big]^{3q-q^*} \leq \int_{B_r(\xi_0)} u^{3q-q^*} \leq C\,r^{Q-2\times\frac{3q-q^*}{q-1}}.
\end{equation}
This implies (\ref{1.2}), and the proof of theorem \ref{Thm2} is complete.\qed
\begin{remark}\label{remark1}
In the forthcoming paper \cite{MaOu}, we shall generalize this method to a class of semilinear elliptic equations on CR manifolds and obtain rigidity results.
\end{remark}
{\bf Acknowledgement} Partial research of the second author was done while he was visiting
The Chinese University of Hong Kong. He would like to thank the Institute of Mathematical Sciences
in The Chinese University of Hong Kong for its warm hospitality.
\section{Introduction}
It is well-known from the materials science, physics, and chemistry literature, that there is intense interest in studying the mechanics of structures made of graphene. Industrial applications and the potentials for graphene-made structures are abundant. For instance, nanoscale devices that use graphene as basic components, such as nanoscale resonators, switches, and valves, are being developed in many industries. Understanding the response of individual graphene structure elements to applied loads is therefore crucially important (see [1]-[8] and the references therein for a comprehensive list of applications). In this paper, we analyze the effects of axial compression and nonlinear lateral forces upon an idealized graphene beam. We prove the existence of a minimal ``buckling load,'' which, mathematically speaking, is not obvious due to the structure of the constitutive law relating the stress and strain upon a beam made of graphene. Furthermore, we prove the existence and uniqueness of solutions for the equilibrium equation of the elastoplastic beam when the lateral force satisfies a natural bound in terms of the elastoplastic parameters (and we prove non-existence, in certain cases, when this bound is not satisfied).
The Euler buckling load of a simply supported straight elastic beam subject to an end axial compressive load can be modeled by the equation:
\begin{equation} EI v''''+Pv''=0, 0<x<L \end{equation}
with boundary conditions
\begin{equation} v(0)=v(L)=v''(0)=v''(L)=0 \end{equation}
where $L$ is the length of the beam, $E$ the Young's modulus, and $I$ the area moment of inertia.
Integrating (1.1) twice gives
\begin{equation} EIv''+Pv=0 \end{equation}
when the boundary conditions are taken into account.
Therefore the boundary value problem (1.1)-(1.2) reduces to the well-known eigenvalue problem for the Laplacian in one dimension:
\begin{equation} EIv''+Pv=0\end{equation}
\begin{equation} v(0)=v(L)=0\end{equation}
As is well known, system (1.4)-(1.5) yields a sequence of eigenvalues and eigenfunctions $$v_{k}(x)=\text{sin}(\frac{k\pi x}{L}), P_{k}=EI(\frac{k\pi}{L})^2, k=1,2,3,...$$
Furthermore, each eigenvalue of (1.4)-(1.5) is simple. The eigenfunction $v_{1}$ is called the first buckling mode and $P_{1}$ is the well-known Euler critical buckling load, sometimes also called the onset buckling load. It is used widely in engineering practice.
\par\
The above Euler critical buckling load is derived based on Hooke's law, relating the axial stress $\sigma_{x}$ and the axial strain $\epsilon_{x}$: $\sigma_{x}=E\epsilon_{x}$, and on the assumption that during the deformation, the cross-sections of the beam column remain perpendicular to the center line. This classical result is generalized for Hollomon's law $\sigma_{x}=K|\epsilon_{x}|^{n-1}\epsilon_{x},$ where equation (1.1) is replaced by:
\begin{equation} (KI_n|v''|^{n-1}v'')''+P v''=0 \end{equation}
\begin{equation} v(0)=v(L)=v''(0)=v''(L)=0 \end{equation}
The critical load of (1.6)-(1.7) was found in [10], and is given by: $$P_{cr}=\frac{2n(\pi_{2,1+1/n})^2}{n+1}K I_{n}$$
where $I_{n}=\int_{A}|y|^{1+n}dydz$ is the generalized area moment of inertia, and $\pi_{2,1+1/n}=2\int_{0}^{\frac{\pi}{2}} cos(\theta)^{\frac{n-1}{n+1}}d\theta.$ The first eigenfunction is defined in terms of the generalized sine function, using the notation of the two-parameter sine function developed in [11].
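As a quick numerical illustration (a sketch of ours, assuming NumPy/SciPy; the function names are not from [10] or [11]), both $\pi_{2,1+1/n}$ and $P_{cr}$ can be evaluated by quadrature; for $n=1$ one recovers $\pi_{2,2}=\pi$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pi_2(n):
    # pi_{2,1+1/n} = 2 * int_0^{pi/2} cos(t)^((n-1)/(n+1)) dt
    val, _ = quad(lambda t: np.cos(t) ** ((n - 1.0) / (n + 1.0)),
                  0.0, np.pi / 2)
    return 2.0 * val

def hollomon_critical_load(n, K, I_n):
    # P_cr = 2 n (pi_{2,1+1/n})^2 / (n+1) * K * I_n
    return 2.0 * n * pi_2(n) ** 2 / (n + 1.0) * K * I_n

print(pi_2(1.0))   # = pi (Euler case)
print(pi_2(2.0))   # Hollomon exponent n = 2
\end{verbatim}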
Graphene material is shown to be modeled by the following quadratic stress-strain constitutive law (see [3] and [8]):
\begin{equation} \sigma_{x}=E\epsilon_{x}+D|\epsilon_{x}|\epsilon_{x}, \end{equation}
where D is related to the Young's modulus by the relation $D=-\frac{E^2}{4\sigma_{\max}}$ , and $\sigma_{\max}$ is the material's ultimate maximal shear stress.
For small strain, the elastic stress $E\epsilon_x$ dominates (1.8), while the plastic stress $D|\epsilon_x|\epsilon_x$ becomes prominent for large strain. Notice that the ratio $|\frac{D}{E}|=\frac{E}{4\sigma_{\max}}:=\alpha$ is the elastoplastic parameter which we will use in our asymptotic analysis of Section 2. When this parameter is small, the material's ultimate maximal shear stress $\sigma_{\max}$ is very large, and the elastic behavior dominates.
The equilibrium equation for a graphene-made Euler beam subject to axial compressive load $P,$ lateral force $f,$ and a nonlinear support $g$ (all per unit length) is given by the fourth-order equation:
\begin{equation} EI v'''' +DI_{2}(|v''|v'')''+Pv'' +g(v'')=f(x), 0<x<L, \end{equation}
where $I=\int_{A} y^2 dy dz,$ $I_{2}=\int_{A} y^3 dy dz,$ the $z$-axis is the off-plane direction, and $A$ is the cross-sectional area of the bar. We consider (1.9) along with one of the pin-pin (PP), pin-slide (PS), or slide-slide (SS) boundary conditions:
\begin{equation} \text{(PP Conditions) } v(0)=v(L)=v''(0)=v''(L)=0 \end{equation}
\begin{equation}\text {(PS Conditions) } v'(0)=v(L)=v'''(0)=v''(L)=0 \end{equation}
\begin{equation} \text{(SS Conditions) } v'(0)=v'(L)=v'''(0)=v'''(L)=0 \end{equation}
Using the non-dimensional variables and parameters:
$$z=xL^{-1}, u=vL^{-1}, \alpha= \frac{|D|I_{2}}{EIL}, \lambda=\frac{PL^2}{EI}, \hat{g}(u'')=\frac{g(u'')}{EIL^{-3}}, \hat{f}(z)=\frac{f(z)}{EIL^{-3}}, $$
equation (1.9) can be rewritten as:
\begin{equation} u''''-\alpha (|u''|u'')''+\lambda u'' +\hat{g}(u'')=\hat{f}(z).\end{equation}
The boundary conditions (1.10)-(1.12) become:
\begin{equation} \text{(PP Conditions) } u(0)=u(1)=u''(0)=u''(1)=0 \end{equation}
\begin{equation}\text {(PS Conditions) } u'(0)=u(1)=u'''(0)=u''(1)=0 \end{equation}
\begin{equation} \text{(SS Conditions) } u'(0)=u'(1)=u'''(0)=u'''(1)=0 \end{equation}
In the next section we will study a special case of (1.13):
\begin{equation} u''''-\alpha(|u''|u'')''+\lambda u'' =0\end{equation}
with the boundary condition
\begin{equation} u(0)=u(1)=u''(0)=u''(1)=0 \end{equation}
Here (1.17)-(1.18) represent the buckling problem for an Euler graphene beam, which replaces problems (1.1)-(1.2) and (1.6)-(1.7) for the elastic and Hollomon beams, respectively.
In Section 2, we provide an asymptotic expansion of the first eigenpair of (1.17)-(1.18) in terms of the perturbation parameter $\alpha,$ and prove that, for small enough $\alpha$, each eigenpair is simple and continuously dependent upon $\alpha$; we also establish the existence of an infinite sequence of eigenpairs of (1.17)-(1.18). In Section 3, we show that all eigenvalues are positive and derive a lower bound for the smallest eigenvalue. In Section 4, we consider the global existence and uniqueness of solutions for the boundary value problem (1.13)-(1.14), for the case of the PP boundary condition. Similar techniques are valid for the other boundary conditions. In this way, we extend the results established in [9] and [11] for the graphene beam with nonlinear support.
\section{ Existence of Eigen-pairs and Buckling Analysis of the Graphene Beam}
Integrating (1.17) twice, and applying the boundary conditions we obtain the nonlinear eigenvalue problem:
\begin{equation} u''-\alpha |u''|
u''+\lambda u=0 \end{equation}
\begin{equation} u(0)=u(1)=0 \end{equation}
When $\alpha=0$, (2.1)-(2.2) reduces to the eigenvalue problem for the Euler elastic beam:
\begin{equation}u''+\lambda u=0 \end{equation}
\begin{equation} u(0)=u(1)=0\end{equation}
whose eigenpairs are given by:
\begin{equation} \lambda_{k}=(k\pi)^2, u_{k}=sin(\pi k z), k=1,2,3,...\end{equation}
In particular, this linear problem has a discrete spectrum and each eigenvalue is simple.
Consider the nonlinear graphene operator defined by:
$$N_{G}(\alpha,u)=u''-\alpha|u''|u'',u\in H^{2}(0,1)\cap H^{1}_{0}(0,1)$$
Ideally, we would like to prove that $N_{G}$ has a discrete spectrum. The next proposition is a first step in this direction. We show that for each eigenvalue $\lambda_{k}$ of the linear operator (the Laplacian) there exists a continuously differentiable curve of eigenvalues of $N_{G}(\alpha,\cdot),$ for small $\alpha.$
The proof of these facts is based on the implicit function theorem as demonstrated below.
\begin{prop} For each eigenpair $(u_{1},\lambda_{1})$ of (2.3)-(2.4), there exists $\alpha_0$ small so that there exists a unique smooth curve $(u(\alpha),\lambda(\alpha))$ of eigenpairs of $N_{G}(\alpha,u)$ defined for $\alpha \leq \alpha_{0}$ such that $\lambda(0)=\lambda_{1}$ and $u(0)=u_{1}.$
\end{prop}
\emph{Proof} Define $F:\mathbb{R}\times (H^2 \cap H^{1}_{0})\times \mathbb{R} \rightarrow L^{2}\times \mathbb{R}$ in the following way:
$$F(\alpha,u,\lambda)=\big(u''-\alpha|u''|u''+ \lambda u,\ <u_{1}',u'>-<u_{1}',u_{1}'>\big)$$
$F$ is obviously continuously differentiable and $F(0,u_{1},\lambda_{1})=(0,0).$
We seek to prove that $F_{u,\lambda}(0,u_{1},\lambda_{1})= \left[ {\begin{array}{cc}
( \cdot)''+\lambda_{1}(\cdot) & u_{1} \\
< u'_{1}, (\cdot)'> & 0 \\
\end{array} } \right ]$ is invertible.
Now, if $F_{u,\lambda}(0,u_{1},\lambda_{1}) \left[ {\begin{array}{c}
u \\
\lambda \\
\end{array} } \right]=0$ then $u$ and $\lambda$ have to satisfy the following system:
$$u''+\lambda_{1}u+\lambda u_{1}=0$$ and
$$<u_{1}',u'>=0.$$
Now, multiply the first equation by $u_{1},$ integrate from $0$ to $1,$ and integrate by parts in the first term. Using the fact that $u''_1 +\lambda_{1}u_1 =0,$ we get:$\lambda \int_{0}^{1} u_{1}^2 dx=0$ so that $\lambda=0.$
Then the first equation becomes
$$u''+\lambda_{1}u=0,$$ so that, $\lambda_{1}$ being a simple eigenvalue, $u=cu_{1}$ for some constant $c$. Since $<u'_{1},
u'>=0$, we get $c\,\Vert u'_{1}\Vert^2_{L^2}=0$, hence $u\equiv 0.$ The proposition then follows from the implicit function theorem in Banach spaces.
\qed
We now seek to find an asymptotic expansion of the solution of (2.1) in powers of $\alpha$ .
The zeroth order boundary value problem is (2.3)-(2.4) whose solution is given by (2.5).
The first order equation then reads:
$$u''_{2}+\lambda_{1}u_{2}=|u''_{1}|u''_{1}-\lambda_{2}u_{1} $$
$$u_{2}(0)=u_{2}(1)=0 $$
whose solvability condition gives: $$\lambda_{2}=\frac{\int_{0}^{1}|u''_{1}|u''_{1}u_{1}\,dz}{\int_{0}^{1}|u_{1}|^2dz}$$
In this way we obtain an asymptotic expansion:
$$u(z)=u_{1}(z)+\alpha u_{2}(z)+O(\alpha^2)$$
$$\lambda=\lambda_{1}+\alpha \lambda_{2} +O(\alpha^2)$$
valid for small enough $\alpha$, where $u_{2}$ is the unique solution of the first order problem above. In particular, for the first mode $u_{1}=\sin(\pi z)$, $\lambda_{1}=\pi^2$, one computes $\lambda_{2}=-\frac{8\pi^{3}}{3}$, so the plastic correction lowers the buckling load.
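The expansion can be checked with a simple shooting method; the sketch below is our own (assuming NumPy/SciPy; none of the names come from the references). Since the nonlinearity is not homogeneous, the computed load depends on the amplitude, which we normalize by $u'(0)=\pi$ so that $u\to u_{1}=\sin(\pi z)$ as $\alpha\to 0$; on the monotone branch $|u''|\leq\frac{1}{2\alpha}$, the relation $w-\alpha|w|w=r$ is inverted explicitly.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def phi_inv(r, alpha):
    # solve w - alpha*|w|*w = r on the branch |w| <= 1/(2*alpha)
    if alpha == 0.0:
        return r
    disc = max(1.0 - 4.0 * alpha * abs(r), 0.0)  # valid for |r| <= 1/(4*alpha)
    return np.sign(r) * (1.0 - np.sqrt(disc)) / (2.0 * alpha)

def endpoint(lam, alpha):
    # integrate u'' = phi_inv(-lam*u), u(0) = 0, u'(0) = pi; return u(1)
    rhs = lambda z, y: [y[1], phi_inv(-lam * y[0], alpha)]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, np.pi], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def buckling_load(alpha):
    # smallest lam with u(1; lam) = 0, i.e. the first buckling load
    return brentq(lambda lam: endpoint(lam, alpha), 5.0, 12.0)

for alpha in [0.0, 1e-3, 1e-2]:
    print(alpha, buckling_load(alpha),
          np.pi**2 - 8.0 * np.pi**3 / 3.0 * alpha)  # vs. expansion
\end{verbatim}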
We denote by $X$ the Banach space $H^{2}(0,1)$ and define $K: X\to X$ via $<K(u),v>_{X}=\int_0^1u'v'dx$ for all $v\in X$. Consider the existence of solutions of (2.1)-(2.2) with the following additional constraint:
\begin{equation}
\gamma=<u,w>_{X}, \forall w\in K^{-1}(u)
\end{equation}
We now prove the following theorem:
\begin{thm} For any given $\gamma>0$, there exist a real number $\lambda \in R$ and a weak solution $w\in X $ of (2.1)-(2.2) satisfying (2.6). Furthermore, there exist infinitely many distinct eigenvalues $\{\lambda_i\}_{i=1}^{\infty}$ for which $\lambda_i \to \infty$ as $i\to\infty$.
\end{thm}
\begin{proof}
We define $F: X\to X$ by $F(u)=u-\alpha|u|u,$ with potential $\phi(u)=\frac{1}{2}u^2-\frac{\alpha}{3}|u|^3.$ The proof then follows by observing that the operators $K$, $F$ and $\phi$ satisfy the properties stated in the following theorem of Amann (1972), and that the eigenvalue problem (2.1)-(2.2) can be written in the operator form:
\begin{equation}
\begin{array}{lcl}
\lambda u=KF(u), u\in X \\
\gamma=<u,w>_{X} , \forall w\in K^{-1}(u)
\end{array}
\end{equation}
\end{proof}
\begin{thm}
Let $X$ be a Hilbert space and $K$,$F$ and $\phi$ satisfy the following conditions:\break
(A) $K: X\to X$ is linear, compact, monotone, symmetric, and $dim(K(X))=\infty$,\break
(B) $F:X\to X$ is nonlinear, continuous, odd, with potential $\phi$,\break
(C) $K(0)=\phi(0)=0$, and $\phi(u)\ne 0,$ for $u\ne 0 \in X$.
Then, for any number $c>0$, there exists $u\in X$, and a real number $\lambda$ satisfying the equations:
\begin{equation}KF(u)=\lambda u, <u,w>=c,\forall w \in K^{-1}(u)\end{equation}
Furthermore, there exist infinitely many distinct eigenvalues $\{\lambda_i\}_{i=1}^{\infty}$ for which $\lambda_i \to \infty$, as $i\to\infty$.
\end{thm}
\section{Positivity of Eigenvalues and Lower Bound on the First Buckling Load}
In the following two theorems we present a bound on the maximum norm of an eigenfunction of the eigenvalue problem (1.17)-(1.18) and prove that all the eigenvalues are positive. In particular, we study the following eigenvalue/eigenfunction problem (by calling $v=u''$ in (1.17)):
\begin{equation} (v-\alpha v|v|)'' +\lambda v=0 \end{equation}
\begin{equation} v(0)=v(1)=0\end{equation}
It is not obvious that all eigenvalues of (3.1)-(3.2) are positive. Indeed, one may imagine that the eigenvalues should change sign. On the other hand, physically, it makes sense that there be a minimal positive eigenvalue--a so-called buckling load. In this section we show that all eigenvalues of (3.1)-(3.2) are positive and bounded below by a constant (depending on $\alpha$) and we give a-priori estimates on the eigenfunctions in the $L^\infty$ norm.
One technicality which gives us a little bit of trouble is that an eigenfunction $v$ is not necessarily smooth in $(0,1).$ In fact, if it has interior zeros it cannot be smoother than of class $W^{2,\infty}$ near those zeros (due to the presence of the $|v|$ in our equation). Nonetheless, away from the zeros of a continuous eigenfunction, and away from points where $|v|=\frac{1}{2\alpha},$ the eigenfunction must be smooth.
This can be proved using the same techniques as are used in the regularity part of the proof of theorem 4.1 of section 4.
\begin{prop}
For every given $\alpha$, let $\lambda$ be an eigenvalue of (3.1)-(3.2). Then $\lambda>0.$
\end{prop}
\begin{proof}
It is obvious that $\lambda \neq 0.$ Now let $(\lambda,v)$ be an eigenpair of (3.1) and (3.2). Then $$(v-\alpha(|v|v))''=-\lambda v, 0<z<1$$ and $v(0)=v(1)=0$. Multiplying both sides of the differential equation by $(v-\alpha(|v|v))'$ and integrating, we get:
$\frac{1}{2}[(v-\alpha(|v|v))']^2+\lambda[\frac{1}{2}v^2-\frac{2\alpha}{3}|v|^3]=A$, where $A$ is an integration constant.
Suppose that $\exists c\in (0,1),$ such that $ |v(c)|=\frac{1}{2\alpha}$. Then, upon evaluating at $z=0$ and $z=c$, we get
$\frac{1}{2}[v'(0)]^2=A=\frac{1}{24\alpha^2}\lambda,$ which gives $\lambda \ge 0$. However, since $\lambda =0$ gives $v=0$, we conclude that, in this case, we must have $\lambda >0$.
If there is no number $c$ satisfying $|v(c)|=\frac{1}{2\alpha}, c \in [0,1],$ then, it must be that $|v|<\frac{1}{2\alpha}.$ We then multiply both sides of (3.1) by $v$ and integrate over $[0,1].$ Upon integrating by parts we see
$$\int_0^1v'^2(1-2\alpha|v|)dx=\lambda\int_0^1v^2dx$$ which leads again to the conclusion that $\lambda>0$. This completes the proof of the proposition.
\end{proof}
\begin{thm}
Let $v$ be a continuous eigenfunction of (3.1)-(3.2) corresponding to an eigenvalue $\lambda\ge0$. Then $|v|\le\frac{1}{2\alpha}$ on $[0,1]$.
\end{thm}
\begin{proof} Suppose that the conclusion of the theorem is false. Then without loss of generality we may assume that $v$ has a maximum at $x=c$ and that $v(c)>\frac{1}{2\alpha}$. Note that $v$ must be smooth in a neighborhood of $c$ by elliptic regularity (one may mimic the regularity proof in the next section). Now, because $v$ has a local maximum at $c,$ $v''(c)\le 0$ and $v'(c)=0$. Expanding equation (3.1) gives:
\begin{equation}
v''(1-2\alpha|v|)-2\alpha\frac{vv'^2}{|v|}+\lambda v=0.
\end{equation}
Letting $x=c$ and noting that $\lambda>0$ gives a contradiction.
\end{proof}
\begin{thm}
There exists an absolute constant $c$ so that if $\lambda$ is an eigenvalue of (3.1)-(3.2) then $\lambda\geq {c}.$ $c$ can be taken to be $\frac{1}{4}.$
\end{thm}
\emph{Proof:}
Set $f=-\lambda v.$ Then $v$ solves $$ (v-\alpha v|v|)''=f, \quad v(0)=v(1)=0$$
Integrating twice and using the boundary conditions, we see that $$v -\alpha v|v|=F,$$ where $F$ solves $F''=f, F(0)=F(1)=0.$
We want to see now that if $F$ is small then $v$ is small.
Indeed, if $|F|_{L^\infty}\leq \frac{1}{ 8\alpha},$ then $|v|_{L^\infty} \leq \frac{1}{4\alpha}$ (just by solving the equation $v-\alpha v|v|=F$).
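To see this explicitly: on the range $|v|\leq\frac{1}{2\alpha}$ guaranteed by Theorem 3.2, the map $t\mapsto t-\alpha|t|t$ is odd and increasing, and
$$\frac{1}{4\alpha}-\alpha\Big(\frac{1}{4\alpha}\Big)^{2}=\frac{3}{16\alpha}>\frac{1}{8\alpha},$$
so $|F|_{L^\infty}\leq\frac{1}{8\alpha}$ forces $|v|\leq\frac{1}{4\alpha}$ pointwise.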
Since we know a-priori that $|v|\leq \frac{1}{2\alpha},$ if we take $\lambda \leq \frac{1}{4},$ then $$|F|_{L^\infty}\leq |f|_{L^2}\leq |f|_{L^\infty}\leq \frac{\lambda}{2\alpha} \leq \frac{1}{8\alpha}$$
Therefore, we have shown that if $\lambda\leq \frac{1}{4},$ then $|v|\leq \frac{1}{4\alpha}.$
Now assume $\lambda\leq \frac{1}{4}$ and multiply (3.1) by $v$ and integrate from 0 to 1.
Then, integration by parts tells us: $$\int_{0}^{1}|v'|^2 (1-2\alpha|v|)=\lambda \int_{0}^{1} |v|^2.$$
Now, since $|v|\leq \frac{1}{4\alpha},$ we have $1-2\alpha|v|\geq \frac{1}{2},$ so $$\int_{0}^{1}|v'|^2 \leq 2 \lambda \int_{0}^{1} |v|^2.$$
So, if $2\lambda< \pi^2,$ the Poincar\'{e} inequality forces $v\equiv 0$. That is, if $\lambda < \frac{\pi^2}{2}$ then $v\equiv 0.$ But remember that we had to assume that $\lambda \leq \frac{1}{4}$ to get this. Therefore, the theorem is proved with $c=\frac{1}{4}.$
\qed
This Theorem gives us a lower bound on the first buckling load for the graphene beam as $$P_{cr}\ge\frac{I^2\sigma_{max}}{I_2L}$$
\section{Existence and Uniqueness of the Beam with Nonlinear Support}
In this section we want to prove existence and uniqueness of solutions for the elastic beam equations with compression below the first buckling load, with a nonlinear foundational support, and subject to a mild external force. In the next section we will show that the conditions we assume to prove existence and uniqueness are more or less optimal.
Consider the following non-linear elliptic boundary value problem:
\begin{equation} ((1-2\alpha|v|) v')'+\lambda v+ g(v)=f, \, \, \text{in} \, \, (0,1) \end{equation}
\begin{equation} v(0)=v(1)=0\end{equation}
with $\alpha\geq 0$ and $\lambda < \frac{\pi^2}{2},$ where $f$ is a bounded function. Furthermore, $g$ is a differentiable function which is homogeneous of degree 2 or more and satisfies the following inequalities:
$$tg(t)\leq 0, g'(t)\leq 0 \, \, \text{for all}\, \, t.$$
The main result of this section is that if $f$ is small enough in $L^{2}(0,1)$, then (4.1)-(4.2) has a unique $H^2$ solution. Moreover, we show by example, that our result is in some sense optimal: if $f$ is positive and large enough then no solution exists.
We prove the uniqueness before we prove existence.
We prove that if we have two solutions of (4.1)-(4.2) which are both small enough then the two solutions must coincide.
Define the following classes of functions:
$$B_{\delta} \equiv \{k\in W^{1,\infty}(0,1): \, |k|_{W^{1,\infty}}\leq \delta \}$$
\begin{prop}
If $\delta<(1+\frac{1}{\pi})^{-1}\frac{1}{2\alpha},$ then (4.1)-(4.2) has at most one weak solution in $B_{\delta}.$
\end{prop}
\emph{Proof:} Suppose that $v_{1},v_{2} \in B_{\delta}$ solve (4.1)-(4.2).
Then, $v=v_1 - v_2$ satisfies the following equation:
$$v'' + \lambda v +g(v_1)-g(v_2)= 2\alpha[((|v_1|-|v_2|) v'_1)'+ (|v_2|v')'] .$$
Now multiply by $v$ and integrate by parts. Since $v=0$ at 0 and 1, all the boundary terms vanish and we get:
\begin{equation}\begin{array}{lcl} \int_{0}^1 | v'|^2 +|\lambda|\int_{0}^1 |v|^2 -\int_{0}^{1} (g(v_{1})-g(v_{2}))(v_1 -v_2) \\
\leq 2\alpha [ \int_{0}^1 |v|| v'_1||v'|+ |v_2||v'|^2] \end{array} \end{equation}
where we used $$||v_1|-|v_2||\leq |v_1-v_2|$$
Now, because $g'\leq 0,$ we have that $(g(v_{1})-g(v_{2}))(v_1 -v_2) \leq 0,$ so we can drop the last term on the left hand side of (4.3).
By the Poincar\'{e} inequality, we have $|v|_{L^2} \leq \frac{1}{\pi}|v'|_{L^2}.$ Using this and the fact that $|v_2|,|v'_1| \leq \delta,$ we see that
$$| v'|_{L^2} \leq 2\alpha \delta(1+\frac{1}{\pi})| v'|_{L^2}$$
Therefore, if $\delta <(1+\frac{1}{\pi})^{-1} \frac{1}{2\alpha},$ then $v'\equiv 0$ and, using the boundary condition, the uniqueness theorem is proven.
\qed
The proof of existence will rely upon energy estimates and a suitable iteration scheme.
We will begin by proving the existence of a small solution in $H^{1}_{0}$ under a suitable condition on $f.$
First of all, in (4.1)-(4.2), we write $v=\frac{1}{2\alpha} w$ and $F=2\alpha f.$ Then we get that $v$ is a solution of (4.1)-(4.2) if and only if $w$ is a solution of:
\begin{equation} ((1-|w|) w')'+ \lambda w + 2\alpha g(\frac{1}{2\alpha} w)=F, \, \, \text{in} \, \, (0,1)
\end{equation}
\begin{equation} w(0)=w(1)=0. \end{equation}
Recall that $\lambda < \frac{\pi^2}{2}$ and that $G(\cdot):= 2\alpha g(\frac{1}{2\alpha} \cdot)$ satisfies the same conditions as $g.$
The main idea we want to use is that if $F$ is smooth and small enough, then, using the maximum principle, $w$ must also be small. Once $w$ is small, the equation becomes uniformly elliptic and we will then be able to deduce the existence and uniqueness of a small solution. We now prove existence of an $H^{1}$ weak solution.
\begin{prop}
Let $h$ be a bounded, measurable function with $|h|\leq \frac{1}{2}.$ Then if $w$ solves the following semi-linear boundary-value problem
\begin{equation} ((1-|h|)w')'+{\lambda} w +G(w)=F, \, \, \text{in} \, \, (0,1) \end{equation}
\begin{equation} w(0)=w(1)=0, \end{equation}
with $tG(t)\leq 0,$ for all $t$. Assume further that $\lambda < \frac{\pi^2}{2}.$ Then $$|w'|_{L^2} \leq \frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )}|F|_{L^2}$$
\end{prop}
\emph{Proof:}
Multiply (4.6) by $w$ and integrate from $0$ to $1.$ Upon integrating by parts we see
\begin{equation} \int_{0}^{1}(1-|h|)|w'|^2 dz-\lambda\int_{0}^{1} w^2 dz -\int_{0}^{1} G(w)wdz = -\int_{0}^{1}Fw dz \end{equation}
Using the condition on $h$ and that $tG(t)\leq 0,$ we see that $$\frac{1}{2} \int_{0}^{1} |w'|^2 dx \leq|\int_{0}^{1}Fw dx| + \lambda\int_{0}^{1}|w|^2 dx. $$
Now, using the best constant in the Poincar\'{e} inequality on $[0,1],$ we know that $$\int_{0}^{1} |w|^{2} dz \leq \frac{1}{\pi^2} \int_{0}^{1} |w'|^2 dz$$
This implies that $$\frac{1}{2} \int_{0}^{1} |w'|^2 dz \leq|\int_{0}^{1}Fw dz| + \frac{{\lambda}}{\pi^2}\int_{0}^{1}|w'|^2 dz $$
Since, by assumption, ${\lambda} < \frac{\pi^2}{2},$
$$(\frac{1}{2} - \frac{\lambda}{ \pi^2} )\int_{0}^{1} | w'|^2 dz \leq |\int_{0}^{1}Fw \,dz| $$
Using the Cauchy-Schwarz inequality and the Poincar\'{e} inequality once more we see:
$$|w'|_{L^2} \leq \frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )}|F|_{L^2} $$
We now need the fact that $H^{1}_{0}$ is imbedded in $L^\infty$ in dimension one:
$$|f|_{L^\infty} \leq |f'|_{L^2} $$ for all $f\in H^{1}_{0}(0,1).$ This is just a consequence of the Cauchy-Schwarz inequality. Therefore,
$$|w|_{L^\infty} \leq \frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{\pi^2} )}|F|_{L^2}$$
\qed
One may also try to prove this proposition using the maximum principle by seeing that the condition on $h$ implies that the ellipticity constant of our equation is $1-|h|\geq \frac{1}{2}.$ However we wanted a simple way to get exact constants in our bounds. Now assume $|F|_{L^2}\leq \frac{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2})}{2}$ and define the following sequence of functions $w_{n}:$
$$w_0 =0, \quad ((1-|w_{n-1}|)w'_n)'+\lambda w_n+G(w_{n})=F, \, \, \text{in} \, \, (0,1) $$
$$ w_n (0)=w_n (1)=0 $$
Using the theory of semi-linear elliptic equations in one dimension, we see that the sequence $w_{n}$ can be defined for all $n$ (see, for example, [9]). Moreover, by Proposition 4.2, $|w_n|\leq \frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )} |F|_{L^2}$ and $|w_n|_{H^1} \leq \frac{2}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )}|F|_{L^2}$ for all $n.$
Thus the sequence $w_n$ is uniformly bounded in $H^1 \cap L^\infty.$ Thus we may extract a subsequence of $w_n$ which converges weakly in $H^1 ,$ strongly in $L^p,$ for some $p>2$ and pointwise to a function $w \in H^1 \cap L^\infty.$ Moreover, $|w|_{H^1} \leq \frac{2}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )}|F|_{L^2}$ and $|w|_{L^\infty} \leq \frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )} |F|_{L^2}.$
Therefore, $w$ is a bounded weak solution of (4.6)-(4.7).
We now want to show that $w$ in fact belongs to $H^2(0,1),$ with an appropriate smallness estimate. We will first show that $w-\frac{|w|w}{2} \in H^2(0,1).$ Notice that $(w-\frac{|w|w}{2})''=((1-|w|)w')'.$ Therefore, we can write our equation as
$$(w-\frac{|w|w}{2})''=H$$ where $H$ is an $L^2$ function ($H=F-{\lambda}w-G(w)$). Using standard elliptic theory, $w-\frac{|w|w}{2} \in H^{2}(0,1).$
Call $v=w-\frac{w|w|}{2}$ and define the function $\Phi$ by $\Phi(x)=x-\frac{1}{2}x|x|.$ Now, $\Phi$ is not invertible on the whole real line; however, noting that $\Phi'(x)=1-|x|,$ we see that $\Phi$ is invertible for $|x|\leq \frac{1}{2}.$ Call the inverse $\Psi.$ Since $|w|_{L^\infty}\leq \frac{1}{2}$ by the smallness assumption on $F,$ $w=\Psi(v)$ is well-defined. By the inverse function theorem, $|\Psi'|\leq 2;$ in fact, $\Psi \in W^{2,\infty}$. Now we want to transfer our regularity estimate for $v$ to a regularity estimate for $w.$ This follows by the chain rule in Sobolev spaces.
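Spelled out, the chain rule gives
$$w'=\Psi'(v)\,v',\qquad w''=\Psi'(v)\,v''+\Psi''(v)\,(v')^{2},$$
so the bounds on $\Psi'$ and $\Psi''$ control $|w''|_{L^2}$ in terms of $|v''|_{L^2}$ and $|(v')^{2}|_{L^2}$; this is the estimate recorded below.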
Now that $w\in H^2,$ we can perform the following estimates:
Take the equation $$((1-|w|)w')'+{\lambda}w+G(w) = F$$ and multiply by $((1-|w|)w')'$ then integrate from 0 to 1.
Recall that $G'(t)\leq 0.$ Then we see, upon integration by parts in the second and third terms,
$$\begin{array}{lcl}\,\int_{0}^{1} |((1-|w|) w')'|^2 dz -\lambda\int_{0}^{1}|w'|^2(1-|w|)dz-\int_{0}^{1} G'(w)|w'|^2(1-|w|)dz\\=\int_{0}^{1} F((1-|w|)w')'dz\end{array} $$
Therefore, $$\begin{array}{lcl}\,\int_{0}^{1} |(w-\frac{|w|w}{2})''|^2 dz -{\lambda}\int_{0}^{1}|w'|^2(1-|w|)dz-\int_{0}^{1} G'(w)|w'|^2(1-|w|)dz\\=\int_{0}^{1} F(w-\frac{|w|w}{2})''dz \end{array} $$
Using the Cauchy-Schwarz inequality,
$$\int_{0}^{1} |(w-\frac{|w|w}{2})''|^2 dz -{\lambda}\int_{0}^{1}|w'|^2dz\leq \frac{1}{2} \int_{0}^{1} F^2+|(w-\frac{|w|w}{2})''|^2dz $$
$$ \int_{0}^{1} |(w-\frac{|w|w}{2})''|^2 dz \leq \int_{0}^{1} F^2 dz+2\lambda\int_{0}^{1}|w'|^2dz $$
Now recall that ${\lambda}\leq \frac{\pi^2}{2}$ and $|w'|_{L^2}^2 \leq(\frac{1}{\pi(\frac{1}{2} - \frac{\lambda}{ \pi^2} )})^2 |F|_{L^2}^2 $.
Therefore, $$ \int_{0}^{1} |(w-\frac{|w|w}{2})''|^2 dz \leq (1+(\frac{1}{(\frac{1}{2} - \frac{\lambda}{ \pi^2} )})^2) \int_{0}^{1} F^2 dz $$
So, $$ \int_{0}^{1} |v''|^2 dz \leq (1+(\frac{1}{(\frac{1}{2} - \frac{\lambda}{ \pi^2} )})^2) \int_{0}^{1} F^2 dz $$ and $w=\Psi(v).$ By simple calculations, $|w''|_{L^2} \leq 2(|v''|_{L^2}+|(v')^2|_{L^2}).$
Therefore, $$ |w|_{H^2} \leq 2 (\sqrt{(1+(\frac{1}{(\frac{1}{2} - \frac{\lambda}{ \pi^2} )})^2})|F|_{L^2}+(1+(\frac{1}{(\frac{1}{2} - \frac{\lambda}{ \pi^2} )})^2)|F|_{L^2}^2) $$
Thus we have proven the following theorem.
\begin{thm}
Let $\alpha>0$ be given. Suppose that $g$ is a differentiable function on the real line which is homogeneous of degree 2 or more. Suppose further that $tg(t)\leq 0$ and $g'(t)\leq 0$ for all $t.$ Suppose ${\lambda}< \frac{\pi^2}{2}.$ Then there exists $c_{1}>0$ small (explicitly given below) so that if $f$ is a measurable $L^2$ function on $[0,1]$ with $|f|_{L^2}\leq \frac{c_{1}}{\alpha},$ then the following non-linear boundary-value problem has a unique solution belonging to $H^2.$
\begin{equation} ((1 -2\alpha|v|) v')' +\lambda v+g(v)=f \end{equation}
\begin{equation} v(0)=v(1)=0. \end{equation}
Moreover, there exists a constant $c_{2}$ so that $|v|_{H^2}\leq \frac{c_{1}}{c_{2}}|f|_{L^2}.$ Here,
$c_{1}= \frac{\pi(\frac{1}{2} - \frac{\lambda}{\pi^2})}{2}, $ and
$c_{2}=4\sqrt{1+\big(\frac{1}{\frac{1}{2} - \frac{\lambda}{\pi^2}}\big)^2}.$
\end{thm}
\section{Nonexistence for Large External Force}
\begin{prop} Consider the system (4.9)-(4.10). Take $\lambda=0$ and $g=0.$ Then there exists a universal constant $c_{3}>0$ so that if we take $f\equiv \frac{c}{\alpha},$ for $c> c_{3},$ then there exists no solution to $(4.9)-(4.10).$
\end{prop}
\emph{Proof}
(4.9)-(4.10) reduces to $$ (v-\alpha|v|v)''=\frac{c}{\alpha},$$ $$v(0)=v(1)=0.$$
Integrating twice and using the boundary condition yields $$ v-\alpha|v|v=\frac{c}{\alpha} z(z-1)$$
Factoring we get:
$$v(1-\alpha|v|)=\frac{c}{\alpha}z(z-1).$$
Since the right hand side is never zero in (0,1), the left hand side can never be zero either. Therefore, $v$ is either positive or negative in $(0,1).$
Moreover, since $v(0)=0,$ we have $(1-\alpha|v|)>0$ for $z$ close to 0. The right hand side is negative in $(0,1),$ so $v$ is negative for $z>0$ small, and therefore $v$ is negative in the entire interval $(0,1)$. Therefore, $$ v+\alpha v^2=\frac{c}{\alpha} z(z-1).$$ Take $z=\frac{1}{2}.$ Then, $$ v(\tfrac{1}{2})+\alpha v^2 (\tfrac{1}{2})=-\frac{1}{4}\frac{c}{\alpha}.$$
So, if $c>1,$ we see that the discriminant of this equation is negative so that no solutions exist.
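Explicitly, $v(\frac{1}{2})$ is a root of the quadratic
$$\alpha t^{2}+t+\frac{c}{4\alpha}=0,\qquad \text{with discriminant}\quad 1-4\alpha\cdot\frac{c}{4\alpha}=1-c,$$
which is negative precisely when $c>1$; hence one may take $c_{3}=1.$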
\qed
Note that in the case $\lambda=0,$ $c_{1}=\frac{\pi}{4},$ so that if the external force is less than $\frac{\pi}{8\alpha}$ in $L^2,$ then we have existence and uniqueness of an $H^2$ solution. Moreover, we have an example of an external force larger than $\frac{1}{\alpha}$ in $L^2$ for which there exists no $H^1$ solution to (4.9)-(4.10). Thus our result in Theorem 4.3 is essentially optimal, both in the mathematical and the physical sense. Physically, this says that for a small enough lateral force we have a smooth deformation, but for a large lateral force ('small' and 'large' being determined by the basic physical constants in the system, such as the maximal strain $\sigma_{\max}$) there is no smooth deformation.
\section{Acknowledgements}
This research was done while D. Wei was visiting Texas A\&M University in Qatar in summer 2013 and he acknowledges the gracious support of TAMUQ and the Qatar Foundation. T. Elgindi was supported by NSF grant no. 1211806 during the completion of this research.
\section{\label{sec:level1}Introduction}
Geometrical frustration in magnetic systems often causes attractive phenomena such as the spin liquid\cite{Lee2008,Leon2010}, spin ice\cite{Bramwell2001}, and spin nematic order\cite{Momoi2005}. Over several decades, a number of studies have been performed on insulating quantum-spin systems. Recently this interest has been extended to strongly correlated systems with conduction electrons\cite{Intro_frustrate}. In such systems, the quantum mechanical coupling between conduction electrons and frustrated localized spins can play an important role in lifting the frustration. To explore novel quantum phases, it is intriguing to study geometrical frustration effects on strongly-correlated $f$-electron heavy-fermion systems, such as UNi$_4$B\cite{Mentink1994,Lacroix1996} and CePdAl\cite{Donni1996,Keller2002}.
In the present study, we focus on the geometrically frustrated Kondo lattice CePdAl, which crystallizes in a hexagonal ZrNiAl-type crystal structure (space group $P\bar{6}2m$) with no inversion symmetry\cite{Xue1994}. There exist three equivalent Ce sites, forming a quasi-Kagome lattice\cite{Hulliger1993,Schank1994}. The magnetic susceptibility has a strong anisotropy due to the crystal-electric-field (CEF) effect\cite{Isikawa1996}. An incommensurate antiferromagnetic order occurs at $T_{\rm{N}}$ = 2.7 K\cite{Kitazawa1994}. The smallness of energy scale of $T_{\rm{N}}$ compared to the paramagnetic Curie temperature of $\theta_{\rm{p}} = -$34 K\cite{Donni1996} may indicate the presence of a strong frustration in this system.
One of the most interesting properties in CePdAl is the presence of partially disordered spins due to the Kondo effect, coexistent with the antiferromagnetic order below $T_{\rm{N}}$. The magnetic structure below $T_{\rm{N}}$ is characterized by an incommensurate propagation vector $Q$ = (0.5, 0, $\tau$)\cite{Donni1996}, where the component $\tau$ decreases with cooling and becomes constant ($\tau\approx$ 0.354) below $\sim$2 K\cite{Keller2002,Prokes2006}. In the hexagonal basal plane, the ordered moments at two-thirds of Ce sites, \textit{i.e.}, Ce(1) and Ce(3), form ferromagnetic chains, which couple antiferromagnetically each other\cite{Donni1996}. On the other hand, surprisingly, one-third of Ce sites, \textit{i.e.}, Ce(2), which are located between the magnetic chains formed by Ce(1) and Ce(3), have no magnetic order below $T_{\rm{N}}$\cite{Donni1996}. Such partially disordered behavior has also been revealed from NMR measurements; the spin-lattice relaxation time obeys Korringa's law down to 30 mK without further magnetic transition below $T_{\rm{N}}$\cite{Oyamada2008}. These experimental facts indicate that the moments on the nonmagnetic Ce(2) sites are screened by the Kondo effect, and the frustration is lifted by forming the heavy-fermion state. Indeed, the low-temperature electronic specific-heat coefficient becomes huge $C/T \sim$ 1 JK$^{-2}$mol$^{-1}$ just above $T_{\rm{N}}$, and the estimated Kondo temperature has a similar energy scale $T_{\mathrm{K}}\sim$ 6 K\cite{Cermak_JPhysC_2010} to $T_{\rm{N}}$.
In magnetic fields, the magnetization exhibits three successive metamagnetic transitions at 0.51 K below 5 T and reaches 1.43 $\mu_{\rm B}$/Ce at 10 T\cite{Hane2000}, smaller than the value of 1.58-1.81 $\mu_{\rm B}$/Ce observed in the neutron diffraction experiments\cite{Donni1996,Prokes2015}. Thus experiments in still higher magnetic fields are necessary to understand the whole magnetization process. Moreover, a rich $B$-$T$ phase diagram with several field-induced states has recently been revealed\cite{Zhao2016}. However, the details of the ground states, their anisotropy, and the effect of magnetic fluctuations in the high-field phases have still not been clarified. Precise quantitative measurements are therefore needed for a thermodynamic understanding of the field-induced magnetic phases in CePdAl and their relationship to the heavy-fermion state.
In this paper, we report the results of the high-field magnetization and specific-heat measurements on CePdAl. To reveal the magnetic property above the metamagnetic transitions and the magnetic anisotropy in the high-field phases, we have measured the magnetization curves from the magnetic easy axis ($c$-axis) to the hard plane ($ab$-plane) by using the pulsed magnetic field up to 50 T. From the specific-heat measurements for $B\parallel c$-axis up to 7 T, we have also precisely constructed magnetic phase diagram of low-$T$ ground state of CePdAl and investigated the effect of magnetic fluctuation in the high-field phases.
\section{\label{sec:level2}Experiment}
A single crystalline sample of CePdAl was prepared by the Czochralski pulling method\cite{Isikawa1996}. The sample was orientated by the Laue X-ray photographs. The high magnetic field up to 50 T for the magnetization measurement was generated by using a non-destructive pulsed magnet, and the sample was cooled by using a $^4$He cryostat ($T\geq$ 1.3 K, $B\leq$ 50 T). The magnetization was measured by a standard pick-up coil method. To apply magnetic fields along various crystal orientations, the sample was put between diagonally cut quartz rods and sealed into a heat shrinkable tube. Then, we could avoid the effect of the torque caused by the large magnetic anisotropy. The specific-heat measurements were performed by a standard quasi-adiabatic heat-pulse technique with a hand-made calorimeter installed in a $^3$He Oxford Heliox system ($T\geq$ 0.3 K, $B\leq$ 7 T).
\section{\label{sec:level3}Results and discussion}
\begin{figure}[t]
\includegraphics[width = 3.4in]{M-H_CePdAl_1.pdf}
\caption{\label{fig:CePdAl_M-H_1}
High-field magnetization curves parallel and perpendicular to the $c$-axis of a single crystal CePdAl at 1.3 K. The inset figure shows the differential magnetization along the $c$-axis. The dashed curve is the calculation with the CEF model\cite{Isikawa1996} for $B\parallel c$ assuming $T=1.3$ K.}
\end{figure}
The magnetization ($M$) curves of the CePdAl at 1.3 K up to 50 T parallel and perpendicular to the easy axis ($c$-axis) are shown in Fig. \ref{fig:CePdAl_M-H_1}. The inset shows the differential of $M(B)$ curve \textit{i.e.}, d$M(B)$/d$B$ for $B\parallel c$-axis around three metamagnetic transitions. The magnetization along the $c$-axis increases rapidly and shows three metamagnetic transitions\cite{magnetocaloric}. Corresponding to these transitions, d$M(B)$/d$B$ has three clear peaks at $B_{\rm{m}1}$ = 3.2 T, $B_{\rm{m}2}$ = 3.4 T and $B_{\rm{m}3}$ = 4.0 T. In addition, a shoulder-like anomaly is observed at $\sim$4.2 T above $B_{\rm{m}3}$. In the high magnetic field region, $M(B)$ has a very small slope of $\sim$1.5$\times$10$^{-3}$ $\mu_{\rm{B}}$/T at 45 T and reaches 1.6 $\mu_{\rm{B}}$/Ce at 50 T, smaller than the value expected from the previously determined CEF parameters (dashed curve)\cite{Isikawa1996,CEF_states}. By contrast, the magnetization perpendicular to the $c$-axis shows no anomaly and increases in proportion to $B$ up to 50 T, and the value of the magnetization at 50 T decreases from 1.6 $\mu_{\rm{B}}$/Ce for $B\parallel c$ to 0.35 $\mu_{\rm{B}}$/Ce for $B\bot c$. These results suggest that the large magnetic anisotropy of CePdAl persists even in the strong magnetic field of 50 T.
\begin{figure}[t]
\includegraphics[width = 3.4in]{M-H_CePdAl_2.pdf}
\caption{\label{fig:CePdAl_M-H_2}
(Color online)
(a) Magnetization and differential magnetization curves of CePdAl for various crystal orientations. The differential magnetizations at $\theta$ = 47$^\circ$ and 57$^\circ$ are multiplied by 2, and that of $\theta$ = 75$^\circ$ is multiplied by 15 for clarity.
(b) Angular dependences of the metamagnetic transition fields $B_{\rm{m}1,2,3}$. The inset shows the wide-area view.
(c) Angle dependences of the magnetization values below (2.0 T) and above (20 T) the metamagnetic transitions. The constant term $M_0$, which results mainly from the contribution of the CEF exited states, is subtracted as $\Delta M(\theta)=M(\theta)-M_0$. The solid and dashed curves represent the functions in proportional to $\cos\theta$ and $\cos^2\theta$, respectively.}
\end{figure}
\begin{figure*}[th]
\includegraphics[width = 7in]{HC_CePdAl_1.pdf}
\caption{\label{fig:HC_CePdAl}
(Color online)
(a) $C/T$ vs $B$ plots below 1 K. Specific-heat data are offset for clarity.
The anomalies are distinguished into five magnetic fields of $B_{{\rm m}i}$ ($i=$1,2,3) (up arrows), $B_{0}$ (solid-down arrow), and $B^{\star}$ (dashed-down arrow).
(b) $C/T$ vs $T$ plots from 0 T to 7 T. The anomalies are distinguished into three temperatures of $T_{\rm{N}}$ (down arrows), $T_{0}$ (dashed-down arrows) and $T^{\star}$ (up arrows).
(c) $C/T$ vs $T$ plots in the magnetic fields between 3.0 T and 4.0 T.
Specific-heat data are offset for clarity. The sharp anomalies at $T_{\rm{m}}$ are indicated by the up arrows, and the broad anomalies at $T_{\rm{N}}$ and $T_{\rm A}$ are indicated by down arrows and dashed-down arrows, respectively.
}
\end{figure*}
To further investigate the anisotropy of the magnetization and metamagnetic transitions, the angular dependence of the $M(B)$ curves is measured [Fig. \ref{fig:CePdAl_M-H_2}(a)]. The angle $\theta$ is measured from the $c$-axis as illustrated in the lower panel of Fig. \ref{fig:CePdAl_M-H_2}(a). The magnetization is gradually suppressed with increasing $\theta$, but decreases rapidly from $\theta=57^\circ$ to $\theta= 75^\circ$. As seen in Fig. \ref{fig:CePdAl_M-H_2}(a), each transition field shifts to higher magnetic field, and the anomalies in d$M(B)$/d$B$ become smaller and broader with increasing $\theta$. At $\theta=75^\circ$, the critical fields of the metamagnetic transitions are $B_{\rm{m}2}\sim$12.0 T and $B_{\rm{m}3}\sim$ 13.5 T, whereas the anomaly at $B_{\rm{m}1}$ disappears. The angular dependences of the metamagnetic transition fields are well described by the function $B_{\rm{m}1,2,3}(\theta)=B_{\rm{m}1,2,3}(\theta = 0)/\cos\theta$ below $\theta=57^\circ$ [Fig. \ref{fig:CePdAl_M-H_2}(b)], indicating that the effective magnetic field along the $c$-axis governs the metamagnetic transitions. Thus the high-field states are also Ising-like ordered phases with successive increments of the magnetic moment along the $c$-axis, unlike the spin-flop phase of a conventional antiferromagnet.
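As a concrete illustration of this secant-law behaviour, the following minimal sketch fits $B_{\rm m}(\theta)=B_{\rm m}(0)/\cos\theta$ to synthetic data; the numbers, array names, and function names here are ours for illustration only and are not the measured transition fields:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
theta = np.array([0.0, 20.0, 35.0, 47.0, 57.0])  # degrees from the c-axis
b_true = 4.0 / np.cos(np.radians(theta))         # secant law, B_m3(0) = 4 T
b_obs = b_true + rng.normal(0.0, 0.05, theta.size)  # synthetic "data"

def sec_law(theta_deg, b0):
    # Transition occurs at a fixed effective field B*cos(theta) along c
    return b0 / np.cos(np.radians(theta_deg))

popt, _ = curve_fit(sec_law, theta, b_obs, p0=[4.0])
print("fitted B_m3(theta=0) = %.2f T" % popt[0])
\end{verbatim}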
Figure \ref{fig:CePdAl_M-H_2}(c) shows the angular dependences of the magnetization values at 2.0 and 20 T. We subtract the angular-independent term as $\Delta M(\theta) \equiv M(\theta)-M_0$; $M_{\rm 0}= M_{\bot c}$, which is considered to result mainly from the contribution of the CEF excited states. Above $B_{\rm{m}3}$, at 20 T, the magnetization value is well described by the function of $\Delta M_{20\rm{T}}(\theta) = \Delta M_{20\rm{T}}(0)\cos\theta$ [Fig. \ref{fig:CePdAl_M-H_2}(c)]. This ``$\propto$ cos$\theta$" behavior suggests that almost fully-induced magnetic moments on Ce sites are ferromagnetically aligned along the $c$-axis above $B_{\rm{m}3}$, since we detect the magnetization component along magnetic fields as shown in Fig. \ref{fig:CePdAl_M-H_2}(c). The neutron diffraction measurements have also detected the increments of ferromagnetic (100) reflection at the metamagnetic transitions\cite{Prokes2006}.
In contrast, the magnetization at 2.0 T below $B_{\rm{m}1}$ is proportional to cos$^2\theta$ rather than cos$\theta$ [Fig. \ref{fig:CePdAl_M-H_2}(c)]. This is reasonable if the magnetic moments along the $c$-axis $M_{\parallel c}$ are induced by the effective magnetic field $B_{\rm{eff}}= B\cos\theta$ as $M_{\parallel c} = \chi_{\parallel c}B_{\rm{eff}} = \chi_{\parallel c}B\cos\theta$. In this case, we detect the magnetization component along the applied field $M = M_{\parallel c} \cos\theta = \chi_{\parallel c} B\cos^2\theta$ [Fig. \ref{fig:CePdAl_M-H_2}(c)]. The moments on the ordered Ce(1) and Ce(3) sites may hardly contribute to the observed magnetization below $B_{\rm{m}1}$, since they cannot rotate freely due to the Ising-type anisotropy. We would like to point out that the partially disordered Ce(2) sites may contribute to such cos$^2\theta$ dependence through the appearance of paramagnetic moments, induced on these sites by the breaking of the Kondo screening with increasing field. The presence of field-induced paramagnetic moments on Ce(2) sites might be supported by the fact that the magnetization at $\sim B_{\rm{m}1}$ reaches one-third of the full moment $M_{\rm 50 T}$.
Next we report the results of the detailed measurements of the specific heat ($C$) around the metamagnetic transitions. We first show the magnetic-field dependences of specific heat for $B\parallel c$-axis below 1 K in Fig. \ref{fig:HC_CePdAl}(a). At 0.34 K, we observe two kink anomalies at $B_{\rm{m1}}=3.2$ T and $B_{\rm{m2}}=$ 3.4 T as well as a maximum at $B_{\rm{m3}}=$ 4.0 T, corresponding to the three metamagnetic transitions. They become sharper with increasing temperature up to 0.8 K. At $B_{\rm{m3}}$, the magnetization jump becomes largest (Fig. \ref{fig:CePdAl_M-H_1}), accompanying the maximum in $C/T$. At 0.80 K, the two peaks at $B_{\rm{m1}}$ and $B_{\rm{m2}}$ merge into one peak. Moreover, two additional broad peaks appear at $B_0 < B_{\rm{m1}}$ and $B^{\star} > B_{\rm{m3}}$, indicating existence of crossovers below $B_{\rm{m1}}$ and in the paramagnetic region above $B_{\rm{m3}}$.
Figure \ref{fig:HC_CePdAl}(b) shows the temperature dependences of the specific heat at zero and finite magnetic fields up to 7 T for $B\parallel c$-axis. $C/T$ behaves differently below ($B\leq$ 3 T) and above ($B\geq$ 4 T) the metamagnetic transitions. A large peak anomaly in $C(T)/T$ at 0 T is due to the antiferromagnetic order ($T_{\rm{N}}$ = 2.7 K). With increasing field up to 3 T, the peak at $T_{\rm{N}}$ becomes smaller. In addition, a broad anomaly appears around $T_0 (<T_{\rm{N}})$ at 2 and 3 T. On the other hand, in high fields above 4 T, a broad and large Schottky-like anomaly is observed at $T^{\star}$ in the paramagnetic phase above the metamagnetic transition fields. This anomaly shifts to higher temperature and becomes broader with increasing field.
Figure \ref{fig:HC_CePdAl}(c) shows the $C(T)/T$ vs $T$ plots around the three metamagnetic transitions. Sharp ($T_{\rm{m}}$) and broad ($T_{\rm{N}}, T_{\rm A}$) anomalies are very sensitive to the magnetic fields, and show different field dependences. The anomaly of the antiferromagnetic transition at $T_{\rm{N}}$ broadens with increasing magnetic field, and remains up to 3.5 T. At $T_{\rm{m}}(<T_{\rm{N}})$, the sharp anomaly appears in fields between 3.3 and 3.7 T, which seems to be related to the metamagnetic transitions, as discussed later with the obtained phase diagram [Fig. \ref{fig:phase_diagram}(a)]. The anomaly becomes huge between 3.5 T and 3.7 T, while $T_{\rm{N}}(B)$ vanishes around 3.6 T. Moreover, another anomaly appears at $T_{\rm A}(<T_{\rm{m}})$, indicating the existence of a crossover below $T_{\rm{m}}$.
Figure \ref{fig:phase_diagram}(a) shows the $B$-$T$ phase diagram of CePdAl for the magnetization-easy axis ($B\parallel c$) obtained from the present specific-heat measurements. The solid and dashed lines indicate the phase boundary and crossover, respectively. The phase diagram consists of three ordered phases I-III and a paramagnetic phase. The phase boundaries between I-III and PM' are accompanied by the metamagnetic transitions. The antiferromagnetic ordered phase I is divided by a crossover line $T_0(B)$ into two regions, where the higher $T$ region is defined as I'. On the other hand, the paramagnetic phase is separated into PM and PM' regions by the broad Schottky-like anomaly in $C(T)/T$ above 4 T. Moreover, inside the phase III, there is a broad hump anomaly at $T_{\rm A}$ below $T_{\rm{m}}$.
\begin{figure}[t]
\includegraphics[width = 3.5in]{phase_diagram.pdf}
\caption{\label{fig:phase_diagram}
(Color online)
$B$-$T$ phase diagram of CePdAl ($B\parallel c$). The triangles and the circles indicate the critical temperatures and magnetic fields, respectively. (b) $C(T,B)/T$ mapped on the $B$-$T$ phase diagram of CePdAl. The color changes from violet to dark red with increasing $C/T$.}
\end{figure}
It seems that these metamagnetic phase boundaries between I-III and PM' merge into one boundary. We have observed hysteresis behavior at $B_{{\rm m}i}$ ($i=$1,2,3) from dc magnetization measurements, which will be reported elsewhere. Thus the merging point is possibly a tri-critical point, at which the line of second-order phase transitions at $T_{\rm{N}}(B)$ meets the line of first-order phase transitions, \textit{i.e.}, the boundary of phase III. Around the critical point, there exists a strong enhancement of the fluctuation. We show the contour plot of $C(T,B)/T$ of CePdAl [Fig. \ref{fig:phase_diagram}(b)], mapped on the obtained $B$-$T$ phase diagram, in order to visualize the effect of magnetic fluctuation. Owing to the suppression of the antiferromagnetic transition by the magnetic field, the green region of large $C/T$ shifts to low temperature below 4 T. The dark-red region, indicating very large $C/T$, spreads around the meeting point of the phase boundary lines $B_{\rm m2}(T)$ and $T_{\rm N}(B)$. In particular, $C/T$ is strongly enhanced, showing a $\lambda$-type anomaly at 3.5 T and 0.84 K through the phase transition between the III and PM phases. This strong enhancement of $C/T$ also supports the possible presence of a critical point around 3.5 T.
\begin{figure}[tp]
\includegraphics[width = 3.4in]{gamma_mag2.pdf}
\caption{\label{fig:gamma_mag}
(Color online)
Field dependences of
(a) the specific heat ($C$) at 0.34 K (dashed curve between 5 and 7 T is a guide to the eye),
(b) magnetization curve with its derivative up to 7 T, and
(c) entropy $S/R$ln2 mapped on the $T$-$B$ phase diagram.
The dashed lines in Fig. \ref{fig:gamma_mag}(a) and (b) indicate the metamagnetic transition fields.
The solid and dashed lines in Fig. \ref{fig:gamma_mag}(c) indicate the lines of phase transition and crossover, respectively.
}
\end{figure}
Let us discuss the relationship between the obtained magnetic phase diagram and the heavy-fermion state in CePdAl. Figures \ref{fig:gamma_mag}(a) and (b) show the magnetic-field dependence of $C(B)/T$ at the lowest temperature 0.34 K and the magnetization curve at 1.3 K. It is stressed that there exist disordered $f$ electrons even in the antiferromagnetic state, as revealed by the previous NMR studies \cite{Oyamada2008}. In the phase I, \textit{i.e.}, the low-field antiferromagnetic phase, $C/T$ increases towards $B_{\rm m1}$, and it becomes $\sim$0.96 JK$^{-2}$mol$^{-1}$ at 3 T. The large $C/T$ value remains in the field-induced phases II and III. The phase transition from phase III to the paramagnetic phase occurs accompanied by a large maximum of $C/T \sim $1.5 JK$^{-2}$mol$^{-1}$. This enhancement of $C/T$ is probably related to the density of states of the heavy electrons as well as magnetic fluctuations in CePdAl. Very recently, it has been reported that the $A$ coefficient of the resistivity [$\rho(T) = \rho_{0} + AT^2$] of CePdAl reaches a large maximum value of $A = $ 12 $\mu \Omega$cmK$^{-2}$ just above $B_{\rm{m3}}$\cite{Zhao2016}. These large values of $C/T$ [Fig. \ref{fig:gamma_mag}(a)] and of the $A$ coefficient, according to the Kadowaki-Woods relation\cite{Kadowaki1986}, indicate that $f$ electrons in CePdAl form a strongly correlated heavy-fermion state, as in the heavy-fermion superconductor CeCu$_{2}$Si$_{2}$ \cite{Steglich1979}.
The essential point for understanding the low-$T$ physics of CePdAl is to explain how an $f$ electron releases the entropy of $R$ln2 for the Kramers doublet as the localized CEF ground state. Figure \ref{fig:gamma_mag}(c) shows the entropy of CePdAl obtained from the present specific-heat measurements. The contribution of the lattice is subtracted by using the specific heat of YPdAl\cite{Kitazawa1994}. At zero magnetic field, the entropy above $T_{\mathrm{N} }$ is $\sim$0.5$R$ln2, implying that the entropy of the $f$ electron is already reduced at low temperatures due to the Kondo effect. The entropy of the $f$ electron is further released by the incommensurate magnetic ordering, but the entropy release is not sufficient probably due to the geometrical frustration. Then, partial Kondo screening of the Ce(2) site occurs at lower temperatures below $T_{\mathrm{N}}$. The crossover line of $T_{0}(B)$, represented by a dashed line, might be a signature of the partial Kondo screening, since the remaining entropy of $\sim0.3R$ln2 seems to be released at $T_{0}$. This crossover might also be related to the fixing of the incommensurate propagation vector component $\tau$\cite{Prokes2006}. In magnetic fields, the partial Kondo screening begins to break down at fields of several tesla, consistent with the Kondo temperature of several kelvin estimated from the entropy value at 0 T, \textit{i.e.}, $T_{\rm K}\sim 2T(S=0.5R\ln 2)\sim 6$ K. Interestingly, the entropy as a function of $B$ in the paramagnetic phase shows a maximum around 3-4 T (red region) [Fig. \ref{fig:gamma_mag}(c)]. This behavior suggests that the breaking of the Kondo effect in the magnetic field induces the geometrical frustration again. Thus, to lift the frustration, magnetic phases II and III appear as the field-induced ground states between 3 and 4 T.
In the field-induced paramagnetic phase above $B_{\rm{m3}}$, the large value of $C(B)/T$ dramatically decreases with increasing field, suggesting a reduction of the heavy effective mass for $B> B_{\rm{m3}}$. Simultaneously, the magnetization $M(B)$ does not reach the full moment immediately, with a hump structure in its derivative d$M(B)$/d$B$ in a field range of $4.0 < B < 4.2$ T [Fig. \ref{fig:gamma_mag}(b)]. Such behaviors are possibly related to the effects of geometrical frustration as well as the persistence of the Kondo effect. On the other hand, in the high-field region above $\sim$20 T, the slope of the magnetization curve is roughly described by the CEF model (Fig. \ref{fig:CePdAl_M-H_1}). Moreover, above 4 T, a broad Schottky-like anomaly appears in $C/T$, probably due to the Zeeman splitting of the CEF ground state. These behaviors suggest that the heavy quasiparticles dressed by the Kondo cloud in the low-field region gradually recover well-localized behavior with increasing field.
It is also interesting to discuss the physical properties around the phase boundary $B_{\rm{m3}}(T)$ as $T \rightarrow$ 0 K. As seen in Fig. \ref{fig:phase_diagram}(b), the $C(T,B)/T$ is not enhanced towards zero temperature around $B_{\rm{m3}}$. It is thus considered that the line of $B_{\rm{m3}}(T)$ remains first-order, unlike the presence of a quantum critical point, where a second-order phase transition line vanishes as $T \rightarrow$ 0 K. The situation of the quantum phase transition at $B_{\rm{m3}}$ in CePdAl is quite different from that of another heavy-fermion antiferromagnet YbRh$_{2}$Si$_{2}$ which shows the field-induced quantum criticality\cite{Tokiwa2009}. Here, it has been reported that non-Fermi-liquid behaviors near a quantum critical point for a two-dimensional antiferromagnetic system are induced by pressure \cite{Akamaru2002,Goto2002} and Ni substitution, \textit{i.e.}, CePd$_{1-x}$Ni$_{x}$Al\cite{Fritsch2014, Isikawa2000}. Our results suggest that magnetic field is not a tuning parameter which induces the quantum critical point in this system. In pure CePdAl, non-Fermi-liquid behavior has not been observed around $B_{\rm{m3}} (T)$ from low-$T$ resistivity measurements\cite{Zhao2016}.
We finally discuss a possibility of a spin-liquid phase around $B_{\rm{m3}}$, as reported by recent resistivity studies\cite{Zhao2016}. From our specific-heat measurements, a broad anomaly is observed at $T_{\rm A}$ just below the field-induced transition temperature $T_{\rm{m}}(B)$ [Fig. \ref{fig:HC_CePdAl} (c)]. This anomaly ($T_{\rm A}$) inside the magnetic ordered phase III may be distinguished from the behavior of the spin-liquid phase, which is \textit{not} characterized by any ordering. Thus we have not yet obtained any thermodynamic evidence for a spin-liquid phase from specific-heat data. Nevertheless, the hump structure in the derivative magnetization d$M(B)$/d$B$ above $B_{\rm{m3}}$ might be related to the reported resistivity anomaly. Further studies from other probes are intriguing to gain more insight into the novel physical properties around the field-induced phases in CePdAl.
\section*{\label{sec:level5}Conclusions}
To conclude, thermodynamic properties of CePdAl have been studied by means of dc pulsed-field magnetization and low-$T$ specific-heat measurements using a single crystalline sample. In the low-field antiferromagnetic phase I, the crossover anomaly at $T_{0}(B)$, as observed in $C(T)/T$, may indicate the occurrence of partial Kondo screening below $T_{\mathrm{N}}$, releasing the residual entropy of $0.3R$ln2. With increasing magnetic field, this screening is gradually broken, and as a result, a paramagnetic moment of one-third of the full moment $M_{\rm 50 T}$ appears around $B_{\rm m1}$. To lift the frustration caused by the appearance of the magnetic moment on the partially disordered sites in the high-field region, the magnetic phases II and III appear as the ground state. The magnetization measurements demonstrate that these high-field states II and III are strongly Ising-type ordered phases. Moreover, the large enhancement of $C/T$ at 3.5-3.7 T may imply that strong magnetic fluctuations exist around the tri-critical point. With increasing field above $B_{\rm m3}$, the heavy-fermion state, together with the partial disorder, gradually breaks down above 4 T, and $f$ electrons recover the localized behavior, which can be described by the CEF model. These thermodynamic results, along with the obtained unusual magnetic phase diagram, deepen the understanding of the low-$T$ ground state of this geometrically frustrated Ising-type quasi-Kagome Kondo lattice.
\section*{\label{sec:level6}Acknowledgements}
We would like to thank M. Imada, Y. Yamaji, and S. Kambe for valuable discussions and comments. K. M. was supported by Japan Society for the Promotion of Science through Program for Leading Graduate Schools (MERIT).
\label{intro}
Identifying and studying the galaxies at high redshift that will
evolve into today's normal and massive galaxies remains a major goal
of observational astrophysics. Galaxies discovered in deep
sub-millimetre and mm-wavelength surveys
\citep[e.g.][]{Smail1997, Hughes1998, Barger1998, Blain1999A,
Barger1999A, Eales2000, Cowie2002, sescott2002, Webb2003, Borys2003,
Greve2004, Laurent2005} are generally thought to be dominated by
dusty, possibly merger-induced starburst systems and active galactic
nuclei (AGN) at redshifts $z>2$ with star formation rates as high as
SFR $\sim 1000 ~ \mathrm{M}_{\sun}
\mathrm{yr}^{-1}$~\citep{Blain2002}. The high areal number density of
these sub-mm and mm-detected galaxies (SMGs), combined with their
implied high star formation rates and measured FIR luminosities
\citep[$L_{\mathrm{FIR}}\sim10^{12}
\mathrm{L}_{\sun}$,][]{Kovacs2006,Coppin2008}, makes their estimated contribution
to both the global star formation density and the sub-mm background radiation as
high as 50\% at $z\sim2$ \citep[e.g.,][]{Borys2003,Wall2008}. Their observed
number counts imply strong evolution between $z=2$ and today
\citep[e.g.][]{sescott2002,Greve2004,Coppin2006}. The high star formation rates
at early epochs of SMGs generally match the expectation for rapidly forming
elliptical galaxies, a view supported by the high rate of mergers seen locally
in samples of ultra-luminous infrared galaxies \citep[ULIRGs;][]{Borne2000}, which are
plausible local counterparts of distant SMGs. Together, these characteristics
have led many observers to surmise that SMGs are likely to evolve into the
massive galaxies observed locally \citep[e.g.,][]{Dunlop1994,Smail1997,Bertoldi2007}
and may hold important clues to the processes
of galaxy and structure formation in general at high redshift.
GOODS-N is one of the most intensively studied extragalactic fields,
with deep multi-wavelength photometric coverage from numerous
ground-based and space-based facilities. These include Chandra in the
X-ray \citep{Alexander2003A}, HST in the optical and NIR
\citep{Giavalisco2004}, Spitzer in the NIR--MIR (Chary et al. in prep.,
Dickinson et al. in prep.), and the Very Large Array in the radio
\citep[][Morrison et al. in prep.]{Richards2000}, as well as highly complete
spectroscopic surveys from ground-based observatories
\citep[e.g.][]{Wirth2004,Cowie2004}. This field is therefore ideally
suited for deep mm-wavelength studies of SMGs: the extensive coverage
in GOODS-N allows the identification of SMG counterparts in X-ray, UV,
optical, IR, and radio bands, as well as constraints on photometric
redshifts and investigation of SMG power sources and evolution.
Deep mm surveys of blank fields are needed in order to constrain the
faint end of the SMG number counts, while large areal coverage is
required to constrain the bright end. Together they provide strong
constraints on evolutionary scenarios. Previous sub-mm surveys of
GOODS-N have been carried out with SCUBA on the JCMT
\citep{Hughes1998,Barger2000,Borys2003,Wang2004,Pope2005}. The
`Super-map' of the GOODS-N field, which was assembled from all available JCMT
shifts covering the field, contains 40 robust sources at 850\,$\mu$m\
down to an average sensitivity of 3.4\,mJy ($1 \sigma$) and covers
200\,arcmin$^2$
\citep{Borys2003,Pope2005}. However, the r.m.s. is highly non-uniform, ranging from
0.4 mJy to 6 mJy (see Fig.~\ref{fig_goodsn_cover}). That
non-uniformity presents serious complications for comparisons with
multi-wavelength data.
\begin{figure}
\centering
\includegraphics[width=\hsize]{cov_regions.eps}
\caption{AzTEC and SCUBA coverage contours for the GOODS-N region
demonstrate our uniform coverage. The dark rectangular contour
corresponds to the AzTEC region with a map r.m.s. $\le$1.16\,mJy at
1.1\,mm, the coverage region presented here. The grey contours,
according to increasing line thickness, are the 850\,$\mu$m\ SCUBA
contours for r.m.s. values of 4\,mJy, 2.5\,mJy, and 0.5\,mJy
respectively. The underlying map is the IRAC 3.6\,$\mu$m\ image from the
Spitzer legacy program (Dickinson et al. in prep). The AzTEC map
represents a significant improvement in the uniformity of coverage at
faint flux levels.}
\label{fig_goodsn_cover}
\end{figure}
In this paper we present a new 1.1 mm survey of the GOODS-N field made
with AzTEC \citep{Wilson08} at the 15-m James Clerk Maxwell Telescope
(JCMT) on Mauna Kea, Hawaii. This map is the deepest blank-field
survey undertaken during the AzTEC/JCMT observing campaign, and is one
of the largest, deepest, and most uniform mm-wavelength maps of any
region of the sky. Our map covers $245$\,arcmin$^2$ and completely
encompasses the $16.5^{\prime}\times10^{\prime}$ {\em Spitzer} GOODS-N
field and all of the previous GOODS-N sub-mm and mm-wavelength fields,
including the original HDF map of
\citet{Hughes1998} and the SCUBA GOODS-N `Super-map'
\citep[indicated in Fig.~\ref{fig_goodsn_cover} here and presented
in][]{Borys2003,Pope2005}. The large number and high stability of the
AzTEC bolometers has enabled us to produce a map with small variations
in r.m.s., from 0.96--1.16\,mJy, across the 245 min$^2$ field. This
uniformity is a drastic improvement over the SCUBA GOODS-N `Super-map.'
The sensitivity variations of the AzTEC and SCUBA maps are
compared in Fig.~\ref{fig_goodsn_cover}.
In this work, we extract a catalogue of mm sources from the map and
calculate number counts towards the faint end of the 1.1-mm galaxy
population. The main results we discuss here were obtained from the
AzTEC data alone; data from other surveys have been used only as tools
to check the quality of our map. A second paper will address
counterpart identification of our AzTEC sources at other wavelengths
(Chapin et al. in prep.). We present the JCMT/AzTEC observations of
GOODS-N in
\S~\ref{obs}, data reduction and analysis leading to source
identification in \S~\ref{ana}, properties of our source catalogue in
\S~\ref{sources}, the number counts analysis in
\S~\ref{nc}, the discussion of results in \S~\ref{results}, and the
conclusion in \S~\ref{conclusion}.
\section{AZTEC OBSERVATIONS OF GOODS-N}
\label{obs}
AzTEC is a 144-element focal-plane bolometer array designed for use at
the 50-m Large Millimetre Telescope (LMT) currently nearing completion
on Cerro La Negra, Mexico. Prior to permanent installation at the
LMT, AzTEC was used on the JCMT between Nov. 2005 and Feb. 2006,
primarily for deep, large-area blank field SMG surveys
\citep[e.g.][Austermann et al. in prep.]{Scott08}. We imaged the
GOODS-N field at $1.1\,$mm with the AzTEC camera during this
2005--2006 JCMT observing campaign. Details of the AzTEC optical
design, detector array, and instrument performance can be found in
\citet{Wilson08}. Each detector has a roughly Gaussian-shaped beam on
the sky with an 18-arcsec full-width at half-maximum (FWHM). Given
the beam separation of
22\,arcsec, the hexagonal close-packed array
subtends a ``footprint'' of 5\,arcmin on the sky. Out of the full
array complement of 144 bolometer-channels, 107 were operational during this
run.
We mapped a 21\,arcmin $\times$ 15\,arcmin area centred on the
GOODS-N field (12$^{\rm h}$37$^{\rm m}$00$^{\rm s}$,
+62$^\circ$13\arcmin00\arcsec) in unchopped raster-scan mode, where
the primary mirror scans the sky at constant velocity, takes a small
orthogonal step, then scans with the same speed in the opposite
direction, repeating until the entire area has been covered. We used a
step size of 9\,arcsec in order to uniformly Nyquist-sample the sky.
We scanned at speeds in the range
60\,arcsec\,s$^{-1}$--180\,arcsec\,s$^{-1}$ as allowed by the fast
time constants of our micro-mesh bolometers, with no adverse
vibrational systematics. In total, we obtained 50 usable individual
raster-scan observations, each taking 40~minutes (excluding
calibration and pointing overheads). The zenith opacity at 225~GHz is
monitored with the CSO tau meter, and ranged from 0.05--0.27 during
the GOODS-N observations. This corresponds to 1.1\,mm transmissions
in the range 70--94\%. A detailed description and justification of
the scan strategy we used can be found in
\citet{Wilson08}.
\section{Data Reduction: from time-streams to source catalogue}
\label{ana}
In this section, we summarise the processing of the AzTEC/GOODS-N
data, which is specifically geared towards finding mm {\em point}
sources. The data reduction procedure generally follows the method
outlined in \citet{Scott08}, although we emphasise several new pieces
of analysis that were facilitated by the improved depth of this map
over the COSMOS survey. We begin with the cleaning and calibration of
the time-stream data in \S~\ref{ana_cleaning}, which includes a new
investigation into the sample length over which to clean the data. In
\S~\ref{ana_map_filter}, we describe the map-making process and the
optimal filtering for point sources. We assess the properties and
quality of the AzTEC/GOODS-N map in \S~\ref{ana_map_qual}. The depth
of this survey has enabled us to ascertain the degree to which our
data follow Gaussian statistics and to detect directly a departure from
Gaussianity at long integration times, indicating a component of
signal variance due to source confusion. The astrometry of the map is
analysed in \S~\ref{ana_stacking}, and we describe the extraction of
sources from the optimally filtered map in
\S~\ref{ana_sources}.
\subsection{Filtering, cleaning, and calibration of time-stream data}
\label{ana_cleaning}
The AzTEC data for each raster-scan observation consists of pointing,
housekeeping (internal thermometry, etc.), and bolometer time-stream
signals. Because the bolometer data are sampled at 64~Hz, all other
signals are interpolated to that frequency as needed by the analysis.
The raw time-streams of the 107 working bolometers are first despiked
and low-pass filtered at 16\,Hz, as described in \citet{Scott08}. The
despiked and filtered time-streams are next ``cleaned'' using a
principal component analysis (PCA) approach, which primarily removes
the strong atmospheric signal from the data. This ``PCA-cleaning''
method was developed by the Bolocam group
\citep{Laurent2005} and later adapted for AzTEC, as described in
\citet{Scott08}. As explained there, we also generate PCA-cleaned time streams
corresponding to a simulated point source near the field centre, in
order to produce the {\em point-source kernel}, which is used later
for beam-smoothing our maps (see \S~\ref{ana_map_filter}).
In this work we go beyond the analysis in \citet{Scott08} to verify
that we have made good choices with regard to several aspects of the
general cleaning procedure that has been adopted for all of the
existing AzTEC data. We examine two outstanding questions in
particular: 1) does PCA-cleaning work better than a simple
common-mode subtraction based only on the average signal measured by
all detectors as a function of time? and 2) over what time scale
should each eigenvector projection be calculated in order to give the
best results?
The first question addresses whether simple physical models may be
used in place of PCA-cleaning, where the choice of which modes to
remove from the data is not physically motivated. We investigate this
by creating a simple sky-signal template as the average of all of the
detectors at each time sample. We then fit for an amplitude
coefficient of the template to each detector by minimising the r.m.s.
between the scaled template and the actual data. This scaled template
is removed from the bolometer data and we examine the residual signal,
which ideally consists only of astronomical signal and white noise.
We find that this residual signal contains many smaller
detector-detector correlations that are clearly visible in the data
and are dominant compared to the signal produced by astronomical
sources in the map. The residual time-stream r.m.s.\ from the simple
sky-template subtraction is usually about twice the r.m.s.\ resulting
from PCA cleaning. This test shows that the simple common-mode
removal technique is insufficient.
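For concreteness, the template fit described above amounts to a least-squares projection, as in the following minimal sketch (illustrative only; the array shapes and function names are ours and do not correspond to the actual AzTEC pipeline code):
\begin{verbatim}
import numpy as np

def subtract_common_mode(data):
    # data: (n_bolometers, n_samples) array of despiked time-streams.
    # Sky template: the array-averaged signal at each time sample.
    template = data.mean(axis=0)
    norm = np.dot(template, template)
    # Least-squares amplitude of the template in each detector,
    # i.e. the coefficient minimising the residual r.m.s.
    coeffs = data @ template / norm
    # Remove the scaled template from every bolometer.
    return data - np.outer(coeffs, template)
\end{verbatim}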
In the ``standard'' PCA-cleaning procedure for AzTEC data, outlined in
\citet{Scott08}, the eigenvector
decomposition is performed on each scan ($\sim5$--15~s of data). We
now study which time scales give the best results using a statistical
correlation analysis. We generate a bolometer-bolometer Pearson
correlation matrix using sample lengths that range from a fraction of
a second to tens of minutes (the length of a complete observation).
On the shortest time scales, the correlation coefficients have large
uncertainties due to sample variance (too few samples from which to
make estimates). On time scales corresponding to a single raster-scan
($\sim$5--15~sec), however, the sample variance decreases and a clear
pattern emerges: the strength of the correlations drops off uniformly
with physical separation between the detectors. The most obvious
trend is the gradient in correlations that we see with detector
elevation, which is presumed to be produced by the underlying gradient
in sky emission. As the sample length increases, a different pattern
emerges, in which the dominant correlation appears to be related to
the order in which the detectors are sampled by the read-out
electronics, rather than their physical separation. These
correlations, likely due to electronics-related $1/f$ drifts, are
effectively removed when using scan-sized sample lengths (5--15\,s) as
well, since they appear as DC baseline differences on these short
time-scales. These results verify that scan-sized sample lengths
produce the best results as they provide a sufficient number of
samples on short enough time scales.
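The correlation analysis above can be sketched as follows; this is an
illustrative computation only, with the chunk length given in samples
of the 64-Hz data (so a 5--15\,s scan corresponds to roughly 320--960
samples):
\begin{verbatim}
import numpy as np

def chunked_correlation(scan, chunk_len):
    # Bolometer-bolometer Pearson correlation matrix averaged over
    # consecutive chunks of chunk_len samples.  Plotting this matrix
    # against detector separation, elevation, or read-out order shows
    # which pattern dominates at each time scale.
    n_bolo, n_samp = scan.shape
    n_chunks = n_samp // chunk_len
    acc = np.zeros((n_bolo, n_bolo))
    for k in range(n_chunks):
        chunk = scan[:, k * chunk_len:(k + 1) * chunk_len]
        acc += np.corrcoef(chunk)
    return acc / n_chunks
\end{verbatim}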
After PCA-cleaning the bolometer signals, we apply a calibration
factor to convert the bolometers' voltage time-streams into units of
Jy per beam. Details of this procedure are given in
\citet{Wilson08}. The total error on the calibrated signals (including
the error on the absolute flux of Uranus) is 11\%.
\subsection{Map-making and optimal filtering}
\label{ana_map_filter}
The map-making process used to generate the final optimally filtered
AzTEC/GOODS-N map is identical to that used in \citet{Scott08}, and
the reader is directed to that paper for the details of this process,
which we briefly summarise below.
We first generate maps for each of the 50 individual raster-scan
observations separately by binning the time-stream data onto a
3\arcsec~$\times$~3\arcsec\ grid in RA-Dec which is tangent to the
celestial sphere at (12$^\mathrm{h}$37$^\mathrm{m}$00$^\mathrm{s}$,
+62$^\circ$13\arcmin00\arcsec). We chose the same tangent point and
pixel size as that used for the SCUBA map of GOODS-N
\citep[see for example][]{Pope2006} so that the two maps can easily be
compared in a future paper. We find that this pixel size provides a
good compromise between reducing computation time and sampling the
18-arcsec FWHM beams with high resolution. Individual signal maps and
their corresponding weight maps for each observation are created as
described in
\citet{Scott08}, along with kernel maps that reflect how a faint point
source is affected by PCA-cleaning and other steps in the
analysis. Next, we form a single ``co-added'' signal map from the
weighted average over all 50 individual observations. An averaged
kernel map is also created in a similar way. The total weight map is
calculated by summing the weights from individual observations, pixel
by pixel. As described by \citet{Scott08} we also generate 100 noise
realization maps corresponding to the co-added map.
We then use a spatial filter to beam-smooth our map using the
point-source kernel, by optimally weighting each spatial-frequency
component of this convolution according to the spatial power spectral
density (PSD) of noise-realization maps. Details of this optimal filter can
also be found in \citet{Scott08}.
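In essence this is a matched filter; a minimal sketch (assuming the
kernel is centred in its map and using a simple guard against
unmeasured modes, rather than the exact implementation of
\citet{Scott08}) is:
\begin{verbatim}
import numpy as np

def optimal_filter(signal_map, kernel_map, noise_maps):
    # Down-weight each spatial frequency by the noise power measured
    # from the noise-realisation maps, then convolve with the kernel.
    K = np.fft.fft2(np.fft.ifftshift(kernel_map))  # peak at pixel (0, 0)
    psd = np.mean([np.abs(np.fft.fft2(n))**2 for n in noise_maps], axis=0)
    psd[psd == 0] = np.inf                         # ignore unmeasured modes

    M = np.fft.fft2(signal_map)
    filtered = np.fft.ifft2(np.conj(K) * M / psd).real

    # Normalise so that a unit-flux point source keeps unit flux.
    norm = np.sum(np.abs(K)**2 / psd) / signal_map.size
    return filtered / norm
\end{verbatim}
The inverse-PSD weighting suppresses the spatial frequencies where the
noise realisations carry most power, which is what makes the filter
optimal for point-source detection.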
\subsection{Map quality: depth, uniformity, point-source response, and noise integration}
\label{ana_map_qual}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{source_map.eps}
\caption{AzTEC/GOODS-N signal map with the 36 S/N$\geq$3.5 source
candidates circled. Information about these source candidates is
given in Table~\ref{table_sources}. Here and in that table, source
candidates are numbered in decreasing order of S/N. The source
candidates marked with dashed-line circles do not belong to the
robust sub-list, indicated by a horizontal line in
Table~\ref{table_sources}. The map has been trimmed to show only the
70\% coverage region (245\,arcmin$^2$).}
\label{fig_source_map}
\end{figure*}
The final co-added, optimally filtered signal map for the GOODS-N
field is shown in Fig.~\ref{fig_source_map}. Of the
315-arcmin$^2$ solid angle scanned by the telescope boresight
during our survey, we expect $\sim$250\,arcmin$^2$ to be imaged
uniformly by the complete AzTEC array. We identify this region by
imposing a coverage cut. We find that weights within 70\% of the
central value occur in a contiguous region of 245\,arcmin$^2$. The
map of Fig.~\ref{fig_source_map} has been trimmed to only show this
region. Much of the analysis presented here is limited to this
region, which we will henceforth refer to as the ``70\% coverage
region.'' The 1-$\sigma$ flux-density error estimates in the trimmed
map range from 0.96~mJy\,beam$^{-1}$ in the centre to
1.16~mJy\,beam$^{-1}$ at the edges.
\begin{figure}
\centering
\includegraphics[width=2.3in,angle=90]{kernel_profile.eps}
\caption{Cross section of the point-source kernel. The Gaussian that best fits
the inner $R = 10$\,arcsec region is shown in the lighter shade and
has a FWHM of 19.5\,arcsec. The negative ring around the centre and
other peripheral features (not visible here) are induced by
PCA-cleaning as well as the optimal filter.}
\label{fig_kernel}
\end{figure}
We also run the co-added kernel map through the same filtering process
as the signal map. The resulting filtered kernel map, whose profile
is shown in Fig.~\ref{fig_kernel}, is our best approximation of the
shape of a point source in the co-added, filtered signal map. As
demonstrated in
\S~\ref{ana_stacking}, our pointing jitter/uncertainty has a
sub-2-arcsec characteristic scale; this will have little impact on the
kernel shape and therefore is not included in generating the kernel
map. The negative troughs around the central peak are due to a
combination of array common-mode removal in the PCA-cleaning and
de-weighting of longer spatial wavelength modes by the optimal filter.
The point-source kernel also has radial scan-oriented features, or
``spokes,'' due to PCA cleaning that are $<$0.1\% of the kernel
amplitude. The directions of these spokes would vary across the map
as the scan angle changes with RA-Dec. Therefore, the kernel map
accurately reflects these spokes only for point sources near the
centre of the field. However, because it is difficult to analytically
model a point source (through PCA cleaning and optimal filtering) and
because the radial features are very faint, we use the kernel map as a
point-source template for injecting sources in the simulations
described later.
Because this GOODS-N survey is the deepest blank-field survey
conducted thus far with AzTEC on the JCMT, we demonstrate in
Fig.~\ref{fig_quietness_plot} how the map noise averages down with the
successive co-addition of individual observations. The central
200\arcsec$\times$200\arcsec\ region of the signal map and the noise
realisation maps are used for this calculation. The x-axis represents
the average weight of a 3-arcsec pixel in this region prior to
filtering. A scale factor converts this raw weight to an effective
time, $T^{*}$, so that the final effective time equals the final
integration time devoted to an {\em average} 3-arcsec pixel in this
central patch. Thus, the increment in $T^{*}$ gained with the
addition of an individual observation is the effective integration
time contributed by that particular observation to the central region.
The $i$th y-axis value is calculated by co-adding (averaging)
individual signal maps from observations 1 through $i$, then applying
the optimal filter, and finally taking the standard deviation of this
co-added, filtered map in the central region. The crosses represent
the signal map. The 100 curves shown in a lighter shade are
calculated by carrying out the same process on 100 noise realisations.
In the absence of systematics or astronomical signal, we expect all
curves to scale as $1/\sqrt{T^{*}}$, in accordance with Gaussian
statistics, as indicated by the dashed line. At higher $T^{*}$, we
may expect a slight steepening in all curves because later
co-additions better reflect our assumptions of circular symmetry (in
the optimal filtering process) as we add more scan directions to the
mix. However, this effect appears to be unmeasurably small in our
data.
While the noise realisations follow the $1/\sqrt{T^{*}}$ trend, the
signal map initially follows it but flattens near the point where
${\sim}20$--30~s of effective time is spent on a 3-arcsec pixel.
Switching the order in which signal maps are co-added does not alter
this trend or the noisy behaviour of these points at large $T^{*}$.
Therefore, we conclude that: 1) single individual observations yield
maps that are consistent with our noise realisations; 2) map features
that do not survive scan-by-scan ``jack-knifing,'' presumably
astronomical signal due to source confusion, prevent the signal map's
r.m.s. from improving as $1/\sqrt{T^{*}}$; and 3) the fact that noise
realisations continue to follow this trend indicates that we are far
from a systematics floor due to atmospheric or instrumental effects,
even at the highest $T^{*}$.
\begin{figure}
\centering
\subfigure{
\hspace{.17in}
\includegraphics[width=2.4in,angle=90]{quietness.eps}}
\caption{Behaviour of the signal map's r.m.s. (crosses), as well as the
r.m.s. of 100 separate noise realisations (collection of curves), as a
function of the mean effective integration time $T^{*}$ spent on each
3-arcsec central pixel of the map. The dashed curve shows the $1/\sqrt{T^*}$
relationship expected in the absence of systematics and astronomical
signal. This demonstrates how the map noise averages down with the
successive addition of more observations. The ``flattening'' of the
central r.m.s. at
large $T^*$ in the signal map, compared to the noise maps, is due to
astronomical signal. The fluctuations of this curve at large $T^*$ are
simply due to noise in the r.m.s. itself, as re-ordering observations
gives similar features near the same region.}
\label{fig_quietness_plot}
\end{figure}
\subsection{Astrometry calibration}
\label{ana_stacking}
The pipeline used to produce this map of GOODS-N interpolates pointing
offsets inferred from regular observations of pointing calibrators
interspersed with science targets \citep{Wilson08,Scott08}. In order
to verify the quality of this pointing model for GOODS-N, both in an
absolute sense, and in terms of small variations between passes, we
compare the AzTEC map with the extremely deep 1.4\,GHz VLA data in
this field
\citep[][Morrison et al. in prep.]{Richards2000}.
The radio data reduction and source list used here is the same as that of
\citet{Pope2006}, with a 1-$\sigma$ noise of $\sim$5.3\,$\mu$Jy at the
phase centre. The catalogue is constructed with a 4-$\sigma$ cut, and
has positional uncertainties $\sim0.2^{\prime\prime}$
(Morrison et al. in prep.).
We stack the signal in the AzTEC map at the positions of radio sources
to check for gross astrometric shifts in the AzTEC pointing model, as
well as any broadening in the stacked signal which may indicate
significant random offsets in the pointing between visits. A more
detailed comparison between the mm and 1.4\,GHz maps is presented in
Chapin et al. (in prep.) to assist with the MIR/NIR identifications of
individual AzTEC SMGs, and the production of radio--NIR SEDs.
The stack was made from the 453 1.4\,GHz source positions that are
within the uniform noise region of the AzTEC map. As in
\citet{Scott08} we check for an astrometric shift and broadening by
fitting a simple model to the stacked image, which consists of an
astrometric shift ($\delta$RA, $\delta$Dec) to the ideal point source
kernel, convolved with a symmetric Gaussian with standard deviation
$\sigma_{\rm p}$. This Gaussian represents our model for the random
pointing error in the AzTEC map. We determine maximum likelihood
estimates $\delta$RA $= 0.2^{\prime\prime}$, $\delta$Dec $=
-0.9^{\prime\prime}$, and $\sigma_{\rm p}=0.6^{\prime\prime}$. The
expected positional uncertainty (in each coordinate) for a point
source with a purely Gaussian beam is approximately
$0.6\times$FWHM/(S/N) \citep[see the Appendix in][]{Ivison2007} where
the FWHM is 18\,arcsec in our case. The S/N of our stack is
approximately 10, so the expected positional uncertainty is
${\sim}\,1^{\prime\prime}$. Therefore the total astrometric shift
measured by the fitting process, $0.9^{\prime\prime}$, is consistent
with the hypothesis that there is {\em no} significant underlying
shift. We also note that the $\chi^2$ function for this fit is
extremely shallow along the $\sigma_{\rm p}$ axis, so although the
minimum occurs at $0.6^{\prime\prime}$, it is not significantly more
likely than $0^{\prime\prime}$. We therefore conclude from this
analysis that there is no significant offset, nor beam broadening
caused by errors in the pointing model.
\subsection{Source finding}
\label{ana_sources}
\begin{figure}
\centering
\includegraphics[width=2.4in,angle=90]{pix_histos.eps}
\caption{Pixel flux histogram of the final signal map in a dark shade and
the average pixel flux histogram made from 100 noise realizations in a
lighter shade. The positive tail and smaller negative excess in the
signal map are due to the presence of point sources.}
\label{fig_pix_histograms}
\end{figure}
To investigate the presence of astronomical sources in our map, we
plot in Fig.~\ref{fig_pix_histograms} a histogram of pixel fluxes in
the 70\% coverage region of the field. Also shown in a lighter shade
is the average pixel histogram made from the 100 noise-realization
maps. The noise histogram can be modelled well by a Gaussian centred
on $0\,$mJy with a standard deviation of 1.0\,mJy. The obvious excess
of large positive pixel values and the small excess of negative values
in the signal map are caused by the presence of sources.
To identify individual point sources, we first form a S/N map by
multiplying the final (i.e.\ co-added and filtered) signal map by the
square-root of the weight map. We then identify local maxima in this
S/N map with S/N $\geq 3.5$. There are 36 local maxima that meet this
condition in the 70\% coverage region of the field. Our analysis of
these source candidates is simplified because no pair of them are
close enough to significantly alter each other's recovered flux
densities ($>$36\,arcsec apart in each case). We have evidence that
AzGN01 is a blend of two sources. However, since this knowledge is
not based on AzTEC data alone, we defer a detailed discussion of that
source for the second paper of this series (Chapin et al. in prep.).
The final signal map and these source candidates
are shown in Fig.~\ref{fig_source_map}. Table~\ref{table_sources}
lists details of all the AzTEC/GOODS-N $\geq3.5$-$\sigma$ source
candidates, including their locations, measured fluxes, S/N, and
additional quantities which are defined below. The source positions
are given to sub-pixel resolution by calculating a centroid for each
local maximum based on nearby pixel fluxes. Sources with clear
counterparts in the SCUBA map of GOODS-N
\citep{Borys2003,Pope2005} are highlighted in
Table~\ref{table_sources}.
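The peak-finding step itself is straightforward; a minimal sketch
(omitting the sub-pixel centroiding) is:
\begin{verbatim}
import numpy as np
from scipy.ndimage import maximum_filter

def find_sources(signal_map, weight_map, snr_cut=3.5, box=5):
    # Form the S/N map and keep pixels that are both local maxima of
    # their (box x box) neighbourhood and above the threshold.
    snr = signal_map * np.sqrt(weight_map)
    peaks = (snr == maximum_filter(snr, size=box)) & (snr >= snr_cut)
    rows, cols = np.nonzero(peaks)
    order = np.argsort(snr[rows, cols])[::-1]      # decreasing S/N
    return [(r, c, signal_map[r, c], snr[r, c])
            for r, c in zip(rows[order], cols[order])]
\end{verbatim}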
\begin{table*}
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\hline
Source ID & RA (J2000) & Dec (J2000) & 1.1\,mm flux [mJy] & source S/N & de-boosted
flux [mJy] & non-positive PFD integral \\
\hline
\hline
AzGN01$^S$ & 12:37:12.04 & 62:22:11.5 & 11.45$\pm$0.99 & 11.58 & 10.69$^{+0.94}_{-1.12}$ & 0.000 \\
AzGN02 & 12:36:32.98 & 62:17:09.4 & 6.84$\pm$0.97 & 7.03 & 5.91$^{+1.02}_{-1.00}$ & 0.000 \\
AzGN03$^S$ & 12:36:33.34 & 62:14:08.9 & 6.23$\pm$0.97 & 6.43 & 5.35$^{+0.94}_{-1.08}$ & 0.000 \\
AzGN04 & 12:35:50.23 & 62:10:44.4 & 5.76$\pm$1.01 & 5.71 & 4.69$^{+1.06}_{-1.06}$ & 0.000 \\
AzGN05 & 12:37:30.53 & 62:12:56.7 & 5.21$\pm$0.97 & 5.38 & 4.13$^{+1.08}_{-0.98}$ & 0.000 \\
AzGN06 & 12:36:27.05 & 62:06:06.0 & 5.28$\pm$1.00 & 5.29 & 4.13$^{+1.12}_{-1.00}$ & 0.000 \\
AzGN07$^S$ & 12:37:11.94 & 62:13:30.1 & 5.04$\pm$0.97 & 5.21 & 3.95$^{+1.08}_{-0.98}$ & 0.000 \\
AzGN08$^S$ & 12:36:45.85 & 62:14:41.9 & 4.94$\pm$0.97 & 5.09 & 3.83$^{+1.08}_{-1.00}$ & 0.000 \\
AzGN09$^S$ & 12:37:38.23 & 62:17:35.6 & 4.50$\pm$0.97 & 4.63 & 3.39$^{+1.02}_{-1.10}$ & 0.003 \\
AzGN10 & 12:36:27.03 & 62:12:18.0 & 4.46$\pm$0.97 & 4.60 & 3.35$^{+1.02}_{-1.10}$ & 0.003 \\
AzGN11 & 12:36:35.62 & 62:07:06.2 & 4.44$\pm$0.98 & 4.53 & 3.27$^{+1.08}_{-1.08}$ & 0.004 \\
AzGN12 & 12:36:33.17 & 62:06:18.1 & 4.32$\pm$0.99 & 4.39 & 3.07$^{+1.12}_{-1.08}$ & 0.008 \\
AzGN13 & 12:35:53.86 & 62:13:45.1 & 4.30$\pm$0.99 & 4.36 & 3.07$^{+1.10}_{-1.12}$ & 0.008 \\
AzGN14$^S$ & 12:36:52.25 & 62:12:24.1 & 4.18$\pm$0.97 & 4.31 & 2.95$^{+1.10}_{-1.08}$ & 0.009 \\
AzGN15 & 12:35:48.64 & 62:15:29.9 & 4.76$\pm$1.12 & 4.26 & 3.23$^{+1.26}_{-1.32}$ & 0.016 \\
AzGN16$^S$ & 12:36:16.18 & 62:15:18.1 & 4.12$\pm$0.97 & 4.23 & 2.89$^{+1.08}_{-1.14}$ & 0.013 \\
AzGN17 & 12:35:40.59 & 62:14:36.1 & 4.75$\pm$1.13 & 4.20 & 3.23$^{+1.24}_{-1.42}$ & 0.020 \\
AzGN18 & 12:37:40.80 & 62:12:23.3 & 4.09$\pm$0.97 & 4.20 & 2.79$^{+1.16}_{-1.08}$ & 0.014 \\
AzGN19 & 12:36:04.33 & 62:07:00.2 & 4.54$\pm$1.09 & 4.15 & 3.07$^{+1.20}_{-1.36}$ & 0.022 \\
AzGN20$^N$ & 12:37:12.36 & 62:10:38.2 & 4.01$\pm$0.97 & 4.14 & 2.79$^{+1.08}_{-1.16}$ & 0.016 \\
AzGN21 & 12:38:01.96 & 62:16:12.6 & 3.99$\pm$0.99 & 4.05 & 2.65$^{+1.16}_{-1.16}$ & 0.023 \\
AzGN22$^N$ & 12:36:49.70 & 62:12:12.0 & 3.81$\pm$0.97 & 3.93 & 2.55$^{+1.08}_{-1.24}$ & 0.030 \\
AzGN23 & 12:37:16.81 & 62:17:32.2 & 3.75$\pm$0.97 & 3.88 & 2.39$^{+1.16}_{-1.18}$ & 0.035 \\
AzGN24$^S$ & 12:36:08.46 & 62:14:41.7 & 3.77$\pm$0.98 & 3.86 & 2.39$^{+1.18}_{-1.20}$ & 0.038 \\
AzGN25 & 12:36:52.30 & 62:05:03.4 & 4.19$\pm$1.09 & 3.85 & 2.55$^{+1.32}_{-1.42}$ & 0.050 \\
AzGN26 & 12:37:13.86 & 62:18:26.8 & 3.70$\pm$0.97 & 3.82 & 2.39$^{+1.10}_{-1.28}$ & 0.041 \\
AzGN27$^N$ & 12:37:19.72 & 62:12:21.5 & 3.68$\pm$0.97 & 3.81 & 2.31$^{+1.16}_{-1.22}$ & 0.043 \\
AzGN28 & 12:36:43.60 & 62:19:35.9 & 3.68$\pm$0.98 & 3.76 & 2.31$^{+1.14}_{-1.30}$ & 0.050 \\
\hline
AzGN29 & 12:36:21.14 & 62:19:12.1 & 4.17$\pm$1.13 & 3.70 & 2.39$^{+1.34}_{-1.64}$ & 0.077 \\
AzGN30 & 12:36:42.83 & 62:17:18.3 & 3.58$\pm$0.97 & 3.69 & 2.13$^{+1.20}_{-1.26}$ & 0.059 \\
AzGN31 & 12:36:22.16 & 62:16:11.0 & 3.58$\pm$0.97 & 3.68 & 2.13$^{+1.20}_{-1.28}$ & 0.061 \\
AzGN32 & 12:37:17.14 & 62:13:56.0 & 3.56$\pm$0.97 & 3.67 & 2.13$^{+1.18}_{-1.28}$ & 0.061 \\
AzGN33 & 12:36:51.42 & 62:20:23.7 & 3.54$\pm$0.98 & 3.63 & 2.13$^{+1.12}_{-1.40}$ & 0.069 \\
AzGN34 & 12:36:48.30 & 62:21:05.5 & 3.65$\pm$1.02 & 3.59 & 2.13$^{+1.16}_{-1.50}$ & 0.080 \\
AzGN35 & 12:38:18.20 & 62:14:29.8 & 4.02$\pm$1.12 & 3.59 & 2.13$^{+1.32}_{-1.68}$ & 0.096 \\
AzGN36 & 12:36:17.38 & 62:15:45.5 & 3.41$\pm$0.97 & 3.50 & 1.87$^{+1.16}_{-1.40}$ & 0.091 \\
\hline
\end{tabular}
\caption{Source candidates in AzTEC/GOODS-N with S/N$\geq$3.5 ordered
according to S/N. The horizontal line between AzGN28 and AzGN29
represents our threshold for source robustness, as explained in
\S~\ref{sources_fdr}. The last two columns are defined in
\S~\ref{sources_deboost}. The superscripts $S$ and $N$ highlight sources
in our robust sub-list that lie within the considered SCUBA region
(where the 850-$\mu$m\ r.m.s.\ is $<$2.5\,mJy). The sources denoted by
$S$ have robust detections at 850\,$\mu$m\ within 12\,arcsec of the given
positions while the sources denoted by $N$ do not (Chapin et al. in
prep.).}
\label{table_sources}
\end{table*}
\section{The AzTEC/GOODS-N source catalogue}
\label{sources}
As evident from Table~\ref{table_sources}, the number of source
candidates increases rapidly with decreasing S/N. However, if we use
a S/N threshold to make a sub-list of the sources in
Table~\ref{table_sources}, the false positives contained in such a
list will also increase with lower S/N thresholds. Our aim here is to
find a S/N threshold above which $\ga$95\% of source candidates are,
on average, expected to be true sources. This is a practical choice
aimed at maximising the number of sources recommended for follow-up
studies (the subject of Chapin et al. in prep.) in a way that limits
the effect of false detections on any conclusions drawn. The
horizontal line in Table~\ref{table_sources} below source AzGN28
(S/N$\geq$3.75) marks the cut-off of the sub-list that we expect will
satisfy our robustness condition. We first explain in
\S~\ref{sources_fdr} the analysis of false detection rates (FDRs) that
yields this threshold. In that section, we go beyond previous FDR
treatments for AzTEC \citep{Scott08} and derive some general results
about FDRs that are applicable to (sub)mm surveys in general.
Next, we explain in \S~\ref{sources_deboost} the last two columns of
Table~\ref{table_sources} which contain a re-evaluation of source flux
densities and an assessment of the relative robustness of our source
candidates. Then, in \S~\ref{sources_completeness}, we discuss the
survey completeness and present a brief consistency check of our
source candidates against SCUBA detections at 850\,$\mu$m.
\subsection{False detection rates}
\label{sources_fdr}
Two obvious methods for estimating the false detection rate (FDR) of a
survey are to run the source finding algorithm on: 1) simulated noise
realization maps; or 2) the {\em negative} of the observed signal map.
For several S/N thresholds, Table~\ref{table_fdr} lists the number of
source candidates in the actual map (row 1), the average number of
``sources'' found in simulated pure noise realizations (row 2), and
the number of ``sources'' in the negative of the actual map (row 4).
When using the map negative, regions within 36\,arcsec of a bright
positive source were excluded in order to avoid their ``negative
ring'' (see Fig.~\ref{fig_kernel}).
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
Source Threshold & 3.5-$\sigma$ & 3.75-$\sigma$ & 4-$\sigma$ &
5-$\sigma$ \\
\hline
\hline
Sources Detected & 36 & 28 & 21 & 8 \\
Pure-noise FDR & 4.32 & 1.69 & 0.68 & 0.01 \\
Best-fit-model FDR & 2.65 & 1.13 & 0.42 & 0.00\\
\hline
Negative FDR & 6 & 4 & 4 & 0 \\
Pure-noise Negative FDR & 4.55 & 1.58 & 0.33 & 0.00\\
Best-fit-model Negative FDR & 5.96 & 2.85 & 1.16 & 0.04\\
\hline
\end{tabular}
\caption{The number of source candidates passing a given S/N threshold
in the actual map are indicated in row 1. Several methods for
determining the false detection rates (FDRs) were explored.
``Pure-noise'' refers to averages computed over 100 noise-realization
maps. ``Best-fit-model'' corresponds to averages from 100
noise+source realization maps using the best fit model of
\S~\ref{nc_parametric}. We have settled on the values of row 2 as our
nominal FDRs because they give a conservative overestimate, as
explained in the text.}
\label{table_fdr}
\end{table}
We conclude that these two estimates of the FDR are not very accurate
for our maps. Because of the high number density of SMGs in the sky
compared to our beam size, every point of the map is in general
affected by the presence of sources. This source confusion causes the
simple FDR estimates above to be inaccurate. In particular, there are
equal numbers of negative and positive ``detections'' in noise
realizations to within the statistical error of our noise simulations,
as indicated by rows 2 and 5. However, the presence of sources skews
this balance in the actual map, making the false negatives rate higher
than the pure-noise numbers and the false positives rate (what we are
after) lower than the pure-noise numbers.
Both these effects can be understood by considering the following
hypothetical construction: a noise-less AzTEC map of the sky
containing many point sources, all with the shape of the point-source
kernel. Because each kernel has a mean of zero, such a map would have
an excess of negative-valued pixels over positive-valued pixels (about
70\% to 30\%) to counter the high positive values near the centre of
the kernel (see Fig.~\ref{fig_kernel}). When noise that is symmetric
around zero is ``added'' to such a map, this small negative bias will
cause a larger number of high-significance negative excursions in that
sky map compared to a map containing just the symmetric noise. The
pixel flux histogram of the actual map, shown in
Fig.~\ref{fig_pix_histograms} (darker shade), also shows evidence of
this effect through its negatively shifted peak as well as the excess
of negative pixels in comparison with pure noise realizations
(lighter-shade histogram). This small negative bias, in pixels that do
not lie atop a source peak, also explains why there are fewer
high-significance false positives in an actual sky map compared to a
pure noise map.
To verify our reasoning, we generated 100 noise+source realizations
for the best-fit number counts model described in
\S~\ref{nc_parametric}. For each realization, we find the number of
positive and negative ``detections'' just as for the true map. False
positives are defined as detections occurring $>$10\,arcsec away from
{\em inserted} sources of brightness $>$0.1\,mJy. The FDR results for
these simulations are given in rows 3 and 6 of Table~\ref{table_fdr}.
The results show that the negatives rate is indeed boosted by the
presence of sources, compared to pure noise maps (rows 2 and 5).
Furthermore, the negative FDR of the actual map (row 4), which
drops to 0 at a S/N of 4.2, is statistically consistent with the
simulated negative FDR means of row 6. As expected, the simulated
false positives rate is lower than the pure-noise FDR, as
evident from row~3.
As the true positive FDR depends on the number counts, we adopt the
model-independent pure-noise values of row 2 as our nominal FDRs.
These will be conservative overestimates of the FDR regardless of the
true $1.1\,$mm number-counts of the GOODS-N field.
Based on these nominal FDRs, we divide the source candidate list of
Table~\ref{table_sources} into two categories of
robustness, with the dividing line at a S/N of 3.75. On average, we
expect 1--2 source candidates with S/N$\geq$3.75 (above the horizontal
line in Table~\ref{table_sources}) and 1--3 candidates with S/N$<$3.75
(below the line) to be false detections.
\subsection{Flux bias correction}
\label{sources_deboost}
In our map, where the signal from sources does not completely dominate
over noise, the measured flux density can be significantly shifted
from the true $1.1\,$mm flux density of a source due to noise. The
measured flux densities in column~4 of Table~\ref{table_sources} are
more likely to be overestimates than underestimates of the true flux
densities because of the sharply decreasing surface density of (sub)mm
galaxies with increasing flux density. As this slope in the number
counts is quite steep \citep[see for example][]{Blain1999A, Barger1999A,
Eales2000, Borys2003, Greve2004, Coppin2006}, this {\em bias} can be a
large effect. Therefore, we estimate a ``de-boosted'' flux density
for all our 3.5-$\sigma$ source candidates. This estimate is based on
the Bayesian technique laid out in
\citet{Coppin2005} for calculating the posterior flux density (PFD)
distribution of each source.
The number-counts model that we use to generate the prior is given by
\begin{equation}
{\mathrm{d}N \over \mathrm{d}S} = N^\prime {S^\prime \over S} e^{-S/S^\prime}
\label{eq_prior}
\end{equation}
where d$N$/d$S$ represents the differential number counts of sources
with flux density $S$. We use $N^\prime=3500\,$mJy$^{-1}$deg$^{-1}$
and $S^\prime=1.5\,$mJy, which is consistent with taking the Schechter
function number-counts fit of
\citet{Coppin2006} and scaling the 850$\,\mu$m fluxes by a factor of 2.2
to approximate the $1.1\,$mm fluxes of the same population. It is
sufficient to use a prior that is only approximately correct, since
many of the derived results (as we have checked explicitly) are
independent of the exact form of the assumed number counts. We take
as our Bayesian prior the noise-less pixel flux histogram of a large
patch of sky simulated according to this model. Since our
point-source kernel has a mean of zero, the prior is non-zero for
negative fluxes and peaks near $0\,$mJy.
The de-boosted flux density given in column~6 of
Table~\ref{table_sources} is the location of the PFD's local maximum
closest to the measured flux density. This de-boosted flux density is
fairly insensitive to changes in the prior that correspond to other
number-counts models allowed by current constraints. The upper and
lower error bounds quoted for a de-boosted flux density correspond to
the narrowest PFD interval bracketing the local maximum that
integrates to 68.3\%.
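A minimal sketch of this calculation on a uniform flux grid is given
below; here \texttt{prior\_pdf} stands for the normalised noise-less
pixel-flux histogram described above (so it is non-zero at negative
fluxes), and the narrowest-interval search assumes a posterior that is
unimodal near the chosen peak:
\begin{verbatim}
import numpy as np

def deboost(s_meas, sigma, flux, prior_pdf):
    # Posterior flux density (PFD): prior times Gaussian measurement
    # likelihood, normalised on the (uniform) flux grid.
    post = prior_pdf * np.exp(-0.5 * ((s_meas - flux) / sigma)**2)
    dgrid = flux[1] - flux[0]
    post /= post.sum() * dgrid

    # De-boosted flux: the local maximum closest to the measurement.
    inner = (post[1:-1] > post[:-2]) & (post[1:-1] > post[2:])
    locmax = np.where(inner)[0] + 1
    if locmax.size == 0:
        locmax = np.array([np.argmax(post)])
    peak = locmax[np.argmin(np.abs(flux[locmax] - s_meas))]

    # Narrowest interval around the peak integrating to 68.3%.
    order = np.argsort(post)[::-1]
    keep = order[np.cumsum(post[order]) * dgrid <= 0.683]
    lo, hi = flux[keep].min(), flux[keep].max()

    # Relative robustness: posterior mass at non-positive flux.
    neg = post[flux <= 0].sum() * dgrid
    return flux[peak], (lo, hi), neg
\end{verbatim}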
In order to determine the relative robustness of each source
individually, we calculate the integral of the PFD below zero flux.
This quantity, given in column~7 of Table~\ref{table_sources}, is not
a function of just S/N but depends on the flux (signal) and its error
(noise) separately. Although the values given in column~7 can vary
appreciably among reasonable choices of number-counts priors and the
PFD integration upper bounds (set to zero here), the source robustness
{\it order\/} inferred by the non-positive PFD integral is quite
insensitive to these choices. Therefore, the values in column~7
provide a useful indicator of the relative reliability of individual
sources.
However, due to the arbitrariness present, the values in
this column cannot be used to directly calculate the FDR of a source
list. For instance, the sum of column~7 values for our robust source
list is $\sim$0.5, which is an underestimate of the expected FDR (see
\S~\ref{sources_fdr}). We note that, for our choice of prior, the
requirement of a non-positive PFD $\le$5\% {\em happens to} identify
the same robust source-candidate list as the S/N cut of 3.75.
However, this statement is specific to a particular choice of prior
and PFD integration upper bound.
\subsection{Survey completeness and comparison with SCUBA detections}
\label{sources_completeness}
\begin{figure}
\centering
\includegraphics[width=2.4in,angle=90]{completeness.eps}
\caption{Survey completeness for the S/N$\geq$3.75 cut used here to
select robust sources is represented with the dark symbols and error
bars. The lighter symbols and error bars are estimates of the survey
completeness when the integrated posterior flux distribution below
0\,mJy is required to be $<$5\%.}
\label{fig_completeness}
\end{figure}
We next compute the survey completeness by injecting one source at a
time, in the form of the point-source kernel scaled to represent each
flux, at random positions in the GOODS-N signal map
(Fig.~\ref{fig_source_map}) and tallying the instances when a {\em
new\/} source is recovered with S/N$\geq$3.75 within 10\,arcsec of the
insertion point. We choose this radius because it is small enough for
conducting quick searches in our simulations and because, barring
incompleteness, simulations show that $>$99.5\% of $\ge3.75$-$\sigma$
sources will be found within 10\,arcsec of their true position given
the size of the AzTEC beam and the depth of coverage. This method of
calculating completeness allows for the inclusion of ``confusion
noise'' without altering the map properties appreciably, because only
one artificial source is injected per
simulation~\citep{sescott2006,Scott08}.
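A sketch of this injection loop is given below; it assumes an
odd-sized, unit-peak kernel map larger than the matching box, and it
only checks that some pixel near the insertion clears the threshold,
rather than the stricter ``new source'' bookkeeping used in practice:
\begin{verbatim}
import numpy as np

def completeness(signal_map, weight_map, kernel, flux,
                 n_trials=1000, snr_cut=3.75, match_pix=3):
    # Fraction of single injected sources of the given flux recovered
    # at S/N >= snr_cut within ~10 arcsec (match_pix 3-arcsec pixels).
    ny, nx = signal_map.shape
    hy, hx = kernel.shape[0] // 2, kernel.shape[1] // 2
    found = 0
    for _ in range(n_trials):
        y = np.random.randint(hy, ny - hy)   # keep kernel inside map
        x = np.random.randint(hx, nx - hx)
        m = signal_map.copy()
        m[y-hy:y+hy+1, x-hx:x+hx+1] += flux * kernel
        snr = m * np.sqrt(weight_map)
        patch = snr[y-match_pix:y+match_pix+1,
                    x-match_pix:x+match_pix+1]
        if patch.max() >= snr_cut:
            found += 1
    return found / n_trials
\end{verbatim}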
We have also assessed completeness by inserting point sources of known
flux, one at a time, into pure noise-realisation maps rather than the
signal map. With this method, we also require that each recovered
artificial source has a $<5$\% non-positive PFD. Since this
constraint is essentially equivalent to a limiting S/N threshold of
3.75 (as evident from Table~\ref{table_sources}), it is not surprising
that the survey completeness determined this way (lighter-shade points
of Fig.~\ref{fig_completeness}) is similar to that derived from the
previous method. The similarity in results also shows that the
effect of confusion noise on survey completeness is small.
Finally, to verify that our source-candidate list has overlap with
previously detected extragalactic (sub)mm sources, we compare our
source list against 850-$\mu$m\ SCUBA detections within overlapping
survey regions. For this purpose, we only consider the regions in the
SCUBA/850-$\mu$m\ map with noise r.m.s. $<$ 2.5\,mJy. Of the 28 AzTEC
sources in the robust list, 11 lie within this region of the SCUBA
map; of these 8 (73\%) have robust detections at 850\,$\mu$m\
\citep{Pope2005} within 12\,arcsec of the AzTEC position. Those 8 are
highlighted with the superscript ``$S$'' in Table~\ref{table_sources}
while the other 3 are marked with the superscript ``$N$.'' On
the other hand, all 38 robust SCUBA sources within the r.m.s. $<$
2.5\,mJy region
\citep{Pope2005,Wall2008} lie within the 70\% coverage region of
AzTEC. In Chapin et al. (in prep.) we will
discuss the 850\,$\mu$m\ properties of AzTEC sources by performing
photometry in the SCUBA map at AzTEC positions, and more fully explore
the overlap of the AzTEC and SCUBA populations in general.
\section{1.1~mm Number counts}
\label{nc}
Using our AzTEC/GOODS-N data, we next quantify the number density of
sources as a function of their intrinsic (de-boosted) 1.1\,mm flux.
These counts cannot be read directly from the recovered distribution
of source flux densities due to: 1) the bias towards higher fluxes in
the data (as described in \S~\ref{sources_deboost}), which
includes false detections; and 2) the survey incompleteness at lower
fluxes. In order to estimate the counts we use two independent
methods: a Monte Carlo technique that implicitly includes the flux bias
and completeness issues; and a Bayesian approach that accounts for
both these effects explicitly.
Fig.~\ref{fig_diffnc_results} shows the results of our number-counts
simulations. It shows the source flux density histogram simulated for
the best fit model from the parametric method overlaid on the actual
distribution from the true map. It also shows the differential number
counts vs.\ de-boosted source flux density as returned by both
methods. The dot-dashed lines in the lower right correspond to the
survey limits of the frequentist and Bayesian approaches, which are
27.8 and 33.8\,deg$^{-2}$\,mJy$^{-1}$, respectively. The survey limit
is the y-axis value (number counts) that experiences Poisson
deviations to zero sources per mJy-bin 32.7\% of the time, given the
map area considered. The two limits differ slightly because the
frequentist simulations include the slightly larger area 50\% coverage
region, as opposed to the 70\% coverage region that we use for the
Bayesian method. The survey limit occurs at around 6\,mJy for both
the best-fit frequentist and Bayesian type simulations. Thus, we are
not sensitive to the differential number counts {\em with 1\,mJy
resolution\/} beyond that point.
The power of the AzTEC/GOODS-N survey is in constraining number-counts
at lower flux densities, given the depth reached in this relatively
small field. We have, however, excluded results below the $<$2\,mJy
level from both methods, because of low survey completeness ($<$10\%)
and the possibility of increasing systematic effects. Therefore, the
noteworthy features of Fig.~\ref{fig_diffnc_results} are the points
from the Bayesian approach, indicated by crosses and error bars, in
the range 2\,mJy to 6\,mJy and the allowed functional forms from the
parametric (frequentist) method within those flux density bounds.
Models allowed by the 68.3\% confidence interval of the parametric
method form the shaded region while the dark curve is the best-fit
model. Given the error bounds from the two methods, they are in good
agreement. Both methods are briefly described below.
\subsection{Parametric frequentist approach}
\label{nc_parametric}
An obvious choice of indicator for the underlying source population is
the recovered brightness distribution of source candidates in the
GOODS-N map. Here, we use a S/N threshold of 3.5 and the 50\%
coverage region of the map. After identifying S/N$\geq$3.5 source
candidates, we make a histogram of their measured flux densities using
0.25\,mJy bins, for comparison against histograms made from simulating
various number-counts models. This approach is similar, in spirit, to
the method employed in \citet{Laurent2005} and the parametric version
of number counts derived in \citet{Coppin2006}. However, we avoid
intermediate analytical constructs, as the procedure outlined below
accounts for all relevant effects.
\begin{figure}
\centering
\includegraphics[width=2.45in, angle=90]{diff_nc_schech2_rec.eps}
\caption{The thick solid curve and the enveloping shaded region
correspond to the best fit number counts model and the 68.3\%
confidence interval from the parametric approach of
\S~\ref{nc_parametric}. The distribution of measured fluxes of
3.5-$\sigma$ sources in the actual map is shown by the triangles in
the 3.5-8\,mJy interval while the corresponding average distribution
of the best fit model is indicated by the thin solid-line histogram.
The difference between the thick solid line and the thin solid
histogram indicates the importance of accounting for flux boosting and
completeness. The crosses and error bars represent the differential
number counts derived from the Bayesian method, which are in excellent
agreement with the result from the parametric method. The dashed-line
curve indicates the Bayesian prior. The upper and lower dot-dashed
lines indicate the survey limits of the Bayesian and parametric
methods, respectively.}
\label{fig_diffnc_results}
\end{figure}
We generate model realisation maps by injecting kernel-shaped point
sources into noise realisation maps. The input source positions are
{\em uniformly} distributed over the noise realisation map while their
number density and flux distribution reflect the number-counts model
being considered. For every model we have considered, we make 1200
simulated maps by constructing 12 different source realisations for
each of the 100 noise realisation maps. Next, we use the same
source-finding algorithm used on the signal map to extract all
S/N$\geq$3.5 peaks in each simulated map. We then compare the average
histogram of recovered source fluxes from the 1200 model realisations
against the actual distribution of source fluxes. The data vs.\
models comparison is restricted to the 3.5--8\,mJy measured flux
density range. This comparison process is illustrated in
Fig.~\ref{fig_diffnc_results}.
The likelihood of the data given a model is determined according to
Poisson statistics as in
\citet{Laurent2005} and \citet{Coppin2006}.
One set of parameterised models that we have explored has the functional form given
by Equation~\ref{eq_prior}. We chose to re-parametrise these models
so that the normalisation factor depends on only one of the fit
parameters. The parameters we chose are the same $S^\prime$ as in
Equation~\ref{eq_prior} and $N_{\rm 3mJy}$, the differential counts at
3\,mJy, given by
\begin{equation}
N_{\rm 3mJy} = N^\prime \left({S^\prime \over 3\mathrm{mJy}}\right)
e^{-3\mathrm{mJy} / S^\prime}.
\label{eq_N3mJy}
\end{equation}
In terms of these parameters, Equation~\ref{eq_prior} becomes
\begin{equation}
{\mathrm{d}N \over \mathrm{d}S} = N_{\rm 3mJy}
\left({3\mathrm{mJy}} \over S \right)
e^{-(S - 3\mathrm{mJy}) / S^\prime}.
\label{eq_ncparam}
\end{equation}
We explored the $S^\prime$--$N_{\rm 3mJy}$ parameter space over the
rectangular region bracketed by 0.5-2\,mJy and
60-960\,mJy$^{-1}$\,degree$^{-2}$ using a $(\Delta S^\prime,
\Delta N_{\rm 3mJy})$ cell size of (0.15,60). The likelihood function, ${\cal
L}$, is a maximum for the model with $S^\prime = 1.25 \pm 0.38\,$mJy
and $N_{\rm 3mJy} = 300 \pm 90$\,mJy$^{-1}$\,degree$^{-2}$. We did
not assume $\chi^2$-like behaviour of $-\ln({\cal L})$ for calculating
the 68.3\% confidence contours whose projections are the error bars
quoted above. Instead, as outlined in \citet{press92}, we made many
realisations of the best-fit model and put them through the same
parameter estimation procedure that was applied to the actual data.
In terms of the goodness of fit, we find that 66\% of the simulated
fits yield a higher value of $-\ln({\cal L})$ compared to the actual
value. Fig.~\ref{fig_diffnc_results} shows this best-fit
number-counts estimate against the de-boosted 1.1\,mm flux density
along with a continuum of curves allowed by the 68.3\% confidence
region.
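The likelihood evaluation on the parameter grid can be sketched as
follows; the expensive ingredient, the mean recovered-flux histogram
of each model over its 1200 simulated maps, is assumed available here
as a function \texttt{mean\_hist}:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def neg_log_like(n_obs, m_model):
    # Poisson -ln(L) of observed bin counts given model mean counts
    # (cf. Laurent et al. 2005).
    m = np.clip(m_model, 1e-30, None)      # guard against empty bins
    return -np.sum(n_obs * np.log(m) - m - gammaln(n_obs + 1.0))

def grid_scan(n_obs, mean_hist, s_grid, n3_grid):
    nll = np.array([[neg_log_like(n_obs, mean_hist(s, n3))
                     for n3 in n3_grid] for s in s_grid])
    i, j = np.unravel_index(np.argmin(nll), nll.shape)
    return s_grid[i], n3_grid[j], nll      # best-fit (S', N_3mJy)
\end{verbatim}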
\subsection{Bayesian method}
\label{nc_bayes}
We also estimate number counts from the individual source PFDs
calculated in \S~\ref{sources_deboost} using a modified version of the
bootstrapping method described in \citet{Coppin2006}. A complete
discussion of the modifications and tests of the method will be
presented in Austermann et al. (in prep.). For these calculations, we
use only the sub-list of {\em robust\/} sources in
Table~\ref{table_sources}. We have repeated this bootstrapping
process 20{,}000 times to measure the mean and uncertainty
distributions of source counts in this field. The differential and
integrated number counts extracted with this method, using $1\,$mJy
bins, are shown in Fig.~\ref{fig_diffnc_results}. Our simulations
show that the extracted number counts are quite reliable for a wide
range of source populations and only weakly dependent on the assumed
population used to generate the Bayesian prior (the dashed-line curve
of Fig.~\ref{fig_diffnc_results}) with the exception of the lowest
flux density bins, below 2\,mJy, which suffer from source confusion
and low (and poorly constrained) completeness. Overall, the results
from the Bayesian method are in excellent agreement with those from
the parametric method between the lower sensitivity bound (2\,mJy) and
the survey limit ($\sim$6\,mJy).
\section{Discussion}
\label{results}
\begin{figure}
\centering
\includegraphics[width=\hsize]{int_counts_both.eps}
\caption{The cumulative (integral) number counts from other
1.1--1.2\,mm surveys are shown alongside our results. The
AzTEC/GOODS-N parametric number-counts results are indicated by the
hatched region that represents the 68.3\% confidence region for
parametric models. The dot-dashed line indicates the survey limit.
Results from the Bolocam 1.1\,mm Lockman Hole survey are indicated by
a thin solid line and two bounding dotted-lines that represent the
best-fit model and 68.3\% confidence region as found by
\citet{Maloney2005}. The 1.2-mm MAMBO-IRAM results reported in
\citet{Greve2004} are also shown (triangles). The stars represent the
``reduction D'' results of \citet{Coppin2006} with 850\,$\mu$m\ flux
densities scaled by the factor 1/2.08 as explained in the text. The
dashed curve indicates the best combined fit to the Bayesian results
from both surveys.}
\label{fig_int_counts}
\end{figure}
In Fig.~\ref{fig_int_counts}, we display our cumulative number-counts
results with the 68.3\%-allowed hatched region derived from the
parametric method. We next compare those results with previous
surveys of (sub)mm galaxies. Combined results from the 1.2-mm MAMBO
surveys of the Lockman Hole and ELAIS-N2 region~\citep{Greve2004} and
the 1.1-mm Bolocam Lockman Hole survey~\citep{Maloney2005} are shown
in Fig.~\ref{fig_int_counts}. Our GOODS-N number counts are in good
agreement with MAMBO results. Our results are in disagreement with
the results of \citet{Maloney2005}, even within a limited flux range
such as 3--6\,mJy where we expect both surveys to be sensitive to the
number counts.
In Fig.~\ref{fig_int_counts}, we also compare our results with the
850-$\mu$m\ number counts of \citet{Coppin2006}. If the 1.1--1.2\,mm
surveys detect the same population of sub-mm sources seen by
SCUBA at 850$\,\mu$m -- an assumption that is not obviously valid
given the possible redshift-dependent selection effects
\citep{Blain2002} -- we would expect a general correspondence between
number counts at these two wavelengths, with a scaling in flux density
that represents the spectral factor for an average source. Therefore,
we perform a simultaneous fit to the SCUBA/SHADES and AzTEC/GOODS-N
{\em differential} Bayesian number counts in order to determine the
average dust emissivity spectral index, $\alpha_{\rm{dust}}$ (and thus
the flux density scaling factor from 1.1~mm wavelength to 850\,$\mu$m\
wavelength), and the parameters, $N_{\rm 3mJy}$ and $S^{\prime}$, of
Equation~\ref{eq_ncparam}. This fit results in the best-fit parameters and
correlation matrix given in Table~\ref{tab:params}. We overlay the
\citet{Coppin2006} number counts on Fig.~\ref{fig_int_counts} with
the 850-$\mu$m\ fluxes scaled by the scaling factor derived from this
fit, which is 2.08$\pm$0.18. For visual comparison, the shaded region
of Fig.~\ref{fig_int_counts}, which represents our {\em parametric}
result, is sufficient because it represents well the results from both
methods (see Fig.~\ref{fig_diffnc_results}).
Fig.~\ref{fig_int_counts} shows that the scaled SCUBA-SHADES
points fall well within the bounds allowed by our results.
\begin{table}
\centering
\begin{tabular}{r|c|c|c}
\hline
Survey & $S^{\prime}$ & $N_{\rm 3mJy}$ & $\alpha_{\rm{dust}}$ \\
\hline
\hline
AzTEC/GOODS-N & $1.25\pm0.38$ & $300\pm90$ & \\
AzTEC/GOODS-N \\+ SCUBA/SHADES & $1.60\pm0.25$ & $274\pm54$ & $2.84\pm0.32$ \\
\hline
& $S^{\prime}$ & $N_{\rm 3mJy}$ & $\alpha_{\rm{dust}}$ \\
$S^{\prime}$ & 1 & 0.05 & -0.32 \\
$N_{\rm 3mJy}$ & 0.05 & 1 & -0.8 \\
$\alpha_{\rm{dust}}$ & -0.32 & -0.8 & 1 \\
\end{tabular}
\caption{Best-fit Schechter function parameters and dust emissivity
spectral index using the Bayesian results from the
AzTEC/GOODS-N, SCUBA/SHADES, and combined surveys. The
correlation matrix for the combined fit is also listed.
Caveats on this analysis are given in the text.}
\label{tab:params}
\end{table}
The $\alpha_{\rm{dust}}$ of Table~\ref{tab:params} was computed for
the nominal AzTEC and SCUBA band centres, which are 1.1\,mm and
850\,$\mu$m\ respectively. However, the quoted error on
$\alpha_{\rm{dust}}$ brackets the effects of small shifts in the
effective band centres due to spectral index differences between SMGs
and flux calibrators. The dust emissivity spectral index may also be
estimated by averaging the 1.1\,mm to 850\,$\mu$m\ flux density ratio of
individual sources or by performing the appropriate stacking analysis.
Due to the moderate S/N of sources in our surveys, the effects of flux
bias and survey completeness must be accounted for in such analyses.
Therefore, performing a combined fit to the differential number counts
vs.\ de-boosted flux from the two surveys, where those effects are
already included, is an appropriate method for estimating the spectral
index. From Fig.~\ref{fig_int_counts}, the hypothesis that
SCUBA and AzTEC detect the same underlying source population appears
plausible.
However, we do not comment on the formal goodness of fit as the
$\chi^2$ obtained for the combined fit is unreasonably small because
the full degree of correlation between data points is underestimated
in the standard computation of the two covariance matrices
\citep{Coppin2006}. In addition, the best-fit parameters of the
combined fit may have a large scatter from a global mean value (if one
exists), due to sample variance, as the two surveys cover different
fields. Although SCUBA 850\,$\mu$m\ number-counts are available for
GOODS-N \citep{Borys2003}, the survey region (see
Fig.~\ref{fig_goodsn_cover}) and the method used to estimate number
counts in that work are quite different from those used here.
Therefore, we chose to fit to the SCUBA/SHADES number counts
\citep{Coppin2006} instead, since they were determined using methods
similar to ours.
\section{Conclusion}
\label{conclusion}
We have used the AzTEC instrument on the JCMT to image the GOODS-N
field at 1.1~mm. The map has nearly uniform noise of
0.96--1.16$\,{\rm mJy}\,{\rm beam}^{-1}$ across a field of $245\,{\rm
arcmin}^2$. A stacking analysis of the map flux at known radio source
locations shows that any systematic pointing error for the map is
smaller than 1\,arcsec in both RA and Dec. Thus, the dominant
astrometric errors for the 36 source-candidates with S/N$\geq$3.5 are
due to noise in the centroid determination for each source. Using a
S/N$\geq$3.75 threshold for source robustness, we identify a subset of
28 source candidates among which we only expect 1--2 noise-induced
spurious detections. Furthermore, of the 11 AzTEC sources that fall
within the considered region of the SCUBA/850-$\mu$m\ map, 8 are detected
unambiguously.
This AzTEC map of GOODS-N represents one of the largest, deepest
mm-wavelength surveys taken to date and provides new constraints on
the number counts at the faint end (down to ${\sim}\,2\,{\rm mJy}$) of
the 1.1~mm galaxy population. We
compare two very different techniques to estimate the number density
of sources as a function of their intrinsic flux: a frequentist
technique based on the flux histogram of detected sources in the map
similar in spirit to that of \citet{Laurent2005},
and a Bayesian approach similar to that of
\citet{Coppin2006}. Reassuringly, the two techniques give similar
estimates for the number counts. Those results are in good agreement
with the number counts estimates of
\citet{Greve2004} but differ significantly from those of \citet{Maloney2005}.
The 1.1~mm number counts from this field are consistent with a direct
flux scaling of the 850~\micron\ SCUBA/SHADES number counts
\citep{Coppin2006} within the uncertainty of the two measurements,
with a flux density scaling factor of $2.08\pm0.18$. If we assume
that the two instruments are detecting the same population of sources,
we obtain a grey body emissivity index of $2.84\pm0.32$ for the dust
in the sources. While there is no evidence based on the number counts
that 1.1~mm surveys select a significantly different population than
850~\micron\ surveys, we caution that the number counts alone cannot
really test this hypothesis. A more thorough study of whether AzTEC
is selecting a systematically different population than SCUBA can come
only from comparison of the redshifts and multi-wavelength SEDs of the
identified galaxies, which we will describe in Chapin et al. (in
prep.), the second paper in this series.
There is also a survey of GOODS-N with MAMBO at 1.25\,mm performed by
Greve et al. (in prep.). A comparison between these two millimetre
maps and, possibly, the SCUBA `Super-map' is reserved for a future
paper (Pope et al. in prep.).
This AzTEC/GOODS-N map is one of the large blank-field SMG surveys at
1.1~mm taken at the JCMT. Combined with the AzTEC surveys in the
COSMOS \citep{Scott08} and SHADES (Austermann et al. in prep.)
fields, these GOODS-N data will allow a study of clustering and
cosmic variance on larger spatial scales than any existing (sub)mm
extragalactic surveys.
\section*{Acknowledgements}
\label{acknowledgement}
The authors are grateful to J. Aguirre, J. Karakla, K. Souccar,
I. Coulson, R. Tilanus, R. Kackley, D. Haig, S. Doyle, and the
observatory staff at the JCMT who made these observations possible.
Support for this work was provided in part by the NSF grant AST
05-40852 and the grant from the Korea Science \& Engineering
Foundation (KOSEF) under a cooperative Astrophysical Research Center
of the Structure and Evolution of the Cosmos (ARCSEC). DHH and IA
acknowledge partial support from CONACyT through research grants 60878 and
50786. AP acknowledges support provided by NASA through the Spitzer
Space Telescope Fellowship Program, through a contract issued by the
Jet Propulsion Laboratory, California Institute of Technology under a
contract with NASA. KC acknowledges support from the Science and
Technology Facilities Council. DS and MH acknowledge support from the
Natural Sciences and Engineering Research Council of Canada.
\section{Introduction}
Approximately 70 percent of global urban water main assets are buried pipes, and in Australia most of them are now over 100 years old \cite{Miro2014}. Due to deterioration with age, these ageing water pipes burst frequently, which has severe implications for society and for the efficacy and sustainability of water services, including disruption to water supply, obstruction to traffic and damage to properties surrounding the main failures \cite{Vitanage2014}. To prevent such severities to the community, fully understanding the present condition of critical water pipes is crucial so that water utilities can run a better pipe-management program. In other words, to effectively forecast the remaining lifetime of a whole ageing pipe or even a pipe section, its remaining wall thickness (RWT) needs to be known, as illustrated by the example in Fig. \ref{fig1}, where a critical patch is highly visible.
\begin{figure}[t]
\centering
\subfloat[]{\label{3D}
\includegraphics[scale=0.52]{3D.png}} \hspace*{1.5em}
\subfloat[]{\label{2D}
\includegraphics[scale=0.42]{2D.png}}
\caption{Water pipe remaining wall thickness interpretations in (a) 3D and (b) 2D.}
\label{fig1}
\end{figure}
Given a request from Sydney Water Corporation, a water utility in Australia, the University of Technology Sydney has developed a non-destructive evaluation/testing (NDE/NDT) system called the rapid response thickness tool (R2T2) that can rapidly assess the condition of a cement-lined cast iron (CI) water main pipe. Although the CI pipe wall is invisible, since its internal surface is cemented to isolate it from internal corrosion, R2T2 can effectively inspect the RWT of the pipe by using NDT techniques \cite{Munoz2017} such as the pulsed eddy current (PEC) sensing technology \cite{Ulapane2017}. Nevertheless, since the sensing is based on a magnetism principle, where the magnetic field takes time to penetrate through the material (i.e. CI), the tool is constrained by its inspection speed. More particularly, the primary objective of developing R2T2 is to deploy it into a water pipe at the bursting point when a failure occurs, so that it can assess the condition of the pipe in the vicinity and prevent subsequent failures in the same area in the near future. Since the water utility aims to minimize long disruptions of water supply to customers, the tool is only allowed to be deployed during a short time interval between the burst and its repair; thus, the faster R2T2 can scan, the longer the pipe section that can be investigated.
To address this issue, it is proposed to scan a part of the pipe and then employ a machine learning regression technique such as a Gaussian process (GP) \cite{Rasmussen2006, Nguyen2017a} to predict the rest. Nonetheless, a GP with a constant mean assumes that its input data is normally distributed, which is not really practical since most data in nature is non-Gaussian \cite{Nguyen2016b}. To overcome this, it is proposed to exploit a marginal distribution, which marginalizes the thickness at a location, to model the raw RWT data from R2T2's sensors. That is, the raw RWT readings can be converted to standard normally distributed data that is taken as input for a GP model. The predictions at the unmeasured locations, as outputs of the GP model, are then converted to the expected RWT values by the use of the marginal distribution parameters.
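A minimal sketch of this pipeline is given below, assuming the
marginal CDF and a vectorised quantile (inverse CDF) function are
available, e.g. from the Gaussian mixture fitted in the next section;
note that back-transforming the latent GP mean yields the predictive
median RWT under this transform:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_predict_rwt(X_obs, t_obs, X_new, marginal_cdf, marginal_ppf):
    # X_obs: (n, 2) measured (axial, circumferential) positions [mm];
    # t_obs: (n,) raw RWT readings [mm]; X_new: (m, 2) query positions.
    u = np.clip(marginal_cdf(t_obs), 1e-6, 1 - 1e-6)
    z = norm.ppf(u)                    # standard-normal transform

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=100.0) + WhiteKernel(noise_level=0.1))
    gp.fit(X_obs, z)
    z_new = gp.predict(X_new)

    # Map the latent predictions back to RWT via the marginal quantiles.
    return marginal_ppf(norm.cdf(z_new))
\end{verbatim}
The kernel and its hyperparameters above are illustrative starting
values only; in practice they are learned by maximising the GP
marginal likelihood, which \texttt{scikit-learn} does during
\texttt{fit}.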
\begin{figure*}[t]
\centering
\subfloat[]{\label{r2t2}
\includegraphics[scale=0.35]{R2T2.png}} \hspace*{1.5em}
\subfloat[]{\label{r2t2_deploy}
\includegraphics[scale=0.75]{R2T2_deploy.png}}
\caption{Rapid response thickness tool system \cite{Hunt2018} (a) and its realistic deployment (b).}
\label{fig2}
\end{figure*}
Marginal distributions that present the probability of a raw RWT value at an arbitrary point on a pipe surface can be found in the literature. For instance, the extreme value (EV) distributions, including Gumbel and Weibull, are widely used to marginally present the statistics of the RWT of a pipe \cite{Asadi2017, Benstock2017}. Due to their tail behaviour, the EV distributions are favoured for modelling the occurrence of extreme values in corrosion patches, such as the minimum RWT or maximum pit depth. Nonetheless, in cases where the corrosion patches are small but deep, as can be seen in the RWT maps in Figures \ref{p1gt} and \ref{p2gt}, these single-component distributions cannot fit the data well, which will be discussed further in the following sections. Hence, in this work, we propose to employ a Gaussian mixture (GM) as the marginal distribution for modelling the raw PEC sensor readings. Since the GM model is formed by multiple Gaussian components, it is shown to efficiently present the probability of observing an RWT value at an arbitrary spot on the water pipe.
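A sketch of fitting such a marginal with off-the-shelf tools is given
below; the number of components is an illustrative choice (it can be
selected by, e.g., the Bayesian information criterion), and the
mixture CDF is simply the weighted sum of the component normal CDFs:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gm_marginal(t_obs, n_components=3):
    # t_obs: 1-D array of raw RWT readings [mm].
    gm = GaussianMixture(n_components=n_components).fit(
        t_obs.reshape(-1, 1))
    w = gm.weights_
    mu = gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())  # 1-D data, 'full' covariances

    pdf = lambda t: np.sum(
        w * norm.pdf(np.atleast_1d(t)[:, None], mu, sd), axis=1)
    cdf = lambda t: np.sum(
        w * norm.cdf(np.atleast_1d(t)[:, None], mu, sd), axis=1)
    return gm, pdf, cdf
\end{verbatim}
The quantile function needed by the GP sketch above has no closed form
for a mixture, but the monotone CDF can be inverted numerically (e.g.
with \texttt{scipy.optimize.brentq}, applied element-wise).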
The remainder of the paper is organized as follows. Section \ref{sec_2} introduces the inspection tool and how the data was collected in the field, before modelling of the RWT is discussed in Section \ref{sec_3}. Section \ref{sec_4} demonstrates the effectiveness of the proposed approach, while Section \ref{sec_5} draws conclusions.
\section{Rapid Response Thickness Tool System and Field Data Collection}
\label{sec_2}
To evaluate the effectiveness of the Gaussian mixture based marginal distribution for modelling the RWT of critical water mains, we applied the distribution to data collected by an NDT tool on a real water pipe in Sydney, Australia. This section briefly describes that tool \cite{Hunt2018} and how the data was collected.
\subsection{Rapid Response Thickness Tool System}
The rapid response thickness tool (R2T2) \cite{Hunt2018}, demonstrated in Fig. \ref{r2t2}, is designed to inspect CI water main pipes with diameters ranging from 350 mm to 750 mm, which are dominant in the water distribution network in Sydney, Australia. The tool has six arms arranged in an umbrella-like configuration, which enables it to adapt to CI water pipes across this range of diameters. Each arm of R2T2 carries an embedded PEC based elliptical sensor, which has been shown to effectively sense a CI pipe wall of up to 20 mm thickness through a lift-off \cite{Ulapane2017,Ulapane2018}. It is noted that the CI water main pipes in the Sydney water distribution network have an approximately 10 mm cement lining, which was applied to protect the internal surfaces from corrosion and acts as a lift-off with respect to the sensor. Nonetheless, in many pipes where the cement lining was applied in-situ, this lift-off can vary from 2 mm to 25 mm, depending on whether the location is at the crown or the bottom of the pipe. Moreover, given the magnetism principle, the sensor cannot measure the wall thickness at a single point but instead estimates an average thickness under its footprint, with a cross-sectional area of, e.g., 50 mm $\times$ 50 mm. The total time for the sensor to read one thickness measurement is approximately 150 ms, from emitting a pulsed signal from the sensor's excitation coil to interpreting the sensor voltage output as a thickness value using our proposed decay-curve-based algorithms \cite{Nguyen2017b,Ulapane2019}.
Since the sensor is designed on the principle of magnetism, where the magnetic field takes time to penetrate through the material, non-destructively assessing the wall thickness of CI water mains is quite slow. For instance, R2T2 requires 1.5 hours of scanning to produce a full-coverage thickness map of a 20 m section of a 450 mm diameter CI water pipe \cite{Hunt2018}. Unfortunately, the time window for deploying R2T2 into a water pipe at each failure is very short, since water utilities must avoid long disruptions of the water supply to their customers. Therefore, if the RWT of a CI pipe can be accurately modelled from limited data, R2T2 need only scan a small part of the internal surface of a main rather than its full coverage, and the unmeasured part of the main can be efficiently predicted from the learned model.
\subsection{Field Data Collection}
R2T2 was deployed into an ageing CI water main pipe in the Sydney Water network, Sydney, Australia that locally burst on 25$^{th}$ May 2019, as shown in Fig. \ref{r2t2_deploy}. The tool was given approximately 2 hours to fully scan the internal surface of a 50 m long section on one side of the breaking point. R2T2 could run forward and backward, with its six sensors in direct contact with the internal surface (i.e. the cement lining), producing six lines of RWT readings instantly at each run. Since the sensor footprint is 50 mm $\times$ 50 mm and the pipe diameter is 450 mm, the tool had to run 6 times to completely scan the whole internal surface of the water main. A minor overlap in measurements between the $1^{st}$ and $6^{th}$ sensors was removed when post-processing the data.
Out of the nine inspected spools, the worst two, illustrated in Figures \ref{p1gt} and \ref{p2gt}, were used in this work to validate the proposed approach.
\section{Remaining Wall Thickness Modelling}
\label{sec_3}
The objective of this study is to predict the RWT of the CI water mains at unmeasured locations so that R2T2 can speed up critical condition assessment in future deployments. To this end, information about the unmeasured RWT values can be learned by capturing the spatial correlations among the inspected data. Mathematically, advanced machine learning and spatial statistics tools such as Gaussian processes (GP) \cite{Nguyen2017a} can be employed to learn those spatial correlations and then predict the unmeasured thickness given the collected sensor readings. Details are discussed in this section.
\subsection{Gaussian Mixture based Marginal Distribution}
\label{sec_31}
In our gathered data, illustrated in Figures \ref{p1gt} and \ref{p2gt}, the histogram representations in Fig. \ref{fig4} show that the RWT distribution has a very long but thin tail. As a result, both the Gumbel and Weibull distributions can capture the probability of the RWT in its high range but not in its low range. Note that the two data sets called Pipe 1 and Pipe 2, with the histograms plotted in Fig. \ref{fig4}, correspond to the RWT maps demonstrated in Figures \ref{p1gt} and \ref{p2gt}, respectively.
In the condition assessment of a critical water main pipe, accurately quantifying the RWT in the low range, particularly the minimum RWT, is of paramount importance for effectively predicting the likely failure of the pipe in the near future. Since the EV distributions do not fit the long, thin tail of the data well, in this work it is proposed to employ GM as the marginal distribution to model the RWT of the CI water mains. The GM distribution is specified as follows.
\begin{equation}
p(t)=\sum_{i=1}^N\gamma_i\mathcal{N}(t\vert \mu_i,\sigma_i),
\end{equation}
where $N$ is the number of components and $\gamma_i$ is the component weight with the constraint $\sum_{i=1}^N\gamma_i=1$; $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i^{th}$ component, respectively. The advantage of the GM distribution over the single-component EV distributions is that it comprises multiple Gaussian components, each of which can model a range of the RWT values. The number of Gaussian components can be optimized using the Akaike information criterion (AIC). In the following, we statistically show how well the GM based marginal distribution fits compared with the Weibull and Gumbel distributions, using two criteria: the Kolmogorov–Smirnov (K-S) statistic test and AIC.
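For concreteness, the following minimal sketch (an illustration only, not the exact R2T2 implementation) shows how such an AIC-based component selection could be realised with scikit-learn; the one-dimensional array \texttt{rwt} of raw thickness readings is assumed to be available.
\begin{verbatim}
# Sketch: fit a GM marginal to raw RWT readings and pick the
# number of components by AIC (lower is better).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gm_marginal(rwt, max_components=10, seed=0):
    X = np.asarray(rwt).reshape(-1, 1)
    best_gm, best_aic = None, np.inf
    for n in range(1, max_components + 1):
        gm = GaussianMixture(n_components=n,
                             random_state=seed).fit(X)
        aic = gm.aic(X)   # AIC = 2P - 2 log(likelihood)
        if aic < best_aic:
            best_gm, best_aic = gm, aic
    return best_gm, best_aic
\end{verbatim}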
\begin{figure*}[t]
\centering
\subfloat[]{\label{p1cdf}
\includegraphics[width=0.8\columnwidth]{p1_cdf.PNG}} \hspace*{4em}
\subfloat[]{\label{p2cdf}
\includegraphics[width=0.8\columnwidth]{p2_cdf.PNG}}
\caption{K-S statistic tests on CDF.}
\label{fig3}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[]{\label{p1pdf}
\includegraphics[width=0.8\columnwidth]{p1_pdf.PNG}} \hspace*{4em}
\subfloat[]{\label{p2pdf}
\includegraphics[width=0.8\columnwidth]{p2_pdf.PNG}}
\caption{AIC tests on PDF.}
\label{fig4}
\end{figure*}
\subsubsection{K-S statistic test}
Given the collected data presented in Section \ref{sec_2}, we conducted three different K-S statistic tests with three different null hypotheses, in which we hypothesized that a data set (i.e. Pipe 1 or Pipe 2) follows the Gumbel, Weibull or GM distribution, respectively. We first visualize the cumulative distribution functions (CDF) for both data sets, as shown in Fig. \ref{fig3}. It can be seen that in the low range of the RWT, both the Gumbel and Weibull distributions are far from the observed data. We then computed the test statistic by
\begin{equation}
KS=\sup_t\vert F_{exp}(t)-F_{obs}(t)\vert,
\end{equation}
where $KS$ is the test statistic while $F_{exp}(t)$ and $F_{obs}(t)$ are the hypothesized CDF and the empirical CDF of the RWT values, respectively. The test statistics for both data sets are summarized in Table \ref{table_1}. Since Pipe 1 has 3080 readings and Pipe 2 has 2968 measurements, at the 99\% confidence level their critical values are approximately 0.0294 and 0.0299, respectively. From Table \ref{table_1}, it can be clearly seen that both null hypotheses involving the Gumbel and Weibull distributions are rejected, while the null hypothesis of employing GM as the marginal distribution for the given data sets cannot be rejected.
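A sketch of the test-statistic computation is given below for illustration; \texttt{cdf} is assumed to be the CDF of a fitted candidate distribution (Gumbel, Weibull or GM), and the $1.63/\sqrt{n}$ approximation of the 99\% critical value is used.
\begin{verbatim}
# Sketch: one-sample K-S statistic against a hypothesised CDF.
import numpy as np

def ks_statistic(rwt, cdf):
    t = np.sort(np.asarray(rwt))
    n = len(t)
    F_exp = cdf(t)                   # hypothesised CDF at data
    F_lo = np.arange(0, n) / n       # empirical CDF below t_i
    F_hi = np.arange(1, n + 1) / n   # empirical CDF at t_i
    ks = max(np.abs(F_hi - F_exp).max(),
             np.abs(F_exp - F_lo).max())
    crit = 1.63 / np.sqrt(n)         # approx. 99% critical value
    return ks, ks > crit             # True => reject hypothesis
\end{verbatim}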
\begin{table}[thb]
\renewcommand{\arraystretch}{1.3}
\caption{K-S STATISTIC TEST RESULTS}
\label{table_1}
\centering
\begin{tabular}{|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|} \hline
& Gumbel & Weibull & Gaussian mixture \\ \hline
Pipe 1 & 0.0715 & 0.0841 & 0.0086 \\ \hline
Pipe 2 & 0.1448 & 0.1773 & 0.0073 \\ \hline
\end{tabular}
\end{table}
\subsubsection{AIC}
While the K-S statistic tests focus on the CDF, the AIC based comparison is obtained through the probability density function (PDF), as can be seen in Fig. \ref{fig4}. Akin to the results obtained in the K-S statistic tests, in those obtained by AIC, GM outperforms both Gumbel and Weibull as the best-fit marginal distribution for the given data, especially in the low range of the RWT. For numerical comparison, we computed the AIC quantities as follows,
\begin{equation}
AIC=2P-2\log(\mathcal{L}),
\end{equation}
where $P$ is the number of parameters and $\mathcal{L}$ is the likelihood of the marginal distribution. The results obtained for each hypothesized marginal distribution applied to both data sets are summarized in Table \ref{table_2}. Clearly, the GM distribution fits the inspected data best, as its AIC values are smaller than those of Gumbel and Weibull. Note that the optimal GM marginal distribution has 5 components for the Pipe 1 data set and 6 components for Pipe 2.
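The comparison can be reproduced along the lines of the following sketch, which fits the two EV candidates with SciPy and reuses a fitted scikit-learn \texttt{GaussianMixture} \texttt{gm} (e.g., from the earlier sketch); the parameter counts shown are those of the fitted one-dimensional models.
\begin{verbatim}
# Sketch: AIC comparison of Gumbel, Weibull and GM fits.
import numpy as np
from scipy import stats

def aic(log_lik, n_params):
    return 2 * n_params - 2 * log_lik

def compare_aic(rwt, gm):
    rwt = np.asarray(rwt)
    g = stats.gumbel_r.fit(rwt)       # (loc, scale)
    w = stats.weibull_min.fit(rwt)    # (c, loc, scale)
    ll_g = stats.gumbel_r.logpdf(rwt, *g).sum()
    ll_w = stats.weibull_min.logpdf(rwt, *w).sum()
    ll_gm = gm.score(rwt.reshape(-1, 1)) * len(rwt)
    n_gm = 3 * gm.n_components - 1    # weights, means, variances
    return {"Gumbel": aic(ll_g, 2),
            "Weibull": aic(ll_w, 3),
            "GM": aic(ll_gm, n_gm)}
\end{verbatim}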
\begin{table}[thb]
\renewcommand{\arraystretch}{1.3}
\caption{AIC RESULTS}
\label{table_2}
\centering
\begin{tabular}{|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|} \hline
& Gumbel & Weibull & Gaussian mixture \\ \hline
Pipe 1 & 7844 & 8053 & 7281 \\ \hline
Pipe 2 & 9044 & 9677 & 7459 \\ \hline
\end{tabular}
\end{table}
\subsection{Gaussian Process}
In order to model the RWT of a CI water main pipe with a GP \cite{Nguyen2017a}, since the sensor measures an average thickness value over an area of 50 mm $\times$ 50 mm, it is assumed that the measurement is located at the centroid of the sensor footprint.
Let us consider $n$ RWT measurements $\bm{t}=(t_1,t_2,\cdots,t_n)^T\in\mathbb{R}^{n}$ recorded at the locations $\bm{l}=(l_1^T,l_2^T,\cdots,l_n^T)^T\in\mathbb{R}^{n\times 2}$, and those recordings can be statistically modelled by
\begin{equation}
\bm{t}(\bm{l})=\bm{r}(\bm{l})+\varepsilon,
\end{equation}
where $\bm{r}=(r_1,r_2,\cdots,r_n)^T\in\mathbb{R}^{n}$ is the vector of random latent variables at $\bm{l}$, and $\varepsilon=(\epsilon_1,\epsilon_2,\cdots,\epsilon_n)^T\in\mathbb{R}^{n}$, where $\epsilon_i$ is independent and identically distributed measurement noise with zero mean and unknown variance $\sigma_n^2$. In order to predict the RWT at unmeasured positions, it is proposed to model $\bm{r}$ by a GP, which is fully specified by its mean and covariance functions. For simplicity, it is assumed that the mean function is constant, averaging all the gathered measurements, while the anisotropic automatic relevance determination Matern kernel was selected as the covariance function $C$ in this work. Note that since the pipe is cylindrical, the collected data is periodic in the circumferential direction. Thus, to warp the inputs for GP modelling and prediction, it is proposed to convert a standard two-dimensional location $l_i$ to a periodic four-dimensional location $p_i$ \cite{Miro2018}. The warped angles are first computed by
\begin{equation}
\theta_i=2\pi\,\mathrm{diag}(\lambda^{-1})\,l_i,
\end{equation}
where $\lambda$ is the vector of the periodic parameters; the periodic location is then formed as $p_i=(\cos\theta_{i,1},\sin\theta_{i,1},\cos\theta_{i,2},\sin\theta_{i,2})^T$. As a result, the adapted Matern covariance function can be specified by
\begin{equation}
\label{cov_f}
C(p_i,p_j\mid\Theta)=\sigma^2\left(1+\sqrt{3}d\right)\exp\left(-\sqrt{3}d\right),
\end{equation}
where $\sigma$ is the RWT standard deviation and
\begin{equation}
d=\sqrt{\sum_{k=1}^{2}\frac{\big\|p_i^{(k)}-p_j^{(k)}\big\|^2}{\eta_k^2}},
\end{equation}
where $p_i^{(k)}$ denotes the pair of warped coordinates of $p_i$ associated with the $k$th direction, and $\eta_k$ is the characteristic length scale in either the circumferential or the longitudinal direction. Note that the hyperparameters $\Theta=(\sigma,\eta_1,\eta_2,\sigma_n)$ can be obtained by the use of the maximum likelihood technique \cite{Nguyen2016a}.
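A compact sketch of the warping and the resulting covariance computation is shown below; the vector \texttt{lam} of periods and the hyperparameters \texttt{sigma} and \texttt{eta} are assumed to be given (e.g., from maximum likelihood estimation).
\begin{verbatim}
# Sketch: periodic input warping and Matern-3/2 covariance
# with per-direction length scales, as specified above.
import numpy as np

def warp(l, lam):
    """2-D locations (n,2) -> 4-D periodic coords (n,4)."""
    theta = 2.0 * np.pi * np.asarray(l) / np.asarray(lam)
    return np.hstack([np.cos(theta), np.sin(theta)])

def matern32(p1, p2, sigma, eta):
    d2 = np.zeros((p1.shape[0], p2.shape[0]))
    for k in range(2):          # 0: circumf., 1: longitudinal
        pair = [k, k + 2]       # (cos, sin) of direction k
        diff = p1[:, None, pair] - p2[None, :, pair]
        d2 += (diff ** 2).sum(-1) / eta[k] ** 2
    d = np.sqrt(d2)
    s3 = np.sqrt(3.0)
    return sigma ** 2 * (1 + s3 * d) * np.exp(-s3 * d)
\end{verbatim}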
\subsection{Prediction of Remaining Wall Thickness at Unscanned Pipe Surfaces}
To predict RWT at the $m$ unmeasured locations $\bm{l^*}=((l_1^*)^T,(l_2^*)^T,\cdots,(l_m^*)^T)^T\in\mathbb{R}^{m\times 2}$ (and corresponding periodical positions $\bm{p^*}=((p_1^*)^T,(p_2^*)^T,\cdots,(p_m^*)^T)^T\in\mathbb{R}^{m\times 4}$) given the recorded data $\bm{t}(\bm{p})$, we compute the posterior distribution of the learned GP model with the means $m_{\bm{p^*}}$ and covariances $\Sigma_{\bm{p^*}}$ as follows,
\begin{equation}
m_{\bm{p^*}}\mid\bm{t}(\bm{p})=\mu(\bm{p^*})+\Sigma_{\bm{pp^*}}^T(\Sigma_{\bm{pp}}+\sigma_n^2I)^{-1}\left(\bm{t}(\bm{p})-\mu(\bm{p})\right)
\end{equation}
\begin{equation}
\Sigma_{\bm{p^*}}\mid\bm{t}(\bm{p})=\Sigma_{\bm{p^*p^*}}-\Sigma_{\bm{pp^*}}^T(\Sigma_{\bm{pp}}+\sigma_n^2I)^{-1}\Sigma_{\bm{pp^*}},
\end{equation}
where $\mu(\bm{p^*})$ ($\mu(\bm{p})$) and $\Sigma_{\bm{p^*p^*}}$ ($\Sigma_{\bm{pp}}$) are the vector of means and matrix of covariances at the positions $\bm{p^*}$ ($\bm{p}$), respectively. $\Sigma_{\bm{pp^*}}$ is the matrix of covariances correlating the RWT variables at $\bm{p^*}$ and $\bm{p}$ while $I$ is an identity matrix. All the elements of the covariance matrices are computed by (\ref{cov_f}).
Note that since the outputs of the GP predictions follow a standard normal distribution, to infer realistic RWT values at the unscanned locations we apply the inverse CDF transformation using the learned marginal distribution parameters, as discussed in Section \ref{sec_31}.
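For illustration, the posterior computation and the back-transformation could be sketched as follows; \texttt{K} denotes the covariance function above, \texttt{y} the transformed (standard normal) readings at the warped inputs \texttt{p}, and \texttt{gm\_inv\_cdf} an (assumed available) numerical inverse of the fitted GM CDF.
\begin{verbatim}
# Sketch: GP posterior prediction and inverse-CDF mapping of
# the standard-normal outputs back to RWT values.
import numpy as np
from scipy.stats import norm

def gp_predict(p, y, p_star, K, sigma_n):
    Kpp = K(p, p) + sigma_n ** 2 * np.eye(len(y))
    Kps = K(p, p_star)
    L = np.linalg.cholesky(Kpp)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y - y.mean()))
    mean = y.mean() + Kps.T @ a        # posterior mean
    v = np.linalg.solve(L, Kps)
    cov = K(p_star, p_star) - v.T @ v  # posterior covariance
    return mean, cov

def to_rwt(z, gm_inv_cdf):
    """Standard-normal GP outputs -> expected RWT values."""
    return gm_inv_cdf(norm.cdf(z))
\end{verbatim}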
\begin{figure*}[t]
\centering
\subfloat[]{\label{p1dc}
\includegraphics[scale=0.36]{Pipe1_DC.PNG}} \hspace*{1.5em}
\subfloat[]{\label{p2dc}
\includegraphics[scale=0.36]{Pipe2_DC.PNG}}\\
\subfloat[]{\label{p1pre}
\includegraphics[scale=0.36]{Pipe1_Pre.PNG}} \hspace*{1.5em}
\subfloat[]{\label{p2pre}
\includegraphics[scale=0.36]{Pipe2_Pre.PNG}}\\
\subfloat[]{\label{p1gt}
\includegraphics[scale=0.36]{Pipe1_GT.PNG}} \hspace*{1.5em}
\subfloat[]{\label{p2gt}
\includegraphics[scale=0.36]{Pipe2_GT.PNG}}
\caption{Remaining wall thickness maps: Collected data, predicted results and ground truth. Pipe 1 (left column) and Pipe 2 (right column).}
\label{fig5}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[]{\label{p1rmse}
\includegraphics[width=0.76\columnwidth]{rmse_p1.PNG}} \hspace*{3em}
\subfloat[]{\label{p2rmse}
\includegraphics[width=0.74\columnwidth]{rmse_p2.PNG}}
\caption{Root mean square errors.}
\label{fig6}
\end{figure*}
\section{Realistic Experimental Results}
\label{sec_4}
To demonstrate the effectiveness of the GM based marginal distribution on the final prediction results, we conducted experimental tests on the measured data. In other words, the real data sets illustrated in Figures \ref{p1gt} and \ref{p2gt}, obtained after R2T2 ran 5 times along the water pipe, were utilized as the ground truth for comparison. It was presumed that the tool could run only once along the water main, recording the data shown in Figures \ref{p1dc} and \ref{p2dc} for Pipe 1 and Pipe 2, respectively. We expected the system to predict the RWT on the rest of the pipe by filling in the blank areas in Figures \ref{p1dc} and \ref{p2dc}. To this end, a GP model was trained on the sensor readings of each pipe after they were converted from the raw measurements to standard normally distributed values using the marginal distribution. The learned GP model was then utilized to predict the corresponding values at the unscanned locations before they were converted to the expected RWT values based on the marginal distribution parameters. The predicted RWT results are visualized in Figures \ref{p1pre} and \ref{p2pre} for Pipe 1 and Pipe 2, respectively. It can be clearly seen that the predictions of the GP model based on the GM marginal distribution are highly comparable to the ground truth in Figures \ref{p1gt} and \ref{p2gt}. More importantly, though R2T2 only collected six lines of data, the prediction could effectively reconstruct the corrosion patches with high accuracy compared to the ground truth, which is crucial in the condition assessment of critical water main pipes.
For numerical details, we conducted three different tests in which R2T2 was supposed to run only once along the water pipe, with sensor 1 starting from line 1 (the collected data is illustrated in Figures \ref{p1dc} and \ref{p2dc}), line 2, or line 3, respectively, and the other sensors shifted accordingly. In each test on each pipe, three GP models were trained on the data converted from the raw measurements using Gumbel, Weibull and GM as the marginal distribution, respectively. The predicted results, as outputs of the GP models, were then converted to RWT values based on their corresponding marginal distribution parameters. Differences between the predicted RWT and the ground truth were used to compute the root mean square errors (RMSE), which are shown in Fig. \ref{fig6} for both data sets. The results of the six tests summarized in Fig. \ref{fig6} clearly show that the GM based marginal distribution outperforms its two counterparts, Gumbel and Weibull. If these RMSE values are acceptable, the scanning speed of R2T2 can be improved by a factor of five compared with full coverage, which is significantly useful for the condition assessment of critical CI water main assets.
\section{Conclusions}
\label{sec_5}
The paper has discussed an efficient approach for rapidly inspecting the RWT of a metallic water pipeline in NDE. It has been proposed to exploit the GM distribution to represent the RWT measurements collected by the PEC sensors on part of the water main pipe and to convert them to standard normally distributed data for training a GP model. The learned GP model is then utilized to predict the RWT values in the unscanned areas of the main. The results obtained by the proposed method on the real-life dataset show that the GM distribution fits the measurements better than the other distributions, including Gumbel and Weibull. More importantly, accurately modelling the RWT readings leads to better RWT predictions, which significantly improves the inspection speed of R2T2.
\balance
\bibliographystyle{IEEEtran}
\section{Acknowledgement}
The work described in this paper is partially funded by the US National Science Foundation grant CNS-1931962.
\section{Human-Multi-UAV Collaborations}
\label{sec:motivating_scenarios}
Several research groups have explored the application of UAVs for specific emergency scenarios such as surveying and assessing damage following an earthquake \cite{XU201422} or volcanic eruption \cite{DEBENI2019250}, investigating maritime spills \cite{DOOLY2016528}, delivering defibrillators \cite{10.1145/2851581.2892288}, and mapping wildfires \cite{Athanasis19}. These applications all involve human operators interacting with UAVs in direct or indirect ways to plan routes, capture video, or to supervise varying degrees of autonomous UAV behavior -- typically through the use of a graphical user interface (GUI). Researchers have described other forms of interactions \cite{hdi-survey}, including haptic and voice interfaces \cite{funk18,cauchard15}, but these are infrequently used in emergency response applications.
\subsection{DroneResponse: A Case Environment}
In this paper, we primarily draw examples from our \emph{DroneResponse} system, which we are developing to enable multiple collaborating, semi-autonomous UAVs to support diverse emergency response missions such as fire surveillance, search-and-rescue, and environmental sampling \cite{droneresponse,DBLP:conf/icse/Cleland-HuangVB18,DBLP:conf/euromicro/VierhauserCBKRG18}. Figure~\ref{fig:use-case} depicts a river search-and-rescue use-case in which multiple UAVs are deployed to find a victim on the river and to potentially aid emergency responders in delivering a flotation device.
DroneResponse represents a socio-technical cyber-physical system (CPS) in which multiple humans and multiple semi-autonomous UAVs engage in a shared emergency response mission. UAVs are designed to make autonomous decisions based on their current goals, capabilities, and current knowledge.
They build and maintain their knowledge of the mission through directly observing the environment (e.g., through use of their onboard sensors) and through receiving information from other UAVs, central control, and human operators \cite{wooldridge1997agent}.
UAVs then work to achieve their goals through enacting a series of tasks \cite{pokahr2005jadex}.
Humans interact with UAVs through various GUIs to create and view mission plans, monitor mission progress, assign permissions to UAVs, provide interactive guidance, and maintain situational awareness. Bidirectional communication is crucial for enabling both humans and UAVs to complement each other's capabilities during the mission. An example of human-UAV collaboration is depicted in Figure~\ref{fig:droneResponse}, which shows a UI developed for the DroneResponse system. In this example, the UAV has detected a candidate victim in the water and autonomously started tracking the victim, while simultaneously requesting confirmation from the human incident commander that the detected object is actually the victim.
\begin{figure}
\centering
{\includegraphics[width=0.98\columnwidth]{figures/Models-UseCase1.pdf}}
\caption{A partial use case description of the DroneResponse River search-and-rescue scenario.}
\label{fig:use-case}
\vspace{10pt}
\includegraphics[width=0.98\columnwidth]{figures/DroneResponse_Cropped.png}
\caption{A human-UAV interaction point in which a UAV has detected a candidate victim and requested human confirmation.}
\label{fig:droneResponse}
\vspace{-12pt}
\end{figure}
\subsection{Human-UAV Interactions}
\label{sec:human-uav-interactions}
DroneResponse is being developed in close collaboration with emergency responders through engagement in a series of brainstorming activities, interviews, participatory design sessions, and early field-tests~\cite{droneresponse,DBLP:conf/icse/Cleland-HuangVB18,DBLP:conf/re/Cleland-HuangV18}.
The following concrete examples of human-UAV interactions, taken from the river search-and-rescue example, were identified as part of this collaborative design process. We use these examples throughout the remainder of the paper to motivate and contextualize our modeling activities. \newline \vspace{-8pt}
\noindent{\bf Scenario S1 -- Planning a rescue strategy:~}
When a UAV identifies a potential victim in the river, the victim's coordinates are sent to the mobile rescue unit. However, the UAV must also decide whether to request delivery of a flotation device by a suitably equipped UAV or whether it is sufficient to simply continue streaming imagery of the victim until human rescuers arrive. The UAV makes this decision by estimating the arrival time of the rescue boat versus the time to deliver a flotation device. However, humans can contribute additional information to the decision -- for example, by modifying the expected arrival time of the rescue boat, or by inspecting the streamed imagery and determining whether the victim would be able to receive the flotation device if it were delivered (e.g., the victim is conscious and not obscured by overhead branches) and is in need of the device (e.g., not having a safe waiting position on a rock or tree branch). This is an example of a \emph{bidirectional exchange of knowledge} between multiple humans and multiple UAVs, where the first UAV shares the victim's coordinates and streams imagery, humans on the boat estimate their ETA and if necessary update the UAV's situational awareness, the incident commander decides whether a flotation device could be used effectively if delivered on time, and if needed, a second UAV performs the delivery. The scenario illustrates many aspects of human-agent collaboration including \emph{knowledge sharing} and \emph{human intervention}. \newline \vspace{-8pt}
\noindent{\bf Scenario S2 -- Sharing environmental information:~} In river search-and-rescue missions, victims tend to get trapped in `strainers' (i.e., obstruction points) or tangled in tree roots on outer banks. These areas require closer inspection. While UAVs have onboard vision and will attempt to identify `hotspots', human responders can directly provide this information to multiple UAVs based on their observation of the scene. This enables UAVs to collaboratively adapt their flight plan so that they prioritize specific search areas, or adjust their flight patterns to reduce speed or fly at lower altitudes in order to render higher-resolution images of priority search areas. This interaction scenario is similar to the previous one, except that it is primarily uni-directional with information passed from humans to UAVs. \newline \vspace{-8pt}
\noindent{\bf Scenario S3 -- Victim confirmation:~} The UAV's AI model uses its onboard computer vision to detect potential victims. When the confidence level surpasses a given threshold, the UAV will autonomously switch to tracking mode and broadcast this information to all other UAVs. If the UAV autonomy level is low, it requests human confirmation of the victim sighting before it starts tracking. Human feedback is sent to the UAV and propagated across all other UAVs. In this scenario the \emph{UAV elicits help from the human} and the human responds by confirming or refuting the UAV's belief that it has sighted a victim or by suggesting additional actions. For example, if the detected object is partially obscured, the human might ask the UAV to collect additional imagery from multiple altitudes and angles. \newline \vspace{-8pt}
\noindent{\bf Scenario S4 -- Support for UAV coordination:~}
In an extension to the previous scenario, multiple UAVs might simultaneously detect a victim. They must then use onboard computer vision and their own estimated coordinates of the detected object to determine whether they have detected the same object and to plan a coordinated response. However, this determination may be more complicated in poor visibility environments with weak satellite signals and low geolocation accuracy (e.g., in canyons). Human responders may need to intervene in the UAV's planning process by helping determine whether the sighted objects are valid and unique, and if necessary selecting the most appropriate UAV for the tracking task. This is an example in which the human \emph{intervenes in the UAV's autonomy} and potentially provides \emph{direct commands}, assigning a specific UAV to the task. \newline \vspace{-8pt}
\noindent{\bf Scenario S5 -- Prohibiting normal behavior:~} Most UAVs come with built-in safety features so that they autonomously land in place or return to launch (RTL) when their battery becomes low or a malfunction is detected. In the case of a low battery, the DroneResponse system initially raises a low-battery alert in the UI, and eventually initiates the RTL command. A human responder might modify the UAV's permissions and \emph{prohibit the UAV from transitioning to RTL} if the UAV is conducting a critical task. An example that arose from discussions with the Navy was the use of floating drones for man-overboard scenarios. If a UAV found a victim, and no other UAV or human rescue unit were in the vicinity, the RTL feature would be deactivated automatically. This meant that when the batteries lost power, the UAV would land in the water and serve as a search beacon. However, for many reasons, a human might wish to override the default deactivation of the RTL, thereby reactivating the UAV's RTL autonomy.
These motivating examples provide the foundation for our discussion of human-on-the-loop collaboration patterns.
\section{Analysis of Collaboration Actions}
\label{sec:actions}
Agents within a human-on-the-loop (HotL) system are empowered to execute tasks independently with humans serving in a purely supervisory role~\cite{scharre2015introduction}. However, as our previous examples have shown, humans and agents continually share information in order to maintain bidirectional situational awareness and to work collaboratively towards achieving mission goals. Agents report on their status (e.g., remaining battery levels, GPS coordinates, and altitude), and they explain their current plans, actions, and autonomous decisions whenever requested by humans. Humans can directly intervene in the agents' behavior by providing additional information about the environment, and agents can then leverage this information to make more informed decisions. Humans also respond to direct requests for feedback -- for example, to confirm a victim sighting as previously discussed. They can also provide direct commands (e.g., RTL or stop tracking), or can explicitly modify an agent's permissions in order to enhance or constrain the agent's autonomous behavior. These types of interactions are depicted in Figure~\ref{fig:HumanCollaborationPoints}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figures/CollaborationPoints.pdf}
\caption{Humans and agents collaborate through shared knowledge and through human interventions in the agents' autonomy.}
\label{fig:HumanCollaborationPoints}
\end{figure}
\subsection{Situational Awareness}
Situational Awareness (SA) is the ability of the user to perceive the environment (Level-1 SA), to understand the reasoning behind the current state of the environment (Level-2 SA), and finally, to project how the situation could evolve in the future (Level-3 SA)~\cite{ensdley_SA}.
Humans acquire knowledge of the situation from diverse sources such as their physical interactions with the agents (e.g., visual observations and sounds), observations of the current weather, radio communication with on-scene first responders, and finally through information shared through the systems' GUI. Humans combine knowledge from all of these sources to create a mental model of the current status of the mission. At the same time, autonomous agents, such as UAVs, develop their own situational awareness using their onboard sensors and through collating information shared by other autonomous agents and by humans. Both humans and autonomous agents then use their shared knowledge of the environment to formulate and enact plans to collaboratively achieve their mission goals.
In a HotL environment, agents make many autonomous decisions; however, in order for humans to supervise the mission and maintain full situational awareness, the agents must explain their behavior when requested by a human. The explanation should include the key \textbf{information} (i.e., the agent's situational awareness at the time the decision was made), the autonomous decision (e.g., switching modes or changing altitude), and a human-understandable rationale for the decision. Providing {\bf rationales for all decisions and subsequent behavior} is therefore critical for humans to achieve situational awareness. If a human were to disagree with a decision or the logic of its supporting rationale, they could monitor the agents more closely, temporarily lower their autonomy levels, or make longer-term adjustments (e.g., retraining a computer vision model) for future missions.
\subsection{Human Intervention}
At times, humans may need to intervene in the autonomy of an agent in order to influence and improve the outcome of the joint mission. They can do so in several different ways.
Previous studies~\cite{loftin2016learning, thomaz2006reinforcement} demonstrate that a {\bf feedback loop} can help agents improve their future performance by fine-tuning the algorithmic parameters that drive the agent's autonomy. For example, feedback on a candidate victim detected by the computer vision model could be used to retrain the model or refine its configuration parameters, thereby potentially reducing false positives or false negatives. In addition, users can issue commands to immediately enact changes in the behavior of the UAV. For example, a human could directly command a UAV to fly to a specific waypoint to check out a report received on social media.
Finally, the human may choose to {\bf raise or lower autonomy levels} of the agent. Autonomy levels, defined as the extent of an agent's independence while acting autonomously in the environment, can be expressed through role assignments or through specific permissions within a role. For example, a UAV that is permitted to track a victim without first obtaining human confirmation has a higher autonomy level than one which needs explicit human confirmation before tracking. Humans tend to establish autonomy levels based on their trust in the agent's capabilities. For example, a UAV exhibiting a high degree of accuracy in the way it classifies objects increases human trust, and as a result, the human might grant the UAV additional permissions. On the other hand, the human operator might revoke permissions, thereby lowering autonomy levels, if the UAV were operating in weather conditions for which the computer vision model had not been appropriately trained and for which accuracy was expected to be lower than normal.
\section{Conclusion}
\label{sec:conclusion}
This paper describes the model-driven analysis and specification of human multi-agent interaction requirements for a human-on-the-loop system. The human multi-agent interaction types, the proposed meta-model, and the structured probing questions assist in modeling and formally specifying the complex human multi-agent interactions. We have demonstrated its use through formally specifying human interaction and intervention points for two distinct scenarios in which multiple semi-autonomous UAVs are deployed in emergency response missions. Our future work will involve implementing and evaluating our models with first-responders with physical UAVs in outdoor field-tests.
\section{Application: Structural Fire Support}
\label{sec:example}
As previously described, we constructed our meta-model based on examples from the river-rescue and other scenarios shown in Table \ref{tab:usecases}. In this section we briefly illustrate that the proposed meta-model and the probing questions can be used to specify requirements for other human multi-agent use-cases, such as structural fire support. We collected an initial set of requirements for this scenario during a series of brainstorming sessions with the South Bend firefighters in the spring of 2019. The firefighters had already used manually-flown UAVs to support their firefighting efforts, and our brainstorming sessions focused on how they would extend their current use-case to leverage semi-autonomous UAVs as part of our DroneResponse system.
For the purposes of this paper, we leverage the feedback we acquired during the previous brainstorming sessions to retroactively answer the probing questions and to provide an additional example of modeling human interaction requirements.
Figure \ref{fig:fire_mokeup} shows a visionary mockup used in our original brainstorming session to encourage discussion about the use of UAVs in firefighting. The firefighters identified two primary use cases. First, they wanted to use UAVs to create thermal maps of the building -- focusing especially on detecting hotspots on roofs as many firefighters have been injured when a roof has collapsed without warning due to an undetected internal fire. They even suggested that UAVs could mark hotspots with lasers. Second, they proposed using UAVs to search for victims through windows and smoke using thermal cameras.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{figures/FireSupport.pdf}
\caption{A visionary prototype that we used during brainstorming meetings with Fire Fighters to trigger ideas about the use of DroneResponse in fighting structural fires.}
\label{fig:fire_mokeup}
\end{figure}
To demonstrate that our meta-model can be applied to this very different scenario, we focus on a specific firefighting scenario in which multiple UAVs work collaboratively to create a 3D model of the building. At the start of the mission, the UAVs collaboratively create a plan for surveying the building. For example, depending upon the size and layout of the building, weather conditions, and the number of available UAVs, they could work independently on different parts (sides, roof) of the building, prioritize specific areas, fly around the building in either direction, or even work together on a single section at distinct altitudes. In the scenario that we model, the UAVs devise a specific mapping plan; however, firefighters observe smoke coming from a different area of the building and update the knowledge base, which leads the UAVs to redesign their strategy. In this example, the firefighters do not issue a direct command, but instead provide additional information and allow the UAVs to autonomously adapt their plans. As a result, one of the UAVs assumes a new role of using thermal imagery to search for victims through windows in the area where smoke has been detected.
The probing questions enable us to explore this type of scenario. \textit{PQ11}, \textit{PQ12}, and \textit{PQ13} identify the required \texttt{AutonomousDecision}s and the required \texttt{Information} to create the 3D model of the building autonomously. \textit{PQ3} and \textit{PQ4} elicit human multi-UAV interaction points such as \textit{fire smoke detection by humans} while UAVs are engaged in mapping the building. \textit{PQ6} identifies potential flight adaptation patterns and roles assumed by the UAVs after receiving updated information about the smoke. Answers to the probing questions lead us to construct the conceptual model and sequence diagram depicted in Figure~\ref{fig:sequence_fire_map}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/M2_fire_map.pdf}
\caption*{a: Conceptual model showing entities involved in the human multi-agent interaction.}
\includegraphics[width=\linewidth]{figures/sequence_fire_map.pdf}
\caption*{b: Sequence diagram showing human-agent collaborations.}
\caption{Multiple UAVs collaborate to create the 3D model of the building. When the human operator shares information about smoke observed at one end of the building, UAV-1 starts capturing thermal images to search for victims through the smoke.}
\label{fig:sequence_fire_map}
\end{figure}
\section{Introduction}
The deployment of a swarm of Unmanned-Aerial Vehicles (UAVs) to support human first responders in emergencies such as river search-and-rescue, hazardous material sampling, and fire surveillance has earned significant attention due to advancements in the robotics and Artificial Intelligence (AI) domains \cite{torresen2018review,carpentiero2017swarm}. Advanced AI models can assist UAVs in performing tasks such as creating a 3D heat-map of a building, finding a drowning person in a river, and delivering a medical device, while robotics autonomy models enable UAVs to automatically plan their actions in a dynamic environment to achieve a task \cite{chung2018survey,hu2020deep}. However, despite these advances, the deployment of such systems remains challenging due to uncertainties in the outcome of the AI models \cite{klas2018uncertainty}, rapid changes in environmental conditions, and emerging requirements for how a swarm of autonomous UAVs can best support first responders during a mission.
The UAVs of next-generation emergency response systems will be capable of sensing, planning, reasoning, sharing, and acting to accomplish their tasks \cite{nahavandi2017trusted}. These UAVs will not require humans-in-the-loop to make all key decisions, but rather will make independent decisions with humans-on-the-loop setting goals and supervising the mission \cite{fischer2017loop}. For example, in a multi-UAV river search-and-rescue mission, the autonomous UAV can detect a drowning person in the river utilizing the on-board AI vision models (\emph{sensing}) and ask another UAV to schedule delivery of a flotation device to the victim's location (\emph{planning} and \emph{reasoning}). These UAVs collaborate to share (\emph{sharing}) the victim's location and subsequently deliver the flotation device (\emph{acting}). These intelligent UAVs also send the victim's location to emergency responders on the rescue-boat so that they can perform the physical rescue operation. Autonomous systems of such complexity demand humans and intelligent agents to collaborate as a human-agent team~\cite{bellamy2017human,ClelandHuang-iHDI2020}.
A well-known issue in designing a system comprising humans and autonomous agents is identifying how they can collaborate and work together to achieve a common goal~\cite{hancock1998allocating}. The challenges in human multi-agent collaboration include identifying when and how humans should adjust the autonomy levels of agents, identifying how autonomous agents should adapt and explain their current behavior to maintain humans' trust in them, and finally, identifying different ways to maintain situational awareness among humans and all autonomous agents. In this paper we propose a humans-on-the-loop solution in which humans maintain oversight while intelligent agents are empowered to autonomously make planning and enactment decisions. We first identify common interaction patterns in which humans collaborate with autonomous agents, and then leverage those patterns to construct a human interaction meta-model. In addition, we define a set of `probing' questions which can be used to elicit, analyze, and ultimately specify requirements for human multi-UAV interactions in specific emergency response missions.
This paper makes three primary contributions. First it motivates the problem of human multi-agent interaction through examples drawn from a concrete mission scenario. Second, it provides a meta-model to describe human interactions with multiple agents, and finally it presents a set of requirements-related guiding questions for eliciting and then modeling specific instances of these human multi-agent interactions.
The paper is organized as follows: Section~\ref{sec:motivating_scenarios} presents examples of human multi-agent interactions drawn from the river-rescue scenario and Section~\ref{sec:actions} presents an analysis of these interactions. Section~\ref{sec:meta-model} introduces a human-on-the-loop meta-model for describing human multi-agent interactions. Section \ref{sec:requirements} then describes our process for eliciting requirements, mapping them to elements of the meta-model, and then specifying requirements by deriving instances of the meta-model for each identified human multi-agent interaction type. Section \ref{sec:example} discusses an application of our work, and finally Sections \ref{sec:threats},~\ref{sec:related}, and \ref{sec:conclusion} discuss threats to validity and related work, and draw conclusions.
\section{Meta-Model for Human-UAV Interactions}
\label{sec:meta-model}
We constructed a meta-model to define the vocabulary of the domain of human multi-agent interactions. The meta-model includes domain-specific concepts and establishes rules for how those types of concepts are associated with one another. This allows us to express specific instances of human multi-agent interaction in conceptual models and reuse the concepts we identified to express how humans and multiple agents will interact with each other in specific scenarios.
\begin{table}[h!]
\centering
\caption{Additional Use-Cases from which human multi-UAV interaction patterns were identified and analyzed}
\label{tab:usecases}
\addtolength{\tabcolsep}{-4.5pt}
\small
\begin{tabular}{L{.8cm}L{4cm}L{4cm}@{}}
\hline
{\bf ID} & {\bf Use Cases} & {\bf Engaged Stakeholders}\\ \hline
UC1 & River Search \& Rescue &South Bend Firefighters\\
UC2 & Defibrillator Delivery &DeLive, Cardiac Science\\
UC3 & Traffic Accident surveillance&South Bend Firefighters\\
UC4 & Water Sampling &Environmental Scientists\\
UC5&Man overboard& US Navy\\
\hline
\end{tabular}
\end{table}
The elements of the meta-model (cf.~Fig.~\ref{fig:human-on-the-loop-meta-model}) were derived from our analysis of human multi-UAV interactions in the river-rescue scenarios and from the additional scenarios summarized in Table \ref{tab:usecases}. The meta-model depicts frequently occurring concept types and their associations, and was designed iteratively through multiple refinements in which we recursively validated the model against the specific scenarios described in Section \ref{sec:motivating_scenarios}. Our meta-model includes the following elements:
A \texttt{Role} defines the complex behaviors that agents perform autonomously. Complex behaviors of a UAV include takeoff, search, track, deliver, and RTL.
An \texttt{AutonomousDecision} entity uses algorithms that leverage \texttt{Information} in the \texttt{KnowledgeBase} to make decisions. The complex behaviour of a \texttt{Role} is defined through one or several such decisions.
For example, there are many cases in which a single agent must serve as a \textit{leader}, responsible for coordinating behavior of its \textit{follower}s. During a leader election, an \texttt{AutonomousDecision} entity could select a new leader from the set of followers, thereby enabling the system to switch leaders without the need for human intervention. Upon making a decision, an \texttt{AutonomousDecision} entity generates output \texttt{Information} including a rationale for its decision, which could later be used to generate a human-readable explanation.
Entities of type \texttt{Permission} are used by \texttt{AutonomousDecision}s to decide if the agents are allowed to make a specific decision. For example, an \texttt{AutonomousDecision} entity checks whether the human responders have allowed the system to automatically select a replacement if needed during a victim tracking activity. \texttt{Role}s are associated with a set of permissions defining the allowed behaviors of the agent which can be modified at run-time.
A \texttt{KnowledgeBase} entity contains current environmental information as well as information about the state of a single agent or multiple agents. An \texttt{AutonomousDecision} entity uses the \texttt{Information} stored in the \texttt{KnowledgeBase} for decision making. A human can use the information in the \texttt{KnowledgeBase} entity to gain situational awareness of the mission.
Entities of type \texttt{HumanInteraction} allow humans to intervene in the autonomy of the agents or to share their explicit knowledge of the environment.
The three entity types \texttt{ProvidedInformation}, \texttt{ChangedPermission}, and \texttt{IssuedCommand} provide different ways for humans to interact with the system. The \texttt{ProvidedInformation} entity adds \texttt{Information} to the \texttt{KnowledgeBase} of the system to maintain consistent knowledge among multiple agents. Humans can use interventions of type \texttt{ChangedPermission} to raise or lower the autonomy of one or more agents, based on their trust in the agents' ability to make correct decisions within the current environment. Finally, an \texttt{IssuedCommand} entity allows humans to take control over the autonomous behavior of the agents. For example, if a UAV loses communication with the other UAVs in the mission and fails to deliver the flotation device when it is needed, a human can send a direct command that sets the current \texttt{Role} of the UAV to \textit{deliver flotation device}.
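To make the vocabulary concrete, the entities could be expressed in code roughly as follows; the fields and method signatures are illustrative assumptions rather than part of the meta-model itself.
\begin{verbatim}
# Sketch: meta-model entities as Python dataclasses.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Information:
    key: str
    value: object
    source: str          # e.g. "sensor", "UAV-2", "operator"

@dataclass
class KnowledgeBase:
    facts: Dict[str, Information] = field(default_factory=dict)
    def update(self, info: Information):
        self.facts[info.key] = info

@dataclass
class Permission:
    name: str            # e.g. "track_without_confirmation"
    granted: bool = True

@dataclass
class AutonomousDecision:
    name: str
    logic: Callable[[KnowledgeBase], object]
    requires: List[Permission] = field(default_factory=list)
    def decide(self, kb: KnowledgeBase):
        if all(p.granted for p in self.requires):
            outcome = self.logic(kb)   # decision + rationale
            kb.update(Information(self.name, outcome, "decision"))
            return outcome
        return None                    # blocked: defer to human

@dataclass
class Role:
    name: str            # e.g. "search", "track", "deliver"
    decisions: List[AutonomousDecision] = field(default_factory=list)
\end{verbatim}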
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ModelM2.pdf}
\caption{Meta-model for human-on-the-loop interaction in a multi-agent mission.}
\label{fig:human-on-the-loop-meta-model}
\end{figure}
It is noteworthy that neither humans nor agents are represented explicitly in our meta-model. The underlying implicit assumption is that roles are assigned to agents according to the capabilities of each UAV, and that UAVs can assume new roles according to the state of the environment, constrained by the permissions associated with their capabilities. Furthermore, humans and agents have access to one or several instances of the distributed \texttt{KnowledgeBase}, which stores information acquired from the environment, from multiple UAVs, and from humans. The reason for leaving these aspects implicit is that the domain of our model is human multi-UAV interaction, and it is not relevant to the meta-model to specify which concrete UAV has assumed each specific role.
\section{System Design}
In this section, we describe the design of collaboration actions for goal-oriented agents following the Belief-Desire-Intention (BDI) model. BDI agents are one of the most popular agent models inspired by the theory of human practical reasoning. Different architectures exist to implement BDI agents, such as JACK \cite{howden2001jack}, JADEX \cite{pokahr2005jadex}, and AgentSpeak \cite{rao1996agentspeak}. JADEX provides an open-source Java environment to simulate intelligent agents. We chose to implement the core execution model of JADEX in Python so that we can easily interface the implementation with our existing system. The \textit{beliefs} of BDI agents represent their knowledge of the environment, and multiple agents communicate to exchange their beliefs with each other. \textit{Desires} represent the states of the agent that it wishes to achieve; the commitment to a particular desire represents a goal of the agent. Each goal includes plans that an agent adopts to pursue the goal, and intentions are the series of actions in a plan that an agent executes to achieve the goal successfully. We now map these concepts to the human-on-the-loop collaboration actions discussed in Section \ref{sec:actions}.
\textbf{Share knowledge}: Beliefs represent the knowledge base of BDI agents. Therefore, in our implementation, we categorize beliefs into three parts based on their level of awareness about the environment. These levels correspond to the different levels of situational awareness. Level-1 belief variables represent the existence of objects in the environment, such as the locations of firefighters and rescue boats, and the assigned search area in a river search-and-rescue mission. Level-2 beliefs represent the context of the situation, such as whether a victim has been identified in the river. Finally, level-3 belief variables include probabilistic estimates about the future state of the environment, such as the estimated arrival times of rescue boats and flotation devices. All these belief variables constitute the knowledge base and situational awareness of the agent. In our implementation, human actions that share knowledge of the environment result in updates to the agents' beliefs at different levels. These run-time updates to the belief state trigger the agents to adapt their behavior accordingly.
\textbf{Feedback and Commands}: \textit{Plans} represent the behavioral aspect of a BDI agent. An agent might have multiple plans to achieve a single goal; the agent therefore uses its reasoning engine to select one of the applicable plans. A change in the goal of the agent triggers it to re-evaluate the plans and select one that is appropriate for achieving the new goal. In our system, we implement feedback and commands as influences on the goal. A command to the BDI agent directly manipulates the goal of the agent, which results in the adoption of new intents or actions. For example, a UAV currently searching an area, upon receiving a command to replace another UAV, aborts its current goal and adopts a new one. On the other hand, feedback from a human operator on the current execution of actions updates the parameters of the UAV's goals and intents. This parameter update is intended to improve the execution of a goal.
\textbf{Autonomy Levels}: The dispatcher in the JADEX execution model is responsible for selecting an appropriate plan in response to an event. The selection of a plan for execution is done in two steps. First, the dispatcher generates a list of applicable plans by matching the pre-conditions of every plan against the event. Meta-level reasoning in the next step selects an appropriate plan for execution. In scenario S5, the applicable plans for the event \emph{low battery} while tracking a victim are either to continue tracking the victim or to initiate RTL for safety. In such scenarios, the human action of lowering the autonomy level dynamically modifies, in our implementation, the applicability of a plan for an event. This restricts the dispatcher from selecting RTL as an applicable plan while tracking the victim. The human operator can thus dynamically modify the applicability of plans to apply constraints on the mission.
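A minimal sketch of this mechanism is given below; the plan names, the event representation, and the \texttt{set\_applicability} interface are simplifying assumptions for illustration.
\begin{verbatim}
# Sketch: plan dispatching with run-time applicability, so a
# human can block RTL while a victim is tracked (scenario S5).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Plan:
    name: str
    precondition: Callable[[dict], bool]  # match against event
    applicable: bool = True               # toggled at run-time

class Dispatcher:
    def __init__(self, plans: List[Plan]):
        self.plans: Dict[str, Plan] = {p.name: p for p in plans}
    def set_applicability(self, name: str, allowed: bool):
        self.plans[name].applicable = allowed  # human action
    def dispatch(self, event: dict) -> List[Plan]:
        return [p for p in self.plans.values()
                if p.applicable and p.precondition(event)]

plans = [Plan("rtl", lambda e: e.get("battery") == "low"),
         Plan("track", lambda e: e.get("tracking", False))]
d = Dispatcher(plans)
d.set_applicability("rtl", False)  # lower autonomy: no RTL
print([p.name for p in d.dispatch(
    {"battery": "low", "tracking": True})])   # -> ['track']
\end{verbatim}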
\section{Related Work}
\label{sec:related}
The effectiveness of the HotL is highly dependent upon the human multi-agent interaction mechanisms built into the system as well as the flexibility of the autonomy models.
To this end, several researchers have explored techniques for exposing the intent, actions, plans, and associated rationales of an autonomous agent \cite{chen2018situation}. Other researchers have explored ways to improve overall performance by dynamically adapting agents' autonomy levels based on the estimated cognitive workload of the human participants; however, they also observed that frequent changes in autonomy levels reduced situational awareness and forced operators to continually reevaluate the agents' behavior \cite{heard2020sahrta}.
Furthermore, systems that use AI techniques to support autonomy often lack adequate explanations of the autonomous behavior, which can negatively impact the achievement of mission goals \cite{stoica2017berkeley} and reduce trust in the system. Therefore, several of our PQs are specifically designed to explore the explainability aspects of a HotL system. Guizzardi argues that RE techniques can be applied in the design of AI systems, such as driverless cars and autonomous weapons, to ensure that they comply with ethical principles and codes \cite{guizzardi2020ethical}. Gamification is a popular technique for gathering and validating the requirements of a cyber-physical system \cite{lombriser2016gamified}. Wiesner et al.
\cite{wiesner2016supporting} engaged stakeholders in a simulated game under different operational conditions to discover the limitations of the existing requirements and to support the ideation of possible new services. Fischer also uses a multi-player mixed-reality game to generate requirements for interaction and coordination within rich and `messy' real-world socio-technical settings \cite{fischer2014supporting}. However, Hyrynsalmi discusses limitations of gamification techniques \cite{hyrynsalmi2017dark}, for example, users focusing on winning the `game' instead of engaging with the challenges of interacting with the system \cite{knaving2013designing}. The gamification approach also requires a significant upfront development effort and proves insufficient for exploring the unknown unknowns of the system. Our work takes a more formal approach to eliciting requirements, using a concrete meta-model and PQs that focus on the human interaction aspects of multi-agent HotL systems.
\section{Requirements Modeling}
\label{sec:requirements}
Human multi-agent interactions in the domain of emergency missions are impacted by factors such as uncertainty of the agents' knowledge, the degree of human trust in the agent's ability to reason over its knowledge and behavior correctly, and the criticality of the task at hand. Autonomy levels and human interactions should therefore not be applied at the same level for all tasks, in all contexts, and across all phases of the mission, but instead need to be customized according to actions, context, phase, and even human preferences. This introduces the need for a systematic requirements elicitation process to explore the knowledge needs of humans and agents, and identify points at which humans can interact with the agents' autonomous behavior.
To support the elicitation, analysis, and specification of human multi-agent interactions, we developed a set of probing questions~\cite{anish2016probing,miller2009quest_probingQuestions}. These questions can be used to elicit requirements for each human multi-agent interaction point from system stakeholders. Probing questions are not necessarily easy to answer especially as human multi-agent interactions represent an emergent area of study with unknown unknowns~\cite{DBLP:conf/re/SutcliffeS13}. Answering the questions therefore requires a rigorous and systematic requirements engineering elicitation and analysis process that includes brainstorming, interviews, immersive prototyping, and even field-studies in order to fully discover the requirements~\cite{DBLP:journals/ijmms/Robertson01, DBLP:conf/re/Sutcliffe01}.
We structure our probing questions around the four types of human multi-agent interactions defined in Figure~\ref{fig:HumanCollaborationPoints}. These include (1) information sharing, (2) direct feedback and commands, (3) raising or lowering of autonomy levels, and (4) providing behavior rationales and explanations. We map each question to the entities of the meta-model, and then use the answers to specify the requirements for each interaction point as a conceptual model. In each case, the first question is designed to identify specific interaction points, while all subsequent questions are used to explore the details of each interaction.
\subsection{Sharing Information}
At the most basic level, humans and agents must share information with each other in order to create a common understanding of the state of the mission and its environment. We therefore start by posing two key questions concerning the exchange of information.
\begin{enumerate}[leftmargin=.75cm]
\item [{\footnotesize PQ1}:] {\bf What information} do agents or humans need to know about the state of the mission and the environment in which they operate individually or collaboratively?
[\texttt{\small Knowledge, Role, AutonomousDecisions}]
\item [{\footnotesize PQ2}:] When and how will these agents or humans share or acquire information?
[\texttt{\small Knowledge, Information, Role}]
\end{enumerate}
By default, the system must be designed such that information is shared freely across humans and agents. For example, agents acquire knowledge about the environment and the state of the mission through their sensors (e.g., victim detected or wind velocity 20 mph) and through decisions they make (e.g., UAV-1 is tracking a detected victim). They share this information with other active agents and with humans on the ground. However, above and beyond this general exchange of information, we must explore additional explicit interaction points between humans and agents in order to understand the system's requirements.
\subsection{Feedback and Commands}
All five of the scenarios in Section~\ref{sec:human-uav-interactions} introduce the possibility of a human offering feedback or even direct commands. To elicit a more complete list of interaction points, we ask the following question:
\begin{enumerate}[leftmargin=.75cm]
\item [{\footnotesize PQ3}:] When should a {\bf human intervene} by providing direct feedback or commands to multiple agents?
[\texttt{\small IssuedCommand, AutonomousDecision, ProvidedInformation}]
\end{enumerate}
We then ask additional probing questions to explore each of the identified intervention points:
\begin{enumerate}[leftmargin=.75cm]
\item [{\footnotesize PQ4}:] What {\bf triggers} the feedback or command? ~(e.g., solicited by UAV, triggered by a specific event, or offered by the human operator based on his/her general awareness)
[\texttt{AutonomousDecision, Information}]
\item [{\footnotesize PQ5}:] What {\bf information} should be provided in the feedback or command? ~(e.g., knowledge of the scene, permission to perform a specific task, a hint)
[\texttt{\small Information}]
\item [{\footnotesize PQ6}:] How should the agent {\bf respond to the feedback}? ~(e.g., update its situational awareness, obey the command regardless of its current environmental knowledge)
[\texttt{\small Role, AutonomousDecisions}]
\end{enumerate}
\subsection{Providing behavioral rationales}
Scenarios S4 and S5 provided clear examples in which a UAV needed to explain its behavior. To identify other such interaction points we pose the following question:
\begin{enumerate}[leftmargin=.8cm]
\item [{\footnotesize PQ7}:] In what concrete situations would humans require agents to explain themselves?
[\texttt{\small AutonomousDecision}]
\end{enumerate}
The following questions are then posed for each situation in which the agent is expected to explain its behavior.\vspace{3pt}
\begin{enumerate}[leftmargin=.9cm]
\item [{\footnotesize PQ8}:~] Why does the agent need to {\bf explain itself} at this collaboration point? ~(e.g., unexpected behavior)
[\texttt{\small Role}]
\item [{\footnotesize PQ9}:] What {\bf information} needs to be included in the explanation? (e.g., current task, goals, actions, rationales)
[\texttt{\small Information}]
\item [{\footnotesize PQ10}:] Under what circumstances might the human choose to override the agent's decision based on its explanation? If so, what would those overrides look like? (e.g., feedback/command, or lowering of autonomy levels.)
[\texttt{\small HumanInteraction, ProvidedInformation, IssuedCommand, ChangedPermission}]
\end{enumerate}
\subsection{Raising or Lowering of Autonomy Levels}
Scenarios S4 and S5 also provide examples where a human operator may wish to raise or lower autonomy levels. To identify such intervention points we pose the following question:
\begin{enumerate}[leftmargin=.9cm]
\item [{\footnotesize PQ11}:] When and where do the agents exhibit {\bf autonomous decision-making behavior}? [\texttt{\small Role, AutonomousDecision}]
\end{enumerate}
Each identified intervention point is then explored through the following questions:
\begin{enumerate}[leftmargin=.9cm]
\item [{\footnotesize PQ12}:] {\bf What information} do the agents need in order to exhibit the autonomous behavior? [\texttt{\small Information}]
\item [{\footnotesize PQ13}:] Under {\bf normal operating conditions}, what decisions should the agent be able to make autonomously?
[\texttt{\small AutonomousDecision}]
\item [{\footnotesize PQ14}:] What {\bf constraints} on the agent's autonomy are introduced by issues related to safety, ethics, regulatory requirements, or human trust? (e.g., FAA Part 107 regulations prohibit night-time flight without an explicit waiver)
[\texttt{\small Permission}]
\item [{\footnotesize PQ15}:] How is the {\bf autonomy suppressed or increased} at this interaction point? (e.g., modifying the confidence threshold for automatically tracking a potential victim, disabling/enabling the ability to track without permission, disabling/enabling the ability for a UAV to determine its ideal altitude and velocity during a search -- or altering the range of allowed values.)
[\texttt{\small Role, ChangedPermission}]
\item [{\footnotesize PQ16}:] Are there circumstances in which the human needs to make run-time decisions about suppressing or raising autonomy (i.e., human interaction is required) vs. clearly defined rules by which the autonomy levels can be automatically raised and lowered?
[\texttt{\small Permission, ChangedPermission}]
\item [{\footnotesize PQ17}:] When autonomy is suppressed or increased, what {\bf extra support structures} would be needed, if any, for the emergency responders? (e.g., the operator manually pilots multiple UAVs and additional 360$^\circ$ views are needed)
[\texttt{\small Role}]
\end{enumerate}
\subsection{Constructing Requirements Models}
For each identified human multi-agent interaction point we specify requirements for the interaction by constructing a conceptual model showing named instances of each entity and the relationships between them.
We use the tags assigned to each probing question to identify entities to include in the diagram. We also use the relationships depicted in the meta-model to guide the addition of appropriate relations among the entities.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/replace_example.pdf}
\caption{The conceptual model of human intervention in the automatic selection of a UAV to replace another UAV. A human operator issues commands to modify the role of a UAV, thereby overriding an autonomous decision of the system.}
\label{fig:M1_instance}
\includegraphics[width=\linewidth]{figures/sequence_replace.pdf}
\caption{The sequence of events when the human operator intervenes to override an autonomous decision. UAV-1 begins the process of selecting a replacement. However, the human operator overrides the decision by assigning a tracking role to UAV-2 and at the same time suspending UAV-1's search for a replacement.}
\label{fig:sequence_replace}
\end{figure}
We illustrate the construction of the conceptual models following the template of probing questions with an example from the river search-and-rescue scenario. The constructed model is shown in Fig.~\ref{fig:M1_instance}. Probing question \textit{\small PQ11} identifies an example of autonomous behavior that occurs when the battery level of a UAV performing a critical task (e.g., tracking) falls below a predefined level. By default, the UAV will automatically RTL; however, it first requests a replacement from other UAVs in the mission. Therefore, \textit{\small PQ11} identifies the \textit{FindReplacement} role of a UAV. The other UAVs in the mission must autonomously and collaboratively \textit{select a replacement} for the tracking task. \textit{\small PQ12} identifies the required information (\textit{location of all UAVs}), while \textit{\small PQ14} and \textit{\small PQ15} identify the \textit{permission levels} a UAV needs in order to serve as a replacement for the tracking task. \textit{\small PQ3} also reveals that human responders reserve the right to override the choice of UAV for any reason, identifying a new command to \textit{replace UAV}.
Consequently, \textit{\small PQ6} clarifies that the targeted UAV must perform the \textit{tracking} task after receiving the replacement command from a human responder. In this way, the probing questions help to identify entities from the meta-model that are required to model this specific human interaction. We then leverage the relationships between entity types defined in the meta-model to construct a conceptual model of the human multi-UAV interaction in the river search-and-rescue scenario as shown in Figure \ref{fig:M1_instance}. Finally, we leverage the conceptual model to explore and specify the sequence of events for the human interactions. This entire scenario is depicted in the Sequence Diagram of Figure~\ref{fig:sequence_replace}.
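To make the mapping from probing-question tags to model elements concrete, the following minimal sketch encodes the entities and relations of this example as simple data classes. The entity names follow the meta-model, while the concrete instances and field choices are illustrative assumptions of ours.
\begin{verbatim}
# Illustrative encoding of the conceptual model of the replacement
# scenario; entity names follow the meta-model, instances are ours.
from dataclasses import dataclass, field

@dataclass
class Information:
    description: str

@dataclass
class Role:
    name: str
    requires: list = field(default_factory=list)   # needed Information

@dataclass
class AutonomousDecision:
    description: str
    made_by: Role

@dataclass
class IssuedCommand:
    description: str
    overrides: AutonomousDecision

locations = Information("location of all UAVs")
find_replacement = Role("FindReplacement", requires=[locations])
selection = AutonomousDecision("select replacement tracker (UAV-3)",
                               made_by=find_replacement)
override = IssuedCommand("assign tracking role to UAV-2",
                         overrides=selection)
print(override.description, "overrides:", override.overrides.description)
\end{verbatim}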
\section{Threats to Validity}
\label{sec:threats}
There are several threats to validity for our approach.
First, we have applied the probing questions retrospectively to construct the M1 models described in the fire-surveillance example; however, we answered the questions based on information gathered through a series of brainstorming meetings with firefighters. In the next phase of our work, we will further evaluate the questions in live requirements elicitation sessions. Second, we developed our meta-model based on five use-cases primarily developed by our own research group in collaboration with our local firefighters. We then demonstrated its generalizability using an additional use case that we developed. Our approach needs to be evaluated on use-cases elicited from diverse groups of emergency responders. Finally, our approach currently ends at the modeling stage. To fully evaluate the usefulness of the model and the probing questions, we need to implement and integrate the modeled interactions within our deployed system. We are currently working towards developing the required infrastructure, such as AI vision models and on-board analysis and reasoning frameworks to support autonomous capabilities of the UAVs, and will then evaluate the extent to which our approach produces a viable design for use with physical UAVs.
\section*{Methods}
\textbf{Ultracold Bose-Fermi mixture preparation}. We load Na from a Zeeman
slower and K from a 2D magnetic-optical-trap (MOT) into a two-species
dark-SPOT. The atoms are optically pumped to the $|2,2\rangle$ and
$|9/2,9/2\rangle$ states and are then transferred to an Ioffe-Pritchard cloverleaf
magnetic trap, where Na atoms are subject to forced evaporative cooling and K
atoms are sympathetically cooled. The atomic mixture is then loaded into a
crossed-beam optical dipole trap (wavelength $1064$ nm, beam waist $61$ $\mu$m
and $123$ $\mu$m for horizontal and vertical beams respectively) and Na atoms
are transferred to the $A$ state before further evaporative cooling. At
the end of the optical trap evaporation we adiabatically increase the optical
trap power to the desired value to hold the atoms and perform the experiments.
The Na atoms are always in the $A$ state, and the K atoms are prepared in
different internal states by a $50$ ms adiabatic rapid passage rf sweep with
nearly unit efficiency. The experiments are performed at magnetic fields of
around 130 G. The magnetic field is actively stabilized with a stability of
better than 10 mG. We prepare K atoms in the $|9/2,-1/2\rangle$ or
$|9/2,-7/2\rangle$ states at a low field before increasing the magnetic field to
the desired value, since these two states have no Feshbach resonances at about
130 G. K atoms in the $C$ state are prepared from the
$|9/2,-1/2\rangle$ state by a $\pi$ pulse transfer at high field. K atoms in
different internal states are measured with a high-field imaging
technique, and the measured atom numbers are calibrated by comparing with the
$\sigma^-$ cycling transition of $|9/2,-9/2\rangle$.
\textbf{Characterization of the Feshbach resonances}. The binding energy of the
$AB$ Feshbach molecule is measured by the rf loss or dissociation
spectrum. For the rf loss spectrum, we prepare the K atoms in the $C$ state,
and apply a $0.5-1$ s rf pulse to couple the free atom states to the
molecular bound states, where the rf field has a Rabi frequency of $\sim1$ kHz
for the free atomic transition. The binding energy is then obtained by fitting
the loss spectrum (Supplementary Information) with models in Ref.
\cite{Ulmanis2015}. For the dissociation spectrum, we associate the $AB$
molecules from the $A+C$ mixture and then dissociate them into the
$A+|9/2,-7/2\rangle$ state as in the main text. The binding energy is obtained
by fitting the dissociation spectrum using the bound-free Franck-Condon
lineshape in Refs. \cite{Chin2005,Wu2012}.
The binding energy of the $AC$ Feshbach molecule is measured using similar
methods. For the rf loss spectrum, we prepare free K atoms in the $|9/2,-1/2\rangle$
state, and apply a weak rf pulse to observe the loss spectrum. For the dissociation
spectrum, the $AC$ molecules are rf associated from the $A+|9/2,-1/2\rangle$ state.
Then the remaining K atoms in the $|9/2,-1/2\rangle$ state are transferred to the
$|9/2,1/2\rangle$ state via a $\pi$ pulse, leaving the $|9/2,-1/2\rangle$
state empty for the molecule dissociation. The $AC$ molecules have to be
dissociated in such a way because the dissociation from the $AC$ molecule into the
$A+B$ free state is significantly suppressed due to the existence of the
bound-bound transition \cite{Chin2005}.
We first fit the measured binding energies with the universal model
\cite{chin2010}, $E_{b}=-\hbar^{2}/[2\mu(a-\bar{a})^{2}]$, where $\bar
{a}=51~a_{0}$ is the mean scattering length with $a_{0}$ the Bohr radius, and
$a=a_{\mathrm{bg}}[1-\Delta B/(B-B_{0})]$ is the scattering length near the
Feshbach resonance with the background scattering length $a_{\mathrm{bg}},$
the resonance position $B_{0}$, and the width $\Delta B$ as the fitting parameters.
For the Feshbach resonance between $A$ and $B,$ the fitting yields $a_{\mathrm{bg}%
}= - 455(18)~a_{0}$, $B_{0}=138.71(20)$ G and $\Delta B=-34.60(34)$ G, which
are in good agreement with previous work \cite{Wu2012}. This is an open
channel dominated resonance with strength $s_{\mathrm{res}}\gg1$ \cite{Viel2016}.
For the Feshbach resonance between $A$ and $C,$ we obtain
$a_{\mathrm{bg}}=126(9)~a_{0}$, $B_{0}=130.637(14)$ G and $\Delta
B=4.0(4)$ G. This resonance tends towards closed channel dominance with the
strength $s_{\mathrm{res}}<1$ \cite{Viel2016}. The binding energy measurement
gives a resonance at 130.64 G instead of 129.4 G determined from the enhanced atom
loss measurement \cite{Park2012}. The measured binding energies have also been
fitted with the coupled-channel calculations (Supplementary Information).
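As an illustration of this fitting procedure, the minimal sketch below evaluates the universal model and performs a least-squares fit with SciPy. The data points are examples in the spirit of the measurements described above and the starting values are rough guesses; they are not the actual data set.
\begin{verbatim}
# Sketch of the universal-model fit of the binding energy,
#   E_b = -hbar^2 / [2 mu (a - abar)^2],  a = abg [1 - dB/(B - B0)].
# The (B, E_b) points below are illustrative, not the measured data.
import numpy as np
from scipy.constants import hbar, h, u, physical_constants
from scipy.optimize import curve_fit

a0 = physical_constants["Bohr radius"][0]
mu = 22.98977 * 39.96400 / (22.98977 + 39.96400) * u  # Na-40K reduced mass
abar = 51 * a0

def Eb_kHz(B, abg_a0, B0, dB):
    a = abg_a0 * a0 * (1.0 - dB / (B - B0))
    return -hbar**2 / (2 * mu * (a - abar)**2) / h / 1e3   # kHz

B  = np.array([125.03, 127.03, 129.10])     # G
Eb = np.array([-297.0, -171.4, -96.0])      # kHz
popt, pcov = curve_fit(Eb_kHz, B, Eb, p0=(-400.0, 138.0, -30.0))
print("abg = %.0f a0, B0 = %.2f G, Delta B = %.2f G" % tuple(popt))
\end{verbatim}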
\textbf{Reaction dynamics}. The time evolution of the molecule reactant number
$N_{AB}$ and atom product number $N_{B}$ may be described by Eqn.
(\ref{eqn1}), where the overall loss rate is $\gamma_{AB}=\beta
_{A}\bar{n}_{A}+\beta_{C}\bar{n}_{C}+\beta_{r}\bar{n}_{C}$, with $\bar{n}_{A}$
and $\bar{n}_{C}$ mean densities of the $A$ and $C$ atoms, respectively. Here
the first term is the loss due to inelastic collisions with the remaining $A$
atoms, with $\beta_{A}$ the loss rate. The other two terms describe the losses
due to collisions with the atom reactant $C$, which include the desired
reactive collision with reaction rate $\beta_{r}$, and the losses due to
reactions in other channels and inelastic collisions with a loss rate of
$\beta_{C}$. The collisions between the $AB$ molecules can be safely neglected
since the Feshbach molecules are fermionic molecules. As the number of molecule
reactants is about one order of magnitude smaller than that of the atoms, we assume $\bar
{n}_{A}$ and $\bar{n}_{C}$ are constants during the reactions.
Eqns. (\ref{eqn1}) can be solved with the solution,
\begin{align}
N_{AB}(t) & =N_{AB}(0)e^{-\gamma_{AB}t},\\
N_{B}(t) & =\frac{\beta_{r}\bar{n}_{C}N_{AB}(0)}{\gamma_{AB}}(1-e^{-\gamma
_{AB}t})+N_{B}(0),
\end{align}
where $N_{B}(0)$ describes the atom product number accumulated during
the association process of the $AB$ molecules. Therefore, the reaction rate may
be given by
\begin{equation}
\beta_{r}=\frac{\Delta N_{B}\gamma_{AB}}{\alpha N_{C}N_{AB}(0)}%
\end{equation}
where $\Delta N_{B}=N_{B}(\infty)-N_{B}(0)$ is the increased atom product
number after the reactions, the mean density of the atom reactant is $\bar{n}_{C}=%
\alpha N_{C}$, with $\alpha=(\frac{m_{\mathrm{K}}\bar{\omega}^2}{4\pi k_{B}%
T_{\mathrm{K}}})^{3/2}$, $\bar{\omega}$ the geometric mean of the trapping
frequencies of the K atoms and $T_{\mathrm{K}}$ the temperature of the K
atoms. Therefore the reaction rate may be obtained by measuring the increased
ratio between the atom product and reactant $\Delta N_{B}/N_{C}$, the initial
molecule reactant number $N_{AB}(0)$, and the decay rate of the $AB$ molecule
$\gamma_{AB}$. Note that, since the reaction is very fast, part of the $AB$ molecules may
have been reactively lost during the dissociation process. Thus the rf dissociation
rate also needs to be taken into account to correct the initial
molecule number (Supplementary Information). Statistical and systematic
uncertainties in the atom number, molecule number, molecule loss rate and
temperature have all been included to calculate the uncertainty in the final
reaction rate.
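For illustration, the analytic solutions above can be fitted directly to the measured time traces to extract $\gamma_{AB}$ and the reaction rate. The minimal sketch below does this with synthetic placeholder data; none of the numbers are measured values.
\begin{verbatim}
# Sketch: fit the analytic solutions of Eqn. (1) to time traces of the
# molecule reactant N_AB(t) and the atom product N_B(t).
# All numbers below are placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def n_mol(t, N0, gamma):            # N_AB(t) = N_AB(0) exp(-gamma t)
    return N0 * np.exp(-gamma * t)

def n_prod(t, A, gamma, NB0):       # N_B(t) = A [1 - exp(-gamma t)] + N_B(0)
    return A * (1.0 - np.exp(-gamma * t)) + NB0

t    = np.linspace(0.0, 3.0e-3, 12)                       # s
N_AB = 6.0e3 * np.exp(-t / 0.8e-3) * (1 + 0.03 * np.random.randn(t.size))
N_B  = 2.0e3 * (1.0 - np.exp(-t / 0.8e-3)) + 500.0

(N0, gamma_AB), _ = curve_fit(n_mol, t, N_AB, p0=(5e3, 1e3))
(A, _g, NB0), _   = curve_fit(n_prod, t, N_B, p0=(1e3, 1e3, 0.0))

# Since A = beta_r * nC * N_AB(0) / gamma_AB, the reaction rate follows as
nC = 1.0e18                          # m^-3, placeholder mean density of C
beta_r = A * gamma_AB / (nC * N0)
print("gamma_AB = %.0f 1/s, beta_r = %.1e m^3/s" % (gamma_AB, beta_r))
\end{verbatim}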
\newpage
\section*{Supplementary information}
\subsection{Coupled channel calculation}
In the main text, the measured binding energy is fitted using the simple
universal model. Here we perform the coupled-channel calculation and compare
the theory with the experimental results. The Hamiltonian describing the $s$-wave
scattering is
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} H=T+\sum_{S=0,1}V_{S}%
(r)P_{S}+H_{hf}+H_{z}.
\end{equation}
The first term is the kinetic energy $T=-\frac{\hbar^{2}}{2\mu}\frac{d^{2}}{ dr^{2}}$
with $\mu$ the reduced mass. The second term describes the spin-exchanging
interaction, where $P_{0}=1/4-\mathbf{s}_{\alpha}\cdot\mathbf{s}_{\beta}$ and
$P_{1}=3/4+\mathbf{s}_{\alpha}\cdot\mathbf{s}_{\beta}$ are the singlet and
triplet projection operator respectively with $\mathbf{s}$ the electron spin.
Here and below we refer to Na as $\alpha$ and K as $\beta$. $V_{0}(r)$ and
$V_{1}(r)$ denote the Born-Oppenheimer singlet potential $X^{1}\Sigma$ and
triplet potential $a^{3}\Sigma$. The Born-Oppenheimer potentials can be
expressed as a power expansion in $r$, whose latest version can be found in Ref.
\cite{Temelkov2015s}. $H_{hf}$ is the hyperfine interaction term, described by%
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} H_{hf}=a_{hf\alpha}%
\mathbf{s}_{\alpha}\cdot\mathbf{i}_{\alpha}+a_{hf\beta}\mathbf{s}_{\beta}%
\cdot\mathbf{i}_{\beta},
\end{equation}
where $a_{hf}$ is the hyperfine constant and $\mathbf{i}$ is the nuclear spin.
The last term is the Zeeman term
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} H_{z}=[(g_{s\alpha}s_{z\alpha
}-g_{i\alpha}i_{z\alpha})+(g_{s\beta}s_{z\beta}-g_{i\beta}i_{z\beta})]\mu
_{B}B_z,
\end{equation}
with $g_{s}$ the electron g-factor, $g_{i}$ the nuclear g-factor and $B_{z}$
the bias magnetic field.
The internal state may be expressed in terms of the spin basis $|\sigma
\rangle=|m_{i_{\alpha}},m_{s_{\alpha}};m_{i_{\beta}},m_{s_{\beta}}\rangle.$
The Hamiltonian couples all the internal states with the same $M_{F}%
=m_{f_{\alpha}}+m_{f_{\beta}}$ with $m_{f}=m_{i}+m_{s}$. For a given $M_{F}$
and $B_{z}$, we first diagonalize the $H_{hf}+H_{z}$ to obtain the internal
eigenstate $|\chi_{i}\rangle$ and the threshold energy $E_{i}^{th}$ of each
channel. Expanding the wave function in terms of the new bases $|\psi
\rangle=\sum_{i}\psi_{i}(r)|\chi_{i}\rangle$, we obtain the coupled channel
Schr\"{o}dinger equation
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} \sum_{j}[T\delta_{ij}%
+\sum_{S=0,1}V_{S}(r)\langle\chi_{i}|P_{S}|\chi_{j}\rangle]\psi_{j}%
(r)=(E-E_{i}^{th})\psi_{i}(r)
\end{equation}
for a given entrance energy $E$. The scattering length and the binding energy
are calculated using the standard multichannel log-derivative method. In our
numerical calculations, we integrate from $1$~{\AA} up to $5000$~{\AA} with a
step of $10^{-3}$~{\AA}.
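As a minimal illustration of the first step of this procedure, the sketch below constructs $H_{hf}+H_{z}$ from Kronecker products of single-spin operators in the uncoupled basis and diagonalizes it for the channel thresholds. All numerical constants in it are placeholders, not the actual Na and K parameters.
\begin{verbatim}
# Sketch: diagonalize H_hf + H_z in the uncoupled spin basis
# |m_i_alpha, m_s_alpha, m_i_beta, m_s_beta> to obtain channel
# thresholds.  Hyperfine constants and g-factors are placeholders.
import numpy as np

def spin_ops(s):
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)                 # m = s, s-1, ..., -s
    sz = np.diag(m)
    sp = np.zeros((d, d))
    for i in range(1, d):                # <m+1| s_+ |m> matrix elements
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (sp + sp.T) / 2.0, (sp - sp.T) / 2.0j, sz   # sx, sy, sz

def kron4(a, b, c, d):
    return np.kron(np.kron(a, b), np.kron(c, d))

Ia, Sa, Ib, Sb = [spin_ops(s) for s in (1.5, 0.5, 4.0, 0.5)]  # Na, K spins
eye = [np.eye(op[2].shape[0]) for op in (Ia, Sa, Ib, Sb)]
dim = int(np.prod([e.shape[0] for e in eye]))

ahf_a, ahf_b = 1.0, -0.3                      # placeholder hyperfine constants
gs, gi_a, gi_b, muB_Bz = 2.0, 0.0, 0.0, 0.5   # placeholder Zeeman inputs

H = np.zeros((dim, dim), dtype=complex)
for k in range(3):                       # a_hf (i . s) for both atoms
    H += ahf_a * kron4(Ia[k], Sa[k], eye[2], eye[3])
    H += ahf_b * kron4(eye[0], eye[1], Ib[k], Sb[k])
H += muB_Bz * (gs * kron4(eye[0], Sa[2], eye[2], eye[3])
               + gs * kron4(eye[0], eye[1], eye[2], Sb[2])
               - gi_a * kron4(Ia[2], eye[1], eye[2], eye[3])
               - gi_b * kron4(eye[0], eye[1], Ib[2], eye[3]))

thresholds = np.linalg.eigvalsh(H)       # channel threshold energies E_i^th
print(thresholds[:5])
\end{verbatim}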
There are in total three Feshbach resonances between $^{23}$Na $|1,1\rangle$ and
$^{40}$K $|9/2,-5/2\rangle$ (or $|9/2,-3/2\rangle$) for magnetic fields of up to 300 G.
Therefore, the coupled-channel calculated scattering length is fitted using the formula
$a=a_{\mathrm{bg}}(1+\eta B)(1-\frac{\Delta B_{1}}{B-B_{1}})(1-\frac{\Delta B_{2}}{B-B_{2}%
})(1-\frac{\Delta B_{3}}{B-B_{3}})$~\cite{Jachymski2013s,Viel2016s}, where
$\eta$ is a small parameter taking into account the slow variation of the
background scattering length as a function of the magnetic field. The results are
shown in Table \ref{tab.feshbachres}.
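For reference, the short sketch below evaluates this multi-resonance parametrization with the fitted parameters of Table~\ref{tab.feshbachres} for the $|1,1\rangle\otimes|9/2,-5/2\rangle$ channel; the three magnetic-field values are arbitrary test points.
\begin{verbatim}
# Sketch: evaluate a(B) = abg (1 + eta B) prod_i [1 - dB_i/(B - B_i)]
# with the Table S1 parameters of the |1,1> (x) |9/2,-5/2> channel.
def a_of_B(B, abg, eta, res):            # res: list of (B_i, dB_i) in G
    a = abg * (1.0 + eta * B)
    for Bi, dBi in res:
        a *= 1.0 - dBi / (B - Bi)
    return a

res = [(96.64, 2.40), (107.09, 3.83), (137.14, -41.42)]
for B in (120.0, 130.0, 136.0):          # arbitrary test fields in G
    print("B = %.1f G -> a = %.0f a0" % (B, a_of_B(B, -488.0, -0.001, res)))
\end{verbatim}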
\begin{table}[hbp]
\renewcommand\thetable{S\arabic{table}}
\begin{tabular}{|c|c|c|}
\hline
& $|1,1\rangle\otimes|9/2,-5/2\rangle$ & $|1,1\rangle\otimes|9/2,-3/2\rangle$ \\
\hline
$a_{\mathrm{bg}}(a_{0})$ & -488 & -442\\
\hline
$\eta$ & -0.001 & -0.001\\
\hline
$B_{1}$(G) & 96.64 & 117.27\\
\hline
$B_{2}$(G) & 107.09 & 130.78\\
\hline
$B_{3}$(G) & 137.14 & 177.63\\
\hline
$\Delta B_{1}$(G) & 2.40 & -2.10\\
\hline
$\Delta B_{2}$(G) & 3.83 & 4.83\\
\hline
$\Delta B_{3}$(G) & -41.42 & -57.64\\
\hline
\end{tabular}
\caption{Feshbach resonance parameters given by the coupled-channel calculations. }
\label{tab.feshbachres}
\end{table}
It can be readily seen that the Feshbach resonance positions are slightly
different from the experimental results. Therefore we fit the measured binding
energies by using the coupled-channel calculations with the resonance position as
the only fitting parameter \cite{Zirbel2008s}. The results are shown in
Fig.~\ref{fig_feshbachres}.
\begin{figure}[ptb]
\includegraphics[width=0.9\columnwidth]{fig_binding_energy}
\renewcommand\thefigure{S\arabic{figure}}
\caption{The comparison between the coupled-channel calculations and the experimental results.
The red (blue) open circle represents the binding energy measured from the rf dissociation
(loss) spectrum. The gray shaded region corresponds to the uncertainty of the binding
energy fitted with the coupled-channel calculations. All error bars represent $\pm1$
standard deviations of the statistical uncertainties. }
\label{fig_feshbachres}%
\end{figure}
For the Feshbach resonance between $|1,1\rangle$ and $|9/2,-5/2\rangle$, the universal
model gives a resonance position of $B_{\mathrm{0}}=138.71(20)$ G. The fitting using the
coupled-channel calculations gives a resonance position of $B_{\mathrm{0}}=137.38(5)$ G.
It can be seen in Fig.~\ref{fig_feshbachres} that the coupled-channel result does not agree very well
with the experimental data. This may imply that the parameters of the potential are still
not accurate enough for this open-channel dominated resonance. For the Feshbach
resonance between $|1,1\rangle$ and $|9/2,-3/2\rangle$, the universal model gives a
resonance position of $B_{\mathrm{0}}=130.637(14)$ G. The fitting using the coupled-channel
calculations gives a resonance position of $B_{\mathrm{0}}=130.635(1)$ G. For this
resonance, the experimental results and the coupled-channel calculations agree very well
with each other. Note that a systematic uncertainty of 10 mG in the magnetic field
is not included in the above analysis.
\subsection{RF loss spectrum}
To characterize the Feshbach resonances, we measure the binding energy of the
Feshbach molecules either using the rf dissociation spectrum or the rf loss
spectrum. The dissociation spectrum has been discussed in the main context.
Here we explain the details of the rf loss spectrum.
In our experiment, we use a weak and long square rf pulse to couple the
free atomic state to the bound molecular state and to observe the
atom losses as a function of the rf frequency, as in Ref.~\cite{Ulmanis2015s}. As
the associated molecules are lost quickly by inelastic collisions with the
surrounding atoms (predominantly with Na atoms), one molecule loss roughly
corresponds to the loss of one K atom and two Na atoms. We measure the total
atom number of $N_{\mathrm{K}}+0.5N_{\mathrm{Na}}$, which roughly corresponds
to twice the number of associated molecules. In addition, we find empirically that
the total atom number is most stable when the K atom number is added to between
one fourth and one half of the Na atom number. The observed atom loss spectrum
is then fitted with the model introduced in Ref.~\cite{Ulmanis2015s,Klempt2008s}
for each magnetic field, which includes the density of the relative motion of
the atom pair, the bound-free Franck-Condon factor, and a collisional
broadening profile.
\begin{figure}[tpbh]
\includegraphics[width=0.9\columnwidth]{fig_lossspec}
\renewcommand\thefigure{S\arabic{figure}}
\caption{Loss spectrum of the Na and K atoms due to the rf coupling of the free-%
bound transition. \textbf{a} and \textbf{b}, the free scattering state $A+C$ is coupled
to the bound $AB$ dimer state. The duration of the rf pulse is 0.5 s for both
measurements. \textbf{c} and \textbf{d}, the free scattering state $A+|9/2,-1/2\rangle$
is coupled to the bound $AC$ dimer state. The duration of the rf pulse is 1.0 s for
129.97 G and 1.2 s for 129.56 G. On the horizontal axis, the free atomic transition
frequency of K is defined to be zero. The vertical axis represents the total atom
number $N_{\mathrm{K}}+0.5N_{\mathrm{Na}}$ in units of $10^4$. The dashed
line represents the position of the fitted dimer binding energy. The association rf pulse
has a Rabi frequency of about $2\pi\times1$ kHz for the free atomic transition. The
mixture temperature is about 600 nK for these measurements. All error bars represent
$\pm1$ standard deviations. }\label{fig_lossspec}%
\end{figure}
The measured atom loss spectra at several magnetic fields are shown in
Fig.~\ref{fig_lossspec}. In \textbf{a} and \textbf{b}, the atom mixture is
prepared in the $A+C$ state and is coupled to the bound $AB$ dimer state
with a weak association pulse. The fitting gives a binding energy of $171.4(9)$
kHz and $297.0(7)$ kHz for the magnetic field of 127.03 G and 125.03 G,
respectively. In \textbf{c} and \textbf{d}, the atom mixture is prepared in the
$A+|9/2,-1/2\rangle$ state and is coupled to the bound $AC$ dimer state with
the weak association pulse. The fitting gives a binding energy of $175.6(6)$
kHz and $409.1(12)$ kHz for the magnetic field of 129.97 G and 129.56 G,
respectively.
\subsection{Association spectrum of the Feshbach molecule}
\begin{figure}[ptbh]
\includegraphics[width=0.9\columnwidth]{fig_associationspec}
\renewcommand\thefigure{S\arabic{figure}}
\caption{Association spectrum of the $AB$ Feshbach molecule from the $A+C$
or $A+|9/2,-7/2\rangle$ mixture. The free atomic transition frequency is set to be
zero. The long tails of the free K atomic transition are due to the mean-field
shift from the interaction with the overlapping Na atoms. (Inset) Molecule
association spectrum (free Na atoms are not shown). The curves are fitted with
the model in Ref.~\cite{Ulmanis2015s,Klempt2008s}. The dashed lines represent
the position of the fitted binding energy of the molecules. All error bars represent
$\pm1$ standard deviations. }\label{fig_assospec}%
\end{figure}
In our experiment, we associate the Feshbach molecules by applying a rf pulse
to transfer the free scattering atom pairs ($A+C$ or $A+|9/2,-7/2\rangle$) to the
$AB$ molecule. For different measurements, the association rf field can be either
Blackman or square pulses, with a peak Rabi frequency of about $2\pi\times20$
kHz for the free atomic transition.
For the observation of the atom product $B$ at different magnetic fields as in
Fig.~2 of the main text, we use the Blackman rf pulses for molecule association
to narrow the spectral width of the free atomic transition (suppressing the
Fourier side lobes relative to square pulses). Typical association spectra
with the Blackman rf pulse are shown in Fig.~\ref{fig_assospec}, which is
taken by directly imaging the K atoms in the $B$ state after association. Note
that the imaging itself does not distinguish between the free $B$ atoms and the $AB$
molecules. The atom peak and the molecule peak are clearly well
separated in the rf spectrum. This means that at the molecule association
frequency, the background $B$ atoms, directly transferred from the $C$ state
via the free atomic transition, are negligible. This is important since it implies
that the observed $B$ atoms in Fig.~2 are not directly transferred by the
association rf pulse, but created from the chemical reactions.
For the measurement of the reaction dynamics, we use square rf pulse to
associate the $AB$ Feshbach molecules from the $A+C$ mixture. The use of
square wave pulse can reduce the length of the association pulse. This square rf
pulse will transfer at most about 5\% of the $C$ atoms to the $B$ state for
detuning of larger than 4 times the Rabi frequency. This can only contribute a
constant background in the atom product signal. The increase of the atom
product with time can only be caused by reaction, which is used to measure the
reaction rate.
\subsection{Time scale of the dissociation process}
In studying the reaction dynamics, the Feshbach molecules are dissociated into
free atoms for detection by a rf pulse with a duration of about 1 ms and a
peak Rabi frequency of about $2\pi\times16$ kHz for the free atomic
transition. As discussed in the main text, the time scale of the reaction is
about several hundred microseconds. Therefore, in the dissociation process, part of the molecules are lost
due to reaction and thus cannot be dissociated into atoms for detection.
We may derive the relation between the dissociated molecule number and the
initial total molecule number as follows. When applying the dissociation
pulse, the time evolution of the molecule may be described by,
\begin{equation}
\renewcommand\theequation{S\arabic{equation}}
\begin{split}
\dot{N}_{\mathrm{mol}} &= -\gamma_{\mathrm{loss}}N_{\mathrm{mol}} - \gamma_{\mathrm{diss}}N_{\mathrm{mol}}, \\
\dot{N}_{\mathrm{atom}} &= \gamma_{\mathrm{diss}} N_{\mathrm{mol}},
\end{split}
\end{equation}
where $N_{\mathrm{mol}}$ is molecule number and $N_{\mathrm{atom}}$ is the
atom number obtained by dissociation, and $\gamma_{\mathrm{loss}}$ describes
the losses of the molecules via inelastic
and reactive collisions with other atoms (inverse of the molecule lifetime),
and $\gamma_{\mathrm{diss}}$ is the dissociation rate. As the background atom
number is about one order larger than the molecule number, we assume a
constant loss rate of $\gamma_{\mathrm{loss}}$. This equation is
straightforward to solve with the solution,
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} N_{\mathrm{atom}}(t_{\mathrm{rf}%
})=\frac{\gamma_{\mathrm{diss}}N_{\mathrm{mol}}(t_{\mathrm{rf}}=0)}{\gamma_{\mathrm{diss}%
}+\gamma_{\mathrm{loss}}}(1-e^{-\gamma_{\mathrm{diss}}t_{\mathrm{rf}}%
-\gamma_{\mathrm{loss}}t_{\mathrm{rf}}}),
\end{equation}
where $t_{\mathrm{rf}}$ is the duration of the applied dissociation rf pulse. For a
sufficiently long dissociation rf pulse, we have
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} N_{\mathrm{atom}}(t_{\mathrm{rf}}=\infty
)=\frac{\gamma_{\mathrm{diss}}}{\gamma_{\mathrm{diss}}+\gamma_{\mathrm{loss}}%
}N_{\mathrm{mol}}(t_{\mathrm{rf}}=0).\label{eqn.atommolnum}%
\end{equation}
Here $N_{\mathrm{atom}}(t_{\mathrm{rf}}=\infty)$ is just the dissociated molecule
number in the dissociation process. Thus we can obtain the initial number of molecules
from the measured dissociated molecule number.
\begin{figure}[ptb]
\includegraphics[width=0.9\columnwidth]{fig_dissociationduration}
\renewcommand\thefigure{S\arabic{figure}}
\caption{Saturation of the rf dissociation of Feshbach molecules. The measurements
are taken with square rf pulses with frequencies at $50\sim100$ kHz below the
bound-free transition (onset) frequency. The lines represent the fitted results with
the model of $N_{\mathrm{atom}}(t_{\mathrm{rf}})=N_{0}(1-e^{-\gamma_{\mathrm{diss}}%
t_{\mathrm{rf}}})$ where we have neglected $\gamma_{\mathrm{loss}}$ for simplicity.
All error bars represent $\pm1$ standard deviations. }\label{fig_dissduration}%
\end{figure}
The dissociation rate $\gamma_{\mathrm{diss}}$ is estimated as follows. We
associate the $AB$ Feshbach molecules from the $A+C$ mixture and then use a
square rf pulse with variable durations to dissociate the $AB$ Feshbach
molecules into the $A+|9/2,-7/2\rangle$ state. Then we measure the number of K
atoms in the $|9/2,-7/2\rangle$ state as a function of rf pulse duration. This measurement
is carried out at the magnetic fields of 129.10 G and 130.80 G where the
lifetime of the molecules is larger than 1.4 ms. The results are shown in
Fig.~\ref{fig_dissduration}.
For the magnetic field of 129.10 G, the dissociation frequency is 150 kHz
below the frequency of the free atomic transition of $B\rightarrow
|9/2,-7/2\rangle$, while the energy of the $AB$ molecule is about $h\times96$ kHz
below that of the free $B$ state. The dissociation rate of this dissociation
process is fitted to be $\gamma_{\mathrm{diss}}=12.56(82)$ kHz, where we have
neglected $\gamma_{\mathrm{loss}}$ since it is about one order smaller than
$\gamma_{\mathrm{diss}}$. For the magnetic field of 130.80 G, the dissociation
frequency is 130 kHz below the free atomic transition frequency, while the
energy of the $AB$ molecule is about $h\times56$ kHz below the free $B$ state. The
saturation rate of the dissociation process is measured to be $\gamma
_{\mathrm{diss}}=10.93(80)$ kHz. In the reaction dynamics measurement, we
assume a constant dissociation rate and use a mean value of $\bar{\gamma
}_{\mathrm{diss}}=11.7(8)$ kHz to correct the initial molecule numbers from
the measured dissociated molecule number.
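The minimal sketch below illustrates this procedure: a saturation fit to extract $\gamma_{\mathrm{diss}}$, followed by the correction of Eqn.~\ref{eqn.atommolnum}; the data points are placeholders, not measured values.
\begin{verbatim}
# Sketch: extract gamma_diss from the saturation of the dissociated
# atom number, N_atom(t) = N0 [1 - exp(-gamma_diss t)], and correct
# the initial molecule number via Eqn. (S6).  Placeholder data only.
import numpy as np
from scipy.optimize import curve_fit

def sat(t, N0, gamma):
    return N0 * (1.0 - np.exp(-gamma * t))

t_rf = np.array([0.05, 0.1, 0.2, 0.4, 0.8]) * 1e-3          # s
N_at = np.array([2.9e3, 4.5e3, 5.9e3, 6.4e3, 6.5e3])        # placeholders
(N0, gamma_diss), _ = curve_fit(sat, t_rf, N_at, p0=(6e3, 1e4))

gamma_loss = 1.0 / 1.4e-3            # 1/s, inverse molecule lifetime
N_mol0 = N_at[-1] * (gamma_diss + gamma_loss) / gamma_diss
print("gamma_diss = %.2f kHz" % (gamma_diss / 1e3))
print("corrected initial molecule number: %.0f" % N_mol0)
\end{verbatim}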
\subsection{Reaction dynamics measurement}
\label{sec.reactiondynamic}
\begin{figure}[ptbh]
\includegraphics[width=0.9\columnwidth]{fig_reactiondynamics}
\renewcommand\thefigure{S\arabic{figure}}
\caption{Time evolution of the number of $AB$ reactant and $B$ product. All
error bars represent $\pm1$ standard deviations.}\label{fig_reacdym}%
\end{figure}
In the main text, we have explained how to derive the exchange-reaction rate
from the measurement of time evolutions of the molecule reactant number and
the atom product number. The measured time evolution at 129.90 G is given as
an example for simplicity. Here we provide all measured results for the
reaction rates given in Fig. 4 of the main text. The reaction rate can be
given by,
\begin{equation}
\renewcommand\theequation{S\arabic{equation}} \beta_{r}=\frac{\Delta N_{B}%
}{\alpha N_{C}} \frac{\gamma_{AB}}{N_{AB}(0)},
\end{equation}
where $\alpha=(\frac{m_{\mathrm{K}} \bar{\omega}^{2}}{4\pi k_{\mathrm{B}}
T_{\mathrm{K}} })^{3/2}$ is a density coefficient which gives the mean atom
density in the $C$ state as $\bar{n}_{C}=\alpha\times N_{C}$. The temperature
of the K atoms in the $C$ state is measured after the molecule rf association at
each magnetic field, which typically gives a temperature of about 650 nK.
Together with the measured trapping frequency for the K atoms in the optical
trap, we can get the value of this density coefficient.
To measure the time evolution of the number of the molecule reactant $AB$, we
dissociate them into the free atom pairs of $A+|9/2,-7/2\rangle$ and measure
the K atom number in the $|9/2,-7/2\rangle$ state, after holding the $AB+C$
mixture in the optical trap for a specific duration. Then the time evolution of the
dissociated molecule number (K atoms in the $|9/2,-7/2\rangle$ state) is fitted
with an exponential decay model as shown in Fig.~\ref{fig_reacdym}\textbf{a},
which gives the dissociated molecule number $N^{\rm{diss}}_{AB}(0)$ for zero
holding duration and the molecule lifetime $1/\gamma_{AB}$. Then the initial
molecule number $N_{AB}(0)$ can be obtained from Eqn.~\ref{eqn.atommolnum},
where the loss rate $\gamma_{\mathrm{loss}}$ is equivalent to the overall loss
rate of $\gamma_{AB}$.
For the measurement of the product atoms in the $B$ state, we use a rf $\pi$
pulse to transfer the $B$ atoms to the $|9/2,-7/2\rangle$ state for absorption
imaging. The measured time evolution of the atom number ratio is then fitted
with an exponential saturation model, $A\times(1-\exp{(-t/\tau)})$, as shown
in Fig.~\ref{fig_reacdym}\textbf{b}. Note that only the increase in the number of
$B$ atoms is needed for the analysis here. Moreover, $\Delta N_{B}$ is the increased
atom number measured in the $|9/2,-7/2\rangle$ state, which is transferred
from the $B$ state with the $\pi$ pulse. Thus a finite transfer efficiency of
the $\pi$ pulses, which is about $\eta_{\pi}=90\%$, has to be taken into
account to give the correct increased ratio.
Finally, the reaction rate is given by
\begin{equation}
\renewcommand\theequation{S\arabic{equation}}
\beta_{r}=\frac{\Delta N_{B}}{\eta_{\pi}\alpha N_{C}}\frac{\gamma_{AB}%
\gamma_{\mathrm{diss}}}{(\gamma_{AB}+\gamma_{\mathrm{diss}})N^{\mathrm{diss}}_{AB}(0)}.
\end{equation}
The reaction rates given in the main text are calculated in this way. All
statistical and systematic uncertainties of the parameters in the above
equation are included to give the uncertainty of the calculated reaction rates.
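As an illustration of this error budget, the sketch below propagates independent relative uncertainties through the expression above to first order. All central values and uncertainties are placeholders, and treating every factor as independent is itself an approximation, as noted in the comments.
\begin{verbatim}
# Sketch: first-order propagation of independent relative errors
# through the final expression for beta_r.  All numbers are
# placeholders for the actual measured values and uncertainties.
import numpy as np

vals = {"dNB":   (1.2e3,  0.1e3),    # increased atom product number
        "NC":    (5.0e4,  0.2e4),    # atom reactant number
        "alpha": (1.0e-13, 0.1e-13), # density coefficient
        "eta":   (0.90,   0.02),     # pi-pulse transfer efficiency
        "gAB":   (1.0e3,  0.1e3),    # molecule loss rate (1/s)
        "gd":    (1.17e4, 0.08e4),   # dissociation rate (1/s)
        "NAB0":  (5.0e3,  0.3e3)}    # dissociated molecule number

dNB, NC, alpha, eta, gAB, gd, NAB0 = (vals[k][0] for k in
    ("dNB", "NC", "alpha", "eta", "gAB", "gd", "NAB0"))
beta_r = dNB / (eta * alpha * NC) * gAB * gd / ((gAB + gd) * NAB0)

# beta_r depends on gAB and gd both directly and through (gAB + gd);
# treating all factors as independent here slightly overestimates
# their contribution to the total error.
rel = np.sqrt(sum((err / val) ** 2 for val, err in vals.values()))
print("beta_r = %.2e +/- %.1e (units set by alpha)" % (beta_r, rel * beta_r))
\end{verbatim}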
\section{Introduction}
Metal-insulator transitions (MIT)\cite{MIT1,MIT2,MIT3,MIT4,MIT5} in realistic systems are often considered in a one-particle picture to explain the change of conductivity in a material. In situations such as band, Peierls and Anderson insulators, this yields a successful description, but in the case of Mott insulators the electron-electron correlation gives rise to a contribution which is more important than the electron-ion interaction in the localization of the electronic wave function (WF)\cite{Gebhard, Mott}. In these situations a proper description of the correlated WF becomes crucial, which in general cannot be achieved through the first-principles methods usually employed for periodic systems, such as Density Functional Theory (DFT)\cite{DFT1,DFT2,DFT3}. Among the different wave function-based local approaches employed to overcome the dimensionality problem in correlated extended and periodic systems, the method of increments (MoI)\cite{paulus2003, paulus2006, paulus2007, stoll2009, Staemmler2009, mueller2011, voloshina2012} has gained particular attention in the last decade. This approach can be applied to any wave function method and, as shown in the recent work by Voloshina and Paulus\cite{voloshina2014}, it successfully retrieves almost 100\% of the correlation energy of bulk materials with large static correlation if applied in a multireference (MR) fashion.\\
In our previous investigation\cite{dmrg_mit} we exploited the quantum chemical version\cite{chan-rev,reiher-rev,yanai-rev,wooters-rev,szalay-rev} of the density matrix renormalization group (DMRG)\cite{white92,white93} approach to calculate the ground state energy of a model system, \emph{i.e.} a beryllium ring-shaped cluster, and explored the use of quantum information theory (QIT)\cite{legeza2003, legeza2004, legeza2006-qpt, rissler06, barcza2010a, boguslawski2012b, boguslawski2013a} to characterize the wave function and thus determine the metal-like and insulating-like character of the system in different regions of the potential energy surface (PES). In the present paper we focus again on the same system, exploring the use of the method of increments (MoI) for closed-shell systems and obtaining whole ground-state dissociation curves through a multireference approach that allows us to describe the crossing region where single-reference approaches such as CCSD(T)\cite{CCSD1,CCSD2,CCSD3,CCSD4} fail. DMRG calculations as described in our previous work will be used as a reference for testing the MoI approximation to the correlation energy.\\
Even though a model system is considered, we will underline how the use of standard canonical methods becomes prohibitive because of the strong correlation effects involved, especially when aiming at the thermodynamic limit. As we will show, the MoI formalism used in this work involves only localized orbitals (LOs), which allows the use of limited active spaces even when large systems are involved. We will exploit this tool to calculate the correlation energy of Be$_n$ rings as large as $n=90$ and extrapolate the behavior at the thermodynamic limit.\\
This paper is structured as follows: in section~\ref{sec:MoI} we describe the formalism of the method of increments for single- and multireference methods as applied in this work; in section~\ref{sec:comp_det} the problems in describing the system with canonical wave function methods are underlined and we give the details of our calculations; in section~\ref{sec:results} we focus on the results obtained for the Be$_6$ ring in order to highlight the accuracy and the advantages of the multireference MoI in comparison with different methods; we also report and compare the results obtained for larger rings, and the behavior at the thermodynamic limit is extrapolated; our conclusions are finally drawn in section~\ref{sec:conclusion}.
\section{The Method of Increments}\label{sec:MoI}
\subsection{General Formalism}
The method of increments exploits the short range nature of the electronic correlation. Within this approach localized orbitals are used in order to describe the correlation energy as sum of individual contributions coming from the correlation of different parts of the systems, to which we will refer as bodies. In its general formalism which employs single reference methods such as coupled cluster (CC) or perturbation theory approaches, only the occupied orbitals are localized and some of them together with the virtual canonical orbitals are used to build the correlation space.\\
In the MoI, one starts from a first approximation of the correlation energy, $E^{\rm I}_{\rm corr}$, which is given by the sum of all independent contributions $\epsilon_{i}$ each arising from the $i^{th}$ body:
\begin{equation}
E^{\rm I}_{\rm corr} = \sum_i \epsilon_{i}\label{eq_1body}
\end{equation}
We refer to the individual terms $\epsilon_{i}$ as 1-body increments. The contribution expressed in Eq. \ref{eq_1body} ranges typically between 60\% and 90\% of the correlation energy achievable through the chosen correlation method. The remaining part of $E_{\rm corr}$ is enclosed in the higher order increments which consider the correlation among many bodies. The second natural step is then to include one more body and define the 2-body increments $\Delta \epsilon_{ij}$ as:
\begin{equation}
\Delta \epsilon_{ij} = \epsilon_{ij} - \left(\epsilon_i + \epsilon_j\right)
\end{equation}
Going forward to higher order increments we get similar expressions. For instance the 3-body increments are calculated using the expression:
\begin{equation}
\Delta \epsilon_{ijk} = \epsilon_{ijk} - \left(\Delta \epsilon_{ij} + \Delta \epsilon_{jk} + \Delta \epsilon_{ik}\right) - \left(\epsilon_i + \epsilon_j + \epsilon_k\right)
\end{equation}
Combining these contributions, one can finally express the correlation energy $E_{\rm corr}$:
\begin{equation}
E_{\rm corr} = \sum_i \epsilon_i + \sum_{i<j} \Delta \epsilon_{ij} + \sum_{i<j<k} \Delta \epsilon_{ijk} + \cdots\label{ecorr_sum}
\end{equation}
As pointed out above, the electronic correlation is in general short ranged, and the $ee$-interaction is a two-particle interaction, which implies that the increments decrease with the distance between and the order of the bodies, \emph{i.e.}, that the following convergence criteria are fulfilled:
\begin{equation}
|\Delta \epsilon_{ij}| > |\Delta \epsilon_{ik}| \mbox{ \hspace{0.1cm} for \hspace{0.1cm} } r_{ij} > r_{ik}\label{conv1}
\end{equation}
\begin{equation}
|\Delta \epsilon_{ij}| > |\Delta \epsilon_{ijk}| > |\Delta \epsilon_{ijkl}| > \cdots\label{conv2}
\end{equation}
This allows one to truncate the expansion of Eq.~\ref{ecorr_sum} while still recovering a meaningful amount of the correlation energy. The local nature of the MoI and the possibility of truncating Eq.~\ref{ecorr_sum} are crucial in making the method a candidate for the application to extended and periodic systems, for which canonical wave function methods are generally prohibitive. Nevertheless, the MoI cannot be applied universally: it fails when the convergence criteria of Eqs.~\ref{conv1} and \ref{conv2} are not fulfilled. Moreover, since the increments may have alternating signs, the MoI is not a variational method.
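The bookkeeping implied by Eqs.~\ref{eq_1body}--\ref{ecorr_sum} amounts to an inclusion--exclusion over groups of bodies. As a minimal illustration, the sketch below computes the truncated expansion given a black-box function \texttt{correlate} that returns the correlation energy of a group of bodies (in practice a CCSD(T) or CAS-SCF calculation on the corresponding localized orbitals); all names are ours, and the toy model at the end is purely illustrative.
\begin{verbatim}
# Sketch of the incremental expansion, Eqs. (1)-(4): increments are
# obtained by inclusion-exclusion from the raw correlation energies
# eps(S) of body groups S.  `correlate` stands for any black-box
# correlation calculation on the localized orbitals of the bodies in S.
from itertools import combinations

def incremental_energy(bodies, correlate, order):
    eps, inc = {}, {}
    for k in range(1, order + 1):
        for S in combinations(bodies, k):
            eps[S] = correlate(S)
            # subtract the increments of all proper non-empty subsets
            inc[S] = eps[S] - sum(inc[T]
                                  for m in range(1, k)
                                  for T in combinations(S, m))
    return sum(inc.values()), inc

# toy model: pairwise-additive correlation, exact already at order 2
pair = {(0, 1): -0.02, (0, 2): -0.01, (1, 2): -0.02}
def correlate(S):
    return -0.1 * len(S) + sum(pair.get(p, 0.0)
                               for p in combinations(S, 2))

E2, _ = incremental_energy(range(3), correlate, order=2)
print("%.4f" % E2)   # -0.3500 = 3*(-0.1) + (-0.05)
\end{verbatim}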
\subsection{Multireference Formalism}
As recently shown by Voloshina and Paulus \cite{voloshina2014}, a multireference MoI approach can successfully be used to calculate the cohesive energy of bulk metals for systems with a high static correlation contribution. In comparison with the single-reference formalism, a different localization pattern is required in this approach. Indeed, besides the occupied orbitals, the virtual orbitals important for the evaluation of the static correlation also have to be localized. In this way one can calculate incremental static contributions to $E_{\rm corr}$ arising from Complete Active Space Self Consistent Field (CAS-SCF)\cite{ref_CASSCF} calculations performed within the bodies constituted by occupied and virtual LOs. Finally, on top of the CAS-SCF wave function, an MR calculation is performed for each term of the incremental expansion, including the remaining delocalized virtual orbitals. The scheme is sketched in Fig.~\ref{fig:incre_scheme}. The equations described in the previous section remain valid and are used to expand and truncate the correlation energy. As described in the following section, we will employ a minimal basis set and localize all virtual orbitals, which will be used to construct the complete active space. We will not perform any multireference calculation on top of the CAS-SCF wave functions, and we will refer to the method as CAS-MoI.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig1.eps}
}
\caption{(Color online) Sketch of the partition of the active orbitals for the method of increments. In its multireference formalism, CAS-SCF calculations are performed using localized orbitals only as shown in the highlighted boxes in order to obtain the non-dynamical incremental contributions to the correlation energy. On top of the obtained CAS wave functions, multireference calculations can be performed including excitation to the delocalized virtuals.}
\label{fig:incre_scheme}
\end{figure}
\section{Calculation details}\label{sec:comp_det}
\subsection{The System}
As the subject of our investigation we have chosen a model system characterized by a high static correlation contribution, \emph{i.e.}~beryllium ring-shaped clusters, whose pseudo-onedimensional structure was chosen to resemble the periodic Born-von-Karman boundary conditions and to obtain a WF that will inevitably converge with size towards the thermodynamic limit of an infinite linear chain. Because of the quasi degeneracy of the valence 2$s$ and virtual 2$p$ orbitals (more than 93\% of the correlation energy of the Be atom in its $^1S$ ground state is static), 4$n$ active orbitals would be required to obtain a size-consistent CAS-SCF reference for MR calculations for Be$_n$. This leads to CAS(2$n$, 4$n$) calculations, which of course become prohibitive as the number of Be atoms increases. We report in Fig.~\ref{fig:be10_cas} the dissociation curves of Be$_{10}$ calculated using a minimal atomic basis set and different active spaces within the CAS-SCF method. As one can see, although the use of larger and larger active spaces gives a finer and finer description of the potential energy curves, the calculated dissociation plateaus lie at much higher energies than the dissociation limit calculated within the same method using an active space consisting of the 2$s$ and 2$p$ functions of the free atom. Similar results are reported in more detail in Table~\ref{tab_ci}, as described later on.\\
This model system is complicated further by the fact that the $p$ functions, besides enlarging the active space, play a big role at the Hartree-Fock level too. Indeed, while at dissociation the HF orbitals will be linear combinations of pure 2$s$ atomic orbitals, as the interatomic distance shortens their $p$-character increases and, at a certain geometry, the HOMO switches from a pure $s$ to a pure $p$ molecular orbital. In other words, we encounter a crossing between two HF configurations which dominate the ground state at different interatomic distances. This holds for the finite pseudo-onedimensional clusters independently of their size, as well as for the periodic chain, given that both the energy and the character of the discrete Hartree-Fock molecular orbitals of Be$_n$ rings converge towards the crystal orbitals. In order to supply a graphical representation of these HF wave functions, we report in Fig.~\ref{fig:configurations} the band structure calculated in the two regimes and the $s$-character of the Hartree-Fock valence orbitals of a periodic beryllium chain close to the minimum of the dissociation curve (blue line) and towards dissociation (red line). We will refer to these as configurations (Conf) 1 and 2, respectively, and we will indicate the two regimes where each of them dominates as the metal-like and the insulator-like regime. Clearly, an accurate single-reference method such as CCSD(T) will work perfectly in the regimes where one of the configurations is dominant, but not around the crossing region. We will show how the size problem and the necessity for an MR approach are overcome through the use of the multireference method of increments, allowing us to describe the PES of Be$_n$ up to dissociation.\\
By unitary transformation of the canonical orbitals of the two main HF configurations, different sets of LOs are obtained (see Fig.~\ref{fig:configurations}). Indeed, in the insulator-like regime the localized orbitals resemble atomic 2$s$ and 2$p$ orbitals, while in the metallic regime $\sigma_g$- and $\sigma_u$-like orbitals appear. As we will show, the choice of the starting Hartree-Fock configuration has a huge impact on the effectiveness of the method, even though in both cases the result should converge toward the Full-CI limit.
\subsection{Computation and basis set}
The MOLPRO quantum chemistry package\cite{MOLPRO} was used to perform the different steps of the MoI calculations, \emph{i.e.}, Hartree-Fock calculations, Foster-Boys localization\cite{Boys} and CAS-SCF calculations. We employed a minimal basis set consisting of 1$s$, 2$s$ and 2$p$ atomic functions derived from Dunning's $cc$-pVDZ\cite{cc_pVDZ}. We kept the core 1$s$ orbitals frozen during the incremental calculations, focusing on the correlation of the valence orbitals. The Crystal09\cite{crystal} code was employed to calculate the Hartree-Fock wave function of the periodic chain using the same basis set.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig2.eps}
}
\caption{(Color online) Dissociation curves calculated for the ground state of Be$_{10}$ at the Complete Active Space Self Consistent Field (CAS-SCF) level of theory, using different active spaces. The dashed line corresponds to the dissociation limit calculated using a CAS(2,4) for the Be atom. A minimal basis set has been used in all cases.}
\label{fig:be10_cas}
\end{figure}
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig3.eps}
}
\caption{(Color online) Hartree-Fock band structure and $s$-character of the valence crystal orbitals of an infinite beryllium chain for the two leading configurations Conf~1 (blue line) and Conf~2 (red line). The $s$-character, defined as the normalized sum of the squared coefficients of the $s$-bases, is reported in reciprocal space as well as the valence band for the two regimes. Notice that the character of the valence band (full lines) at the X-point of the Brilluoin zone switches from pure $p$ to pure $s$ as sketched in the insets. The localized orbitals obtained by unitary transformation of the valence and virtual orbitals are also shown. As one can see, the difference in $s$-character is reflected by the different symmetry of the localized orbitals.
}
\label{fig:configurations}
\end{figure}
\section{Results}\label{sec:results}
\subsection{Be$_6$ ring}
We start our analysis by reporting the single-reference CCSD(T) results obtained for Be$_6$ using both the canonical formalism and the MoI expansion, to illustrate how, within the same quantum chemical method, this truncation works in retrieving the correlation energy. The method of increments at the 4-body level retrieves more than 99.99\% of the correlation energy achievable with the canonical CCSD(T) if the proper HF starting configurations (Conf~1~and~2) are used in the two limiting regimes, while around the crossing this percentage drops to around 95\%. Also in this regime, the $\mathcal T_1$ diagnostic calculated within the canonical CCSD(T) presents values larger than $0.03$, highlighting the necessity of using a multireference approach for describing the static correlation. However, as already stated and shown in Fig.~\ref{fig:be10_cas}, canonical CAS-SCF approaches cannot be used to calculate a size-consistent wave function unless a CAS(2$n$,4$n$) is performed, \emph{i.e.} a Full-CI in our minimal basis set, and this is not applicable as $n$ increases. Of course, a multireference calculation performed on top of an accurate CAS or RAS calculation would reach the required accuracy, but since it is our interest to explore the convergence toward the thermodynamic limit, it is clear that a local method is preferable. Moreover, this choice in general reduces the problem of choosing a proper and consistent active space.\\
In Fig.~\ref{fig:PES_MR-MoI} we report the dissociation curve of Be$_6$ as obtained using the CAS-MoI at the 4-body level with Conf~1~and~2. Because in our case the incremental scheme converges towards the Full-CI solution, it would in principle be equivalent to start from one or the other configuration. However, as can be seen, when the correlation energy is truncated at the 4-body level the two results match in a narrow regime only, \emph{i.e.} around the crossing, while for other internuclear distances a non-monotonic behavior is obtained if the proper HF configuration is not used. We now focus on the comparison with the DMRG data, which were obtained in our previous investigation\cite{dmrg_mit}, and we use the fixed $M=1024$ block-state results as a reference to evaluate the behavior of the CAS-MoI and the other methods discussed. In Fig.~\ref{fig:diff_DMRG} we report the absolute error with respect to the DMRG data of the above-described CAS-MoI, canonical CCSD(T) and the very accurate multiconfigurational calculation, RAS(4,24). As can be seen, the CAS-MoI gives the best results for almost any interatomic distance, and while the other approaches depart too much from the DMRG reference around the crossing, the CAS-MoI results stay within an error of $\pm~4~{\rm m}E_h$ on the total energy. Moreover, since the increments are all negative and converging (see Fig.~\ref{fig:incre_percent}), we might argue that the MoI behaves variationally in our calculations. As a consequence, if this is the case, the method of increments is actually retrieving more correlation energy than DMRG (as one can see, the error is negative in a quite wide regime). Only in the metal-like regime does this approach give worse results with respect to the others reported, even if it underestimates the magnitude of $E_{\rm corr}$ only by $2\times10^{-3}~E_h$ with respect to RAS(4,24). This can be explained considering the high delocalization of the wave function in this regime, which is hard to describe with a local approach. A quantitative comparison between the results with CAS-MoI, DMRG, CCSD(T) and different CAS and RAS methods is reported in Table~\ref{tab_ci}. Again, one can realize the difficulty in retrieving a meaningful amount of the correlation energy with canonical methods and how in general the local approach gives reasonable results with considerably less effort.\\
Let us now consider the strong deviations shown in Fig.~\ref{fig:PES_MR-MoI} (dashed lines) that occur when the CAS-MoI is employed starting from a HF configuration which is not the dominant one. In order to explain why the method fails to describe the electronic structure of the system in this situation, let us analyze the behavior of the individual increments as shown in Fig.~\ref{fig:incre_percent}. What can be observed is that, as we pass the crossing, the convergence conditions stated in Eqs.~\ref{conv1}~and~\ref{conv2} break down and higher-order increments give contributions to $E_{\rm corr}$ comparable to the lower-order ones. This is equivalent to saying that in the employed orbital basis (\emph{i.e.} HF configuration), higher-order contributions become more and more important, which is analogous to what was concluded from the analysis of the mutual information in our DMRG study\cite{dmrg_mit}, which showed the increase of long-range entanglement. This strong dependence on the starting configuration is of course a serious disadvantage of the MoI, which can otherwise yield accurate results with relatively cheap calculations.\\
Since we are forced to calculate both configurations, it might appear that no particular advantage arises from the use of the CAS-MoI with respect to the single-reference approach. On account of this, in Fig.~\ref{fig:diff_conf1_conf2} we report the differences between the energies obtained from Conf~1 and Conf~2 in the crossing regime for both methods. As can be seen, in the case of CAS-MoI this difference is around $10~{\rm m}E_h$ smaller than for CCSD(T) and ranges between $\pm5~{\rm m}E_h$. This means that in this narrow regime the two bases give comparable results, leading to a smooth curve, which is not the case for CCSD(T). In view of this and the previous observations, we can conclude that the CAS-MoI captures the behavior of the PES also in the crossing regime, allowing us to describe the whole dissociation curve.\\
We conclude this section by comparing the individual increments obtained from CCSD(T) and CAS-SCF (see Fig.~\ref{fig:incre_percent} and Table~\ref{tab_incre}). For the former method the lower-order increments are larger than for the CAS-MoI because a larger orbital space is involved, but at the 3- and 4-body level the orbital optimization, which retrieves the static correlation, starts playing a bigger role than the size of the active space. In general the single-reference method converges faster, but the total correlation energy is larger for CAS-SCF than for the single-reference approach.
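For ease of reading Table~\ref{tab_incre}, we recall that at the 4-body level the correlation energy is simply assembled from the tabulated partial sums,
\[
E_{\rm corr} \simeq \sum_i \epsilon_i \;+\; \sum_{i,j} \Delta\epsilon_{ij} \;+\; \sum_{i,j,k} \Delta\epsilon_{ijk} \;+\; \sum_{i,j,k,l} \Delta\epsilon_{ijkl},
\]
so that, for instance, the four CCSD(T) entries of the metal-like column add up to the reported $E_{\rm corr}=-0.0271724~E_h$.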
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig4.eps}
}
\caption{(Color online) Potential energy curve for Be$_6$ calculated using the MoI at the 4-body level within the CAS-SCF approach starting from the two HF main configurations.}
\label{fig:PES_MR-MoI}
\end{figure}
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig5.eps}
}
\caption{(Color online) Differences between the Be$_6$ PES obtained with various methods and the DMRG($M=1024$) reference.}
\label{fig:diff_DMRG}
\end{figure}
\begin{figure*}[htb]
\centerline{
\includegraphics[width=0.9\textwidth]{Fig6.eps}
}
\caption{(Color online) Percentage of the correlation energy contributed by each incremental order as a function of distance for a Be$_6$ ring. The data are reported for configuration 1 (doubly occupied $sp$ hybrids) and configuration 2 (2$s^2$). The convergence criteria are fulfilled only in particular distance regimes, where the incremental scheme can successfully be used and more than 99\% of the correlation energy can be retrieved at the 4-body level. At the crossing point (2.60~\AA) both configurations can be used.}\label{fig:incre_percent}
\end{figure*}
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig7.eps}
}
\caption{(Color online) Comparison between the energy difference $E_{\rm Conf 1}-E_{\rm Conf 2}$ for the canonical CCSD(T) and the CAS-MoI around the crossing for Be$_6$. At the Full-CI level this difference is expected to be zero. At the 4-body level the method of increments gives an acceptable result.}\label{fig:diff_conf1_conf2}
\end{figure}
\begin{table*}[hb]\fontsize{9}{9}\selectfont
\centering
\caption{Comparison between different canonical CAS and RAS approaches, CCSD(T), the DMRG calculations reported in Ref.~[\onlinecite{dmrg_mit}], and the MoI calculation at the 4-body level as described in the present paper, for Be$_6$.}\label{tab_ci}
\begin{tabular}{lrccp{1.5cm}rcc}
& \multicolumn{3}{c}{Metal-like -- 2.10\AA -- Conf 1} && \multicolumn{3}{c}{Insulator-like -- 3.00\AA -- Conf 2} \\
\cline{2-4}\cline{6-8}
Calculation & CSFs & Energy ($E_h$) & $E_{\rm corr} (\%)$ && CSFs & Energy ($E_h$) & $E_{\rm corr} (\%)$ \\
\hline
CAS(6,6) & 34& -87.329677 & 8.09\% && 34& -86.947385 & 8.54\% \\
CAS(6,12) & 2,086& -87.364902 & 29.31\% && 2,086& -87.034032 & 33.57\% \\
CAS(6,15) & 8,155& -87.367285 & 30.74\% && 8,155& -87.035797 & 34.08\% \\
RAS(4,21) & 13,523& -87.373196 & 34.30\% && 13,521& -87.002608 & 24.49\% \\
CAS(6,21) & 64,835& -87.373263 & 34.34\% && 64,835& -87.002788 & 24.54\% \\
RAS(6,18) & 1,268,308& -87.425535 & 65.82\% && 1,268,308& -87.146271 & 65.98\% \\
RAS(4,23) & 134,843& -87.428804 & 67.79\% && 134,837& -87.141082 & 64.48\% \\
RAS(2,24) & 884& -87.455551 & 83.90\% && 885& -87.160246 & 70.02\% \\
CCSD(T) & & -87.479226 & 98.15\% && & -87.261723 & 99.32\% \\
RAS(4,24) & 295,746& -87.479826 & 98.51\% && 295,752& -87.240226 & 93.12\% \\
RAS(6,24) & 14,972,954& -87.482727 & 100.26\% && 14,972,954& -87.260875 & 99.08\% \\
\hline
CCSD(T)-MoI(4-body)& & -87.479272 & 98.18\% && & -87.261705 & 99.32\% \\
MR-MoI(4-body) & 866,320& -87.478599 & 97.77\% && 866,320& -87.264183 & 100.03\% \\
\hline
DMRG($M=1024$) & & -87.482294 & 100.00\% && & -87.264065 & 100.00\%
\end{tabular}
\end{table*}
\begin{table*}[h]\fontsize{9}{9}\selectfont
\centering
\caption{Individual incremental orders and total correlation energy of Be$_6$ calculated with the MoI at the 4-body level using the single-reference method CCSD(T) and the multireference method described in this paper.}\label{tab_incre}
\begin{tabular}{lp{0.2cm}ccp{0.4cm}ccp{0.4cm}ccp{0.4cm}cc}
& & \multicolumn{5}{c}{Configuration 1} && \multicolumn{5}{c}{Configuration 2}\\
\cline{3-7}\cline{9-13}
& & \multicolumn{2}{c}{Metal-like -- 2.10\AA} && \multicolumn{2}{c}{Crossing regime -- 2.60\AA} && \multicolumn{2}{c}{Crossing regime -- 2.60\AA} && \multicolumn{2}{c}{Insulator-like -- 3.00\AA} \\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}
Increments & & CCSD(T) & CAS-SCF && CCSD(T) & CAS-SCF && CCSD(T) & CAS-SCF && CCSD(T) & CAS-SCF \\
\hline
$\sum_i \epsilon_i$ & &-0.0197077 &-0.0195083&&-0.0282556 & -0.0278839&&-0.0443223 & -0.0438349 && -0.0532219 &-0.0531172\\
$\sum_{i,j} \Delta\epsilon_{ij}$ & &-0.0072719 &-0.0051364&&-0.0109604 & -0.0088279&&-0.0113016 & -0.0105205 && -0.0040539 &-0.0041013\\
$\sum_{i,j,k} \Delta\epsilon_{ijk}$ & &-0.0001276 &-0.0019666&&-0.0004272 & -0.0043662&&-0.0005745 & -0.0028508 && -0.0000447 &-0.0004985\\
$\sum_{i,j,k,l} \Delta\epsilon_{ijkl}$ & &-0.0000652 &-0.0004490&&-0.0000009 & -0.0007402&&-0.0001604 & -0.0001981 && 0.0000035 &-0.0000157\\
\hline
$E_{\rm corr}$ & &-0.0271724 &-0.0270603&&-0.0396440 & -0.0418181&&-0.0563588 & -0.0574042 && -0.0573169 &-0.0577327\\
\vspace{-0.2cm}\\
$E_{\rm tot}$ & &-87.479272 &-87.478599&&-87.300283 & -87.3133272&&-87.310961 & -87.317234 && -87.261705 &-87.264200
\end{tabular}
\end{table*}
\subsection{Extension to larger rings}
After analyzing the use of the method for Be$_6$, we report the results for larger rings in order to evaluate the convergence towards the thermodynamic limit. DMRG allowed us to describe the dissociation of Be$_{10}$ (20 electrons in 40 active orbitals) and, as for the smaller cluster, we report in Fig.~\ref{fig:diff_dmrg_10} the energy differences of CAS-MoI and CCSD(T) with respect to DMRG. Also in this case the former method behaves much better than the single-reference one, and the obtained values differ from the DMRG ones by around $5~{\rm m}E_h$.\\
As already stated, moving toward the aim of describing periodic systems, local methods become better candidates than canonical ones for obtaining accurate correlation energies. Within the CAS-MoI, the number of necessary CSFs depends on the body order only and not on the size of the system, while in a canonical approach prohibitive active-space sizes become necessary. The PES calculated for Be$_{10}$ and Be$_{14}$ rings at the 4-body level are shown in Fig.~\ref{fig:6_10_14} in comparison with Be$_6$. Even from these small clusters we can deduce the trend with increasing size, namely a faster convergence in the insulator-like regime than in the metal-like regime and a shift of the crossing towards larger internuclear distances.\\
In order to highlight these convergences we report in Table~\ref{tab_incre_size} the individual increments and the calculated correlation energies for Be$_n$ rings with $n=$ 6, 10, 14, 22, 30 and 90 in the two regimes. As one can see in Fig.~\ref{fig_fits}, the individual increments can be fitted with an $n^{-\nu}$ function. In this way, through extrapolation, it was possible to evaluate the values for $n\rightarrow\infty$, \emph{i.e.} for the periodic chain, which are also reported in Table~\ref{tab_incre_size}.\\
It can be observed that both in the metal-like and in the insulator-like regime the trend is basically quadratic for the 1-body increments, while it rapidly deviates from this behavior for higher-order increments in the metal-like regime, where the dependence on the angle is stronger. It has to be underlined that for the 4-body case the value for $n=6$ was not included in the fit because, for obvious geometrical reasons, it was too far out of trend. Clearly the 1-body term dominates the total correlation energy, and therefore $E_{\rm corr}$ also depends quadratically on $1/n$. We conclude by showing in Fig.~\ref{HF_trends} that a similar behavior is obtained for the Hartree-Fock energy too.\\
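As a minimal illustration of the extrapolation procedure, the following Python sketch fits the 1-body increments of the metal-like regime (values taken from Table~\ref{tab_incre_size}) with the form $a/n^\nu+E^{\infty}$ used in Fig.~\ref{fig_fits}; the use of \texttt{scipy} and the initial guess are illustrative choices and not the actual script employed in this work.
\begin{verbatim}
# Minimal sketch of the a/n^nu + E_inf extrapolation; the data are the
# 1-body increments at 2.10 Angstrom (Conf 1) from the table above.
import numpy as np
from scipy.optimize import curve_fit

n = np.array([6, 10, 14, 22, 30, 90], dtype=float)
eps1 = np.array([-0.01950832, -0.01852919, -0.01826832,
                 -0.01810244, -0.01804935, -0.01799396])

def model(n, a, nu, e_inf):
    # Finite-size scaling form: a / n**nu + E_infinity
    return a / n**nu + e_inf

popt, _ = curve_fit(model, n, eps1, p0=(-0.1, 2.0, eps1[-1]))
a, nu, e_inf = popt
print("nu = %.2f, extrapolated 1-body sum = %.6f E_h" % (nu, e_inf))
\end{verbatim}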
Of course a faster convergence would be achieved if linear chains had been considered, since no angle dependence would be present as in the ring-shaped clusters; however, rings were chosen to impose a higher symmetry on the system, eliminating any border effects and making the different bodies equivalent to each other. Moreover, the cyclic structure is closer to the periodic Born-von-Karman boundary conditions imposed in the periodic case.
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig8.eps}
}
\caption{(Color online) Energy difference between the CAS-MoI at the 4-body level and DMRG($M=1024$) for Be$_{10}$.}
\label{fig:diff_dmrg_10}
\end{figure}
\begin{figure}[htb]
\centerline{
\includegraphics[width=0.45\textwidth]{Fig9.eps}
}
\caption{(Color online) Potential energy surfaces for Be$_6$, Be$_{10}$ and Be$_{14}$ rings obtained through the CAS-MoI at the 4-body level.}
\label{fig:6_10_14}
\end{figure}
\begin{table*}\fontsize{9}{9}\selectfont
\centering
\caption{Individual incremental orders calculated with the CAS-MoI for Be$_n$ ring-shaped clusters of different sizes from $n=$ 6 to 90, at two internuclear distances, 2.10{\AA} and 3.00{\AA}. The extrapolated values for the infinite chain are also reported.}\label{tab_incre_size}
Metal-like regime -- 2.10{\AA} Conf 1\\
\begin{tabular}{lccccccc}
Increments & 6 & 10 & 14 & 22 & 30 & 90 & $n\rightarrow\infty$ \\
\hline
$\sum_i \epsilon_i$ & -0.01950832 & -0.01852919 & -0.01826832 & -0.01810244 & -0.01804935 & -0.01799396 & -0.017990(2) \\
$\sum_{i,j} \Delta\epsilon_{ij}$ & -0.00513636 & -0.00388669 & -0.00358258 & -0.00340908 & -0.00335721 & -0.00330521 & -0.003305(3) \\
$\sum_{i,j,k} \Delta\epsilon_{ijk}$ & -0.00196657 & -0.00152785 & -0.00145735 & -0.00141879 & -0.00140766 & -0.00139659 & -0.001402(4) \\
$\sum_{i,j,k,l} \Delta\epsilon_{ijkl}$ & -0.00044904 & -0.00048085 & -0.00047395 & -0.00047172 & -0.00047140 & -0.00047112 & -0.00047118(4) \\
\hline
$E_{\rm corr}$ & -0.02706030 & -0.02442458 & -0.02378220 & -0.02340203 & -0.02328562 & -0.02316688 & -0.023155(2)
\end{tabular}\\
\vspace{0.6cm}
Insulator-like regime -- 3.00{\AA} Conf 2\\
\begin{tabular}{lccccccc}
Increments & 6 & 10 & 14 & 22 & 30 & 90 & $n\rightarrow\infty$ \\
\hline
$\sum_i \epsilon_i$ & -0.05311719 & -0.05218930 & -0.05190679 & -0.05174649 & -0.05169682 & -0.05164440 & -0.051633(7) \\
$\sum_{i,j} \Delta\epsilon_{ij}$ & -0.00410130 & -0.00450650 & -0.00464482 & -0.00472404 & -0.00474799 & -0.00477369 & -0.004783(6) \\
$\sum_{i,j,k} \Delta\epsilon_{ijk}$ & -0.00049846 & -0.00063636 & -0.00068277 & -0.00070780 & -0.00071598 & -0.00072474 & -0.000727(2) \\
$\sum_{i,j,k,l} \Delta\epsilon_{ijkl}$ & -0.00001574 & 0.00002165 & 0.00002305 & 0.00002433 & 0.00002457 & 0.00002476 & 0.0000247(5) \\
\hline
$E_{\rm corr}$ & -0.05773268 & -0.05731051 & -0.05721133 & -0.05715400 & -0.05713622 & -0.05711807 & -0.057119(2)
\end{tabular}
\end{table*}
\begin{figure*}[htb]
Metal-like regime -- 2.10{\AA} Conf 1\\
\centerline{
\includegraphics[width=0.9\textwidth]{Fig10_a.eps}
}
Insulator-like regime -- 3.00{\AA} Conf 2\\
\centerline{
\includegraphics[width=0.9\textwidth]{Fig10_b.eps}
}
\caption{(Color online) Convergence with the ring size of the largest individual increments of each order. The red lines were obtained by fitting the calculated data with an equation of the form $a/n^\nu+E^{\infty}$ where $E^{\infty}$ is the value extrapolated for the periodic chain.}\label{fig_fits}
\end{figure*}
\begin{figure*}[htb]
\centerline{
\includegraphics[width=0.9\textwidth]{Fig11.eps}
}
\caption{(Color online) Convergence with the ring size of the Hartree-Fock and total energy per atom for 2.10\AA (left) and 3.00\AA (right).}\label{HF_trends}
\end{figure*}
\section{Conclusion}\label{sec:conclusion}
The method of increments has been applied in a multireference formalism to describe the dissociation of ring-shaped beryllium clusters of different sizes. The large static correlation involved in these systems makes them good candidates for our analysis. Being size-consistent and size-extensive, the MoI allowed us to obtain CAS-SCF wave functions that can further be used as a basis for multireference calculations. A key point of this investigation is that we were able to describe the whole PES also where single-reference methods fail, \emph{i.e.} close to the crossing. The DMRG calculations performed previously were crucial to evaluate the reliability of our results. The correlation energy of a system as large as Be$_{90}$ was obtained with this approach; we want to underline that within a canonical method this calculation would involve 180 active electrons in 360 active orbitals, which would be impossible to treat without a local approach. Finally, by investigating different sizes of Be$_n$ rings we could evaluate the correlation energy of the periodic system.
\acknowledgments{This research was supported in part by the Hungarian Research Fund (OTKA) under Grants No. NN110360 and No. K100908, the Agence Nationale de la Recherche (ANR), and the German Research Foundation (DFG) via the project "Quantum-chemical investigation of the metal-insulator transition in realistic low-dimensional" (action ANR-11-INTB- 1009 MITLOW PA1360/6-1). The support of the Zentraleinrichtung f\"ur Datenverarbeitung (ZEDAT) at the Freie Universit\"at Berlin is gratefully acknowledged. Travel funds by the Max Planck Society via the International Max Planck Research School are appreciated.}
\section{Introduction}
\input{src/introduction.tex}
\input{src/background-and-requirement-analysis.tex}
\input{src/related-work.tex}
\input{src/data_modeling.tex}
\input{src/visual_design.tex}
\input{src/system_overview.tex}
\input{src/case_studies.tex}
\input{src/expert_interview.tex}
\input{src/discussion.tex}
\input{src/conclusion_and_future_work.tex}
\input{src/acknowledgements.tex}
\bibliographystyle{abbrv-doi}
\section{Related Work}
The related work can be categorized into two types: manufacturing data visualization and time-series data visualization.
\subsection{Manufacturing Data Visualization}
Many visual analytics approaches have been proposed for exploring manufacturing data. According to the survey by Ramanujan et al. \cite{ramanujan2017visual}, there are mainly two types of work: visualization for production planning and simulation, and visualization for process monitoring.
\textbf{Visualization for production planning and simulation.}
Previous work on production planning and simulation aims at revealing the bottleneck, identifying potential problems, and supporting interactions to improve the production strategy.
VIZ\_planner \cite{zhang1996visualizing} is an early study which utilizes bar charts to visualize multiple attributes of production planning.
Sydow et al. \cite{sydow2015visualizing} used Sankey diagrams \cite{wongsuphasawat2012exploring} to show the relationships between orders and available resources.
\modified{These studies provide useful insights into production planning, but they focus on planning for a small number of products and do not scale to a rather large number of products.}
Wu et al. \cite{wu2001visualizing} developed a visualization system to reveal weekly machine \modified{loads} in the metal ingot casting process. It supports manually moving the production task from one week to another.
LiveGantt \cite{jo2014livegantt} extends the Gantt chart with reordering and aggregation algorithms to identify common scheduling sequences and support \modified{the} rearrangement of production tasks.
\modified{However, these studies mainly rely on the manual adjustment of the production plan and fail to take advantage of automatic algorithms.}
Worner et al. \cite{worner2013simulation} provided an interactive visual interface where planners could redesign the manufacturing layout. Immediate feedback is provided to support the comparison of different strategies.
\modified{However, the work only provides overall performance indicators for the comparison of manufacturing layouts.}
The simulation of production processes has also been studied to identify anomalies and planning drawbacks for decision making.
Zhou et al. \cite{zhou2011visualizing} applied visualization techniques to the simulation of steel manufacturing to promote the understanding of the complex manufacturing system. W{\"o}rner et al. \cite{worner2011visual} visualized the simulation run of the assembly line to identify potential congestion during the manufacturing process. Post et al. \cite{post2017user} developed a visualization system to show temporal changes in the simulated production process.
\textbf{Visualization for process monitoring.}
With the advent of industry 4.0, a great number of manufacturing data have been collected by sensors deployed in the plant, which promotes the adoption of visual analytics approaches to process monitoring.
TTPView \cite{matkovic2002process}, as an early study, employs focus+context dashboard visualization techniques to present the process monitoring data. ViDX \cite{xu2017vidx} adopts an outlier-preserving aggregation approach in the Marey's graph \cite{tufte2001visual} to identify patterns and diagnose problems in assembly lines. It supports user-driven anomaly detection and presents the 3D model of the machine with errors. BlueCollar \cite{herr2019bluecollar} visualizes the path of the workers in a manufacturing site to facilitate the optimization of production layouts. Chen et al. \cite{chen2018sequence} developed a new sequence mining technique to summarize the patterns of vehicle fault records. Wu et al. \cite{wu2018visual} integrated advanced algorithms with visual analytics approaches to monitor the equipment and predict risks. Zhou et al. \cite{zhou2018visually} visualized the running status of manufacturing facilities by a matrix-based heatmap.
\modified{Although previous studies on production planning can help planners recognize patterns, discover bottlenecks, and improve planning strategies,
they only offer several overall performance indicators for the comparison of different production plans. Planners have no idea about the attributes of each product and the production details in each plant.
Besides, the influence of a sudden change in the raw material supply, the production process and the market demand is not considered, which may reduce the flexibility of a production planning strategy and result in a significant loss.}
Compared to previous work, our approach enables detail-on-demand exploration and comparison of different planning strategies. It combines domain knowledge and automated algorithms to support efficient optimization of production planning.
Furthermore, our solution can reveal the adverse influence of unanticipated changes in the market or the plant, and facilitate a quick adjustment to the production plan.
\subsection{Time-series Data Visualization}
A number of techniques have been developed for time-series data visualization. The existing work can generally be classified into two categories based on the arrangement of the time domain \cite{aigner2011visualization, brehmer2017timelines}: linear time-series data visualization and cyclic time-series data visualization.
\textbf{Linear time-series data visualization.}
Most of the time-series data have a linear time domain, where each time primitive has different predecessors and successors.
An early and widely-used approach for presenting linear time-series data is line charts \cite{playfair1801commercial}. Based on line charts, small multiples \cite{tufte2001visual} were proposed for overview and comparison. Horizon graphs \cite{saito2005two}, stacked graphs \cite{byron2008stacked}, and braided graphs \cite{javed2010graphical} display data in a compact form and make it possible to present a large amount of data \modified{on} a limited screen.
With the increase of the complexity and volume in time-series data, multiple visual analytics systems have been designed to perform various analysis tasks. Line Graph Explorer \cite{kincaid2006line} supports the exploration of large-scale time series data by embedding focus+context encoding into line charts. It encodes values by \modified{the color} instead of \modified{the height}, thus providing a space-saving overview. Javed et al. \cite{javed2010stack} proposed stack zooming, a multi-focus method for exploring long time-series data, which was later improved by KronoMiner \cite{zhao2011kronominer} and Timenotes \cite{walker2016timenotes}. Another multiresolution method is MultiStream \cite{cuenca2018multistream}, which extends the basic streamgraph to describe hierarchical data.
\textbf{Cyclic time-series data visualization.}
Cyclic time domain is composed of periodic time primitive sequences.
Calendar-based visualization \cite{van1999cluster} is used to describe data with weekly, monthly, or yearly patterns. Another visualization technique, the spiral graph \cite{weber2001visualizing}, utilizes rings to represent cyclic time-series data.
Recently, more specialized visualization systems are developed for periodic time-series data. For example, T-Cal \cite{fu2018t} visualizes team communication data with a calendar-based approach. IDMVis \cite{zhang2019idmvis} shows the discrete records of a diabetic's physical condition with colored dots to promote the reasoning and refinement of the treatment.
Our work is inspired by the techniques for visualizing linear time-series data. The production planning data are essentially time-series data. We integrate existing time-series data visualization techniques with the visual encoding for comparative analysis to support the exploration of production plans.
\section{\modified{Background}}
\modifiedSecond{In this section, we introduce the background of production planning, including a detailed description of the data (e.g., the performance indicators) and the analysis of tasks and requirements of production planning. The data description and requirement analysis are based on both an extensive survey of prior research and our interviews with the production planning experts from our industry collaborator.
}
\subsection{\modifiedSecond{Data Description}}
In this paper, we study the production planning problem in \modified{the} manufacturing industry. We work closely with planners and factory managers from a world-leading manufacturing company. The company owns about 50 factories performing production tasks of tens of thousands of \textbf{products} and \textbf{assembly items}.
\modified{Planners need to make a 30-day production plan every day based on a hybrid production planning algorithm \cite{sahling2009solving},
which takes the \textbf{initial inventory} of raw materials, the \textbf{production capacity} of factories~\cite{florian1971deterministic}, the \textbf{arrangement of holidays}, and the \textbf{demand of products} as the input.}
Their work is to assign daily tasks to each factory.
A typical production task is that Factory A is required to produce $n$ pieces of Product B on Day C.
Each product may be produced by several factories, which jointly serve the demand. There are two kinds of demand: the \textbf{real order} from the customer, and the \textbf{predicted demand} based on past orders. In this work, we sum the two types of demand together for simplicity.
Both the inventory and the production output of the plant can be used to serve the demand. If the demand is not satisfied, it will be delayed and produced later according to a predefined priority list of products.
\modified{The production in a factory relies on two factors: the production capacity of the factory and the supply of the \textbf{child components}.}
First, the production capacity can be described as the total resources that can be used to produce products. In the same factory, different products may consume different types of production capacity, which are represented as diverse \textbf{capacity sets}.
Second, the hierarchical structure in the dependency between products and their child components creates a \textbf{bill of materials (BOM) tree} \cite{BOM_explanation}. For example, the production of a mobile phone will consume several central processing units (CPU), a display screen, and so on, while the production of the CPU relies on the supply of arithmetic logic units (ALU), registers, and so on. The leaf nodes of the BOM tree are \textbf{raw materials}, which are purchased from other suppliers. Both the intermediate components and the raw materials on the leaf nodes of the BOM tree can be ordered by customers.
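To make the BOM structure concrete, a fragment of such a tree can be written as a nested mapping from each item to the child components (with quantities) its production consumes; the product names and quantities below are purely illustrative.
\begin{verbatim}
# Purely illustrative BOM fragment: one phone consumes two CPUs and one
# screen; a CPU consumes ALUs and registers; leaves are raw materials.
bom = {
    "phone":    {"cpu": 2, "screen": 1},
    "cpu":      {"alu": 4, "register": 8},
    "screen":   {},   # raw material purchased from suppliers
    "alu":      {},   # raw material
    "register": {},   # raw material
}
\end{verbatim}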
Among multiple performance metrics in production planning, we identify four important \textbf{performance indicators} \cite{parmenter2015key} after discussions with planning practitioners and factory managers:
\textbf{Order delay rate.}
The first and most significant objective is to minimize the order delay rate. A low order delay rate can not only increase \modified{revenue} but also improve customers' satisfaction, which can promote \modified{future sales}.
\textbf{Production cost.}
Reducing the production cost is the second objective. A proper production plan can satisfy the demand for most products while avoiding excess inventory.
However, a high production cost does not necessarily indicate that the production plan is bad, since it may be caused by high demand.
\textbf{Inventory cost.}
The third objective is to minimize the inventory cost. A high inventory cost may be caused by excessive production, which will also increase the production cost.
\textbf{Smoothing rate of production capacity use.}
As the last objective, smoothing the weekly use of production capacity aims at keeping machines at a constant working intensity. The smoothing rate is computed to represent the difference in production capacity use between two consecutive weeks. Keeping the smoothing rate low is important, since the domain experts stated that a dramatically changing working intensity would damage the production machines and increase the maintenance cost.
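As an illustration, one natural instantiation of this definition (the exact formula used in production may differ) is
\[
s_w = \frac{\left|C_w - C_{w-1}\right|}{C_{w-1}},
\]
where $C_w$ denotes the production capacity used in week $w$; a small $s_w$ then corresponds to a nearly constant working intensity.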
\subsection{\modified{Task and Requirement Analysis}}
During the past six months, we have worked closely with six production planning experts from our industry collaborator. Four of them work on the production planning algorithms, while two of them are planners who need to check the algorithm outputs and further assign feasible production tasks to each factory.
\modified{At the early stage of the collaboration, we held weekly meetings with the experts. During the meetings, they explained to us the data, the production planning algorithm, and the problems they were faced with in daily planning.
We then applied cause and effect analysis \cite{ishikawa1990introduction} to the problems in production planning. After compiling a list of problems, we explored the potential causes based on the experts' experience and the literature review, which revealed the key information that the experts needed for decision making in production planning.}
\modified{We identified two categories of analytical tasks after detailed discussions with the experts.
The first type of tasks focuses on the exploration and optimization of production plans. For example, \textit{What's the difference between two plans? Which products perform poorly in terms of the performance indicators? What are the key factors that restrict production?
What kind of change in the configuration data can improve the production plan?}
The second type of tasks is related to the fast response to unanticipated changes in the market and the plant. For instance, \textit{What incidents may have an adverse influence on the production plan? What is the influence of such incidents? What strategies can reduce the adverse influence?}}
\modified{When working on this paper,} we first developed an early prototype and then improved the designs iteratively according to the feedback from the experts. At each iteration, we held weekly meetings to introduce the visual designs and collected comments from them. Finally, we formulated eight design requirements which can be grouped into four levels, and developed the current version of \modified{the} visualization system \blue{(Fig. \ref{fig:teaser})}.
The \textbf{overall-plan-level} requirement focuses on providing an overview of all the production plans.
\begin{enumerate}[label=\textbf{R{\arabic*}},nolistsep]
\item
\textbf{Visualize the optimization process of production planning.}
The visual design should present the summarized algorithm results of production plans and the difference between two plans. The recorded optimization history can not only help users verify the effect of their manipulation but also provide an overview for planners to choose the best strategy.
\end{enumerate}
The \textbf{product-level} requirements focus on displaying the statistics of individual products.
\begin{enumerate}[label=\textbf{R{\arabic*}},resume,nolistsep]
\item
\textbf{Show the distribution of all the products.}
Presenting the distribution of performance indicators for different products can reveal clusters and anomalies of products, which provides guidance for further exploration.
\item
\textbf{Support filtering and selecting products of interest.}
The visualization system should support interactions to filter and select products with specific performance indicator values and present detailed information about the selected products.
\end{enumerate}
The \textbf{detail-production-level} requirements relate to \modified{the} detailed description of the production and the relationship between supply and demand.
\begin{enumerate}[label=\textbf{R{\arabic*}},resume,nolistsep]
\item
\textbf{Visualize the dependency among products.}
Due to the dependency among products in the BOM tree, the production of a parent product may be limited by the lack of child components. Showing the dependency relationship among products, along with their temporal supply/demand distribution allows problem diagnosis and production planning optimization.
\item
\textbf{Present the production detail of a product in different factories.}
The system should display the daily production output and production capacity use of each factory. The analysis of detailed production can disclose the workload of each plant and the reason for insufficient production.
\end{enumerate}
The \textbf{comparison-level} requirements aim at comparative analysis and optimization of production plans.
\begin{enumerate}[label=\textbf{R{\arabic*}},resume,nolistsep]
\item
\textbf{Enable interactive optimization of the production plan.}
The domain experts are eager for the support of visual interactions to improve production planning. To this end, our design should combine the automated algorithm and domain knowledge. The visual encoding should provide guidance for the manipulation and rapid feedback is needed to verify the effect.
\item
\textbf{Support a fast response to unanticipated incidents.}
A sudden change in the market and the plant may have an adverse influence on production planning. Revealing the influence and supporting a quick adjustment are critical to developing an efficient production plan.
\item
\textbf{Support comparative analysis of two production planning strategies.}
The comparison of production plans is important in the checking of the effect of optimization and showing the impact of unanticipated changes in the market and the plant. The visualization system should provide \modified{a} detail-on-demand comparison of planning strategies for decision making.
\end{enumerate}
\section{\modifiedSecond{System Architecture and Implementation}}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure2_system_overview.png}
\caption{
System architecture. {{\textit{PlanningVis}}} consists of three major modules: (a) data storage, (b) data processing and (c) visualization. The data storage module collects and stores the configuration data and the result of a hybrid production planning algorithm. The data processing module pre-processes the anomalous data, computes the performance indicators and evaluates the difference between two production plans. The visualization module supports interactive exploration and comparison of production plans through three well-coordinated views.
}
\label{fig:system_overview}
\end{figure}
\modifiedSecond{Fig.~\ref{fig:system_overview} illustrates the architecture of the {{\textit{PlanningVis}}} system, which consists of three modules: data storage, data processing, and visualization.}
The data storage module collects the configuration data and the result of a hybrid production planning algorithm (Section 4.1), and \modified{stores them on a server with 64 Intel Xeon CPU processors (E7-4820, 2.0 GHz) and 256 GB memory.}
The data processing module mainly pre-processes anomalous data, computes the performance indicators, and calculates the difference between two production plans based on user requirements. \modified{It is deployed on another server with 64 Intel Xeon CPU processors (E7-4820, 2.0 GHz) and 512 GB memory.}
The visualization module combines three well-coordinated views to support \modified{the} exploration and comparison of production plans in multiple levels of details, \modified{which is displayed on a 23.8 inch monitor with a resolution of 1920 x 1080.}
\modified{The back-end of the {{\textit{PlanningVis}}} system is supported by Flask\footnote{http://flask.pocoo.org/}, where MongoDB\footnote{https://www.mongodb.com/} is used for data storage and Pandas\footnote{https://pandas.pydata.org/} is employed for data processing. The front-end of the {{\textit{PlanningVis}}} system is implemented by Vue.js\footnote{https://vuejs.org/} and D3.js \cite{bostock2011d3}.}
When using {{\textit{PlanningVis}}}, users can first configure the input data of the model based on their needs (e.g., coping with unexpected manufacturing events) in the control panel \blue{(Fig. \ref{fig:teaser}a)} and run the production planning algorithm. Then, the results of the production planning algorithm will be further visualized with three levels of details \blue{(Figs. \ref{fig:teaser}b, \ref{fig:teaser}c and \ref{fig:teaser}d)} to facilitate \modified{the} quick exploration and comparison of different production plans.
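As an illustration of how the modules interact, the following minimal Flask sketch shows a back-end endpoint that accepts an edited configuration and returns a new plan; the route name, payload format, and the \texttt{run\_planning} stub are assumptions for illustration and not the actual {{\textit{PlanningVis}}} API.
\begin{verbatim}
# Hypothetical minimal back-end endpoint; run_planning() stands in for
# the hybrid production planning algorithm of Section 4.1.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_planning(config):
    # Placeholder: invoke the hybrid planning algorithm on the edited
    # configuration and return summarized performance indicators.
    return {"plan_id": 1, "indicators": {"order_delay_rate": 0.0}}

@app.route("/api/plan", methods=["POST"])
def create_plan():
    config = request.get_json()  # edited configuration from the panel
    return jsonify(run_planning(config))

if __name__ == "__main__":
    app.run()
\end{verbatim}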
\section{Data Modeling}
This section first introduces the hybrid production planning algorithm. Then, the output of the algorithm is processed for feature extraction and comparative analysis.
\subsection{The Hybrid Production Planning Algorithm}
The production planning problem is inherently a multi-objective optimization problem.
\modified{
Specifically, we are focusing on minimizing the four performance indicators suggested by the domain experts: the order delay rate, the production cost, the inventory cost and the weekly smoothing rate of production capacity use.
}
\modified{Then, we employ the weighted sum model (WSM) \cite{triantaphyllou2000multi} to define the multi-objective production planning problem. The WSM can be solved by integer programming.}
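In its generic form (the exact weights and constraint set are omitted here for brevity), the WSM collapses the four objectives into one scalar objective,
\[
\min_{x}\; w_1 f_{\rm delay}(x) + w_2 f_{\rm prod}(x) + w_3 f_{\rm inv}(x) + w_4 f_{\rm smooth}(x),
\]
where $x$ collects the integer production decisions, the $f_k$ correspond to the four performance indicators introduced above, and the weights $w_k$ reflect their relative importance.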
However, directly employing integer programming to solve such a large-scale production planning problem is time-consuming. Therefore, in this paper, we adopt a hybrid production planning algorithm which combines linear programming \cite{shapiro1993mathematical} and heuristic algorithms \cite{li2007heuristic}. It is adapted from an approach proposed by Sahling et al.~\cite{sahling2009solving}.
\subsection{Data Processing}
After getting the production planning data generated by the automatic algorithm, further data processing will be conducted to extract features and analyze the difference between two plans.
\modified{
We first compute the daily performance indicators of each product, including the order delay rate, the production cost, the inventory cost, and the smoothing rate of production capacity use.
These performance indicators are used to evaluate the production plan and identify products with production problems.}
Also, we calculate summarized statistics, such as the mean and the variance of each performance indicator for a product over the 30 days.
The next step is to preprocess two types of anomalous data we identified: missing data and infinite values.
The data of some products are missing because there is no demand for these products, they do not consume any capacity set, or the products are not involved in the production process at all. We assign special negative values to these products so that they can be recognized and displayed differently in the visual design.
There also exist some infinite values in the smoothing rate of production capacity use, owing to the addition of new capacity sets or the signing of contracts with new factories. After the discussion with domain experts, we decided to replace these infinite values with a suitable number, one neither so large as to distort the summarized performance indicators nor so small as to go unnoticed in the visualization system.
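A hypothetical Pandas sketch of this preprocessing step is given below; the column names, the sentinel value, and the replacement constant are illustrative assumptions rather than the values used in the deployed system.
\begin{verbatim}
# Hypothetical preprocessing of the per-(product, day) indicator table.
import numpy as np
import pandas as pd

MISSING_SENTINEL = -1.0  # assumed marker for missing products
SMOOTH_CAP = 10.0        # assumed stand-in for infinite smoothing rates

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    cols = ["delay_rate", "prod_cost", "inv_cost", "smooth_rate"]
    # Flag missing products so the front end can render them specially.
    out[cols] = out[cols].fillna(MISSING_SENTINEL)
    # Clamp infinite smoothing rates (new capacity sets or factories)
    # to a value that is noticeable but does not distort summaries.
    out["smooth_rate"] = out["smooth_rate"].replace(np.inf, SMOOTH_CAP)
    return out
\end{verbatim}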
The last step is to reveal the difference between two production plans, with the aim of supporting the optimization of production planning, and showing the impact of unanticipated changes in the market or the plant.
We summarize three levels of differences: the plan level, the product level, and the production detail level.
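For the product level, for example, the difference can be computed by aligning the two indicator tables on the product key, as in the following hypothetical sketch (the schema and column names are again illustrative):
\begin{verbatim}
# Hypothetical product-level difference between two plans.
import pandas as pd

def product_level_diff(plan_a, plan_b):
    cols = ["delay_rate", "prod_cost", "inv_cost", "smooth_rate"]
    a = plan_a.groupby("product")[cols].mean()
    b = plan_b.groupby("product")[cols].mean()
    # Positive entries mean the indicator increased in the newer plan.
    return (b - a).add_suffix("_diff")
\end{verbatim}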
\section{Visual Design}
The {{\textit{PlanningVis}}} system is developed to support multi-level exploration and comparison of different production plans, which consists of four parts: a control panel \blue{(Fig. \ref{fig:teaser}a)} to present the configuration data and promote interactively changing the configuration, a plan overview \blue{(Fig. \ref{fig:teaser}b)} to exhibit the optimization history and compare \modified{the} summary information of different plans, a product view \blue{(Fig. \ref{fig:teaser}c)} to reveal the distribution of products and explore individual products of interest, and a production detail view \blue{(Fig. \ref{fig:teaser}d)} to display the BOM tree of products and the production detail of the selected product in related plants.
It combines automatic algorithms and domain knowledge to enable two kinds of what-if analyses: the optimization of production planning, and the simulation of unanticipated changes to facilitate fast re-planning.
The proposed visualization system targets at eight requirements which can be grouped into four levels (Section 2.2).
In this section, we first describe each component of the {{\textit{PlanningVis}}} system as well as the design alternatives. Then, we illustrate the user interactions provided to enable the exploration and analysis of production planning.
\subsection{Control Panel}
The control panel \blue{(Fig. \ref{fig:teaser}a)} aims at supporting visual manipulation on the configuration data of production planning to generate new plans. It displays two types of configuration data: the daily demand of each product, and the available resources in each factory, including the initial inventory of raw materials, the capacity sets and the holiday arrangement.
Based on the interactions provided by the control panel, users can leverage their knowledge to improve a production plan (\textbf{R6}) and simulate an unanticipated incident so that they can take measures in time to reduce the influence (\textbf{R7}).
\textbf{The visual encoding of the configuration data.}
The initial order demand and the resource configuration of the selected plant will be shown when users specify the start and end dates of production planning.
The order demand for a product is displayed by a line chart overlaid with an area chart, where the horizontal axis represents the time and the vertical axis encodes the value. Additionally, the small circles on the line chart can be dragged to change the value.
For the production resources in a plant, we encode the initial inventory of the raw material by a draggable bar chart and illustrate the capacity sets in a similar manner to the order demand for products.
Furthermore, the holiday schedule is represented by small triangles, where blue filled triangles indicate holidays. Users can click an unfilled triangle to arrange a holiday or click a blue filled one to cancel the holiday.
After modifying the configuration data, users can click the ``Run'' button to invoke the production planning algorithm, and the returned result will be added to the plan overview.
\subsection{Plan Overview}
The plan overview utilizes timeline-based glyphs to present the summarized information of various production plans and their differences. The view can reveal the macroscopic impact of configuration changes, including improving the plan and simulating unanticipated incidents in the market or the plant (\textbf{R6}, \textbf{R7}). In addition, it displays the recorded planning history which enables users to progressively optimize the plan (\textbf{R1}). The visual design is composed of plan glyphs and the links between them.
For visual comparison \cite{gleicher2011visual}, we adopt juxtaposition between plan glyphs and explicit encoding in links (\textbf{R8}).
\textbf{Plan glyph.}
The plan glyph encodes summarized algorithm configuration data and performance indicators, as illustrated in \blue{Fig. \ref{3a}}.
\begin{figure}[tb]
\centering
\subfloat[Plan glyph\label{3a}]{%
\includegraphics[width=0.32\columnwidth]{pictures/figure3_plan_glyph_revised.png}}
\hfill
\subfloat[Star plot\label{3b}]{%
\includegraphics[width=0.32\columnwidth]{pictures/figure3_alternative1_revised.png}}
\hfill
\subfloat[Treemap\label{3c}]{%
\includegraphics[width=0.32\columnwidth]{pictures/figure3_alternative2_revised.png}}
\caption{
The glyph design to represent a plan in the plan overview. (a) The glyph design employed in {{\textit{PlanningVis}}}. (b) A star-plot based glyph design. (c) A treemap-based glyph design.
}
\label{fig:plan_glyph}
\end{figure}
The upper part is a bar chart that shows the four key performance indicators suggested by the domain experts, including order delay rate (red), the production cost (blue), the inventory cost (green), and the smoothing rate of production capacity use (purple), where the color scheme is consistent with other views. The \modified{light} gray background in the bar chart indicates the maximum value among all the plans.
\modified{The lower part uses light orange circles to represent four kinds of configuration data.}
These circles, from left to right, indicate the order demand, the initial inventory of raw materials, the available production capacity, and the number of holidays, respectively. The radius of each circle encodes the corresponding value; note that the four quantities lie in disparate value ranges.
In practice, users may assume the production capacity is infinite so that they can focus on the analysis of other production constraints. \modified{For this special case, we visualize the infinite production capacity as a brown circle (the last plan glyph in \blue{Fig. \ref{fig:teaser}b})}.
The upper part and the lower part are separated by a horizontal line to avoid the misunderstanding that a relationship between the vertically aligned visual primitives exists.
\textbf{Plan glyph design alternatives.}
Two alternatives of the plan glyph are considered during the iterative design process: the star plot and the treemap.
As shown in \blue{Fig. \ref{3b}}, the star plot employs four spokes to display different variables, where the inner orange part encodes the configuration data, while the outer part encodes the performance indicators.
However, a deviation in interpreting the shape of the star may occur, which has been studied in previous work \cite{klippel2009star}, thus making it difficult to compare plans with small differences. In addition, users may mistakenly think that the configuration variable and the performance indicator in the same spoke have a close relationship.
The treemap design is depicted in \blue{Fig. \ref{3c}}, where the bottom orange part exhibits the normalized configuration data and the top part exhibits the normalized performance indicators.
However, it is not convenient to compare different variables in two production plans in the treemap.
\textbf{The visual encoding of the links between plan glyphs.}
The links, as illustrated in \blue{Fig. \ref{fig:teaser}b}, connect two plans in the optimization history to display the difference between them.
\modified{The triangles on the upper part of the link represent the changes of the four types of configuration data, which follow the same order as that of the plan glyph.}
The size of the triangle encodes the value while the orientation of the triangle describes the increase (up) or the decrease (down) of the value. A triangle with a dashed border means that there is no change \modified{in} this type of configuration data.
We use four horizontal lines on the lower part to show the change in the performance indicators. The width of the lines \modified{encodes} the value and the order is the same as that in the plan glyph. The same color scheme is used to indicate \modified{a} decrease while a gray line means \modified{an} increase.
We enable users to hover over the visual cues to view the detailed information and delete a plan. Users can also click the plan glyph to choose the last plan and the current plan for exploration and comparison. An additional link will be shown at the bottom when the selected plans are not consecutive (\blue{Fig. \ref{fig:teaser}b}). After selecting the plans, the product view will give a detailed description of the products.
\subsection{Product View}
The product view contains two components: a segmented parallel coordinates plot to reveal potential clusters and anomalies of products, and the product glyph to give a detailed description of filtered products.
The view can display the distribution of all the products (\textbf{R2}) and support filtering and selecting for further exploration (\textbf{R3}). It provides mesoscopic information on the product level for comparative analysis of two plans (\textbf{R8}), and gives support to the improvement and simulation of production planning (\textbf{R6}, \textbf{R7}).
\textbf{The segmented parallel coordinates plot.}
Since we use a special negative value to represent the product with no demand (Section 4.2), traditional parallel coordinates plots will create a large gap between the normal values and the abnormal ones, and thus compress normal values into a small area. To resolve this issue, we design a segmented parallel coordinates plot.
As illustrated in \blue{Fig. \ref{fig:teaser}c$_1$}, we extend the axis in \modified{the} traditional parallel coordinates plot to two rectangles. The four pairs of rectangles represent the four performance indicators, respectively. Within each pair of rectangles, the upper part displays normal values while the lower part displays abnormal values. Each line in the plot represents a product. When the line passes through a rectangle, a red-and-blue color encoding of this line segment refers to the difference between the last plan and the current plan. The red color indicates an increase and blue indicates a decrease. A gray triangle on the right side of the bar shows that the performance indicator of the current product has an abnormal value in the last plan. Additionally, a brush on the rectangle will highlight the selected products and their product glyphs will be displayed for further exploration.
On the left of the parallel coordinates plot, we also provide sliders which show the ranges of differences for users to filter products. The user can also search for products of interest.
\textbf{Design alternatives.}
Two candidate designs are discussed to display the distribution and clustering of products.
The first design is a scatter plot with the layout generated by the multidimensional scaling (MDS) \cite{kruskal1964multidimensional} technique. However, the domain experts reported that they found it difficult to understand why certain products form a cluster, and that they could not identify the distribution of products on the four performance indicators.
Another alternative design is the traditional parallel coordinates plot. We extend it to handle abnormal products and display the difference between plans as discussed above.
\textbf{Product glyph.}
In \blue{Fig. \ref{fig:product_glyph}a}, the product glyph depicts the selected products in detail. It has a circular shape and is tangentially partitioned into four regions to present different performance indicators.
The radius of the innermost sector encodes the value of the performance indicators, and the black line on it points out the average value of all products.
The angle of the arc in the middle shows the variance of the daily performance indicator during the 30-day production planning period.
We also present the differences in the performance indicators between the last plan and the current plan in the outer arc. The arc starts from the center of each region, and the angle shows the difference. An arc \modified{extending} clockwise with the same color scheme to the performance indicator expresses an increase, while one \modified{extending} counterclockwise with only a black and bold border expresses a decrease.
A special case is the data with predefined negative values (Section 4.2), which are shown in gray in all the visual elements. Additionally, we use a gray triangle at the outermost part of the glyph to indicate that the corresponding performance indicator of the last plan is a special negative number.
The user can hover over the visual cues to see the numerical values and click on the product glyph to explore the production details of this product.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure4_product_glyph_revised.png}
\caption{
(a) The glyph design to describe various properties of a product. (b, c) Two alternative solutions to the product glyph.
}
\label{fig:product_glyph}
\end{figure}
\textbf{Product glyph design alternatives.}
\modified{Three other alternative designs were also considered before we finalized the current glyph design.}
\blue{Fig. \ref{fig:product_glyph}b} shows one choice which utilizes \modified{a} traditional pie chart to compare different performance indicators of one product. However, it is not easy to compare the same performance indicator between products because the corresponding sectors may have diverse start angles and end angles.
Another choice is shown in \blue{Fig. \ref{fig:product_glyph}c}. It uses the length of the line along the radius of the sector to encode the variance. However, since the line and the sector have different scales, overlaying them may confuse the user.
\modified{The box plot is also considered here. Although a box plot can show the distribution of one performance indicator over 30 days, it is not space efficient to show four box plots to represent the performance indicators of one product. }
\subsection{Production Detail View}
The production detail view can be divided into the left and right parts. The left side is a dependency tree visualization, which reveals the dependency between products (\textbf{R4}). The right side \modified{consists of} extended bar charts, which visualize the daily production information of the selected product in related factories (\textbf{R5}).
The visual design discloses the microscopic differences between two production planning strategies (\textbf{R8}). It can further help guide the improvement of the production plan (\textbf{R6}) and reveal the impact of an unanticipated incident (\textbf{R7}).
\textbf{Dependency tree.}
As illustrated in \blue{Fig. \ref{fig:teaser}d$_1$}, the dependency tree describes production dependency from the parent of the selected product to raw materials. The tree layout is generated by a depth-first search (DFS) algorithm \cite{tarjan1972depth} starting \modified{from} the node of the selected product, where the vertical position of the product is ranked by the access order and the horizontal position is arranged by the depth of the node in the tree. The parent node of the selected product is placed at the top of the tree. The node with children can be folded and unfolded upon clicking, thus helping the user explore a large BOM tree.
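A minimal sketch of this layout pass is shown below; the node representation (a dictionary with a \texttt{children} list) and the unit spacing are assumptions made for illustration.
\begin{verbatim}
# Minimal DFS layout: x follows the depth in the BOM tree,
# y follows the DFS access order, as described above.
def dfs_layout(node, depth=0, rank=None, positions=None):
    if rank is None:
        rank, positions = [0], {}
    positions[node["id"]] = (depth, rank[0])
    rank[0] += 1
    for child in node.get("children", []):
        dfs_layout(child, depth + 1, rank, positions)
    return positions

# Usage: positions = dfs_layout(subtree_of_selected_product)
\end{verbatim}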
Each unfolded node is accompanied by a heatmap vertically aligned at the right side of the node (\blue{Fig. \ref{fig:teaser}d$_2$}). The heatmap contains two rows to reveal the relationship between supply and demand. The upper row of the heatmap encodes the daily remaining inventory, while the lower row encodes the daily delayed order. The value is represented as the color saturation in the two rows. All the heatmaps are horizontally aligned. In this way, by viewing the tree with heatmaps, users can identify the impact caused by the shortage of the child product to the parent one.
When a user clicks on the heatmap, the daily performance indicators and the production in related plants of the clicked product will be presented.
\textbf{The visual encoding of daily production information.}
\blue{Fig. \ref{fig:teaser}d$_3$} displays the daily performance indicators, namely, the order delay rate, the production cost, the inventory cost, and the smoothing rate of production capacity use, from top to bottom. We encode the value of the current plan with the height of the bar, which follows the same color scheme used before. The black rectangular border encodes the difference between the current plan and the last plan: the value of the current plan is larger when the border lies within the colored bar, and smaller when the border lies outside the bar. Bars are changed to gray if the value is a special negative number we set before, which means the raw data are missing (Section 4.2). Note that possibly only some of \modified{the} bars representing the order delay rate are gray, since the demand is zero on those days. There are only four bars for the smoothing rate of production capacity use because it is computed once a week. A dashed line is shown to point out where the value is zero.
In \blue{Fig. \ref{fig:teaser}d$_4$}, a line chart and a bar chart are utilized to show the daily production in a plant. The clicked product in the dependency tree and the related plants are connected by curves, whose width and color saturation represent the total production output in that plant.
In each plant, the downward bars indicate the daily production output of the product and the upward bars indicate the use of the corresponding capacity set. The visual design is similar to that of the performance indicators. Here, the upward bars may be changed to gray, which implies that the production capacity for this product is infinite.
Above the bar chart, a black line encodes the capacity utilization rate of the last plan and a blue one encodes that of the current plan.
\subsection{User Interactions}
{{\textit{PlanningVis}}} provides various interaction methods to support the exploration and comparison of production plans, the daily optimization of planning, and the quick response to unanticipated changes in raw material supply, the production process and market demand.
\textbf{Navigating through different levels of details.}
We provide three levels of details for users to explore production planning data: the plan level, the product level, and the production level.
The user can first get the summarized information of each plan from the plan overview.
The user can then select any pair of plans to browse the product-level differences and explore various properties of individual products in the product view. The detailed production dependency and the temporal statistics are displayed in the production detail view when the user clicks one product.
\textbf{Modifying configuration data.}
In the control panel, interactive manipulation is supported to change the value of the configuration data. For order demand and capacity sets, the value can be modified by dragging the dot on the line. For raw materials, users can drag the bar to specify a new value. Besides, a click on the triangle can change the arrangement of the holiday.
\textbf{Filtering, brushing, linking and highlighting.}
In the product view, users can filter the change of performance indicators between two plans to focus on products of interest. After that, the products selected by brushing on the segmented parallel coordinates plot will be highlighted, which will also be displayed in individual glyphs for further exploration. Meanwhile, the capacity sets in the control panel will also be filtered accordingly.
In addition, we provide search boxes in the control panel and the product view to help users look up specific orders, raw materials, capacity sets, and products.
An informative tooltip will also be displayed when users hover over a visual element in {{\textit{PlanningVis}}}.
\section{Case Studies}
\modified{We conducted two case studies to demonstrate the effectiveness of {{\textit{PlanningVis}}}. The users involved in our case studies are domain experts from our industry collaborator, as will be introduced in Section~\ref{sec_expert_interview}. We used a sampled dataset from the real production planning of telecommunication equipment. It contains the 30-day planning data of 1038 products or assembly items.
This sampled dataset is used in both the case studies and the subsequent expert interview (Section~\ref{sec_expert_interview}).}
\subsection{The Optimization of Daily Production Planning}
\modified{In the first case study, the users were asked to identify potential problems in a production plan and further optimize it (\textbf{R6}). During this exploration process, their actions, findings and comments were recorded.}
The users started with an initial production plan (\blue{Fig.~\ref{fig:teaser}b$_1$}) and explored different levels of details based on the visualization system (\textbf{R1}). When they brushed the products with high order delay rates in the parallel coordinates plot, the glyphs of the selected products were displayed on the right and the control panel listed the capacity sets related to the production of the selected products (\textbf{R3}).
The users then browsed through the product glyphs and clicked the glyphs to view the BOM tree and daily production in related factories (\textbf{R5}). They found that the line charts were close to the top, which indicates the maximum production capacity, meaning that production capacity use was saturated. This visual cue hints that the production bottleneck is due to the lack of production capacity. They then hovered over the plant and found the saturated production capacity was mainly \textit{Plant\_1\_Capacity\_Set\_11} and \textit{Plant\_1\_Capacity\_Set\_3}. Therefore, the users tried to improve the plan by increasing these capacity sets.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure5_case1_revised.png}
\caption{
The product glyphs (a) and daily production (b) of \textit{Service\_Router\_20} and \textit{Gateway\_60} after increasing the capacity sets by 50\% for 30 days. The order delay rate decreases because the production is increased or finished before the delivery date of the order.
}
\label{fig:case1_after_increasing_capacity_set}
\end{figure}
After experimenting with different increasing amounts, the users decided to increase these two capacity sets by 50\% for 30 days and generated a new plan (\blue{Fig.~\ref{fig:teaser}b$_2$}). To see the difference between the two plans (\textbf{R8}), the users clicked the two glyphs in the overview, and then filtered the parallel coordinates plot to browse the products with decreasing order delay rates.
After exploring the glyphs of the selected products, they narrowed down to two examples with decreasing order delay rates and increasing smoothing rates, which are shown in \blue{Fig.~\ref{fig:case1_after_increasing_capacity_set}a}. They then clicked the product glyphs to view the production details (\blue{Fig.~\ref{fig:case1_after_increasing_capacity_set}b}). The order delay rate \modified{dropped} because the production \modified{was} increased (\textit{Service\_Router\_20}) and completed before the delivery date (\textit{Gateway\_60}).
Although the production capacity is sufficient in the improved production plan, there are still some products with high order delay rates (\blue{Fig.~\ref{fig:teaser}c$_1$}). The users brushed these products and the glyphs (\blue{Fig.~\ref{fig:teaser}c$_2$}) showed that most of the products had a low production cost, a low inventory cost, and a decreasing smoothing rate (\textbf{R2}).
By clicking the product glyph to further explore the production details in \blue{Fig.~\ref{fig:teaser}d} (\textbf{R4}, \textbf{R5}), the users identified three products, namely, \textit{Routers\_22}, \textit{Routers\_491} and \textit{Service\_Router\_18}, whose production relied on the raw material \textit{common\_32}. They found the capacity set was adequate after Sept. 27 (\blue{Fig.~\ref{fig:teaser}d$_4$}), but the orders were delayed due to the lack of \textit{common\_32} (\blue{Fig.~\ref{fig:teaser}d$_2$}).
To solve the problem, the users tried to increase the initial inventory of \textit{common\_32}. After several trials, they decided to increase the inventory from 1000 to 8000 since this result \modified{led} to the lowest delay rate.
The result is illustrated in \blue{Fig. \ref{fig:case1_after_increasing_inventory}}, where the order delay of \textit{Routers\_22} and \textit{Service\_Router\_18} has been greatly relieved.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure6_case1_revised.png}
\caption{
The production dependency and daily production of \textit{Routers\_22}, \textit{Routers\_491} and \textit{Service\_Router\_18} after increasing the initial inventory of the raw material \textit{common\_32} from 1000 to 8000. The order delay rate of \textit{Routers\_22} and \textit{Service\_Router\_18} decreases while that of \textit{Routers\_491} is still high.
}
\label{fig:case1_after_increasing_inventory}
\end{figure}
However, the users noticed that \textit{Routers\_491} still kept a high order delay rate, even though the stock of the raw material \textit{common\_32} was enough (\blue{Fig.~\ref{fig:case1_after_increasing_inventory}a}). Then, the users clicked the heatmap to further browse the daily production of \textit{Routers\_491} (\blue{Fig.~\ref{fig:case1_after_removing_constraint}a}). They observed the production of \textit{Routers\_491} was increased but not used to serve its own order demand.
One of the users explained that it was caused by a special production constraint called the fixed component requirement, which is generated based on the importance of products and previous experience in coping with raw material shortage.
In this case, \textit{common\_32} can only be used to produce \textit{Service\_Router\_18} and \textit{Routers\_22}. After they removed the constraint in the planning algorithm, the order delay rate of \textit{Routers\_491} decreased, as shown in \blue{Fig.~\ref{fig:case1_after_removing_constraint}b}.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure7_case1_revised.png}
\caption{
(a) The order demand of \textit{Routers\_491} is delayed because of the requirement that \textit{common\_32} can only be used to serve the order of \textit{Routers\_22} and \textit{Service\_Router\_18}.
(b) After removing the restriction, the production of \textit{Routers\_491} is increased and the order delay rate drops.
}
\label{fig:case1_after_removing_constraint}
\end{figure}
In summary, the users showed great interest in {{\textit{PlanningVis}}} and highly appreciated its capability to gain deep insights into production planning and to fix potential flaws in a production plan.
\subsection{The Quick Response to Unanticipated Incidents in Manufacturing}
\modified{In the second case study, the users were asked to take measures to reduce the adverse influence caused by unanticipated changes in the market or the plant (\textbf{R7}).}
When the users applied {{\textit{PlanningVis}}} to production planning from Sept. 12 to Oct. 11 in 2018, they found that typhoon Mangkhut \modified{had attacked} the city where \textit{plant\_1} is located on Sept. 16 and Sept. 17. \modified{With the advent of the typhoon warning}, they decided to shut down \textit{plant\_1} on the two days. The users simulated this change by adding holidays in the control panel, with the updated plan shown in \blue{Fig.~\ref{fig:case2}a$_2$} (\textbf{R1}). The users then filtered the parallel coordinates plot (\textbf{R3}) and found there were many products with increasing order delay rates (\blue{Fig.~\ref{fig:case2}b}).
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{pictures/figure8_case2_revised.png}
\caption{
(a) The overview of plans generated during the rapid response to typhoon Mangkhut.
(b, c) \textit{Plant\_1} is shut down for two days due to the typhoon Mangkhut. The order delay rate of many products is increased since the production is put off.
(d, e) After increasing \textit{Plant\_1\_Capacity\_Set\_11} and \textit{Plant\_1\_Capacity\_Set\_3} by 50\% for 30 days, the order delay rate of many products decreases, while the production cost and the smoothing rate of production capacity use increase. This is because the production is increased or advanced.
}
\label{fig:case2}
\end{figure}
They then brushed the products with increasing order delay rates and decreasing smoothing rates (\textbf{R2}). The control panel showed that the production of the selected products would consume \textit{Plant\_1\_Capacity\_Set\_11} and \textit{Plant\_1\_Capacity\_Set\_3}. The users then clicked the product glyphs and the heatmap in the BOM tree to browse the daily production in related factories (\blue{Fig.~\ref{fig:case2}c}).
They observed that the production was delayed due to the shortage of the production capacity.
To handle the adverse influence of the typhoon, the users decided to increase the two capacity sets. They experimented with different increasing values, and finally decided to increase both of the capacity sets by 50\% for 30 days (\blue{Fig.~\ref{fig:case2}a$_3$}), which resulted from the trade-off between the increased production cost and the decreased order delay rate (\blue{Fig.~\ref{fig:case2}d}).
The users then clicked the product glyphs \modified{again} to view the daily production. As illustrated in \blue{Fig.~\ref{fig:case2}e}, the order delay rate drops since there is enough production capacity to make sure the production can be finished before the delivery date of the order (\textbf{R5}).
However, there still exist some order demands that cannot be delivered on time, because their delivery dates are near the end of the extra holidays, and thus the time left for production is insufficient.
The users then clicked the initial plan (\blue{Fig.~\ref{fig:case2}a$_1$}) and the plan with increased capacity sets (\blue{Fig.~\ref{fig:case2}a$_3$}) to view their differences (\textbf{R8}). Although the adjustment can relieve the increase of the order delay rate, it comes at the cost of increased smoothing rates.
Besides the sudden change in manufacturing, our approach also supports the fast response to unanticipated incidents in the market demand and the supply of raw materials, such as urgent orders and returns of raw materials.
\section{Expert Interviews}
\label{sec_expert_interview}
We also conducted one-on-one interviews with six domain experts from our industry collaborator to evaluate the effectiveness of {{\textit{PlanningVis}}}.
\modified{
The experts are different from those involved in the design of the system and had never used our system before the interview. Four of them ($E_1$-$E_4$) are experts in production planning algorithms and the other two ($P_1$-$P_2$) are production planning practitioners who are responsible for manually checking the algorithm results and further assigning production tasks to different factories. $P_1$ and $P_2$ are also the target users of the visualization system.
}
The interviews started with a brief introduction to the visual designs, interactions and workflow of our visualization system.
Also, we showed a usage scenario to the experts to teach them how to operate our system. \modified{The learning process lasted about 25 minutes.}
Then, we asked the experts to first freely explore our system by themselves and further finish certain tasks, for example, identifying potential problems in a plan and improving it, or re-arranging the production plan in response to a sudden change in the market or production capability.
During this stage, we asked the participants to think aloud
and they could ask questions if they encountered any problems.
Their comments at this stage were also recorded.
After that, we further asked the experts to comment on the design, the effectiveness and the usage of {{\textit{PlanningVis}}}.
Each interview took around 60 minutes.
Their feedback and suggestions are summarized as follows:
\textbf{Methodology.}
The experts appreciated the method which enables the combination of automatic algorithms and users' domain knowledge for production planning. They reported that the algorithm was often not able to model all the factors for real production planning and they usually needed to manually improve the algorithm results.
$P_1$ commented ``{{\textit{PlanningVis}}} involves both the algorithm and our experience for production planning, which
can significantly improve
the production planning efficiency''. The experts also confirmed that the two types of what-if analysis are common and crucial to their daily work. $E_2$ mentioned ``The system makes it easy to identify the adverse influence of the uncertainty in the market and manufacturing''.
$E_1$ said that {{\textit{PlanningVis}}} provided guidance for the optimization of production planning and the quick response to unanticipated changes.
\textbf{Effectiveness.}
The majority of the experts stated that {{\textit{PlanningVis}}} was beneficial for exploring large-scale production planning data. For instance,
$P_2$ said that {{\textit{PlanningVis}}} could greatly reduce his workload as he could quickly check production plans with the help of the plan overview for \modified{a} quick comparison of plans, the product view for fast investigation of product states, and the production detail view for exploring the product dependency and planning details.
In their \modified{current} daily work, they usually need to check details in different tables.
$E_4$ reported that the quick adjustment of production planning in \modified{the} case of unanticipated changes could reduce much loss in real production planning.
$E_3$ also pointed out that our system could be very helpful for localizing the cause of problems in production plans.
Despite all the positive feedback, the experts and practitioners also mentioned a few limitations and gave us good suggestions on {{\textit{PlanningVis}}}.
$E_1$ suggested that the system could further reveal the competition for raw materials and production capacity between products.
$P_1$ commented that different planners were responsible for different products and it would be better if the system could help filter products for them.
\textbf{Usability.}
The experts agreed that the visual designs and the interactions in {{\textit{PlanningVis}}} are well-designed and can facilitate easy exploration of production planning data.
$P_2$ commented that the recorded optimization history in the plan overview could reduce their mental work and provided an easy way to compare plans.
After exploring the product view, $E_3$ thought the segmented parallel coordinates plot was efficient in presenting the multiple performance indicators of hundreds of products and the differences between two plans. The experts enjoyed the way of narrowing down the number of \modified{products} for further exploration by filtering, searching and brushing.
$E_1$ appreciated the design of the skewed tree in the production detail view. He commented ``the alignment of the ending inventory and the delayed order is useful for root reason analysis'' and ``it can be used to illustrate the relationship between factories and assembly lines''. Besides, the experts also stated the production detail view was easy to understand since it adopted visual elements they were familiar with, such as the bar chart, the line chart, and the tree layout.
Although the visual design is informative, the experts commented that the learning curve of {{\textit{PlanningVis}}} was a little steep. \modified{However, once they became familiar with the system, they found it powerful in production planning.}
\section{Discussion}
Our case studies and expert interviews have demonstrated the effectiveness and usability of {{\textit{PlanningVis}}}. However, there are still several aspects that need further discussion.
\modifiedSecond{\textbf{Generalizability.}}
\modified{We mainly evaluated {{\textit{PlanningVis}}} on the planning data of factory production. However, it can be extended to other application domains,
such as the analysis of task assignment of the machines in one plant, vehicle and crew scheduling, and the performance analysis of different scheduling schemes of computer resources (i.e., CPU, memory, threads, and so on).}
\textbf{Algorithm scalability.}
\modified{In} some cases, the current production planning algorithm may take several seconds to generate plans, which is still not fast enough and can affect the usability of the system, especially when there is a significantly large number of products and plants.
This issue can be addressed from the following perspectives.
First, we can make full use of the computation power of multi-core CPU and GPU to accelerate the production planning algorithm.
Second, global optimization is utilized each time we invoke the algorithm, even though the change to the configuration data of production planning is only related to a small number of products or plants. Therefore, we expect a local optimization method which can improve the production of only the related products in the related plants.
Third, we plan to explore other advanced algorithms with better efficiency and further integrate them into {{\textit{PlanningVis}}}.
\textbf{Visual design scalability.}
Currently, the plan overview only supports the comparison of less than ten plans,
since the production planners usually only need to compare several production plans in their daily practice.
\modified{
The product view can only support the exploration of hundreds of products.
When there are more products,
filtering strategies can be used here to reduce the number of products to be explored.}
In addition, the production detail view only supports the comparison of several factories, since the domain experts suggest that a product will only be produced in a small number of factories. Furthermore, displaying the detailed production \modified{of} many factories simultaneously \modified{will be} overwhelming.
\textbf{System extensions.}
There are some promising directions to further extend the capability of the {{\textit{PlanningVis}}} system.
First, our approach currently does not take the transportation time and cost into consideration. However, the transportation between two plants is also useful for fully \modified{utilizing} raw materials and child components in the BOM tree.
Second, as suggested by the domain experts, the visualization system can also be applied to the planning of multiple production lines within a factory.
Third, the algorithm employed in {{\textit{PlanningVis}}} can be replaced when more advanced production planning algorithms are available. Also, multiple automatic algorithms of production planning can be further incorporated into {{\textit{PlanningVis}}}. The system can enable users to interactively compare and select a suitable planning algorithm.
\section{Conclusion and Future Work}
In this paper, we propose {{\textit{PlanningVis}}}, a visual analytics system to support interactive exploration and comparison of production plans in three levels of details.
Our approach enables the combination of the automatic planning algorithm with the domain expertise of planning practitioners to support two kinds of what-if analyses: the daily optimization of production planning and the quick response to unanticipated changes in the manufacturing process or the market (e.g., a sudden decrease \modified{in} production capacity or raw material supply, and an abrupt increase in market demand).
Thus, our system \modified{is aligned} with the target of implementing smart factories~\cite{kagermann2013recommendations}, including accelerating production planning and quickly adjusting the production plan according to real-time manufacturing data.
We presented two case studies with real-world production planning data and conducted interviews with six domain experts from a world-leading manufacturing company. The results demonstrate the effectiveness and usability of {{\textit{PlanningVis}}} in exploring, comparing and quickly adjusting production plans.
In future work, we will consider more performance indicators in production planning, such as the transportation cost and the machine failure rate, and support the fast response to more unanticipated incidents in manufacturing, e.g., traffic jams.
Also, our approach only shows the summarized production output and capacity use at the factory level. We plan to reveal the production details \modified{of} each production line and machine within a factory.
Furthermore, we would like to integrate more automatic algorithms for production planning into {{\textit{PlanningVis}}} and it would be interesting to support the comparison of different algorithms and further recommend the most suitable planning algorithm for different production planning tasks.
\section{ Introduction}
Multiferroics are materials in which ferroic properties, e.g.,
magnetism and polar order coexist. Magnetic and ferroelectric
ordering couple microscopically or macroscopically to form the
magnetic ferroelectrics. The coupling of the two ordering leads to
the so-called magnetoelectric effect in which the magnetization can
be tuned by the external electric field, and vice versa\cite{1,2,3}.
Magnetic ferroelectrics have potential applications in information
storage, actuators, sensors, and functional devices. Perovskite
BiFeO$_3$ exhibits both weak ferromagnetism and ferroelectric
characteristics, and has been studied extensively in recent
years \cite{4,5,6,7,8}. The G-type antiferromagnetic (AFM) order of
Fe magnetic moments exhibits a canting caused by the antisymmetric
Dzyaloshinskii-Moriya interaction (DMI) under the rhombohedral $R3c$
space group. However, a spiral spin structure of the AFM Fe sublattice
rotates through the crystal with a long-wavelength period of
620 {\AA}, further decreasing the weak ferromagnetism. In our
previous work, we suggested that the effect of decreasing magnetism
caused by the spiral spin structure can be suppressed by doping
magnetic transition-metal ions at the perovskite B sites \cite{9}. We
suggest that perovskite Bi$_2$FeMnO$_6$ is a good candidate to fulfill
this requirement, in that Mn and Fe have different magnetic moments
and hence a lower DMI driving the rotation of the AFM vectors.
Compared with BiFeO$_3$, the new system should exhibit an observable
macroscopic magnetization and reversal of magnetization. Besides
the ferroelectric displacement of the Bi site, driven by the
stereochemically active Bi-6s lone pair induced by the mixing between the
$(ns)^2$ ground state and a low-lying $(ns)^1(np)^1$ excited state,
there exists another structural distortion, the alternating sense of
rotation of the oxygen octahedra along [1 1 1] direction, which is
known as the antiferrodistortive (AFD) distortion \cite{10,11}. In
our former paper, it was also shown that the AFD distortion couples with the
weak ferromagnetism through the DMI, which decreases with
increasing on-site Coulomb interaction ($U$) \cite{12}. We aim to
extend our study to perovskite Bi$_2$FeMnO$_6$ to investigate the
coupling between the AFD distortion and the weak ferromagnetism under the DMI,
using $ab$ $initio$ calculations that take into account the
spin-orbit (SO) coupling effect and a noncollinear spin
configuration, with Mn doped periodically along the [1 1 1]
direction at the Fe sites of BiFeO$_3$. Does the AFD distortion couple with
magnetism in Bi$_2$FeMnO$_6$ when Fe and Mn are arranged in G-type
AFM ordering? What is the origin of coupling between the AFD
distortion and the magnetism under DMI in Bi$_2$FeMnO$_6$? What is
the role of DMI in the coupling between AFD distortion and
magnetization? How does the Coulomb interaction ($U$) affect the
DMI? In this paper we propose a mechanism for the DMI in
Bi$_2$FeMnO$_6$, using first-principles calculations based on density
functional theory (DFT). This work can shed light on the
magnetoelectric coupling process in perovskite multiferroics.
The remainder of this paper is organized as follows: In section 2,
we present the computational details of our calculations. We
report the calculated results and discussion in section 3. In
section 4, we draw conclusions based on our calculations.
\section{ Computational details}
Our calculations were performed within the local spin density
approximation(LSDA) to DFT using the
ABINIT package\cite{13,14}. The ion-electron interaction was
modeled by the projector augmented wave (PAW) potentials
\cite{15,16} with a uniform energy cutoff of 500 eV. Bi 5d, 6s, and
6p electrons, Fe 4s, 4p,and 3d electrons, and O 2s and 2p electrons
were considered as valence states. Two partial waves per $l$ quantum
number were used. The cutoff radii for the partial waves for Bi, Fe,
and O were 2.5, 2.3, 1.1 a.u., respectively. $6\times6\times6$
Monkhorst-Pack sampling of the Brillouin zone were used for all
calculations. We calculated the net magnetization per unit cell and
the electronic properties within the LSDA+U method where the strong
Coulomb repulsion between localized $d$ states has been considered
by adding a Hubbard-like term to the effective
potential\cite{17,18,19}. The effective Hubbard parameter, the
difference between the Hubbard parameter $U$ and the exchange
interaction $J$ ($U-J$), was varied in the range between 0 and 5
eV for the Fe and Mn $d$ states. For $(U-J) = 0$ eV, $J$ was
varied as 0, 0.5, 0.8, and 1 eV, respectively. $J$ remained 1 eV for
other effective Hubbard values. Taking into account the SO
interaction, we introduced the noncollinear spin configuration to
construct the G-type AFM magnetic order with the AFM axis being
along the $x$ axis in Cartesian coordinates in our $ab$ $initio$
calculation.
\section{ Results and discussion}
The initial lattice parameters are taken to be the same as those of
BiFeO$_3$ in Ref. [4]. Then the cell shape and the atomic positions are fully
relaxed, and the relaxed parameters are given in table 1. One can
see that the lattice constant decreases upon doping Mn into
BiFeO$_3$. The rhombohedral angle also decreases compared with
BiFeO$_3$. This shows that the lattice cell is compressed by the
substitution of Mn at the Fe site, while the lattice shape also changes
with the decreasing rhombohedral angle.
For the AFD distortion, a rotational vector $\mathbf{R}$ has been
introduced to describe the direction of the rotation of the oxygen
octahedra\cite{12}. $\mathbf{R}_{out}$ corresponds to the state in
which the rotational vectors of two neighboring oxygen octahedra are
deviating away, while $\mathbf{R}_{in}$ is pointing inward. The
rotational angle is 10$^\circ$ in the Cartesian
coordinates \cite{12}. $U$ and $J$ are the Coulomb interaction and
superexchange interaction parameters used in the LSDA+U
calculation, as in Ref. [12]. $U$ is the amount of energy required to
transfer one electron from one site to its nearest neighbor, while
$J$ indicates the strength of the magnetic superexchange
interaction.
In table 2, we present the net magnetization per unit cell with
respect to $\mathbf{R}_{in}$ and $\mathbf{R}_{out}$ in Cartesian
coordinates for different $U$ and $J$. For the sake of clarity, we
only vary the value of $J$ for $U = 0$ eV. Again, one can see
that the value of $J$ has no effect on the resulting magnetization when
$U$ remains constant. The initial magnetic moments are arranged
in ferrimagnetic order along the $x$ direction, which is known as G-type
AFM order. The resultant magnetic moment in the $x$ direction is due to
the ferrimagnetic arrangement of the Fe and Mn magnetizations.
Neighboring moments interact under the DMI and deviate away from
the original $x$ direction, producing a resultant magnetic moment in
the $y$ direction. When the rotational vector $\mathbf{R}$ is reversed,
the resultant magnetic moment is reversed as well, as long as $U$ is
smaller than 2.7 eV. This indicates that the AFD and magnetic orderings are
coupled and the DMI is still active in this case. As $U$ is greater than
2.7 eV, the coupling between these two orderings is prohibited. That
is to say, the DMI is precluded in this case. It is worth mentioning
that the component of the magnetic moment in the $y$ direction for $U = 2.7$ eV
becomes zero. This indicates that the critical value of $U$ for
suppressing the DMI is 2.7 eV, at which the antisymmetric Fe and Mn
magnetic moments stop interacting with each other. This critical
value is smaller than that for BiFeO$_3$, where the DMI is turned off
as $U$ approaches 2.9 eV \cite{12}. This indicates that the Fe-Mn
antisymmetric interaction, where the interacting spin moments are
$(5/2)S-(4/2)S$, is weaker than the Fe-Fe antisymmetric interaction, where
the interacting spin moments are $(5/2)S-(5/2)S$. We believe the weaker
DMI and alternating spin moments in the new system would destroy
the spiral spin structure through the crystal, in comparison with the
case of BiFeO$_3$. Thereby the magnetization and the reversal of
magnetization can be improved in the new system. In order to make
the reversal of magnetization clear, the resultant AFM vectors are
illustrated in Fig. 1 with respect to $U$. The AFM vectors have three
components corresponding to the $x$, $y$, and $z$ directions,
respectively. It is apparent that there is an anomaly as $U$
approaches 2.7 eV, which corresponds to the critical value for
eliminating the DMI. The magnetization is inverted as long as $U$ is under the
critical value, which indicates that the DMI is still active in this $U$
range. Meanwhile, the DMI is inversely proportional to the $U$ value.
The deviation of the original magnetic moment away from the $x$ axis only
occurs in the $xoy$ plane. That is why the reversal of the magnetic moment
cannot be observed in the $z$ direction.
The relationship between DMI and Coulomb on-site interaction can be
understood theoretically as follows. The Hamiltonian of the new
perovskite system reads,
\begin{equation}
H_{Bi_2FeMnO_6}=-2\sum_{<Fei,Mnj>}\textbf{J}_{Fei,Mnj}\mathbf{S}_{Fei}\cdot\mathbf{S}_{Mnj}+\sum_{<Fei,Mnj>}\textbf{D}_{Fei,Mnj}\cdot\mathbf{S}_{Fei}\times\mathbf{S}_{Mnj}.
\end{equation}
The first term is attributed to the symmetric superexchange, and
the second term is the antisymmetric DMI contribution which only
occurs when the inversion symmetry of the cations is broken. The
constant \textbf{D} is related to the AFD displacement and the
rotational vector. It functions like a ``DMI constant'' which
indicates the strength of the DMI. To second order in perturbation
theory, in the case of one electron per ion, it reads
\begin{equation}
\textbf{D}_{Fe,Mn}^{(2)}=(4i/U)[b_{nn'}(Fe-Mn)C_{n'n}(Mn-Fe)-C_{nn'}(Fe-Mn)b_{n'n}(Mn-Fe)].
\end{equation}
From this relation, it can be seen that the DMI constant is
inversely proportional to the on-site Coulomb interaction and has
nothing to do with the symmetric superexchange parameter.
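To make explicit how the antisymmetric term produces the weak ferromagnetism and its $U$ dependence, consider a standard two-sublattice estimate (this worked example is ours and is not part of the original derivation). For a pair of spins of magnitude $S$ canted by an angle $\theta$ away from the collinear AFM axis, with $\mathbf{D}$ perpendicular to the spin plane and an AFM coupling $J<0$, the energy per bond reduces to
\[
E(\theta) = 2JS^2\cos 2\theta + DS^2\sin 2\theta,
\]
which is minimized at $\tan 2\theta = -D/(2|J|)$. For $|D| \ll |J|$ this gives a small canting angle $|\theta| \approx |D|/(4|J|)$ and a net moment of magnitude $2S|\sin\theta| \approx S|D|/(2|J|)$ perpendicular to the AFM axis, so a weaker effective $\mathbf{D}$ (e.g., at larger $U$) directly implies a smaller resultant magnetization.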
The new system should possess a good insulating property in terms of
the calculated band gap, and hence reversible states of
ferroelectricity. The band gap of metal oxides can be opened by
adopting an appropriate $U$ value. The corresponding value for
BiFeO$_3$ is about 4.3 eV. We take a $U$ value of 5 eV for Fe and Mn in
Bi$_2$FeMnO$_6$ to analyze the total density of states (DOS), in that
the band gap increases slightly with increasing $U$ value.
The band gaps corresponding to the two rotational vectors are
shown in Fig. 2. Generally, the band gaps corresponding to the
two rotational vectors increase linearly with respect to the $U$
value, while there exist two anomalies for the two AFD states as $U$
approaches 2.7 eV, where the band gap disappears. These
anomalies in the curve are consistent with the calculated results
for the AFM vectors shown in table 2 and Fig. 1, respectively. The
anomalies show that the disappearance of the DMI occurs at the critical $U$
value and is caused by the hopping of electrons through the Fermi
level. Moreover, the band gap for $\mathbf{R}_{out}$ is
narrower than that for $\mathbf{R}_{in}$, which is probably caused by the
different ligand fields related to the structure. In comparison with
the total DOS without applying the on-site Coulomb interaction, the
total DOS with a $U$ value of 5 eV is shown in Fig. 3. The finite DOS
in the vicinity of the Fermi level is pushed away to form the band gap
when $U$ is equal to 5 eV. Band gaps of 1.26 eV and 0.75 eV are obtained
for the $\mathbf{R}_{out}$ and $\mathbf{R}_{in}$ rotational states,
respectively. This indicates that both AFD states possess
a semiconducting property. A ferroelectric reversal would be
expected between these two reversed AFD states. However, Bi$_2$FeMnO$_6$
remains metallic without applying $U$. Again this shows that
the on-site Coulomb interaction is significant in opening the band
gap of perovskite metal oxides.
In order to shed light on the role of the DMI between neighboring Fe and
Mn ions, we report the orbital-resolved density of states (ODOS) of
the magnetic ions at $U = 2.6$ eV, which is just under
the critical value for precluding the antisymmetric DMI. The ODOS of
Fe and Mn corresponding to the $\mathbf{R}_{out}$ and $\mathbf{R}_{in}$
states are shown in Fig. 4. It is apparent that there is a large
number of doubly degenerate $e_g$ states for Fe and Mn in the
occupied bands in the vicinity of the Fermi level. The charge
hybridization between the Fe-3d and Mn-3d electrons is realized mainly
by the $e_g$-$e_g$ state interaction. This suggests that the
$e_g$-$e_g$ AFM interaction plays an important role in the
antisymmetric DMI between Fe and Mn ions. The $e_g$ states are composed of the
$d_{z^2-r^2}$ and $d_{x^2-y^2}$ orbitals. It is worth mentioning
that the finite ODOS in the vicinity of the Fermi level is mainly
attributed to the $d_{z^2-r^2}$ orbital. We suggest that the $e_g$-$e_g$
AFM interaction is mediated precisely by the neighboring $d_{z^2-r^2}$
orbitals of the Fe and Mn ions. The charge transfer between the Fe and
Mn ions, and hence the inversion of the magnetization, can be seen from
Fig. 5. The hopping of electrons between neighboring $e_g$-$e_g$
pairs occurs due to the DMI when the inversion symmetry of the cations is
broken. Moreover, the hopping of electrons couples with the rotation
of the neighboring oxygen octahedra to produce the resultant
magnetization. The charge distribution in the neighboring $e_g$-$e_g$
states is polarized for a given rotational vector, and the charge
distribution is inversely polarized when the rotational vector
changes direction, eventually leading to the reversal of the resultant
magnetization. The charge distribution would be
uniform in the absence of the DMI. However, the
homogeneous distribution is destroyed by the DMI between neighboring
magnetic ions to form a polarized state. We suggest that this
constitutes the microscopic mechanism of the DMI in perovskite
multiferroics. On the other hand, the AFM interaction in the triply
degenerate $t_{2g}$-$t_{2g}$ states, composed of the $d_{xy}$, $d_{yz}$, and
$d_{xz}$ orbitals, is relatively weak compared with the $e_g$-$e_g$ AFM
interaction. Again this shows that the latter is deeply involved in
the antisymmetric DMI.
\section{ Conclusion}
The critical value of $U$ for suppressing the DMI in Bi$_2$FeMnO$_6$ is
found to be 2.7 eV, which is smaller than that in BiFeO$_3$. This
indicates that the DMI in the new system is weaker in
comparison with that in BiFeO$_3$. The ODOS of Fe and Mn corresponding
to different AFD displacements show that the microscopic mechanism
of the DMI originates from the hopping of electrons between
neighboring $e_g$-$e_g$ states, and that the $e_g$-$e_g$ AFM interaction
couples with the rotation of the neighboring oxygen octahedra.
\section{Introduction}
Transformers \citep{vaswani-attention} have seen widespread use and success in recent years in deep learning, operating on numerous types of data. It is natural to also apply transformers to graph-structured data, treating the input graph's vertices as a bag of tokens. There are two important issues to be resolved for a proper formulation of a graph transformer: first, how should a graph's edges and edge features be taken into account in the transformer architecture, and second, what is an appropriate position encoding (PE) for a transformer operating on a graph? We focus on answering the latter by using a graph automaton to compute the PEs for a graph transformer.
Designing PEs for graph transformers is not a new topic in the graph neural network literature. Broadly speaking, following the terminology used to describe approaches in developing graph convolutional networks, PEs can be categorized as being (1) spectral or (2) spatial in nature \citep{zhang-spectral-spatial}. Spectral methods leverage the graph Laplacian, an incredibly useful descriptor of key structural features of the graph that are able to describe and model phenomena such as heat diffusion and electrical interactions in physics. In contrast, spatial methods use local, node-level features to help the transformer differentiate nodes in the graph. Examples of these features include node degree, shortest-path distance, and self-landing probability during a random walk.
In this work, we propose GAPE (Graph Automaton PE), a PE scheme that is inspired by neither spectral nor spatial methods but rather weighted graph automata. GAPE avoids several pitfalls of previous PE methods, and it offers a more robust method of encoding positional information of nodes in a graph than the spatial methods. We also show mathematical connections between GAPE and other PE schemes, including the sinusoidal encodings of \citet{vaswani-attention}, providing a satisfying theoretical basis for its use. Finally, although we do not provide experimental results for this setting, GAPE is able to give PEs for directed graphs in addition to the undirected graphs that spectral methods typically assume.
Further, GAPE is able to provide distributed representations for use as PEs. Other graph PE schemes have different strategies for delivering a $k$-dimensional encoding; spectral methods might use the first $k$ eigenvectors of the Laplacian \citep{dwivedi-benchmarking}, while spatial methods might consider random walks of up to $k$ steps \citep{dwivedi-lspe}. These are not distributed representations, in the sense that particular dimensions of the PE of a node correspond to particular features of that node in the graph. One consequence of this is that the choice of $k$ may depend on the size of the graph -- for example, the graph might not even have $k$ eigenvectors.
To outline our contributions, GAPE (1) can (theoretically and empirically) simulate sinusoidal PEs and achieves the same BLEU score on machine translation (MT), (2) can simulate the Laplacian eigenvector PEs, (3) is a generalization of personalized PageRank \citep{pagerank}, (4) competes with other recent PEs on several graph and node-level tasks, and (5) gives a $k$-dimensional distributed representation for any desired~$k$. An additional point is that, while we do not provide experiments, GAPE is able to encode directed graphs, unlike spectral PE methods. Further, we provide a comparison of recent position encodings independent of the use of edge features.
\section{Related Work}
Spectral methods use the eigenvalues and eigenvectors of a graph's Laplacian matrix to construct a PE. Informally, a graph's Laplacian matrix can be thought to measure the ``smoothness'' of functions defined on the graph, and its eigenvectors happen to be the smoothest functions. Each eigenvector can then be interpreted as descriptors of how information propagates through the graph. For example, Laplacian eigenvectors can be used for graph clustering \citep{fiedler, Shi2000}, modeling heat diffusion \citep{coifman-diffusion}, and solving the max-flow/min-cut problem \citep{chung-spectral}. Given these traits, deriving a PE based on the eigenvalues and eigenvectors of the graph Laplacian is well-motivated.
\citet{dwivedi-gtn} draw on ideas from \citet{belkin-niyogi-lape} and use the smallest $k$ eigenvectors of the graph Laplacian according to their associated eigenvalues as PEs. \citet{dwivedi-benchmarking} also claim that these eigenvectors are a generalization of the original Transformer's sinusoidal encodings, which we show to be false in \cref{section:experiments}. \citet{kreuzer-san} take a more comprehensive approach, employing an attention-based method to produce a PE informed by potentially the entire spectrum of the graph Laplacian. However, save for augmentations by \citet{zhang-magnet}, these methods restrict the graph transformer to operating on undirected graphs. This is to ensure that each graph's Laplacian matrix is symmetric and therefore has real eigenvalues.
In contrast, spatial methods utilize local node properties such as node degree \citep{ying-graphormer}, self-landing probability during a random walk \citep{dwivedi-lspe}, and shortest path distance \citep{li-degnn, ying-graphormer, mialon-graphit}.
However, these methods come with their own sets of shortcomings. For example, a node's degree is not enough to uniquely identify its position in a graph; the number of a neighbors a node has is not indicative of which nodes those neighbors are. Other techniques such as those involving random walks and shortest-path distances are limited to local and pairwise node interactions which can become expensive to compute if one wants to encode interactions between nodes further apart in the graph. Some PEs such as one by \citet{dwivedi-lspe} also require a choice of fixed neighborhood size to walk, which limits the uniqueness of the encoding.
\section{GAPE: Graph Automaton Positional Encoding}
\subsection{Weighted graph-walking automata}
In this section, we define a weighted version of graph-walking automata \citep{kunc+okhotin:2013}.
Graph-walking automata, as originally defined, run on undirected, node-labeled graphs; the graphs also have a distinguished initial node, and for every node, the incident edges are distinguished by labels called ``directions.'' Here, we consider directed graphs (and handle undirected graphs simply by symmetrizing them).
Our graphs do not have initial nodes or ``directions.''
\begin{definition}
Let $\Sigma$ be a finite alphabet. A \emph{directed node-labeled graph} over $\Sigma$ is a tuple $G = (V, \ell, A)$, where
\begin{itemize}
\item $V = \{1, \ldots, n\}$ is a finite set of nodes;
\item $\ell \colon V \rightarrow \Sigma$ maps nodes to labels;
\item $A \in \mathbb{R}^{n \times n}$ is an adjacency matrix, that is, $A_{ij} = 1$ if there is an edge from $i$ to $j$, and $A_{ij} = 0$ otherwise.
\end{itemize}
\end{definition}
If $\Sigma = \{1, \ldots, m\}$, we can also think of $\ell$ as a matrix in $\{0,1\}^{m \times n}$.
\begin{definition}
Let $\Sigma = \{1, \dots, m\}$ be a finite alphabet.
A \emph{weighted graph-walking automaton (WGWA)} over $\Sigma$ is a tuple $M = (Q, \alpha, \mu, \tau)$, where
\begin{itemize}
\item $Q = \{1, \ldots, k\}$ is a finite set of states;
\item $\alpha \in \mathbb{R}^{k \times m}$ is a matrix of initial weights;
\item $\mu \in \mathbb{R}^{k \times k}$ is a matrix of transition weights;
\item $\tau \in \mathbb{R}^{k \times m}$ is a matrix of final weights.\footnote{The initial and final weights are commonly called $\lambda$ and $\rho$ (for ``left'' and ``right''), respectively, but since these letters are commonly used for eigenvalues and spectral radii, respectively, we use $\alpha$ and $\tau$ instead.}
\end{itemize}
\end{definition}
\begin{definition}
Let $M$ be a WGWA and $G$ be a directed graph.
A \emph{configuration} of $M$ on $G$ is a pair $(q, v)$, where $q \in Q$ and $v \in V$.
A \emph{run} of $M$ on $G$ from $(q,u)$ to $(r,v)$ with weight $w$ is a sequence of configurations $(q_1, v_1), \ldots, (q_T, v_T)$, where $(q_1, v_1) = (q,u)$, $(q_T, v_T) = (r,v)$, $A_{v_t, v_{t+1}} = 1$ for all $t < T$, and
\begin{equation}
w = \alpha_{q,\ell(u)} \left(\prod_{t=1}^{T-1} \mu_{q_t,q_{t+1}}\right) \tau_{r,\ell(v)}.
\end{equation}
\end{definition}
We can think of a graph automaton as simulating random walks with state.
At each time step, the automaton is positioned at some node and in some state.
At time $t=1$, the automaton starts at some initial node $u$ in state $q$ with weight $\alpha_{q,\ell(u)}$.
Then at each time step, if the automaton is at node $u$ in state $q$,
it either moves to a neighboring node $v$ and transitions to state $r$ with weight $\mu_{qr}$,
or halts with weight $\tau_{q,\ell(u)}$.
\subsection{Positional encodings}
To encode the position of node $v$ using a WGWA $M$, we use the total weight of all runs of $M$ starting in any configuration and ending in configuration $(r,v)$. The weights for all possible $r$ give a $k$-dimensional vector, which can be found by solving
\begin{equation}
\label{eq:gape}
P = \mu P A + \alpha \ell \qquad P \in \mathbb{R}^{k \times n}.
\end{equation}
Then $\text{GAPE}_M(v) = P_{:,v} \circ (\tau \ell)_{:,v}$ (where $\circ$ is elementwise multiplication) is our PE for $v$.
Notice that whether a particular node is able to be assigned a PE does not depend on $k$, unlike the random walk encoding \citep{dwivedi-lspe} or Laplacian eigenvector PE \citep{dwivedi-benchmarking}. The columns of $P$ can be viewed as a distributed representation in the number of states of the WGWA. We pass $P^\top$ through a linear layer whenever $k < d$ where $d$ is the dimension of the node features.
To solve \cref{eq:gape} for $P$, one may use the ``$\operatorname{vec}$ trick'' \citep[p.~59--60]{matrixcookbook}:
\begin{align*}
\operatorname{vec}\bm{P} & = \operatorname{vec}(\mu \bm{P} A + \alpha \ell) \\
& = (A^\top \otimes \mu) \operatorname{vec} \bm{P} + \operatorname{vec}(\alpha \ell) \\
(I-A^\top \otimes \mu) \operatorname{vec} \bm{P} & = \operatorname{vec}(\alpha \ell)
\end{align*}
where $\operatorname{vec}$ flattens matrices into vectors and $\otimes$ is the Kronecker product. However, solving the above linear system directly for $\operatorname{vec} \bm{P}$ runs in $O(k^3n^3)$ time due to inverting $(I-A^\top \otimes \mu)$, which is impractical for even small graphs given a large enough batch size. Fortunately, \cref{eq:gape} belongs to a family of equations known as Sylvester equations \citep[p.~111]{horn+johnson:2013}, and can be solved in $O(n^3 + k^3)$ time with the Bartels-Stewart algorithm \citep{bartels-stewart}. This algorithm is implemented in SciPy \citep{scipy}, but unfortunately not PyTorch \citep{pytorch}, so (unless otherwise indicated) we randomly initialize $\mu$ and $\alpha$ using orthogonal initialization \citep{saxe+:2014}. Because, in \cref{eq:gape}, $\bm P$ is not well-defined unless $\rho < 1$, where $\rho$ is the spectral radius of $\mu$, we scale the orthogonally-initialized $\mu$ by an experimentally chosen ``damping'' factor $\gamma < 1$.
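As an illustrative sketch (ours, not a released implementation), both solution strategies can be written in a few lines; the Bartels--Stewart route additionally assumes $\mu$ is invertible so that $P - \mu P A = C$ can be rewritten in the Sylvester form $\mu^{-1}P + P(-A) = \mu^{-1}C$:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

def gape_vec_trick(mu, A, C):
    # O(k^3 n^3): vec(P) = (I - A^T kron mu)^{-1} vec(C), column-major vec.
    k, n = C.shape
    M = np.eye(k * n) - np.kron(A.T, mu)
    p = np.linalg.solve(M, C.flatten(order="F"))
    return p.reshape((k, n), order="F")

def gape_sylvester(mu, A, C):
    # O(k^3 + n^3): Bartels-Stewart on mu^{-1} P + P (-A) = mu^{-1} C.
    return solve_sylvester(np.linalg.inv(mu), -A, np.linalg.solve(mu, C))

rng = np.random.default_rng(0)
k, n = 4, 6
mu = 0.3 * np.linalg.qr(rng.normal(size=(k, k)))[0]  # damped orthogonal
A = (rng.random((n, n)) < 0.3).astype(float)
C = rng.normal(size=(k, n))  # plays the role of alpha @ ell
print(np.abs(gape_vec_trick(mu, A, C) - gape_sylvester(mu, A, C)).max())
\end{verbatim}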
\subsection{Connection with other PEs}
We move on to connect GAPE with other PEs. GAPE can be interpreted as a generalization of other PE schemes previously used in graph transformers.
\subsubsection{Sinusoidal encodings}
For any string $w = w_1 \cdots w_n$, we define the \emph{graph} of $w$ to be the graph with nodes $\{1, \ldots, n\}$, labeling $\ell(1) = 1$ and $\ell(i) = 2$ for $i > 1$, and an edge from $i$ to $(i+1)$ for all $i = 1, \ldots, n-1$.
\begin{proposition}
\label{proposition:sinusoidal-pe}
There exists a WGWA $M$ such that for any string $w$, the encodings $\text{GAPE}_M(i)$ for all nodes $i$ in the graph of $w$ are equal to the sinusoidal PEs of \citet{vaswani-attention}.
\end{proposition}
\begin{proof}
If $G$ is the graph of a string, the behavior of $M$ on $G$ is similar to a unary weighted finite automaton (that is, a weighted finite automaton with a singleton input alphabet), which \citet{debenedetto+chiang:icml2020} have shown can recover the original sinusoidal PEs.
Let
\begin{align*}
\alpha &=
\begin{bmatrix}
0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 1 & 0 \\ \vdots & \vdots
\end{bmatrix} &
\mu &=
\begin{bmatrix}
\cos \theta_1 & \sin \theta_1 & 0 & 0 & \cdots \\
-\sin \theta_1 & \cos \theta_1 & 0 & 0 & \cdots \\
0 & 0 & \cos \theta_2 & \sin \theta_2 & \cdots \\
0 & 0 & -\sin \theta_2 & \cos \theta_2 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix} &
\tau &= \begin{bmatrix}
1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ \vdots & \vdots
\end{bmatrix}
\end{align*}
where $\theta_j = -10000^{-2(j-1)/k}$.
Then the PE for node $i$ is $\mu^{i-1} \alpha_{:,1}$, which can easily be checked to be equal to the original sinusoidal PEs.
\end{proof}
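The construction can also be checked numerically. The following sketch (ours; it uses the positive sign convention for $\theta_j$) walks the WGWA of the proof down a path graph, computing $\mu^{i-1}\alpha_{:,1}$ for each node, and compares the result to the standard sinusoidal formula:
\begin{verbatim}
import numpy as np

def sinusoidal_pe(n_pos, d):
    # Standard sinusoidal encodings for positions 0..n_pos-1.
    pos = np.arange(n_pos)[:, None]
    angle = pos / 10000 ** (np.arange(0, d, 2)[None, :] / d)
    pe = np.zeros((n_pos, d))
    pe[:, 0::2], pe[:, 1::2] = np.sin(angle), np.cos(angle)
    return pe

def gape_path_pe(n_pos, d):
    # Node i of the path graph receives mu^(i-1) @ alpha[:, 0].
    mu = np.zeros((d, d))
    for b, t in enumerate(1 / 10000 ** (np.arange(0, d, 2) / d)):
        c, s = np.cos(t), np.sin(t)
        mu[2*b:2*b+2, 2*b:2*b+2] = [[c, s], [-s, c]]
    v = np.zeros(d)
    v[1::2] = 1.0  # alpha[:, 0] = (0, 1, 0, 1, ...)
    rows = []
    for _ in range(n_pos):
        rows.append(v)
        v = mu @ v
    return np.stack(rows)

print(np.allclose(sinusoidal_pe(50, 64), gape_path_pe(50, 64)))  # True
\end{verbatim}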
To verify the above proposition, we ran an MT experiment
benchmarking several graph PE schemes and comparing their performance
with GAPE using the open-source Transformer implementation Witwicky,\footnote{https://github.com/tnq177/witwicky} with default settings.
Results and a complete experimental description are below in \cref{section:experiments}.
\subsubsection{Laplacian eigenvector encodings}
\label{subsection:lape}
Next, we turn to LAPE \citep{dwivedi-benchmarking}, which is defined as follows. Define the graph Laplacian to be $L = D - A$ where $D_{vv}$ is the degree of node $v$. Let $V$ be the matrix whose columns are the eigenvectors of $L$, that is,
\[ L V = V \Lambda \]
where $\Lambda$ is the diagonal matrix of eigenvalues. Then if $k \le n$, define
\[ \text{LAPE}(v) = V_{v,1:k}. \]
\begin{remark}
If we assume an undirected graph and use the Laplacian matrix $L$ in place of $A$, then there is a WGWA $M$ with $n$ states that computes LAPE encodings.
Namely, let $m = 1$, $\alpha = \mathbf{0}$,
let $\mu$ be constrained to be diagonal,
and let $\tau_{q,1} = 1$ for all $q$.
Then $P$ is the solution to the equation $P = \mu P L$.
But observe that $\mu = \Lambda^{-1}$ and $P = V^\top$ are a solution to this equation. So $\text{GAPE}_M(v) = \text{LAPE}(v)$.
\end{remark}
While the above is true, some caveats are in order. First, the choice of $\mu$ does depend on $L$ and therefore the graph.
Second, the theoretical equivalence of GAPE and LAPE requires a significant adjustment to \cref{eq:gape} by replacing the adjacency matrix with the graph Laplacian, which is less interpretable in an automaton setting.
However, we show in \cref{section:experiments} that, for a fixed graph and using the adjacency matrix, GAPE can actually learn weights $\mu$ and $\alpha$ that fit LAPE closely.
\subsubsection{Personalized PageRank \& random-walk encoding}
Next, we discuss the random-walk PE (RW) used by \citet{dwivedi-lspe} which is a simplification of work by \citet{li-degnn}.
\begin{definition}
Let $u$ be a node in the graph, and let $W = AD^{-1}$ where $D$ is the degree matrix of the graph. Then $RW(u)$ is defined by
\begin{equation} \label{eq:li-rw}
RW(u) = \begin{bmatrix} W_{uu}, [W^2]_{uu}, \dots, [W^k]_{uu} \end{bmatrix} \in \mathbb{R}^k.
\end{equation}
for some experimentally chosen $k$. In other words, $RW(u)_i$ is the probability that an $i$-length random walk starting at $u$ returns to $u$.
\end{definition}
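For concreteness, RW can be computed directly from the definition; the sketch below (ours) assumes an undirected graph with no isolated nodes:
\begin{verbatim}
import numpy as np

def rw_pe(A, k):
    # RW(u)_i = [W^i]_{uu}, where W = A D^{-1} scales column j by 1/deg(j).
    W = A / A.sum(axis=0, keepdims=True)
    pe, M = [], W
    for _ in range(k):
        pe.append(np.diag(M).copy())
        M = M @ W
    return np.stack(pe, axis=1)  # row u is RW(u), shape (n, k)
\end{verbatim}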
Next, we draw a connection between GAPE and the random walk encoding above by interpreting GAPE as a method of computing personalized PageRank scores. As described by \citet{zhang-ppr}, the $v$th entry of the personalized PageRank (PPR) vector $\pi_{u}$ of a node $u$ is the probability that a random walk from node $u$ terminates at $v$ with damping value $\beta$. At each time step, the random walker either terminates at the current node with probability $\beta$ or continues to a random neighbor with probability $1-\beta$. Formally, define the PPR vector $\pi_u$ to be
$$\pi_u = \beta e_u + (1-\beta)\pi_u W$$
where $e_u$ is a one-hot vector for $u$.
So the matrix of PPR scores $\Pi$ is
\begin{equation}\label{eq:ppr}
\Pi = \beta I + (1-\beta)\Pi W.
\end{equation}
One can see the similarity between \cref{eq:ppr} and \cref{eq:gape}: GAPE computes the PPR vector $\pi_u$ when $k = 1$, $\mu = 1-\beta$, and $\alpha\ell = \beta e_u$ (with $A$ replaced by $W$), and can therefore be seen as a generalization of PPR. Interestingly, \cref{eq:ppr} can be seen as a way of calculating something analogous to RW. Since $\Pi_{u, v}$ is the probability of a node $u$ landing on node $v$ during an infinitely long random walk, then $\Pi_{u, u}$ is the self-landing probability of a random walker starting at node $u$, which captures similar information as $RW(u)$. The difference is that $RW(u)$ is determined by random walks of a fixed length. It turns out we can recover very similar encodings as $RW(u)$ by taking the diagonals of the PPR matrix for successive powers of $W$. That is, if $\Pi^{(i)}$ is the PPR matrix using $W^i$, then we can define the PPR Power (PPRP) encoding of a node $u$ by
$$PPRP(u) = \begin{bmatrix} \Pi^{(1)}_{uu}, \Pi^{(2)}_{uu}, \dots, \Pi^{(k)}_{uu} \end{bmatrix} \in \mathbb{R}^k.$$
Intuitively, $PPRP(u)_i$ is the self-landing probability of a random walker during an infinitely long random walk with a step size of $i$.
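Solving \cref{eq:ppr} in closed form gives $\Pi = \beta\,(I - (1-\beta)W)^{-1}$, so PPRP can be computed by substituting $W^i$ for each coordinate. A sketch (ours; the damping value $\beta = 0.15$ is an arbitrary illustrative choice):
\begin{verbatim}
import numpy as np

def pprp_pe(A, k, beta=0.15):
    # PPRP(u)_i = Pi^(i)_{uu} with Pi^(i) = beta (I - (1-beta) W^i)^{-1}.
    n = A.shape[0]
    W = A / A.sum(axis=0, keepdims=True)
    cols, Wi = [], W
    for _ in range(k):
        Pi = beta * np.linalg.inv(np.eye(n) - (1 - beta) * Wi)
        cols.append(np.diag(Pi).copy())
        Wi = Wi @ W
    return np.stack(cols, axis=1)  # shape (n, k)
\end{verbatim}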
Interestingly, while PPRP and RW are not equal, their relative values are nearly identical, and they give nearly identical performance on certain graph tasks, as depicted in \cref{fig:rw-ppr} and \cref{table:rw-ppr-graph}. Given that GAPE is a generalization of PPR and that PPRP and RW are empirically highly similar, RW and PPRP can therefore be understood as extensions of GAPE, which adds to GAPE's generality.
To simulate RW with PPRP, we compute the RW and PPRP PE matrices for a graph from CYCLES and ZINC, two different graph datasets which are explained in greater detail in \cref{section:experiments}. We also put the experimental details regarding the transformer and its hyperparameters in \cref{section:experiments}. When comparing their performance on these datasets, we use min-max normalization on both RW and PPRP in order to bring them both to similar scales.
\begin{figure}
\includegraphics[width=.5\textwidth]{images/rw.png}\hfill
\includegraphics[width=.5\textwidth]{images/ppr.png}\hfill
\caption{Left is RW, right is PPRP for a graph in CYCLES. While PPRP does not compute self-landing probabilities of fixed-length walks, the relative differences are the same.}
\label{fig:rw-ppr}
\end{figure}
\begin{table}
\caption{RW vs PPRP with min-max normalization}
\label{table:rw-ppr-graph}
\centering
\begin{tabular}{lcc}
PE scheme & CYCLES $(\uparrow)$ & ZINC $(\downarrow)$ \\
\cmidrule(lr){1-3}
RW & 99.95 & 0.207 \\
PPRP & \textbf{100.00} & \textbf{0.198}\\
\cmidrule(lr){1-3}
\end{tabular}
\end{table}
\section{Experiments} \label{section:experiments}
In this section, we compare GAPE experimentally to the following graph PEs:
\begin{description}
\item[RW] Random Walk \citep{dwivedi-lspe, li-degnn}
\item[LAPE] LAplacian PE \citep{dwivedi-benchmarking, dwivedi-gtn}
\item[SA] Spectral Attention \citep{kreuzer-san}
\item[SPD+C] Shortest Path Distance +
Centrality \citep{ying-graphormer}
\end{description}
\citet{ying-graphormer} use ``Centrality'' to refer to node degree. First, we test whether GAPE can be fit to LAPE. Then, we compare all of the graph PEs with GAPE in two kinds of tasks: MT and graph and node-level tasks.
\subsection{Simulating LAPE}
\label{sec:simulating_lape}
Above in \cref{subsection:lape}, we showed that GAPE and LAPE are equivalent when $\mu$ is chosen depending on the input graph and the adjacency matrix is replaced with the Laplacian. In the following experiment, we show that GAPE can learn $\alpha$ and $\mu$ using the adjacency matrix to produce a PE that closely fits LAPE.
We first generate three Erd\H{o}s-R\'{e}nyi graphs \citep{erdos-reyni} containing 20 nodes, each with edge probability $p \in \{0.7, 0.2, 0.05\}$. For thoroughness, we also take a random molecule with 29 nodes from the ZINC \citep{irwin-zinc} dataset. The goal is to have GAPE learn all of the Laplacian eigenvectors for each graph, setting $k = n$ for GAPE where $n$ is the number of vertices in the graph.
For each graph, we choose one of four different seeds and train GAPE for 10 epochs using 1000 instances of the graph's Laplacian eigenvectors as training data. The objective function is the entry-wise mean squared error (MSE) between the eigenvectors and the PE produced by GAPE. We average the MSEs over the four seeds and report them in \cref{table:learning-lape-performance}.
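As a minimal sketch of this experiment, assuming the fixed-point form $X = \mu X W + \alpha$ for \cref{eq:gape} (which recovers a PPR-style recurrence $\Pi = \beta I + (1-\beta)\Pi W$ under the substitutions $k = 1$, $\alpha = \beta I$, $\mu = 1-\beta$ noted earlier), one can fit $\mu$ and $\alpha$ to the eigenvectors using the vec trick mentioned in our limitations; the initialization scales below are illustrative:
\begin{verbatim}
import torch

def gape(mu, alpha, W):
    # Solve X = mu @ X @ W + alpha via the "vec trick":
    # vec(X) = (I - W^T kron mu)^{-1} vec(alpha).
    k, n = alpha.shape
    M = torch.eye(k * n) - torch.kron(W.T, mu)
    x = torch.linalg.solve(M, alpha.T.reshape(-1))  # column-major vec
    return x.reshape(n, k).T

torch.manual_seed(0)
n = 20
A = (torch.rand(n, n) < 0.2).float().triu(1)
A = A + A.T                                   # Erdos-Renyi, p = 0.2
L = torch.diag(A.sum(1)) - A
V = torch.linalg.eigh(L).eigenvectors         # target: all n eigenvectors

# small initialization keeps I - W^T kron mu well-conditioned
mu = (0.01 * torch.randn(n, n)).requires_grad_()
alpha = (0.01 * torch.randn(n, n)).requires_grad_()
opt = torch.optim.Adam([mu, alpha], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((gape(mu, alpha, A) - V.T) ** 2).mean()  # entry-wise MSE
    loss.backward()
    opt.step()
\end{verbatim}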
\begin{table}[ht]
\caption{Simulating LAPE}
\label{table:learning-lape-performance}
\centering
\begin{tabular}{lcc}
Graph & Average MSE\\
\cmidrule(lr){1-2}
Erd\H{o}s-R\'{e}nyi ($p=0.7$) & $2.47 \times 10^{-7}$ \\
Erd\H{o}s-R\'{e}nyi ($p=0.2$) & $8.53 \times 10^{-8}$ \\
Erd\H{o}s-R\'{e}nyi ($p=0.05$) & $5.46 \times 10^{-7}$ \\
ZINC Molecule & $5.00 \times 10^{-4}$ \\
\cmidrule(lr){1-2}
\end{tabular}
\end{table}
We see in \cref{table:learning-lape-performance} that GAPE learns the Laplacian eigenvectors to a high degree of precision. We specifically chose Erd\H{o}s-R\'{e}nyi graphs with varying $p$ to vary the degree of connectedness of each graph, to test whether GAPE can learn to zero out multiple columns of its PE each corresponding to one of the graph's connected components.
\begin{figure}[ht]
\includegraphics[width=.49\textwidth]{images/init_diff_er_0.7.png}\hfill
\includegraphics[width=.49\textwidth]{images/init_diff_er_0.2.png}\hfill
\caption{Differences between initial weight matrix and target eigenvectors. Left is Erd\H{o}s-R\'{e}nyi ($p=0.7$), and right is Erd\H{o}s-R\'{e}nyi ($p=0.2$).}
\label{fig:init-diffs1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.49\textwidth]{images/init_diff_0.05.png}\hfill
\includegraphics[width=.49\textwidth]{images/init_diff_molecule.png}\hfill
\caption{Differences between initial weight matrix and target eigenvectors. Left is Erd\H{o}s-R\'{e}nyi ($p=0.05$), and right is the random ZINC molecule.}
\label{fig:init-diffs2}
\end{figure}
One might be concerned that GAPE is merely learning the trivial solution to \cref{eq:gape}: just set $\mu = 0$ and $\alpha = V$ where $V$ is the Laplacian eigenvector matrix of the given graph. However, our experiments show this does not occur.
In \cref{fig:init-diffs1,fig:init-diffs2}, we plot the absolute differences between the initial weight
matrix and the target eigenvectors. If GAPE were learning the trivial solution to learning LAPE, we would expect each plot to be nearly entirely comprised of 0s (black), but as we can see, the differences between many of the entries of the initial and target matrices range from 1 to 3, and each heatmap is far from being completely black.
\subsection{Machine translation}
\label{sec:mt_experiments}
Our first experiment tests how well these various non-sinusoidal graph PEs perform on MT from English to Vietnamese. To test GAPE, we use the adjacency matrix of the directed path graph of $n$ vertices, where $n$ is the number of tokens in the input sequence, and use $k = 512$. We also initialize $\alpha_{:,1}$ from a normal distribution and set all other columns of $\alpha$ to 0 in order to force the WGWA to start walking the path graph from its first node.
For LAPE and SA, we represent a sentence of $n$ tokens with a cycle graph of $n$ vertices and take the eigenvectors corresponding to the 20 smallest eigenvalues of its Laplacian.
We use the cycle graph because its Laplacian eigenvectors form the columns of the DFT matrix \citep{davis-circulant} which \citet{bronstein-geodl} note to share a similar structure with Transformer's original sinusoidal encodings.
If so, the sinusoidal encodings should be well approximated by the graph's Laplacian eigenvectors, which would confirm the suggestion that the Laplacian eigenvectors of a graph are a generalization of the sinusoidal encodings \citep{dwivedi-benchmarking}.
However, the sinusoidal encoding for a node $u$ in a sequence does not depend on the length of the sequence, whereas LAPE for a node $u$ does. So, to obtain a better fit to the sinusoidal encodings, we try an additional variant of LAPE, which we call $\text{LAPE}_{10\text{K}}$, which uses a fixed cycle graph of $10{,}000$ vertices in which the path graph representing the input sentence can be embedded.
For RW, we use the path graph of 1024 vertices (the transformer's maximum input length) with a $20$-hop neighborhood. SPD+C uses the same path graph as RW and uses a shortest path distance embedding size of 256 and a degree embedding size of 2 since any node in the path graph can have a maximum degree of only 2.
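The DFT connection is easy to verify numerically. The short check below (an illustration, not part of our training code) confirms that a sinusoid of frequency $k$ lies in the eigenspace of the cycle-graph Laplacian with eigenvalue $2 - 2\cos(2\pi k/n)$:
\begin{verbatim}
import numpy as np

n = 128
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = 2 * np.eye(n) - A                    # Laplacian of the n-cycle
evals, evecs = np.linalg.eigh(L)

t, k = np.arange(n), 3
target = np.cos(2 * np.pi * k * t / n)   # a sinusoidal PE column
lam = 2 - 2 * np.cos(2 * np.pi * k / n)
basis = evecs[:, np.isclose(evals, lam)] # 2-dim eigenspace for freq. k
resid = target - basis @ (basis.T @ target)
print(np.abs(resid).max())               # ~ 0 up to roundoff
\end{verbatim}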
The results are shown in the first column of \cref{table:graph-performance}.
We see that GAPE achieves a higher BLEU score than every other graph PE and very nearly matches the original sinusoidal PEs of \citet{vaswani-attention},
empirically confirming \cref{proposition:sinusoidal-pe} and establishing GAPE as a generalization of sinusoidal PEs. Further, despite previous speculations on the connection between eigenvectors of the cycle graph Laplacian and the sinusoidal encodings, LAPE, LAPE$_{\text{10K}}$, and SA underperform, casting doubt on suggestions that these PEs are generalizations of the sinusoidal encodings from the original Transformer. SPD+C performs surprisingly poorly despite its shortest path distance embedding being closely related to the relative PE by \citet{shaw-etal-2018-self, Huang2019MusicTG}. However, it makes sense that the degree embedding is of little use since there are only two degree types in the path graph: one to denote the beginning and end of the sentence, and another to represent the positions of all other tokens. Under the degree encoding, no two tokens' positions between the start and end tokens are distinguishable.
\subsection{Graph- and node-level tasks}
\label{sec:other_experiments}
\subsubsection{Datasets}
\label{subsection:datasets}
We additionally compare performance of GAPE with the other PEs on 7 undirected graph tasks. Below, we give descriptions of each dataset, their importance, and experimental details.
\textbf{ZINC} \citep{irwin-zinc} is a graph regression dataset with the task of predicting the solubility of various molecules. Given the wide application of GNNs to molecular data, we use ZINC as it is one of the most popular molecular graph datasets in the literature. For consistency with the literature on transformer PEs, we use the 12K subset of the full 500K dataset used by \citet{dwivedi-benchmarking} and \citet{kreuzer-san} and use a batch size of 128.
\textbf{CSL} \citep{murphy-relpool} is a graph classification dataset of circular skip link (CSL) graphs. Each CSL graph is 4-regular with $n$ nodes in a cycle and skip links of length $k$ which define isomorphism classes to distinguish. We use the same dataset as \citet{murphy-relpool} and follow \citet{chen-iso} choosing $n = 41$ and $k = 10$. \citet{murphy-relpool} and \citet{xu2018how} show that the 1-dimensional Weisfeiler-Lehman isomorphism (1-WL) test \citep{weisfeiler-lehman} is unable to distinguish between isomorphism classes of CSL graphs and that message-passing GNNs are at most as powerful as the 1-WL test. So, a PE that succeeds on this dataset can be seen as enhancing the expressive power of its host GNN architecture. There are 150 graphs total, and following \citet{murphy-relpool}, we perform a 5-fold cross validation split with 5 sets of train, validation, and test data proportioned by the ratio 3:1:1 and report the average accuracy of the 5 folds. We use a batch size of 5.
\textbf{CYCLES} \citep{murphy-relpool, loukas-what} is a cycle detection dataset also meant to test the theoretical power of a given GNN. \citet{loukas-what} showed the minimum necessary width and depth of a message-passing GNN to detect 6-cycles, and in this task, we are interested in the additive power of each PE in detecting 6-cycles. We use the same train, test, and validation splits as \citet{loukas-what}, using 200 graphs for training, 1000 graphs for validation and 10000 graphs for test with a batch size of 25.
\textbf{CYCLES-V} is also a cycle detection dataset, except with varying cycle lengths. Each graph is copied from CYCLES, and nodes are added to lengthen cycles and maintain similar graph diameters. Whereas positive samples of CYCLES only contain 6-cycles, positive samples of CYCLES-V contain cycles of length ranging from 6 to 15. We made this adjustment to CYCLES because we wanted to test the generalizability of RW, since the choice of random-walk neighborhood size for RW crucially affects its performance; clearly, RW will be able to detect cycles of length $k$ if its neighborhood size is at least $k$. We use the same train, validation, and test splits and batch size as CYCLES.
\textbf{PATTERN and CLUSTER} \citep{dwivedi-benchmarking, abbe-sbm} are node classification datasets generated using the Stochastic Block Model \citep{abbe-sbm}, which is used to model communities within social networks according to certain hyperparameters such as community membership probability. PATTERN has 2 node classes while CLUSTER has 7. We use the same splits as \citet{dwivedi-benchmarking} with 10,000 graphs for train and 2,000 graphs for validation and test. We use a batch size of 26 and 32 for PATTERN and CLUSTER, respectively.
\textbf{PLANAR} is a new dataset we generate to test the ability of each PE to help the transformer correctly classify whether a given graph has a planar embedding. We introduce this task as another test of the theoretical power of each PE. In the negative case, detecting a non-planar graph can be merely a matter of counting nodes and edges since, by Euler's formula, a graph is not planar if $|E| > 3|V| - 6$. The positive case is more involved, requiring a check for subgraphs homeomorphic to the complete graph $K_5$ or the utility graph $K_{3, 3}$ \citep{Kuratowski1930, wagner}. We use 7000 graphs for training and 1500 for validation and testing. On average, each graph has 33 nodes. We use a batch size of 32.
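As an aside, labels for such a dataset can be generated along exactly these lines; the sketch below is a hypothetical generator using networkx, not necessarily the tooling used to build PLANAR:
\begin{verbatim}
import networkx as nx

def is_planar(G):
    # Negative case by counting (Euler's formula): a simple graph
    # with |E| > 3|V| - 6 cannot be planar.
    if G.number_of_edges() > 3 * G.number_of_nodes() - 6:
        return False
    # Positive case requires an actual planarity test
    # (Kuratowski/Wagner); networkx implements one.
    planar, _ = nx.check_planarity(G)
    return planar
\end{verbatim}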
For GAPE, on MT, we used PE dimension $k = 512$ and a damping factor $\gamma = 1$. On the graph-level tasks, we use $k=32$ and $\gamma = 0.02$. For RW, we choose $k= 20$ on all tasks except for CYCLES-V, where we chose $k=9$ in order to test its generalizability to longer cycle lengths. For LAPE, we use $k=20$ on all tasks, following \citet{dwivedi-gtn}. For SA, we use $k=10$ on all tasks, following \citet{kreuzer-san}.
\subsubsection{Experimental setup}
We use the graph transformer from \citet{dwivedi-gtn} as our backbone transformer, as it is a faithful implementation of the original Transformer model capable of working on graphs. All models for the graph tasks were held to a parameter budget of around 500,000 as strictly as possible, similar to \citet{dwivedi-gtn} and \citet{kreuzer-san}. Since there is little consensus on how to incorporate edges and edge features into a graph transformer, we omit the use of edge features in all of our tests for a fair comparison, using only node features.
Across all tasks, we use the Adam \citep{kingma-adam} optimizer. For nearly all tasks, we use 10 layers in the graph transformer, 80 node feature dimensions, 8 attention heads, learning rate of 0.005, reduce factor of 0.5, and patience of 10. For ZINC, we follow \citet{dwivedi-benchmarking} and use a learning rate of 0.007 and patience of 15. CSL is such a small dataset that we opted to shrink the number of transformer layers down to 6 and maintain a parameter budget of around 300,000 for greater training speed and to avoid overparametrization.
SA uses a transformer encoder to compute its PE. For this transformer, we use 1 attention layer, 4 attention heads, and 8 feature dimensions in order to respect the 500,000 parameter budget. For CSL, we reduced the number of attention heads to 1 to respect the 300,000 parameter budget.
For SPD+C, we vary the size of the node degree and shortest-path embeddings according to each dataset since each dataset contains graphs of varying sizes. The smallest and largest degree embedding sizes used were 64 and 256, respectively. The smallest and largest shortest-path embedding sizes used were 64 and 512, respectively.
\cref{table:graph-performance} shows performance on the graph datasets using neighborhood-level attention. \textbf{Bold} numbers indicate the best score in each column up to statistical significance; in the MT column, the comparison is between the non-sinusoidal encodings only.
For the graph tasks, Baseline refers to the absence of any PE. For MT, Baseline refers to the original sinusoidal encodings by \citet{vaswani-attention}. For each dataset, we take the average of 4 runs each conducted with a different random seed. We report metrics from each dataset's test set based on the highest achieved metric on the corresponding validation set.
\subsubsection{GAPE variants}
We also try several different variations of GAPE, with the following normalization and initial weight vector selection strategies.
\begin{description}
\item[$\text{GAPE}^\ast$] Row-wise softmax on $\mu$ without damping factor $\gamma$.
\item[$\text{GAPE}^{\ast\ast}$] Same as $\text{GAPE}^\ast$ but with a column-wise softmax on $\alpha$.
\item[$\text{GAPE}^*_{20}$, $\text{GAPE}^{**}_{20}$] Like $\text{GAPE}^*$ and $\text{GAPE}^{**}$, respectively, but use $m = 20$ node labels and initialize each initial weight vector $\alpha_{:,\ell}$ to a different random vector for each label $\ell$.
\item[$\text{GAPE}^{**}_{\text{max}}$] Like $\text{GAPE}^{**}$, but use a different node label for each node ($m = n$) and initialize each initial weight vector to a different random vector.
\end{description}
The rationale behind the $\ast$ versions of GAPE is that it is counter-intuitive for the transition weights of an automaton to be negative, so normalizing the rows of $\mu$ gives a more probabilistic interpretation of the WGWA's random walk on the graph. The rationale behind $\ast\ast$ is to normalize the columns of $\alpha$ according to the PPR interpretation of GAPE; teleportation probabilities cannot be negative.
Finally, recall that $\alpha$, the matrix of initial weights from \cref{eq:gape}, assigns different initial weights to each node label. While our base variation of GAPE uses only one node label, we also tried using $m=20$ labels. The hope is that varying the initial weights can help GAPE learn on graphs with high levels of symmetry, such as those from CSL. By assigning each node a different initial weight vector, the automaton should be forced to compute different weights for nodes with similarly structured neighborhoods. We also experiment with giving every node a unique initial weight vector by letting $m=N$, where $N$ is larger than the size $n$ of any graph in the given dataset, and giving every node a different node label.
\begin{table}
\caption{Results on MT and Graph Tasks. MT is measured by BLEU score, ZINC by mean absolute error (MAE), and the rest are measured by accuracy.}
\centering
\label{table:graph-performance}
\centering
\setlength{\tabcolsep}{5pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccc}
\toprule
PE scheme & MT ($\uparrow$) & ZINC ($\downarrow$) & CSL ($\uparrow$) & CYCLES ($\uparrow$) & PATTERN ($\uparrow$) & CLUSTER ($\uparrow$) & PLANAR ($\uparrow$) & CYCLES-V ($\uparrow$)\\
\midrule
Baseline & 32.6 & 0.357 & 0.10 & 50.00 & 83.91 & 71.77 & 50.00 & 73.31 \\
\midrule
LAPE & 17.4 & 0.311 & \textbf{100.00} & 97.08 & 85.05 & 72.14 & 96.41 & 88.53 \\
LAPE$_{10\text{K}}$ & 16.4 & - & - & - & - & - & - & - \\
SA & 16.9 & 0.248 & 87.67 & 89.52 & 83.61 & 72.84 & 97.45 & 81.11 \\
RW & 20.8 & \textbf{0.207} & 93.50 & \textbf{99.95} & \textbf{86.07} & 72.32 & \textbf{98.50} & 90.97 \\
SPD+C & 0.0 & 0.263 & 10.00 & 88.44 & 83.09 & 70.57 & 96.00 & 83.13 \\
\midrule
GAPE & \textbf{32.5} & 0.251 & 10.00 & 79.03 & \textbf{85.99} & 73.40 & 95.97 & 83.86 \\
GAPE$^\ast$ & - & 2.863 & 10.00 & 81.39 & 84.82 & \textbf{74.03} & 64.35 & 90.68\\
GAPE$_{20}^\ast$ & - & 1.339 & 48.67 & 84.66 & 69.95 & 70.57 & 75.12 & 89.01\\
GAPE$^{\ast\ast}$ & - & 2.067 & 10.00 & 83.42 & 83.46 & 72.39 & 63.38 & 91.76\\
GAPE$_{20}^{\ast\ast}$ & - & 0.837 & 58.00 & 85.43 & 81.53 & 72.06 & 70.48 & \textbf{95.12}\\
GAPE$_{\text{max}}^{\ast\ast}$ & - & 0.539 & \textbf{100.00} & 86.01 & 83.53 & 72.40 & 71.87 & 95.05\\
\bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Discussion}
The graph tasks reveal the shortcomings of the vanilla transformer. Without a PE, it is unable to adequately distinguish non-isomorphic graphs in CSL, and it also fails to detect 6-cycles and planar/non-planar graphs. All other PEs improve on the baseline transformer, except on the CSL task, where most PEs do not achieve 100\% accuracy. The fact that SPD+C is unable to learn CSL suggests that it is not strictly more powerful than the 1-WL test.
It is interesting that SA underperforms relative to LAPE on many graph tasks, given that it is positioned as an improvement on LAPE and performs better than LAPE when incorporated in the spectral attention graph transformer devised by \citet{kreuzer-san}. While the encoder used to create the PE may need more parameters, a more likely explanation is that performance suffers from the lack of edge embeddings, which the spectral attention graph transformer assumes but which we omit for a fairer comparison of just the node PEs.
We think it is natural that RW performs nearly perfectly on finding 6-cycles, since a $20$-hop neighborhood is enough to uniquely encode nodes in those 6-cycles; as expected, its performance falters on CYCLES-V, since a 9-hop neighborhood cannot provide a unique encoding to nodes in cycles of length greater than 9.
GAPE's ($\gamma = 0.02$) performance surpasses the other PEs only on PATTERN and CLUSTER, with the other variants showing more competitive performance in other datasets. We give a discussion on the various factors causing these trends below.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{images/csl_1kind.png}\hfill
\includegraphics[width=.5\textwidth]{images/csl_20kinds.png}\hfill
\caption{Comparing GAPE with varying number of node labels $m$. The rows are the PEs. Left is GAPE with $m = 1$, and right is GAPE with $m = 20$.}
\label{fig:csl-kinds}
\end{figure}
\subsubsection{Effect of number of states on performance}
One thought for improving performance of GAPE on the graph tasks is to increase the number of states used in the WGWA on which it is based. However, we observed no positive trend in performance as we increase the number of states. For most tasks, increasing the number of states can even worsen performance. \cref{table:k-study} shows performance on ZINC, CYCLES, CLUSTER, and PATTERN as the number of states $k$ varies using the base GAPE method in \cref{table:graph-performance} with damping factor $\gamma = 0.02$. One thought is that adding states to the automaton adds risk of instability; if the GAPE encoding of a node $u$ is the total weight of all runs from any initial configuration to $(u, r)$ for all possible $r$, then adding states means adding more possible runs which may lead to weights that are too large, depending on the initialization of $\mu$.
\subsubsection{Effect of number of initial weight vectors}
Another thought is to consider how performance varies as the number of node labels $m$ increases. We see in \cref{table:graph-performance} that increasing $m$ from 1 to 20 grants noticeable but not very large performance gains on nearly all datasets except for CSL. We think this is because increasing $m$ forces GAPE to assign different initial weights to runs starting at otherwise seemingly identical nodes. In other words, increasing $m$ is akin to conditioning the automaton's starting weights on distinct node labels on the graph. Such a labeling allows GAPE to distinguish between nodes with the same neighbors and nodes with neighbors with the same features, aiding in classifying highly symmetric graphs like CSL graphs. This explanation applies to the cycle detection datasets as well, as increasing $m$ yields up to a 3.36\% increase in accuracy on CYCLES-V.
We further note that when assigning every node a unique initial weight vector in a consistent fashion, as in GAPE$^{\ast\ast}_\text{max}$, we see further improved performance on many of these highly symmetric graph datasets, most notably achieving 100\% on CSL. However, on none of the GAPE variants with $m > 1$ do we see a performance gain on PATTERN, CLUSTER, or PLANAR; we instead see a significant amount of overfitting. An explanation is that increasing $m$ may encourage the transformer to rely too heavily on each node's initial weight vector to memorize node positions, leading to poor generalizability on graphs that do not have a highly symmetric structure. In other words, increasing $m$ strengthens the transformer's notion of node ``identity,'' which is useful for distinguishing nodes in symmetric substructures like atom rings and cycles but less useful for tasks where nodes are part of a variety of less symmetric substructures.
\cref{fig:csl-kinds} illustrates more clearly the effects of increasing $m$ for CSL graphs. We see that setting $m=1$ results in assigning the same PE to every node in the CSL graph, preventing the transformer from learning the task. In contrast, setting $m=20$ diversifies the PE, allowing the graph transformer to distinguish between identical neighbors and neighbors with identical features.
\begin{table}
\caption{GAPE Performance as $k$ varies}
\label{table:k-study}
\centering
\begin{tabular}{lcccc}
\toprule
$k$ & ZINC $(\downarrow)$ & CYCLES $(\uparrow)$ & CLUSTER $(\uparrow)$ & PATTERN $(\uparrow)$\\
\midrule
8 & 0.281 & 79.70 & 72.41 & 85.76\\
16 & 0.279 & \textbf{86.80} & 72.55 & 78.24 \\
32 & \textbf{0.251} & 79.03 & \textbf{73.40} & \textbf{85.99}\\
64 & 0.278 & 80.36 & 69.90 & 75.52\\
128 & 0.269 & 82.28 & 72.14 & 79.49 \\
\bottomrule
\end{tabular}
\end{table}
\section{Limitations and Conclusion}
While GAPE and its variants are able to compete with other PEs on certain graph- and node-level tasks, one limitation is their struggle to surpass them on datasets like CSL, ZINC, and CYCLES without the need for $m > 1$. Each of these datasets involves highly symmetrical structures like cycles or rings of atoms, which suggests that improvements on GAPE should seek to capitalize on increasing $m$, thereby enriching $\alpha$, in order to distinguish nodes in these structures. Further work also needs to be done on encouraging GAPE to retain performance on other, less symmetrical datasets. As it stands, it seems doubtful that GAPE with $m = 1$ is more powerful than the 1-WL test.
Another limitation of GAPE is the computational difficulty of solving for the PE matrix. Using the ``vec trick'' allows us to learn $\mu$ and $\alpha$, but runs far too slowly and consumes too much memory. We run out of memory when testing PATTERN with a batch size of 26 and $k=32$, and shrinking the batch size to 18 and the number of states to $k=10$ results in a single epoch taking over 1.25 hours, making learning $\mu$ and $\alpha$ impractical. Using the Bartels-Stewart algorithm, on the other hand, is much more efficient, but $\mu$ and $\alpha$ must then be randomly initialized, giving up possible performance gains and stability. We leave these limitations to future work.
To sum up, GAPE provides a more theoretically satisfying framework for understanding graph PEs. It is a generalization of the original sinusoidal encodings as well as personalized PageRank and RW. Further, it is a generalization of LAPE under certain assumptions and is able to simulate LAPE. With this, GAPE can be seen as tying together the spatial and spectral frameworks of graph PEs under the umbrella of graph automata, all the while providing a distributed representation, which many of the tested PEs do not provide.
\section{Method}
\label{sec:method}
The pipeline of our method is shown in~\fig{fig:pipeline}.
Starting from a goal (a shape with some external loads), we first simulate the stress field using the finite element method (\ssect{sec:method-simulation}).
We apply a greedy fiber extraction algorithm by ``walking'' in the stress field, and then downsample the greedy path (\ssect{sec:method-greedy}).
We build and optimize an objective function based on the object's stiffness and regularity conditions of the fiber paths, using the adjoint method to calculate the gradients of the objective (\ssect{sec:method-gradient-optimization}).
These steps can be repeated until a desired number of fiber paths are extracted and optimized.
We then perform a coarse-to-fine optimization by upsampling and optimizing the fiber paths a specified number of times (\ssect{sec:method-coarse-to-fine}).
\subsection{Simulation}
\label{sec:method-simulation}
In this subsection, we describe how we solve the stress field given a shape, some external loads, and a specified fiber layout.
We denote the body as $\Omega$, the stress tensor as $\bm{\sigma}$, the strain tensor as $\bm{\varepsilon}$, the displacement vector as $\mathbf{u}$, and the stiffness tensor as $\texttt{C}$.
The linear elastic model can be written as
\begin{align}
\begin{split}
\min & \int_{\Omega} \frac{1}{2} \bm{\varepsilon} : \texttt{C} : \bm{\varepsilon} \mathrm{d} A \\
\text{s.t.} & -\nabla \cdot \bm{\sigma} = f \\
& \bm{\varepsilon} = \frac{1}{2} \bigl(\nabla \mathbf{u} + (\nabla \mathbf{u})^\intercal\bigr) \\
& \bm{\sigma} = \texttt{C} : \bm{\varepsilon},
\end{split}
\label{eqn:linear_elasticity}
\end{align}
where the colon denotes tensor double contraction.
$f$ is the body force, which we set to $0$.
For certain regions on the boundary of $\Omega$ (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, $\partial \Omega$), the value of $\mathbf{u}$ is given as input (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, Dirichlet boundary condition).
For the remaining regions, we have $\bm{\sigma} \cdot \mathbf{n} = T$ (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, Neumann boundary condition), where $\mathbf{n}$ is the outward unit normal vector, and $T$ is the tractive force which we set to $0$.
The constitutive equations $\bm{\sigma} = \texttt{C} : \bm{\varepsilon}$ can also be written in a matrix product form; under the assumption of in-plane stress, we have
\begin{equation}
\begin{bmatrix}
\sigma_{11} \\
\sigma_{22} \\
\sigma_{12}
\end{bmatrix}
=
\begin{bmatrix}
\frac{E_1}{1 - \nu_{21}\nu_{12}} & \frac{E_1 \nu_{21}}{1 - \nu_{21} \nu_{12}} & 0 \\
\frac{E_2 \nu_{12}}{1 - \nu_{12} \nu_{21}} & \frac{E_2}{1- \nu_{12} \nu_{21}} & 0 \\
0 & 0 & \mu
\end{bmatrix}
\begin{bmatrix}
\varepsilon_{11} \\
\varepsilon_{22} \\
2\varepsilon_{12}
\end{bmatrix},
\end{equation}
where $E_1$ and $E_2$ are Young's moduli, $\nu_{12}$ and $\nu_{21}$ are the Poisson's ratios, and $\mu$ is the shear modulus.
For simplicity, we assume both plastic and fiber are isotropic materials, and they have different Young's moduli $E_{\text{plastic}}$ and $E_{\text{fiber}}$ and identical Poisson's ratio $\nu$.
The next issue is to calculate the Young's modulus field.
Consider a laminate of height $h_{\text{object}}$, with some layers filled with just plastic and others containing both plastic and fiber. We assume that all layers with fiber, adding up to a total height of $h_{\text{fiber}}$, have identical fiber paths, and omit plastic where fiber is present.
The set of fiber paths is denoted as $P$, and every path $p$ in it is a sequence of vertices on the fiber path.
For a point $x \in \Omega$, for the purpose of differentiability, we define its ``soft'' Young's modulus as
\begin{equation}
E(x) \coloneqq E_{\text{plastic}} \cdot \alpha_{\text{plastic}}(x) + E_{\text{fiber}} \cdot \alpha_{\text{fiber}}(x),
\end{equation}
where
\begin{equation}
\alpha_{\text{fiber}}(x) \coloneqq \sum_{p \in P} \exp \left( -\left( \frac{\text{dis}(p, x)}{w_{\text{fiber}} / 2} \right)^2 \right) \cdot h_{\text{fiber}},
\end{equation}
where $w_{\text{fiber}} = 0.9$ mm is the width of the fiber, $\text{dis}(\cdot, \cdot)$ measures the distance between a point and a path, and
\begin{equation}
\alpha_{\text{plastic}}(x) \coloneqq h_{\text{object}} - \min(\alpha_{\text{fiber}}(x), h_{\text{fiber}}).
\end{equation}
We allow fiber paths to overlap in this setting, as even in real prints from the Markforged Mark Two, we do not observe any problems. We then have ${\mu(x) = \frac{E(x)}{2(1 + \nu)}}$. Finally, we solve the PDE in~\eqn{eqn:linear_elasticity} using FEniCS with DOLFIN \citep{logg2010dolfin} by solving its first-order condition. \fig{fig:pipeline} visualizes an example of the calculated stress field, using line integral convolution~\cite{Cabral93}.
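To make the material model concrete, the sketch below evaluates the soft modulus field on a set of query points. The moduli (in MPa) are those calibrated in \sect{sec:modulus}, $h_{\text{fiber}} = 0.5$ mm corresponds to the 4 fiber layers of the 2 mm, 16-layer prints described in \sect{sec:hardware}, and the point-to-path distance is approximated by the distance to path vertices:
\begin{verbatim}
import numpy as np

def soft_modulus(x, paths, E_plastic=400.0, E_fiber=20100.0,
                 w_fiber=0.9, h_fiber=0.5, h_object=2.0):
    # x: (m, 2) query points [mm]; paths: list of (n_i, 2) polylines.
    alpha_fiber = np.zeros(len(x))
    for p in paths:
        # distance to the path, approximated by distance to its
        # vertices (an exact version projects onto each segment)
        d = np.linalg.norm(x[:, None, :] - p[None, :, :], axis=-1)
        alpha_fiber += np.exp(-(d.min(1) / (w_fiber / 2))**2) * h_fiber
    alpha_plastic = h_object - np.minimum(alpha_fiber, h_fiber)
    return E_plastic * alpha_plastic + E_fiber * alpha_fiber
\end{verbatim}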
\subsection{Greedy fiber path extraction}
\label{sec:method-greedy}
In this subsection, we describe how we greedily extract a fiber path from a stress field along the directions of maximum tensile stress or perpendicular to the direction of maximum compressive stress.
With the stress tensor $\bm{\sigma}$ calculated from \ssect{sec:method-simulation}, we first calculate the stress on plastic:
\begin{equation}
\bm{\sigma}_{\text{plastic}} \coloneqq \bm{\sigma} \cdot \frac{E_{\text{plastic}} \cdot \alpha_{\text{plastic}}}{E_{\text{plastic}} \cdot \alpha_{\text{plastic}} + E_{\text{fiber}} \cdot \alpha_{\text{fiber}}}.
\end{equation}
For any point $x \in \Omega$, we can calculate the eigenvalue with the largest absolute value $\lambda(x)$ and its corresponding eigenvector $\mathbf{v}(x)$.
We then build a scalar field with $|\lambda(x)|$ and randomly sample a starting point $x_0$ with the field as sampling weights.
From the starting point, we walk in both directions along $\pm \mathbf{v}(x_0)$ (or perpendicular to $\mathbf{v}(x_0)$ if $\lambda(x_0)$ is negative) at a fixed step size of 0.5 mm.
If we walk outside $\Omega$ or within 1.3 mm of $\partial \Omega$ (a clearance measured from prints generated by Eiger), we retry at most 19 times, each time with a random rotation uniformly sampled between $-\pi / 12$ and $\pi / 12$.
The algorithm stops when a preset length limit is reached, or we cannot walk in both directions even after retries.
We then downsample the extracted fiber path by keeping 1 of every 20 vertices.
We iterate over every subsequence of the downsampled path and select the one that minimizes the objective function we will define in~\sect{sec:method-gradient-optimization}.
We repeat this process 10 times (sampling 10 starting points) and keep the one that minimizes the objective function.
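A sketch of one directional pass of this walk is below; \texttt{sigma\_at} and \texttt{inside} are placeholders for the FEM stress evaluation and the boundary-clearance test, and the full algorithm walks both directions from $x_0$ and concatenates the two halves:
\begin{verbatim}
import numpy as np

def greedy_walk(sigma_at, inside, x0, step=0.5, max_len=60.0,
                retries=19):
    # sigma_at(x): 2x2 plastic stress tensor at x; inside(x): True iff
    # x lies in Omega at least 1.3 mm from the boundary.
    def walk_dir(x):
        w, v = np.linalg.eigh(sigma_at(x))
        i = np.argmax(np.abs(w))
        d = v[:, i]
        if w[i] < 0:                      # compressive: go perpendicular
            d = np.array([-d[1], d[0]])
        return d / np.linalg.norm(d)

    path, heading = [np.asarray(x0, float)], None
    while (len(path) - 1) * step < max_len:
        d = walk_dir(path[-1])
        if heading is not None and d @ heading < 0:
            d = -d                        # keep a consistent orientation
        for attempt in range(retries + 1):
            t = 0.0 if attempt == 0 else \
                np.random.uniform(-np.pi / 12, np.pi / 12)
            c, s = np.cos(t), np.sin(t)
            cand = path[-1] + step * np.array([c*d[0] - s*d[1],
                                               s*d[0] + c*d[1]])
            if inside(cand):
                path.append(cand)
                heading = d
                break
        else:
            break                         # stuck even after retries
    return np.stack(path)
\end{verbatim}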
\subsection{Gradient calculation and optimization}
\label{sec:method-gradient-optimization}
In this subsection, we describe how we design an objective function and optimize it using a gradient-based optimizer.
We denote the optimized strain energy in~\eqn{eqn:linear_elasticity} as~$U$, and the set of fiber paths as~$P$.
The objective $\mathcal{L}(P)$ is defined as
\begin{equation}
\small
-U + \sum_{p \in P} \left( w_{\text{lap}} \cdot \mathcal{L}_{\text{lap}}(p) + w_{\text{min\_l}} \cdot \mathcal{L}_{\text{min\_l}}(p) + w_{\text{bdy}} \cdot \mathcal{L}_{\text{bdy}}(p) \right),
\end{equation}
where $w_{\text{lap}}$, $w_{\text{min\_l}}$, and $w_{\text{bdy}}$ are hyper-parameters.
The Laplacian regularizer $\mathcal{L}_{\text{lap}}$ penalizes non-smooth fiber paths:
\begin{equation}
\mathcal{L}_{\text{lap}}(p) \coloneqq s(P)^3 \cdot \sum_{i=2}^{|p| - 1} \left| \left| p_i - \frac{p_{i - 1} + p_{i + 1}}{2} \right| \right|^2\, ,
\end{equation}
where $s(P)$ is a count of the total number of segments in all fiber paths (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, ${\sum_{p \in P} |p| - |P|}$).
We apply the $s(P)^3$ multiplier because the Laplacian regularizer is sensitive to upsampling, which we discuss in~\ssect{sec:method-coarse-to-fine}; the multiplier keeps our Laplacian regularizer scale-invariant.
The minimum-length regularizer $\mathcal{L}_{\text{min\_l}}$ penalizes infeasibly-short fiber paths:
\begin{equation}
\mathcal{L}_{\text{min\_l}}(p) \coloneqq \max \left( l_\text{min} - \sum_{i=2}^{|p|} \left| \left| p_i - p_{i - 1} \right| \right|, 0 \right)^2,
\end{equation}
where $l_\text{min}$ is the minimum fiber length that can be printed by the 3D printer.
The boundary regularizer $\mathcal{L}_{\text{bdy}}$ penalizes fiber paths outside $\Omega$ or too close to $\partial \Omega$:
\begin{equation}
\mathcal{L}_{\text{bdy}}(p) \coloneqq \sum_i \max(d_\text{min} - \text{dis}(p_i, \Omega), 0)^2,
\end{equation}
where $\text{dis}(p_i, \Omega)$ measures the distance from $p_i$ to $\partial \Omega$ (positive for $p_i \in \Omega$, negative otherwise) and $d_\text{min}$ is the lower limit of the distance from fiber to the boundary.
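The three regularizers translate directly into a few lines of PyTorch; \texttt{signed\_dist} below is a placeholder for a differentiable signed-distance evaluation of $\Omega$:
\begin{verbatim}
import torch

def laplacian_reg(p, s_P):
    # p: (n, 2) vertices of one path; s_P = s(P), total segment count
    mid = (p[:-2] + p[2:]) / 2
    return s_P ** 3 * ((p[1:-1] - mid) ** 2).sum()

def min_length_reg(p, l_min):
    length = (p[1:] - p[:-1]).norm(dim=1).sum()
    return torch.clamp(l_min - length, min=0.0) ** 2

def boundary_reg(p, signed_dist, d_min):
    # signed_dist(p) -> (n,) distances, positive inside Omega
    return (torch.clamp(d_min - signed_dist(p), min=0.0) ** 2).sum()
\end{verbatim}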
The next step is to calculate $\frac{\mathrm{d} \mathcal{L}(P)}{\mathrm{d} P}$.
Here we apply the adjoint method.
Denote the first-order condition of \eqn{eqn:linear_elasticity} as ${F(\mathbf{u}, P) = 0}$.
By the implicit function theorem (under proper regularity conditions), $\mathbf{u}$ can be thought of as a function of $P$, and the derivative $\frac{\mathrm{d} \mathbf{u}}{\mathrm{d} P}$ is well-defined.
Taking the derivative of $F$ with respect to $P$, we have
\begin{equation}
\frac{\mathrm{d} F}{\mathrm{d} P} = \frac{\partial F}{\partial \mathbf{u}} \frac{\mathrm{d} \mathbf{u}}{\mathrm{d} P} + \frac{\partial F}{\partial P} = 0,
\end{equation}
which leads to
\begin{equation}
\frac{\mathrm{d} \mathcal{L}(P)}{\mathrm{d} P} = - \frac{\partial \mathcal{L}(P)}{\partial \mathbf{u}}\left( \frac{\partial F}{\partial \mathbf{u}} \right)^{-1} \frac{\partial F}{\partial P} + \frac{\partial \mathcal{L}(P)}{\partial P}.
\end{equation}
We implement this end-to-end differentiation automatically using dolfin-adjoint~\citep{mitusch2019dolfin} and PyTorch~\citep{NEURIPS2019_9015}.
We use the BFGS implementation in SciPy~\citep{virtanen2020scipy} to minimize $\mathcal{L}(P)$, and again we iterate over every subsequence of the optimized path and select the one that minimizes $\mathcal{L}(P)$. We can repeat the steps in \ssect{sec:method-simulation}, \ssect{sec:method-greedy}, and \ssect{sec:method-gradient-optimization} several times to extract multiple fiber paths.
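dolfin-adjoint automates the adjoint computation; to make the formula above concrete, here is a self-contained toy instance on a linear system $F(\mathbf{u}, p) = K(p)\mathbf{u} - f = 0$ (so that $\partial F / \partial \mathbf{u} = K$), which is an illustration of the technique rather than our production solver:
\begin{verbatim}
import torch

def adjoint_grad(K_of_p, f, loss, p):
    # Shapes: p (m,), f (n,), K_of_p(p) (n, n), loss(u, p) scalar.
    p = p.detach().requires_grad_()
    u = torch.linalg.solve(K_of_p(p).detach(), f)       # forward solve
    u = u.detach().requires_grad_()
    L = loss(u, p)
    dL_du, dL_dp = torch.autograd.grad(L, (u, p), allow_unused=True)
    dL_dp = torch.zeros_like(p) if dL_dp is None else dL_dp
    lam = torch.linalg.solve(K_of_p(p).detach().T, dL_du)  # adjoint solve
    F = K_of_p(p) @ u.detach() - f          # rebuilt with grad w.r.t. p
    dF_dp_lam = torch.autograd.grad(F, p, grad_outputs=lam)[0]
    return dL_dp - dF_dp_lam                # total derivative dL/dp
\end{verbatim}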
\subsection{Coarse-to-fine optimization}
\label{sec:method-coarse-to-fine}
To speed up the optimization, we perform multigrid optimization.
As described in \ssect{sec:method-greedy}, we initially downsample all the fiber paths.
Then, for every fiber path $p$, we insert midpoints between every $p_i$ and $p_{i + 1}$ by B-spline interpolation, using SciPy, and optimize all the fiber paths.
This process can be repeated several times to generate the final fiber paths for 3D printing.
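A sketch of one upsampling step, assuming at least four vertices per path (as required by SciPy's default cubic spline):
\begin{verbatim}
import numpy as np
from scipy import interpolate

def upsample_path(p):
    # p: (n, 2) vertices; returns a (2n - 1, 2) refined path with a
    # B-spline-interpolated midpoint between consecutive vertices.
    tck, u = interpolate.splprep([p[:, 0], p[:, 1]], s=0)
    u_new = np.sort(np.concatenate([u, (u[:-1] + u[1:]) / 2]))
    x, y = interpolate.splev(u_new, tck)
    return np.stack([x, y], axis=1)
\end{verbatim}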
\section{Introduction}
\input{figText/teaser}
Additive manufacturing has revolutionized the ability to fabricate three-dimensional objects of high geometric complexity, with applications in the healthcare, automotive, and aerospace industries~\citep{shahrubudin2019overview}.
However, the increasing flexibility in manufacturing has outstripped our ability to produce designs that optimally take advantage of 3D printers.
This has motivated research on \emph{computational fabrication} pipelines that augment human specification of goals with computational optimization of designs that best realize those goals, for problems ranging from ensuring structural integrity through controlling appearance and fine-tuning the fabrication process~\cite{Attene18}.
In this work, we address the problem of producing structurally-sound parts that are capable of bearing nontrivial load.
We aim to exploit the capabilities of devices such as the Markforged Mark Two~\cite{MarkTwo}, which is based on conventional fused deposition modeling (FDM) using thermoplastic nylon, but augments this with the ability to extrude and deposit continuous fibers.
Options for the latter include carbon fiber, Kevlar, fiberglass, and HSHT (High Strength High Temperature) fiberglass, all of which offer the ability to selectively strengthen printed parts with respect to tensile loads.
In effect, this process creates fiber-reinforced plastic (FRP) composites~\citep{kabir2020critical}, but with the ability to control fiber placement to achieve specific tradeoffs in strength, weight, and cost.
The optimization of fiber layout is similar to problems traditionally considered in computational fabrication, such as topology optimization (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, removing material from certain regions) and spatially-varying assignment of different materials.
Systems for these latter tasks are typically based on Eulerian analysis and optimization, in which a quantity (density, material choice, \emph{etc}\onedot} \def\vs{\emph{vs}\onedot) is determined for each location in space (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, on a voxel grid).
Similarly, almost all existing methods for optimizing carbon fiber composites focus on the spatially-varying fiber direction field, then use variants of greedy extraction or ODE solvers to extract the fiber paths themselves~\citep{wang2021load, schmidt2020structural}.
In contrast, we are inspired by a Lagrangian point of view: we characterize the strength of the part as a function of the fiber path, compute gradients with respect to changes in fiber coordinates, and optimize the fiber path directly using gradient descent.
This strategy is based on the adjoint method~\citep{errico1997adjoint, cao2003adjoint}, commonly used for PDE-constrained optimization, and exploits modern systems for automatic differentiation~\citep{griewank2008evaluating}, which have evolved considerably in recent years to support a range of machine learning and general optimization problems.
Our end-to-end optimization approach has the benefit of focusing directly on the final goal---maximizing stiffness with respect to external loads---rather than on indirect objectives such as minimizing strain throughout the object.
We incorporate our gradient descent-based optimization into a complete system that addresses three key challenges:
(1) solving for the stress field of the object given external loads,
(2) computing an optimization objective and its gradient based on the stress field, and
(3) providing a good initialization of fiber layout for our local optimizer.
To address the first challenge, we model the composite material using the linear elastic model, and approximately solve the PDE using the finite element method.
Without loss of generality and for the sake of simplicity, we model the composite material in two dimensions under the assumption of in-plane stress (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, we only consider laminates).
We also simplify the problem by considering Dirichlet (fixed-displacement) boundary conditions.
To address the second challenge, we design an objective function based on total strain energy given the boundary conditions: under the assumption of linear elasticity, maximizing this energy is equivalent to maximizing the object's stiffness.
We regularize the objective to ensure that the optimized fiber paths are feasible.
Finally, to address the last challenge, we initialize each fiber path by greedily following the directions of maximum tensile stress (or perpendicular to the direction of maximum compressive stress). We further use a multi-resolution approach inspired by multigrid methods, to reduce the running time of the optimization.
We show designs produced by our method on a number of illustrative case studies, demonstrating that our method yields higher stiffness with less fiber as compared to baseline paths produced by the Eiger software by Markforged~\shortcite{Eiger}.
We compare our results to greedy extraction based on either the stress field or optimized fiber direction field, as well as other ablations including omitting regularization or multi-resolution optimization.
We print our designs (see Figure~\ref{fig:teaser}), using the method of~\citet{Sun:2021:ASO} to compute fiber extruder paths that compensate for fiber stiffness.
Finally, we test our printed parts to verify that our method matches the predicted stiffness in the real world (subject to inherent print-to-print variations in material strength).
\section{Ablation studies}
\label{sec:ablation}
Our algorithm without optimization has been studied in~\sect{sec:experiments} as the \textit{greedy} baseline.
In this section, we study the effects of removing two other components of our method: the Laplacian regularizer and the multi-resolution optimization, using the shape \textit{rectangle with two holes} (\fig{fig:rectangle_2_holes_shape}).
\input{figText/ablation_lap}
\subsection{Ablation study of the Laplacian regularizer}
As both the minimum-length regularizer and the boundary regularizer are intuitively necessary for fiber paths to be long enough for printing purposes and within the object boundary, we study the effect of removing the Laplacian regularizer from the optimization.
We run our algorithm with the same hyper-parameter settings except for $w_\text{lap}=0$. We extract one fiber path and upsample it once.
As shown in~\fig{fig:ablation_lap}, the optimizer successfully optimizes the low-resolution path as the number of points is still small (\fig{fig:ablation_lap_1}), but introduces jagged results with more degrees of freedom (\fig{fig:ablation_lap_2}), demonstrating the need for some form of regularization.
\input{figText/ablation_res}
\subsection{Ablation study of multi-resolution optimization}
In this subsection, we study how the multi-resolution approach speeds up the optimization process.
For the multi-resolution case, we extract one fiber path, downsample its resolution by a factor of 20, optimize it, and upsample and optimize it three times, with every optimization limited to 100 iterations.
For the single-resolution case, we also extract one fiber path, downsample its resolution by a factor of 2, optimize it and limit the maximum number of optimization iterations to 400.
For a fair comparison, we use the same random seed for both cases when sampling starting points of the greedy path extraction algorithm.
As shown in~\fig{fig:ablation_res}, both cases get similar fiber paths with similar strain energy, but multi-resolution optimization reduces the running time by approximately 40\%.
\section{Discussion}
In this work, we studied the task of fiber path planning in 3D printing for given external loads, aiming at maximizing the stiffness.
We proposed an end-to-end optimization approach that optimizes regularized object stiffness directly with respect to the fiber layout, rather than an intermediate fiber orientation field, with the help of the adjoint method and automatic differentiation.
We perform planning by extracting fiber paths using a greedy algorithm that lays fiber paths along stress directions, followed by coarse-to-fine optimization.
To apply our method, we first measure the effective moduli of plastic and fiber by manufacturing and testing real 3D prints.
We then study our method with three baselines on four case studies and several additional shapes.
The first baseline is concentric fiber rings from Eiger, a leading digital manufacturing software package developed by Markforged.
The second baseline is our method with the optimization part removed, producing fiber paths from the greedy path extraction algorithm.
The third baseline includes a fiber field optimization part which smooths the stress field before using it in the greedy algorithm.
We demonstrated that, both in simulation and real experiments, our method could generate shorter fiber paths while achieving greater stiffness (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, we improved the Pareto front).
We also studied the effects of removing the Laplacian regularizer and the multi-resolution optimization, showing the Laplacian regularizer is necessary for the optimization to be stable and multi-resolution optimization helps reduce the running time.
We would also like to mention some limitations of our method.
First, our simulation simplifies the task by assuming linear elasticity, restricting to in-plane stress, and treating both plastic and fiber as isotropic materials with different Young's moduli and identical Poisson's ratio.
Lifting these assumptions would introduce greater mathematical complexity, but would require no conceptual changes to our approach.
Additionally, the planning is not performed in real time. For example, to plan fiber paths for the shape \textit{rectangle with two holes}, our method uses 10 minutes and 18 minutes to generate the two studied solutions, respectively.
Relative to the time required to design and print a part, this represents only a small increase.
In addition, the hyper-parameters may have to be tuned when the task changes. For example, if we switch to a much larger shape, the scale of strain energy and the lengths of fiber paths will change. We may have to adjust the weight of the Laplacian regularizer, balancing the optimization stability and the variety of fiber paths, though this is usually easy to tune in a few tries.
Lastly, as our optimizer is gradient-based, the optimization may be trapped in a local minimum. Thus a good initialization is important for our method, and we may have to sample greedy paths several times to obtain a good one.
\section{Modulus calculation}
\label{sec:modulus}
In this section, we describe how we determine the effective Young's moduli of nylon and carbon fiber. We print composites with different (baseline) fiber layouts, measure their stiffness, then optimize for moduli such that their stiffness in simulation best matches the real-world measurements.
\input{figText/modulus_prints}
\subsection{3D prints for testing}
We use Eiger to generate nine different layouts of carbon fiber paths: no fiber path, 1 to 3 inner rings, 1 to 3 outer rings, 1 to 2 rings for all walls.
To reduce the bias introduced by the non-uniformity of the material, we print all of them in one batch, as shown in~\fig{fig:modulus_prints}.
Due to the variability of the printing process, we print three batches of these nine prints and pick the batch with the best printing quality.
\input{figText/modulus_pl_curve}
\subsection{Stiffness measurement}
As described in~\sect{sec:hardware}, we test the prints and record their position-load curves (\fig{fig:modulus_pl_curve}).
Note that the beginning of every curve can be noisy as the part is not perfectly vertical, \emph{etc}\onedot} \def\vs{\emph{vs}\onedot.
Additionally, a large load can cause the part to buckle out of the 2D plane, which violates our in-plane stress assumption.
We therefore measure the position change between a load of 150 N and a load of 300 N for every print, and calculate the stiffness by dividing load change (150~N) by position change (in mm).
The results are shown in~\fig{fig:modulus_results}, marked as ``X''.
Note that there is a factor of 0.5 when converting stiffness in N/mm to energy in N$\cdot$mm at 1 mm displacement (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, a stiffness of 500 N/mm corresponds to having strain energy of 250 N$\cdot$mm at 1~mm displacement).
\input{figText/modulus_results}
\subsection{Simulation and modulus calculation}
For each measured data point, we apply Dirichlet boundary conditions corresponding to 1~mm displacement on the two shorter sides of the two holes on the rectangle. We calculate the strain energy of the object, then do a grid search for the values of the moduli of nylon and carbon that minimize the sum of squared distances between measured and simulated stiffness.
The search yields moduli of 0.40 GPa for nylon and 20.1 GPa for carbon, with results shown in~\fig{fig:modulus_results}.
As we can see, the simulation results mostly match the real results, with small residuals relative to the energy.
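The grid search itself is straightforward; in the sketch below, \texttt{simulate} stands in for the FEM pipeline of \sect{sec:method}, and the grid bounds and resolutions are illustrative:
\begin{verbatim}
import numpy as np

def fit_moduli(layouts, measured, simulate,
               nylon_grid=np.linspace(0.1, 1.0, 19),
               carbon_grid=np.linspace(5.0, 40.0, 36)):
    # simulate(layout, E_nylon, E_carbon) -> stiffness in N/mm;
    # `measured` holds the Instron measurements for the nine prints.
    best, best_err = None, np.inf
    for E_n in nylon_grid:
        for E_c in carbon_grid:
            sim = np.array([simulate(l, E_n, E_c) for l in layouts])
            err = ((sim - measured) ** 2).sum()
            if err < best_err:
                best, best_err = (E_n, E_c), err
    return best   # (0.40, 20.1) GPa in our calibration
\end{verbatim}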
\section{Related work}
\subsection{Fiber orientation optimization in 3D printing}
\label{sec:prev_orientation}
A task that is similar to fiber path planning is fiber orientation optimization, where researchers discretize space into elements and optimize fiber orientations in them.
Additional steps, such as greedy extraction, ODE solvers, or geometric methods, must be performed to produce fiber paths from the orientation field.
Thus fiber orientation optimization can serve as the first step of fiber path planning, which we will discuss in~\ssect{sec:rlt-path-planning}.
See \citet{hu2021review} for a survey (called ``free material optimization'').
The most common approach for orientation optimization is to set density and orientation as design variables and optimize an objective such as compliance~\citep{chu2021robust, da2020topology, jung2022inverse} or the Tsai–Wu failure criterion~\citep{ma2022strength} with a gradient-based optimizer.
To address the checkerboard pattern issue (periodicity of the orientation variables), researchers usually use filtering~\citep{andreassen2011efficient} to smooth the design variable field (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, through a weighted average of neighboring elements).
Another choice of design variable is the lamination parameters: \citet{shafighfard2019design} and \citet{demir2019design} proposed to first optimize lamination parameters, search for the best fitting fiber orientations from the optimized lamination parameters, and then perform an optimization on the orientation field while considering manufacturing constraints (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, curvature).
There are also iterative variants.
For example, \citet{caivano2020topology} proposed iterating between calculating the principal stress direction and updating the material distribution until convergence.
While mainly concentrating on orientation optimization, some approaches do ultimately generate fiber paths.
For example, \citet{fedulov2021optimization} first optimized density and orientation and then used third-party software for printing trajectory generation;
\citet{schmidt2020structural} performed density and orientation optimization and generated streamlines using the 4th-order Runge-Kutta integrator for visualization.
\input{figText/pipeline}
\subsection{Fiber path planning in 3D printing}
\label{sec:rlt-path-planning}
A variety of path planning algorithms have been proposed for continuous fiber-reinforced plastics---see \citet{zhang2020review} for a survey.
The most common approach is to first perform an optimization (topology, orientation, \emph{etc}\onedot} \def\vs{\emph{vs}\onedot), and then extract fiber paths from the result.
As discussed, orientation optimization is one choice of the optimization (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, use density and orientation as the design variables), but there are different methods for path extraction.
\citet{wang2021load} proposed to ``walk'' in the field along with the stress direction and consider the angle turned in every move to produce smoothed paths.
\citet{papapetrou2020stiffness} described three methods for path extraction: the offset method and the EQS (Equally-Space) method use the geometry of the optimized layout, and the streamline method fits the orientation field with streamlines.
\citet{safonov20193d} proposed to alternate between topology optimization and fiber orientation updates using an evolutionary heuristic method.
There are also more potential choices for the design variable.
For example, one choice is to only optimize the density.
\citet{li2021full} performed topology optimization of material density (without orientation) using regularizers that force the fiber material to form lines.
However, they did not extract fiber paths explicitly at the end, so it is unclear whether the fibers are directly printable.
\citet{li2020path} and \citet{chen2021topological} proposed to lay fibers along with the load transmission trajectories.
\citet{almeida2019cross} proposed to perform the SIMP (Solid Isotropic Material with Penalization) method first, designed the fiber pattern manually, and then used a genetic algorithm to determine the number of fiber rings/paths that would minimize compliance (defined as mass divided by stiffness).
\citet{sugiyama20203d} proposed to calculate the stress field and update fiber paths so that they follow the direction of maximum principal stress, repeating this process until convergence.
Apart from these two-stage approaches, there are also end-to-end approaches based on genetic algorithms.
For example, \citet{yamanaka2016fiber} modeled fiber paths as streamlines and optimized them directly using a genetic algorithm.
In summary, most existing works perform fiber planning in two stages (topology/orientation optimization followed by path extraction).
In contrast, our method performs an end-to-end optimization of the fiber layout, maximizing the regularized object stiffness via a gradient-based optimizer.
\subsection{PDE-constrained optimization}
Also related to the problem of optimizing geometry to maximize stiffness is the area of PDE-constrained optimization, in which an optimization problem is subjected to physical constraints expressed via partial differential equations (PDEs)~\citep{de2015numerical}.
There are two common types of algorithms to solve PDE-constrained optimization problems: \textit{all-at-once} and \textit{black-box}~\citep{herzog2010algorithms}.
\textit{All-at-once} treats both the design variable and the state variable as independent variables, so the method may not satisfy the constraints before the optimization finishes.
A common \textit{all-at-once} algorithm is SQP (sequential quadratic programming)~\citep{boggs1995sequential}.
A disadvantage of the \textit{all-at-once} approach is the dimension of the state variable can be very large, which makes the optimization costly.
\textit{Black-box} solves the problem in reduced form, by treating the design variable as the only independent variable, so that a gradient-based optimizer can be applied (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, gradient descent, Newton's method).
We formalize the fiber path planning task as a PDE-constrained optimization problem and use the \textit{black-box} approach, specifically the adjoint method, to solve it.
\section{Fabrication and Experimental setup}
\label{sec:hardware}
\input{figText/instron}
In this section, we describe how we manufacture real 3D prints and measure their position-load curves.
We use a Markforged Mark Two printer with nylon as the plastic material and carbon fiber as the reinforcing fiber material.
We print laminates with a height of 2 mm and 16 layers, of which the 4th, 7th, 10th, and 13th layers are fiber layers.
All layers without fiber and regions in fiber layers without fiber are filled with nylon (solid fill).
For the 2D shape, we use a 46 mm $\times$ 30 mm rectangle with two rounded isosceles trapezoid holes, the same shape as shown in~\fig{fig:pipeline}.
The isosceles trapezoids have parallel sides of 11 mm and 14 mm and a height of 11 mm, with every corner smoothed by an arc with a radius of 1 mm.
We will reuse this shape in~\ssect{sec:modulus},~\ssect{ssec:case-3}, and~\ssect{sec:ablation}.
To measure the position-load curve of a print, we insert two square nuts into both its holes and apply tension to them using a universal testing machine (Instron 600DX), as shown in~\fig{fig:instron}.
The machine is programmed to move at a speed of 20 mm/min until the object breaks or by a manual stop when we believe enough data is collected.
A position-load curve is recorded for every print.
\section{Experiments}
\label{sec:experiments}
\input{figText/shapes}
In this section, we present detailed evaluations of the performance of our method in both simulation and real experiments on four case studies (\ssect{ssec:case-1}--\ssect{ssec:case-4}), then show several additional results in~\ssect{ssec:addition-shapes}.
We start with two simple shapes---a rectangle and a ``plus'' shape---then move to more complex shapes: rectangles with two and four holes (\fig{fig:shapes}).
The first baseline we use is \textit{concentric} fiber rings from Eiger, which have three different types: \textit{inner}, \textit{outer}, and \textit{all walls}.
For the next two case studies, to better illustrate the effectiveness of our algorithm on complex shapes and loads, we include two additional baselines: (1) \textit{greedy}, simplifying our algorithm by removing all the optimization components and directly generating results using the greedy algorithm; (2) \textit{field-opt-greedy}, similar to \textit{greedy} but with an additional step of field optimization before running the greedy algorithm.
The latter baseline, intended to represent the approach of previous work on fiber orientation optimization (see \ssect{sec:prev_orientation}), optimizes a vector field that aligns to the stress direction, with a smoothing regularizer.
Additional details about the field optimization can be found in the appendix.
We refer to the results from our method as \textit{optimized}.
For all the experiments (unless otherwise specified), we use the BFGS optimizer with a maximum of 500 iterations and a gradient tolerance of $3 \times 10^{-9}$.
The objective function is set with $w_{\text{lap}} = 1 \times 10^{-8}$, $w_{\text{min\_l}} = 1$, and $w_{\text{bdy}} = 1$.
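For concreteness, such a setup might be expressed with a SciPy-style call as sketched below; the objective terms are simple stand-ins for the strain energy and regularizers, not our implementation:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

w_lap, w_min_l, w_bdy = 1e-8, 1.0, 1.0   # weights quoted above

def objective(x):
    pts = x.reshape(-1, 2)               # fiber control points
    # stand-in for the (negative) strain energy term
    energy_term = np.sum((pts[:, 0] - 1.0) ** 2)
    # Laplacian smoothness of the polyline
    lap = np.sum((pts[2:] - 2 * pts[1:-1] + pts[:-2]) ** 2)
    # penalize segments shorter than a minimum length (0.1 is illustrative)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    min_l = np.sum(np.maximum(0.0, 0.1 - seg) ** 2)
    bdy = 0.0                            # boundary term omitted in this sketch
    return energy_term + w_lap * lap + w_min_l * min_l + w_bdy * bdy

x0 = np.linspace(0.0, 1.0, 40)           # 20 points, flattened
res = minimize(objective, x0, method="BFGS",
               options={"maxiter": 500, "gtol": 3e-9})
\end{verbatim}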
\input{figText/rectangle_steps}
\input{figText/rectangle_fiber_paths}
\subsection{Case 1: rectangle}
\label{ssec:case-1}
In this case study, we show how our algorithm works step by step on a rectangle (45 mm $\times$ 30 mm), with tension applied to its two shorter sides (\fig{fig:rectangle_shape}).
As we can see in the first row in~\fig{fig:rectangle_steps}, we first greedily extract a fiber path and optimize it.
When we add and optimize a second fiber path, the two paths separate and curve.
In the second row, we extract a third greedy path and optimize the three paths together.
Finally, we double the number of points on each fiber path and optimize the three paths together.
The last step does not help much since the task is relatively simple.
As shown in~\fig{fig:rectangle_fiber_paths}, for a fixed displacement of 1 mm, our algorithm uses less fiber while achieving higher energy in simulation, compared to 1 \textit{outer} concentric fiber ring, which lays fiber in vertical directions that are much less useful than fibers in horizontal directions.
\input{figText/plus_fiber_paths}
\input{figText/plus_plot}
\subsection{Case 2: ``plus'' shape}
As shown in~\fig{fig:plus_shape}, we use a ``plus'' shape whose edges are all of length 15 mm, and we apply tension to two sides of the shape.
We compare fiber paths of \textit{outer} and \textit{optimized} in~\fig{fig:plus_fiber_paths}, with three solutions from each strategy.
As we can see, \textit{outer} lays fibers in regions of low relevance to the loads applied, in contrast to \textit{optimized} which prioritizes regions of high relevance to the loads.
We also observe that the optimization process automatically distributes fiber paths uniformly as we extract more fiber paths.
Based on our simulation, for a fixed displacement at 1 mm, the fiber paths of \textit{optimized} improve upon the Pareto front of \textit{outer}, as shown in~\fig{fig:plus_plot}.
\input{figText/rectangle_2_holes_fiber_paths}
\subsection{Case 3: rectangle with two holes}
\label{ssec:case-3}
As shown in~\fig{fig:rectangle_2_holes_shape}, we also tested a rectangle (46 mm $\times$ 30 mm) with two rounded isosceles trapezoid holes, with external forces applied to the two sides of the holes.
\paragraph{Planned fiber paths and simulation results}
The fiber paths generated from all methods are shown in~\fig{fig:rectangle_2_holes_fiber_paths}.
We set the maximum greedy fiber path length so that fiber lengths of \textit{greedy}, \textit{field-opt-greedy}, and \textit{optimized} are comparable.
As we can see, the concentric baselines use only geometric information; both \textit{greedy} and \textit{field-opt-greedy} generate similar fiber paths along stress directions, but paths from \textit{field-opt-greedy} are smoother; \textit{optimized} wraps fiber paths tightly around the holes while staying aligned with the stress direction, yielding larger strain energy when using a similar amount of fiber.
\input{figText/rectangle_2_holes_plot_concentric}
\input{figText/rectangle_2_holes_plot_greedy}
\paragraph{Real experiment results}
To evaluate the quality of fiber paths, we perform real-world experiments by applying tension to 3D prints on a universal testing system (600DX from Instron).
Due to the limited space on the printer bed, two sets of comparisons are performed separately: (1) \textit{inner}, \textit{outer}, and \textit{all walls} \textit{vs.} \textit{optimized}; (2) \textit{greedy} and \textit{field-opt-greedy} \textit{vs.} \textit{optimized}.
We thus printed eight batches, four for each set of comparisons.
Again, as in~\sect{sec:hardware}, we measure the stiffness of a print by calculating the slope of its position-load curve, picking two points that have loads of 150 N and 300 N.
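Concretely, the slope can be computed from a recorded curve as in the following sketch (function and variable names are ours, for illustration):
\begin{verbatim}
import numpy as np

def stiffness(position_mm, load_N, lo=150.0, hi=300.0):
    """Slope (N/mm) of the position-load curve between the samples
    closest to the 150 N and 300 N load levels."""
    i = int(np.argmin(np.abs(load_N - lo)))
    j = int(np.argmin(np.abs(load_N - hi)))
    return (load_N[j] - load_N[i]) / (position_mm[j] - position_mm[i])
\end{verbatim}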
The results of \textit{inner}, \textit{outer}, and \textit{all walls} \textit{vs.} \textit{optimized} are shown in~\fig{fig:rectangle_2_holes_plot_concentric}.
As we can see, our algorithm consistently provides significantly higher stiffness than the concentric baselines when using a similar or lower amount of fiber.
Note that the fiber lengths may have slight discrepancies between simulation and real-world experiments since they are from different path generation algorithms (one from our re-implementation of Eiger, another from Eiger directly).
The results of \textit{greedy} and \textit{field-opt-greedy} \textit{vs.} \textit{optimized} are shown in~\tbl{tbl:rectangle_2_holes_plot_greedy}. Again, our algorithm consistently improves the stiffness over the two baselines while using a similar or lower amount of fiber.
\input{figText/rectangle_4_holes_fiber_paths}
\input{figText/rectangle_4_holes_plot}
\input{figText/additional_shapes}
\subsection{Case 4: rectangle with four holes}
\label{ssec:case-4}
As shown in~\fig{fig:rectangle_4_holes_shape}, we also tested a rectangle (84 mm $\times$ 28 mm) with four rounded isosceles trapezoid holes.
We design the shape to be multi-functional---if we label the holes from 1 to 4 from left to right, we assume the user uniformly chooses one of the four settings: 1) hole 1 and hole 3; 2) hole 1 and hole 4; 3) hole 2 and hole 3; 4) hole 2 and hole 4.
To support this multi-functional shape, we simulate all four cases and calculate the average strain energy.
The fiber paths from all the methods are shown in~\fig{fig:rectangle_4_holes_fiber_paths}.
Again, both \textit{greedy} and \textit{field-opt-greedy} produce fibers along stress directions with fiber paths from \textit{field-opt-greedy} being slightly smoother.
\textit{Optimized} lays the first fiber over all holes and lays the second fiber around the middle two holes, due to the multi-functional nature of the shape.
The energy-fiber usage plot is shown in~\fig{fig:rectangle_4_holes_plot}, where \textit{optimized} improves upon the Pareto front of every baseline.
\subsection{Results on additional shapes}
\label{ssec:addition-shapes}
In this subsection, we provide results from our method and baselines on several additional shapes.
The shape designs are inspired by sketches from SketchGraphs~\citep{seff2020sketchgraphs}, a large-scale dataset of sketches of real-world CAD models, as well as shapes from existing works~\citep{shafighfard2019design, ma2022strength}.
We use a Laplacian regularizer weight $w_{\text{lap}}=5\times10^{-7}$, and the results are shown in~\fig{fig:additional_shapes}, with every dotted line a Dirichlet boundary condition.
For the first shape, all methods use a similar amount of fiber but \textit{optimized} achieves much higher energy than others.
For the second shape, \textit{optimized} uses a similar amount of fiber as \textit{concentric}, less fiber than \textit{greedy} and \textit{field-opt-greedy} but achieves higher energy.
For the third shape, \textit{optimized} achieves comparable energy as \textit{concentric} but saves approximately 70\% of fiber. Compared to \textit{greedy} and \textit{field-opt-greedy}, \textit{optimized} achieves much higher energy while using slightly more fiber.
For the fourth and fifth shapes, \textit{optimized} uses less fiber or comparable fiber as other baselines while achieving significantly higher energy.
\section{Appendix: field optimization}
Given a stress field $\bm{\sigma}$, we would like to find a fiber field $\bm{v}: \Omega \to \mathbb{R}^2$ such that (1) its direction is aligned with $\bm{\sigma}$; (2) it is smooth. We solve for $\bm{v}$ by minimizing an objective function that reflects both properties:
\begin{equation}
\mathcal{L}(\bm{v}; \bm{\sigma}) \coloneqq \alpha_{\texttt{stress}} \cdot \mathcal{L}_{\texttt{stress}}(\hat{\bm{v}}; \bm{\sigma}) + \alpha_{\texttt{smooth}} \cdot \mathcal{L}_{\texttt{smooth}}(\hat{\bm{v}}),
\end{equation}
where $\alpha_{\texttt{stress}}$ and $\alpha_{\texttt{smooth}}$ are hyper-parameters, and
\begin{equation}
\hat{\bm{v}}(x, y) \coloneqq \bm{v}(x, y) / ||\bm{v}(x, y)||
\end{equation}
is the normalized $\bm{v}$, as the objective function should be invariant regardless of the length of $\bm{v}(x, y)$. Note that the objective function should also be invariant if we randomly flip some $\bm{v}(x, y)$'s, which needs some special handling, as we will discuss below.
\paragraph{Consistent with $\bm{\sigma}$.}
For a specific point $(x, y) \in \Omega$, we calculate the tension in the stress field $\bm{\sigma}$ along $\hat{\bm{v}}(x, y)$, which is $\hat{\bm{v}}(x, y)^\intercal \bm{\sigma}(x, y) \hat{\bm{v}}(x, y)$. We then integrate it over $\Omega$ and get
\begin{equation}
\mathcal{L}_{\texttt{stress}}(\hat{\bm{v}}; \bm{\sigma}) \coloneqq -\iint_\Omega \hat{\bm{v}}(x, y)^\intercal \bm{\sigma}(x, y) \hat{\bm{v}}(x, y) \mathrm{d} x \mathrm{d} y,
\end{equation}
where the negative sign indicates we would like to maximize the tension along the field direction.
\paragraph{Smoothness.}
We penalize the squared Frobenius norm of the gradient of $\hat{\bm{v}}$:
\begin{equation}
\mathcal{L}_{\texttt{smooth}}(\hat{\bm{v}}) \coloneqq \iint_\Omega || \nabla \hat{\bm{v}}(x, y) ||_F^2 \mathrm{d} x \mathrm{d} y.
\end{equation}
Note that the penalty should be invariant to flips of $\hat{\bm{v}}(x, y)$'s, so we handle this invariance when calculating the finite difference:
\begin{align}
\tiny
\begin{split}
|| \nabla \hat{\bm{v}}(x, y) ||_F^2 \coloneqq & \min \left( \left| \left| \frac{\hat{\bm{v}}(x + h, y) - \hat{\bm{v}}(x, y)}{h} \right| \right|^2, \left| \left| \frac{\hat{\bm{v}}(x + h, y) + \hat{\bm{v}}(x, y)}{h} \right| \right|^2 \right) \\
& + \min \left( \left| \left| \frac{\hat{\bm{v}}(x, y + h) - \hat{\bm{v}}(x, y)}{h} \right| \right|^2, \left| \left| \frac{\hat{\bm{v}}(x, y + h) + \hat{\bm{v}}(x, y)}{h} \right| \right|^2 \right),
\end{split}
\end{align}
where $h$ is the step size.
In the experiments, we set $\alpha_{\texttt{stress}}$ to $1$ and $\alpha_{\texttt{smooth}}$ to $0.02$. We use the BFGS optimizer with a gradient tolerance of $1 \times 10^{-6}$ and set the maximum number of iterations to $100$.
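For reference, a minimal sketch of the flip-invariant smoothness term on a regular grid, assuming the field is sampled as an $H\times W\times 2$ array (names are illustrative):
\begin{verbatim}
import numpy as np

def flip_invariant_smoothness(v, h):
    """Approximates the integral of ||grad v_hat||_F^2 for a grid field
    v of shape (H, W, 2), treating v and -v as the same direction."""
    vhat = v / np.linalg.norm(v, axis=-1, keepdims=True)

    def term(a, b):
        # pick the smaller of ||a - b||^2 and ||a + b||^2 per grid cell
        return np.minimum(np.sum((a - b) ** 2, axis=-1),
                          np.sum((a + b) ** 2, axis=-1))

    sx = term(vhat[1:, :], vhat[:-1, :])   # difference along x
    sy = term(vhat[:, 1:], vhat[:, :-1])   # difference along y
    # each difference carries a 1/h^2 that cancels against the h^2
    # area element of the integral, so the terms are summed directly
    return sx.sum() + sy.sum()
\end{verbatim}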
Recent advancements in micro- and nanofluidics have motivated significant interest in electrokinetic phenomena, both from the theoretical and applied points of view \cite{Ramos, Morgan}. These phenomena occur in systems exhibiting spatial separation of charges that results from dissociation of polar chemical groups at the solid-fluid electrolyte interface and subsequent formation of an equilibrium electric double layer. Application of the electric field causes electrokinetic flows with a characteristic velocity proportional to the applied field. Besides this classical effect, there is a broad class of phenomena in which separation of charges is caused by the electric field itself \cite{Bazant}-\nocite{gamayunov1986pair,dukhin1986pair,murtsovkin1996nonlinear}\cite{squires2004induced}. Since the induced charge is proportional to the applied field, the resulting flow velocities grow with the square of the field, $v\sim E^2$ \cite{Bazant}-\nocite{gamayunov1986pair,dukhin1986pair,murtsovkin1996nonlinear}\cite{squires2004induced}. These phenomena, collectively called {\it Induced-Charge Electrokinetics} (ICEK) \cite{Bazant}, are most often considered for ideally polarizable (conducting) solid particles. The ICEK velocity scale is $v^{metal}\sim\varepsilon_0\varepsilon_{medium}E^2a/\eta$, where $a$ is the radius of the colloid while $\varepsilon_0\varepsilon_{medium}$ and $\eta$ are the dielectric permittivity and viscosity of the electrolyte, respectively \cite{Bazant,murtsovkin1996nonlinear,squires2004induced}. If the particle is a solid dielectric with permittivity $\varepsilon_0\varepsilon_p$, the ICEK flows are still present but with a much reduced velocity, $v^{diel}\sim\varepsilon_0\varepsilon_pE^2\lambda_D/\eta$, where $\lambda_D$ is the Debye screening length \cite{murtsovkin1996nonlinear,squires2004induced}. In aqueous electrolytes $\lambda_D$ is typically much smaller than $a$ (tens of nanometers vs micrometers).
Another mechanism to achieve charge separation---even without a solid component---is to use an anisotropic fluid, such as a nematic liquid crystal, as an electrolyte \cite{Hernandez_1, Hernandez_2, Lavrentovich_Nature, Lazo_1, Lazo_2, Sasaki}. The anisotropy of the medium in the presence of spatial gradients of the orientational order makes it possible to move charged ions to different locations. The subsequent motion of the fluid induced by the electric field gives rise to nonlinear effects \cite{Lavrentovich_Nature}, called the Liquid-Crystal-Enabled electrokinetics (LCEK) \cite{Hernandez_1, Hernandez_2, Lavrentovich_Nature, Lazo_1, Lazo_2, Peng_pattern,we_pattern}. Both the experiments and theoretical considerations demonstrate that the LCEK flow velocities are proportional to the square of the electric field \cite{Hernandez_1, Hernandez_2, Lavrentovich_Nature, Lazo_1, Lazo_2, Peng_pattern,we_pattern}. Because the flow direction is independent of the field polarity, LCEK transport can be driven by an alternating current, a feature desired in technological applications. Note that the term ``electrokinetics''---as applied to a small solid particle located in a fluid electrolyte---embraces two complementary phenomena. The first is {\it electrophoresis}, i.e., the motion of the particle with respect to the fluid under the action of a uniform electric field. The second is {\it electro-osmosis}, the motion of a fluid electrolyte with respect to the particle that is immobilized (for example, glued to the substrate), also under the action of the externally applied uniform electric field. This classification can be applied to both the ICEK and LCEK. In particular, \cite{Lavrentovich_Nature,Lazo_1} describe electrophoresis of solid colloidal particles freely suspended in a nematic electrolyte, while \cite{Lazo_2} deals with a flow of the nematic electrolyte around solid particles that are glued to a substrate; in both cases, the characteristic velocities grow as $\sim E^2$. The first effect is called the Liquid-Crystal-Enabled Electrophoresis (LCEP) \cite{Lazo_1}, while the second is known as the Liquid-Crystal-Enabled Electro-osmosis (LCEO) \cite{Lazo_2}. In our work, we deal with the LCEO effect, considering an immobilized spherical particle in the nematic electrolyte.
In this paper we derive a mathematical model for electro-osmotic flows in nematic liquid crystals, where the nematic component is described by the second-rank tensor order parameter, known as the $\mathsf{Q}$-tensor. The model generalizes our previous work that extended Ericksen-Leslie formalism \cite{Leslie_1, Leslie_2, Walkington} to nematic electrolytes, where we established a system of governing equations from the local form of balance of linear and angular momentum within the framework of the director-based theory. An alternative derivation can be found in \cite{we_pattern}, where we arrived at the same system of equations in a more formal, but also more efficient manner following a variational formulation of nematodynamics, as proposed in \cite{Sonnet_tensor, Sonnet_dissipative}. Because the director models have a limited applicability \cite{KleLa,BaZa} in that they cannot model nematic biaxiality and topological defects---other than vortices---here we use the strategy in \cite{Sonnet_tensor, Sonnet_dissipative,we_pattern} to arrive at the appropriate $\mathsf{Q}$-tensor-based theory.
As an illustrative example, we consider a stationary, relatively small (sub-micrometer) colloidal sphere that sets a perpendicular surface anchoring of the preferred orientation of the nematic. The director field around the particle is either of the quadrupolar type with an equatorial disclination loop \cite{kuksenok1996director} or of dipolar symmetry, with a hyperbolic hedgehog point defect (strictly speaking, a small disclination ring; see, e.g., \cite{PhysRevLett.116.147801} and references therein) residing on one side of the sphere \cite{poulin1997novel}. Numerical simulations demonstrate electro-osmotic flows around these two configurations that are in qualitative agreement with the experimental data \cite{Lazo_2} but also highlight features characteristic of the Induced-Charge Electro-osmotic (ICEO) flows around a dielectric sphere in the absence of material anisotropies \cite{gamayunov1986pair,murtsovkin1996nonlinear,squires2004induced}. Note that, since disclination loops cannot be modeled within the framework of a director-based theory, this particular system is beyond the scope of the approach that we developed in \cite{we_pattern}.
The paper is organized as follows. In Section \ref{s:1} we recall the principle of minimum energy dissipation and then use this principle in Section \ref{s:2} to derive the system of governing equations for our model. In Section \ref{s:3} we solve the governing system numerically to obtain the flow and charge patterns for electrokinetic flows around a stationary spherical particle in a cylindrical column of a nematic electrolyte.
\section{Principle of minimum energy dissipation}
\label{s:1}
There is a variety of variational principles governing the behavior of evolutionary systems \cite{Mielke}.
In classical mechanics, for instance, irreversible dynamics of a system can be described by means of a Rayleigh dissipation function $\mathcal{R}=\frac{1}{2}\xi_{ij}\dot{q}_i\dot{q}_j$ quadratic in generalized velocities $\dot{q}=(\dot{q}_1, ..., \dot{q}_M)$ (summation over repeated subscripts is implied hereafter).
The basic idea is to balance frictional and conservative forces in Lagrange's dynamical equations
\begin{equation}\label{Lagrange_eq}
\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}_m} -\frac{\partial \mathcal{L}}{\partial q_m} +\frac{\partial \mathcal{R}}{\partial \dot{q}_m} = 0,
\end{equation}
where $q=(q_1, ..., q_M)$ are generalized coordinates conjugated with the velocities $\dot{q}$ and $\mathcal{L}=\frac{1}{2}a_{ij}(q)\dot{q}_i\dot{q}_j-\mathcal{U}(q)$ is the Lagrangian of the system, defined as the difference between the kinetic energy $\frac{1}{2}a_{ij}(q)\dot{q}_i\dot{q}_j$ and the potential energy $\mathcal{U}(q)$. In what follows, we assume that the matrices $\left(\xi_{ij}\right)$ and $\left(a_{ij}\right)$ are symmetric.
Similarly to their non-dissipative counterparts, Eqs.~\eqref{Lagrange_eq} can be recast into a variational problem as their solutions provide critical points of the functional
$$\int_{\Omega} d^3r\left\{ \dot{\mathcal{E}}+\mathcal{R} \right\}$$
with respect to a special class of variations $\delta\dot{q}$ of the generalized velocities $\dot{q}$.
Here $\Omega\subset\mathbb R^3$ is the region occupied by the system, $\mathcal{E}=\mathcal{L}+2\mathcal{U}$ is the total energy and the superimposed dot (as well as $\frac{d}{dt}$) denotes the total or material time derivative.
Unlike Hamilton's principle of stationary action, the current approach ``freezes'' both the configuration $q$ and the generalized forces $X_m:=\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot{q}_m}-\frac{\partial\mathcal{L}}{\partial q_m}$, $m=1,\ldots,M$ acting on the system at a given time. The state of the system is then varied by imposing arbitrary instantaneous variations $\delta\dot{q}$ of the velocities $\dot{q}$. Note that variations $\delta q$, $\delta\dot{q}$, and $\delta\ddot{q}$ are mutually independent except for the condition that the generalized forces $X_m,\ m=1,\ldots,M$ should remain unaltered \cite{Sonnet_book}.
Then, by using the product rule and relabeling, we indeed have
\begin{multline}
\frac{\delta}{\delta \dot{q}_m} \int_{\Omega} d^3r \left\{ \dot{\mathcal{E}} +\mathcal{R} \right\} =
\frac{\delta}{\delta \dot{q}_m} \int_{\Omega} d^3r \left\{ a_{ij}\ddot{q}_j\dot{q}_i +\frac{1}{2}\frac{\partial a_{ij}}{\partial q_k}\dot{q}_k\dot{q}_j\dot{q}_i +\frac{\partial \mathcal{U}}{\partial q_i}\dot{q}_i +\mathcal{R} \right\} \\
= \frac{\delta}{\delta \dot{q}_m} \int_{\Omega} d^3r \left\{ \left[\frac{d}{dt}\left( a_{ij}\dot{q}_j \right) -\frac{1}{2}\frac{\partial a_{kj}}{\partial q_i}\dot{q}_k\dot{q}_j +\frac{\partial \mathcal{U}}{\partial q_i}\right]\dot{q}_i +\mathcal{R} \right\} =\frac{\delta}{\delta \dot{q}_m} \int_{\Omega} d^3r \left\{ X_i\dot{q}_i +\mathcal{R} \right\} \\
=X_m+\frac{\partial \mathcal{R}}{\partial \dot{q}_m}=\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{q}_m} -\frac{\partial \mathcal{L}}{\partial q_m} +\frac{\partial \mathcal{R}}{\partial \dot{q}_m},
\end{multline}
for every $m=1,\ldots,M$.
Hence, the Euler-Lagrange equations
\begin{equation}\label{Principle}
\frac{\delta}{\delta \dot{q}} \int_{\Omega} d^3r \left\{ \dot{\mathcal{E}} +\mathcal{R}\right\} = 0
\end{equation}
are identical to the generalized equations of motion \eqref{Lagrange_eq} and thus govern dynamics of a dissipative mechanical system. Since the conservative forces are assumed to be fixed here and $\mathcal{R}$ is a positive-definite function, the equations \eqref{Principle} yield a minimum of energy dissipation \cite{Sonnet_tensor, Sonnet_dissipative}.
It is worth noting that for overdamped systems---where $\ddot{q}=0$---this principle of minimum energy dissipation is equivalent to the Onsager's variational approach \cite{Doi_PR}.
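As an elementary illustration, consider a damped harmonic oscillator with $\mathcal{L}=\frac{1}{2}m\dot{q}^2-\frac{1}{2}kq^2$ and $\mathcal{R}=\frac{1}{2}\gamma\dot{q}^2$; then \eqref{Lagrange_eq} immediately yields the familiar equation of motion
\begin{equation*}
m\ddot{q}+\gamma\dot{q}+kq=0.
\end{equation*}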
\section{Nematic electrolyte}
\label{s:2}
In this section, we apply the principle \eqref{Principle} to a nematic electrolyte subject to an external electric field.
It was shown earlier that under an appropriate choice of the generalized velocities this framework is capable of reproducing the classical Ericksen-Leslie equations of nematodynamics \cite{Sonnet_tensor, Sonnet_dissipative}, as well as the equations for ionic transport \cite{Xu}, and flow and sedimentation in colloidal suspensions \cite{Doi_PR}. Below we demonstrate that it can be extended so as to take into account the presence of an ionic subsystem.
\subsection{Energy of the system}
Consider a nematic liquid crystal that contains $N$ species of ions with valences $z^{\alpha}$ at concentrations $c^{\alpha}$, where $1\leq\alpha\leq N$. We assume that all ionic concentrations are small so that the resulting electrolyte solution is dilute. In LCEO experiments \cite{Lazo_2}, the concentration of ions is on the order of $10^{19}$ m$^{-3}$, which corresponds to a rather large typical distance between isolated ions, $\sim 0.5$ micrometer, in the absence of the electric field. Then one can write the energy density of the ionic subsystem as a sum of the entropic and Coulombic contributions
\begin{equation}\label{E_ion}
\mathcal{E}_{ion} = k_{B} \Theta \sum_{\alpha=1}^{N} c^{\alpha} \ln c^{\alpha} +\sum_{\alpha=1}^{N} e c^{\alpha} z^{\alpha} \Phi,
\end{equation}
where $k_B$ and $\Theta$ stand for the Boltzmann constant and the absolute temperature, respectively, $\Phi$ denotes the electric potential, and $e$ the elementary charge.
Under the action of the field, the ions move with the effective velocities $\mathbf{u}^{\alpha}$ which satisfy the continuity equations
\begin{equation}\label{Charge_conservation}
\frac{\partial c^{\alpha}}{\partial t} + \nabla\cdot (c^{\alpha}\mathbf{u}^{\alpha}) = 0.
\end{equation}
Nematics themselves are anisotropic ordered fluids.
A typical nematic consists of elongated molecules whose local orientation can be described by a coarse-grained vector field $\mathbf{n}\equiv -\mathbf{n}$ with non-polar symmetry, the director. This unit-length vector field appropriately describes uniaxial nematic states with constant degree of orientational order $S$.
In general, the degree of orientational order may not be constant, a nematic may contain disclinations, or be in a biaxial state (characterized by a spatially varying degree of biaxiality $P(\mathbf{r})$ and a set of not one, but two mutually orthogonal unit-length vector fields). Neither of these effects can be modeled within the framework of the standard director theory \cite{KleLa,BaZa}. The appropriate order parameter to characterize all available nematic states is a symmetric traceless second rank tensor $\mathsf{Q}$ with three, possibly different, eigenvalues.
In the uniaxial limit, two of the eigenvalues are equal so that
\begin{equation}\label{Q_tensor}
\mathsf{Q}_{ij} = S(n_in_j-\frac{1}{3}\delta_{ij}).
\end{equation}
Then the free energy per unit volume of a nematic liquid crystal can be written in the following form
\begin{equation}\label{E_LdG}
\mathcal{E}_{LdG} = -\frac{A}{2}\mathsf{Q}_{ij}\mathsf{Q}_{ij} +\frac{B}{3}\mathsf{Q}_{ij}\mathsf{Q}_{jk}\mathsf{Q}_{ki} +\frac{C}{4}(\mathsf{Q}_{ij}\mathsf{Q}_{ij})^2 +\frac{L}{2}(\partial_k\mathsf{Q}_{ij})(\partial_k\mathsf{Q}_{ij}),
\end{equation}
where the first three terms represent the so-called Landau-de Gennes potential
\begin{equation}\label{E_LdGp}
\mathcal{E}_{LdG}^p = -\frac{A}{2}\mathsf{Q}_{ij}\mathsf{Q}_{ij} +\frac{B}{3}\mathsf{Q}_{ij}\mathsf{Q}_{jk}\mathsf{Q}_{ki} +\frac{C}{4}(\mathsf{Q}_{ij}\mathsf{Q}_{ij})^2,
\end{equation}
given by an expansion of the free energy of the nematic in terms of the order parameter. The last term $\frac{L}{2}(\partial_k\mathsf{Q}_{ij})(\partial_k\mathsf{Q}_{ij})=\mathcal{E}_{LdG}^{e}$ in \eqref{E_LdG} accounts for elasticity of the liquid crystal with one elastic constant approximation being adopted from now on.
In order to take into account the interaction between the electric field $\mathbf{E}=-\nabla\Phi$ and the liquid crystal, we have to supplement the potential energy \eqref{E_LdG} of the nematic by
\begin{equation}\label{E_E}
\mathcal{E}_{E} = -\frac{1}{2}\mathbf{D}\cdot\mathbf{E},
\end{equation}
where $\mathbf{D}$ denotes the electric displacement vector that satisfies
\begin{equation}\label{Maxwell_eq}
\nabla\cdot\mathbf{D}=\sum_{\alpha=1}^{N} e c^{\alpha} z^{\alpha}.
\end{equation}
It should be noted that care must be taken in dealing with the electric field in this problem.
The field is substantially nonlocal, that is, its changes can affect the system even if they occur outside the region $\Omega$ occupied by the system.
In order to avoid dealing with the field outside of $\Omega$, we assume that the system under investigation is surrounded by conductors that are held at a prescribed potential $\Phi_{\partial\Omega}$.
Then the electric field exists in $\Omega$ only, so that $D_i = \varepsilon_0\varepsilon_{ij}E_j$ where
\begin{equation}\label{Epsilon_ij}
\varepsilon_{ij} = \frac{1}{3}(\varepsilon_{\|}+2\varepsilon_{\perp})\delta_{ij} +\Delta\varepsilon\mathsf{Q}_{ij}
\end{equation}
with $\Delta\varepsilon=\varepsilon_{\|}-\varepsilon_{\perp}$, $\varepsilon_{\perp}$ and $\varepsilon_{\|}$ being dielectric permittivities perpendicular and along the director, respectively, measured in units of the vacuum permittivity $\varepsilon_0$.
Equation~\eqref{Epsilon_ij} can, in fact, be used as an implicit phenomenological definition of the tensor order parameter $\mathsf{Q}$.
Thus, neglecting inertia of molecular rotations $(\ddot{\mathsf{Q}}_{ij}=0)$, one can write the total energy per unit volume of the system in the form
\begin{equation}\label{Total_energy}
\mathcal{E}= \frac{1}{2}\rho v_i v_i +\mathcal{E}_{LdG} +\mathcal{E}_{E} +\mathcal{E}_{ion}
\end{equation}
with $\rho$ being the nematic mass density and $\mathbf{v}$ the macroscopic velocity of its flow which we assume to be incompressible, $\nabla\cdot\mathbf{v}=0$. The incompressibility assumption is justified since a typical electrokinetic velocity is negligibly small compared to the speed of sound in nematics \cite{PhysRevLett.28.799}, corresponding to the Mach number $\sim 10^{-8}$. Note that the assumption that the nematic electrolyte solution is dilute allows us to think of $\rho$ and $\mathbf{v}$ as the density and the velocity of the nematic flow, respectively. Indeed, both of these quantities are defined as weighted volume averages of the velocities of the nematic and ionic constituents and the volume fraction of ions in the dilute solution is small.
\subsection{Dissipation function}
We require the dissipation function to be frame-indifferent, positive-definite and quadratic in the generalized velocities.
Then, choosing $\mathbf{v}$ and $\dot{\mathsf{Q}}$ to be the generalized velocities, the dissipation function of a nematic liquid crystal $\mathcal{R}_{nem}$ has to be quadratic in $\mathbf{v}$ and $\dot{\mathsf{Q}}$.
This restriction, however, does not specify the dependence of the dissipation function on $\mathsf{Q}$ which, in general allows for a large number of nematic viscosity coefficients \cite{Sonnet_tensor}. Following \cite{Er91}, we reduce the number of these coefficients by restricting $\mathcal{R}_{nem}$ to the terms that are at most quadratic in the scalar order parameter $S$. Then
\begin{multline}
2\mathcal{R}_{nem} = \zeta_1\mathring{\mathsf{Q}}_{ij}\mathring{\mathsf{Q}}_{ji} +2\zeta_2\mathsf{A}_{ij}\mathring{\mathsf{Q}}_{ji} +2\zeta_3\mathsf{A}_{ij}\mathring{\mathsf{Q}}_{jk}\mathsf{Q}_{ki} +2\zeta_4\mathsf{A}_{ij}\mathsf{A}_{jk}\mathsf{Q}_{ki} +\zeta_5\mathsf{A}_{ij}\mathsf{A}_{jk}\mathsf{Q}_{kl}\mathsf{Q}_{li}\\
+\zeta_6\left(\mathsf{A}_{ij}\mathsf{Q}_{ji}\right)^2 +\zeta_7\mathsf{A}_{ij}\mathsf{A}_{ji}\mathsf{Q}_{kl}\mathsf{Q}_{lk} +\zeta_8\mathsf{A}_{ij}\mathsf{A}_{ji},
\label{eq:stuff}
\end{multline}
where $\mathsf{A}_{ij}=\frac{1}{2}(\partial_j v_i +\partial_i v_j)$ represents the symmetric part of the velocity gradient and $\mathring{\mathsf{Q}}_{ij}=\dot{\mathsf{Q}}_{ij}-\mathsf{W}_{ik}\mathsf{Q}_{kj}-\mathsf{W}_{jk}\mathsf{Q}_{ki}$, with $\mathsf{W}_{ij}=\frac{1}{2}(\partial_jv_i-\partial_iv_j)$, gives the rate of the $\mathsf{Q}$-tensor change relative to a flow vorticity \cite{Sonnet_tensor}.
Inserting the uniaxial representation \eqref{Q_tensor} of the tensorial order parameter $\mathsf{Q}$ in \eqref{eq:stuff} and taking into account that $\mathring{n}_i=\dot{n}_i-\mathsf{W}_{ij}n_j$ and $\dot{S}=0$, the dissipation function takes the form
\begin{equation}
2\mathcal{R}_{nem}^{(\mathbf{n})} = (\alpha_3-\alpha_2) \mathring{n_i}^2 +2(\alpha_5-\alpha_6) \mathring{n_i} \mathsf{A}_{ij}n_j +(\alpha_5+\alpha_6)(\mathsf{A}_{ij}n_j)^2 +\alpha_4(\mathsf{A}_{ij})^2+\alpha_1(n_i \mathsf{A}_{ij} n_j)^2,
\end{equation}
when written in terms of the director $\mathbf{n}$. Now one can relate the viscosities $\zeta_i$ to the Leslie's viscosities $\alpha_j$ \cite{DK84}:
\begin{equation}\label{Viscosities}
\begin{split}
\alpha_3-\alpha_2 =& 2S^2\zeta_1, \qquad\qquad
\alpha_6-\alpha_5 = 2S\zeta_2 +\frac{1}{3}S^2\zeta_3,\\
\alpha_1 =& S^2\zeta_6, \qquad\qquad
\alpha_5+\alpha_6 = S\zeta_4 +\frac{1}{2}S^2\zeta_5,\\
\alpha_4 =& \zeta_8 -\frac{1}{3}S\zeta_4 +\frac{1}{3}S^2\left(\frac{1}{3}\zeta_5+2\zeta_7\right).
\end{split}
\end{equation}
It follows from \eqref{Viscosities} that the viscosities $\zeta_3$, $\zeta_5$, and $\zeta_7$ are higher-order corrections to the Leslie's viscosities in terms of the scalar order parameter $S$.
Thus, one can set $\zeta_3=\zeta_5=\zeta_7=0$ and arrive at a simpler form of the dissipation function
\begin{equation}\label{R_nem}
2\mathcal{R}_{nem} = \zeta_1\mathring{\mathsf{Q}}_{ij}\mathring{\mathsf{Q}}_{ji} +2\zeta_2\mathsf{A}_{ij}\mathring{\mathsf{Q}}_{ji} +2\zeta_4\mathsf{A}_{ij}\mathsf{A}_{jk}\mathsf{Q}_{ki} +\zeta_6\left(\mathsf{A}_{ij}\mathsf{Q}_{ji}\right)^2 +\zeta_8\mathsf{A}_{ij}\mathsf{A}_{ji},
\end{equation}
which involves only five nematic viscosities. As we will demonstrate below, this choice of $\mathcal{R}_{nem}$ results in an expression for the viscous stress identical to that derived in \cite{Ping_Sheng}.
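With $\zeta_3=\zeta_5=\zeta_7=0$, the relations \eqref{Viscosities} reduce to
\begin{equation*}
\alpha_3-\alpha_2 = 2S^2\zeta_1, \qquad
\alpha_6-\alpha_5 = 2S\zeta_2, \qquad
\alpha_1 = S^2\zeta_6, \qquad
\alpha_5+\alpha_6 = S\zeta_4, \qquad
\alpha_4 = \zeta_8 -\frac{1}{3}S\zeta_4.
\end{equation*}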
For the nematic electrolyte, we also need to incorporate dissipation due to the motion of ions. Taking into account that the mobilities of ions along and perpendicular to the director $\mathbf{n}$ are different and treating $\mathbf{u}^{\alpha}$ with $1\leq\alpha\leq N$ as the generalized velocities, the contribution of ions to dissipation is given by \cite{Carme}
\begin{equation}\label{R_ion}
2\mathcal{R}_{ion} = k_B\Theta \sum_{\alpha=1}^{N} c^{\alpha}(\mathsf{D}_{ij}^{\alpha})^{-1}(u_i^{\alpha}-v_i)(u_j^{\alpha}-v_j).
\end{equation}
Here the diffusion matrix $\mathsf{D}_{ij}^{\alpha}$ accounts for the anisotropy of the liquid crystal electrolyte. The expression \eqref{R_ion} is a direct generalization of the dissipation function for ordinary colloidal solutions \cite{Doi_PR}.
Thus, the total energy dissipation rate in the nematic electrolyte is equal to the sum $\mathcal{R}=\mathcal{R}_{nem}+\mathcal{R}_{ion}$ with $\mathcal{R}_{nem}$ as specified in \eqref{R_nem}.
\subsection{Governing equations}
Once the energy $\mathcal{E}$, the dissipation $\mathcal{R}$, and the generalized velocities of the system are specified, we are in a position to derive equations describing electro-osmotic flows in nematics.
The equations are implicitly given by
\begin{equation}\label{EL_eq}
\begin{split}
\frac{\delta}{\delta \mathbf{v}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}=0, \\
\frac{\delta}{\delta \dot{\mathsf{Q}}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}=0, \\
\frac{\delta}{\delta \mathbf{u}^{\alpha}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}=0,
\end{split}
\end{equation}
where the two Lagrange multipliers, $p^{\prime}$ and $\Lambda$, are associated with the flow incompressibility and the tracelessness of the tensor order parameter, respectively.
But before deriving the explicit form of \eqref{EL_eq}, let us specify the boundary conditions for our problem.
Although one can simply use the natural boundary conditions that follow from the principle of minimum energy dissipation \eqref{Principle}, here we impose Dirichlet conditions on $\partial\Omega$. In particular,
\begin{equation}
\label{eq:bc}
\mathbf{v}=0,\quad \dot{\mathsf{Q}}=0,\quad \mathbf{u}^{\alpha}=0 \quad \text{on } \partial\Omega.
\end{equation}
This choice of boundary conditions slightly simplifies further consideration and corresponds to a majority of experimental setups.
Next, we calculate the rate of change of the energy in Eq.~\eqref{EL_eq}: we start by computing
\begin{multline}\label{E_nem_dot}
\frac{d}{d t} \int_{\Omega} d^3r \left\{ \frac{1}{2} \rho \mathbf{v}^2 +\mathcal{E}_{LdG}\left(\mathsf{Q}, \nabla\mathsf{Q}\right) \right\} = \\
= \int_{\Omega} d^3r \left\{\left[ \rho\dot{v_l} +\partial_k\left(\frac{\partial \mathcal{E}_{LdG}}{\partial (\partial_k \mathsf{Q}_{ij})}(\partial_l \mathsf{Q}_{ij})\right) \right] v_l +\left[\frac{\partial \mathcal{E}_{LdG}}{\partial \mathsf{Q}_{ij}} -\partial_k\left(\frac{\partial \mathcal{E}_{LdG}}{\partial (\partial_k \mathsf{Q}_{ij})}\right) \right]\dot{\mathsf{Q}}_{ij} \right\},
\end{multline}
and
\begin{equation}
\frac{d}{dt} \int_{\Omega} d^3r \mathcal{E}_{E}(\mathsf{Q}, \nabla\Phi) =
\int_{\Omega} d^3r \left\{ \frac{\partial \mathcal{E}_{E}}{\partial \mathsf{Q}_{ij}}\dot{\mathsf{Q}}_{ij} +\frac{\partial \mathcal{E}_{E}}{\partial (\partial_i \Phi)}(\partial_i \dot{\Phi}) -\frac{\partial \mathcal{E}_{E}}{\partial (\partial_i \Phi)}(\partial_k \Phi)(\partial_i v_k)\right\},
\end{equation}
with help of the identity $\dot{(\partial_k\mathsf{Q}_{ij})}=\partial_k\dot{\mathsf{Q}}_{ij} -(\partial_k v_l)(\partial_l \mathsf{Q}_{ij})$.
Recall that $$\mathcal{E}_{E} = -\varepsilon_0 (\bar{\varepsilon}\delta_{ij}+\Delta\varepsilon \mathsf{Q}_{ij})(\partial_i \Phi) (\partial_j \Phi)/2,$$ where $\bar{\varepsilon}=(\varepsilon_{\|}+2\varepsilon_{\perp})/3$. Then
\begin{equation}
\frac{\partial \mathcal{E}_E}{\partial \mathsf{Q}_{ij}} = -\frac{1}{2}\varepsilon_0 \Delta\varepsilon (\partial_i \Phi) (\partial_j \Phi)\quad\mathrm{and}\quad\frac{\partial \mathcal{E}_E}{\partial (\partial_i \Phi)} = -\varepsilon_0 \varepsilon_{ij} (\partial_j \Phi).
\end{equation}
Hence
\begin{multline}\label{E_field_dot}
\frac{d}{dt} \int_{\Omega} d^3r \mathcal{E}_{E}(\mathsf{Q}, \nabla\Phi) = \\
= \int_{\Omega} d^3r \left\{ -\frac{1}{2}\varepsilon_0\Delta\varepsilon E_i E_j \dot{\mathsf{Q}}_{ij} -(\partial_i D_i) \dot{\Phi} -\partial_i (\varepsilon_0\varepsilon_{ij} E_j E_k) v_k \right\} +\int_{\partial\Omega} d^2r \left\{ (\nu_i \varepsilon_0\varepsilon_{ij}E_j) \dot{\Phi} \right\}.
\end{multline}
On a conductor-dielectric interface, the normal component of the displacement, $D_i\nu_i$, is given by the surface charge density $\sigma$. It follows from \eqref{eq:bc} and the definition of a material derivative that the surface integral in \eqref{E_field_dot} can be written as
\begin{equation}
\int_{\partial\Omega} d^2r \left\{ (\nu_i \varepsilon_0\varepsilon_{ij}E_j) \dot{\Phi} \right\} = \int_{\partial\Omega} d^2r\, D_i\nu_i \frac{\partial\Phi}{\partial t} =\int_{\partial\Omega} d^2r\, \sigma \frac{\partial \Phi}{\partial t}.
\end{equation}
This integral gives the power spent by charges located at $\partial\Omega$ and can be omitted when $\Phi_{\partial\Omega}$ varies slowly compared to the timescales of the dynamics associated with $\mathbf{v}$, $\mathbf{u}^{\alpha}$ and $\dot{\mathsf{Q}}$.
For the ionic subsystem, we have
\begin{equation}\label{E_ion_dot}
\frac{d}{dt}\int_{\Omega} d^3r \mathcal{E}_{ion}(c^{\alpha}, \Phi)=\\
\int_{\Omega} d^3r \sum_{\alpha=1}^{N} \left\{(\partial_i \mu^{\alpha}) c^{\alpha} (u_i^{\alpha} -v_i) +e c^{\alpha}z^{\alpha}\dot{\Phi} -\mu^{\alpha}c^{\alpha}(\partial_i v_i) \right\},
\end{equation}
where $\mu^{\alpha} = \frac{\partial \mathcal{E}_{ion}}{\partial c^{\alpha}} = k_{B}\Theta (\ln c^{\alpha} +1) +e z^{\alpha} \Phi$ is the chemical potential of the $\alpha$-th ion species \cite{Eisenberg}.
Note that $\dot{\mathcal{E}}_{ion}$ includes the term $\sum_{\alpha}ec^{\alpha}z^{\alpha}\dot{\Phi}$ whereas $\dot{\mathcal{E}}_{E}$ contains $-(\partial_i D_i)\dot{\Phi}$; these terms cancel out when combined together in the expression for the total power $\dot{\mathcal{E}}$. This is due to the fact that the electric field obeys the Maxwell's equation \eqref{Maxwell_eq}.
We could have instead obtained the same equation \eqref{Maxwell_eq} for $\mathbf{D}$ from \eqref{Principle}, if we chose to treat $\dot{\Phi}$ as a generalized velocity. Then
\begin{equation}
\frac{\delta}{\delta \dot{\Phi}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}=-\partial_i D_i +\sum_{\alpha=1}^{N} ec^{\alpha}z^{\alpha} = 0.
\end{equation}
Since the present framework deals with the energy of the entire system, this derivation properly addresses the nonlocality of the field.
The variational derivatives of the total dissipation function $\mathcal{R}$ are given by
\begin{eqnarray}\label{dR}
&\frac{\delta}{\delta \dot{\mathsf{Q}}_{ij}} \int_{\Omega}d^3r\mathcal{R}= \frac{\partial \mathcal{R}_{nem}}{\partial \mathring{\mathsf{Q}}_{ij}} = \zeta_1\mathring{\mathsf{Q}}_{ij} +\zeta_2 \mathsf{A}_{ij},& \label{dRdn}\\
&\frac{\delta}{\delta u_i^{\alpha}} \int_{\Omega}d^3r\mathcal{R} = k_B\Theta c^{\alpha}(\mathsf{D}_{ij}^{\alpha})^{-1}(u_j^{\alpha}-v_j),&\label{dRdu}\\
&\frac{\delta}{\delta v_i} \int_{\Omega}d^3r\mathcal{R}= \frac{\delta}{\delta v_i}\int_{\Omega}d^3r\mathcal{R}_{nem} - k_B\Theta \sum_{\alpha=1}^{N}c^{\alpha}(\mathsf{D}_{ij}^{\alpha})^{-1}(u_j^{\alpha}-v_j).&\label{dRdv}
\end{eqnarray}
Using the explicit form \eqref{R_nem} of $\mathcal{R}_{nem}$ and the chain rule
$$\frac{\partial}{\partial(\partial_j v_i)} = \frac{\partial}{\partial\mathsf{A}_{ij}} +\mathsf{Q}_{ki}\frac{\partial}{\partial\mathring{\mathsf{Q}}_{jk}} -\mathsf{Q}_{kj}\frac{\partial}{\partial\mathring{\mathsf{Q}}_{ik}},$$
we obtain that $$\frac{\delta}{\delta v_i}\int_{\Omega}d^3r\mathcal{R}_{nem} = -\partial_j\mathsf{T}_{ij}^V,$$ where the viscous stress tensor
\begin{multline}\label{Viscous_stress}
\mathsf{T}_{ij}^V = \zeta_1 \left(\mathring{\mathsf{Q}}_{jk}\mathsf{Q}_{ki} -\mathring{\mathsf{Q}}_{ik}\mathsf{Q}_{kj}\right) +\zeta_2\mathring{\mathsf{Q}}_{ij} +(\zeta_4+\zeta_2)\mathsf{A}_{jk}\mathsf{Q}_{ki}+\\
+(\zeta_4-\zeta_2)\mathsf{A}_{ik}\mathsf{Q}_{kj} +\zeta_6 \left(\mathsf{A}_{kl}\mathsf{Q}_{lk}\right)\mathsf{Q}_{ij} +\zeta_8\mathsf{A}_{ij}
\end{multline}
is identical to that suggested in \cite{Ping_Sheng}.
Thus, it follows from \eqref{E_ion_dot} and \eqref{dRdu} that
\begin{equation}\label{Nernst0_eq}
\frac{\delta}{\delta u_i^{\alpha}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}= c^{\alpha}\left(\partial_i \mu^{\alpha} +k_B\Theta (\mathsf{D}_{ij}^{\alpha})^{-1}(u_j^{\alpha}-v_j)\right) = 0.
\end{equation}
Combining this with the continuity equation \eqref{Charge_conservation}, we arrive at
\begin{equation}\label{Nernst_eq}
\frac{\partial c^{\alpha}}{\partial t} +\partial_j \left[ c^{\alpha}v_j -\frac{c^{\alpha}}{k_B\Theta}\mathsf{D}_{ij}^{\alpha}(\partial_i \mu^{\alpha}) \right] = 0.
\end{equation}
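Substituting the explicit form of the chemical potential $\mu^{\alpha}$, the flux in \eqref{Nernst_eq} takes the familiar Nernst-Planck form,
\begin{equation*}
c^{\alpha}v_j -\frac{c^{\alpha}}{k_B\Theta}\mathsf{D}_{ij}^{\alpha}(\partial_i \mu^{\alpha})
= c^{\alpha}v_j -\mathsf{D}_{ij}^{\alpha}\left(\partial_i c^{\alpha} +\frac{ez^{\alpha}}{k_B\Theta}c^{\alpha}\,\partial_i\Phi\right),
\end{equation*}
so that \eqref{Nernst_eq} describes advection together with anisotropic diffusion and electromigration of the ions.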
Likewise, equations \eqref{E_nem_dot}, \eqref{E_field_dot} and \eqref{dRdn} yield
\begin{multline}\label{Leslie_eq}
\frac{\delta}{\delta \dot{\mathsf{Q}}_{ij}}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}= \\
= \frac{\partial \mathcal{E}_{LdG}}{\partial \mathsf{Q}_{ij}} -\partial_k \left[ \frac{\partial \mathcal{E}_{LdG}}{\partial(\partial_k \mathsf{Q}_{ij})} \right] -\Lambda \delta_{ij} -\frac{1}{2}\varepsilon_0\Delta\varepsilon E_i E_j +\zeta_1\mathring{\mathsf{Q}}_{ij} +\zeta_2 \mathsf{A}_{ij} =0.
\end{multline}
Finally, combining \eqref{E_nem_dot}, \eqref{E_field_dot}, \eqref{E_ion_dot}, \eqref{dRdv} and \eqref{Nernst0_eq} we arrive at
\begin{multline}\label{Navier0_eq}
\frac{\delta}{\delta v_i}\int_{\Omega} d^3r \left\{ \dot{\mathcal{E}}+\mathcal{R} -p^{\prime} (\partial_i v_i) -\Lambda \mathsf{Q}_{ii} \right\}= \\
= \rho \dot{v_i} +\partial_k \left[ \frac{\partial \mathcal{E}_{LdG}}{\partial(\partial_k \mathsf{Q}_{mn})}(\partial_i \mathsf{Q}_{mn}) -\mathsf{T}_{ik}^V -\varepsilon_0\varepsilon_{kj}E_j E_i \right] +\partial_i p^{\prime} +\partial_i\left[\sum_{\alpha=1}^{N}c^{\alpha}\mu^{\alpha}\right] =0.
\end{multline}
The sum $p^{\prime}+\sum_{\alpha}c^{\alpha}\mu^{\alpha}$ can be defined as the total pressure $p$, thus yielding an alternative form
\begin{equation}\label{Navier_eq}
\rho \dot{v_i} +\partial_k \left[ \frac{\partial \mathcal{E}_{LdG}}{\partial(\partial_k \mathsf{Q}_{mn})}(\partial_i \mathsf{Q}_{mn}) +p\delta_{ik} -\mathsf{T}_{ik}^V -\varepsilon_0\varepsilon_{kj}E_j E_i \right] =0
\end{equation}
of \eqref{Navier0_eq}.
Equations \eqref{Maxwell_eq}, \eqref{Nernst_eq}, \eqref{Leslie_eq} and \eqref{Navier_eq} along with
the definition of the chemical potential
\begin{equation}
\mu^{\alpha} = \frac{\partial \mathcal{E}_{ion}}{\partial c^{\alpha}} = k_{B}\Theta (\ln c^{\alpha} +1) +e z^{\alpha} \Phi
\end{equation}
and constraints $\nabla\cdot\mathbf{v}=0$, $\mathsf{Q}_{ii}=0$ constitute the full set of equations governing electro-osmosis in nematic liquid crystals, which can be written in the following invariant form
\begin{equation}\label{The_System}
\begin{cases}
\frac{\partial c^{\alpha}}{\partial t} +\text{div} \left[ c^{\alpha}\mathbf{v} -\frac{c^{\alpha}}{k_B\Theta}\mathsf{D}^{\alpha}(\nabla \mu^{\alpha}) \right] = 0,\\
\frac{\partial \mathcal{E}_{LdG}}{\partial \mathsf{Q}} -\text{div} \left[ \frac{\partial \mathcal{E}_{LdG}}{\partial(\nabla \mathsf{Q})} \right] -\Lambda \mathsf{I} -\frac{1}{2}\varepsilon_0\Delta\varepsilon \mathbf{E}\otimes\mathbf{E} +\zeta_1\mathring{\mathsf{Q}} +\zeta_2 \mathsf{A} =0,\\
\rho \dot{\mathbf{v}} +\text{div} \left[ -\mathsf{T}^{\text{el}} +p\mathsf{I} -\mathsf{T}^V -\varepsilon_0 \mathbf{E}\otimes \hat{\varepsilon}\mathbf{E} \right] =0,\\
\text{div} \left[ \frac{1}{3}\left(\varepsilon_{\|}+2\varepsilon_{\perp}\right)\mathbf{E} +\Delta\varepsilon \mathsf{Q} \mathbf{E}\right] = \frac{e}{\varepsilon_0}\sum_{\alpha=1}^{N}c^{\alpha}z^{\alpha},\\
\mu^{\alpha} = k_{B}\Theta (\ln c^{\alpha} +1) +e z^{\alpha} \Phi,\\
\text{div } \mathbf{v} =0,\\
\text{Tr } \mathsf{Q} =0,
\end{cases}
\end{equation}
where the elastic stress tensor $\mathsf{T}^{\text{el}}=-\dfrac{\partial \mathcal{E}_{LdG}}{\partial(\partial_k \mathsf{Q}_{mn})}(\partial_i \mathsf{Q}_{mn}) \,\mathbf{e}_i\otimes\mathbf{e}_{k}$, the dielectric tensor $\hat{\varepsilon}=\varepsilon_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$ and $\mathsf{I}$ is the identity tensor.
\section{Electro-osmotic flow around a spherical particle}
\label{s:3}
In this section, we consider a simple but illustrative example of liquid crystal-enabled electro-osmotic flow (LCEO) around an immobilized spherical particle placed at the center of a large cylindrical domain filled with a nematic electrolyte.
Recently, a similar problem in a rectangular container was experimentally examined in \cite{Lazo_2}.
Despite the difference in geometry, the physical mechanism of LCEO is essentially the same in both cases.
The colloidal inclusion distorts the otherwise uniform ordering of the liquid crystal molecules, inducing spatial variations of the order tensor $\mathsf{Q}$ field.
In the presence of an electric field, inhomogeneities of $\mathsf{Q}$, along with the anisotropy of dielectric permittivity and conductivity of the liquid crystal give rise to spatial separation of electric charges present in the system. This field-induced charging of distorted regions of the nematic electrolyte is a distinctive feature of LCEO, which consequently yields electro-osmotic flow with the velocity quadratic in the electric field.
The profile of the flow, as will be seen below, depends on the symmetry of the tensor field $\mathsf{Q}$ as well as on anisotropies of ionic conductivities and the dielectric permittivity of the nematic.
Let us consider a micron-sized spherical colloidal particle suspended in a nematic electrolyte subject to a uniform electric field $\mathbf{E}=(0, 0, -E)$.
For the sake of simplicity, assume that the ionic subsystem consists of two species with valences $z^+=1$ and $z^-=-1$ and concentrations $c^{+}$ and $c^{-}$, respectively. We assume equal mobility matrices
\[\mathsf{D}^{+}=\mathsf{D}^{-}=\mathsf{D}_{ij}=\bar{\mathsf{D}}(\bar{\lambda}_{\sigma}\delta_{ij} +(\lambda_{\sigma}-1)\mathsf{Q}_{ij})\,\mathbf{e}_i\otimes\mathbf{e}_j,\]
where $\left\{\mathbf{e}_i\right\}_{i=1,2,3}$ is a set of mutually orthonormal vectors in $\mathbb{R}^3$ and $\lambda_{\sigma}=\sigma_{\|}/\sigma_{\perp}>0$ denotes the ratio of the conductivity along and perpendicular to the nematic director, respectively; $\bar{\lambda}_{\sigma}=\frac{1}{3}(\lambda_{\sigma}+2)$ and $\bar{\mathsf{D}}>0$.
For further analysis of the system of governing equations \eqref{The_System}, it is convenient to introduce nondimensional variables
\begin{equation}
\begin{split}
\tilde{\mathbf{r}}=\frac{\mathbf{r}}{a}, \qquad \tilde{t}=\frac{t\bar{v}}{a}, \qquad \tilde{\Phi}=\frac{\Phi}{Ea},\qquad \tilde{c}^{\pm}=\frac{c^{\pm}}{\bar{c}}, \qquad \tilde{\mathbf{v}}=\frac{\mathbf{v}}{\bar{v}}, \\ \tilde{p}=\frac{p}{\bar{p}}, \qquad \tilde{\mathsf{D}}_{ij}=\frac{\mathsf{D}_{ij}}{\bar{\mathsf{D}}}, \qquad \tilde{\mathsf{T}}^V_{ij}=\mathsf{T}^V_{ij}\frac{a}{\zeta_8 \bar{v}},
\end{split}
\end{equation}
where $a$ is the radius of the particle and $\bar{x}$ denotes the characteristic value of $x$.
Then omitting the tildes for notational simplicity, one can rewrite the system \eqref{The_System} in the following nondimensional form
\begin{equation}\label{The_System_nondim}
\begin{cases}
\text{Pe}\left(\dfrac{\partial c^{\pm}}{\partial t} +\text{div} \left[ c^{\pm}\mathbf{v} \right]\right) -\text{div} \left[ \mathsf{D}\left(\nabla c^{\pm} \mp c^{\pm} G \mathbf{E} \right) \right] = 0,\\
\dfrac{\partial \mathcal{E}_{LdG}}{\partial \mathsf{Q}} -\text{div} \left[ \dfrac{\partial \mathcal{E}_{LdG}}{\partial(\nabla \mathsf{Q})} \right] -\Lambda \mathsf{I} -\dfrac{1}{2}\dfrac{a^2}{\xi_E^2} \mathbf{E}\otimes\mathbf{E} +\text{Er}\left(\dfrac{\zeta_1}{\zeta_8} \mathring{\mathsf{Q}} +\dfrac{\zeta_2}{\zeta_8} \mathsf{A}\right) =0,\\
\text{Re}\, \dot{\mathbf{v}} +\text{div} \left[ -\dfrac{1}{\text{Er}}\mathsf{T}^{\text{el}} +p\mathsf{I} -\mathsf{T}^V -\mathbf{E}\otimes\dfrac{\hat{\varepsilon}}{\varepsilon_{\perp}}\mathbf{E} \right] =0,\\
\text{div} \left[ \dfrac{1}{3}\left(\lambda_{\varepsilon}+2\right)\mathbf{E} +\left(\lambda_{\varepsilon}-1\right) \mathsf{Q} \mathbf{E}\right] = B\left( c^+-c^- \right),\\
\text{div } \mathbf{v} =0,\\
\text{Tr } \mathsf{Q} =0,
\end{cases}
\end{equation}
which implies $\bar{p} = \dfrac{\zeta_8 \bar{v}}{a}$ and $\bar{v} = \dfrac{\varepsilon_0\varepsilon_{\perp}aE^2}{\zeta_8}$, and where the nondimensional parameters
\begin{equation}
\begin{split}
\text{Pe}=\frac{\bar{v}a}{\bar{\mathsf{D}}}, \qquad \text{Er}=\frac{\zeta_8 \bar{v} a}{L}, \qquad \frac{a^2}{\xi_E^2}=\frac{\varepsilon_0\Delta\varepsilon E^2 a^2}{L}, \\ \text{Re}=\frac{\rho\bar{v}a}{\zeta_8}, \qquad B=\frac{e\bar{c}a}{\varepsilon_0\varepsilon_{\perp}E}, \qquad G=\frac{eaE}{k_{B}\Theta}
\end{split}
\end{equation}
along with $\lambda_{\varepsilon}=\varepsilon_{\|}/\varepsilon_{\perp}$ are introduced. Here $\xi_E=\sqrt{L/(\varepsilon_0|\Delta\varepsilon|E^2)}$ is the electric coherence length. We consider the colloidal sphere to be relatively small, $a\approx 1\,\upmu$m; the rest of the parameters are close to the ones used in typical experiments on LCEO: $\rho\approx 1$~g/cm$^3$, $\Delta\varepsilon\approx 10$, $\varepsilon_{\perp}\approx 10$, $L\approx 10$~pN, $\bar{\mathsf{D}}\approx 5\cdot 10^{-11}$~m$^2$/s, $\zeta_8\approx 0.1$~Pa$\cdot$s, $\bar{c}=10^{19}$~m$^{-3}$, $E\approx 40$~mV/$\upmu$m, and $\Theta=293$~K. Then the nondimensional parameters have the following values
\begin{equation}
\text{Pe}\approx 0.03, \qquad \text{Er}\approx 0.01, \qquad \frac{a^2}{\xi_E^2}\approx 0.01, \qquad \text{Re}\approx 1\cdot10^{-8}, \qquad B\approx 0.45, \qquad G\approx 1.6.
\end{equation}
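These estimates follow directly from the characteristic values quoted above and can be reproduced by a short script (SI units; a convenience check, not part of the solver):
\begin{verbatim}
import numpy as np

eps0, eps_perp, d_eps = 8.854e-12, 10.0, 10.0
a, E, L = 1e-6, 4e4, 1e-11            # m, V/m (40 mV/um), N (10 pN)
rho, zeta8, Dbar, cbar = 1e3, 0.1, 5e-11, 1e19
kB, Theta, e = 1.381e-23, 293.0, 1.602e-19

v = eps0 * eps_perp * a * E**2 / zeta8       # velocity scale, ~1.4e-6 m/s
print(v * a / Dbar)                          # Pe         ~ 0.03
print(zeta8 * v * a / L)                     # Er         ~ 0.01
print(eps0 * d_eps * E**2 * a**2 / L)        # a^2/xi_E^2 ~ 0.01
print(rho * v * a / zeta8)                   # Re         ~ 1e-8
print(e * cbar * a / (eps0 * eps_perp * E))  # B          ~ 0.45
print(e * a * E / (kB * Theta))              # G          ~ 1.6
\end{verbatim}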
Smallness of the first three characteristic numbers is of particular importance in what follows. Since diffusive transport of ions prevails over advective transport (the Peclet number $\text{Pe}\ll 1$) and the elasticity of the liquid crystal dominates over its viscosity (the Ericksen number $\text{Er}\ll 1$), the order parameter $\mathsf{Q}$ and the concentrations of ions $c^+$ and $c^-$ are not significantly affected by the liquid crystal flow.
Moreover, due to the small ratio of the particle radius $a$ to the electric coherence length $\xi_E$, we can also neglect the influence of the electric field on the molecular alignment.
Among the parameters listed above, only the radius $a$ of the sphere has a value that is different from what was used in the experiment in \cite{Lazo_2}: $a=1\,\upmu$m in simulations vs. $a=25\,\upmu$m in the experiment. This departure is motivated by two closely related reasons: (i) small particles in a large nematic domain can feature both a dipolar director field with a hyperbolic hedgehog and a quadrupolar director field with an equatorial disclination ring, and (ii) the model developed in our work allows us to describe the LCEO effects in the presence of the disclination rings which are naturally stable around small spheres. As discussed below, the relative stability of the two director geometries around a small sphere can be tuned by slightly adjusting the size of the particle. This allows us to compare the electro-osmotic flow patterns for the two different symmetries of director distortions while keeping the physical parameters close to each other in the two cases. As the particles become bigger, the hedgehog configuration in a large domain becomes progressively more stable, while the ring configuration needs to be supported either by an external field or by strong confinement \cite{PhysRevLett.85.4719}. In the experiments \cite{Lazo_2}, the comparison between the hedgehog and ring configurations was made possible by placing the spheres into a shallow cell with a thickness only slightly larger than the diameter of the spheres. Proximity of bounding walls complicates the numerical analysis of the flows and to some extent masks the difference caused by the different symmetry of the director field near the surface of the spheres. To avoid the complications associated with the strong confinement, in what follows we analyze the case of small particles.
A significant computational simplification associated with choosing the particle to be small results from the decoupling of the equations in \eqref{The_System_nondim}. Note that in \cite{Lazo_2}, for a particle of radius $25\,\upmu$m, the experimentally observed velocity of propagation was $\sim 4\,\upmu$m/s, which corresponds to $\text{Er}=O(1)$. The system \eqref{The_System} can still be solved numerically in this situation, but at a significantly higher computational cost since the equations remain fully coupled.
Thus, the system of equations \eqref{The_System_nondim} can be solved in three consecutive steps.
First, we find the alignment tensor $\mathsf{Q}$ from
\begin{equation}\label{Eq_for_Q}
\begin{cases}
\dfrac{\partial \mathcal{E}_{LdG}}{\partial \mathsf{Q}} -\text{div} \left[ \dfrac{\partial \mathcal{E}_{LdG}}{\partial(\nabla \mathsf{Q})} \right] -\Lambda \mathsf{I} =0,\\
\text{Tr }\mathsf{Q}=0,
\end{cases}
\end{equation}
then calculate the concentrations $c^{\pm}(\mathbf{r})$ and the electric field $\mathbf{E}=-\nabla\Phi$ given by
\begin{equation}\label{Eq_for_E}
\begin{cases}
\text{div} \left[ \mathsf{D}\left(\nabla c^{\pm} \mp c^{\pm} G \mathbf{E} \right) \right] = 0,\\
\text{div} \left[ \frac{1}{3}\left(\lambda_{\varepsilon}+2\right)\mathbf{E} +\left(\lambda_{\varepsilon}-1\right) \mathsf{Q}\mathbf{E}\right] = B\left( c^+-c^- \right),
\end{cases}
\end{equation}
and finally, solve
\begin{equation}\label{Eq_for_v}
\begin{cases}
\text{div} \left[ -\dfrac{1}{\text{Er}} \mathsf{T}^{\text{el}} +p\mathsf{I} -\mathsf{T}^V -\frac{1}{\varepsilon_{\perp}} \mathbf{E}\otimes\hat{\varepsilon}\mathbf{E} \right] =0,\\
\text{div } \mathbf{v} = 0
\end{cases}
\end{equation}
for the pressure $p(\mathbf{r})$ and the velocity field $\mathbf{v}(\mathbf{r})$.
\begin{figure}
\begin{center}
\includegraphics[width=.36\textwidth]{Mesh}
\caption{Domain of simulation.
The mesh was generated by \textit{Gmsh} \cite{Gmsh}.
Thick red lines depict physical boundaries of the domain. }\label{Domain}
\end{center}
\end{figure}
\subsection{Alignment tensor}
The non-dimensionalized Landau-de Gennes free energy $\mathcal{E}_{LdG}$, which enters \eqref{The_System_nondim} and subsequently \eqref{Eq_for_Q} and \eqref{Eq_for_v}, is given by
\begin{equation}\label{E_LdG_nondim}
\mathcal{E}_{LdG} = {\left(\frac{a}{\xi}\right)}^2 \left\{-\frac{1}{2}\text{Tr }\mathsf{Q}^2 +\frac{B}{3A}\text{Tr }\mathsf{Q}^3 +\frac{C}{4A}\left(\text{Tr }\mathsf{Q}^2\right)^2 \right\} +\frac{1}{2}\left|\nabla\mathsf{Q}\right|^2,
\end{equation}
where $\xi=\sqrt{L/A}\sim 10$~nm stands for the nematic coherence length and $A$, $B$, and $C$ are constant at a given temperature.
The Landau-de Gennes potential $\mathcal{E}_{LdG}^p$ defined in \eqref{E_LdGp} determines whether the nematic phase is thermodynamically stable. It is minimized by a uniaxial tensor $\mathsf{Q}=S_0(\mathbf{n}\otimes\mathbf{n}-\frac{1}{3}\mathsf{I})$ with $S_0=\frac{1}{4C}\left(-B+\sqrt{B^2+24AC} \right)$ for any $\mathbf {n}\in\mathbb S^2$.
Following Fukuda \textit{et al.} \cite{Fukuda_ring, Fukuda_flow}, we set $C=-B=3A$ so that $S_0=1$.
Assuming the same scalar order parameter $S_0=1$ at the particle surface and introducing a unit-length vector $\bm{\nu}$ normal to it, we impose the Dirichlet boundary condition $\mathsf{Q}=\bm{\nu}\otimes\bm{\nu}-\frac{1}{3}\mathsf{I}$ corresponding to the strong homeotropic anchoring of the nematic. At infinity we assume the uniform nematic alignment, i.e., $\mathsf{Q}=\mathbf{n}^0\otimes\mathbf{n}^0-\frac{1}{3}\mathsf{I}$, where $\mathbf{n}^0=(0, 0, 1)$.
The topological constraints imposed by our choice of boundary data produce either a line or point singularity in the vicinity of the particle.
Theoretical \cite{Stark_transition, Fukuda_ring, Ravnik_modelling} and experimental \cite{Loudet,Voltz} studies show that a small particle ($a/\xi\lesssim 60$) will be encircled by a disclination loop, known as a Saturn ring, whereas a point defect, a hyperbolic hedgehog, will be energetically favorable provided that $a/\xi\gtrsim 60$. Note that both configurations are axisymmetric with respect to $\mathbf{n}^0$.
Therefore, in cylindrical coordinates $\{\rho, \phi, z\}$ with the $z$-axis pointing along the director at infinity $\mathbf{n}^0$, the alignment tensor $\mathsf{Q}=\mathsf{Q}(\rho,z)$ does not depend on the azimuthal angle $\phi$.
While the problem \eqref{Eq_for_Q} was solved explicitly in the limit of small particles \cite{Alama}, there is no analytical solution for the hedgehog configuration in three dimensions. In two dimensions the solution, however, is well known \cite{Lubensky}. Indeed, the director field $\mathbf{n}^{2D}=(\cos\psi, \sin\psi)$ around a circular particle located at the origin of Cartesian coordinate system $\{x,y\}$ and a pointlike topological defect at $(0, -y_0)$ is given by
\begin{equation}\label{Psi}
\psi = 2\arctan\frac{x}{y} -\arctan\frac{x}{y+y_0} -\arctan\frac{x}{y+1/y_0}.
\end{equation}
In our study, this two-dimensional solution $\mathbf{n}^{2D}$ is used as an initial guess for the axially symmetric problem. We use the nonlinear variational solver developed by the FEniCS Project---a collection of open source software for automated solution of differential equations by finite element methods \cite{AlnaesBlechta2015a, LoggMardalEtAl2012a,LoggWellsEtAl2012a,LoggWells2010a, KirbyLogg2006a,LoggOlgaardEtAl2012a, OlgaardWells2010b, AlnaesEtAl2012, Alnaes2012a, Kirby2004a, AlnaesLoggEtAl2009a, AlnaesLoggEtAl2012a}.
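As a concrete illustration, the ansatz \eqref{Psi} can be evaluated with a few lines of NumPy; in this sketch lengths are measured in units of the particle radius $a$, the defect offset $y_0$ is a hypothetical value chosen only for illustration, and \texttt{arctan2} is used to resolve the branch ambiguity of $x/y$:
\begin{verbatim}
import numpy as np

def director_angle_2d(x, y, y0=1.2):
    # Angle psi of the formula above; y0 is a hypothetical defect
    # offset, lengths are in units of the particle radius a
    return (2.0 * np.arctan2(x, y)
            - np.arctan2(x, y + y0)
            - np.arctan2(x, y + 1.0 / y0))

# Director n^2D = (cos psi, sin psi) on a grid outside the particle
x, y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
outside = x**2 + y**2 > 1.0
psi = director_angle_2d(x, y)
nx, ny = np.cos(psi), np.sin(psi)
\end{verbatim}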
In the case of small particles $(a/\xi\lesssim 60)$, the initial state relaxes to a Saturn ring configuration, while for large particles $(a/\xi\gtrsim 60)$ it results in a hedgehog-like solution that, in agreement with \cite{Fukuda_ring, Ravnik_modelling,PhysRevLett.116.147801}, is in fact a small ring disclination rather than a point defect.
The computed solutions of the problem \eqref{Eq_for_Q} for $a/\xi=30$ and $a/\xi=70$ are visualized in Fig.~\ref{Criterion_plot} by plotting a scalar criterion $u$ proposed in \cite{Lux}. The criterion utilizes the fact that the eigenvalues of the tensor order parameter $\mathsf{Q}$ corresponding to a uniaxial nematic state can be written as $-s$, $-s$, $2s$. Then $\text{Tr }\mathsf{Q}^2=6s^2$ and $\det\mathsf{Q}=2s^3$, and one can introduce a scalar quantity
\begin{equation}\label{Criterion}
u = \frac{\left(\det\mathsf{Q} \right)^2}{\left(\text{Tr }\mathsf{Q}^2 \right)^3} -\frac{1}{54},
\end{equation}
whose nonzero values indicate biaxial alignment of the liquid crystal molecules. Note that in absolute units the radii of the colloidal spheres are rather small, 0.3 and 0.7 microns, respectively; experiments reported so far deal with bigger spheres, $a=25$ microns \cite{Lazo_2}.
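The criterion itself is simple to evaluate; the following minimal check confirms that \eqref{Criterion} vanishes, up to round-off, for a uniaxial tensor:
\begin{verbatim}
import numpy as np

def biaxiality(Q):
    # u = det(Q)^2 / (Tr Q^2)^3 - 1/54; zero for uniaxial Q with
    # eigenvalues (-s, -s, 2s), nonzero for biaxial alignment
    tr_q2 = np.trace(Q @ Q)
    return np.linalg.det(Q) ** 2 / tr_q2 ** 3 - 1.0 / 54.0

n = np.array([0.0, 0.0, 1.0])
Q = np.outer(n, n) - np.eye(3) / 3.0   # uniaxial, S0 = 1
print(biaxiality(Q))                   # ~ 0 up to round-off
\end{verbatim}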
\begin{figure}
\begin{center}
\includegraphics[width=.36\textwidth]{Saturn}
\hspace{10pt}
\includegraphics[width=.36\textwidth]{Hedgehog}
\caption{Spherical particle accompanied by a Saturn ring \textbf{(a)} and a hyperbolic hedgehog \textbf{(b)} topological defects.
Nonzero values of the biaxiality parameter $u$ given by \eqref{Criterion} indicate biaxial alignment of the liquid crystal molecules.}\label{Criterion_plot}
\end{center}
\end{figure}
\subsection{Charge separation}
Once the tensor field $\mathsf{Q}$ is known, we solve the problem \eqref{Eq_for_E} for the ionic concentrations $c^{\pm}=c^{\pm}(\rho,z)$ and the electric potential $\Phi=\Phi(\rho,z)$, subject to Dirichlet boundary conditions $c^{\pm}=1$ and $\Phi=z$ at $z=\pm Z$ (see Fig.~\ref{Domain}). Here, the Maxwell equation in \eqref{Eq_for_E} should also be solved inside the particle. Therefore, the dielectric permittivity $\varepsilon_{p}$ of the particle has to be specified, as it determines the distribution of ions in the system and thus influences the flow.
In the present study, we focus on dielectric colloids, which are commonly used in practice. In particular, Fig.~\ref{Charge_plot} shows the nondimensional charge density $q=c^{+}-c^{-}$ around a dielectric spherical particle with $\varepsilon_p=0.4\varepsilon_{\perp}$.
Note that the separation of charges in the system arises from an interplay between the orientational ordering of the nematic and its anisotropic permittivity and conductivity, determined by the tensor field $\mathsf{Q}$ and the parameters $\lambda_{\varepsilon}$ and $\lambda_{\sigma}$, respectively. This result is in line with the expectations that the space charge around colloidal spheres is proportional to the anisotropy of dielectric permittivity and electric conductivity \cite{Lazo_2}. A similar, but probably simpler, interplay in patterned nematics \cite{Carme,Peng_pattern,we_pattern}, where spatially varying director field is induced by means of specific anchoring at the substrates, yields the electrokinetic charge density $q_{\text{pat}}\propto \lambda_{\varepsilon}-\lambda_{\sigma}$.
In the system under investigation, the charge distribution $q(\mathbf{r})$ is also sensitive to the values of $\lambda_{\sigma}$ and $\lambda_{\varepsilon}$, but it does not vanish when $\lambda_{\varepsilon}=\lambda_{\sigma}$. This is not surprising, given the fact that even in isotropic electrolytes---where $\lambda_{\varepsilon}=\lambda_{\sigma}=1$---a dielectric sphere in the presence of an applied electric field is capable of generating space charges and causing induced-charge electro-osmosis (ICEO) \cite{Bazant,gamayunov1986pair,murtsovkin1996nonlinear}. This effect is especially pronounced when the Debye screening length $\lambda_D=\frac{1}{e}\sqrt{\frac{\varepsilon_0\varepsilon_{medium}k_B\theta}{n}}$ (where $n$ is the concentration of ions) around the colloid is comparable to the radius of the colloid, as will be discussed later in the context of the field-induced electro-osmotic velocities.
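For orientation, the screening length is easy to estimate from the formula above; in the sketch below the temperature and the ion concentration are assumed values, chosen only to land in the $0.1$--$1\,\upmu$m range typical of nematic electrolytes:
\begin{verbatim}
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity, F/m
kB   = 1.381e-23   # Boltzmann constant, J/K
e    = 1.602e-19   # elementary charge, C

def debye_length(eps_medium, n_ions, theta=300.0):
    # lambda_D = (1/e) sqrt(eps0 eps_medium kB theta / n)
    return np.sqrt(eps0 * eps_medium * kB * theta / n_ions) / e

# Assumed ion concentration of a weakly doped nematic (illustrative)
print(debye_length(eps_medium=7.0, n_ions=1e20))  # ~ 3e-7 m = 0.3 um
\end{verbatim}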
\begin{figure}
\begin{center}
\includegraphics[width=.36\textwidth]{Charge_Q_12}
\includegraphics[width=.36\textwidth]{Charge_d_12}\\
\vspace{3pt}
\includegraphics[width=.36\textwidth]{Charge_Q_21}
\includegraphics[width=.36\textwidth]{Charge_d_21}\\
\vspace{3pt}
\includegraphics[width=.36\textwidth]{Charge_Q_22}
\includegraphics[width=.36\textwidth]{Charge_d_22}
\caption{Nondimensional charge density $q=c^+-c^-$ around a spherical particle with: a Saturn ring \textbf{(a),(c),(e)}; and a hedgehog \textbf{(b),(d),(f)} topological defect.
Here $\lambda_{\varepsilon}$=1, $\lambda_{\sigma}$=2 in \textbf{(a),(b)};
$\lambda_{\varepsilon}$=2, $\lambda_{\sigma}$=1 in \textbf{(c),(d)}; and
$\lambda_{\varepsilon}=\lambda_{\sigma}$=2 in \textbf{(e),(f)}.}\label{Charge_plot}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.36\textwidth]{Velocity_Q_12}
\includegraphics[width=.36\textwidth]{Velocity_d_12}\\
\vspace{3pt}
\includegraphics[width=.36\textwidth]{Velocity_Q_21}
\includegraphics[width=.36\textwidth]{Velocity_d_21}\\
\vspace{3pt}
\includegraphics[width=.36\textwidth]{Velocity_Q_22}
\includegraphics[width=.36\textwidth]{Velocity_d_22}
\caption{Velocity field around a spherical particle with: a Saturn ring \textbf{(a),(c),(e)}; and a hedgehog \textbf{(b),(d),(f)} defect.
Here $\lambda_{\varepsilon}$=1, $\lambda_{\sigma}$=2 in \textbf{(a),(b)};
$\lambda_{\varepsilon}$=2, $\lambda_{\sigma}$=1 in \textbf{(c),(d)}; and
$\lambda_{\varepsilon}=\lambda_{\sigma}$=2 in \textbf{(e),(f)}.
The nondimensional viscosities are as follows: $\tilde{\zeta}_1=0.3$, $\tilde{\zeta}_2=0$, $\tilde{\zeta}_4=1.3$, $\tilde{\zeta}_6=-0.15$.
}\label{Velocity_plot}
\end{center}
\end{figure}
\subsection{Flow profile}
We are now in a position to solve the system of equations \eqref{Eq_for_v} for the pressure $p=p(\rho,z)$ and the velocity $\mathbf{v}=\mathbf{v}(\rho,z)$ of the electro-osmotic flow.
One can further simplify the problem by taking advantage
of the fact that $\text{Er} \ll 1$ and $a^2 / \xi_E^2 \ll 1$.
Since these two parameters are small, the elastic stress tensor $\mathsf{T}^{\text{el}}=-\dfrac{\partial \mathcal{E}_{LdG}}{\partial(\partial_k \mathsf{Q}_{mn})}(\partial_i \mathsf{Q}_{mn})\,\mathbf{e}_i\otimes\mathbf{e}_k$ is determined by the order parameter $\mathsf{Q}$ that satisfies \eqref{Eq_for_Q}.
It follows then that $\text{div} \mathsf{T}^{\text{el}}=-\nabla\mathcal{E}_{LdG}$.
Now splitting the total pressure $p$ into the static $p^0=\text{const}-\mathcal{E}_{LdG}/\text{Er}$ and hydrodynamic $p^h$ parts \cite{Stark_Stokes}, we arrive at the following system
\begin{equation}\label{Eq_for_flow}
\begin{cases}
\text{div} \left[ p^h\mathsf{I} -\mathsf{T}^V -\frac{1}{\varepsilon_{\perp}} \mathbf{E}\otimes\hat{\varepsilon}\mathbf{E} \right] =0,\\
\text{div } \mathbf{v} = 0.
\end{cases}
\end{equation}
Here the viscous stress is
\begin{equation}
\mathsf{T}^V = \tilde{\zeta}_1 \left(\mathsf{Q}\mathring{\mathsf{Q}}-\mathring{\mathsf{Q}}\mathsf{Q}\right) +\tilde{\zeta}_2\mathring{\mathsf{Q}} +\left(\tilde{\zeta}_4+\tilde{\zeta}_2\right)\mathsf{Q}\mathsf{A} +\left(\tilde{\zeta}_4-\tilde{\zeta}_2\right)\mathsf{A}\mathsf{Q} +\tilde{\zeta}_6 \text{Tr} \left(\mathsf{Q}\mathsf{A}\right)\mathsf{Q} +\mathsf{A},
\end{equation}
where $\tilde{\zeta}=\zeta/\zeta_8$, $2\mathsf{A}=\nabla\mathbf{v}+(\nabla\mathbf{v})^T$, and $\mathring{\mathsf{Q}} = \dot{\mathsf{Q}} -\mathsf{WQ} +\mathsf{QW}$ with $2\mathsf{W}=\nabla\mathbf{v}-(\nabla\mathbf{v})^T$.
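For reference, $\mathsf{T}^V$ translates directly into code; the sketch below evaluates it for given $3\times3$ arrays, with the nondimensional viscosities of Fig.~\ref{Velocity_plot} as default values and with $\dot{\mathsf{Q}}$ supplied by the caller (zero for the stationary flows considered here):
\begin{verbatim}
import numpy as np

def viscous_stress(Q, A, W, Qdot,
                   z1=0.3, z2=0.0, z4=1.3, z6=-0.15):
    # T^V per the formula above; Qr is the corotational derivative
    Qr = Qdot - W @ Q + Q @ W
    return (z1 * (Q @ Qr - Qr @ Q) + z2 * Qr
            + (z4 + z2) * Q @ A + (z4 - z2) * A @ Q
            + z6 * np.trace(Q @ A) * Q + A)
\end{verbatim}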
Solutions to \eqref{Eq_for_flow} computed under no-slip conditions $(\mathbf{v}=0)$ at the physical boundaries of the domain of simulation (see Fig.~\ref{Domain}) are depicted in Fig.~\ref{Velocity_plot}.
Similar to the charge density $q$ discussed above, the flow $\mathbf{v}$ is sensitive to the degrees of anisotropy $\lambda_{\varepsilon}$ and $\lambda_{\sigma}$, as well as to the symmetry of the director field.
In particular, the quadrupolar flow profiles around the particle encircled by an equatorial Saturn ring are symmetric with respect to the plane of the defect.
On the contrary, the particle accompanied by a hedgehog gives rise to the velocity fields $\mathbf{v}$ of dipolar symmetry, in qualitative agreement with \cite{Lazo_2}. Indeed, a direct comparison can be made between the Fourier analysis of the experimental velocity data in Fig.~4 in \cite{Lazo_2} and the insets (a) and (b) in Fig.~\ref{Velocity_plot}, given that $\lambda_\sigma>1$ and $\lambda_\varepsilon=1$ in both cases. The flow profiles around the sphere with a disclination ring, shown in Fig.~4c of \cite{Lazo_2} and in Fig.~\ref{Velocity_plot}a, are both of the ``puller'' type, with the streams along the axis parallel to the electric field being directed toward the sphere. The flow in Fig.~\ref{Velocity_plot}a consists of two rolls, which are also present in Fig.~4c in \cite{Lazo_2}. The experiment also shows pairs of micro-vortices located very close to the poles of the sphere, of a size that is smaller than the radius of the sphere. These micro-vortices are not featured in the simulations, apparently because of the differences between the confinement geometries considered here and in \cite{Lazo_2}. Note that the quadrupolar symmetry of the director pattern in the disclination ring configuration makes the electro-osmotic flows symmetric with respect to the equatorial plane of the sphere. There is thus no ``pumping'' of the fluid from one pole of the sphere to another, as demonstrated experimentally in \cite{Lazo_2}. The situation changes for the sphere with an accompanying hedgehog, as described below.
The flow profiles around the sphere with a dipolar director configuration caused by the hedgehog are of the ``pumping'' type in both the experiments (Fig.~4f in \cite{Lazo_2}) and the simulations (Fig.~\ref{Velocity_plot}b), with the mirror symmetry with respect to the equatorial plane being broken. The flow in Fig.~\ref{Velocity_plot}b consists of one roll. The flow at the axis of rotational symmetry of the configuration is directed from the side that is defect free to the surface of the sphere. The maximum velocity of the axial flow is achieved at the defect-free side of the sphere; the axial velocity is much lower near the hedgehog. All these features are in complete agreement with the experiment, see Fig.~4f in \cite{Lazo_2}. The vortex in Fig.~\ref{Velocity_plot}b rotates in the counterclockwise direction; its center is shifted towards the defect-free end of the sphere, again as in the experiment \cite{Lazo_2}. The only difference is that the experiment shows an additional vortex in the far field, with a center that is separated from the sphere by a distance of about $4a$; this vortex does not appear in the simulations, apparently because of the difference in the confinement geometry (note that in addition to being shallow, the experimental cell is practically infinitely long and wide in the horizontal plane, which brings another difference as compared to the domain of simulations).
Interchanging the values of $\lambda_\sigma$ and $\lambda_\varepsilon$ in Fig.~\ref{Velocity_plot}c,d essentially reverses the direction of the flow, confirming the observation that the velocity in LCEO should be proportional to the difference between these quantities at leading order \cite{Lazo_2}. This reversal is also in agreement with the recent experiments and 2D director-based numerical simulations \cite{Paladugu} performed for a liquid crystal in which the sign of $\lambda_\sigma-\lambda_\varepsilon$ can be reversed by a suitable choice of composition or temperature. However, if one extends the comparison of the present simulations to the experimental LCEO flows in patterned nematic cells without colloidal inclusions \cite{Carme,Peng_pattern, we_pattern}, then one can observe an important difference. Namely, the LCEO flows in patterned nematics \cite{Carme,Peng_pattern, we_pattern} vanish when $\lambda_\varepsilon$ and $\lambda_\sigma$ are equal. In contrast, our simulations demonstrate a nonzero velocity field $\mathbf{v}$ even in the case of $\lambda_{\varepsilon}=\lambda_{\sigma}$. As mentioned above, this effect is in line with the model developed for ICEO flows around dielectric spheres \cite{Bazant,gamayunov1986pair,murtsovkin1996nonlinear}. We now discuss this issue in greater detail.
Considering an uncharged immobilized dielectric sphere placed in a uniform electric field, Murtsovkin found the analytical solutions for the radial and azimuthal ICEO flows that show a quadrupolar symmetry \cite{murtsovkin1996nonlinear} and a typical amplitude near the surface
\begin{equation}
\label{eq:add}
v^{diel}=\beta\frac{\varepsilon_0\varepsilon_{medium}}{\eta}\frac{aE^2}{1+\frac{\varepsilon_{medium}a}{\varepsilon_p\lambda_D}},
\end{equation}
where $\beta$ is a scalar coefficient that depends on the geometry of the system (for an infinite system with $\lambda_D\ll a$, $\beta=\frac{9}{32\pi}\approx0.1$). For an aqueous electrolyte we have $\varepsilon_{medium}\approx80$ and $\lambda_D\approx50$~nm; thus, for a typical dielectric particle of micron size and the permittivity of glass, $\varepsilon_p\approx5$, one can safely assume $\varepsilon_{medium}a\gg\varepsilon_p\lambda_D$, so that $v^{diel}=\beta\frac{\varepsilon_0\varepsilon_p}{\eta}\lambda_DE^2$. This velocity is smaller, by a factor of about $\lambda_D/a$, than the ICEO flow velocities around ideally polarizable (conductive) spheres \cite{Bazant,murtsovkin1996nonlinear}. The smallness of this effect around dielectric spheres has been confirmed experimentally by a direct comparison of ICEO velocities around conducting (gold) and dielectric (glass) spheres of the same size in the same aqueous electrolyte \cite{peng2014induced}. In the case of a nematic electrolyte, the ratio $\varepsilon_{medium}a/\varepsilon_p\lambda_D$ is not necessarily very large, as $\varepsilon_{medium}$ and $\varepsilon_p$ are often of the same order of magnitude and the Debye screening length is in the range $0.1-1\,\upmu$m \cite{thurston1984physical,nazarenko1994anchoring,ciuchi2007ac}. For the micron-size particles considered in this study, $\varepsilon_{medium}a/\varepsilon_p\lambda_D$ is of order $1$. On the other hand, analytical estimates of the LCEO flow velocities yield a typical amplitude $v^{LCEO}=\alpha\frac{\varepsilon_0\varepsilon_\perp}{\eta}\left(\frac{\varepsilon}{\varepsilon_\perp}-\frac{\sigma}{\sigma_\perp}\right)aE^2$, where $\alpha$ is an unknown dimensionless parameter of order $0.1-1$ that is expected to depend on the director field, strength of anchoring, etc. \cite{Lazo_2}. Recent experiments \cite{Paladugu} on LCEP of spheres with $a=5\,\upmu$m show that $\alpha$ approximately equals $1$. The ICEO and LCEO flow velocities around dielectric spheres in the nematic electrolyte can thus be of comparable magnitudes. When $\frac{\varepsilon}{\varepsilon_\perp}-\frac{\sigma}{\sigma_\perp}=0$, the total velocity around the sphere does not vanish, being determined by the isotropic contribution \eqref{eq:add}. For example, with $\varepsilon_{medium}=\varepsilon_p=7$, $\eta=0.1$ Pa\,s, $a=\lambda_D=0.3\,\upmu$m, and $E=40\times10^3\,$V/m, the estimate is $v^{diel}=0.01\,\upmu$m/s. The ICEO effect is apparently more pronounced around the smaller particles explored in this work; as the particles become larger, as in the experiments \cite{Lazo_2}, this effect becomes of lesser importance. On the other hand, the LCEO effect is expected to diminish as the particle becomes smaller, since smaller (submicrometer and below) particles are not capable of producing the strong director gradients needed for charge separation. It would be of interest to explore the relative strength of induced-charge and liquid-crystal-enabled electrokinetics (ICEK and LCEK) in the isotropic and nematic phases of the same liquid crystal material for particles of different sizes.
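The order-of-magnitude estimate quoted above is easily reproduced from \eqref{eq:add}; the following snippet evaluates it with the stated values:
\begin{verbatim}
import numpy as np

eps0 = 8.854e-12   # F/m

def v_diel(beta, eps_m, eps_p, eta, a, lam_D, E):
    # ICEO velocity scale for a dielectric sphere, Eq. above
    return (beta * eps0 * eps_m / eta * a * E**2
            / (1.0 + eps_m * a / (eps_p * lam_D)))

v = v_diel(beta=0.1, eps_m=7.0, eps_p=7.0, eta=0.1,
           a=0.3e-6, lam_D=0.3e-6, E=40e3)
print(v * 1e6)   # ~ 0.01 micron/s, as quoted in the text
\end{verbatim}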
It is also worth noting that, if the applied electric field is reversed, the charge distribution depicted in Fig.~\ref{Charge_plot} is inverted while the flow profiles shown in Fig.~\ref{Velocity_plot} remain unaltered (compare, for instance, Fig.~\ref{Reversed_field_plot} to Fig.~\ref{Charge_plot}b and Fig.~\ref{Velocity_plot}b).
We conclude that the differences between the flow profiles shown in Fig.~\ref{Velocity_plot} and the experimental observations in \cite{Lazo_2} are primarily due to the different geometry of the experiment \cite{Lazo_2}, where the electrolyte was confined to a planar cell of thickness comparable to the particle diameter. Furthermore, these differences stem from the fact that the particles considered in this study are much smaller than those in \cite{Lazo_2}.
\begin{figure}
\begin{center}
\includegraphics[width=.36\textwidth]{Charge_reversed_field}
\includegraphics[width=.36\textwidth]{Velocity_reversed_field}
\caption{Charge density and velocity of LCEO around the particle with a hedgehog generated by the electric field of inverted polarity.
Here $\lambda_{\varepsilon}=1$, $\lambda_{\sigma}=2$, $\tilde{\zeta}_1=0.3$, $\tilde{\zeta}_2=0$, $\tilde{\zeta}_4=1.3$, $\tilde{\zeta}_6=-0.15$.}\label{Reversed_field_plot}
\end{center}
\end{figure}
\section{Conclusions}
In this paper we derived a mathematical model for electro-osmosis in nematic liquid crystals described in terms of the tensor order parameter. Following Onsager's variational approach to irreversible processes, we use the formalism that balances conservative and frictional forces obtained by varying the appropriately chosen free energy and dissipation functionals. In the current study these are given by their established expressions for nematic liquid crystals and colloidal suspensions. To illustrate the capabilities of the model, we consider a relatively simple example of electro-osmotic flow around an immobilized spherical particle. The physically relevant micrometer-size of the particle is chosen so that (a) the elastic energy minimizing nematic configuration contains disclination loops that can only be described within a tensor order parameter theory and (b) the equations of the governing system decouple, simplifying the computational procedure.
The numerical simulations for these particles demonstrate that both induced-charge- and liquid-crystal enabled electrokinetic effects are simultaneously present in the nematic electrolyte. The quadrupolar flow profiles around the particle encircled by an equatorial Saturn ring are symmetric with respect to the plane of the defect, while the particle accompanied by a hedgehog gives rise to the velocity fields $\mathbf{v}$ of dipolar symmetry. Unlike the LCEO in patterned nematics which vanishes when $\lambda_{\varepsilon}$ and $\lambda_{\sigma}$ are equal, here we observe nonzero velocity field $\mathbf{v}$ even in the case of $\lambda_{\varepsilon}=\lambda_{\sigma}$. This effect is in line with the model developed for ICEO flows around dielectric spheres and it should become more pronounced with the decreasing radius of the particle. When the applied electric field is reversed, the charge distribution within the system is inverted, while the flow profiles remain unaltered, confirming that the LCEO velocity is proportional to the square of the applied field.
We attribute the differences between the flow profiles obtained in this work and the experimental observations in \cite{Lazo_2} to the fact that the particle in the experiment was much larger and the geometry of the experiment itself was different. Here the particle was assumed to be suspended in space filled with the nematic electrolyte with the uniform director orientation away from the particle. On the other hand, in \cite{Lazo_2}, the electrolyte was confined to a planar cell of thickness comparable to the particle diameter.
The proposed model can also be employed to study general electrokinetic phenomena in nematics, including systems that contain macroscopic colloidal particles and complex networks of topological defects.
\begin{acknowledgments}
Support from the following National Science Foundation Grants is acknowledged by the authors: No. DMS-1434969 (D.~G. and O.~M.~T.), No. DMS-1435372 (C.~K., M.~C.~C., and J.~V.), No. DMS-1434185 (O.~L.), No. DMS-1434734 (N.~J.~W.), and No. DMS-1418991 (N.~J.~W.). The authors wish to thank Douglas Arnold for useful discussions regarding numerical simulations.
\end{acknowledgments}
\section{Introduction}
Causal analysis based on linear structural equation models and path analysis is widely used in sociology, economics, biology, etc.
Pearl extended the concept of total effects in path analysis to a general structural equation model and defined it as the intervention effect \cite{pearl1995causal}.
Fixing a variable $X$ at a certain value $x$ by an external operation is called intervention, and the intervention effect is mathematically defined as a causal effect on the response variable $Y$.
The intervention effect is defined based on a causal diagram that expresses the existence or nonexistence of a causal relationship between variables and conditional probability distributions that expresses causal relationships among variables.
However, in general, the causal diagram and the conditional probability distributions among variables are unknown, so it is necessary to estimate both from the data.
That is, the calculation of the intervention effect based on the causal diagram consists of the following steps.
\begin{enumerate}
\item Estimate a causal diagram from the data
\item Estimate the conditional probability distributions among variables from the data
\item Calculate the intervention effect
\end{enumerate}
The estimation methods for the causal diagram are roughly divided into two categories: constraint-based methods (such as the PC algorithm \cite{spirtes1991algorithm}) that estimate the structure using constraints such as conditional independence among variables, and score-based methods (such as the GES algorithm \cite{chickering2002finding}) that output a graph with the maximum approximate value of the posterior probability.
Estimation of a conditional probability distribution is a general topic not limited to causal inference; widely used approaches are to assume a parametric probability distribution and estimate its parameters, or to estimate the distribution by a nonparametric method.
In this research, we assume parametric probability distributions for the conditional probability distributions.
Although it is known that the identifiability of causal diagrams changes depending on the assumptions placed on the conditional probability distributions \cite{shimizu2006linear}, this research does not deal with that point in depth.
However, we note that the proposal in this research is applicable as long as a parametric distribution is assumed for the conditional probability distribution.
Since the intervention effect is defined on the causal diagram and the conditional probability distributions, it seems natural to estimate it by the above procedure.
However, if we formulate the problem of estimating the intervention effect based on statistical decision theory, estimating it by this procedure is not necessarily optimal.
In this study, the problem of estimating the intervention effect is formulated in the framework of statistical decision theory for each of the cases where the causal diagram is known and unknown, and the optimal decision function is derived under the Bayes criterion.
The remainder of the paper is organized as follows.
In Section 2, the definitions of the structural equation model, causal diagram, and intervention effect are described.
In Section 3, we formulate the problem to estimate the intervention effect as a statistical decision problem for the case where the causal diagram is known and derive the optimal decision function under the Bayes criterion.
In Section 4, we do the same thing as in Section 3 for the case where the causal diagram is unknown.
In Section 5, we evaluate the effectiveness of the proposed method by comparing the intervention effect estimated by the proposed method with that estimated by a two-stage method, that is, calculating the intervention effect after estimating the causal diagram and/or the conditional probability distributions.
Finally, we give a summary and future works in Section 6.
\section{Causal diagram and intervention effect}
Here, after describing the definition of the causal diagram, we describe the mathematical definition of the intervention effect.
\subsection{Causal diagram}
\begin{defi}
Let $G$ be a directed acyclic graph (DAG) and $V=(X_{1}, X_{2}, \ldots, X_{m})$ be a set of random variables that corresponds to the set of the vertices of $G$.
$G$ is called a causal diagram if it specifies the causal relationships among variables in the following form,
\begin{align}
X_{i}=g_{i}(\mbox{pa}(X_{i}),\epsilon_{i}),\quad i=1,\ldots,m, \label{SEM}
\end{align}
and the random variables are generated according to this causal relationship.
The equations (\ref{SEM}) are called structural equations for $X_{1}, X_{2}, \ldots, X_{m}$.
$\mbox{pa}(X_{i})\subset V$ is the set of variables that have an arrow that heads to $X_{i}$.
We assume that $\epsilon_{1}, \epsilon_{2}, \ldots, \epsilon_{m}$ are mutually independent.
\end{defi}
Let $p(x_{i}|\mbox{pa}(x_{i}))$ be the conditional probability distribution of $X_{i}$ given $\mbox{pa}(X_{i})$.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{diagram.eps}
\caption{Examples of causal diagram.}
\label{fig_diagram}
\end{center}
\end{figure}
\begin{example}
If the causal diagram of the random variables $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram}, there are causal relationships,
\begin{align}
Z&=g_{Z}(X, \epsilon_{Z}),\\
Y&=g_{Y}(Z, \epsilon_{Y}).
\end{align}
Similarly, if the causal diagram of the random variables $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram}, there are causal relationships,
\begin{align}
X&=g_{X}(Z, \epsilon_{X}),\\
Y&=g_{Y}(X, Z, \epsilon_{Y}).
\end{align}
\end{example}
\subsection{Intervention effect}
In a causal diagram, an external operation that fixes the value of $X$ to a constant regardless of the values of other variables is called an intervention, and the distribution of $Y$ after the intervention is called the intervention effect. Its mathematical definition is given as follows \cite{pearl1995causal}.
\begin{defi}
Let $V=\left\{X, Y, Z_{1}, Z_{2}, \ldots, Z_{p}\right\}$ be the set of vertices of a causal diagram $G$.
The intervention effect on $Y$ when intervening $X=x$ is defined as
\begin{align}
p(y|\mbox{do}(X=x))=\int\cdots\int \frac{p(x,y,z_{1},\ldots,z_{p})}{p(x|\mbox{pa}(x))}dz_{1}\ldots dz_{p}\label{effect_no_parameter}.
\end{align}
$\mbox{do}(X=x)$ means that $X$ is fixed to $x$ by intervention.
\end{defi}
(\ref{effect_no_parameter}) can be calculated only after the causal diagram is determined and the conditional distributions among the random variables are estimated.
Let $m$ be the variable that represents the causal diagram and the conditional probability distributions are parametric distributions specified by a parameter $\bm{\theta}_{m}$.
To clarify that the intervention effect depends on $m$ and $\bm{\theta}_{m}$, we rewrite (\ref{effect_no_parameter}) as follows.
\begin{multline}
p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})=\\
\int\cdots\int \frac{p(x,y,z_{1},\ldots,z_{p}|m, \bm{\theta}_{m})}{p(x|\mbox{pa}(x),m, \bm{\theta}_{m})}dz_{1}\ldots dz_{p}.\label{effect_parameter}
\end{multline}
\begin{example}
Assume that the causal diagram $m$ of $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram} and the structural equations are linear, that is,
\begin{align}
Z&=\theta_{Z|X}X+\epsilon_{Z},\quad \epsilon_{Z}\sim \mathcal{N}(0, 1^{2}),\label{example_SEM_1_1}\\
Y&=\theta_{Y|Z}Z+\epsilon_{Y},\quad \epsilon_{Y}\sim\mathcal{N}(0, 1^{2}),\label{example_SEM_1_2}
\end{align}
where $\mathcal{N}(\mu, \sigma^{2})$ denotes the normal distribution with mean $\mu$ and variance $\sigma^{2}$.
Then, $\bm{\theta}_{m}=(\theta_{Z|X}, \theta_{Y|Z})$ and the intervention effect on $Y$ when intervening $X=x$ is given by
\begin{align}
p(y|\mbox{do}(X=x),m=G_{1}, \bm{\theta}_{m})=\mathcal{N}(y; \theta_{Y|Z}\theta_{Z|X}x, 1+\theta_{Y|Z}^{2}),
\end{align}
where $\mathcal{N}(\cdot; \mu, \sigma^{2})$ denotes the probability density function of $\mathcal{N}(\mu, \sigma^{2})$.
In this case, it is well known that the intervention effect equals the conditional probability distribution $p(y|x,\bm{\theta}_{m})$, and the above formula describes this in detail.
Similarly, assume that the causal diagram $m$ of $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram} and the structural equations are given by
\begin{align}
X&=\theta_{X|Z}Z+\epsilon_{X},\quad \epsilon_{X}\sim\mathcal{N}(0, 1^{2}),\label{example_SEM_2_1}\\
Y&=\theta_{Y|X}X+\theta_{Y|Z}Z+\epsilon_{Y},\quad \epsilon_{Y}\sim\mathcal{N}(0, 1^{2}).\label{example_SEM_2_2}
\end{align}
Then, $\bm{\theta}_{m}=(\theta_{X|Z}, \theta_{Y|X}, \theta_{Y|Z})$ and the intervention effect on $Y$ when intervening $X=x$ is given by
\begin{align}
&p(y|\mbox{do}(X=x),m=G_{2}, \bm{\theta}_{m})=\mathcal{N}(y; \tilde{\mu}, \tilde{s}^{-1}),\\
&\tilde{\mu}=\theta_{Y|X}x+\theta_{Y|Z}\mu_{Z},\\
&\tilde{s}=\frac{s_{Z}}{\theta_{Y|Z}^{2}+s_{Z}},
\end{align}
where we assumed that $Z\sim\mathcal{N}(\mu_{Z}, s_{Z}^{-1})$.
\end{example}
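The expression for $G_{1}$ is readily verified by simulating the clamped system directly; in the following minimal NumPy sketch the parameter values are hypothetical, chosen only for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
th_zx, th_yz, x = 0.8, 1.5, 1.0   # hypothetical parameter values

# Simulate do(X = x) in G1: clamp X and propagate the
# structural equations for Z and Y
z = th_zx * x + rng.standard_normal(10**6)
y = th_yz * z + rng.standard_normal(10**6)

print(y.mean(), y.var())                # Monte Carlo estimates
print(th_yz * th_zx * x, 1 + th_yz**2)  # analytic values above
\end{verbatim}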
\section{Decision theoretic approach for estimating intervention effect; causal diagram is known}
Here, we consider the case where the causal diagram $m$ is known, but $\bm{\theta}_{m}$ is unknown.
In this case, we cannot calculate (\ref{effect_parameter}) directly and we have to estimate it from the data.
Let $D^{n}=(x_{n}, y_{n}, z_{1n},\ldots, z_{pn})_{n=1,\ldots,N}$ be a sample of $X, Y, Z_{1}, \ldots, Z_{p}$ of size $N$.
A decision function $AP:D^{n}\mapsto p(y|x)$ outputs an estimate of the intervention effect.
We must define a loss function for the decision function.
In this study, the Kullback-Leibler divergence from the intervention effect is used as the loss function.
\begin{multline}
Loss(\bm{\theta}_{m}, AP(D^{n}))= \\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})\ln \frac{p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})}{AP(D^{n})(y|x)}dy.\label{loss}
\end{multline}
The risk function is defined as the expectation of the loss function with respect to $D^{n}$.
\begin{align}
Risk(\bm{\theta}_{m},AP)=E_{D^{n}|\bm{\theta}_{m}}\left[Loss(\bm{\theta}_{m}, AP(D^{n}))\right].
\end{align}
The risk function is a function of the parameter $\bm{\theta}_{m}$ and there is no decision function that minimizes the risk function for all parameter $\bm{\theta}_{m}\in\Theta_{m}$.
In this study, we assume a prior distribution $p(\bm{\theta}_{m})$ for the parameter $\bm{\theta}_{m}$ and consider the following Bayes risk function.
\begin{align}
BR(AP)=E_{\bm{\theta}_{m}}\left[Risk(\bm{\theta}_{m}, AP)\right].\label{BR}
\end{align}
Then, the following theorem holds.
\begin{theo}
\label{theorem1}
The Bayes optimal decision function that minimizes (\ref{BR}) is given by
\begin{align}
AP^{*}(D^{n})=p(y|\mbox{do}(X=x),m,D^{n}),\label{bayes_optimal_fixed_model}
\end{align}
where
\begin{multline}
p(y|\mbox{do}(X=x),m,D^{n})=\\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})p(\bm{\theta}_{m}|m, D^{n})d\bm{\theta}_{m},\label{predict_fixed_model}
\end{multline}
\end{theo}
\begin{proof}
The minimization of the Bayes risk function is reduced to the minimization of the loss function weighted by the posterior distribution \cite{berger2013statistical}.
That is,
\begin{multline}
\argmin_{AP} BR(AP)=\\
\argmin_{AP} \int Loss(\bm{\theta}_{m}, AP(D^{n}))p(\bm{\theta}_{m}|m, D^{n})d\bm{\theta}_{m}.
\end{multline}
Substituting (\ref{loss}) into the loss function and removing the terms that do not depend on $AP$, we have
\begin{align}
\argmin_{AP} BR(AP)=\argmax_{AP} \int\int p(y|\mbox{do}(X=x), m,\bm{\theta}_{m})\nonumber\\
\times p(\bm{\theta}_{m}|m,D^{n})\ln AP(D^{n})d\bm{\theta}_{m}dy\\
=\argmax_{AP}\int p(y|\mbox{do}(X=x),m,D^{n})\ln AP(D^{n})dy.
\end{align}
From Shannon's inequality \cite{cover2012elements},
\begin{multline}
\argmax_{AP}\int p(y|\mbox{do}(X=x),m,D^{n})\ln AP(D^{n})dy=\\
p(y|\mbox{do}(X=x),m,D^{n}).
\end{multline}
\hfill $\Box$
\end{proof}
\begin{example}
Assume that the causal diagram $m$ for $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram} and the structural equations are given by (\ref{example_SEM_1_1}) and (\ref{example_SEM_1_2}).
In addition, as the prior distributions of $\theta_{Y|Z}, \theta_{Z|X}$, assume that $\theta_{Y|Z}, \theta_{Z|X}\sim\mathcal{N}(0,\alpha^{-1})$.
Then, the Bayes optimal estimator of the intervention effect is given by
\begin{align}
p(y|\mbox{do}(X=x), m=G_{1}, D^{n})=\nonumber\\
\int\int \mathcal{N}(y; \theta_{Y|Z}\theta_{Z|X}x, 1+\theta_{Y|Z}^{2})\mathcal{N}(\theta_{Y|Z}; \mu_{Y|Z}, s_{Y|Z}^{-1})\times \nonumber \\
\mathcal{N}(\theta_{Z|X}; \mu_{Z|X}, s_{Z|X}^{-1})d\theta_{Y|Z}d\theta_{Z|X},\label{predict_example_1}
\end{align}
\begin{align}
\mu_{Y|Z}&=s_{Y|Z}^{-1}\bm{z}^{T}\bm{y},\\
s_{Y|Z}&=\alpha+\bm{z}^{T}\bm{z},\\
\mu_{Z|X}&=s_{Z|X}^{-1}\bm{x}^{T}\bm{z},\\
s_{Z|X}&=\alpha+\bm{x}^{T}\bm{x},
\end{align}
where $\bm{x}=(x_{1},\ldots,x_{N})^{T}, \bm{y}=(y_{1},\ldots,y_{N})^{T}, \bm{z}=(z_{1},\ldots, z_{N})^{T}$.
Similarly, assume that the causal diagram $m$ for $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram} and the structural equations are given by (\ref{example_SEM_2_1}) and (\ref{example_SEM_2_2}).
In addition, as the prior distributions of $\theta_{Y|X}, \theta_{Y|Z}$, assume that $\theta_{Y|X}, \theta_{Y|Z}\sim\mathcal{N}(0, \alpha^{-1})$.
Let $\bm{\theta}_{Y|XZ}=(\theta_{Y|X},\theta_{Y|Z})$, then, the Bayes optimal estimator of the intervention effect is given by
\begin{multline}
p(y|\mbox{do}(X=x), m=G_{2}, D^{n})=\\
\int \mathcal{N}(y; \tilde{\mu}, \tilde{s}^{-1})\mathcal{N}(\bm{\theta}_{Y|XZ};\bm{\mu}_{Y|XZ}, \bm{S}_{Y|XZ}^{-1})d\bm{\theta}_{Y|XZ},\label{predict_example_2}
\end{multline}
\begin{align}
\tilde{\mu}&=\theta_{Y|X}x+\theta_{Y|Z}\mu_{Z},\\
\tilde{s}&=\frac{s_{Z}}{\theta_{Y|Z}^{2}+s_{Z}},\\
\bm{\mu}_{Y|XZ}&=\bm{S}_{Y|XZ}^{-1}\bm{X}_{\setminus \bm{y}}^{T}\bm{y},\\
\bm{S}_{Y|XZ}&=\alpha\bm{I}+\bm{X}_{\setminus \bm{y}}^{T}\bm{X}_{\setminus \bm{y}},
\end{align}
where $\mathcal{N}(\cdot; \bm{\mu}, \bm{\Sigma})$ denotes the probability density function of the multivariate normal distribution with mean vector $\bm{\mu}$ and covariance matrix $\bm{\Sigma}$ and
\begin{align}
\bm{X}_{\setminus \bm{y}}=
\begin{pmatrix}
\bm{x}^{T}\\
\bm{z}^{T}
\end{pmatrix}^{T}.
\end{align}
We note that the Bayes optimal estimators (\ref{predict_example_1}) and (\ref{predict_example_2}) cannot be calculated analytically, even for the linear structural equation models of these examples.
In the later experiments, we performed numerical integration for these calculations.
\end{example}
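As one possible implementation of this integration, the sketch below evaluates (\ref{predict_example_1}) on a tensor-product quadrature grid over $(\theta_{Y|Z}, \theta_{Z|X})$; the grid extent and resolution are ad hoc choices, and the function and variable names are ours:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def bayes_do_G1(x, y_grid, xs, ys, zs, alpha=1.0):
    # Posterior parameters of theta_{Y|Z} and theta_{Z|X} (above)
    s_yz = alpha + zs @ zs; mu_yz = (zs @ ys) / s_yz
    s_zx = alpha + xs @ xs; mu_zx = (xs @ zs) / s_zx
    # Quadrature grid covering +/- 6 posterior standard deviations
    t1 = mu_yz + np.linspace(-6.0, 6.0, 201) / np.sqrt(s_yz)
    t2 = mu_zx + np.linspace(-6.0, 6.0, 201) / np.sqrt(s_zx)
    T1, T2 = np.meshgrid(t1, t2, indexing="ij")
    w = (norm.pdf(T1, mu_yz, 1.0 / np.sqrt(s_yz))
         * norm.pdf(T2, mu_zx, 1.0 / np.sqrt(s_zx)))
    dens = np.zeros(len(y_grid))
    for i, y in enumerate(y_grid):
        f = norm.pdf(y, T1 * T2 * x, np.sqrt(1.0 + T1**2))
        dens[i] = trapezoid(trapezoid(f * w, t2, axis=1), t1)
    return dens
\end{verbatim}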
\section{Decision theoretic approach for estimating intervention effect; causal diagram is unknown}
Here, we consider the case where not only the parameter $\bm{\theta}_{m}$, but also the causal diagram $m$ is unknown.
Since $m$ is unknown, the loss function is defined for $m$ and $\bm{\theta}_{m}$.
\begin{multline}
Loss(m,\bm{\theta}_{m}, AP(D^{n}))=\\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})\ln \frac{p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})}{AP(D^{n})(y|x)}dy.
\end{multline}
The risk function is given by
\begin{align}
Risk(m,\bm{\theta}_{m},AP)=E_{D^{n}|\bm{\theta}_{m},m}\left[Loss(m, \bm{\theta}_{m}, AP(D^{n}))\right].
\end{align}
In this study, we consider the case where the set of candidate causal diagrams is given by $\mathcal{M}$ and we can assume the prior distribution $p(m)$ for $m\in\mathcal{M}$ and $p(\bm{\theta}_{m}|m)$ for $\bm{\theta}_{m}$ under $m$.
Then, the Bayes risk function is given by
\begin{align}
BR(AP)=E_{m}\left[E_{\bm{\theta}_{m}|m}\left[Risk(m,\bm{\theta}_{m}, AP)\right]\right].\label{BR_model}
\end{align}
In this case, the following theorem holds.
\begin{theo}
The Bayes optimal estimator that minimizes (\ref{BR_model}) is given by
\begin{align}
AP^{*}(D^{n})=p(y|\mbox{do}(X=x),D^{n}),\label{predict_mixed_model}
\end{align}
where
\begin{multline}
p(y|\mbox{do}(X=x),D^{n})=\\
\sum_{m\in\mathcal{M}}p(m|D^{n})p(y|\mbox{do}(X=x),m,D^{n}),
\end{multline}
and $p(y|\mbox{do}(X=x),m,D^{n})$ is given by (\ref{predict_fixed_model}).
\end{theo}
\begin{proof}
It is proved in the same manner as the proof of Theorem 1.\hfill $\Box$
\end{proof}
\begin{example}
Assume that the set $\mathcal{M}$ of the candidate causal diagrams is $\left\{G_{1}, G_{2}\right\}$ in Figure \ref{fig_diagram} and the structural equations under each causal diagram are given in the same way as in Examples 2 and 3.
When the prior distribution of the model $m$ is $p(m=G_{1}), p(m=G_{2})$ and the prior distribution of the parameter $\bm{\theta}_{m}$ under each model are given in the same way as in Example 3, the Bayes optimal estimator of the intervention effect is given by
\begin{align}
&p(y|\mbox{do}(X=x),D^{n})=\nonumber\\
&p(m=G_{1}|D^{n})p(y|\mbox{do}(X=x),m=G_{1},D^{n})+\\
&p(m=G_{2}|D^{n})p(y|\mbox{do}(X=x),m=G_{2},D^{n}),\nonumber
\end{align}
where $p(y|\mbox{do}(X=x),m=G_{1},D^{n})$ and $p(y|\mbox{do}(X=x),m=G_{2},D^{n})$ are given by (\ref{predict_example_1}) and (\ref{predict_example_2}), respectively.
\end{example}
\section{Numerical experiments}
In this section, we show the effectiveness of the proposed method through numerical simulations.
\subsection{Case 1 : causal diagram is known}
\label{experiment_1}
First, we deal with the case where the causal diagram is known.
We consider the two cases, one is that the true diagram is $G_{1}$ in Figure \ref{fig_diagram} and the other is that the true diagram is $G_{2}$ in Figure \ref{fig_diagram}.
The structural equations are (\ref{example_SEM_1_1}) and (\ref{example_SEM_1_2}) for $G_{1}$ and (\ref{example_SEM_2_1}) and (\ref{example_SEM_2_2}) for $G_{2}$.
We assume that the probability distributions of variables corresponding to leaf nodes in each model, that is, $X$ in $G_{1}$ and $Z$ in $G_{2}$, are both $\mathcal{N}(0, 1^{2})$.
We also assume that the prior distributions of the parameters under each model, that is, $\theta_{Y|Z}, \theta_{Z|X}$ in $G_{1}$ and $\theta_{X|Z}, \theta_{Y|X}, \theta_{Y|Z}$ in $G_{2}$, are all $\mathcal{N}(0, 1^{2})$.
We consider the problem to estimate the intervention effect on $Y$ when intervening $X=1$ given $D^{n}=(x_{n}, y_{n}, z_{n})_{n=1,\ldots,N}$ as a sample of $(X, Y, Z)$.
We compare the following three methods.
\begin{description}
\item[Method 1 (ML)] \ \\ Calculate the maximum likelihood (ML) estimator $\bm{\theta}_{m, ML}$ by
\begin{align}
\hat{\bm{\theta}}_{m, ML}=\argmax_{\bm{\theta}_{m}}p(D^{n}|\bm{\theta}_{m}),
\end{align}
and substitute it to (\ref{effect_parameter}).
\item[Method 2 (MAP)] \ \\Calculate the maximum a posteriori (MAP) estimator $\bm{\theta}_{m, MAP}$ by
\begin{align}
\hat{\bm{\theta}}_{m,MAP}=\argmax_{\bm{\theta}_{m}}p(\bm{\theta}_{m}|D^{n}),
\end{align}
and substitute it to (\ref{effect_parameter}).
\item[Method 3 (BAYES)] \ \\Calculate the Bayes optimal estimator (\ref{bayes_optimal_fixed_model}).
\end{description}
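For the linear-Gaussian model $G_{1}$, the estimators used in Methods 1 and 2 have closed forms, with the MAP estimate being a ridge-type shrinkage of the least squares solution; a minimal sketch (names are ours):
\begin{verbatim}
import numpy as np

def estimators_G1(xs, ys, zs, alpha=1.0):
    # ML: least squares; MAP with the N(0, 1/alpha) prior:
    # shrunken least squares. Substituting either into the
    # intervention-effect formula of Example 2 gives
    # Method 1 or Method 2, respectively.
    th_zx_ml  = (xs @ zs) / (xs @ xs)
    th_yz_ml  = (zs @ ys) / (zs @ zs)
    th_zx_map = (xs @ zs) / (alpha + xs @ xs)
    th_yz_map = (zs @ ys) / (alpha + zs @ zs)
    return (th_zx_ml, th_yz_ml), (th_zx_map, th_yz_map)
\end{verbatim}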
Figure \ref{fig_result1} shows the Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{1}$ and the estimator of each method.
Figure \ref{fig_result2} is the same result for the model $G_{2}$.
In either case, as the sample size increases, the results of the three methods converge.
This can be explained by the fact that the posterior distribution of the parameters concentrates around the MAP estimator as the sample size increases, and the MAP estimator and the ML estimator also approach each other.
However, when the sample size is small, Method 2 is better than Method 1, and Method 3 is better than Method 2.
In this experiment, we experimented with models with very few variables, so the differences among the methods are small, but the differences are expected to become larger as the model becomes more complicated.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{model1.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{1}$ and the estimator of each method.}
\label{fig_result1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{model3.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{2}$ and the estimator of each method.}
\label{fig_result2}
\end{center}
\end{figure}
\subsection{Case 2 : causal diagram is unknown}
Next, we deal with the case where the causal diagram is unknown.
Let the set $\mathcal{M}$ of the candidates of the causal model be $\left\{G_{1}, G_{2}\right\}$ in Figure \ref{fig_diagram}.
The assumptions for the structural equations, the probability distributions of the leaf variables, and the prior distributions of the parameters are the same as the previous experiment.
We also assume that $p(m=G_{1})=p(m=G_{2})=\frac{1}{2}$.
Note that $X$ and $Y$ are conditionally independent given $Z$ in the model $G_{1}$, but they are not in the model $G_{2}$, so we can identify which model generated the data with high probability as the sample size increases.
As in the case of the previous experiment, we consider the problem to estimate the intervention effect on $Y$ when intervening $X=1$ given $D^{n}=(x_{n}, y_{n}, z_{n})_{n=1,\ldots,N}$ as a sample of $(X, Y, Z)$.
We compare the following two methods.
\begin{description}
\item[Method 1 (MAP)] \ \\Estimate the model by
\begin{align}
\hat{m}=\argmax_{m\in\mathcal{M}}p(m|D^{n})
\end{align}
and calculate the Bayes optimal estimator under the model $\hat{m}$,
\begin{align}
p(y|\mbox{do}(X=x),\hat{m},D^{n}).
\end{align}
\item[Method 2 (BAYES)] \ \\Calculate the Bayes optimal estimator (\ref{predict_mixed_model}).
\end{description}
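The model posterior required by both methods is available in closed form here, since marginalizing $\bm{\theta}_{m}$ out of each linear-Gaussian structural equation leaves a Gaussian evidence; the sketch below computes $p(m|D^{n})$ with equal prior weights and root nodes distributed as $\mathcal{N}(0,1^{2})$, as in this experiment (the $O(N^{3})$ covariance construction is adequate for the sample sizes considered):
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal

def log_evidence(X, y, alpha=1.0):
    # log p(y|X) for y = X theta + eps, eps ~ N(0, I),
    # theta ~ N(0, I/alpha): marginally y ~ N(0, I + X X^T/alpha)
    C = np.eye(len(y)) + X @ X.T / alpha
    return multivariate_normal.logpdf(y, np.zeros(len(y)), C)

def posterior_model_probs(xs, ys, zs, alpha=1.0):
    # p(m|D^n) for M = {G1, G2}, p(G1) = p(G2) = 1/2; the evidence
    # of each DAG factorizes over its structural equations
    lg1 = (norm.logpdf(xs).sum()                     # root X ~ N(0,1)
           + log_evidence(xs[:, None], zs, alpha)    # Z given X
           + log_evidence(zs[:, None], ys, alpha))   # Y given Z
    lg2 = (norm.logpdf(zs).sum()                     # root Z ~ N(0,1)
           + log_evidence(zs[:, None], xs, alpha)    # X given Z
           + log_evidence(np.c_[xs, zs], ys, alpha)) # Y given X, Z
    w = np.exp(np.array([lg1, lg2]) - max(lg1, lg2))
    return w / w.sum()
\end{verbatim}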
Figure \ref{fig_result3} shows the Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ and the estimator of each method.
The results of the two methods also approach each other as the sample size increases.
This can be explained by the fact that, as the sample size increases, the posterior probability of the true model approaches $1$.
However, when the sample size is small, Method 2 is better than Method 1.
In this experiment, we experimented with only two candidate models, so there are differences between the two methods only for small sample sizes.
It is expected that the difference will increase as the number of candidate models increases.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{mixed_model.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ and the estimator of each method. Models $G_{1}$ and $G_{2}$ appear with equal probability.}
\label{fig_result3}
\end{center}
\end{figure}
\section{Conclusion and future works}
In this study, the Bayes optimal estimation method for the intervention effect was derived by formulating the estimation problem in the framework of statistical decision theory.
In the estimation of the intervention effect, it is common to first estimate the causal diagram, then estimate the conditional probability distributions among the variables, and finally calculate the intervention effect.
However, from the viewpoint of the Bayes decision theory framework, it is optimal not to fix a single model and parameter value, but to weight them by the posterior probability and the posterior distribution.
We describe some future works.
In the examples in this paper, we dealt with the case where the structural equations are linear.
It is necessary to derive the general form of the Bayes optimal estimator for cases beyond the linear one.
Further, it would be meaningful to investigate how the differences between the methods in the experiments grow in cases other than the linear structural equation model.
In this study, we did not mention the calculation methods and computational complexity.
Even if the model is known and the structural equations are linear, the Bayes optimal intervention effect estimator cannot be calculated analytically.
Therefore, in this paper, the estimator was calculated by numerical integration.
As the model becomes more complicated, the computational complexity will become higher.
It is necessary to construct an approximation algorithm that efficiently calculates the Bayes optimal estimator.
Also, when the model is unknown, it is necessary to calculate the posterior probability of all models, but as the number of candidate models becomes large, this also becomes computationally difficult.
It is also necessary to construct an approximation algorithm that efficiently calculates the Bayes optimal estimator in the case where the model is unknown.
\section*{Acknowledgment}
We would like to acknowledge all
members of Matsushima Lab. and Goto Lab. in Waseda Univ. for their
helpful suggestions to this work.
This research is partially supported by No. 16K00417 of Grant-in-Aid for
Scientific Research Category (C) and No. 18H03642 of Grant-in-Aid for Scientific Research Category (A), Japan Society for the Promotion
of Science.
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
Not long after the discovery of pulsars---whose characteristic signal was linked to magnetic fields \citep{hewish_etal_1968}---the potential role of magnetic fields in the core-collapse supernova (CCSN) explosion mechanism began to be investigated \citep[e.g.,][]{leblanc_wilson_1970,bisnovatyi-kogan_etal_1976,meier_etal_1976,symbalisty_1984}.
In principle, a differentially rotating proto-neutron star (PNS) could both amplify magnetic fields and serve as an energy reservoir available to be tapped by those fields, giving rise to magnetically powered explosions.
An early conclusion, however, was that both unrealistically rapid rotation \emph{and} unrealistically strong magnetic fields would be needed at the pre-collapse stage for magnetic fields to play a principal role in the explosion dynamics \citep{leblanc_wilson_1970,symbalisty_1984}.
In more recent years interest in strong magnetic fields has returned in connection with a number of observables related to core-collapse supernovae, including asymmetries in the explosion ejecta \citep{wheeler_etal_2002}, natal neutron star kick velocities \citep{laiQian_1998}, and especially the high-energy electromagnetic activity connected to some neutron stars known as magnetars, or Anomalous X-ray Pulsars (AXPs) and Soft Gamma Repeaters (SGRs) \citep[e.g.,][]{duncanThompson_1992,thompsonDuncan_2001,hurley_etal_2005,woodsThompson_2006}.
AXPs and SGRs are characterized by quiescent X-ray luminosities as high as $10^{35}$~erg s$^{-1}$, with sporadic outbursts releasing up to $10^{41}$~erg per event.
Gamma-ray outbursts from SGRs are even more energetic, an extreme example being the giant flare from SGR 1806-20, which released an estimated $10^{46}$~erg over 380~s \citep{hurley_etal_2005}.
Furthermore, AXPs and SGRs are neutron stars characterized by relatively long rotation periods ($P\gtrsim1$~s) and high spin-down rates ($\dot{P}\gtrsim10^{-12}$~ss$^{-1}$) \citep[e.g.,][]{lorimerKramer_2005}.
As their rotational energy cannot account for the electromagnetic emission, and because of the strong magnetic torques implied by high spin-down rates, they are believed to be young neutron stars powered by dissipation of extremely strong surface magnetic fields \citep[$10^{14}$-$10^{15}$~G,][]{duncanThompson_1996,thompsonDuncan_2001}.
On the theoretical side, the discovery of the magneto-rotational instability (MRI) by \citet{balbusHawley_1991} and its application to CCSNe \citep[initiated by][]{akiyama_etal_2003} relaxed the requirement of strong pre-collapse $B$-fields,
renewing interest in magnetic fields as a possible key ingredient in the explosion mechanism of some supernovae \citep[i.e., those from rapidly rotating progenitor cores; e.g.,][]{wheeler_etal_2002,obergaulinger_etal_2005,moiseenko_etal_2006,burrows_etal_2007,takiwaki_etal_2009}.
(The MRI results in exponential growth of the magnetic energy on the rotation timescale.)
However, the rotational energy falls off quadratically with increasing rotation period, and is about $5\times10^{49}$~erg for a 20~ms period PNS---much less than the characteristic CCSN explosion energy of $\sim10^{51}~\mbox{erg}\equiv1~\mbox{Bethe (B)}$.
Thus any magneto-rotationally driven supernovae likely would be peculiar events, since magnetic progenitor cores tend to rotate slowly at the pre-collapse stage \citep{heger_etal_2005}.
Leaving aside the explosion mechanism, the relationship between the formation of neutron star magnetic fields and CCSNe is still an open and interesting question, particularly in the case of magnetars (AXPs and SGRs) \citep{lorimerKramer_2005}.
\citet{thompsonDuncan_1993} argued that such strong fields must be generated during the neutrino cooling epoch after the collapse of the progenitor's iron core, and possibly before the explosion is initiated ($\sim1$~s after core collapse).
Their model remains one of the prevailing theories for magnetar formation, and includes a convective $\alpha-\Omega$ dynamo, which operates when the rotation period is comparable to the turnover time of entropy-driven convection ($\lesssim3$~ms) near the surface of the PNS.
The rapid turnover time may suggest that magnetars are formed in the magnetically-driven explosion of collapsed, rapidly rotating progenitors, whose remnant is spun down by MHD processes at later times.
\citet{bonanno_etal_2003,bonanno_etal_2005} found that neutron finger instabilities \citep[e.g.,][]{bruennDineva_1996} may also result in dynamo action in PNSs with rotation periods as long as 1~s.
In this scenario, the formation of neutron star magnetic fields may be slow (compared to the explosion time scale), and their creation is not necessarily tied to dynamics in the supernova explosion.
The MRI may also operate near the surface of the PNS, and contribute to neutron star magnetization.
The lack of sufficient rotational energy in magnetized pre-collapse progenitor cores, as predicted by stellar evolution models \citep{heger_etal_2005}, has sparked some recent interest in MHD processes in non-rotating CCSN environments \citep{endeve_etal_2010,guilet_etal_2011,obergaulingerJanka_2011}.
These studies investigate field amplification mechanisms and the possible role of amplified $B$-fields on the dynamics of slowly or non-rotating collapsed progenitors, in which rotational MHD processes are insignificant.
In particular, \citet[][hereafter \citetalias{endeve_etal_2010}]{endeve_etal_2010} studied magnetic field amplification by the stationary accretion shock instability \citep[SASI,][]{blondin_etal_2003}.
The SASI is central to the theory of CCSNe: recent simulations lead to the conclusion that it likely plays an important role in neutrino-powered explosions \citep{bruenn_etal_2006,buras_etal_2006,mezzacappa_etal_2007,scheck_etal_2008,marekJanka_2009,suwa_etal_2010,muller_etal_2012}, and may also explain certain observables of pulsars, including their proper motion \citep{scheck_etal_2004} and spin \citep{blondinMezzacappa_2007}.
Thus, magnetic fields may be an important part of a supernova model if the SASI is found to be sensitive to their presence.
In \citetalias{endeve_etal_2010} we adopted the idealized model of \citet{blondin_etal_2003} and \citet{blondinMezzacappa_2007}, and added a weak radial (split monopole) magnetic field.
We presented results from 2D (axisymmetric) and 3D MHD simulations of the SASI, and found that SASI-driven flows may result in significant magnetic field amplification.
Magnetic field evolution in axisymmetric simulations was found to be geometrically constrained.
Moreover, the non-axisymmetric spiral SASI mode \citep{blondinMezzacappa_2007} dominates the post-shock flows in 3D simulations at late times.
The nonlinear evolution of the spiral mode drives vigorous turbulence below the shock, which results in exponential amplification of $B$-fields due to ``stretching" \citep[e.g.,][]{ott_1998}, and the magnetic energy becomes concentrated in intense, intermittent magnetic flux ropes.
We presented results from models with non-rotating and weakly rotating initial conditions, and weak ($10^{10}$~G) and stronger ($10^{12}$~G) initial magnetic fields.
The magnetic fields were not found to reach dynamically significant levels (i.e., components of the Maxwell stress tensor did not contribute significantly to the total stress), and hence no impact of magnetic fields on local or global dynamics was demonstrated.
However, we found that SASI-induced turbulent magnetic field amplification is very sensitive to the spatial resolution adopted in the numerical simulations.
Most of the 3D models presented in \citetalias{endeve_etal_2010} were performed at ``medium" spatial resolution (grid cells with sides $\Delta l=1.56$~km), while one model was performed with ``high" spatial resolution ($\Delta l=1.17$~km).
The thickness of magnetic flux ropes was found to decrease in proportion to $\Delta l$.
We did not observe convergence of $B$-field amplification with increasing spatial resolution.
Nevertheless, the simulations implied neutron star magnetization as a result of SASI-induced magnetic field amplification.
This paper continues and extends the investigations initiated in \citetalias{endeve_etal_2010}.
It improves on our previous study in several important ways, including (1) coverage of a larger parameter space, (2) higher spatial resolution (up to $1280^{3}$ zones), and (3) computation of kinetic and magnetic energy spectra.
With the new set of simulations we investigate in some detail the nature of SASI-driven turbulence, and the growth and impact of magnetic fields during operation of the SASI.
We investigate the saturation level of magnetic energy in our simulations, the (kinetic) energy reservoir available for magnetic field amplification, and the factors determining the magnetic energy growth rate.
We also consider as in \citetalias{endeve_etal_2010} the impact of amplified magnetic fields on global shock dynamics, in particular any impact they may have on the SASI.
Finally, we attempt to quantify the levels of neutron star magnetization that may be expected from SASI dynamics.
We find that the SASI-driven turbulence shares several similarities with non-helical turbulence \citep[e.g.,][]{brandenburg_etal_1996,haugen_etal_2004}, and results in an efficient small-scale dynamo.
Magnetic fields grow exponentially in the turbulent flows driven by the SASI as long as the ``kinematic regime'' remains valid.
The kinematic regime ends when the magnetic energy becomes comparable (locally) to the kinetic energy of the turbulent flows---the magnetic energy source.
From the computed energy spectra we estimate the ``turbulent" kinetic energy, $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$, available for magnetic field amplification, and, in our idealized model, $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$ constitutes about $10\%$ of the total kinetic energy below the shock ($E_{\mbox{\tiny kin}}\sim5\times10^{49}$~erg).
The total magnetic energy saturates at about $E_{\mbox{\tiny mag}}\sim5\times10^{47}$~erg.
The presence of amplified magnetic fields results in less kinetic energy on small spatial scales, but we find no impact of magnetic fields on global shock dynamics, which is consistent with considerations of the energetics.
However, magnetic field evolution remains sensitive to numerical resolution, and the fields are subject to significant numerical dissipation during the saturated state; our ability to quantify fully the impact of magnetic fields in a more realistic situation is therefore limited.
The magnetic energy growth time decreases with increasing resolution, and, based on the turnover time of the SASI-driven turbulence, is estimated to be a few milliseconds.
We argue that the MHD processes studied in this paper may contribute significantly to strong, small-scale neutron star magnetic fields, and provide a connection between the magnetic fields of neutron stars at birth and supernova dynamics.
The saturation energies may be sufficient to power flaring activity of AXPs, and possibly SGRs.
Moreover, the formation of these fields does not require progenitor rotation.
\section{SETUP OF NUMERICAL SIMULATIONS}
We employ the same numerical code and three-dimensional initial conditions we used in \citetalias{endeve_etal_2010}, which follow closely the adiabatic setup described in \citet{blondin_etal_2003} and \citet{blondinMezzacappa_2007}: a stationary accretion shock is placed at a radius $r=R_{\mbox{\tiny{Sh}}}=200$~km, and a highly supersonic flow is nearly free-falling towards the shock for $r>R_{\mbox{\tiny{Sh}}}$ with $\edens{kin}+\edens{grav}\approx0$.
Between the shock and the PNS the flow settles subsonically---obeying the Bernoulli equation $\edens{kin}+\edens{int}+P+\edens{grav}=0$---and is nearly in hydrostatic equilibrium.
Matter is allowed to flow through an inner boundary placed at $r=R_{\mbox{\tiny PNS}}=40$~km.
The mass density and pressure just inside $R_{\mbox{\tiny PNS}}$ are determined from values just outside $R_{\mbox{\tiny PNS}}$ using power-law extrapolations: $\rho\propto r^{-3}$ and $P\propto r^{-4}$, respectively (a procedure that proved necessary in order to maintain the steady state of the unperturbed initial condition).
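As an illustration of this boundary treatment, the following schematic sketch (Python, with hypothetical array and function names; the actual implementation in GenASiS differs) fills ghost zones inside $R_{\mbox{\tiny PNS}}$ using the stated power laws:
\begin{verbatim}
import numpy as np

def fill_inner_ghost_zones(rho, P, r, n_ghost=2):
    # Schematic 1D sketch (hypothetical names): anchor the power-law
    # extrapolations rho ~ r^-3 and P ~ r^-4 to the first zone outside
    # R_PNS, then fill the ghost zones inside the boundary.
    i0 = n_ghost                      # first interior zone
    for i in range(n_ghost):
        s = r[i] / r[i0]
        rho[i] = rho[i0] * s**(-3.0)  # rho proportional to r^-3
        P[i]   = P[i0]   * s**(-4.0)  # P proportional to r^-4
    return rho, P
\end{verbatim}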
\begin{figure}
\epsscale{1.0}
\plotone{./figure1.png}
\caption{Plot of the initial condition for the non-rotating weak-field model with $B_{0}=10^{10}$~G ($\model{10}{0.0}{00}$): internal energy density ($\edens{int}$, solid line), magnitude of gravitational potential energy density ($|\edens{grav}|$, dash-dot line), kinetic energy density ($\edens{kin}$, dotted line), and magnetic energy density ($\edens{mag}$, dashed line) versus radial distance from the center of the PNS. The surface of the PNS (our inner boundary) is fixed at $r=R_{\mbox{\tiny PNS}}=40$~km and the shock is initially located at $r=R_{\mbox{\tiny Sh}}=200$~km. Inside the shock $|\edens{grav}|$, $\edens{int}$ and $\edens{mag}$ follow roughly the same power-law ($\propto r^{-4}$), while $\edens{kin}\propto r^{-1}$. The flow is in steady state free-fall outside $R_{\mbox{\tiny{Sh}}}$, with $\edens{kin}$ and $|\edens{grav}|$ proportional to $r^{-2.5}$, and $\edens{int}\propto r^{-2}$. $\edens{mag}$ has been multiplied by $10^{6}$ to become visible on the plot. (The dashed line is also identical to $\edens{mag}$ in the strong-field model ($\model{13}{0.0}{00}$), cf. Table \ref{tab:computedModels}.) \label{fig:initialCondition}}
\end{figure}
Figure \ref{fig:initialCondition} displays the initial configuration of a spherically symmetric, non-rotating stationary accretion shock with a weak radial magnetic field ($B_{0}=1\times10^{10}$~G; the initial magnetic fields in our simulations are discussed in further detail below).
We plot internal energy density $\edens{int}=P/(\gamma-1)$, kinetic energy density
$\edens{kin}= \rho\vect{u}\cdot\vect{u}/2$,
magnetic energy density
$\edens{mag}=\vect{B}\cdot\vect{B}/(2\mu_{0})$,
and the magnitude of the gravitational potential energy density $\edens{grav}=\rho\Phi$ versus radial distance from the center of the star.
Here $\rho$, $\vect{u}$, $P$, $\vect{B}$, and $\Phi$ are the mass density, fluid velocity, fluid pressure, magnetic flux density (magnetic field), and gravitational potential, respectively.
The vacuum permeability is denoted $\mu_{0}$.
We adopt the ideal gas equation of state, with the ratio of specific heats set to $\gamma=4/3$.
The time-independent point-mass gravitational potential is $\Phi=-GM/r$, where $G$ is Newton's constant and $M=1.2~M_{\odot}$ is the mass of the central object.
The accretion rate ahead of the shock is $\dot{M}=0.36~M_{\odot}\mbox{ s}^{-1}$, which is held fixed during the simulations.
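As a rough consistency check on these parameters (an order-of-magnitude estimate added here for orientation), the near free-fall condition $\edens{kin}+\edens{grav}\approx0$ ahead of the shock implies a pre-shock speed
\begin{equation}
|\vect{u}|\approx\sqrt{2GM/R_{\mbox{\tiny{Sh}}}}\approx4\times10^{4}\mbox{ km s}^{-1}
\end{equation}
for $M=1.2~M_{\odot}$ and $R_{\mbox{\tiny{Sh}}}=200$~km, consistent with the supersonic velocity scales encountered in the simulations (cf. Figure \ref{fig:machNumberAndVorticity}).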
Our numerical simulation code, GenASiS, solves the adiabatic, non-relativistic, ideal MHD equations including gravity \citepalias[cf. Eqs. (1)-(4) in][]{endeve_etal_2010}.
Starting from the semi-analytic initial condition, balance equations for mass density $\rho$, momentum density $\vect{S}=\rho\vect{u}$, and magneto-fluid energy density $\edens{fluid}=\edens{int}+\edens{kin}+\edens{mag}$ are evolved with a second-order HLL-type ideal MHD scheme in a manner that ensures conservation of mass and energy (i.e., volume integrals of $\rho$ and $\edens{fluid}+\edens{grav}$) to numerical precision.
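Schematically (in one dimension, and with source terms such as gravity omitted), such a conservative finite-volume update takes the familiar form
\begin{equation}
U_{i}^{n+1}=U_{i}^{n}-\f{\Delta t}{\Delta l}\left(F_{i+1/2}-F_{i-1/2}\right),
\end{equation}
where $U$ denotes a conserved variable and $F_{i\pm1/2}$ the HLL numerical fluxes at the zone interfaces; because a flux leaving one zone enters its neighbor, volume integrals of $U$ change only through the domain boundaries.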
The magnetic induction equation is evolved in a divergence-free manner via the method of constrained transport \citep{evansHawley_1988}. (See \citetalias{endeve_etal_2010} and references therein for further details; see also Appendix \ref{app:numericalDissipation} in this paper.)
Without initial perturbations the initial configuration in Figure \ref{fig:initialCondition} remains stationary.
In order to initiate the SASI we perturb the initial condition by adding small ($\sim1\%$) random perturbations to the initial pressure profile in the region $r\in[R_{\mbox{\tiny PNS}},R_{\mbox{\tiny{Sh}}}]$.
These perturbations initiate the SASI and allow us to study the evolution of magnetic fields in SASI-driven flows.
The topology, strength and distribution of magnetic fields in core-collapse supernova progenitors are highly uncertain.
A similar uncertainty applies to our knowledge of the angular momentum distribution in the progenitor core.
These uncertainties then apply directly to the initial conditions of simulations aimed at studying the evolution and impact of magnetic fields in core-collapse supernovae.
Rotation and magnetic fields in stellar interiors are intimately coupled in a complex multidimensional interplay.
Stellar core rotation can drive the evolution of magnetic fields, while the magnetic fields can play an important role in distributing the core's angular momentum \citep[e.g.,][]{spruit_2002}.
Three-dimensional stellar evolution models (even without magnetic fields) extending all the way to iron core collapse are currently not available.
However, some insight into the issue of core magnetic fields (and rotation) is provided by recent stellar evolution calculations \citep[e.g.,][]{heger_etal_2005,meynet_etal_2011}.
In particular, \citet{heger_etal_2005} included magnetic fields in their calculations, and found that magnetic torques can significantly reduce the rotation rate of the pre-collapse iron core.
The resulting magnetic fields were dominated by a toroidal component $B_{\varphi}$ ($B_{\varphi}/B_{r}=10^{3}-10^{4}$, where $B_{r}$ is the poloidal (radial) component of the magnetic field).
They also reported that the core rotation rate and magnetic field strength at the pre-supernova stage are increasing functions of progenitor mass.
In the iron core of their 15~$M_{\odot}$ model, the toroidal and poloidal magnetic fields are $B_{\varphi}\approx5\times10^{9}$~G and $B_{r}\approx8\times10^{5}$~G, respectively, while in their 35~$M_{\odot}$ model, the toroidal and poloidal magnetic fields are $B_{\varphi}\approx1\times10^{10}$~G and $B_{r}\approx1\times10^{7}$~G, respectively.
Accounting for the three orders of magnitude increase attained during core-collapse, the \citet[][]{heger_etal_2005} models predict the post-bounce toroidal and poloidal magnetic fields to be in the range of $10^{12}-10^{13}$~G and $10^{9}-10^{10}$~G, respectively.
This is in the range inferred for `common pulsars' from observations of pulsar spin periods and corresponding spin-down rates \citep{lorimerKramer_2005}, but significantly lower than the field strengths inferred for magnetars \citep[][]{duncanThompson_1992}.
\begin{table}
\begin{center}
\caption{Tabular overview of computed models. \label{tab:computedModels}}
\begin{tabular}{cccc}
Model & $B_{0}$ (G) & $l_{0}$ (cm$^{2}$ s$^{-1}$) & $t_{\mbox{\tiny end}}$ (ms) \\
\tableline
\tableline
\model{10}{0.0}{00} & $1\times10^{10}$ & 0.0 & 1100 \\
\model{10}{1.5}{15} & $1\times10^{10}$ & $1.5\times10^{15}$ & 878 \\
\model{10}{4.0}{15} & $1\times10^{10}$ & $4.0\times10^{15}$ & 678 \\
\tableline
\model{12}{0.0}{00}\tablenotemark{a} & $1\times10^{12}$ & 0.0 & 1126 \\
\model{12}{1.5}{15} & $1\times10^{12}$ & $1.5\times10^{15}$ & 1000 \\
\model{12}{4.0}{15} & $1\times10^{12}$ & $4.0\times10^{15}$ & 644 \\
\tableline
\model{13}{0.0}{00} & $1\times10^{13}$ & 0.0 & 1100 \\
\tableline
\tableline
\end{tabular}
\tablecomments{$^{\rm a}$ Model computed with multiple grid resolutions.}
\end{center}
\end{table}
Investigating the role of initial $B$-field topology in our simulations is beyond the scope of this study, which is restricted to an initially radial (split monopole) magnetic field configuration: $B_{r}=\mbox{sign}(\cos\vartheta)\times B_{0}(R_{\mbox{\tiny PNS}}/r)^{2}$, where $\vartheta$ is the polar angle.
We only vary the strength of the initial magnetic field $B_{0}$ at the surface of the PNS ($r=R_{\mbox{\tiny PNS}}$).
In particular, we vary $B_{0}$ in the range from $1\times10^{10}$~G to $1\times10^{13}$~G (cf. Table \ref{tab:computedModels}).
The initial magnetic energy density profile for the model with $B_{0}=1\times10^{10}$~G is represented by the dashed line in Figure \ref{fig:initialCondition}, where it has been boosted by a factor of $10^{6}$ to become visible on the plot.
(The corresponding profile for the model with $B_{0}=1\times10^{13}$~G is identical to the dashed line in Figure \ref{fig:initialCondition}.)
Clearly, when comparing the magnetic energy density to $\edens{kin}$ and $\edens{int}$, all our models are initiated with weak magnetic fields.
From the perspective of the \citet{heger_etal_2005} models, our initial magnetic fields are purely poloidal and stronger than their predicted poloidal fields, but comparable (in magnitude) to their predicted toroidal magnetic fields.
Based on the expected multidimensional character of the post-bounce supernova dynamics (e.g., convection and the SASI) and the strength of the magnetic fields ($\edens{mag}$ is small relative to $\edens{kin}$ and $\edens{int}$) we do not expect the magnetic fields to retain the anisotropic ($B_{\varphi}/B_{r}\gg 1$) configuration predicted by the stellar evolution calculations of \citet{heger_etal_2005}.
Progenitors from multidimensional stellar evolution calculations may deviate significantly from their spherically symmetric counterparts \citep[][]{arnettMeakin2011}.
We believe the initial magnetic field configuration we have chosen has at most a secondary impact on our results, and that initial insight into the MHD evolution in core-collapse supernovae can be obtained from the simulations presented here.
For comparison, the non-rotating simulations recently presented by \citet{obergaulingerJanka_2011} with weak initial magnetic fields (models s15-B10 and s15-B11 in that study) start with purely poloidal pre-collapse core magnetic fields of $1\times10^{10}$~G and $1\times10^{11}$~G, respectively.
After core-collapse and shock stagnation, the strength of the magnetic field in the stable layer separating the PNS convection zone and the gain region is about $4\times10^{12}$~G and $3\times10^{13}$~G, respectively (cf. Table 2 in \citealt{obergaulingerJanka_2011}).
This layer coincides roughly with our inner boundary at $r=R_{\mbox{\tiny PNS}}$.
Thus, the magnetic field strength in their collapsed weak-field models is comparable (initially) to that of our strongest-field model.
Our rotating models are initiated by setting the pre-shock gas into rotation about the $z$-axis by specifying the azimuthal velocity according to $u_{\varphi}=l_{0}\sin\vartheta/r$, where $l_{0}$ is the (constant) specific angular momentum.
We have computed rotating models where $l_{0}$ has been set to $1.5\times10^{15}$~cm$^{2}$ s$^{-1}$ and $4.0\times10^{15}$~cm$^{2}$ s$^{-1}$.
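The initial magnetic field and rotation profiles are simple enough to state compactly; the following sketch (Python, hypothetical function and argument names, CGS units assumed) evaluates them on a Cartesian grid:
\begin{verbatim}
import numpy as np

def initial_field_and_rotation(x, y, z, B0=1.0e10,
                               l0=1.5e15, R_PNS=4.0e6):
    # Split-monopole field B_r = sign(cos theta) B0 (R_PNS/r)^2 and
    # rotation u_phi = l0 sin(theta)/r (schematic; assumes r > 0).
    r = np.sqrt(x**2 + y**2 + z**2)
    cos_t = z / r                        # polar angle from z-axis
    sin_t = np.sqrt(1.0 - cos_t**2)
    B_r   = np.sign(cos_t) * B0 * (R_PNS / r)**2   # Gauss
    u_phi = l0 * sin_t / r                         # cm/s
    return B_r, u_phi
\end{verbatim}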
The discretized ideal MHD equations are solved in a cubic computational domain with sides $L$ and volume $V_{L}=L^{3}$.
Cartesian coordinates are employed.
The computational domain is divided into $N$ zones in each spatial dimension, resulting in $N^{3}$ cubic zones with sides $\Delta l=L/N$.
To conserve computational resources we start our simulations in a relatively small computational domain with $L=L_{\mbox{\tiny min}}=600$~km and $N=512$ (resulting in $\Delta l\approx1.17$~km).
The time-step in our simulations (limited by the Courant-Friedrichs-Lewy condition) is about 5-10~$\mu$s, depending on the stage of the particular run.
The runs are evolved to a physical time of about 1~s, which results in about $10^{5}$ time-steps taken per simulation.
The MHD solver is parallelized using the Message Passing Interface (MPI), and the computational domain is subdivided into blocks containing an equal number of zones, which are distributed among MPI processes.
During the simulations we keep the number of zones per block (MPI process) fixed to $32^{3}$.
Once the SASI evolves into the nonlinear regime the volume encompassed by the shock grows, and the shock eventually interacts with the boundary of the computational domain.
When this happens, we expand the computational domain by adding a layer of $32^{3}$-zones blocks (i.e., we add 64 zones in each coordinate direction) and restart the simulation from the last checkpoint written before the shock interacted with the boundary of the computational domain.
We repeat this process, and run our simulations until the shock interacts with the boundary of the largest computational box $L=L_{\mbox{\tiny max}}=1500$~km, or the simulation time reaches $t=1100$~ms, whichever occurs first.
Since we keep $\Delta l$ fixed during the simulations, the largest computational domain is covered by 1280 zones in each spatial dimension.
During a run, we write simulation output for analysis and visualization every 2~ms of physical time, resulting in tens of Terabytes of data from each model.
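In outline, the expand-and-restart strategy can be summarized as follows (a schematic Python sketch; the simulation handle and its methods are hypothetical stand-ins for the actual GenASiS machinery):
\begin{verbatim}
def run_with_expanding_domain(sim, L_min=600.0, L_max=1500.0,
                              t_max=1.1, block=32):
    # Evolve until the shock nears the boundary, then enlarge the box
    # by one layer of 32^3-zone blocks (64 zones per direction at
    # fixed zone width dl) and restart from the last checkpoint.
    L = L_min
    while sim.time < t_max:
        sim.evolve_until_shock_near_boundary(t_max)
        if sim.time >= t_max or L >= L_max:
            break
        L += 2 * block * sim.dl  # ~75 km per direction for dl=1.17 km
        sim.restart_from_checkpoint(new_box_size=L)
\end{verbatim}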
\section{SIMULATION RESULTS}
We focus on the magnetic field evolution during the nonlinear phase of the SASI, during which magnetic fields are amplified most effectively and the potential for back-reaction of the induced fields on the fluid flow is greatest.
We do not apply any rigorous criterion for the onset of nonlinearity; we simply note when the accretion shock deviates noticeably from its spherically symmetric initial shape, and the post-shock velocity field has developed a significant non-radial component.
(The upper panels of Figure \ref{fig:machNumberAndVorticity} are representative of the early nonlinear phase.)
\begin{figure*}
\epsscale{1.0}
\plotone{./figure2.png}
\caption{Nonlinear operation of the spiral SASI mode in model $\model{13}{0.0}{00}$: flow Mach number $|\vect{u}|/c_{S}$ (left panels) and magnitude of fluid vorticity $|\boldsymbol\omega|(\equiv|\curl{\vect{u}}|)$ (right panels). The adiabatic sound speed is $c_{S}=\sqrt{\gamma P/\rho}$. Snapshots are taken at $t=720$~ms (upper panels) and $t=820$~ms (lower panels). To highlight the spiral mode pattern in each panel, the viewing normal is aligned with the total angular momentum in $V_{\mbox{\tiny{Sh}}}$. The shock surface is traced out by the white contour. Velocity vectors where $|\vect{u}|\ge10^{4}$~km s$^{-1}$ are shown in the left panels. \label{fig:machNumberAndVorticity}}
\end{figure*}
Our simulations vary in initial magnetic field strength and spatial resolution, and feature both non-rotating and rotating configurations (see Table \ref{tab:computedModels}).
We focus first---and predominantly---on non-rotating models, often referring to model $\model{10}{0.0}{00}$ (non-rotating model with $B_{0}=1\times10^{10}$~G) as the ``weak-field model,'' and to model $\model{13}{0.0}{00}$ (non-rotating model with $B_{0}=1\times10^{13}$~G) as the ``strong-field model.''
The rotating models are also briefly discussed, but we find that the turbulent magnetic field amplification is mostly unaffected by rotation.
We initiate the SASI with random pressure perturbations in the post-shock flow in order to avoid biased excitation of particular modes (i.e. ``sloshing'' vs. ``spiral''), and find that all our simulations exhibit flows typical of the spiral mode. This is consistent with our results in \citetalias{endeve_etal_2010}, and with \citet{blondinMezzacappa_2007}, who found the spiral mode to dominate the late-time evolution independent of the initial perturbation. It is also consistent with the conclusions of \cite{fernandez_2010}, who demonstrated that the spiral modes of the SASI can be viewed as a superposition of sloshing modes out of phase, and that any superposition of sloshing modes with non-zero relative phase leads to spiral modes and angular momentum redistribution in the post-shock flow, which potentially spins up the underlying PNS \citep[see also][]{blondinShaw_2007}.
The development of spiral SASI modes thus seems to be a general outcome (in 3D) of perturbing the (convectively stable) spherically symmetric initial condition.
Moreover, a recent laboratory experiment---a shallow water analogue to a shock instability \citep[SWASI,][]{foglizzo_etal_2012}---found spiral modes to emerge favorably from the nonlinear phase.
\subsection{Turbulence from Spiral SASI Modes}
Figure \ref{fig:machNumberAndVorticity} illustrates the flows that develop from the nonlinear spiral SASI mode.
The renderings are created from the strong-field model, but the hydrodynamic developments exhibited by this model and highlighted in Figure \ref{fig:machNumberAndVorticity} are typical of all our non-rotating models.
The flow Mach number is shown in the left panels, and in the right panels the magnitude of fluid vorticity is displayed.
The upper panels ($t=720$~ms) depict the early development of the nonlinear spiral SASI mode.
The shock surface is still quasi-spherical, but significant angular momentum redistribution has occurred in the post-shock flow, and the presence of strong counterrotating flows is apparent (cf. velocity vectors).
The shock triple-point \citep[cf.][]{blondinMezzacappa_2007}, positioned to the lower left ($\sim$seven o'clock position), has just formed, and is visible as the kink in the shock surface (cf. white contour).
The shock triple-point (a line segment extending across the shock surface) connects the pre-shock accretion flow and the two counterrotating flows in the post-shock gas.
It moves on the shock surface in the counterclockwise direction in Figure \ref{fig:machNumberAndVorticity}.
A layer of strongly sheared flows extends from the triple-point, downstream from the shock.
This is clearly seen in both plots of fluid vorticity.
This shear flow is one site of post-shock vorticity generation in our simulations.
The post-shock flows are still subsonic for $t=720$~ms.
In the lower panels ($t=820$~ms) the shock triple-point has completed about one and a half revolutions along the shock surface and is now positioned to the upper right ($\sim$two o'clock position).
The shock volume has grown by more than a factor of three compared to the upper panels, and the shape of the accretion shock and the mass distribution in the shocked cavity are even more aspherical.
The supersonic pre-shock accretion flow impinges on the shock at an oblique angle due to the aspherical shock and its off-center position.
The significant tangential velocity component (relative to the shock surface), which is preserved across the shock, leads to supersonic post-shock flows ahead of (and directed towards) the triple-point.
These supersonic flows, which strengthen the shear flow discussed above, are directed down towards the PNS and result in further vorticity generation as they decelerate up the density gradient or impinge on the PNS.
The inviscid vorticity equation is obtained by taking the curl of Euler's equation \citep[e.g.,][]{landauLifshitz_1959}
\begin{equation}
\pderiv{\boldsymbol\omega}{t}
=\curl{(\vect{u}\times\boldsymbol\omega)}
+\f{1}{\rho^{2}}\gradient{\rho}\times\gradient{P},
\label{eq:vorticityEquation}
\end{equation}
where the first term on the right-hand side describes changes in vorticity due to fluid motions, and the second term is the baroclinic vector.
(Magnetic fields are neglected in Eq. (\ref{eq:vorticityEquation}), but the equation remains a good approximation as long as the magnetic fields are weak.)
In particular, vorticity may be generated in regions where isosurfaces of density and pressure intersect.
Figure \ref{fig:vorticityProduction} displays the polytropic constant $\kappa=P/\rho^{\gamma}$ (a proxy for entropy) in a slice through model $\model{13}{0.0}{00}$ at $t=820$~ms with focus on the shear flows emanating from the shock triple-point.
Contours of constant density (dashed) and pressure (solid) are also plotted.
The density and pressure contours are mostly parallel, but diverge strongly in the shear layer, indicating intersecting density and pressure isosurfaces and vorticity generation through the baroclinic term in Eq. (\ref{eq:vorticityEquation}).
(Also, significant vorticity amplification occurs in the shear layer and elsewhere through the ``advection" term in Eq. (\ref{eq:vorticityEquation}).)
Vorticity is generated, amplified, and distributed in a large fraction of the post-shock volume during operation of the SASI.
(Movie 1 in the online material shows the generation and evolution of vorticity in the time interval from $t=720$~ms to $t=820$~ms.)
The vorticity field exhibits strong intermittency in the late stages of SASI evolution.
We also note the development of vorticity tube structures (vortex tubes) during the operation of the spiral mode.
(See also Movie 2, which shows a full revolution of a vorticity still-frame at $t=820$~ms.)
\citet{meeBrandenburg_2006} pointed out that the presence of vorticity may be helpful for turbulent magnetic field amplification.
\begin{figure}
\epsscale{1.0}
\plotone{./figure3.png}
\caption{Slice through model \model{13}{0.0}{00} at $t=820$~ms showing the distribution of the polytropic constant $\kappa=P/\rho^{\gamma}$ around the shear layer associated with the shock triple-point. Contours of constant density ($\rho=6\times10^{8}$~g~cm$^{-3}$ and $\rho=3\times10^{8}$~g~cm$^{-3}$; dashed black and gray, respectively) and pressure ($P=1.7\times10^{27}$~erg~cm$^{-3}$ and $P=6.5\times10^{26}$~erg~cm$^{-3}$; solid black and gray, respectively) are also plotted. \label{fig:vorticityProduction}}
\end{figure}
Strongly forced accretion-driven turbulence develops as a result of the SASI, and the post-shock flow becomes roughly divided into a supersonic (driving) component and a subsonic (volume-filling) turbulent component (cf. lower left panel in Figure \ref{fig:machNumberAndVorticity}).
This is also reflected in the probability density function (PDF) of the velocity field below the shock.
In the left panel in Figure \ref{fig:pdfVelocityAndVorticity} we plot normalized PDFs of the $x$-component of the velocity.
We plot the \emph{total} PDF (solid black line), associated with the subsonic \emph{and} supersonic flow, and the PDF associated with the subsonic flow only (dashed black line).
The supersonic flows contribute only to the tails of the distribution.
The center of the distribution moves in response to the triple-point's movement along the shock surface (cf. gray curves), but when averaged over one revolution about the PNS, the PDF is practically centered on zero: defining $\xi=u_{x}/u_{\mbox{\tiny rms}}$, we find $\left[\xi\right]_{\mbox{\tiny PDF}}=\int_{-\infty}^{\infty}\xi\,\mbox{PDF}(\xi)\,d\xi\approx0.019$.
In the right panel of Figure \ref{fig:pdfVelocityAndVorticity} we plot the PDF of the $x$-component of the vorticity.
The vorticity PDF is more peaked, has extended exponential tails, and is also centered about zero; $\left[\omega_{x}/\omega_{\mbox{\tiny rms}}\right]_{\mbox{\tiny PDF}}\approx0.002$.
Similar vorticity distributions were found in simulations of convectively driven turbulence by \cite{brandenburg_etal_1996} and attributed to intermittency in hydrodynamic turbulence \citep[see also][]{kraichnan_1990,ishihara_etal_2009}.
\citet{brandenburg_etal_1996} characterized the intermittency of a variable $f$ by the kurtosis of its PDF
\begin{equation}
\mbox{Kurt}(f)=\left[f^{4}\right]_{\mbox{\tiny PDF}}/\left[f^{2}\right]_{\mbox{\tiny PDF}}^{2},
\end{equation}
where $\left[f^{n}\right]_{\mbox{\tiny PDF}}=\int_{-\infty}^{\infty}\,f^{n}\,\mbox{PDF}(f)\,df$ is the $n$-th moment of the PDF (assuming zero mean).
For the $x$-component of the velocity below the shock we find $\mbox{Kurt}(u_{x}/u_{\mbox{\tiny rms}})\approx4.6$, and for the $x$-component of the vorticity we find $\mbox{Kurt}(\omega_{x}/\omega_{\mbox{\tiny rms}})\approx26.7$ \citep[i.e., similar to][]{brandenburg_etal_1996}.
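The moments quoted above are straightforward to estimate from simulation data; a minimal sketch (Python, assuming the field values have already been scaled by their rms and have zero mean, as in the definition above) might read:
\begin{verbatim}
import numpy as np

def pdf_mean_and_kurtosis(f, bins=200):
    # Build a normalized PDF of the samples f and return its first
    # moment and kurtosis, Kurt(f) = [f^4]_PDF / [f^2]_PDF^2.
    pdf, edges = np.histogram(f, bins=bins, density=True)
    xc = 0.5 * (edges[:-1] + edges[1:])   # bin centers
    dx = np.diff(edges)
    mean = np.sum(xc * pdf * dx)
    m2   = np.sum(xc**2 * pdf * dx)
    m4   = np.sum(xc**4 * pdf * dx)
    return mean, m4 / m2**2

# A Gaussian field has Kurt = 3; values well above 3 (cf. ~4.6 for
# velocity and ~27 for vorticity quoted above) signal intermittency.
\end{verbatim}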
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure4a.png}
{./figure4b.png}
\caption{Normalized probability density functions (PDFs) of the $x$-component of velocity (left) and the $x$-component of vorticity (right). The PDFs are constructed from the post-shock flows in model \model{13}{0.0}{00} during the nonlinear operation of the spiral SASI mode (from $t=810$~ms to $t=922$~ms, which corresponds to roughly one full revolution of the shock triple-point about the PNS). The rms values of velocity and vorticity below the shock are $u_{\mbox{\tiny rms}}=\sqrt{2E_{\mbox{\tiny kin}}/M_{\mbox{\tiny{Sh}}}}$ and $\omega_{\mbox{\tiny rms}}=\sqrt{2\Omega/V_{\mbox{\tiny{Sh}}}}$, respectively, and $E_{\mbox{\tiny kin}}$, $M_{\mbox{\tiny{Sh}}}$, and $\Omega$ are the kinetic energy, mass, and enstrophy in $V_{\mbox{\tiny{Sh}}}$, the volume bounded by the shock surface and the surface of the PNS. We show PDFs for individual time states in gray and the average over all the time states in black. In the left panel, the total PDFs are represented by the solid lines, while the dashed lines are the PDFs constructed from subsonic flows only ($|\vect{u}|/c_{S}<1$). The (averaged) PDF associated with the subsonic flow fits well with the Gaussian $0.01975\times\exp{[-3.1\times(u_{x}/u_{\mbox{\tiny rms}})^{2}]}$. (The PDFs constructed from the other velocity and vorticity components look similar.) \label{fig:pdfVelocityAndVorticity}}
\end{figure*}
\subsection{Amplification of Weak Magnetic Fields from Turbulence: Elementary Concepts}
The SASI-driven hydrodynamic developments result in turbulent amplification of initially weak magnetic fields, which is the focus of this study \citepalias[see also][]{endeve_etal_2010}.
Here we very briefly review some elementary concepts pertaining to such magnetic field amplification \citep[for details, see for example reviews by][]{ott_1998,brandenburgSubramanian_2005}.
Stellar interiors are extremely good electrical conductors.
In a perfectly conducting fluid the electric field is $\vect{E}=-(\vect{u}\times\vect{B})$, and Faraday's law (the induction equation), which governs the evolution of the magnetic field, becomes
\begin{equation}
\pderiv{\vect{B}}{t}=\curl{(\vect{u}\times\vect{B})},
\label{eq:inductionEquation}
\end{equation}
where the right-hand side (the induction term) describes changes to the magnetic field due to fluid motions.
We note that, modulo the baroclinic vector, Eqs. (\ref{eq:vorticityEquation}) and (\ref{eq:inductionEquation}) have identical form, suggesting a possible analogy between $\boldsymbol\omega$ and $\vect{B}$ \citep{batchelor_1950}.
Moreover, \citet{batchelor_1950} argued that the distribution of $\boldsymbol\omega$ and $\vect{B}$ will be similar in fully developed turbulence.
An important difference, however, is that the vorticity equation is nonlinear, while the induction equation is linear for a specified velocity field.
Nevertheless, similarities between vorticity and magnetic field are observed in our simulations.
Equation (\ref{eq:inductionEquation}) can be combined with the mass conservation equation to form \citep[e.g.,][]{landauLifshitz_1960}
\begin{equation}
\dderiv{}{t}\left(\f{\vect{B}}{\rho}\right)=\left(\f{\vect{B}}{\rho}\cdot\nabla\right)\vect{u},
\label{eq:inductionEquationStretching}
\end{equation}
where $d/dt=\partial/\partial t+(\vect{u}\cdot\nabla)$, and $\vect{B}/\rho$ changes due to gradients in the velocity field.
Equation (\ref{eq:inductionEquationStretching}) has an important physical interpretation.
It has exactly the same form as the evolution equation for an infinitesimal ``fluid line" connecting fluid elements and moving with the flow.
Thus, two infinitely near fluid elements initially connected by a magnetic field line remain on that magnetic field line, and the value of $\vect{B}/\rho$ varies in proportion to the distance between the fluid elements \citep{landauLifshitz_1960}.
In this sense, the magnetic field is ``frozen" in a perfectly conducting fluid.
In an approximately incompressible fluid, the magnetic field grows in direct proportion to the separation between fluid elements.
The interpretation of Eq. (\ref{eq:inductionEquationStretching}) is equivalent to the following simple consideration of a magnetic flux tube, with strength $b$, length $l$, and cross-section $a$, which permeates (and is frozen in) a fluid element with density $\rho$:
let the fluid element be ``stretched" by the flow to a new state (characterized by $b'$, $l'$, $a'$, and $\rho'$).
Then, mass conservation ($\rho'l'a'=\rho la$) and magnetic flux conservation ($b'a'=ba$) give
\begin{equation}
\f{b'}{\rho'}=\f{b}{\rho}\times(l'/l).
\end{equation}
In the incompressible limit, the field is amplified in direct proportion to the stretching of the tube.
At the same time, the flux tube undergoes a decrease in the scale perpendicular to the stretching ($a'=a\times(l/l')<a$).
The decrease in flux tube cross-section proceeds until (1) the field becomes strong enough to react back on the fluid, preventing further stretching, or (2) resistive (non-ideal) effects become important (Section \ref{sec:magneticEnergyGrowthRates}), or a combination of (1) and (2).
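As a concrete example, an incompressible stretching event that doubles the length of the tube ($l'=2l$, $\rho'=\rho$) doubles the field strength ($b'=2b$) while halving the cross-section ($a'=a/2$).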
Stretching is a very useful concept for understanding turbulent $B$-field amplification.
The frozen-in condition can result in rapid magnetic field amplification in a turbulent flow.
Initially adjacent fluid elements separate rapidly, perhaps exponentially, in the chaotic flows that characterize turbulence \citep{ott_1998}.
Thus, an initially weak magnetic field (i.e., $\vect{u}$ is independent of $\vect{B}$) may amplify exponentially by stretching, and the growth rate is roughly given by the inverse turnover time of turbulent eddies \citep[e.g.,][]{kulsrudAnderson_1992}.
This is also apparent from a simplistic consideration of Eq. (\ref{eq:inductionEquationStretching}): the turbulent velocity varies by $\sim\mathcal{O}(u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}})$ across a turbulent eddy of size $\lambda_{\mbox{\tiny eddy}}$, hence $B^{-1}(dB/dt)\sim u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}/\lambda_{\mbox{\tiny eddy}}$.
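For illustration only (these numbers are assumed here, not measured from our simulations), eddy scales of a few tens of kilometers and turbulent speeds of order $10^{4}$~km~s$^{-1}$ would give
\begin{equation}
\tau_{\mbox{\tiny growth}}\sim\f{\lambda_{\mbox{\tiny eddy}}}{u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}}\sim\f{30\mbox{ km}}{10^{4}\mbox{ km s}^{-1}}\sim3\mbox{ ms},
\end{equation}
i.e., growth times of a few milliseconds.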
Exponential amplification of weak magnetic fields is commonly seen in MHD turbulence simulations \citep[e.g.,][]{choVishniac_2000,brandenburg_2001,haugen_etal_2004}.
Exponential growth ceases when the magnetic field becomes strong enough to cause a back-reaction on the fluid (i.e., $\vect{u}$ becomes dependent on $\vect{B}$).
Amplification of weak magnetic fields through turbulent stretching is initially a kinematic mechanism (i.e., described by Eq. (\ref{eq:inductionEquation}) for a specified velocity field).
As such, it differs from magnetic field amplification by the MRI in a fundamental way.
The MRI is a dynamic mechanism, described by the full MHD system of equations, and requires the Lorentz force to be included in Euler's equation.
(Also, the MRI requires differential rotation in the PNS to operate, while a turbulent dynamo can operate without PNS rotation.)
However, for weak progenitor $B$-fields \citep{heger_etal_2005}, both mechanisms require high spatial resolution for simulation \citep[e.g.,][]{obergaulinger_etal_2009,endeve_etal_2010}, and may ultimately be computationally prohibitive to capture properly in large-scale multi-physics simulations.
We comment here on the amount of kinetic helicity $\boldsymbol{\omega}\cdot\vect{u}$ in our simulations of SASI-driven turbulence.
Kinetic helicity is a measure of ``handedness" (or lack of mirror symmetry) in the turbulent flows, and is an important quantity in dynamo theory \citep[e.g.,][and references therein]{brandenburgSubramanian_2005}.
Turbulent flows with kinetic helicity can support a so-called inverse cascade and produce large scale magnetic fields (i.e., larger than the turbulent forcing scale) in the nonlinear, saturated state \citep[e.g.,][]{meneguzzi_etal_1981,brandenburg_2001}.
Non-helical turbulence results in mostly small-scale magnetic fields \citep[e.g.,][]{brandenburg_etal_1996,haugen_etal_2004}.
We have constructed PDFs of the relative kinetic helicity $h_{\mbox{\tiny kin}}=(\boldsymbol\omega\cdot\vect{u})/(\omega_{\mbox{\tiny rms}}u_{\mbox{\tiny rms}})$ in the post-shock flow.
The kinetic helicity distributions are similar to the vorticity distributions (strongly peaked with exponential tails), with $\left[h_{\mbox{\tiny kin}}\right]_{\mbox{\tiny PDF}}\approx-8.3\times10^{-4}$ and $\mbox{Kurt}(h_{\mbox{\tiny kin}})\approx28.4$.
Despite the apparent handedness associated with the spiral SASI mode, the resulting turbulence is essentially non-helical.
(This may change, however, if a rapidly---and differentially---rotating PNS is included in the model.)
Thus, we expect that SASI-driven turbulence in our simulations results in magnetic field amplification due to a non-helical small-scale dynamo.
\subsection{Time Evolution of Global Quantities}
\label{sec:timeGlobal}
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure5a.png}
{./figure5b.png}
\plottwo{./figure5c.png}
{./figure5d.png}
\caption{Time-evolution of global quantities integrated over the shock volume $V_{\mbox{\tiny{Sh}}}$, bounded by the surface of the PNS, $\partial V_{\mbox{\tiny PNS}}$, and the surface of the shock, $\partial V_{\mbox{\tiny{Sh}}}$, in non-rotating models in which the initial magnetic field has been varied from $1\times10^{10}$~G to $1\times10^{13}$~G. Plotted are kinetic energy $E_{\mbox{\tiny kin}}$ (top left), magnetic energy change (relative to the initial value) $\Delta E_{\mbox{\tiny mag}}/E_{\mbox{\tiny mag},0}$ (top right), angular momentum $|\vect{L}|$ (bottom left), and average shock radius $\bar{R}_{\mbox{\tiny Sh}}=(3V_{\mbox{\tiny{Sh}}}/(4\pi))^{1/3}$ (bottom right). Models $\model{10}{0.0}{00}$, $\model{12}{0.0}{00}$, and $\model{13}{0.0}{00}$ are represented by solid, dashed, and dotted lines, respectively. The initial magnetic energy $E_{\mbox{\tiny mag},0}$ in these models is $2.3\times10^{-12}$~B, $2.3\times10^{-8}$~B, and $2.3\times10^{-6}$~B, respectively. The dash-dotted lines in the top panels are proportional to $\exp{(t/\tau)}$, where the growth times are $\tau=85$~ms and $\tau=66$~ms in the top left and top right panels, respectively. \label{fig:overviewNonRotating}}
\end{figure*}
An overview of the simulations with non-rotating initial conditions is given in Figure \ref{fig:overviewNonRotating}, in which we plot the time-evolution of selected globally integrated quantities for models $\model{10}{0.0}{00}$ (solid lines), $\model{12}{0.0}{00}$ (dashed lines), and $\model{13}{0.0}{00}$ (dotted lines): kinetic energy (upper left), relative magnetic energy change (upper right), angular momentum (lower left), and average shock radius (lower right).
All quantities are obtained from integration over the shock volume $V_{\mbox{\tiny{Sh}}}$, bounded by the surface of the PNS $\partial V_{\mbox{\tiny PNS}}$ and the surface of the shock $\partial V_{\mbox{\tiny{Sh}}}$.
The magnetic energy change is scaled with the initial magnetic energy for easy comparison across the models ($(E_{\mbox{\tiny mag}}-E_{\mbox{\tiny mag,0}})/E_{\mbox{\tiny mag,0}}$ is plotted).
The kinetic energy of the settling flow beneath the shock is initially about $2\times10^{-3}$~B. It begins to grow rapidly during the initial ramp-up phase of the SASI, which starts around 400~ms in all models.
In particular, for the weak-field model, the post-shock kinetic energy grows exponentially with a nearly constant growth rate over the time period extending from $t\approx510$~ms to $t\approx720$~ms.
The growth time during this epoch is about $\tau\approx85$~ms (cf. dash-dotted line in the top left panel in Figure \ref{fig:overviewNonRotating}).
The kinetic energy in the models with a stronger initial magnetic field grows somewhat slower initially ($t\lesssim660$~ms), and then at a rate similar to that of the weak-field model.
The growth slows down considerably for $t\gtrsim800$~ms, but the kinetic energy continues to grow throughout the nonlinear phase and reaches similar levels in all three models, with variability on a shorter timescale superimposed.
When averaged over the time interval extending from 900~ms to 1100~ms we find\footnote{The temporal average of a variable $X$, over the interval $T=t_{2}-t_{1}$, is denoted $\langle X\rangle_{t_{1}}^{t_{2}}=\f{1}{T}\int_{t_{1}}^{t_{2}}X\,dt$.} $\timeAverage{E_{\mbox{\tiny kin}}}{0.9}{1.1}=$ 0.051~B, 0.048~B, and 0.044~B for models $\model{10}{0.0}{00}$, $\model{12}{0.0}{00}$, and $\model{13}{0.0}{00}$, respectively.
While these time-averaged post-shock kinetic energies are slightly smaller in the models with a stronger initial magnetic field, we have not found convincing evidence that this slight decrease in global kinetic energy is a result of a stronger magnetic field.
We find, however, that strong magnetic fields affect flows on small spatial scales (cf. Section \ref{sec:spectralAnalysis}).
In terms of spherical harmonics, the SASI is characterized by exponentially growing power in low-order modes \citep{blondinMezzacappa_2006}.
As a result of this, the shock surface deviates exponentially from its initially spherical shape.
The obliquity of the shock front relative to the pre-shock accretion flow causes the non-radial post-shock kinetic energy to grow (also exponentially) at the expense of thermal energy \citep{blondin_etal_2003}.
We have decomposed the post-shock kinetic energy into radial and non-radial components; $E_{\mbox{\tiny kin},\parallel}=\f{1}{2}\int_{V_{\mbox{\tiny{Sh}}}}\rho u_{r}^{2}\,dV$ and $E_{\mbox{\tiny kin},\perp}=\f{1}{2}\int_{V_{\mbox{\tiny{Sh}}}}\rho(u_{\vartheta}^{2}+u_{\varphi}^{2})\,dV$, respectively.
The non-radial component grows much faster ($\tau\lesssim50$~ms) than the radial component, and the growth seen in Figure \ref{fig:overviewNonRotating} is due to a combination of the two.
(The kinetic energies associated with the three velocity components become comparable in the saturated state, so that $E_{\mbox{\tiny kin},\perp}\approx2E_{\mbox{\tiny kin},\parallel}$.)
Saturation of the post-shock kinetic energy may be due to the development of turbulence via secondary instabilities \citep[e.g.,][and Section \ref{sec:spectralAnalysis} in this paper]{guilet_etal_2010}.
The exact details that determine the growth rate of the post-shock kinetic energy are tied directly to the physical origin of the SASI, which is not the focus of this paper.
We focus primarily on magnetic field amplification in the flows that result from SASI activity.
The magnetic energy grows at the expense of the turbulent kinetic energy below the shock (cf. Section \ref{sec:spectralAnalysis}).
After an initial spurt, all the models shown in Figure \ref{fig:overviewNonRotating} experience an early period of exponential magnetic energy growth with essentially the same growth rate (cf. the temporal window from $t=650$~ms to $t=780$~ms).
Such evolution is expected in a kinematic growth regime, in which the magnetic field's back-reaction on the fluid is negligible.
The magnetic energy in the weak-field model ($\model{10}{0.0}{00}$, $B_{0}=1\times10^{10}$~G) grows exponentially at a nearly constant rate, with growth time $\tau\approx66$~ms, until the end of the simulation ($t=1100$~ms), and receives a total boost of about five orders of magnitude.
The magnetic energy growth time is significantly (by $\sim25\%$) shorter than the total kinetic energy growth time during the overlapping period of exponential growth.
In the model with $B_{0}=1\times10^{12}$~G ($\model{12}{0.0}{00}$) we also find that $E_{\mbox{\tiny mag}}$ grows steadily until the end of the run ($t=1126$~ms).
The magnetic energy in this model initially grows at the same rate as the weak-field model, but its growth rate clearly tapers off at later times ($t\gtrsim900$~ms).
The strong-field model ($\model{13}{0.0}{00}$, $B_{0}=1\times10^{13}$~G) also exhibits exponential magnetic energy growth ($\tau\approx66$~ms) early on.
Then, around $t\approx780$~ms, its growth rate drops almost discontinuously, and $E_{\mbox{\tiny mag}}$ grows by only about $50\%$ for the remainder of the simulation (until $t=1100$~ms).
Model $\model{13}{0.0}{00}$ receives a total boost in magnetic energy of about a factor of 300.
The abrupt change in the magnetic energy growth rate observed in the strong-field model occurs when magnetic fields become dynamically important in localized regions below the shock (cf. Section \ref{sec:radialProfiles}).
At the end of the simulations the magnetic energy in models $\model{10}{0.0}{00}$, $\model{12}{0.0}{00}$, and $\model{13}{0.0}{00}$ has reached about $1.8\times10^{-7}$~B, $2.3\times10^{-4}$~B, and $8.9\times10^{-4}$~B, respectively.
The magnetic energy in the weak-field model is many ($\sim$ five) orders of magnitude below the post-shock kinetic energy at this point.
In the strong-field model it saturates below $10^{-3}$~B, which is also significantly less than the total kinetic energy in the post-shock flow.
Also, the boost in magnetic energy in the strong-field model is almost an order of magnitude lower than the difference in average post-shock kinetic energy between the strong-field and the weak-field models, which is about $7\times10^{-3}$~B.
At the end of the simulations the magnetic energy in the strong-field model is less than a factor of four larger than in model $\model{12}{0.0}{00}$, while it initially was a factor $10^{2}$ larger.
We point out that the magnetic energies listed above are values recorded when the simulations were stopped after roughly one explosion time ($t\sim1$~s).
(SASI-induced magnetic field amplification ceases once the explosion is initiated.)
However, the listed values should \emph{not} be interpreted as upper limits on the magnetic energy for the different initial magnetic fields.
The magnetic energy growth rate is (for reasons we detail in this paper) underestimated by the numerical simulations, and we suspect that all models---independent of the initial magnetic field---will reach a saturated state, similar to the strong-field model, well within an explosion time.
The issue of magnetic energy growth, saturation, and its effect on the post-shock flow will be discussed in later sections.
The induced magnetic fields do not notably affect the global characteristics of the shock evolution.
The plots of total angular momentum $|\vect{L}|$ in $V_{\mbox{\tiny{Sh}}}$ and the average shock radius $\bar{R}_{\mbox{\tiny Sh}}$ show that these quantities reach similar values in all the non-rotating models.
The angular momentum reaches a few $\times10^{47}$~g cm$^{2}$ s$^{-1}$, consistent with the comparable models of \citet{blondinMezzacappa_2007} and \citet{fernandez_2010}.
Moreover, during the nonlinear evolution, after the period of exponential growth of the angular momentum in $V_{\mbox{\tiny{Sh}}}$, we find $|\vect{L}|\lesssim f\dot{M}\bar{R}_{\mbox{\tiny Sh}}^{2}$, with $f\approx 0.25$ \citep[cf.][]{fernandez_2010}.
The average shock radius exhibits significant variability, and briefly exceeds 500~km in some of the models.
In particular, we find $\timeAverage{\bar{R}_{\mbox{\tiny Sh}}}{0.9}{1.1}=$ 484~km, 466~km, and 438~km, for models $\model{10}{0.0}{00}$, $\model{12}{0.0}{00}$, and $\model{13}{0.0}{00}$, respectively.
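As a quick numerical check (with a representative $\bar{R}_{\mbox{\tiny Sh}}\approx450$~km), the bound above evaluates to
\begin{equation}
f\dot{M}\bar{R}_{\mbox{\tiny Sh}}^{2}\approx0.25\times\left(0.36~M_{\odot}\mbox{ s}^{-1}\right)\times\left(450\mbox{ km}\right)^{2}\approx4\times10^{47}\mbox{ g cm}^{2}\mbox{ s}^{-1},
\end{equation}
consistent with the values of $|\vect{L}|$ reached in the simulations.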
The larger-amplitude oscillations in kinetic energy and the smaller average shock radius exhibited by the strong-field model ($\model{13}{0.0}{00}$) may be attributed to this model's somewhat different nonlinear evolution, which is due to the stochastic nature of the nonlinear SASI rather than to the stronger initial field.
Model $\model{13}{0.0}{00}$ first evolves into a typical spiral mode pattern (cf. Figure \ref{fig:machNumberAndVorticity}), but later develops a flow pattern reminiscent of the sloshing mode, with two oppositely directed high-speed streams, emanating from opposite sides of the shock, terminating on opposite sides of the PNS, or colliding head-on deep in the shocked cavity.
The appearance of this flow pattern coincides with the turnover in the angular momentum seen in the lower left panel ($t\approx950$~ms).
Note that there is little or no response in the magnetic energy evolution due to these rearrangements in the large scale flow.
This is consistent with the magnetic field being amplified by small-scale rather than large-scale flows.
Comparing Figure \ref{fig:overviewNonRotating} with Figure 12 of \citetalias{endeve_etal_2010}, we note that spatial resolution ($\Delta l=1.17$~km in this paper versus $\Delta l=1.56$~km in \citetalias{endeve_etal_2010}) affects global magnetic quantities much more than global fluid quantities.
In particular, models 3DB10Rm and 3DB12Rm in \citetalias{endeve_etal_2010} correspond to models $\model{10}{0.0}{00}$ and $\model{12}{0.0}{00}$, respectively.
The increased spatial resolution has no significant impact on the post-shock kinetic energy, total angular momentum, average shock radius, or exponential growth time for the kinetic energy ($\tau\approx85$~ms).
The magnetic energy in model 3DB12Rm was boosted by a factor of $2\times10^{3}$, while model 3DB10Rm received a boost of less than a factor of $10^{3}$.
We also measured an exponential growth time for the magnetic energy of $\tau\approx71$~ms over several hundred milliseconds in \citetalias{endeve_etal_2010}.
These results are somewhat different from the results presented in Figure \ref{fig:overviewNonRotating}, which show that the increased resolution results in a larger boost in the magnetic energy and a shorter exponential growth time ($\tau \approx 66$~ms), and that the magnetic energy growth at late times depends on the initial magnetic field strength.
The sensitivity to spatial resolution was also pointed out in our previous study, and it will be further discussed later in this paper.
\subsection{Evolution of Spherically Averaged Radial Profiles and Saturation of Magnetic Energy}
\label{sec:radialProfiles}
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure6a.png}
{./figure6b.png}
\plottwo{./figure6c.png}
{./figure6d.png}
\caption{Spherically averaged radial profiles at selected times for the non-rotating weak-field (left panels) and strong-field (right panels) models. Upper panels: rms magnetic field strength (solid black) and maximum magnetic field strength (dashed red). For reference, the initial magnetic field profile is plotted in each panel (dash-dotted). We also plot reference lines proportional to $\exp{(-r/L_{B})}$ (dash-dotted) with $L_{B}=120$~km (weak-field case) and $L_{B}=100$~km (strong-field case). The magnetic field strengths have been normalized to the initial value $B_{0}$ at $r=R_{\mbox{\tiny PNS}}$. Lower panels: magnetic energy density (solid black), kinetic energy density (dotted blue), and thermal pressure (dashed red). For model $\model{10}{0.0}{00}$ profiles are plotted for 666~ms, 792~ms, 918~ms, and 1100~ms (marked by diamonds in the upper right panel of Figure \ref{fig:overviewNonRotating}), while for model $\model{13}{0.0}{00}$ we plot radial profiles for times 720~ms, 820~ms, 922~ms, and 1100~ms (marked by squares in the upper right panel of Figure \ref{fig:overviewNonRotating}). Thicker lines indicate a more advanced time state. Note that the magnetic energy densities in the weak-field model (lower left panel) have been multiplied by a factor of $10^{4}$. \label{fig:sphericalProfilesNonRotating}}
\end{figure*}
The magnetic energy in the strong-field model reaches saturation relatively early in the nonlinear evolution ($t\approx780$~ms).
To help elucidate the physical conditions under which the magnetic energy growth in our simulations is quenched, we plot, in Figure \ref{fig:sphericalProfilesNonRotating}, spherically averaged radial profiles from the evolution of the weak-field model ($\model{10}{0.0}{00}$, left panels) and the strong-field model ($\model{13}{0.0}{00}$, right panels).
In the upper panels we plot the rms magnetic field strength $B_{\mbox{\tiny rms}}$ (solid lines) and the maximum magnetic field strength $B_{\mbox{\tiny max}}$ (dashed red lines) versus radial distance from the center of the PNS.
Values are computed in spherical shells centered on the origin of the computational domain $r=\sqrt{x^{2}+y^{2}+z^{2}}=0$, with thickness $\delta L=20$~km and volume $\delta V_{i}$, which includes all zones with radial coordinate $r\in[r_{i}-\delta L/2,r_{i}+\delta L/2)$, with $r_{i}=50$~km, $70$~km,$\ldots$,$(L-\delta L)/2$.
The rms magnetic field is computed from the shell-volume-averaged\footnote{$\volumeAverage{X}{V}=\f{1}{V}\int_{V}X\,dV$ denotes the volume average of $X$ over the volume $V$.} magnetic energy density $B_{\mbox{\tiny rms}}=\sqrt{2\mu_{0}\volumeAverage{\edens{mag}}{\delta V_{i}}}$, while $B_{\mbox{\tiny max}}$ is simply the maximum magnetic field over all zones in each shell.
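These shell averages are simple to reproduce; a minimal sketch (Python, hypothetical array names, CGS-like units with the text's convention $\edens{mag}=B^{2}/(2\mu_{0})$) is:
\begin{verbatim}
import numpy as np

def shell_rms_and_max(Bmag, r, r0=5.0e6, dL=2.0e6,
                      n_shells=30, mu0=1.0):
    # B_rms = sqrt(2 mu0 <e_mag>) and max |B| in spherical shells of
    # thickness dL centered at r_i = r0, r0 + dL, ...  For uniform
    # cubic zones the volume average reduces to a plain zone average.
    # (Assumes each shell contains at least one zone.)
    r_i = r0 + dL * np.arange(n_shells)
    B_rms, B_max = np.empty(n_shells), np.empty(n_shells)
    for i, rc in enumerate(r_i):
        m = (r >= rc - 0.5 * dL) & (r < rc + 0.5 * dL)
        e_mag = Bmag[m]**2 / (2.0 * mu0)
        B_rms[i] = np.sqrt(2.0 * mu0 * e_mag.mean())
        B_max[i] = Bmag[m].max()
    return r_i, B_rms, B_max
\end{verbatim}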
In the lower panels we plot the shell-volume-averaged magnetic energy density $\volumeAverage{\edens{mag}}{\delta V_{i}}$ (solid lines), kinetic energy density $\volumeAverage{\edens{kin}}{\delta V_{i}}$ (dotted blue lines), and fluid pressure $\volumeAverage{P}{\delta V_{i}}$ (dashed red lines) versus radial distance.
For each model we plot four profiles (time states) of each variable.
The time states, which are also indicated with diamonds ($\model{10}{0.0}{00}$) and squares ($\model{13}{0.0}{00}$) in the upper right panel of Figure \ref{fig:overviewNonRotating}, are chosen to emphasize the temporal magnetic field evolution in each of the models and to contrast the two models.
The spherically averaged radial profiles further illustrate the differences in magnetic field evolution of the weak-field and strong-field models.
The magnetic field in model $\model{10}{0.0}{00}$ intensifies steadily throughout the nonlinear evolution, at all radii below the shock, and $B_{\mbox{\tiny rms}}$ evolves self-similarly during the later stages.
Toward the end of the weak-field run, the rms magnetic field has received a boost of about two orders of magnitude near $R_{\mbox{\tiny PNS}}$, and stays above $B_{0}$ for $r\lesssim500$~km.
The maximum magnetic field is typically an order of magnitude above the rms magnetic field, which is an indication of strong spatial intermittency in the magnetic field (see also Figures \ref{fig:magneticFieldStrength} and \ref{fig:pdfBFieldAndVorticityDotBField} below).
In the lower left panel of Figure \ref{fig:sphericalProfilesNonRotating} we see that the magnetic energy density is many orders of magnitude smaller than the kinetic energy density and pressure at the times shown, consistent with kinematic magnetic field growth.
Not even in localized regions does the magnetic energy density become comparable to the kinetic energy density or the pressure.
At the end of the simulation, there are only a few zones in which the ratio of magnetic-to-kinetic energy exceeds $10^{-2}$.
The strong-field model's magnetic energy evolution is not governed by kinematic growth (except during the initial boost at early times), but rather by dynamic interactions with the fluid in a saturated state (and numerical dissipation; see Sections \ref{sec:spectralAnalysis} and \ref{sec:magneticEnergyGrowthRates}).
The magnetic energy in this model falls off the exponential growth curve around $t=780$~ms.
The thin solid curve in the upper right panel in Figure \ref{fig:sphericalProfilesNonRotating} ($t=720$~ms) represents the transition from the initial magnetic field profile (dash-dot curve) to the saturated state.
Although the post-shock flow is governed by vigorous turbulence at later times, there are only minor changes to the rms and maximum magnetic field profiles.
The relative boost of $B_{\mbox{\tiny rms}}$ and $B_{\mbox{\tiny max}}$ in model $\model{13}{0.0}{00}$ is about an order of magnitude less than what is observed in model $\model{10}{0.0}{00}$.
At the end of the strong-field model, the rms magnetic field exceeds $10^{13}$~G for $r\lesssim225$~km, while the maximum magnetic field exceeds $10^{14}$~G out to $r\approx200$~km.
At the end of both runs the rms magnetic field follows an exponential decrease with radius, $B_{\mbox{\tiny rms}}\propto\exp{(-r/L_{B})}$, where the characteristic length scale $L_{B}$ is about 120~km and 100~km for model $\model{10}{0.0}{00}$ and model $\model{13}{0.0}{00}$, respectively (cf. dash-dotted lines).
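The characteristic length $L_{B}$ can be extracted with a simple linear fit to $\ln B_{\mbox{\tiny rms}}$ versus $r$; a minimal sketch (Python, with an assumed fit window of $100$--$400$~km) is:
\begin{verbatim}
import numpy as np

def fit_decay_length(r_i, B_rms, r_lo=1.0e7, r_hi=4.0e7):
    # Fit B_rms(r) ~ exp(-r/L_B) over [r_lo, r_hi] (cm) by linear
    # regression of ln(B_rms) against r; returns L_B in cm.
    m = (r_i >= r_lo) & (r_i <= r_hi)
    slope, _ = np.polyfit(r_i[m], np.log(B_rms[m]), 1)
    return -1.0 / slope
\end{verbatim}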
From the upper panels in Figure \ref{fig:sphericalProfilesNonRotating} it is apparent that the exponential decrease in $B_{\mbox{\tiny rms}}$ with radius holds reasonably well in both models throughout the nonlinear evolution.
Moreover, the averaged kinetic energy density below the shock has increased significantly compared to the initial condition (Figure \ref{fig:initialCondition}) and roughly follows a power law in radius, $r^{\alpha}$, with the exponent $\alpha$ varying between $-2.7$ and $-2.3$.
The decrease in kinetic energy density with radius is mostly due to the decrease in mass density: the mass density falls off as $r^{-3}$ inside $r=150$~km, and almost as $r^{-2}$ outside $r=150$~km.
During the runs, the spherically averaged pressure remains relatively quiescent inside $r=150$~km, where it falls off as $r^{-4}$.
As noted above, the decrease in kinetic energy density (the source of magnetic energy) follows a power law with radius, while the magnetic energy density (and $B_{\mbox{\tiny rms}}$) decreases exponentially with radius.
On the other hand, we find that the enstrophy $\f{1}{2}\volumeAverage{\omega^{2}}{\delta V_{i}}$ also decreases exponentially with radius, with a length scale comparable to (but somewhat shorter than) that of $\volumeAverage{\edens{mag}}{\delta V_{i}}$.
Moreover, by comparing the lower panels of Figure 6 in \citetalias{endeve_etal_2010} (showing $|\vect{B}|$) with Figure 11 in \citetalias{endeve_etal_2010} (showing $|\boldsymbol\omega|$) we see that the spatial distribution of magnetic field and vorticity is very similar.
These observations support a similarity between vorticity and magnetic field \citep{batchelor_1950}.
(We have not investigated the physical reasons for the particular spatial distribution of vorticity and magnetic field in further detail, but we plan to do so in a future study.)
The relative boost in $B_{\mbox{\tiny rms}}$ decreases monotonically with the initial field strength in our models.
The results from model $\model{12}{0.0}{00}$ (not shown) confirm this trend.
This is because the models with stronger initial fields reach saturation during the simulation.
(The growth rate is the same in all models during the kinematic regime.)
Saturation occurs when the magnetic energy becomes comparable (locally) to the kinetic energy.
In particular, we find that the kinematic growth regime ends when $B_{\mbox{\tiny max}}^{2}/(2\mu_{0})\lesssim\volumeAverage{\edens{kin}}{\delta V_{i}}$.
For model $\model{12}{0.0}{00}$ we find $B_{\mbox{\tiny max}}^{2}/(2\mu_{0})\approx 0.1\times\volumeAverage{\edens{kin}}{\delta V_{i}}$ and $B_{\mbox{\tiny max}}^{2}/(2\mu_{0})\approx 0.3\times\volumeAverage{\edens{kin}}{\delta V_{i}}$ for $t=966$~ms and $t=1126$~ms, respectively, which represent time states after the magnetic energy growth has fallen off the exponential curve with growth time $\tau=66$~ms (Figure \ref{fig:overviewNonRotating}).
In model $\model{13}{0.0}{00}$, the ratio $B_{\mbox{\tiny max}}^{2}/(2\mu_{0}\volumeAverage{\edens{kin}}{\delta V_{i}})$ stays above $0.3$ for the three most advanced time states shown in Figure \ref{fig:sphericalProfilesNonRotating}, and hovers around unity at the end of the simulation.
In both models the ratio stays remarkably constant with distance from the PNS, varying by less than a factor of two, out beyond $r=300$~km (although exact details vary by model).
Thus, turbulence-induced magnetic fields may impact dynamics in localized regions throughout the shock volume.
\subsection{Full Spatial Distributions, Intermittency, and Saturation of Magnetic Energy}
\label{sec:spatialDistributions}
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure7a.png}
{./figure7b.png}
\caption{Magnetic field magnitude $|\vect{B}|$ near the end ($t=1068$~ms) of the weak-field simulation ($\model{10}{0.0}{00}$). The left panel shows a global view of the magnetic fields below the shock (traced out with a white contour). The right panel shows the magnetic field in a $(200$~km$)^{3}$ volume near the PNS. \label{fig:magneticFieldStrength}}
\end{figure*}
Figure \ref{fig:magneticFieldStrength} shows volume renderings of the magnetic field magnitude in model $\model{10}{0.0}{00}$ near the end of the simulation ($t=1068$~ms).
The left panel shows a global view of the amplified magnetic fields below the shock (white contour), and the right panel shows a zoomed-in view of a $(200$~km$)^{3}$ volume near the PNS.
These renderings illustrate the complicated, highly intermittent magnetic fields that develop from SASI-induced turbulent flows.
The magnetic energy is concentrated in thin, folded, intense flux ropes.
Notice in the left panel that amplified magnetic fields do not extend all the way to the shock, but are confined to a smaller volume, which is characterized by highly turbulent flows (see also the distribution of vorticity for model $\model{13}{0.0}{00}$ in the volume renderings in Figure \ref{fig:machNumberAndVorticity}, and the associated movies).
The spatial distribution of the magnetic field (in particular the intermittency) in the strong-field model is similar to that of the weak-field model.
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure8a.png}
{./figure8b.png}
\caption{Normalized PDFs of the magnetic field components (left panel) and the cosine of the angle between vorticity and magnetic field (right panel) in model \model{10}{0.0}{00}. The PDFs are constructed by averaging over the time period from $804$~ms to $918$~ms. The PDFs of the $x$-, $y$-, and $z$-components of the magnetic field (solid, dotted, and dashed lines, respectively) are practically indistinguishable. \label{fig:pdfBFieldAndVorticityDotBField}}
\end{figure*}
Normalized PDFs of individual components of the magnetic field from model \model{10}{0.0}{00} are plotted in the left panel of Figure \ref{fig:pdfBFieldAndVorticityDotBField}.
The shape of the distributions is strongly peaked with extended exponential tails (similar to the vorticity distributions in Figure \ref{fig:pdfVelocityAndVorticity}), and the PDFs of the different magnetic field components are practically indistinguishable.
The intermittency is high, $\mbox{Kurt}(B_{x}/B_{\mbox{\tiny rms}})\approx\mbox{Kurt}(B_{y}/B_{\mbox{\tiny rms}})\approx\mbox{Kurt}(B_{z}/B_{\mbox{\tiny rms}})\approx32.5$ (somewhat larger than, but comparable to, the intermittency of the vorticity field), which is consistent with the visual impression given in Figure \ref{fig:magneticFieldStrength}.
We note that the PDFs are highly symmetric, which implies small overall polarity of the field; the magnitude of the mean values $\left[B_{x}/B_{\mbox{\tiny rms}}\right]_{\mbox{\tiny PDF}}$, $\left[B_{y}/B_{\mbox{\tiny rms}}\right]_{\mbox{\tiny PDF}}$, and $\left[B_{z}/B_{\mbox{\tiny rms}}\right]_{\mbox{\tiny PDF}}$ are all less than $0.01$.
The PDF of the cosine of the angle between $\boldsymbol\omega$ and $\vect{B}$ is plotted in the right panel of Figure \ref{fig:pdfBFieldAndVorticityDotBField}, which shows that the vorticity and magnetic field tend to be aligned or anti-aligned, and gives further support to the similarity between the magnetic and vorticity fields.
Similar distributions were also reported by \citet{brandenburg_etal_1996}.
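These one-point statistics are straightforward to reproduce from cell data; the following minimal Python sketch (array shapes and the moment-ratio kurtosis convention, for which a Gaussian gives 3, are our assumptions for illustration) computes the normalized PDFs, the kurtosis, and the alignment cosine:
\begin{verbatim}
import numpy as np

def kurtosis(x):
    """Moment-ratio kurtosis <x'^4>/<x'^2>^2 (Gaussian value: 3).
    Convention assumed here for illustration."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

def normalized_pdf(x, x_norm, bins=200):
    """Normalized PDF of x/x_norm (e.g., B_x/B_rms); density=True
    makes the histogram integrate to unity."""
    return np.histogram(x / x_norm, bins=bins, density=True)

def alignment_cosine(omega, B):
    """Cosine of the angle between vorticity and magnetic field,
    per cell; omega and B are hypothetical (N, 3) arrays."""
    num = np.sum(omega * B, axis=1)
    den = np.linalg.norm(omega, axis=1) * np.linalg.norm(B, axis=1)
    return num / den
\end{verbatim}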
The ratio $B_{\mbox{\tiny max}}^{2}/(2\mu_{0}\volumeAverage{\edens{kin}}{\delta V_{i}})$ (used in Section \ref{sec:radialProfiles} to characterize saturation of the magnetic energy) is still only an approximate measure of the relative strength of the magnetic field since the kinetic energy density is averaged over the shell.
This is because the highly intermittent magnetic fields created by turbulence have been ``expelled'' from most of the fluid \citep{thompsonDuncan_1993}.
As evolution proceeds, an increasing percentage of the total magnetic energy resides in regions where the ratio of magnetic-to-kinetic energy $\beta_{\mbox{\tiny kin}}^{-1}(=v_{\mbox{\tiny A}}^{2}/|\vect{u}|^{2})$ exceeds $10^{-2}$, $10^{-1}$, and $1$: $55\%$, $10\%$, and $0.5\%$, respectively, for model $\model{12}{0.0}{00}$ at $t=966$~ms, while for $t=1126$~ms the respective percentages have increased to $72\%$, $20\%$, and $1.5\%$.
(The percentages quoted for $t=1126$~ms are very similar to those quoted in \citetalias{endeve_etal_2010} for model 3DB12Ah, which was computed with the same resolution and initial condition as model $\model{12}{0.0}{00}$, but with a different initial perturbation.)
The fraction of the total magnetic energy concentrated in regions with high $\beta_{\mbox{\tiny kin}}^{-1}$ stays relatively constant during the saturated state of the strong-field model ($t\ge780$~ms): we find $\sim90\%$ ($\beta_{\mbox{\tiny kin}}^{-1}\ge10^{-2}$), $\sim50\%$ ($\beta_{\mbox{\tiny kin}}^{-1}\ge10^{-1}$), and $\sim7\%$ ($\beta_{\mbox{\tiny kin}}^{-1}\ge1$).
Volume renderings in Figure \ref{fig:inverseKineticBeta} show the spatial distribution of $\beta_{\mbox{\tiny kin}}^{-1}$ late in the evolution of the strong-field model, illustrating the spatial \emph{and} temporal intermittency of turbulence-induced strong magnetic fields.
The snapshots are temporally separated by $10$~ms, which is longer than the turbulent turnover time (cf. Section \ref{sec:spectralAnalysis}), but is significantly shorter than both the advection time and the Alfv{\'e}n crossing time (defined loosely as $\bar{R}_{\mbox{\tiny{Sh}}}/|\vect{u}|$ and $\bar{R}_{\mbox{\tiny{Sh}}}/v_{\mbox{\tiny A}}$, respectively).
Concentrations of high $\beta_{\mbox{\tiny kin}}^{-1}$, which can briefly exceed unity in localized regions, are scattered throughout the shock volume.
As noted above, about $50\%$ of the total magnetic energy resides in regions where $\beta_{\mbox{\tiny kin}}^{-1}\ge10^{-1}$, and these magnetic fields occupy less than $10\%$ of the total shock volume.
(Movie 3 and Movie 4 in the online material show the time-evolution of $\beta_{\mbox{\tiny kin}}^{-1}$ from $t=1050$~ms to $t=1100$~ms and a full revolution of $\beta_{\mbox{\tiny kin}}^{-1}$ for $t=1100$~ms, respectively.)
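The percentages quoted above amount to simple conditional sums over cells; a minimal sketch (the inputs are hypothetical stand-ins, and a nonuniform grid would additionally require cell-volume weighting):
\begin{verbatim}
import numpy as np

def energy_fraction_above(e_mag, beta_kin_inv,
                          thresholds=(1e-2, 1e-1, 1.0)):
    """Fraction of the total magnetic energy residing in cells
    with beta_kin^{-1} = v_A^2/|u|^2 at or above each threshold."""
    total = e_mag.sum()
    return {thr: e_mag[beta_kin_inv >= thr].sum() / total
            for thr in thresholds}
\end{verbatim}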
Alfv{\'e}n waves are likely excited by the SASI activity discussed in this paper.
\citet{suzuki_etal_2008} performed simulations in spherical symmetry and investigated the role of Alfv{\'e}n waves on the core-collapse supernova explosion mechanism.
They found that---for sufficiently strong magnetic fields ($\gtrsim2\times10^{15}$~G)---heating associated with Alfv{\'e}n wave energy dissipation may revive the stalled shock.
For weaker magnetic fields no shock revival was observed.
We have not attempted to identify Alfv{\'e}n waves, or energy dissipation due to Alfv{\'e}n waves, in our simulations (the highly dynamic nature of the SASI-driven flows makes this a nontrivial task).
However, the magnetic fields attained in our strong-field model are significantly weaker than $10^{15}$~G and we do not expect Alfv{\'e}n wave heating due to the mechanism studied by \citet{suzuki_etal_2008} to result in significant energy deposition near the shock (or elsewhere) in our simulations.
\citet{guilet_etal_2011} recently suggested a mechanism for magnetic field amplification in the vicinity of an Alfv{\'e}n surface (i.e., where $v_{\mbox{\tiny A}}=|\vect{u}|$).
In their model, Alfv{\'e}n waves, excited for example by the SASI, may amplify near the Alfv{\'e}n surface and create a dynamic back-reaction.
However, the turbulent nature of the hydromagnetic evolution in our simulations may result in unfavorable conditions for this mechanism to operate.
In particular, regions where $v_{\mbox{\tiny A}}=|\vect{u}|$ appear and disappear in a highly intermittent manner (cf. Figure \ref{fig:inverseKineticBeta}, and associated movies).
\begin{figure*}
\epsscale{1.0}
\plotone{./figure9.png}
\caption{Select snapshots of the logarithm of the magnetic-to-kinetic energy ratio $\beta_{\mbox{\tiny kin}}^{-1}=v_{\mbox{\tiny A}}^{2}/|\vect{u}|^{2}$ late in the highly nonlinear magnetically saturated phase of the strong-field model ($\model{13}{0.0}{00}$). The Alfv{\'e}n speed is $v_{\mbox{\tiny A}}=|\vect{B}|/\sqrt{\mu_{0}\rho}$. The snapshots are separated by 10~ms, taken at $t=1090$~ms (left) and $t=1100$~ms (right). \label{fig:inverseKineticBeta}}
\end{figure*}
\subsection{Spectral Analysis}
\label{sec:spectralAnalysis}
Further important insight into the numerical simulations can be gained from a Fourier decomposition of the magnetic and kinetic energy.
In particular, our analysis presented in \citetalias{endeve_etal_2010} lacked the ability to quantify the amount of turbulent kinetic energy available to amplify the magnetic field as well as the magnetic field's impact on the evolution of the small-scale flows.
We seek to address these questions in this section.
Following \citet{ryu_etal_2000} we compute Fourier amplitudes from components of the velocity and magnetic fields in the computational domain\footnote{The Fourier transforms are computed using the FFTW library documented at http://www.fftw.org.}
\begin{equation}
\widehat{X}(\vect{k})
=\f{1}{V_{\mbox{\tiny L}}}\int_{V_{\mbox{\tiny L}}}X(\vect{x})\exp\left(i\vect{k}\cdot\vect{x}\right)\,dV,
\label{eq:fourierTransform}
\end{equation}
where $X(\vect{x})$ represents $\sqrt{\rho}u_{j}$ or $B_{j}$, with $j\in\{x,y,z\}$.
We then compute the kinetic and magnetic spectral energy density on a $k$-space shell
\begin{equation}
\widehat{e}_{\mbox{\tiny kin}}(k)
=\f{1}{2}\int_{k\mbox{-shell}}
\sum_{j}|\widehat{\sqrt{\rho}u_{j}}|^{2}
k^{2}\,d\Omega_{k}
\end{equation}
and
\begin{equation}
\widehat{e}_{\mbox{\tiny mag}}(k)
=\f{1}{2\mu_{0}}\int_{k\mbox{-shell}}
\sum_{j}|\widehat{B_{j}}|^{2}
k^{2}\,d\Omega_{k},
\end{equation}
respectively.
The magnitude of the wave vector (wavenumber) is $k=|\vect{k}|=(k_{x}^{2}+k_{y}^{2}+k_{z}^{2})^{1/2}$ and $d\Omega_{k}$ is a solid angle element in Fourier space.
Proper normalization of the Fourier transform ensures that integration of the spectral energy densities over $k$-space equals real-space integrals of the corresponding energy densities; i.e.,
\begin{equation}
\int_{k_{\mbox{\tiny min}}}^{k_{\mbox{\tiny max}}} \widehat{e}_{\mbox{\tiny kin}}\,dk
=\int_{V_{L}}e_{\mbox{\tiny kin}}\,dV
\equiv \widehat{E}_{\mbox{\tiny kin}} \label{eq:parsevalKin}
\end{equation}
and
\begin{equation}
\int_{k_{\mbox{\tiny min}}}^{k_{\mbox{\tiny max}}} \widehat{e}_{\mbox{\tiny mag}}\,dk
=\int_{V_{L}}\edens{mag}\,dV
\equiv \widehat{E}_{\mbox{\tiny mag}}, \label{eq:parsevalMag}
\end{equation}
where the integrals over $k$ extend from $k_{\mbox{\tiny min}}=2\pi/L$ (defined by the spatial scale of the computational box $L$) to $k_{\mbox{\tiny max}}=2\pi/\Delta l$ (limited by finite grid resolution).
The real-space integrals extend over the volume of the computational domain $V_{L}$.
Note that $\widehat{E}_{\mbox{\tiny mag}}$, the total magnetic energy in the computational domain, practically equals the magnetic energy below the shock wave $E_{\mbox{\tiny mag}}$, while a similar equality does not hold for the kinetic energy ($\widehat{E}_{\mbox{\tiny kin}}>E_{\mbox{\tiny kin}}$) because the kinetic energy of the supersonic accretion flow above the shock (included in the Fourier transform) is substantial, but contributes mostly to the spectrum at small $k$ (see Figure \ref{fig:compensatedKineticEnergySpectra}).
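For concreteness, a minimal Python sketch of this procedure is given below. It assumes a uniform $N^{3}$ grid of box size $L$, uses NumPy's FFT in place of the FFTW-based pipeline mentioned in the footnote, and all names are illustrative; the input arrays (e.g., $\sqrt{\rho}u_{j}$, or $B_{j}/\sqrt{\mu_{0}}$) stand in for simulation data:
\begin{verbatim}
import numpy as np

def shell_spectrum(field_components, L):
    """Shell-binned spectral energy density on a uniform N^3 grid.
    field_components: three real (N,N,N) arrays, e.g. sqrt(rho)*u_j
    or B_j/sqrt(mu0). Normalized so that sum(e_hat)*dk equals the
    real-space energy integral (discrete Parseval)."""
    N = field_components[0].shape[0]
    dV = (L / N)**3
    # Real-space energy integral, (1/2) sum |X|^2 dV:
    E_real = 0.5 * sum((f**2).sum() for f in field_components) * dV
    # Power per discrete Fourier mode, summed over components:
    power = sum(np.abs(np.fft.fftn(f))**2 for f in field_components)
    power *= 0.5 * dV / N**3      # now sum(power) == E_real
    # Wavenumber magnitude of each mode:
    k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # Bin into shells of width dk = 2*pi/L:
    dk = 2.0 * np.pi / L
    nbins = int(kmag.max() / dk) + 1
    shells = (kmag / dk).astype(int)
    e_hat = np.bincount(shells.ravel(), weights=power.ravel(),
                        minlength=nbins) / dk
    assert np.isclose(e_hat.sum() * dk, E_real)   # Parseval check
    k_centers = (np.arange(nbins) + 0.5) * dk
    return k_centers, e_hat
\end{verbatim}
Dividing the shell sums by the bin width $dk=2\pi/L$ is what makes the discrete analogues of Eqs. (\ref{eq:parsevalKin}) and (\ref{eq:parsevalMag}) hold exactly.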
\subsubsection{Varying the Initial Magnetic Field Strength}
The evolution of magnetic energy in Fourier space is shown in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating}, in which we plot magnetic energy spectra during the nonlinear SASI for the weak-field model (left panel) and the strong-field model (right panel), for the same times used for the spherically averaged radial profiles displayed in Figure \ref{fig:sphericalProfilesNonRotating}.
Initially (not shown), the magnetic energy spectrum decreases monotonically with increasing $k$.
At this early stage, most of the magnetic energy resides on relatively large scales ($k\lesssim0.1$~km$^{-1}$), and $\edensk{mag}$ is roughly proportional to $k^{-2}$ for larger $k$-values.
The energy spectra in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating} exhibit features typical of MHD turbulence simulations \citep[see][for a recent comprehensive review]{brandenburgSubramanian_2005}.
The spectral magnetic energy density increases with wavenumber, roughly as $\edensk{mag}\propto k^{3/2}$, for small $k$-values \citep[cf.][]{kazantsev_1968,brandenburgSubramanian_2005}. It reaches a maximum around $k=0.2$--$0.3$~km$^{-1}$ (a turnover set by numerical diffusivity; Figure \ref{fig:energySpectraResolutionStudy}), beyond which it decreases rapidly with increasing $k$.
The spectral magnetic energy density remains basically self-similar over the time-intervals displayed in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating}.
The weak-field model exhibits the exponential growth on all scales typical of a kinematic small-scale dynamo \citep{brandenburgSubramanian_2005}, the peak value increasing by almost three orders of magnitude (from $\sim6\times10^{-10}$~B~km to $\sim4\times10^{-7}$~B~km), as seen in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating}. In contrast, the peak value saturates in the strong-field model, increasing by only a factor of about four (to $\sim2\times10^{-3}$~B~km). However, the spectral shape stays relatively unchanged in both cases, with the full width at half maximum remaining roughly constant over time ($\sim0.40$~km$^{-1}$ and $\sim0.38$~km$^{-1}$ for the weak-field model and the strong-field model, respectively).
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure10a.png}
{./figure10b.png}
\caption{Temporal evolution of the spectral magnetic energy density $\edensk{mag}$ for the weak-field (left) and the strong-field (right) models, respectively. The spectral distributions are plotted at 666~ms, 792~ms, 918~ms, and 1100~ms (model $\model{10}{0.0}{00}$) and 720~ms, 820~ms, 922~ms, and 1100~ms (model $\model{13}{0.0}{00}$), respectively (i.e., the same times for the respective models as those displayed in Figure \ref{fig:sphericalProfilesNonRotating}). The dotted vertical reference lines denote spatial scales (from left to right) of 300~km, $20\times\Delta l$, and $10\times\Delta l$, where $\Delta l=1.17$~km is the size of a computational cell. The dash-dot line in each panel is proportional to $k^{3/2}$. The mean magnetic wavenumber $\bar{k}_{\mbox{\tiny mag}}$ (Eq. (\ref{eq:meanMagneticWaveNumber})) is indicated by diamonds. \label{fig:spectralMagneticEnergyDensityNonRotating}}
\end{figure*}
The spectral shape in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating} appears unchanged across the two models, even though the strong-field model saturates and the weak-field model does not.
There are, however, small differences.
At the end of the simulations ($t=1100$~ms), the normalized spectra $\edensk{mag}/\max(\edensk{mag})$ of models $\model{10}{0.0}{00}$ and $\model{12}{0.0}{00}$ lie practically on top of each other for all $k$.
The corresponding spectrum of the strong-field model follows those of the weaker-field models for large $k$ (although the peak is slightly shifted to the left), but has ``excess'' power for $k\lesssim0.1$~km$^{-1}$: the integral $\f{1}{\max(\edensk{mag})}\int_{k_{\mbox{\tiny min}}}^{0.1}\edensk{mag}\,dk$ is about $65\%$ larger in the strong-field model than in the weaker-field models.
In the simulations of non-helical MHD turbulence by \citet{haugen_etal_2004}, the magnetic energy spectra grow self-similarly until saturation ($\edensk{mag}\sim\edensk{kin}$), which occurs first on smaller spatial scales and later on larger scales (i.e., the smallest wavenumber where $\edensk{mag}\sim\edensk{kin}$ moves to ever smaller $k$). The magnetic energy spectrum then appears to align itself with the kinetic energy spectrum, with $\edensk{mag}\gtrsim\edensk{kin}$ almost up to the forcing scale.
This may suggest that the shape of the saturated and unsaturated spectra should differ more than displayed in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating}.
Despite the difference in evolutionary state (saturated or not), we have not been able to find a satisfactory explanation for why the spectral shapes in the two models remain so similar.
From the spectral magnetic energy distribution we obtain the mean magnetic wavenumber
\begin{equation}
\bar{k}_{\mbox{\tiny mag}}=\f{1}{\widehat{E}_{\mbox{\tiny mag}}}\int_{k_{\mbox{\tiny min}}}^{k_{\mbox{\tiny max}}}k\edensk{mag}\,dk,
\label{eq:meanMagneticWaveNumber}
\end{equation}
and the characteristic spatial scale of the magnetic field $\bar{\lambda}_{\mbox{\tiny mag}}=2\pi/\bar{k}_{\mbox{\tiny mag}}$.
(The mean magnetic wavenumber is indicated by a diamond on each of the energy spectra in Figure \ref{fig:spectralMagneticEnergyDensityNonRotating}.)
During the initial ramp-up to nonlinear SASI evolution we find that $\bar{\lambda}_{\mbox{\tiny mag}}$ decreases rapidly with time, from $\bar{\lambda}_{\mbox{\tiny mag}}\approx60$~km initially to $\bar{\lambda}_{\mbox{\tiny mag}}\approx20$~km around $t=650$~ms, and stays relatively constant thereafter in all the non-rotating models.
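In discrete form, with uniformly spaced shells, the bin widths cancel and Eq. (\ref{eq:meanMagneticWaveNumber}) reduces to a weighted average; a minimal sketch:
\begin{verbatim}
import numpy as np

def mean_magnetic_scale(k, e_mag_hat):
    """Mean magnetic wavenumber k_bar and scale 2*pi/k_bar from a
    shell-binned spectrum with uniform spacing (widths cancel)."""
    k_bar = np.sum(k * e_mag_hat) / np.sum(e_mag_hat)
    return k_bar, 2.0 * np.pi / k_bar
\end{verbatim}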
Magnetic field amplification in our simulations is caused by turbulent stretching of flux tubes.
In a kinematic dynamo the characteristic scale of the magnetic field decreases exponentially with time.
If the kinematic approximation remains valid, the decrease is halted by resistive dissipation when the spatial dimension of the field (the flux tube thickness) approaches the resistive scale \citep{shekochihin_etal_2002}.
The kinematic approximation remains valid throughout the evolution of the weak-field model.
The temporal constancy of $\bar{\lambda}_{\mbox{\tiny mag}}$ for $t\ge650$~ms in the weak-field model is a strong indication that numerical diffusion plays an important role in our simulations.
In our numerical scheme, we have adopted the HLL Riemann solver \citep{harten_etal_1983}, which approximates the MHD Riemann problem by only considering the left- and right-propagating fast magnetosonic waves.
This approximation results in diffusive evolution of intermediate waves (e.g., slow magnetosonic, Alfv{\'e}n, and entropy waves), and is the main source of dissipation in our simulations.
(No other form of dissipation, physical or numerical, has been explicitly included in our simulations.)
The inherent diffusivity of schemes based on the HLL Riemann solver also affects the evolution of small-scale structures (e.g., turbulence-induced magnetic fields); see Appendix \ref{app:numericalDissipation} for further details on the source and nature of the numerical dissipation of magnetic energy in our simulations.
Moreover, in the strongly nonlinear regime of the SASI at $t\gtrsim750$~ms we find that $\bar{\lambda}_{\mbox{\tiny mag}}$ is somewhat larger ($\sim10\%$) in models with a stronger initial magnetic field.
Specifically, we find $\timeAverage{\bar{\lambda}_{\mbox{\tiny mag}}}{0.8}{1.1}\approx18$~km for the weak-field model and $\timeAverage{\bar{\lambda}_{\mbox{\tiny mag}}}{0.8}{1.1}\approx20$~km for the strong-field model.
This trend is consistent with the magnetic field becoming strong enough to cause a back-reaction on the fluid through the magnetic tension force and thereby limit the extent to which magnetic flux tubes are stretched and bent by the chaotic flow induced by the SASI.
\citet{shekochihin_etal_2001} observed a strong anti-correlation between the magnetic field strength and the curvature of magnetic flux tubes in their small-scale dynamo simulations; i.e., that the strongest magnetic fields are less curved.
(This effect could potentially be much stronger in simulations similar to ours, but performed at significantly higher spectral resolution, where the magnetic diffusion scale would move to larger $k$-values.)
We find that the magnetic curvature radius $\lambda_{\mbox{\tiny c}}$ and the magnetic rms scale $\lambda_{\mbox{\tiny rms}}$ \citepalias[cf. Eqs. (16) and (17), respectively, in][]{endeve_etal_2010} evolve similarly to $\bar{\lambda}_{\mbox{\tiny mag}}$.
In particular, for the weak-field model we find $\timeAverage{\lambda_{\mbox{\tiny c}}}{0.8}{1.1}\approx9.5$~km and $\timeAverage{\lambda_{\mbox{\tiny rms}}}{0.8}{1.1}\approx3.7$~km.
The corresponding values for the strong-field model are about $10\%$ larger.
Note that $\lambda_{\mbox{\tiny c}}$ and $\lambda_{\mbox{\tiny rms}}$ combined characterize the structure of the magnetic field.
They measure, respectively, how sharply magnetic flux tubes are bent and how thinly they are stretched.
Such information is not contained in $\bar{\lambda}_{\mbox{\tiny mag}}$ alone.
Spectral kinetic energy distributions from the non-rotating models with different initial field strengths are shown in Figure \ref{fig:spectralKineticEnergyDensityNonRotating}.
The stochastic nature of the SASI and the turbulent flows necessitates the use of temporally averaged spectra when cross-comparing the models; in particular,
the kinetic energy spectra shown in Figure \ref{fig:spectralKineticEnergyDensityNonRotating} are averaged over the time period extending from 800~ms to 1100~ms (that is, we plot $\timeAverage{\edensk{kin}}{0.8}{1.1}$ versus $k$).
The spectra show that the majority of the kinetic energy resides on relatively large spatial scales (small $k$; see the dotted vertical reference lines).
The spectral kinetic energy density is roughly proportional to $k^{-4/3}$ for small $k$-values ($k\lesssim0.2$~km$^{-1}$), while for larger $k$-values ($k\gtrsim0.6$~km$^{-1}$) the flow is heavily influenced by numerical dissipation and the kinetic energy decreases rapidly with increasing wavenumber ($\edensk{kin}\propto k^{-9/2}$).
Magnetic field amplification is driven by the turbulent flows, and it is the kinetic energy of the small-scale motions that is available to be tapped by the magnetic fields.
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure11a.png}
{./figure11b.png}
\caption{Left panel: (time-averaged) kinetic energy spectra $\edensk{kin}$ from non-rotating models $\model{10}{0.0}{00}$ (solid line), $\model{12}{0.0}{00}$ (dotted line), and $\model{13}{0.0}{00}$ (dashed line). Right panel: difference in spectral kinetic energy relative to the weak-field reference model ($1-\edensk{kin}/\widehat{e}_{\mbox{\tiny kin}}^{\,\mbox{\tiny ref}}$), where $\widehat{e}_{\mbox{\tiny kin}}^{\,\mbox{\tiny ref}}$ is the spectral kinetic energy density of the weak-field model ($\model{10}{0.0}{00}$). In both panels we include vertical reference lines to indicate spatial scales of 300~km, $20\times\Delta l$, and $10\times\Delta l$. In the left panel we also include reference lines proportional to power laws in $k$: $k^{-5/3}$ (long-dashed), $k^{-4/3}$ (dash-dotted), and $k^{-9/2}$ (dash-dot). \label{fig:spectralKineticEnergyDensityNonRotating}}
\end{figure*}
When comparing the non-rotating models in the left panel of Figure \ref{fig:spectralKineticEnergyDensityNonRotating}, we see a decreasing trend in the spectral kinetic energy density for larger wavenumbers ($k\gtrsim0.2$~km$^{-1}$) in models with a stronger initial magnetic field.
(The decrease in kinetic energy on small scales is balanced by a corresponding increase in magnetic energy.)
We emphasize this difference further in the right panel, where we plot the difference in $\edensk{kin}$ for the stronger field models relative to the weak-field model, $1-\edensk{kin}/\widehat{e}_{\mbox{\tiny kin}}^{\,\mbox{\tiny ref}}$.
(The spectral kinetic energy density of the weak-field reference model is here denoted $\widehat{e}_{\mbox{\tiny kin}}^{\,\mbox{\tiny ref}}$.)
The spectral kinetic energy density in model $\model{12}{0.0}{00}$ is reduced by up to $\sim6\%$, while in the strong-field model it is reduced by a maximum of $\sim15\%$ relative to the weak-field model.
The largest difference is seen in the strongly diffusive regime around $k=0.6$~km$^{-1}$.
These results demonstrate that even relatively weak initial magnetic fields can be amplified and impact the flow, although only on small spatial scales.
For larger spatial scales ($k\lesssim0.06$~km$^{-1}$) the differences are caused by differences in the pre-shock kinetic energy (due to differences in $\bar{R}_{\mbox{\tiny Sh}}$ and box size $L$).
The kinetic energy of the pre-shock flows, density stratification (which produces the real-space power law in radius in the spherically averaged kinetic energy density; Section \ref{sec:radialProfiles}), and compressibility (due to the presence of supersonic flows below the shock) all contribute to the kinetic energy spectra in Figure \ref{fig:spectralKineticEnergyDensityNonRotating}.
To further investigate details about the shape of the kinetic energy spectra due to these factors, in particular the $-4/3$ slope (as opposed to the $-5/3$ slope of Kolmogorov turbulence), we have computed (1) kinetic energy spectra with the pre-shock flow velocity set to zero $\edensk{kin}^{\,\mbox{\tiny I}}(k)$, (2) kinetic energy spectra with the pre-shock flow velocity set to zero \emph{and} corrected for the radial density stratification $\edensk{kin}^{\,\mbox{\tiny II}}(k)$ (i.e., we use $X=\sqrt{\volumeAverage{\rho}{V_{\mbox{\tiny L}}}}u_{j}$ with $j\in\{x,y,z\}$ in Eq. (\ref{eq:fourierTransform})), and (3) kinetic energy spectra with the pre-shock flow velocity set to zero \emph{and} corrected for radial density stratification, \emph{but} with local compressibility retained $\edensk{kin}^{\,\mbox{\tiny III}}(k)$ (i.e., we use $X=\sqrt{\bar{\rho}}u_{j}$, where $\bar{\rho}=\volumeAverage{\rho}{V_{\mbox{\tiny L}}}\times(\rho/\volumeAverage{\rho}{\delta V_{i}})$, in Eq. (\ref{eq:fourierTransform})).
Results from these calculations are plotted in Figure \ref{fig:compensatedKineticEnergySpectra}, where we plot compensated kinetic energy spectra from the weak-field model (averaged over the time interval from $804$~ms to $918$~ms): $\edensk{kin}\times k^{5/3}$ (solid line), $\edensk{kin}^{\,\mbox{\tiny I}}\times k^{5/3}$ (dotted line), $\edensk{kin}^{\,\mbox{\tiny II}}\times k^{5/3}$ (dashed line), and $\edensk{kin}^{\,\mbox{\tiny III}}\times k^{5/3}$ (dash-dot line).
(In Figure \ref{fig:compensatedKineticEnergySpectra}, $\edensk{kin}^{\,\mbox{\tiny II}}$ and $\edensk{kin}^{\,\mbox{\tiny III}}$ have been multiplied by a factor of three for convenient comparison with $\edensk{kin}$ and $\edensk{kin}^{\,\mbox{\tiny I}}$.)
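The three corrected spectra amount to different weightings of the velocity before the transform of Eq. (\ref{eq:fourierTransform}) is applied; a schematic Python sketch (all inputs, including the pre-shock mask and the shell-averaged density mapped back onto the grid, are hypothetical stand-ins for simulation data):
\begin{verbatim}
import numpy as np

def weighted_velocity(u, rho, rho_shell, pre_shock_mask, variant):
    """Velocity weightings for the corrected spectra (schematic).
    u: (3,N,N,N) velocity; rho: (N,N,N) density; rho_shell: the
    spherically averaged density broadcast onto the grid;
    pre_shock_mask: True ahead of the shock."""
    u = np.where(pre_shock_mask, 0.0, u)    # variants I, II, III
    if variant == "I":
        w = np.sqrt(rho)                    # full stratification
    elif variant == "II":
        w = np.sqrt(rho.mean())             # global mean density
    else:                                   # variant "III"
        w = np.sqrt(rho.mean() * rho / rho_shell)
    return w * u
\end{verbatim}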
\begin{figure}
\epsscale{1.0}
\plotone{./figure12.png}
\caption{Compensated kinetic energy spectra from model \model{10}{0.0}{00}. (The spectra are computed by averaging over the time period from $804$~ms to $918$~ms.) The solid line corresponds to the kinetic energy spectrum shown as the solid line in left panel in Figure \ref{fig:spectralKineticEnergyDensityNonRotating}. The kinetic energy spectrum obtained when setting the pre-shock flow velocity to zero $\edensk{kin}^{\,\mbox{\tiny I}}$ is represented by the dotted line. Kinetic energy spectra with corrections for density stratification (also with the pre-shock flow excluded), $\edensk{kin}^{\,\mbox{\tiny II}}$ and $\edensk{kin}^{\,\mbox{\tiny III}}$, are represented by the dashed and dash-dot lines, respectively (see text for details). Note the narrow inertial range in the stratification-corrected spectra ($\edensk{kin}^{\,\mbox{\tiny II}},\edensk{kin}^{\,\mbox{\tiny III}}\propto k^{-5/3}$) in $k\in[0.04,0.1]$~km$^{-1}$ (i.e., spatial scales from $\sim160$~km to $\sim60$~km). \label{fig:compensatedKineticEnergySpectra}}
\end{figure}
When comparing $\edensk{kin}$ and $\edensk{kin}^{\,\mbox{\tiny I}}$ in Figure \ref{fig:compensatedKineticEnergySpectra} we see that the supersonic pre-shock accretion flow contributes to the energy spectrum, mostly for small $k$-values, but the two spectra remain similar in shape.
(By integrating the two spectra over all $k$ we find $\widehat{E}_{\mbox{\tiny kin}}\approx0.053$~B and $\widehat{E}_{\mbox{\tiny kin}}^{\,\mbox{\tiny I}}\approx0.040$~B.)
On the other hand, the kinetic energy spectra change markedly when the density stratification is excluded from the computation.
The $-4/3$ spectral slope seen in the left panel of Figure \ref{fig:spectralKineticEnergyDensityNonRotating} is due to density stratification from $R_{\mbox{\tiny PNS}}$ to $R_{\mbox{\tiny{Sh}}}$.
Effects due to compressibility are subdominant and do not change the shape of the spectrum in any significant way.
(\citet{kritsuk_etal_2007} scaled the velocity with $\rho^{1/3}$ to recover Kolmogorov $-5/3$ scaling in spectra from simulations of supersonic isothermal turbulence.)
Moreover, we observe a narrow inertial range in $k\in[0.04,0.1]$~km$^{-1}$ (i.e., spatial scales from $\sim160$~km to $\sim60$~km) where $\edensk{kin}^{\,\mbox{\tiny II}},\edensk{kin}^{\,\mbox{\tiny III}}\propto k^{-5/3}$.
For larger $k$-values (around $k=0.2$~km$^{-1}$) we observe a bump in the $\edensk{kin}^{\,\mbox{\tiny II}}$ and $\edensk{kin}^{\,\mbox{\tiny III}}$ spectra \citep[e.g.,][and references therein]{ishihara_etal_2009}, which is less pronounced for $\edensk{kin}^{\,\mbox{\tiny III}}$.
We note that the simulations by \citet{haugen_etal_2003,haugen_etal_2004} support a $k^{-5/3}$ spectrum for non-helical MHD turbulence.
(Given infinite spectral resolution, the $\edensk{kin}$ spectrum could possibly also follow $-5/3$ scaling for larger $k$, where the spectrum presumably would be less influenced by density stratification.)
The identification of Kolmogorov-like spectra in our simulations helps us associate post-shock flows with turbulence.
In Figure \ref{fig:compensatedKineticEnergySpectra}, the peak in the $\edensk{kin}^{\,\mbox{\tiny II}}$ and $\edensk{kin}^{\,\mbox{\tiny III}}$ spectra for smaller $k$-values ($k\approx0.02$~km$^{-1}$; i.e., spatial scales around 300~km) is associated with the large scale SASI flows (cf. Figure \ref{fig:machNumberAndVorticity}), which drive post-shock turbulence.
The peak is located around $k=0.05$~km$^{-1}$ (i.e., spatial scales around 125~km) in the earlier stages of SASI development, when the spectrum also has a steeper-than-$-5/3$ slope for larger $k$.
When the SASI develops nonlinearly, and the average shock radius begins to increase, the peak moves to smaller $k$-values, and the $\edensk{kin}^{\,\mbox{\tiny II}}$ and $\edensk{kin}^{\,\mbox{\tiny III}}$ spectra develop Kolmogorov slopes.
Thus, the power in the large scale flows cascades to smaller-scale flows.
In particular, integrating the $\edensk{kin}^{\,\mbox{\tiny I}}$ spectrum in Figure \ref{fig:compensatedKineticEnergySpectra} over $k$, from $k=0.04$~km$^{-1}$ to $k_{\mbox{\tiny max}}$, gives $0.022$~B.
Thus, a large fraction (up to $\sim50\%$) of the kinetic energy below the shock may be associated with turbulence.
These observations suggest that the SASI saturates due to the development of turbulence via secondary instabilities (e.g., the Kelvin-Helmholtz instability), which feed on the power in the low-order SASI modes \citep[e.g.,][]{guilet_etal_2010}.
The turbulent energy is either dissipated via viscous heating, or converted into magnetic energy and dissipated via Joule heating.
However, we find that significantly less than $50\%$ of the post-shock kinetic energy is accessed for magnetic field amplification (cf. Figure \ref{fig:turbulentKineticEnergy}).
We use the unmodified kinetic energy spectrum $\edensk{kin}$ in our further analysis since it is related to the total kinetic energy via Eq. (\ref{eq:parsevalKin}), and therefore most useful for extracting quantitative information from our simulations.
From the spectral kinetic energy density we obtain the turbulent kinetic energy in our simulations
\begin{equation}
E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}
=\int_{k_{\mbox{\tiny tur}}}^{k_{\mbox{\tiny max}}}\edensk{kin}\,dk.
\label{eq:turbulentKineticEnergy}
\end{equation}
For the purpose of studying magnetic field amplification, we have chosen to define turbulent flows to include flows residing on scales with $k\ge k_{\mbox{\tiny tur}}=2\pi/\lambda_{\mbox{\tiny tur}}$, where the turbulent spatial scale covers 25 grid cells, $\lambda_{\mbox{\tiny tur}}=25\times\Delta l\approx30$~km (for $\Delta l=1.17$~km).
For reference, $\lambda_{\mbox{\tiny tur}}$ is more than an order of magnitude smaller than the average shock radius (which in turn is comparable to the forcing scale of the turbulent flows; i.e., the scale of the supersonic downdrafts from the shock triple-point), but is comparable to $R_{\mbox{\tiny PNS}}$ ($\sim25\%$ smaller).
This particular choice for $k_{\mbox{\tiny tur}}$ is motivated by several factors, including (1) most of the magnetic field amplification occurs on spatial scales with $k>k_{\mbox{\tiny tur}}$ (Figures \ref{fig:spectralMagneticEnergyDensityNonRotating} and \ref{fig:energySpectraResolutionStudy}), (2) any dynamic effect of the magnetic field is seen on scales with $k\gtrsim k_{\mbox{\tiny tur}}$ (Figure \ref{fig:spectralKineticEnergyDensityNonRotating}), and (3) the flow Taylor microscale, $\lambda_{\mbox{\tiny T}}=\sqrt{\volumeAverage{u^{2}}{V_{\mbox{\tiny{Sh}}}}/\volumeAverage{|\curl{\vect{u}}|^{2}}{V_{\mbox{\tiny{Sh}}}}}$, which measures the average size of turbulent eddies \citep[e.g.,][]{ryu_etal_2000}, is comparable to $\lambda_{\mbox{\tiny tur}}$ (about a factor of two smaller).
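In terms of a shell-binned spectrum, Eq. (\ref{eq:turbulentKineticEnergy}) and the Taylor microscale reduce to one-liners; a minimal sketch (the helper names are illustrative, and uniform shell spacing is assumed):
\begin{verbatim}
import numpy as np

def turbulent_kinetic_energy(k, e_kin_hat, lam_tur_km=30.0):
    """E_kin^tur: integral of e_kin_hat over k >= 2*pi/lambda_tur,
    with k in km^-1 on uniformly spaced shells."""
    k_tur = 2.0 * np.pi / lam_tur_km
    dk = k[1] - k[0]
    return np.sum(e_kin_hat[k >= k_tur]) * dk

def taylor_microscale(u2_mean, curl_u2_mean):
    """lambda_T = sqrt(<u^2>/<|curl u|^2>): mean turbulent eddy size."""
    return np.sqrt(u2_mean / curl_u2_mean)
\end{verbatim}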
\begin{figure}
\epsscale{1.0}
\plotone{./figure13.png}
\caption{Turbulent kinetic energy (Eq. (\ref{eq:turbulentKineticEnergy}), black lines) and total magnetic energy (red lines) versus time in non-rotating models in which the initial magnetic field strength is varied: $\model{10}{0.0}{00}$ (solid), $\model{12}{0.0}{00}$ (dashed), and $\model{13}{0.0}{00}$ (dotted). The dash-dotted line is proportional to $\exp{(t/\tau)}$, with $\tau=100$~ms. The long-dashed horizontal line indicates the upper limit for turbulent kinetic energy, assuming Kolmogorov scaling applies for $k\ge k_{\mbox{\tiny tur}}$. ($10^{-2}~\mbox{B}=10^{49}~\mbox{erg}$.) \label{fig:turbulentKineticEnergy}}
\end{figure}
In Figure \ref{fig:turbulentKineticEnergy} we plot the time evolution of the turbulent kinetic energy in the non-rotating models.
The turbulent kinetic energy evolves similarly to the total kinetic energy below the shock (cf. top left panel in Figure \ref{fig:overviewNonRotating}). It grows exponentially during the ramp-up of the SASI and reaches a saturation level, where the intermittent time variability is superimposed on a barely noticeable overall growth.
The growth rate during the exponential phase is somewhat lower than that of the total kinetic energy, and the saturation level is about an order of magnitude below the total kinetic energy beneath the shock.
The exponential growth time in the weak-field model is $\tau\approx100$~ms.
The time-averaged saturation levels for the turbulent kinetic energy in the respective models are found to be $\timeAverage{E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}}{0.8}{1.1}=5.90\times10^{-3}$~B ($\model{10}{0.0}{00}$), $5.48\times10^{-3}$~B ($\model{12}{0.0}{00}$), and $5.11\times10^{-3}$~B ($\model{13}{0.0}{00}$), which implies $\sim7\%$ ($4.2\times10^{-4}$~B) and $\sim13\%$ ($7.9\times10^{-4}$~B) reductions with respect to the weak-field model for model $\model{12}{0.0}{00}$ and model $\model{13}{0.0}{00}$, respectively.
These reductions in turbulent energy are comparable to the increase in magnetic energy in the respective models.
Thus the magnetic energy grows at the expense of the turbulent kinetic energy.
The saturation level for the turbulent kinetic energy in our models is only about a factor of two below what is obtained by (hypothetically) assuming Kolmogorov scaling ($\edensk{kin}\propto k^{-5/3}$) for $k\ge k_{\mbox{\tiny tur}}$ (indicated by the long-dashed line in Figure \ref{fig:turbulentKineticEnergy}). Thus, $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}\approx 10^{-2}$~B may serve as an upper limit for the turbulent kinetic energy in our models, and therefore also as a reasonable upper limit on the magnetic energy attainable in these simulations.
(The turbulent kinetic energy may depend on the accretion rate ahead of the shock, which is held fixed in our simulations.
Thus, the upper limit on $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$ is only approximate.)
We find that the magnetic energy in model $\model{13}{0.0}{00}$ saturates at about $10\%$ of the turbulent kinetic energy.
One must keep in mind that the magnetic energy is also heavily influenced by numerical dissipation during the saturated phase.
It is entirely possible that the magnetic energy can grow beyond the levels seen in our simulations, but probably not much above the long-dashed horizontal line in Figure \ref{fig:turbulentKineticEnergy}.
Small-scale dynamo simulations commonly show that the magnetic energy spectrum lies slightly above the kinetic energy spectrum on the smallest scales during saturation \citep[e.g.,][]{brandenburgSubramanian_2005}.
The magnetic energy does not exceed the kinetic energy in any part of the spectrum in our simulations.
This may be due to finite resolution and numerical dissipation on the smallest scales.
\subsubsection{Varying the Spatial Resolution}
In \citetalias{endeve_etal_2010} we found that magnetic field amplification from SASI-induced turbulent flows is very sensitive to the spatial resolution adopted in the numerical simulations.
(In general, increased spatial resolution results in stronger magnetic fields and improves the conditions for a dynamical influence of magnetic fields.)
With the energy spectra we continue to study the effect of resolution in this section.
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure14a.png}
{./figure14b.png}
\caption{Energy spectra from simulations in which the spatial resolution has been varied. Kinetic energy spectra are plotted in the left panel, and the fractional magnetic energy enclosed by the $k$-space shell with radius $k$, $f_{\mbox{\tiny mag}}(k)$ (cf. Eq. (\ref{eq:eMagEnclosed})), is plotted in the right panel. Results are plotted for the non-rotating model with $B_{0}=1\times10^{12}$~G ($\model{12}{0.0}{00}$). The spatial resolution in these runs has been set to $\Delta l=2.34$~km (dotted lines), $1.56$~km (dashed lines), and $1.17$~km (solid lines). The kinetic energy spectra are averaged over the time period extending from $800$~ms to $1100$~ms. For each model, magnetic energy spectra are plotted at $t=800$~ms, $900$~ms, $1000$~ms, and $1100$~ms (thicker lines represent more advanced time states). The mean magnetic wavenumber (cf. Eq. (\ref{eq:meanMagneticWaveNumber})) is denoted with a diamond on each spectrum in the right panel. In both panels we include vertical reference lines indicating the spatial scales of 300~km (dash-dot line) and $10\times\Delta l$ (with line styles matching each of the models). \label{fig:energySpectraResolutionStudy}}
\end{figure*}
Energy spectra from simulations of the non-rotating model with $B_{0}=1\times10^{12}$~G for various spatial resolutions are plotted in Figure \ref{fig:energySpectraResolutionStudy}.
In the left panel we plot the spectral kinetic energy density.
In the right panel we plot the fractional magnetic energy enclosed by the $k$-space shell with radius $k$
\begin{equation}
f_{\mbox{\tiny mag}}(k)=
\f{1}{\widehat{E}_{\mbox{\tiny mag}}}
\int_{k_{\mbox{\tiny min}}}^{k}\edensk{mag}(k')\,dk',
\label{eq:eMagEnclosed}
\end{equation}
where $f_{\mbox{\tiny mag}}(k)$ is normalized to the total magnetic energy $\widehat{E}_{\mbox{\tiny mag}}$ so that $f_{\mbox{\tiny mag}}(k_{\mbox{\tiny max}})=1$.
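In discrete form, $f_{\mbox{\tiny mag}}(k)$ is simply a normalized cumulative sum over shells; a one-function sketch under the same uniform-spacing assumption as above:
\begin{verbatim}
import numpy as np

def enclosed_magnetic_fraction(e_mag_hat):
    """f_mag(k): cumulative magnetic energy fraction over shells
    of uniform width; by construction f_mag(k_max) = 1."""
    cum = np.cumsum(e_mag_hat)
    return cum / cum[-1]
\end{verbatim}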
Results from three simulations are presented, and the grid size has been varied by a factor of two: $\Delta l=2.34$~km (low resolution; dotted lines), $\Delta l=1.56$~km (medium resolution; dashed lines), and $\Delta l=1.17$~km (high resolution; solid lines).
Kinetic energy spectra are averaged over a time period extending from $800$~ms to $1100$~ms, while magnetic energy spectra are plotted for $t=800$~ms, $900$~ms, $1000$~ms, and $1100$~ms.
The kinetic energy spectra are very similar and follow each other closely for small wavenumbers ($k\lesssim0.2$~km$^{-1}$).
Numerical dissipation influences the kinetic energy for larger $k$-values, and $\edensk{kin}$ begins to fall off more rapidly with increasing $k$.
The fall-off starts at smaller $k$-values for the lower resolution runs: $\edensk{kin}$ falls below $10^{-3}$~B km$^{-1}$ around $k=0.5$~km$^{-1}$ in the low resolution run, and around $k=0.8$~km$^{-1}$ in the high resolution run.
Since most of the kinetic energy below the shock resides on large scales, and includes the flows associated with the supersonic stream ahead of the shock triple-point (Figure \ref{fig:machNumberAndVorticity}), the total kinetic energy below the shock, $\eShock{kin}$, is insensitive to the spatial resolution.
During the highly nonlinear operation of the SASI spiral mode we find $\timeAverage{\eShock{kin}}{0.8}{1.1}=0.045$~B, $0.043$~B, and $0.044$~B, for the low, medium, and high resolution model, respectively.
Our definition of $k_{\mbox{\tiny tur}}$, used in Eq. (\ref{eq:turbulentKineticEnergy}), is not optimal when comparing simulations computed with different spatial resolutions, since it results in smaller $k_{\mbox{\tiny tur}}$ and more turbulent kinetic energy in models with larger $\Delta l$ ($\timeAverage{E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}}{0.8}{1.1}=8.6\times10^{-3}$~B, $6.3\times10^{-3}$~B, and $5.5\times10^{-3}$~B, for low, medium, and high resolution, respectively).
For the purpose of quantifying the increase in kinetic energy on small scales due to higher resolution, we fix $k_{\mbox{\tiny tur}}=0.2$~km$^{-1}$ and find that $\timeAverage{E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}}{0.8}{1.1}$ increases (linearly) by a factor of two when $\Delta l$ is decreased by a corresponding factor of two ($\timeAverage{E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}}{0.8}{1.1}=2.7\times10^{-3}$~B, $4.5\times10^{-3}$~B, and $5.5\times10^{-3}$~B for low, medium, and high resolution, respectively).
The self-similar evolution of the magnetic energy spectra is clearly displayed in the right panel of Figure \ref{fig:energySpectraResolutionStudy}.
The different models are well separated, while each model's temporally separated spectra fall practically on top of each other.
The spectra are shifted to larger $k$-values when the resolution is increased.
The shift to the right in the spectrum is a direct consequence of a corresponding shift of the spatial scale where numerical diffusion dominates.
The characteristic scale of the magnetic field is roughly constant with time over the time period displayed, but decreases roughly in proportion to the grid spacing $\Delta l$ as the resolution is increased.
In particular, we find $\timeAverage{\bar{\lambda}_{\mbox{\tiny mag}}}{0.8}{1.1}\approx32$~km, $23$~km, and $18$~km, for the low, medium, and high resolution runs, respectively.
Similarly, the magnetic rms scale $\lambda_{\mbox{\tiny rms}}$ (the average flux tube thickness) decreases by almost a factor of two when $\Delta l$ is reduced by a factor of two (from $7.0$~km to $3.8$~km).
The shift to smaller spatial scales---in particular the decrease in the flux tube thickness---afforded by higher spatial resolution results in stronger magnetic fields and an increase in the magnetic energy.
The integrated magnetic energy below the shock in the low resolution model reaches saturation for $t<800$~ms, and does not grow much beyond $3\times10^{-6}$~B.
(Saturation of magnetic energy in this model is solely due to numerical dissipation.
The influence of magnetic fields on small scale flows emphasized in the right panel of Figure \ref{fig:spectralKineticEnergyDensityNonRotating} is not observed in lower-resolution models.)
In the high resolution model the magnetic energy grows throughout the run, and reaches $2\times10^{-4}$~B near the end (dashed red line in Figure \ref{fig:turbulentKineticEnergy}).
It is interesting to note that during the time span from 800~ms to 1100~ms, between $68\%$ and $77\%$ of the total magnetic energy resides on scales smaller than $\lambda_{\mbox{\tiny tur}}$, nearly independent of spatial resolution.
(There is a weak decrease in the percentage with increasing resolution.)
The corresponding percentage for spatial scales smaller than $10\times\Delta l$ varies between $16\%$ and $21\%$.
We expect the magnetic energy spectra will continue to move to higher wavenumbers when the resolution is increased beyond that of our simulations.
The shift to smaller spatial scales (smaller flux tube cross-section) is accompanied by stronger magnetic fields, and we expect the flux tube cross-section to decrease until the magnetic fields become strong enough to cause a back-reaction on the fluid through the Lorentz force.
(\citet{haugen_etal_2003} presented converged magnetic energy spectra in their simulations of non-helical MHD turbulence.
In their converged spectra, most of the magnetic energy resides at a wavenumber $\sim5$ times the minimum wavenumber in the computational domain.)
\subsection{Magnetic Energy Growth Rates}
\label{sec:magneticEnergyGrowthRates}
In this section we focus on the relative importance of mechanisms that control the exponential growth rate of magnetic energy when the magnetic field is weak and the kinematic approximation remains valid.
We also consider the impact of finite numerical resolution on the growth rate in our simulations.
An eddy turnover time $\tau_{\mbox{\tiny eddy}}=\bar{\lambda}_{\mbox{\tiny mag}}/u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$ is commonly invoked as the characteristic exponential growth time of magnetic fields in a turbulent small-scale dynamo \citep[e.g.,][]{kulsrudAnderson_1992}. Here $\bar{\lambda}_{\mbox{\tiny mag}}$ is the characteristic spatial scale of the magnetic field defined below Eq. (\ref{eq:meanMagneticWaveNumber}). The turbulent rms velocity is
\begin{equation}
u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}=\left(\f{2E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}}{M_{\mbox{\tiny{Sh}}}}\right)^{1/2},
\label{eq:rmsVelocity}
\end{equation}
where $M_{\mbox{\tiny{Sh}}}$ is the mass in $V_{\mbox{\tiny{Sh}}}$.
The use of $M_{\mbox{\tiny{Sh}}}$ in Eq. (\ref{eq:rmsVelocity}), instead of only the mass of the flow included in $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$, results in an underestimate of $u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$.
On the other hand, $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$ (and therefore $u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$) is also sensitive to the definition of $k_{\mbox{\tiny tur}}$, which may be larger than the value we use in Eq. (\ref{eq:turbulentKineticEnergy}) and result in smaller $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$.
Nevertheless, Eq. (\ref{eq:rmsVelocity}) provides a reasonable order-of-magnitude estimate of $u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$.
We find that $u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$ grows rapidly during the initial ramp up of the SASI and then levels off at later times.
In the non-rotating models we find $\timeAverage{u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}}{0.8}{1.1}\approx4000$~km s$^{-1}$.
A turbulent rms velocity of several $\times10^{3}$~km s$^{-1}$ is consistent with an inspection of the subsonic flows below the shock: the average velocity among the zones with $|\vect{u}|/c_{S}\le 1$ is about $7000$~km s$^{-1}$.
For the non-rotating models $\tau_{\mbox{\tiny eddy}}$ is about $5$~ms during the highly nonlinear stage of strong SASI activity.
(Another commonly used expression for the eddy turnover time, $\volumeAverage{|\curl{\vect{u}}|^{2}}{V_{\mbox{\tiny{Sh}}}}^{-1/2}$, gives a similar result.)
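For concreteness, the numbers behind these estimates follow from Eq. (\ref{eq:rmsVelocity}) with the fiducial values quoted in the text; a minimal sketch (unit conversions only; the function is a hypothetical helper):
\begin{verbatim}
import numpy as np

M_SUN = 1.989e33   # g
BETHE = 1.0e51     # erg; 1 B = 1e51 erg

def u_rms_turbulent_kms(E_tur_bethe, M_shell_msun):
    """u_rms^tur = sqrt(2 E_kin^tur / M_Sh) in km/s."""
    u_cgs = np.sqrt(2.0 * E_tur_bethe * BETHE / (M_shell_msun * M_SUN))
    return u_cgs / 1.0e5

# Eddy turnover time with the fiducial values quoted in the text:
tau_eddy_ms = 20.0 / 4000.0 * 1.0e3   # lambda_mag/u_rms ~ 5 ms
\end{verbatim}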
We now investigate the individual magnetic energy growth rates relevant to our simulations.
Assuming a non-ideal electric field $-(\vect{u}\times\vect{B})+\eta\vect{J}$ with scalar resistivity $\eta$, the evolution equation for the magnetic energy density is easily obtained by dotting the Maxwell-Faraday (induction) equation with $\vect{B}/\mu_{0}$:
\begin{equation}
\pderiv{\edens{mag}}{t}
+\divergence{\vect{P}}
=-\vect{u}\cdot\left(\vect{J}\times\vect{B}\right)
-\f{1}{\mu_{0}}\vect{B}\cdot\curl{\left(\eta\vect{J}\right)},
\label{eq:magneticEnergyEquation}
\end{equation}
where $\vect{P}=[\vect{u}(\vect{B}\cdot\vect{B})-\vect{B}(\vect{B}\cdot\vect{u})]/\mu_{0}$ and $\vect{J}=\left(\curl{\vect{B}}\right)/\mu_{0}$. (See also Eq. (10) as well as the discussion in Section 3.3 in \citetalias{endeve_etal_2010}.) The first and second terms on the right-hand-side of Eq. (\ref{eq:magneticEnergyEquation}) represent work done against the Lorentz force ($W_{\mbox{\tiny L}}$) and magnetic energy decay due to resistive (Joule) dissipation ($-Q_{\mbox{\tiny J}}$), respectively. Kinetic energy of the flow is converted into magnetic energy if $W_{\mbox{\tiny L}}>0$.
It is apparent from Eq. (\ref{eq:magneticEnergyEquation}) that the total magnetic energy growth rate $\tau_{\mbox{\tiny tot}}^{-1}=\langle\edens{mag}\rangle^{-1}\langle\partial\edens{mag}/\partial t\rangle$ equals the sum $\tau_{\vect{J}\times\vect{B}}^{-1}+\tau_{\vect{P}}^{-1}+\tau_{\mbox{\tiny J}}^{-1}$ of individual rates due to work done against the Lorentz force, accretion of magnetic energy (Poynting flux) through $\partial V_{\mbox{\tiny PNS}}$, and resistive energy dissipation.
(The angle brackets in the total rate imply an integral over a volume bounded by the surface $\partial V_{\mbox{\tiny PNS}}$ and a spherical surface enclosing the accretion shock.)
The Poynting flux through the spherical surface enclosing the accretion shock vanishes because $\vect{u}\parallel\vect{B}$ ahead of the shock.
The Poynting flux through $\partial V_{\mbox{\tiny PNS}}$ and resistive dissipation generally result in decay of the magnetic energy in the computational domain.
The decay must be overcome by the Lorentz work term in order for the magnetic energy to increase.
In \citetalias{endeve_etal_2010} we found flux tube stretching by turbulent flows driven by the spiral SASI mode to be the dominant mechanism for magnetic field amplification (see also Figure \ref{fig:magneticEnergyGrowthRatesB0_1E10L0_0_0E00} below).
The magnetic energy growth rate due to work done against the Lorentz force is
\begin{eqnarray}
\tau_{\vect{J}\times\vect{B}}^{-1}
&=&
\f{1}{E_{\mbox{\tiny mag}}}\int_{V}\vect{u}\cdot\left(\vect{J}\times\vect{B}\right)\,dV \nonumber \\
&\approx&
2u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}/\bar{\lambda}_{\mbox{\tiny mag}} = 2 \tau_{\mbox{\tiny eddy}}^{-1},
\label{eq:growthRateLorentzWork}
\end{eqnarray}
where the turbulent rms velocity $u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}$ and the characteristic scale of the magnetic field $\bar{\lambda}_{\mbox{\tiny mag}}$ have been used.
(The factor two in the second part of Eq. (\ref{eq:growthRateLorentzWork}) stems from the factor of one half in the definition of magnetic energy, but is probably not important for this rough estimate.)
The corresponding growth time is then approximately
\begin{equation}
\tau_{\vect{J}\times\vect{B}}
\approx
2.5~\mbox{ms}
\left(\f{\bar{\lambda}_{\mbox{\tiny mag}}}{20~\mbox{km}}\right)
\left(\f{u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}}{4000~\mbox{km s}^{-1}}\right)^{-1}.
\label{eq:growthTimeLorentzWork}
\end{equation}
\begin{figure}
\epsscale{1.0}
\plotone{./figure15.png}
\caption{Magnetic energy growth rates versus time for the non-rotating weak-field model ($\model{10}{0.0}{00}$).
The growth rates are based on terms appearing in Eq. (\ref{eq:magneticEnergyEquation}) and are due to work done against the Lorentz force (solid line) and Poynting flux losses due to accretion through the spherical surface with $r=R_{\mbox{\tiny PNS}}$ (dash-dot line).
We also plot the growth rates due to compression (dotted line) and stretching (dashed line) (Eqs. (13) and (11) in \citetalias{endeve_etal_2010}, respectively). \label{fig:magneticEnergyGrowthRatesB0_1E10L0_0_0E00}}
\end{figure}
The volume occupied by the PNS is excluded from our simulations, and the magnetic energy in $V_{\mbox{\tiny{Sh}}}$ is also affected by accretion of magnetized matter through $\partial V_{\mbox{\tiny PNS}}$.
The decay rate due to this process is
\begin{equation}
\tau_{\vect{P}}^{-1}
=
\f{1}{E_{\mbox{\tiny mag}}}\oint_{\partial V_{\mbox{\tiny PNS}}}\vect{P}\cdot d\vect{S}
\approx
\f{3\dot{M}}{2\pi\rho_{0}L_{B}^{3}},
\label{eq:growthRatePoynting}
\end{equation}
where in the rightmost estimate we have adopted the exponential decrease of magnetic field with radius over a characteristic length scale $L_{B}$ (cf. Figure \ref{fig:sphericalProfilesNonRotating}) to relate the magnetic energy in $V_{\mbox{\tiny{Sh}}}$ to the field strength at the surface of the PNS: $E_{\mbox{\tiny mag}}\approx\f{B_{0}^{2}}{2\mu_{0}}\f{4\pi}{3}L_{B}^{3}$.
The decay time due to accretion through the inner boundary is then approximately
\begin{eqnarray}
\tau_{\vect{P}}
&\approx&
90~\mbox{ms}
\left(\f{\rho_{0}}{3\times10^{10}~\mbox{g cm}^{-3}}\right)\times \nonumber \\
&&\times\left(\f{L_{B}}{100~\mbox{km}}\right)^{3}
\left(\f{\dot{M}}{0.36~M_{\odot}\mbox{ s}^{-1}}\right)^{-1}
\label{eq:growthTimePoynting}
\end{eqnarray}
The average mass density around $r=R_{\mbox{\tiny PNS}}$, denoted $\rho_{0}$, stays fairly constant throughout the simulations.
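Evaluating the rightmost estimates in Eqs. (\ref{eq:growthTimeLorentzWork}) and (\ref{eq:growthTimePoynting}) with the fiducial parameters reproduces the quoted timescales; a minimal sketch:
\begin{verbatim}
import numpy as np

M_SUN = 1.989e33   # g

def tau_lorentz_ms(lam_mag_km=20.0, u_rms_kms=4000.0):
    """Growth time lambda_mag/(2 u_rms); fiducial values -> 2.5 ms."""
    return lam_mag_km / (2.0 * u_rms_kms) * 1.0e3

def tau_poynting_ms(rho0=3.0e10, L_B_km=100.0, mdot_msun_s=0.36):
    """Decay time 2*pi*rho0*L_B^3/(3*Mdot); fiducials -> ~90 ms."""
    L_B = L_B_km * 1.0e5                 # km -> cm
    mdot = mdot_msun_s * M_SUN           # g/s
    return 2.0 * np.pi * rho0 * L_B**3 / (3.0 * mdot) * 1.0e3
\end{verbatim}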
In Figure \ref{fig:magneticEnergyGrowthRatesB0_1E10L0_0_0E00} we plot the growth rates $\tau_{\vect{J}\times\vect{B}}^{-1}$ (solid line) and $\tau_{\vect{P}}^{-1}$ (dash-dot line) versus time for model $\model{10}{0.0}{00}$.
This model exhibits exponential magnetic energy growth throughout with a growth time of about 66~ms.
The growth rates plotted in Figure \ref{fig:magneticEnergyGrowthRatesB0_1E10L0_0_0E00} are computed from numerical approximations (second-order finite differences) to the integral expressions, and not the approximations provided by the rightmost expressions in Eqs. $(\ref{eq:growthTimeLorentzWork})$ and $(\ref{eq:growthTimePoynting})$.
We also include the growth rates due to stretching $\tau_{\gradient{\vect{u}}}^{-1}$ and compression $\tau_{\divergence{\vect{u}}}^{-1}$ (Eqs. (11) and (13) in \citetalias{endeve_etal_2010}, respectively), and the plot shows that stretching dominates over compression.
The rates remain quasi-steady for $t\gtrsim750$~ms, and in particular we find $\timeAverage{\tau_{\vect{J}\times\vect{B}}^{-1}}{0.9}{1.1}\approx480$~s$^{-1}$ and $\timeAverage{\tau_{\vect{P}}^{-1}}{0.9}{1.1}\approx9$~s$^{-1}$. (We also find $\timeAverage{\tau_{\gradient{\vect{u}}}^{-1}}{0.9}{1.1}\approx515$~s$^{-1}$, and $\timeAverage{\tau_{\divergence{\vect{u}}}^{-1}}{0.9}{1.1}\approx76$~s$^{-1}$.)
We note that there is good agreement between the numerically computed growth rates and the growth rates predicted by the estimates provided by the rightmost expressions in Eqs. $(\ref{eq:growthTimeLorentzWork})$ and $(\ref{eq:growthTimePoynting})$.
Furthermore, the relative importance of these rates in determining the total magnetic energy growth rate becomes clear: since $\tau_{\vect{J}\times\vect{B}}\ll\tau_{\vect{P}}$, accretion of magnetic energy through $\partial V_{\mbox{\tiny PNS}}$ has virtually no effect on the growth of magnetic energy in $V_{\mbox{\tiny{Sh}}}$.
The discrepancy between the millisecond growth time predicted by Eq. (\ref{eq:growthRateLorentzWork}) and the numerically measured growth time ($\tau\approx66$~ms; Figure \ref{fig:overviewNonRotating}) suggests that numerical dissipation plays an important role in controlling the growth time for magnetic energy in our simulations.
This is further supported by the results presented in Section \ref{sec:spectralAnalysis}, which show that the magnetic energy develops on spatial scales that are strongly affected by numerical dissipation (see also Appendix \ref{app:numericalDissipation}).
If not suppressing field growth entirely, numerical dissipation tends to increase the magnetic energy growth time.
The characteristic decay rate due to resistive dissipation of magnetic energy is
\begin{equation}
\tau_{\mbox{\tiny J}}^{-1}
=
\f{1}{E_{\mbox{\tiny mag}}}\int_{V}\f{1}{\mu_{0}}\vect{B}\cdot\curl{\left(\eta\vect{J}\right)}\,dV
\approx
\f{2\eta}{\lambda_{\mbox{\tiny d}}^{2}},
\label{eq:growthRateDissipation}
\end{equation}
where we have introduced the dissipation scale $\lambda_{\mbox{\tiny d}}$.
The decay time due to resistive dissipation is then
\begin{equation}
\tau_{\mbox{\tiny J}}
\approx
R_{\mbox{\tiny m}}\left(\f{\lambda_{\mbox{\tiny d}}}{\bar{\lambda}_{\mbox{\tiny mag}}}\right)^{2}\tau_{\vect{J}\times\vect{B}},
\label{eq:growthTimeDissipation}
\end{equation}
where the magnetic Reynolds number is defined as $R_{\mbox{\tiny m}}=u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}\bar{\lambda}_{\mbox{\tiny mag}}/\eta$.
The magnetic Reynolds number in the supernova environment is expected to be extremely large; on the order of $10^{17}$ in the PNS \citep[][]{thompsonDuncan_1993}.
As far as the magnetic energy growth rate is concerned, resistive effects are only relevant on very small scales. Before turbulent stretching of flux ropes drives the magnetic field down to resistive scales, the growth is most likely curbed by dynamical interactions with the fluid through magnetic tension forces \citep{thompsonDuncan_1993}.
\citet{thompsonDuncan_1993} list (in their Table 1) the resistivity in the PNS convection zone ($\eta=1\times10^{-4}$~cm$^{2}$~s$^{-1}$).
Adopting this value, the resistive decay time for a magnetic field varying on a spatial scale of, for example, $1$~m (i.e., much smaller than any scale resolved by our simulations) becomes very long ($\tau_{\mbox{\tiny J}}=5\times10^{7}$~s) compared to the explosion time ($\sim1$~s).
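This figure follows directly from the rightmost expression in Eq. (\ref{eq:growthRateDissipation}): $\tau_{\mbox{\tiny J}}\approx\lambda_{\mbox{\tiny d}}^{2}/(2\eta)=(10^{2}~\mbox{cm})^{2}/(2\times10^{-4}~\mbox{cm}^{2}~\mbox{s}^{-1})=5\times10^{7}$~s.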
Resistive effects are, however, important to consider in numerical MHD simulations of astrophysical systems.
We do not explicitly include resistivity in our simulations, but the numerical scheme incorporates an effective numerical resistivity in the induction equation in order to stabilize the solution when discontinuities or underresolved gradients appear in the flow (see Appendix \ref{app:numericalDissipation} for further details).
An approximation of the total growth rate can be obtained by combining Eqs. (\ref{eq:growthRateLorentzWork}) and (\ref{eq:growthRateDissipation}):
\begin{equation}
\tau_{\mbox{\tiny tot}}^{-1}
\approx
\tau_{\vect{J}\times\vect{B}}^{-1}
\left[1-\f{1}{R_{\mbox{\tiny m}}}\left(\f{\bar{\lambda}_{\mbox{\tiny mag}}}{\lambda_{\mbox{\tiny d}}}\right)^{2}\right].
\label{eq:totalGrowthRate}
\end{equation}
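Eq. (\ref{eq:totalGrowthRate}) simply expresses $\tau_{\mbox{\tiny tot}}^{-1}=\tau_{\vect{J}\times\vect{B}}^{-1}-\tau_{\mbox{\tiny J}}^{-1}$, with $\tau_{\mbox{\tiny J}}^{-1}$ eliminated in favor of $R_{\mbox{\tiny m}}$ via Eq. (\ref{eq:growthTimeDissipation}).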
In our simulations we have $\tau_{\mbox{\tiny tot}}\gg\tau_{\vect{J}\times\vect{B}}$ and $\lambda_{\mbox{\tiny d}}\lesssim\bar{\lambda}_{\mbox{\tiny mag}}$.
Defined this way, the magnetic Reynolds number in our simulations is therefore somewhat larger than, but still close to, unity ($R_{\mbox{\tiny m}}\gtrsim 1$).
This conclusion is consistent with the observations from the energy spectrum plots above, which show that a sizable fraction of the magnetic energy resides on spatial scales where numerical diffusion is significant.
Our simulations are therefore likely to grossly underestimate the magnetic energy growth rates that can be expected under more realistic physical conditions (i.e., where $R_{\mbox{\tiny m}}\gg1$).
We point out here that the saturation of magnetic energy observed in model $\model{13}{0.0}{00}$ does \emph{not} mean that $\tau_{\vect{J}\times\vect{B}}^{-1}\approx0$~s$^{-1}$ for this model.
We find that the amplified magnetic fields in model $\model{13}{0.0}{00}$ result in about a $10\%$ reduction in $\tau_{\vect{J}\times\vect{B}}^{-1}$ relative to the weak-field model, which, because of numerical dissipation, results in a significant reduction in the total growth rate $\tau_{\mbox{\tiny tot}}^{-1}$ (we expect $\tau_{\mbox{\tiny J}}^{-1}$ to be the same in both models).
In Appendix \ref{app:numericalDissipation} we measure numerically the magnetic energy decay rate due to resistive dissipation in one of our simulations.
We find $\tau_{\mbox{\tiny J}}^{-1}\approx380$~s$^{-1}$, which is comparable to, but still somewhat smaller than $\tau_{\vect{J}\times\vect{B}}^{-1}$.
The decay rates $\tau_{\mbox{\tiny J}}^{-1}$, along with $\lambda_{\mbox{\tiny d}}(\approx\bar{\lambda}_{\mbox{\tiny mag}})$ and $\eta_{\mbox{\tiny num}}$ (Appendix \ref{app:numericalDissipation}), and $\tau_{\vect{J}\times\vect{B}}^{-1}$ do not vary significantly with time during the highly nonlinear stage of the SASI.
Thus, the numerically measured growth time ($\tau_{\mbox{\tiny tot}}\approx66$~ms; Figure \ref{fig:overviewNonRotating}) is mostly the result of two large and competing processes: growth due to $\tau_{\vect{J}\times\vect{B}}^{-1}$ and decay due to $\tau_{\mbox{\tiny J}}^{-1}$.
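(As a rough consistency check: the difference of the measured rates is $\tau_{\vect{J}\times\vect{B}}^{-1}-\tau_{\mbox{\tiny J}}^{-1}\approx480-380=100$~s$^{-1}$, while the measured net rate is $\tau_{\mbox{\tiny tot}}^{-1}\approx15$~s$^{-1}$; agreement only to within an order of magnitude is expected when a small net rate is obtained as the difference of two large, approximately measured rates.)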
For increasing spatial resolution (i.e., increasing $R_{\mbox{\tiny m}}$) we expect $\tau_{\mbox{\tiny tot}}^{-1}\to\tau_{\vect{J}\times\vect{B}}^{-1}$ (cf. Eq. (\ref{eq:totalGrowthRate})).
Indeed, we have carried out simulations with different spatial resolutions and measured the magnetic energy growth rate when an epoch of exponential growth can be identified.
We find that the growth rate increases with increasing resolution: for the lowest resolution model ($\Delta l=2.34$~km) the exponential growth time is about 150~ms, while in a simulation with $\Delta l=0.78$~km the exponential growth time decreases to about 50~ms\footnote{To conserve computational resources, this model was not run to completion but until the computational domain consisted of $1280^{3}$ zones ($t\approx880$~ms). At this time the SASI is still ramping up and the magnetic energy growing rapidly.}.
In fact, a divergent increase in the magnetic energy growth rate with increasing magnetic Reynolds number (i.e., resolution) has been reported in direct numerical simulations of MHD turbulence \citep[e.g.,][]{haugen_etal_2004} and recently in simulations of turbulent star formation using adaptive mesh refinement \citep[e.g.,][]{syr_etal_2010,federrath_etal_2011}.
These authors show results from simulations in which the resolution has been doubled several times, and they find that the magnetic energy growth rate increases as a power law with increasing magnetic Reynolds number.
The sensitivity of the magnetic field evolution to numerical resolution does raise concerns about what aspects of our simulations are relevant to core-collapse supernovae.
Dissipation due to finite grid resolution tends to suppress magnetic energy growth.
Taken at face value, our simulations would therefore (falsely) suggest a negative assessment of the efficiency of SASI-induced magnetic field amplification.
However, as the resolution is increased the growth rate increases and the resulting magnetic fields become stronger.
Further analysis suggests that the simulations grossly underestimate the growth rates and fields that may obtain in the supernova environment.
The SASI-induced turbulent magnetic field amplification mechanism is a robust result from our simulations.
Only the growth rate, saturation amplitude, and the dynamical impact of the amplified magnetic fields remain uncertain.
An important consequence of the implied millisecond growth time is that any weak seed magnetic fields may be amplified to saturation levels ($|\vect{B}|\approx\sqrt{\mu_{0}\rho}|\vect{u}|$) in a core-collapse supernova if the SASI operates and drives vigorous turbulent flows below the shock.
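As a rough estimate of this saturation level, in Gaussian units (where the equipartition field is $\sqrt{4\pi\rho}\,|\vect{u}|$), and adopting the fiducial values $\rho\approx3\times10^{10}$~g~cm$^{-3}$ and $|\vect{u}|\approx u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}\approx4000$~km~s$^{-1}$ used elsewhere in this paper, we obtain
\begin{equation*}
|\vect{B}|\approx\sqrt{4\pi\times3\times10^{10}~\mbox{g cm}^{-3}}\times4\times10^{8}~\mbox{cm s}^{-1}\approx2\times10^{14}~\mbox{G}.
\end{equation*}
Since the density below the shock is lower than the value near the PNS surface adopted here, this should be read as an upper fiducial value, but it is in line with the magnetar-strength fields discussed in Section \ref{sec:pnsMagnetization}.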
The kinetic energy available to amplify the magnetic energy (some fraction of $E_{\mbox{\tiny kin}}^{\mbox{\tiny tur}}$) is not sufficient for magnetic fields generated in this way to play a principal role in the explosion dynamics.
We cannot, however, completely rule out the possibility that SASI-generated magnetic fields play a secondary role in the dynamics leading to core-collapse supernovae.
\begin{figure*}
\epsscale{1.0}
\plottwo{./figure16a.png}
{./figure16b.png}
\caption{Kinetic energy (left) and magnetic energy (right) versus time from simulations with varying degree of initial rotation. The specific angular momentum in the pre-shock flow has been set to $l=0.0$~cm$^{2}$~s$^{-1}$ (solid; $\model{10}{0.0}{00}$), $1.5\times10^{15}$~cm$^{2}$~s$^{-1}$ (dashed; $\model{10}{1.5}{15}$), and $4.0\times10^{15}$~cm$^{2}$~s$^{-1}$ (dotted; $\model{10}{4.0}{15}$). The magnetic field strength at $r=R_{\mbox{\tiny PNS}}$ is initially $B_{0}=1\times10^{10}$~G in all the models. \label{fig:kineticAndMagneticEnergy_rotating}}
\end{figure*}
\subsection{Simulations with Initial Rotation}
\label{sec:rotatingModels}
Results from rotating models are shown in Figure \ref{fig:kineticAndMagneticEnergy_rotating}, in which we plot kinetic energy (left panel) and magnetic energy (right panel) versus time.
Rotating models with $B_{0}=1\times10^{10}$~G, and $l_{0}=1.5\times10^{15}$~cm$^{2}$~s$^{-1}$ ($\model{10}{1.5}{15}$; dashed lines) and $l_{0}=4.0\times10^{15}$~cm$^{2}$~s$^{-1}$ ($\model{10}{4.0}{15}$; dotted lines) are compared with the corresponding non-rotating model ($\model{10}{0.0}{00}$; solid lines).
The most notable difference between these models is the earlier onset of the SASI observed in the rotating models.
The post-shock flow is set into rotation about the $z$-axis as the pre-shock material with angular momentum advects downstream.
The kinetic energy in the post-shock flow in model $\model{10}{1.5}{15}$ increases initially by $\sim50\%$, and settles momentarily into a quiescent state, which lasts for about 200~ms.
Then, for $t\gtrsim300$~ms, the nonlinear phase of the SASI sets in, and the kinetic energy begins to grow exponentially with a growth time $\tau\approx55$~ms, which is notably faster than in the non-rotating model.
Model $\model{10}{4.0}{15}$ receives a stronger initial perturbation due to more angular momentum ahead of the shock, and the kinetic energy in this model grows rapidly by a factor of $\sim5$ before settling into a short, quasi-steady state with $E_{\mbox{\tiny kin}}\sim10^{49}$~erg.
The kinetic energy begins to grow again for $t\gtrsim200$~ms.
The kinetic energy in all models eventually reaches similar levels in the strongly nonlinear phase of the SASI; when averaged over the last $200$~ms of each run we find $\timeAverage{E_{\mbox{\tiny kin}}}{0.68}{0.88}=0.052$~B and $\timeAverage{E_{\mbox{\tiny kin}}}{0.48}{0.68}=0.050$~B for models $\model{10}{1.5}{15}$ and $\model{10}{4.0}{15}$, respectively.
(We reported $\timeAverage{E_{\mbox{\tiny kin}}}{0.9}{1.1}=0.051$~B for model $\model{10}{0.0}{00}$ in Section \ref{sec:timeGlobal}.)
The earlier onset of the nonlinear SASI in the rotating models is consistent with \citet{blondinMezzacappa_2007} and \citet{yamasakiFoglizzo_2008}.
However, model $\model{10}{4.0}{15}$ is perturbed relatively hard when the rotating pre-shock material advects downstream, and the model does not settle into a quiescent state, as is observed in $\model{10}{1.5}{15}$.
We think it is very likely that the early SASI development in model $\model{10}{4.0}{15}$ is partially a result of our method of initiating the rotating models.
Nevertheless, the purpose of these simulations is to study the effect of rotation on turbulent magnetic field amplification during the non-linear phase, and our rotating models are suitable for this purpose.
The evolution of the magnetic energy below the shock during nonlinear SASI-operation (right panel in Figure \ref{fig:kineticAndMagneticEnergy_rotating}) in the rotating models is not significantly different from model $\model{10}{0.0}{00}$.
All models exhibit exponential magnetic energy growth during the late stages.
The magnetic energy in model $\model{10}{1.5}{15}$ grows exponentially with a growth time $\tau\approx44$~ms during the early stages (from $t\approx340$~ms to $t\approx550$~ms), and grows at a rate similar to model $\model{10}{0.0}{00}$ later on ($t\gtrsim600$~ms).
The magnetic energy in model $\model{10}{4.0}{15}$ grows exponentially at a somewhat slower rate than the other models ($\tau\approx85$~ms).
However, all models have reached similar levels at the end of the respective runs.
In particular, we find $E_{\mbox{\tiny mag}}\approx1.2\times10^{-7}$~B ($t=878$~ms) for $\model{10}{1.5}{15}$ and $E_{\mbox{\tiny mag}}\approx3.4\times10^{-8}$~B ($t=678$~ms) for $\model{10}{4.0}{15}$.
\section{MAGNETIZATION OF PROTONEUTRON STARS}
\label{sec:pnsMagnetization}
In \citetalias{endeve_etal_2010} we pointed out that the underlying PNS may be significantly magnetized due to SASI-induced magnetic field amplification.
In this section we estimate in a similar manner the degree of PNS magnetization predicted by the current set of simulations.
Adopting Eq. (\ref{eq:magneticEnergyEquation}), the magnetic energy in the volume occupied by the PNS $V_{\mbox{\tiny PNS}}$ at some time $t>t_{0}$ is
\begin{eqnarray}
E_{\mbox{\tiny mag}}(t)
&=&E_{\mbox{\tiny mag}}(t_{0}) \nonumber \\
& &
+\int_{t_{0}}^{t}\,dt'
\left(
\int_{V_{\mbox{\tiny PNS}}}\left(W_{\mbox{\tiny L}}-Q_{\mbox{\tiny J}}\right)\,dV
\right. \nonumber \\
& &
\left.
\hspace{1.5cm}
-\int_{\partial V_{\mbox{\tiny PNS}}}\vect{P}\cdot d\vect{S}
\right).
\label{eq:pnsMagneticEnergy}
\end{eqnarray}
Here $\vect{P}$ is the Poynting flux through the surface of the PNS,
and $W_{\mbox{\tiny L}}$ and $Q_{\mbox{\tiny J}}$ are obtained from $\vect{u}$ and $\vect{B}$, which must be computed with an appropriate physical model of the PNS.
The dissipative term $Q_{\mbox{\tiny J}}$ also involves the resistivity $\eta$.
Resistive dissipation is not likely to suppress field amplification in the PNS \citep{thompsonDuncan_1993}, but may be important to the long-term evolution of the neutron star magnetic field (strength and topology).
Evaluation of the volume integral on the right-hand side of Eq. (\ref{eq:pnsMagneticEnergy}) requires numerical simulations of the hydro-magnetic evolution inside the PNS during the explosion phase of core-collapse supernovae and the subsequent PNS cooling, including neutrino radiation-magnetohydrodynamic simulations of dense nuclear matter.
Such calculations are well beyond the scope of this study.
Earlier works have suggested numerous mechanisms for field amplification in the PNS, including winding by differential rotation \citep[e.g.,][]{wheeler_etal_2002}; the magneto-rotational instability \citep{akiyama_etal_2003}; and convective dynamo action, driven by entropy gradients, lepton gradients, or both \citep[e.g.,][]{thompsonDuncan_1993,bonanno_etal_2003,bonanno_etal_2005}.
All these mechanisms operate inside or on the surface of the PNS and rely on rotation.
We exclude the PNS from our simulations and do not address field amplification mechanisms in its interior.
Our simulations, however, focus on field amplification by the SASI exterior to the PNS, which is often ignored in models addressing the origin of pulsar magnetism.
From our simulations we compute the increase in magnetic energy in the volume occupied by the PNS due to the Poynting flux through the surface bounding it,
\begin{equation}
E_{\mbox{\tiny mag},\vect{P}}(t)
=-\int_{t_{0}}^{t}\,dt'\int_{\partial V_{\mbox{\tiny PNS}}}\vect{P}\cdot d\vect{S}.
\label{eq:pnsMagneticEnergyPoyntingFlux}
\end{equation}
We then estimate the PNS magnetic field due to SASI activity $\langle B_{\mbox{\tiny PNS},\vect{P}}\rangle=(2\mu_{0}E_{\mbox{\tiny mag},\vect{P}}/V_{\mbox{\tiny PNS}})^{1/2}$ \citepalias[cf. Eq. (18) in][]{endeve_etal_2010}. Results from these estimates for the rotating and non-rotating models with varying initial magnetic field strengths are listed in Table \ref{tab:pnsMagnetization}.
\begin{table}
\begin{center}
\caption{PNS Magnetic field estimates. \label{tab:pnsMagnetization}}
\begin{tabular}{cccc}
Model & $t_{\mbox{\tiny end}}$ (ms) & $E_{\mbox{\tiny mag},\vect{P}}$ (erg) & $\langle B_{\mbox{\tiny PNS},\vect{P}}\rangle$ (G) \\
\tableline
\tableline
\model{10}{0.0}{00} & 1100 & $1.14\times10^{44}$ & $3.3\times10^{12}$ \\
\model{10}{1.5}{15} & 878 & $6.70\times10^{43}$ & $2.5\times10^{12}$ \\
\model{10}{4.0}{15} & 678 & $2.52\times10^{43}$ & $1.5\times10^{12}$ \\
\tableline
\model{12}{0.0}{00} & 1126 & $3.16\times10^{47}$ & $1.7\times10^{14}$ \\
\model{12}{1.5}{15} & 1000 & $1.15\times10^{48}$ & $3.3\times10^{14}$ \\
\model{12}{4.0}{15} & 644 & $1.74\times10^{47}$ & $1.3\times10^{14}$ \\
\tableline
\model{13}{0.0}{00} & 1100 & $4.49\times10^{48}$ & $6.5\times10^{14}$ \\
\tableline
\tableline
\end{tabular}
\tablecomments{Magnetic energy accumulated on the proto-neutron star in computed models. The inferred magnetic field strength resulting from SASI-induced magnetic field amplification is also listed.}
\end{center}
\end{table}
Our results show that the magnetic energy generated by SASI activity may result in significant magnetization of the PNS.
The magnetic energies generated in some of the models meet the energy requirements to power the total flare energy released per SGR \emph{and} the persistent X-ray emission \citep{thompsonDuncan_2001}.
The models with the weakest initial magnetic field predict field strengths in the range of ordinary pulsars (a few $\times10^{12}$~G), while the models with stronger initial magnetic fields predict fields in the magnetar range (exceeding $10^{14}$~G).
The magnetic field in the strong-field model ($\model{13}{0.0}{00}$) saturates dynamically, and this model may represent an upper limit to the fields attainable from this process.
On the other hand, the weak-field models do not reach saturation.
The magnetic energy in these models continues to grow at an underestimated rate, and the maximum attainable field strength/energy is also limited by finite grid resolution.
The PNS field strengths predicted by these models are therefore artificially low.
It then seems likely, given infinite grid resolution, that PNS magnetic fields can exceed $10^{14}$~G due to the SASI alone, independent of the initial magnetic field strength.
Moreover, since finite grid resolution severely limits the exponential growth rate of magnetic energy in our simulations, growth under realistic conditions should be considerably faster, and the duration of SASI operation may be less critical than our simulations suggest.
The amount of initial rotation in the models does not seem to affect the degree of PNS magnetization.
The field strengths listed in Table \ref{tab:pnsMagnetization} should also be corrected for additional magnetic field amplification as the PNS cools and contracts.
From conservation of magnetic flux through the PNS surface, contraction from a 40~km radius to a radius of about 15~km boosts the surface field by a factor of $\sim7$.
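(This is simply flux conservation, $B\propto R^{-2}$: $\left(40~\mbox{km}/15~\mbox{km}\right)^{2}\approx7.1$.)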
We point out that the PNS magnetic fields resulting from the turbulent flows driven by the SASI are likely small-scale and disordered.
A connection to the dipolar magnetic field structure inferred for neutron stars is currently missing, and, of course, the SASI alone cannot fully explain the origin of pulsar magnetism.
However, pulsar magnetic fields are thought to consist of a global dipole field superimposed with higher order multipole (small-scale) components, and pulsar magnetism is most likely a result of the combined action of multiple amplification mechanisms.
While the inferences we can make are limited by resolution (which affects the magnetic growth rate) and the absence of important physics (which determines the time to explosion), our simulations suggest that the SASI could in principle make a nontrivial contribution.
\section{SUMMARY, DISCUSSION, AND CONCLUSIONS}
\label{sec:discussionConclusions}
We present results from three-dimensional MHD simulations of the SASI.
The simulations are initiated from a configuration that resembles the early stalled shock phase in a core-collapse supernova, albeit with simplified physics that excludes critical components of a supernova model (e.g., neutrino transport, self-gravity, and the PNS itself).
On the other hand, our simulations are computed with a spatial resolution that is currently inaccessible to state-of-the-art supernova models in three spatial dimensions, and they may therefore provide valuable insight into MHD developments in core-collapse supernovae.
In particular we study the evolution and amplification of magnetic fields in SASI-driven flows in order to assess the effects of the amplified magnetic fields on supernova dynamics, and the possible role of the SASI in magnetizing the PNS.
This paper is a continuation and extension of the study initiated in \citetalias{endeve_etal_2010}.
The simulations reported here were performed with higher spatial resolution (up to $1280^{3}$ grid cells) and cover a broader parameter range than the 3D simulations presented in \citetalias{endeve_etal_2010}: we have varied the strength of the initial magnetic field and the degree of rotation in the flow ahead of the shock (including no rotation).
We have also varied the spatial resolution in some of the simulations, and extended the analysis from \citetalias{endeve_etal_2010} to include a Fourier decomposition of the kinetic energy and magnetic energy in the simulations.
Our main findings are
\begin{itemize}
\item[1.] The SASI-driven turbulence that develops is essentially non-helical, and shares similarities with convectively driven MHD turbulence \citep[e.g.,][]{brandenburg_etal_1996}.
(See also ``box turbulence'' simulations by \cite{haugen_etal_2004}.)
When corrected for density stratification, the kinetic energy spectra associated with the post-shock flow develop Kolmogorov-like $-5/3$ scaling (i.e., $\edensk{kin}^{\,\mbox{\tiny II}},\edensk{kin}^{\,\mbox{\tiny III}}\propto k^{-5/3}$; Section \ref{sec:spectralAnalysis}) in a narrow wavenumber range.
Moreover, inspection of the time evolution of the kinetic energy spectra reveals that the power in low-order SASI modes (i.e., large scale flows) cascades to higher-order modes (i.e., smaller-scale flows), and that a significant fraction (up to $\sim50\%$) of the post-shock kinetic energy can be associated with turbulence (although a smaller fraction is involved in magnetic field amplification).
This further suggests that the non-linear SASI saturates due to the development of turbulence via secondary instabilities \citep[e.g.,][]{guilet_etal_2010}.
\item[2.] The magnetic energy grows exponentially with time in turbulent flows driven by the SASI, as long as the kinematic regime obtains.
Our simulations develop flows characteristic of the SASI spiral mode \citep[e.g.,][]{blondinMezzacappa_2007}.
These flows drive vigorous turbulence below the shock ($u_{\mbox{\tiny rms}}^{\mbox{\tiny tur}}\sim4000$~km~s$^{-1}$), which amplifies magnetic fields by stretching.
The resulting magnetic field is highly intermittent and consists of thin, intense magnetic flux ropes.
\item[3.] Simulations initiated with weak or moderate rotation evolve similarly to non-rotating models as far as the magnetic field amplification mechanism is concerned.
However, models with initial rotation develop the nonlinear spiral SASI flows earlier, and exponential magnetic energy growth sets in sooner.
The earlier onset of the SASI in models with initial rotation is consistent with the results of \citet{blondinMezzacappa_2007} and \citet{yamasakiFoglizzo_2008}.
\item[4.] The magnetic energy grows at the expense of the kinetic energy available in the turbulent flows driven by the SASI.
Our simulations show that strong magnetic fields emerge on small (turbulent) spatial scales, and reduce the turbulent kinetic energy on those scales.
For our reference spatial resolution, magnetic fields impact flows on scales with wavenumber $k>k_{\mbox{\tiny dyn}}\approx0.1-0.2$~km$^{-1}$ ($\lambda_{\mbox{\tiny dyn}}=2\pi/k_{\mbox{\tiny dyn}}\lesssim30-60$~km) and peak around $k=0.6$~km$^{-1}$ ($\sim10$~km) (Figure \ref{fig:spectralKineticEnergyDensityNonRotating}).
That is, magnetic fields do not affect the portion of the kinetic energy spectrum with $k\lesssim k_{\mbox{\tiny dyn}}$.
The turbulent kinetic energy (that is, the kinetic energy on spatial scales below some specified cutoff) in models with larger magnetic fields is reduced compared to models initiated with weaker magnetic fields, indicating a dynamical impact of the amplified magnetic field.
\item[5.] The magnetic field evolution in our simulations remains very sensitive to the spatial resolution.
Key parameters extracted from simulations performed with increasing spatial resolution do not converge in the range covered in this study.
Both the final magnetic energy attained and the rate at which the magnetic energy grows increase with increasing grid resolution.
In particular, estimates using data extracted from our simulations suggest that the magnetic energy may grow exponentially on a millisecond timescale under physically realistic conditions, with very large magnetic Reynolds numbers, as opposed to the $\sim50$-$60$~ms timescale measured directly in our runs.
\item[6.] The magnetic energy saturates when the magnetic energy density becomes comparable to the kinetic energy density (i.e., $|\vect{B}|/\sqrt{\mu_{0}\rho}\gtrsim|\vect{u}|$) in localized regions of the flow.
Only our ``strong-field'' model (with the largest initial magnetic field) reaches this saturated state.
The subsequent magnetic field evolution remains highly dynamic: strong fields are advected through the flow, are temporarily weakened, and then reemerge in a seemingly stochastic manner.
\item[7.] The magnetic fields amplified by the SASI are not likely to play an important role in the explosion dynamics (but see further discussion below).
The presence of amplified magnetic fields does not result in noticeable effects on the global shock dynamics in our simulations, and this can be understood as a matter of simple energetics.
Magnetic energy grows at the expense of kinetic energy, and the kinetic energy content in the post-shock flow during vigorous SASI activity ($\sim5\times10^{-2}$~B) is not enough for magnetic fields to become energetically significant to the explosion ($\sim1$~B).
This was also pointed out in \citetalias{endeve_etal_2010}.
Moreover, the turbulent kinetic energy, which powers SASI-driven field amplification, amounts to only about $10\%$ of the total kinetic energy below the shock.
We further point out that our estimate for turbulent kinetic energy is \emph{not} critically sensitive to the numerical resolution (Section \ref{sec:spectralAnalysis} and Figure \ref{fig:turbulentKineticEnergy}).
A rapidly rotating (millisecond period) PNS would provide an energy reservoir large enough to power magnetically-driven explosions \citep[e.g.,][]{burrows_etal_2007}, but it is not likely that rotation would be this strong in most supernova progenitors \citep[e.g.,][]{heger_etal_2005}.
These observations suggest a rather passive role of magnetic fields in the overall dynamics of at least most supernovae.
\item[8.] Our simulations suggest that SASI-induced magnetic field amplification may play an important role in determining the strength of the magnetic field in proto-neutron stars and young pulsars.
Upon integrating the Poynting flux through the surface encompassing the PNS, we estimate that the magnetic energy accumulated on the PNS may account for magnetic field strengths exceeding $10^{14}$~G.
This is stronger than the canonical dipole field inferred for typical pulsars, and in this connection two points must be emphasized.
First, SASI-driven amplification is expected to cease when the explosion takes off, so that different delay times to explosion (which may for example be a function of progenitor mass) may result in different degrees of PNS magnetization.
Second, the SASI-amplified portion of the field accumulated by the PNS will at least initially be disordered and not of the large-scale, dipolar character of the fields inferred from pulsar spindown.
\end{itemize}
Despite the pessimism of point 7 above regarding the relevance of SASI-amplified magnetic fields to the explosion mechanism, we caution that the sensitivity of magnetic field amplification and evolution to numerical resolution prevents us from completely dismissing magnetic fields as unimportant to supernova dynamics in weakly rotating progenitors.
Certainly, our simulations cannot accurately describe the dynamical interaction between the magnetic field and the fluid on small scales.
An initially weak magnetic field is amplified exponentially in turbulent flows when the flux tubes are stretched and their cross sectional area decreases.
In a realistic post-shock supernova environment, where $R_{\mbox{\tiny m}}$ is extremely large, field amplification is likely quenched by dynamic back-reaction on the fluid before the flux tube thickness reaches the resistive scale \citep{thompsonDuncan_1993}.
The resistive decay time then remains much longer than the dynamical timescale of hydro-magnetic interactions.
But in numerical simulations the flux tube cross section inevitably approaches the grid scale, and numerical dissipation sets in and prevents further strengthening of the magnetic field.
This occurs in all our simulations.
(The strong-field model ($\model{13}{0.0}{00}$) develops dynamically relevant magnetic fields, but is also strongly affected by numerical dissipation.)
Our simulations suggest that magnetic fields become dynamically relevant on spatial scales smaller than $\lambda_{\mbox{\tiny dyn}}\sim30$~km (Figure \ref{fig:spectralKineticEnergyDensityNonRotating}).
The global shock dynamics remains unaffected by the presence of magnetic fields (e.g., Figure \ref{fig:overviewNonRotating}).
However, we cannot rule out the possibility that flows on scales larger than $\lambda_{\mbox{\tiny dyn}}$ could ultimately be affected by hydro-magnetic interactions emerging from small-scale turbulent flows.
Simulations of non-helical MHD turbulence \citep[e.g.,][]{haugen_etal_2004} show that during the kinematic regime the magnetic energy grows exponentially, on all spatial scales, on the turnover timescale of the turbulence.
(We also observe exponential growth on all scales in our runs during this regime.)
The kinematic regime ends when the magnetic energy becomes comparable to the kinetic energy.
This occurs on a scale by scale basis.
Magnetic energy growth slows down considerably after this equipartition, which occurs first on the smallest spatial scales, and the magnetic energy spectrum settles somewhat above the kinetic energy spectrum.
(We also observe that magnetic energy growth is quenched when $\edens{mag}\sim\edens{kin}$, but the magnetic energy spectrum stays below the kinetic energy spectrum for all $k$ in our simulations.)
At later times in MHD turbulence simulations, the largest spatial scale at which $\edensk{mag}\gtrsim\edensk{kin}$ (i.e., $\lambda_{\mbox{\tiny dyn}}$) increases, and may approach the driving scale of the turbulent forcing.
For helical MHD turbulence, which may be more relevant when a rapidly rotating PNS is included in the model, $\lambda_{\mbox{\tiny dyn}}$ can even grow beyond the forcing scale \citep[e.g.,][]{meneguzzi_etal_1981,brandenburg_2001}.
However, the timescale for this process is relatively slow, and increases with $R_{\mbox{\tiny m}}$ \citep{brandenburg_2001}.
Nevertheless, it would be desirable to determine the largest scale at which the magnetic energy equilibrates with the kinetic energy in SASI-driven flows.
The lack of sufficient spectral resolution in our simulations prevents us from determining whether magnetic fields can become strong on large enough scales to alter post-shock flows in a significant way.
The SASI may play an important role in improving the conditions for successful neutrino-driven explosions \citep[e.g.,][]{bruenn_etal_2006,buras_etal_2006,mezzacappa_etal_2007,marekJanka_2009,suwa_etal_2010,muller_etal_2012}.
If amplified magnetic fields can alter the evolution of the SASI and change the conditions (making them more, or less, favorable) for energy deposition by neutrinos, then magnetic fields may play a secondary but relevant role in the dynamics of a broader range of core-collapse supernovae (i.e., not just those arising from rapidly rotating progenitor stars).
This point was also argued by \citet{obergaulingerJanka_2011}, who studied magnetic field amplification in non-rotating collapsed stellar cores using axisymmetric simulations that included the PNS and neutrino transport.
They found that the SASI and convection contribute to magnetic field amplification, and observed the most pronounced shock expansion in the model where the magnetic field was strong enough to alter the post-shock flow topology.
(This model was initiated with a strong pre-collapse magnetic field.)
However, axial symmetry severely constrains magnetic field evolution driven by the SASI \citepalias[see][]{endeve_etal_2010} and, most likely, also convectively driven field amplification.
Simulations similar to those of \citet{obergaulingerJanka_2011} in full 3D, where the SASI spiral mode can develop and drive turbulent field amplification, are therefore highly desired.
Such simulations will improve on our simulations in (at least) two important ways:
\begin{itemize}
\item[1.] A significant amount of magnetic energy (comparable to that in $V_{\mbox{\tiny{Sh}}}$) is lost through the boundary at $r=R_{\mbox{\tiny PNS}}$ in our models, and not accounted for in the subsequent dynamics.
3D simulations with the PNS included do not suffer from this artificial limitation, and will allow us to better assess the role of SASI-induced magnetic fields.
\item[2.] Simulations that include neutrino transport develop neutrino-driven convection, both in the PNS and in the shocked mantle. This convective activity will impact the evolution of magnetic fields, and possibly also the SASI. We will then be able to study magnetic field evolution in a much more physically realistic supernova environment.
Moreover, with neutrino transport included, we will be able to directly address the role of magnetic fields on neutrino-powered explosions.
\end{itemize}
The constraint on numerical resolution in order to properly describe turbulent flows may still be computationally prohibitive, especially when additional (necessary) physics components are added to the models.
This may be partially circumvented with the use of adaptive mesh refinement techniques and improved numerical algorithms.
Local (or semi-global) simulations \citep[e.g.,][]{obergaulinger_etal_2009}, adopting physical conditions and forcing functions relevant to the supernova environment (i.e., derived from global multi-physics simulations), may also be necessary to study turbulent magnetic field evolution and its impact on supernova dynamics in more detail.
More investigations, using both local and global simulations, are needed to better understand the role of magnetic fields in core-collapse supernovae.
In summary, we conclude from our simulations that magnetic fields in core-collapse supernovae may be amplified exponentially by turbulence on a millisecond timescale; i.e., much shorter than the time between core bounce/shock formation and initiation of the explosion.
Details of the impact on explosion dynamics by SASI-amplified magnetic fields remain unclear, but on energetic grounds alone the role of these magnetic fields is likely sub-dominant.
The simulations further suggest that small-scale neutron star magnetic fields in the $10^{14}-10^{15}$~G range may be formed, which may be sufficient to power some of the energetic activity that define AXPs and SGRs.
\acknowledgments
This research was supported by the Office of Advanced Scientific Computing Research and the Office of Nuclear Physics, U.S. Department of Energy.
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory provided through the INCITE program.
We are grateful for support from members of the National Center for Computational Sciences during the execution and analysis of the simulations, especially Bronson Messer.
We also thank an anonymous referee for comments that helped us improve the manuscript.
\section{Introduction}
Dadarlat \cite{D0} computed the homotopy set $[X,\Aut A]$ for a Kirchberg algebra $A$ under a mild assumption on the space $X$.
He constructed a bijection between $[X,\Aut A]$ and a relevant KK-group, and showed that it is a group homomorphism
when $X$ is an H'-space (co-H-space).
However, the group structure of $[X,\Aut A]$ for more general $X$ is still unknown.
The Cuntz algebra $\mathcal{O}_{n+1}$ is a typical example of a Kirchberg algebra, and it plays an important role in operator algebraic
realization of the mod $n$ K-theory \cite{RS}.
Dadarlat's computation shows that $[X,\Aut \cO_{n+1}]$ as a set is identified with the mod $n$ K-group $K^1(X;\Z_n)$.
One of the main purposes of this paper is to determine the group structure of $[X,\Aut \cO_{n+1}]$,
and we show that it is indeed different from the ordinary group structure of $K^1(X;\Z_n)$ in general.
In particular, we verify that the group $[X,\Aut \cO_{n+1}]$ is non-commutative when $X$ is the product of
the Moore space $M_n$ and its reduced suspension $\Sigma M_n$.
Our computation uses the Cuntz-Toeplitz algebra $E_{n+1}$ in an essential way, for which
the homotopy groups of the automorphism group are computed in \cite{ST}.
The unitary group $U(n+1)$ acts on $\cO_{n+1}$ through the unitary transformations of the linear span of
the canonical generators, and it induces a map from $[X,BU(n+1)]$ to $[X,B\Aut \cO_{n+1}]$.
When $X$ is a finite CW-complex with dimension $d$, Dadarlat \cite[Theorem 1.6]{D1} showed that the map is a bijection provided
that $n\geq \lceil(d-3)/2\rceil$ and $H^*(X)$ has no $n$-torsion.
Another purpose of this paper is to remove the first condition by a localization trick.
We use the following notation throughout the paper.
For a unital C*-algebra $A$, we denote by $U(A)$ the unitary group of $A$, and by $U(A)_0$ the path component of $1_A$ in $U(A)$.
For a non-unital C*-algebra $B$, we denote its unitization by $B^{\sim}$.
We denote by $\mathbb{B}(H)$ the algebra of bounded operators on a Hilbert space $H$,
by $\mathbb{K}$ the algebra of compact operators on a separable Hilbert space,
and by $\M_n$ the algebra of $n$ by $n$ matrices.
Our standard references for K-theory are \cite{Bl, K}.
For a projection $p\in A$ (resp. a unitary $u\in U(A)$), we denote by $[p]_0$ (resp. $[u]_1$) its class in the K-group $K_0(A)$ (resp. $K_1(A)$).
For a compact Hausdorff space $X$, we identify the topological K-groups $K^i(X)$ with $K_i(C(X))$ where $C(X)$ is
the C*-algebra of the continuous functions on $X$.
When moreover $X$ is path connected, we choose a base point $x_0$, and set $\tilde{K}^i(X)$ to be the kernel of the evaluation map
$({\rm ev}_{x_0})_* \colon K^i(X)\to K^i(\{x_0\})=\mathbb{Z}$, which is identified with $K_i(C_0(X, x_0))$
where $C_0(X, x_0)$ is the C*-algebra of the continuous functions on $X$ vanishing at $x_0$.
We denote by $\Sigma X$ the reduced suspension of $X$.
For two topological spaces $X$ and $Y$, we denote by $\operatorname{Map}(X,Y)$ the set of continuous maps from $X$ to $Y$,
and by $[X,Y]$, the quotient of $\operatorname{Map}(X,Y)$ by homotopy equivalence.
\textbf{Acknowledgement. }
The authors would like to thank Marius Dadarlat and Ulrich Pennig for stimulating discussions.
Masaki Izumi would like to thank Isaac Newton Institute for Mathematical Sciences for its hospitality.
\section{Mod $n$ K-theory}
In this section, we summarize the basics of mod n K-theory from the view point of operator algebras.
Recall that the Cuntz algebra $\cO_{n+1}$ is the universal C*-algebra generated by $n+1$ isometries $\{S_i\}_{i=0}^n$ with mutually orthogonal ranges
whose summation is 1.
Its K-groups are
$$K_0(\mathcal{O}_{n+1})=\mathbb{Z}_n, \; K_1(\mathcal{O}_{n+1})=0,$$
(see \cite[Theorem 3.7, 3.8, Corollary 3.11]{C}).
The Cuntz-Toeplitz algebra $E_{n+1}$ is the universal C*-algebra generated by $n+1$ isometries $\{T_i\}_{i=0}^{n}$
with mutually orthogonal ranges, and it is KK-equivalent to the complex numbers $\C$.
The closed two-sided ideal generated by the minimal projection $e\colon=1-\sum_{i=0}^{n}T_iT_i^*$ is isomorphic to $\mathbb{K}$, which is known to be the only closed non-trivial two-sided ideal.
Then the quotient algebra $E_{n+1}/\K$ is isomorphic to $\cO_{n+1}$ under the identification $S_i=\pi(T_i)$, where $\pi$ is the quotient map.
For a natural number $n$, we denote by $M_n$ the Moore space, the mapping cone of the map $n : S^1\ni z\mapsto z^n\in S^1$ :
$$M_n \colon=([0,1]\times S^1) \sqcup S^1/\sim,$$ where $(0, z)\sim (0,1)$ and $(1, z)\sim z^n$ for every $z\in S^1$.
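For example, $M_2$ is homeomorphic to the real projective plane $\mathbb{R}P^2$.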
For cohomology and K-groups, we have
$$H^0(M_n)=\mathbb{Z},\;H^1(M_n)=0, \;H^2(M_n)=\mathbb{Z}_n,\;H^k(M_n)=0 \textrm{ for } k\geq 3,$$
$$\tilde{K}^0(M_n)=\mathbb{Z}_n, \; K^1(M_n)=0,$$
(see \cite[Theorem 9.10]{H} for example).
Since $C_0(M_n, pt)$ and $\mathcal{O}_{n+1}$ have the same K-theory and they are in the bootstrap class,
they are KK-equivalent (see \cite[Section 22.3]{Bl}).
The mod $n$ K-group of the pointed space $(X, x_0)$ is originally defined by
$$\tilde{K}^i(X ; \mathbb{Z}_n)\colon=\tilde{K}^i(X\wedge M_n), \; i=0, 1.$$
We refer to \cite{A1, A2} for the mod $n$ K-theory, and refer to \cite[Section 8]{RS} for an operator algebraic aspect of it.
The Bott periodicity of the K-theory induces the Bott periodicity of the mod $n$ K-theory.
By the KK-equivalence of $C_0(M_n, pt)$ and $\mathcal{O}_{n+1}$, the identification
$$
\tilde{K}^i(X\wedge M_n)=K_i(C_0(X, x_0)\otimes C_0(M_n, pt))\cong K_i(C_0(X, x_0)\otimes \mathcal{O}_{n+1})
$$
is natural in the variable $X$ (see \cite[Theorem 6.4]{S}).
We can identify the Bockstein exact sequence with
the 6-term exact sequence
$$\xymatrix{
\tilde{K}^0(X)\ar[r]^{-n}&\tilde{K}^0(X)\ar[r]^{\rho}&\tilde{K}^0(X ; \mathbb{Z}_n)\ar[d]^{\beta}\\
K^1(X ; \mathbb{Z}_n)\ar[u]^{\beta}&K^1(X)\ar[l]^{\rho}&K^1(X).\ar[l]^{-n}
}$$
arising from the exact sequence
$$0\to C_0(X,x_0)\otimes \mathbb{K}\to C_0(X,x_0)\otimes E_{n+1}\to C_0(X,x_0)\otimes \mathcal{O}_{n+1}\to 0.$$
The map $\beta$ is called Bockstein map, and $\rho$ is called the reduction map.
We frequently identify $\beta$ with the index map or the exponential map in the $6$-term exact sequence.
\begin{lem}\label{wer}
We have the following isomorphisms from the Bockstein exact sequence $\colon$
\begin{align*}
\rho \colon \tilde{K}^0(M_n)\to \tilde{K}^0(M_n ; \mathbb{Z}_n),\quad \beta \colon \tilde{K}^1(M_n ; \mathbb{Z}_n)\to \tilde{K}^0(M_n).
\end{align*}
\end{lem}
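\begin{proof}
Both assertions follow from the exactness of the Bockstein sequence: since $K^1(M_n)=0$ and multiplication by $n$ annihilates $\tilde{K}^0(M_n)\cong\mathbb{Z}_n$, the map $\rho$ is injective with trivial cokernel, and $\beta$ is injective with image $\ker (n\colon \tilde{K}^0(M_n)\to \tilde{K}^0(M_n))=\tilde{K}^0(M_n)$.
\end{proof}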
The K-theory has a multiplication $\mu$ defined by the external tensor product of vector bundles:
$$\xymatrix{
\mu : K^0(X)\otimes K^0(Y)\ar[r]&K^0(X\times Y)\\
\tilde{K}^0(X)\otimes \tilde{K}^0(Y)\ar[u]\ar[r]&\tilde{K}^0(X\wedge Y)\ar[u]
}
$$
We denote the diagonal map by $\Delta_X : X\to X\times X$. This gives the ring structure of $K^0(X)$ :
$$x\cdot y :=\Delta_X^*\mu(x\otimes y), \;x, y\in K^0(X).$$
This induces the ring structure of $\tilde{K}^0(X)$ by $\Delta_X : X\to X\wedge X$ :
$$x\cdot y :=\Delta_X^*\mu(x\otimes y), \; x, y \in \tilde{K}^0(X).$$
From \cite[Chap.II, Theorem 5.9]{K}, the reduced K-group $\tilde{K}^0(X)$ is the set of nilpotent elements of $K^0(X)$,
and in particular $\tilde{K}^0(\Sigma X)\cdot \tilde{K}^0(\Sigma X)=\{0\}$.
The multiplication $\mu$ extends to $\tilde{K}^i(X), \; i=0, 1$ by
$$\mu : \tilde{K}^0(S^i\wedge X)\otimes \tilde{K}^0(S^j\wedge Y)\to \tilde{K}^0(S^{i+j}\wedge X\wedge Y),$$
with the property
$$T_{X, Y}^*\mu(y\otimes x)=(-1)^{ij}\mu(x\otimes y), \; x\in \tilde{K}^i(X), \; y\in \tilde{K}^j(Y)$$
where the map $T_{X,Y} : X\wedge Y\to Y\wedge X$ is the exchange of the coordinates (see \cite[Chap. II section 5.30]{K}).
In a similar way, the multiplication $\mu$ defines the following :
$$\mu_L : \tilde{K}^i(X)\otimes \tilde{K}^j(Y ; \mathbb{Z}_n)\to \tilde{K}^{i+j}(X\wedge Y ; \mathbb{Z}_n),$$
$$\mu_R : \tilde{K}^i(X ; \mathbb{Z}_n)\otimes \tilde{K}^j(Y)\to \tilde{K}^{i+j}(X\wedge Y ; \mathbb{Z}_n),$$
with the same property (see \cite[Section 3]{A1}):
$$T_{X,Y}^*\mu_R(y\otimes x)=(-1)^{ij}\mu_L(x\otimes y), \; x\in \tilde{K}^i(X),\; y\in \tilde{K}^j(Y).$$
The multiplications $\mu$, $\mu_L$ and $\mu_R$ are compatible with the reduction $\rho$ and the map $\delta$ :
$$\mu_R(\rho \otimes{\rm id})=\rho\mu ,\; \beta(\mu_R({\rm id}\otimes {\rm id}))=\mu(\beta\otimes {\rm id}),$$
$$\mu_L({\rm id} \otimes \rho)=\rho\mu ,\; \beta(\mu_L({\rm id}\otimes {\rm id}))=\mu({\rm id}\otimes \beta).$$
Since the identification $\tilde{K}^i(X;\mathbb{Z}_n)\cong K_i(C_0(X, x_0)\otimes\mathcal{O}_{n+1})$ is natural, it is compatible with the Kasparov product, and the multiplications $\mu_L$ and $\mu_R$ extend to
\begin{align*}
\mu_L\colon &K_i(C(X))\otimes K_j(C(Y)\otimes\mathcal{O}_{n+1})\to K_{i+j}(C(X\times Y)\otimes\mathcal{O}_{n+1})\\
\mu_R\colon &K_i(C(X)\otimes \mathcal{O}_{n+1})\otimes K_j(C(Y))\to K_{i+j}(C(X\times Y)\otimes\mathcal{O}_{n+1}).
\end{align*}
In particular, for $u\in U(C(X)\otimes\mathcal{O}_{n+1})$ and a projection $p\in C(X)\otimes \mathbb{M}_{m}$, we have
$$\mu_L([p]_0\otimes [u]_1)=[p\otimes u+(1_m-p)\otimes 1_{\mathcal{O}_{n+1}}]_1\in K_1(C(X\times X, \mathbb{M}_m\otimes\mathcal{O}_{n+1}))=K_1(C(X\times X, \mathcal{O}_{n+1})).$$
We also use the K\"{u}nneth theorem of the reduced K-theory.
\begin{thm}[{\cite[Theorem 23.1.3]{Bl}}]\label{Kunn}
For pointed spaces $X$ and $Y$, we have the following exact sequence
$$0\to \bigoplus_{i=0,1}\tilde{K}^i(X)\otimes\tilde{K}^{i+*}(Y)\to \tilde{K}^*(X\wedge Y)\to \bigoplus_{i=0,1}
\operatorname{Tor}(\tilde{K}^i(X), \tilde{K}^{i+1-*}(Y))\to 0,$$
that splits unnaturally.
\end{thm}
We note that the map $\tilde{K}^i(X)\otimes\tilde{K}^j(Y)\to \tilde{K}^{i+j}(X\wedge Y)$ above is given by the multiplication $\mu$.
The Puppe sequence yields the following lemma.
\begin{lem}[{\cite[Section 10, Proposition 3.4]{H}}]\label{yab}
For compact pointed spaces $X$ and $Y$, the sequence $X\vee Y\to X\times Y\to X\wedge Y$ induces
a split exact sequence
$$0\to\tilde{K}^i(X\wedge Y)\to\tilde{K}^i(X\times Y)\to \tilde{K}^i(X)\oplus \tilde{K}^i(Y)\to 0.$$
The splitting is given by the projections ${\rm Pr}_X\colon X\times Y\to X$ and ${\rm Pr}_Y\colon X\times Y\to Y$.
\end{lem}
We have the diagram below
$$\xymatrix{
K^i(X\times Y)&K^i(X)\ar[l]^{\mu(\cdot \otimes 1)}\\
\tilde{K}^i(X\times Y)\ar[u]&\tilde{K}^i(X)\ar[u]\ar[l]^{{\rm Pr}^*_X}
}$$
where $1\in K^0(\{y_0\})$.
So we identify the map ${\rm Pr}_X^*$ with the map $\mu(\cdot \otimes 1)$. We also identify ${\rm Pr}_Y^*$ with
the map $\mu(1 \otimes \cdot)$ where $1\in K^0(\{x_0\})=\mathbb{Z}$.
\section{The group structure of $[X, \operatorname{Aut}\mathcal{O}_{n+1}]$}
\subsection{Description of the group structure}
Let $(X,x_0)$ be a pointed compact metrizable space.
For every $\alpha\in\operatorname{Map}(X, \operatorname{Aut}\mathcal{O}_{n+1})$, we set
$$u_{\alpha}=\sum_{i=0}^{n}\alpha(1_{C(X)}\otimes S_i)(1_{C(X)}\otimes S^*_i)\in U(C(X)\otimes \mathcal{O}_{n+1}).$$
By \cite[Theorem 7.4]{D2}, the map
$$[X, \operatorname{Aut}\mathcal{O}_{n+1}]\ni [\alpha]\mapsto [u_{\alpha}]_1\in K_1(C(X)\otimes \mathcal{O}_{n+1})=K^1(X;\Z_n)$$
is a bijection, though it is not a group homomorphism in general as we will see below.
From the definition of $u_{\alpha}$, we have $u_{\alpha\beta}(x)=\alpha_x(u_\beta(x))u_\alpha(x)$, and
$[u_{\alpha\beta}]_1=[u_\alpha]_1+[\alpha(u_\beta)]_1$.
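Indeed, since $\alpha_x(S_i^*)\alpha_x(S_j)=\alpha_x(S_i^*S_j)=\delta_{i,j}1$, the first identity can be checked directly:
$$\alpha_x(u_\beta(x))u_\alpha(x)=\sum_{i,j=0}^{n}\alpha_x(\beta_x(S_i))\alpha_x(S_i^*)\alpha_x(S_j)S_j^*=\sum_{i=0}^{n}\alpha_x(\beta_x(S_i))S_i^*=u_{\alpha\beta}(x).$$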
Thus to determine the group structure of $[X,\Aut\cO_{n+1}]$, it suffices to determine the map
$$K_1(\alpha)\colon K_1(C(X)\otimes \cO_{n+1})\to K_1(C(X)\otimes \cO_{n+1}),$$
induced by $u(x)\mapsto \alpha_x(u(x))$.
\begin{thm}\label{Iz}
For every $\alpha \in \operatorname{Map}(X, \operatorname{Aut}\mathcal{O}_{n+1})$ and $a\in K_1(C(X)\otimes \mathcal{O}_{n+1})$, we have
$$K_1(\alpha)(a)=a-[u_{\alpha}]_1\cdot \delta(a),$$
where $\delta : K_1(C(X)\otimes \mathcal{O}_{n+1})\to \operatorname{Tor}(\tilde{K}^0(X), \mathbb{Z}_n)$ is the index map.
\end{thm}
\begin{proof}
For a given $b\in \operatorname{Tor}(K^0(X),\Z_n)$, we look for the preimage $\delta^{-1}(b)$ first.
We may assume that $b$ is of the form $[p]_0-[1_m]_0$ with a projection $p\in C(X)\otimes\mathbb{M}_{2m}$ such that
there exists a unitary $v\in C(X)\otimes\mathbb{M}_{2nm}$ satisfying $v(1_n\otimes q)v^{-1}=1_n\otimes p$, where
$q=\operatorname{Diag}(1_m,0_m)$.
Identifying $K^0(X)$ with $K_0(C(X)\otimes\K)$, we may replace $p$ and $q$ with $e\otimes p$ and $e\otimes q$, respectively,
where $e=1_{E_{n+1}}-\sum_{i=0}^nT_iT_i^*$ is a minimal projection in $\K\subset E_{n+1}$.
Furthermore, we may adjoin $1_{E_{n+1}}$ to $C(X)\otimes\K$, and write
$$b=[(1_{E_{n+1}}-e)\otimes q+e\otimes p]_0-[1_{E_{n+1}}\otimes q]_0.$$
In what follows, we simply denote $1=1_{E_{n+1}}$, and often write $1_{2m}$ for $1\otimes 1_{2m}$.
We will construct a unitary $U\in C(X)\otimes E_{n+1}\otimes \mathbb{M}_{10m}$ satisfying
$$U\operatorname{Diag}((1-e)\otimes q+e\otimes p,1_{4m},0_{4m})U^{-1}=\operatorname{Diag}(q,1_{4m},0_{4m}).$$
Expressing $v(x)=\sum_{i,j=1}^n e_{i,j}\otimes v_{i,j}(x)$, where $\{e_{i,j}\}_{1\leq i,j\leq n}$ is a system
of matrix units of $\M_n$, we let
$$\tilde{v}(x)=(e+T_0T_0^*)\otimes 1_{2m}+\sum_{i,j=1}^nT_iT_j^*\otimes v_{i,j}(x).$$
Then $\tilde{v}$ is a unitary in $C(X)\otimes E_{n+1}\otimes \mathbb{M}_{2m}$ satisfying
$$\tilde{v}((1-e)\otimes q+e\otimes p)\tilde{v}^*=T_0T_0^*\otimes q+(1-T_0T_0^*)\otimes p.$$
Thus if we put
$$U_1=
\operatorname{Diag}(\tilde{v},
\left(
\begin{array}{cccc}
0 &0 &1_{2m} &0\\
0 &1_{2m} &0 &0 \\
1_{2m} &0 &0 &0 \\
0 &0 &0 &1_{2m}
\end{array}
\right)
),$$
we get
\begin{align*}
\lefteqn{U_1\operatorname{Diag}((1-e)\otimes q+e\otimes p,1_{4m},0_{4m})U_1^{-1}} \\
&=\operatorname{Diag}(T_0T_0^*\otimes q+(1-T_0T_0^*)\otimes p,0_{2m},1_{4m},0_{2m}).
\end{align*}
Let
$$U_2=\operatorname{Diag}(\left(
\begin{array}{cc}
T_0T_0^*\otimes 1_{2m} &(1-T_0T_0^*)\otimes 1_{2m} \\
(1-T_0T_0^*)\otimes 1_{2m} & T_0T_0^*\otimes 1_{2m}
\end{array}
\right),1_{6m}
).$$
Then \begin{align*}
\lefteqn{U_2\operatorname{Diag}(T_0T_0^*\otimes q+(1-T_0T_0^*)\otimes p,0_{2m},1_{4m},0_{2m})U_2^{-1}} \\
&=\operatorname{Diag}(T_0T_0^*\otimes q,(1-T_0T_0^*)\otimes p,1_{4m},0_{2m}).
\end{align*}
Let $U_3=\operatorname{Diag}(1_{2m},V_1,V_2)$ with
$$V_1=\left(
\begin{array}{cc}
1_{2m}-T_0T_0^*\otimes p &T_0T_0^*\otimes p \\
T_0T_0^*\otimes p &1_{2m}-T_0T_0^*\otimes p
\end{array}
\right),$$
$$V_2=\left(
\begin{array}{cc}
T_0T_0^*\otimes q &1_{2m}-T_0T_0^*\otimes q \\
1_{2m}-T_0T_0^*\otimes q &T_0T_0^*\otimes q
\end{array}
\right).$$
Then
\begin{align*}
\lefteqn{U_3\operatorname{Diag}(T_0T_0^*\otimes q,(1-T_0T_0^*)\otimes p,1_{4m},0_{2m})U_3^{-1}} \\
&=\operatorname{Diag}(T_0T_0^*\otimes q,1\otimes p,1_{2m}-T_0T_0^*\otimes p,T_0T_0^*\otimes q,1_{2m}-T_0T_0^*\otimes q).
\end{align*}
Let
$$U_4=\operatorname{Diag}(1_{2m},\left(
\begin{array}{ccc}
T_0\otimes 1_{2m} &0 &(1-T_0T_0^*)\otimes 1_{2m} \\
0 &1_{2m} &0 \\
0 &0 &T_0^*\otimes 1_{2m}
\end{array}
\right)
,1_{2m}).$$
Then
\begin{align*}
\lefteqn{U_4\operatorname{Diag}(T_0T_0^*\otimes q,1\otimes p,1_{2m}-T_0T_0^*\otimes p,T_0T_0^*\otimes q,1_{2m}-T_0T_0^*\otimes q)U_4^{-1}} \\
&=\operatorname{Diag}(T_0T_0^*\otimes q,T_0T_0^*\otimes p,1_{2m}-T_0T_0^*\otimes p,q,1_{2m}-T_0T_0^*\otimes q)
\end{align*}
Let
$$U_5=\left(
\begin{array}{ccccc}
T_0T_0^*\otimes q &0 &0 &0 &1_{2m}-T_0T_0^*\otimes q \\
0 &T_0T_0^*\otimes p &1_{2m}-T_0T_0^*\otimes p &0 &0 \\
0 &1_{2m}-T_0T_0^*\otimes p &T_0T_0^*\otimes p &0 &0 \\
0 &0 &0 &1_{2m} &0 \\
1_{2m}-T_0T_0^*\otimes q &0 &0 &0 &T_0T_0^*\otimes q
\end{array}
\right)
.$$
Then
$$U_5\operatorname{Diag}(T_0T_0^*\otimes q,T_0T_0^*\otimes p,1_{2m}-T_0T_0^*\otimes p,q,1_{2m}-T_0T_0^*\otimes q)U_5^{-1}
=\operatorname{Diag}(1_{2m},1_{2m},q,0_{4m}).$$
Let
$$U_6=\left(
\begin{array}{ccccc}
0 &0 &0 &1_{2m} &0 \\
0 &1_{2m} &0 &0 &0 \\
1_{2m} &0 &0 &0 &0 \\
0 &0 &1_{2m} &0 &0 \\
0 &0 &0 &0 &1_{2m}
\end{array}
\right).
$$
Then
$$U_6\operatorname{Diag}(1_{2m},1_{2m},q,0_{4m})U_6^{-1}=\operatorname{Diag}(q,1_{4m},0_{4m}).$$
Thus if we put $U=U_6U_5U_4U_3U_2U_1$, we get
$$U\operatorname{Diag}((1-e)\otimes q+e\otimes p,1_{4m},0_{4m})U^{-1}=\operatorname{Diag}(q,1_{4m},0_{4m}).$$
Recall that $\pi\colon E_{n+1}\to \cO_{n+1}$ is the quotient map.
Since
$$(\pi\otimes \id_{M_{10m}})(\operatorname{Diag}((1-e)\otimes q+e\otimes p,1_{4m},0_{4m}))=\operatorname{Diag}(q,1_{4m},0_{4m}),$$
the unitary $(\pi\otimes \id_{M_{10m}})(U)$ commutes with $\operatorname{Diag}(q,1_{4m},0_{4m})$.
Let
$$W=\operatorname{Diag}(q,1_{4m},0_{4m})(\pi\otimes \id_{M_{10m}})(U^{-1})\operatorname{Diag}(q,1_{4m},0_{4m}),$$
which we regard as a unitary in $C(X,\cO_{n+1}\otimes M_{5m})$.
Then by the definition of the index map, we get $\delta([W]_1)=b$.
Let
$$V(x)=\sum_{i,j=1}^nS_iS_j^*\otimes v_{i,j}(x).$$
Direct computation yields
$$W=\left(
\begin{array}{ccc}
0 &V^*(S_0^*\otimes p) &S_0S_0^*\otimes q \\
S_0\otimes q &0 &1_{2m}-S_0S_0^*\otimes q \\
0 &1_{2m}-S_0S_0^*\otimes p+S_0S_0^*S_0^*\otimes p &0
\end{array}
\right).
$$
Let $\beta=\Ad u_\alpha^*\circ \alpha$.
Then $K_1(\alpha)=K_1(\beta)$, and $\beta(S_i)=S_iu_\alpha$.
Now
\begin{align*}
\lefteqn{W^*(\beta\otimes \id_{M_{5m}})(W)} \\
&=
\left(
\begin{array}{ccc}
0 &S_0^*\otimes q & 0 \\
(S_0\otimes p)V &0 &1_{2m}-S_0S_0^*\otimes p+S_0^2S_0^*\otimes p \\
S_0S_0^*\otimes q &1_{2m}-S_0S_0^*\otimes q &0
\end{array}
\right) \\
&\times
\left(
\begin{array}{ccc}
0 &V^*(u_{\alpha}^{-1}S_0^*\otimes p) &S_0S_0^*\otimes q \\
S_0u_\alpha\otimes q &0 &1_{2m}-S_0S_0^*\otimes q \\
0 &1_{2m}-S_0S_0^*\otimes p+S_0 S_0^*u_\alpha^{-1}S_0^*\otimes p &0
\end{array}
\right)
\\
&=\left(
\begin{array}{ccc}
u_\alpha\otimes q &0 &0 \\
0&S_0u_\alpha^{-1}S_0^*\otimes p+1_{2m}-S_0S_0^*\otimes p &0 \\
0&0 &1_{2m}
\end{array}
\right)
\\
&=\operatorname{Diag}(u_\alpha\otimes q,
\left(
\begin{array}{cc}
S_0\otimes 1_{2m} &(1-S_0S_0^*)\otimes 1_{2m} \\
0 &S_0^*\otimes 1_{2m}
\end{array}
\right)
\left(
\begin{array}{cc}
u_\alpha^{-1}\otimes p+1\otimes (1_{2m}-p) &0 \\
0 &1_{2m}
\end{array}
\right)\\
&\times \left(
\begin{array}{cc}
S_0^*\otimes 1_{2m} &0 \\
(1-S_0S_0^*)\otimes 1_{2m} &S_0\otimes 1_{2m}
\end{array}
\right)
),
\end{align*}
whose $K_1$-class is
$$[u_\alpha]_1([q]_0-[p]_0)=-[u_\alpha]_1\cdot b=-[u_\alpha]_1\cdot\delta([W]_1).$$
Thus
$$K_1(\alpha)([W]_1)=[W]_1-[u_\alpha]_1\cdot\delta([W]_1).$$
Since $K_1(\alpha)(a)=a$ and $\delta(a)=0$ hold for any $a\in \rho(K^1(X))$, we get
$$K_1(\alpha)([W]_1+a)=[W]_1+a-[u_\alpha]_1\cdot\delta([W]_1+a),$$
which finishes the proof.
\end{proof}
Recall that we identify the index map $\delta$ with the Bockstein map $\beta$.
By Theorem \ref{Iz}, the group $[X, \operatorname{Aut}\mathcal{O}_{n+1}]$ is isomorphic to $(K^1(X ; \mathbb{Z}_n), \circ)$ with
$$a\circ b\colon =a+b-a\cdot \beta(b), \;a, b\in K^1(X ;\mathbb{Z}_n).$$
Note that $(K^1(X ; \mathbb{Z}_n), \circ)$ is a group extension
$$0\to K^1(X)\otimes \Z_n\to (K^1(X ; \mathbb{Z}_n), \circ)\xrightarrow{\hat{\beta}} (1+\operatorname{Tor}(\tilde{K}^0(X),\Z_n))^\times\to 0,$$
where $\hat{\beta}(a)=1-\beta(a)$.
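Indeed, $\hat{\beta}$ is multiplicative: the compatibility of $\beta$ with $\mu_R$ recalled in Section 2 gives $\beta(a\cdot\beta(b))=\beta(a)\cdot\beta(b)$, whence
$$\hat{\beta}(a\circ b)=1-\beta(a)-\beta(b)+\beta(a)\cdot\beta(b)=\hat{\beta}(a)\cdot\hat{\beta}(b),$$
and $\ker\hat{\beta}=\ker\beta=\rho(K^1(X))\cong K^1(X)\otimes\Z_n$ by the Bockstein exact sequence.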
We denote the inverse of an element $a\in (\tilde{K}^1(X ; \mathbb{Z}_n), \circ)$ by $a^{\circ (-1)}$.
\begin{lem}\label{inverse}
For any $a,b \in(\tilde{K}^1(X ; \mathbb{Z}_n), \circ)$, we have
\begin{itemize}
\item[$(1)$] $a^{\circ(-1)}=-a\cdot (1-\beta(a))^{-1}$.
\item[$(2)$] $a^{\circ(-1)}\circ b\circ a=b+a\cdot\beta(b)-\beta(a)\cdot b$.
In particular, if $b\in K^1(X)\otimes \Z_n=\ker \beta$, we have
$a^{\circ(-1)}\circ b\circ a=(1-\beta(a))b$.
\end{itemize}
\end{lem}
\begin{proof}
Direct computation yields
\begin{align*}
\left(-a\cdot (1-\beta(a))^{-1}\right)\circ a=&-a\cdot (1-\beta(a))^{-1}+a+a\cdot (1-\beta(a))^{-1}\cdot \beta(a)\\
=&a+a\cdot(1-\beta(a))^{-1}\cdot (\beta(a)-1)\\
=&0,
\end{align*}
showing the first equation.
The second one follows from the first one.
\end{proof}
Now we discuss the relationship between the two groups $[X,\Aut E_{n+1}]$ and $[X,\Aut \cO_{n+1}]$.
Let $H_1$ be the set of vectors of norm $1$ in a separable infinite dimensional Hilbert space $H$.
Then $H_1$ is contractible.
Indeed, we can identify $H_1$ with the set $\{f\in L^2[0,1]\mid ||f||_{2}=1\}$,
and define a homotopy $h_t : H_1 \to H_1$ sending $f$ to $(1_{[0, t]}f+1_{[t, 1]})/||1_{[0, t]}f+1_{[t, 1]}||_2$
where $1_{[a,b]}$ is the characteristic function of $[a,b]$.
This gives a deformation retraction of $H_1$ to the set $\{1_{[0, 1]}\}$, and the space $H_1$ is contractible (see \cite{RW}).
Since the group $S^1=\{z\in \C;\;|z|=1\}$ freely acts on $H_1$ by multiplication, we can adopt $H_1$ as a model of the universal principal
$S^1$-bundle $\operatorname{E}S^1$ and identify the classifying space $\operatorname{B}S^1$ of $S^1$ with the set of all
minimal projections of $\mathbb{K}$.
The space $\operatorname{B}S^1$ is the Eilenberg--MacLane space $K(\mathbb{Z}, 2)$, and we identify the homotopy set $[X, \operatorname{B}S^1]$
with $H^2(X)$ via the Chern classes of the line bundles.
Let $\eta$ be the map $\operatorname{Aut}E_{n+1}\ni \alpha\mapsto\alpha(e)\in \operatorname{B}S^1$.
We denote by $\eta_*$ the induced map $\eta_*:[X, \operatorname{Aut}E_{n+1}]\to H^2(X)$,
which is a group homomorphism with image in $\operatorname{Tor} (H^2(X),\Z_n)$ (see \cite[Theorem 3.15]{ST}).
We will show that the two maps $\eta_*$ and $\hat{\beta}$ are compatible.
Let $\mathcal{P}(\mathbb{K})$ be the set of all projections of $\mathbb{K}$.
We remark that the map $[X, \mathcal{P}(\mathbb{K})]\ni [p]\mapsto [p]_0\in K^0(X)$ is well-defined by the definition of the $K_0$-group.
Since $\cO_{n+1}$ is the quotient of $E_{n+1}$ by its unique non-trivial closed two-sided ideal, every element in $\Aut E_{n+1}$
induces an element in $\Aut \cO_{n+1}$, which gives a group homomorphism from $\Aut E_{n+1}$ to $\Aut \cO_{n+1}$.
We denote by $q$ the group homomorphism from $[X, \operatorname{Aut}E_{n+1}]$ to $[X,\Aut \cO_{n+1}]$ induced by
this homomorphism.
\begin{prop}\label{commutes}
Let $q$ be as above, and let $l\colon H^2(X)\to K^0(X)$ be a map induced by the map
$\operatorname{B}S^1\to \mathcal{P}(\mathbb{K})$
where we identify $\operatorname{B}S^1$ with the set of all minimal projections.
Then we have the following commutative diagram
$$
\begin{CD} [X, \operatorname{Aut}E_{n+1}] @>\eta_*>> \operatorname{Tor}(H^2(X),\Z_n) \\
@VV q V @VV l V\\
[X,\Aut \cO_{n+1}]@ > \hat{\beta} >> (1+\operatorname{Tor}(\tilde{K}^0(X),\Z_n))^\times\\
\end{CD}
$$
\end{prop}
\begin{proof}
For $\alpha\in \operatorname{Map}(X,\Aut E_{n+1})$, we denote by $\tilde{\alpha}$ the map in
$\operatorname{Map}(X,\Aut \cO_{n+1})$ induced by $\alpha$.
Then with the identification of $[X,\Aut \cO_{n+1}]$ and $K^1(X;\Z_n)$, the map $q$ sends $[\alpha]$
to $[u_{\tilde{\alpha}}]_1$.
One has $l\circ\eta_*([\alpha])=[\alpha(1_{C(X)}\otimes e)]_0\in K_0(C(X))$ for every $\alpha\in\operatorname{Map}(X, \operatorname{Aut}E_{n+1})$ by definition.
Since $\beta$ is given by the index map $\delta \colon K_1(C(X)\otimes \mathcal{O}_{n+1})\to K_0(C(X))$,
we compute the index ${\rm ind}\, [u_{\tilde{\alpha}}]_1$.
We have a unitary lift $V\in U(\mathbb{M}_2(C(X)\otimes \mathcal{O}_{n+1}))$ of the unitary $u_{\tilde{\alpha}}\oplus u^*_{\tilde{\alpha}}$:
\begin{align*}
V=\left(
\begin{array}{cc}
\sum_{i=1}^{n+1}\alpha(1\otimes T_i)T_i^*&\alpha(1\otimes e)\\
1\otimes e&\sum_{i=1}^{n+1}1\otimes T_i\alpha(1\otimes T_i^*)
\end{array}\right).
\end{align*}
Direct computation yields
$$V(1\oplus 0)V^*=(1-\alpha(1\otimes e))\oplus (1\otimes e)$$
where we write $1_{C(X)}\otimes e$ simply by $1\otimes e$.
Hence we have $${\rm ind}[u_{\tilde{\alpha}}]_1=[1-\alpha(1\otimes e)]_0+[1\otimes e]_0-[1]_0=1-[\alpha(1\otimes e)]_0\in K_0(C(X)\otimes\mathbb{K}).$$
Now we have $1-{\rm ind}[u_{\tilde{\alpha}}]_1=[\alpha(1\otimes e)]_0$, and this proves the statement.
\end{proof}
\begin{lem}\label{com}
We have the following commutative diagram with exact rows
$$\xymatrix@!C{
K^1(X)\ar[r]\ar@{=}[d]&[X, \operatorname{Aut}E_{n+1}]\ar[d]^{q}\ar[r]^{\eta_*}&\operatorname{Tor}(H^2(X),\Z_n)\ar[d]^{l}\\
K^1(X)\ar[r]^{\rho}&[X,\Aut \cO_{n+1}]\ar[r]^{\hat{\beta}}&(1+\operatorname{Tor}(\tilde{K}^0(X),\Z_n))^\times.
}$$
\end{lem}
\begin{proof} Let $\End E_{n+1}$ be the set of unital endomorphisms of $E_{n+1}$, and let $\End_0 E_{n+1}$ be its connected
component of $\id$.
Then the inclusion $\Aut E_{n+1}\subset \End_0E_{n+1}$ is a weak homotopy equivalence (see \cite[Theorem 3.14]{ST}).
For $u\in U(E_{n+1})$, we denote by $\rho_u$ the unital endomorphism of $E_{n+1}$ defined by
$\rho_u(T_i)=uT_i$.
Then the correspondence $[u]_1\mapsto [\rho_u]$ gives the map from $K^1(X)$ to $[X,\Aut E_{n+1}]$.
The exactness follows from \cite[Theorem 3.15]{ST} and the Bockstein exact sequence.
The right square commutes by Proposition \ref{commutes}.
The left square commutes because the following diagram commutes
$$\xymatrix{
u\in U(C(X)\otimes E_{n+1})\ar@{=}[d]\ar[r]&\operatorname{Map}(X, \operatorname{End}_0E_{n+1})\ni\alpha=\rho_{u}\ar[d]\\
u\in U(C(X)\otimes E_{n+1})\ar[r]^{\pi}&U(C(X)\otimes \mathcal{O}_{n+1})\ni u_{\tilde{\alpha}}=\pi(u)
}$$
where $\rho_{u}\colon X\ni x\mapsto \rho_{u_x}\in \operatorname{End}_0E_{n+1}$ for every $u\in U(C(X)\otimes E_{n+1})$.
\end{proof}
\subsection{An example of non-commutative $[X,\Aut \cO_{n+1}]$}
We first examine the ring structure of $K^*(M_n\times \Sigma M_n)$ to show that
$[M_n\times \Sigma M_n, \operatorname{Aut}\cO_{n+1}]$ is a non-commutative group.
By Lemma \ref{yab} and Theorem \ref{Kunn}, we have
\begin{align*}
\tilde{K}^1(M_n\times \Sigma M_n)\cong &1\otimes \tilde{K}^1(\Sigma M_n)\oplus \tilde{K}^0(M_n)\otimes \tilde{K}^1(\Sigma M_n),\\
\tilde{K}^0(M_n\times \Sigma M_n)\cong &\tilde{K}^0(M_n)\otimes 1\oplus \tilde{K}^0(M_n\wedge \Sigma M_n).
\end{align*}
Therefore Lemma \ref{wer} yields $\tilde{K}^i(M_n\times \Sigma M_n)\cong \mathbb{Z}_n^{\oplus 2}$.
In particular, the map $\rho\colon \tilde{K}^i(M_n\times \Sigma M_n)\to\tilde{K}^i(M_n\times \Sigma M_n; \mathbb{Z}_n)$ is injective by the Bockstein exact sequence.
We determine a generator of $K^1(M_n ;\mathbb{Z}_n)\cong \tilde{K}^0(M_n)\cong \mathbb{Z}_n$.
Recall that the canonical gauge action $\lambda_z\colon S^1\to \operatorname{Aut}E_{n+1}$ is a generator of $\pi_1(\operatorname{Aut}E_{n+1})=\mathbb{Z}_n$ (see \cite[Theorem 2.36, 3.14 ]{ST}).
Therefore we have a homotopy
$$h\colon [0,1]\times S^1\to\operatorname{Aut}E_{n+1}$$
with $h_0(z)={\rm id}_{E_{n+1}}$, $h_1(z)=\lambda_z^n$,
which extends $\lambda_z$ to a map
$$\lambda\colon M_n\to \operatorname{Aut}E_{n+1}$$
satisfying $\lambda\circ i=\lambda_z$ for the map $i\colon S^1\hookrightarrow M_n$.
For the gauge action $\tilde{\lambda}$ of $\cO_{n+1}$, we get an extension $\tilde{\lambda}:M_n\to \Aut \cO_{n+1}$
in the same way.
\begin{lem}\label{as}
We have the following isomorphisms :
$$i^* : [M_n, \operatorname{Aut}E_{n+1}] \ni [\lambda]\mapsto [\lambda_z]\in [S^1, \operatorname{Aut}E_{n+1}],$$
$$i^* : [M_n, \operatorname{Aut}\mathcal{O}_{n+1}]\ni [\tilde{\lambda}]\mapsto [\tilde{\lambda}_z]\in [S^1, \operatorname{Aut}\mathcal{O}_{n+1}].$$
\end{lem}
\begin{proof}
First, we show that $i^* : [M_n, \operatorname{Aut}\mathcal{O}_{n+1}]\to[S^1, \operatorname{Aut}\mathcal{O}_{n+1}]$ is an isomorphism.
By \cite{D2}, the Puppe sequence $S^1\xrightarrow{n} S^1\xrightarrow{i} M_n\to S^2\to\dotsm $ gives an exact sequence
$$\mathbb{Z}_n=\pi_1(\operatorname{Aut}\mathcal{O}_{n+1})\xleftarrow{n}\pi_1(\operatorname{Aut}\mathcal{O}_{n+1})\xleftarrow{i^*}[M_n, \operatorname{Aut}\mathcal{O}_{n+1}]\leftarrow 0.$$
Hence the map $i^*$ is an isomorphism of groups.
Similarly, the map $i^*\colon [M_n, \operatorname{Aut}E_{n+1}]\to [S^1, \operatorname{Aut}E_{n+1}]$ is an isomorphism by \cite[Theorem 2.36, 3.14]{ST}.
\end{proof}
\begin{lem}\label{m0}
For every $\alpha \in \operatorname{Map}(M_n, \operatorname{Aut}\mathcal{O}_{n+1})$, we have
$K_1(\alpha)={\rm id}_{K^1(M_n ; \mathbb{Z}_n)}.$
In particular, we have $\tilde{K}^0(M_n)\cdot K^1(M_n ; \mathbb{Z}_n)=\tilde{K}^0(M_n)\cdot \tilde{K}^0(M_n)=\{0\}$.
\end{lem}
\begin{proof}
By Lemma \ref{as}, we have the following commutative diagram
$$\xymatrix{
[M_n, \operatorname{Aut}\mathcal{O}_{n+1}]\ar@{=}[r]^{i^*}\ar[d]&[S^1, \operatorname{Aut}\mathcal{O}_{n+1}]\ar[d]\\
(K_1(C_0(M_n, pt)\otimes\mathcal{O}_{n+1}), \;\circ)\ar[r]^{K_1(r)}&(K_1(C_0(S^1, pt)\otimes \mathcal{O}_{n+1}), \;\circ),
}$$
where $r : C_0(M_n, pt)\to C_0(S^1, pt)$ is the restriction map induced by $i : S^1\hookrightarrow M_n$. Since the two vertical maps are group isomorphisms, the map $K_1(r)$ is a group homomorphism with respect to the multiplication $\circ$. We have $K_1(C(S^1)\otimes \mathcal{O}_{n+1})\xrightarrow{\delta}\tilde{K}^0(S^1)=0$, and it follows that $(K^1(S^1 ; \mathbb{Z}_n), \;+)=(K^1(S^1 ;\mathbb{Z}_n),\;\circ)$ by Theorem \ref{Iz}. Therefore the two multiplications $\circ$ and $+$ coincide in $K^1(M_n ; \mathbb{Z}_n)$,
and we have $K_1(\alpha)={\rm id}_{K^1(M_n ;\mathbb{Z}_n)}$ and $K^1(M_n ;\mathbb{Z}_n)\cdot \tilde{K}^0(M_n)=0$.
Since the map $\beta$ is compatible with multiplication, we have $\tilde{K}^0(M_n)\cdot\tilde{K}^0(M_n)=\beta(\tilde{K}^1(M_n ;\mathbb{Z}_n)\cdot\tilde{K}^0(M_n))=\{0\}$.
\end{proof}
We denote by $a_{\lambda}$ the generator $[\tilde{\lambda}]\in [M_n, \operatorname{Aut}\mathcal{O}_{n+1}]=K^1(M_n ;\mathbb{Z}_n)$, and denote $g\colon=\beta(a_{\lambda})$.
By Lemma \ref{wer}, two elements $g$ and $\rho(g)$ are the generators of $\tilde{K}^0(M_n)$ and $\tilde{K}^0(M_n ;\mathbb{Z}_n)$ respectively.
By Lemma \ref{m0}, we have
\begin{align*}
g\cdot g&=0 \in\tilde{K}^0(M_n),\\
a_{\lambda}\cdot g&=0 \in\tilde{K}^1(M_n ; \mathbb{Z}_n).
\end{align*}
Now, we determine the group $[M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]$.
Since the reduction $\rho\colon \tilde{K}^1(M_n\times \Sigma M_n)\to \tilde{K}^1(M_n\times \Sigma M_n ; \mathbb{Z}_n)$
is injective, we regard $\tilde{K}^1(M_n\times \Sigma M_n)$ as a subgroup of $(\tilde{K}^1(M_n\times \Sigma M_n ; \mathbb{Z}_n), \circ)$.
From Lemma \ref{com}, we can regard $\tilde{K}^1(M_n\times \Sigma M_n)$ as a normal subgroup of the group
$[M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]$ too.
Consider the map
$$\Lambda \colon=\lambda\circ{\rm Pr}_{M_n}\colon M_n\times \Sigma M_n\to \operatorname{Aut}E_{n+1}.$$
By definition, we have $q([\Lambda])=[u_{\tilde{\Lambda}}]_1={\rm Pr}^*_{M_n}([u_{\tilde{\lambda}}]_1)
=\mu_R(a_{\lambda}\otimes 1)\in\tilde{K}^1(M_n\times \Sigma M_n ;\mathbb{Z}_n).$
\begin{prop}\label{poop}
The group homomorphism $q\colon [M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]\to [M_n\times\Sigma M_n, \operatorname{Aut}\mathcal{O}_{n+1}]$ is injective.
\end{prop}
\begin{proof} Note that the K\"{u}nneth formula implies $H^2(M_n\times \Sigma M_n)\cong \Z_n$.
Since $\hat{\beta}(\mu_R(a_\lambda\otimes 1))=1-\mu(g\otimes 1)$ has order $n$,
and $\hat{\beta}(\mu_R(a_\lambda\otimes 1))=l(\eta_*([\Lambda]))$, the element $\eta_*([\Lambda])$ is
a generator of $H^2(M_n\times \Sigma M_n)$, and $l$ is injective.
Thus the statement follows from Lemma \ref{com}.
\end{proof}
\begin{thm}\label{ex1} With the above notation, the group
$[M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]$ is isomorphic to the Heisenberg group
$$ \mathbb{Z}_n^{\oplus 2}
\rtimes_{\left(
\begin{array}{cc}
1 &1 \\
0 &1
\end{array}
\right)
} \mathbb{Z}_n.$$
\end{thm}
\begin{proof} We already know that the group $[M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]$ is isomorphic to
the subgroup of $(K^1(M_n\times \Sigma M_n;\Z_n),\circ)$ generated by $K^1(M_n\times \Sigma M_n)$ and
$[u_{\tilde{\Lambda}}]_1$.
Since the order of $[u_{\tilde{\Lambda}}]_1$ is $n$, the group is a semi-direct product $(\Z_n\times \Z_n)\rtimes \Z_n$.
To determine the group structure, it suffices to compute the action of
$\hat{\beta}([u_{\tilde{\Lambda}}]_1)=1-\mu(g\otimes 1)$
on $K^1(M_n\times \Sigma M_n)$ by multiplication.
Since $\tilde{K}^1(M_n\times \Sigma M_n)=\langle\mu(1\otimes u)\rangle\oplus \langle\mu(g\otimes u)\rangle\cong\mathbb{Z}_n\oplus\mathbb{Z}_n$,
and $g\cdot g=0$, we get the statement.
\end{proof}
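For concreteness, the action in the statement can be read off directly: by Lemma \ref{inverse} (2), conjugation by $[u_{\tilde{\Lambda}}]_1$ acts on $K^1(M_n\times \Sigma M_n)\subset\ker\beta$ as multiplication by $\hat{\beta}([u_{\tilde{\Lambda}}]_1)=1-\mu(g\otimes 1)$, and
\begin{align*}
(1-\mu(g\otimes 1))\cdot\mu(1\otimes u)&=\mu(1\otimes u)-\mu(g\otimes u),\\
(1-\mu(g\otimes 1))\cdot\mu(g\otimes u)&=\mu(g\otimes u),
\end{align*}
the second equation because $g\cdot g=0$. In the basis $\{-\mu(g\otimes u), \mu(1\otimes u)\}$ this is exactly the unipotent matrix
$\left(
\begin{array}{cc}
1 &1 \\
0 &1
\end{array}
\right)$ of the statement.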
\begin{cor} The groups $[M_n\times \Sigma M_n, \operatorname{Aut}E_{n+1}]$ and
$[M_n\times \Sigma M_n, \operatorname{Aut}\mathcal{O}_{n+1}]$ are non-commutative for any $n\geq 2$.
In particular, two spaces $\operatorname{BAut}\mathcal{O}_{n+1}$ and $\operatorname{BAut}E_{n+1}$ are not H-spaces.
\end{cor}
\begin{rem}
If $n$ is an odd number, we can actually show
$$[M_n\times\Sigma M_n, \operatorname{Aut}\mathcal{O}_{n+1}]\cong [M_n\times \Sigma M_n,\Aut E_{n+1}]\times \mathbb{Z}_n.$$
\end{rem}
\section{Continuous fields of Cuntz algebras}
We first review Dadarlat's results on the continuous fields of the Cuntz algebras.
We refer to \cite[Definition 10.1.2, 10.1.3]{Dix} for the definition of the continuous fields of C*-algebras.
A locally trivial continuous field of a C*-algebra $A$ is the section algebra of a locally trivial fiber bundle with fiber $A$, which is an associated bundle of a principal $\operatorname{Aut}A$-bundle.
By \cite[Theorem 1.1]{D2}, all continuous fields of $\mathcal{O}_{n+1}$ over finite CW-complexes are locally trivial.
So we identify the continuous fields of $\mathcal{O}_{n+1}$ over finite CW-complexes with principal $\operatorname{Aut}\mathcal{O}_{n+1}$ bundles.
For a compact Hausdorff space $X$, we denote by ${\rm Vect}_m\;(X)$ the set of the vector bundles of rank $m$.
Dadarlat investigated continuous fields of $\mathcal{O}_{n+1}$ over $X$ arising
from $E\in {\rm Vect}_{n+1}(X)$, which are Cuntz-Pimsner algebras.
We refer to \cite{KT} and \cite{Pim} for Cuntz-Pimsner algebras.
Fixing a Hermitian structure of $E$, we get a Hilbert $C(X)$-module from $E$, which we regard as a $C(X)$-$C(X)$-bimodule.
Then the Pimsner construction gives the Cuntz-Pimsner algebra $\cO_E$, which is the quotient of
$\mathcal{T}_E$ by $\mathcal{K}_E$.
The algebra $\mathcal{O}_E$ is a continuous field of $\cO_{n+1}$ over $X$.
We denote by $\theta_E : C(X)\to \mathcal{O}_E$ the natural unital inclusion.
\begin{thm}[{\cite[Theorem 4.8]{Pim}}]\label{op}
Let $X$ be a compact Hausdorff space, and let $E$ be a vector bundle over $X$. Then we have the following exact sequence
$$\xymatrix{
K_0(C(X))\ar[r]^{1-[E]}&K_0(C(X))\ar[r]^{\theta_{E}}&K_0(\mathcal{O}_E)\ar[d]\\
K_1(\mathcal{O}_E)\ar[u]&K_1(C(X))\ar[l]^{\theta_E}&K_1(C(X))\ar[l]^{1-[E]}
}$$
where the map $\theta_E : C(X)\to \mathcal{O}_E$ is the natural inclusion, and the map $1-[E]$ is the multiplication by $1-[E]\in K^0(X)$.
\end{thm}
Dadarlat found an invariant to classify the $C(X)$-linear isomorphism classes of $\mathcal{O}_E$.
\begin{thm}[{\cite[Theorem 1.1]{D1}}]
Let $X$ be a compact metrizable space, and let $E$ and $F$ be vector bundles of rank $\geq 2$ over $X$. Then there is a unital $*$-homomorphism $\varphi : \mathcal{O}_E\to \mathcal{O}_F$ with $\varphi\circ\theta_E=\theta_F$ if and only if $(1-[E])\cdot K^0(X)\subset(1-[F])\cdot K^0(X)$. Moreover we can take $\varphi$ to be an isomorphism if and only if $(1-[E])\cdot K^0(X)=(1-[F])\cdot K^0(X)$.
\end{thm}
The key observation of Dadarlat is that if there is a $C(X)$-linear isomorphism $\varphi : \mathcal{O}_E\to\mathcal{O}_F$,
we have $(1-[E])\cdot K^0(X)=\operatorname{Ker}K_0(\theta_E)=\operatorname{Ker}K_0(\theta_F)=(1-[F])\cdot K^0(X)$
by the exact sequence of Theorem \ref{op}.
Dadarlat also estimated the cardinality of the set of the $C(X)$-linear isomorphism classes of $\mathcal{O}_E$.
We denote $\lceil x\rceil\colon ={\rm min}\{k\in\mathbb{Z} \colon k\geq x\}$.
\begin{thm}\label{DG}
Let $X$ be a finite connected CW-complex with $\operatorname{Tor}(H^*(X), \mathbb{Z}_n)=0$.
Then the following holds.\\
${\rm (1)}$ $|\tilde{K}^0(X)\otimes \mathbb{Z}_n|=|\tilde{H}^{even}(X, \mathbb{Z}_n)|$.\\
$\rm (2)$ If $n\geq\lceil({\rm dim}\;X-3)/3\rceil$, the set $\{[\mathcal{O}_E];\; E\in {\rm Vect}_{n+1}(X)\}$ exhausts
all the isomorphism classes of continuous fields of $\cO_{n+1}$ over $X$, and its cardinality is
$|\tilde{K}^0(X)\otimes \mathbb{Z}_n|$.
\end{thm}
Our goal in this section is to remove the restriction $n\geq\lceil({\rm dim}\;X-3)/3\rceil$ from the above statement
using a localization trick.
In fact, all the necessary algebraic arguments for the proof are already in Dadarlat's paper \cite{D1}.
Let $P_n$ be the set of all prime numbers $p$ with $(n,p)=1$,
and let $\M_{(n)}$ be the UHF algebra
$$\M_{(n)}=\bigotimes_{p\in P_n}\M_{p^\infty}.$$
This is the unique UHF algebra satisfying $K_0(\M_{(n)})=\mathbb{Z}_{(n)}$, where $\mathbb{Z}_{(n)}$ denotes the localization of $\mathbb{Z}$ obtained by inverting every prime that does not divide $n$.
Assume that $r$ is a natural number with $(n,r)=1$.
Then the K-groups of $\mathcal{O}_{nr+1}\otimes \mathbb{M}_{(n)}$ are
$$K_0(\mathcal{O}_{nr+1}\otimes \mathbb{M}_{(n)})=\mathbb{Z}_{nr}\otimes\mathbb{Z}_{(n)}=\mathbb{Z}_n=\langle[1]_0\rangle, \;\;
K_1(\mathcal{O}_{nr+1}\otimes \mathbb{M}_{(n)})=0.$$
Therefore Kirchberg and Phillips' classification theorem \cite[Theorem 4.2.4]{Phill} yields $\mathcal{O}_{nr+1}\otimes \mathbb{M}_{(n)}\cong\mathcal{O}_{n+1}$. Let $F_r$ be a vector bundle over $X$ of rank $nr+1$. Then we have a continuous field of $\mathcal{O}_{n+1}$ of the form $\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}$.
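For the reader's convenience, a minimal check of the first identification: since $(n,r)=1$, we have $\mathbb{Z}_{nr}\cong\mathbb{Z}_n\oplus\mathbb{Z}_r$, and
$$\mathbb{Z}_r\otimes\mathbb{Z}_{(n)}=0,\qquad \mathbb{Z}_n\otimes\mathbb{Z}_{(n)}=\mathbb{Z}_n,$$
because every prime factor of $r$ is invertible in $\mathbb{Z}_{(n)}$, while the primes dividing $n$ are not inverted. Hence $\mathbb{Z}_{nr}\otimes\mathbb{Z}_{(n)}=\mathbb{Z}_n$.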
\begin{dfn}
We denote by $\mathcal{O}(X)_n$ the $C(X)$-linear isomorphism classes of continuous fields of the Cuntz algebra
$\mathcal{O}_{n+1}$ over $X$ of the form $\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}$ for $F_r\in{\rm Vect}_{nr+1}(X)$ with $(n, r)=1$.
\end{dfn}
Note that we have $K_*(C(X)\otimes \mathbb{M}_{(n)})=K^*(X)\otimes \mathbb{Z}_{(n)}$.
Following Dadarlat's argument, we consider an ideal $(1-[F_r])K^0(X)\otimes \mathbb{Z}_{(n)}$ of the ring $K^0(X)\otimes \mathbb{Z}_{(n)}$.
\begin{lem}\label{inv}
Let $X$ be a finite connected CW-complex.
Let $F_r$ and $F_R$ be vector bundles over $X$ of rank $nr+1$ and $nR+1$ respectively, with $(n, r)=(n, R)=1$. If $\mathcal{O}_{F_r}\otimes\mathbb{M}_{(n)}$ is $C(X)$-linearly isomorphic to $\mathcal{O}_{F_R}\otimes \mathbb{M}_{(n)}$, we have $(1-[F_r]) K^0(X)\otimes\mathbb{Z}_{(n)}=(1-[F_R])K^0(X)\otimes \mathbb{Z}_{(n)}$.
\end{lem}
\begin{proof}
Let $\varphi \colon \mathcal{O}_{F_r}\otimes\mathbb{M}_{(n)} \to\mathcal{O}_{F_R}\otimes \mathbb{M}_{(n)}$ be a $C(X)$-linear isomorphism.
First, we show that the following diagram induces a commutative diagram of $K_0$-groups:
$$\xymatrix{
C(X)\otimes \mathbb{M}_{(n)}\ar[rr]^{\theta_{F_r}\otimes {\rm id}}\ar@{=}[d]&&\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}\ar[d]^{{\rm id}\otimes 1\otimes {\rm id}}\\
C(X)\otimes \mathbb{M}_{(n)}\ar[rr]^{\theta_{F_r}\otimes {\rm id}\otimes 1}\ar@{=}[d]&&\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}\otimes \mathbb{M}_{(n)}\ar[d]^{\varphi\otimes {\rm id}}\\
C(X)\otimes \mathbb{M}_{(n)}\ar[rr]^{\theta_{F_R}\otimes {\rm id}\otimes 1}\ar@{=}[d]&&\mathcal{O}_{F_R}\otimes \mathbb{M}_{(n)}\otimes\mathbb{M}_{(n)}\\
C(X)\otimes \mathbb{M}_{(n)}\ar[rr]^{\theta_{F_R}\otimes {\rm id}}&&\mathcal{O}_{F_R}\otimes \mathbb{M}_{(n)}.\ar[u]^{{\rm id}\otimes 1\otimes{\rm id}}
}$$
The middle square of the diagram commutes because $\varphi$ is $C(X)$-linear.
By \cite[Theorem 2.2]{DW}, the two $*$-homomorphisms $1\otimes {\rm id}, {\rm id}\otimes 1 \colon \mathbb{M}_{(n)}\to\mathbb{M}_{(n)}\otimes\mathbb{M}_{(n)}$ are homotopic. So the upper and lower squares of the diagram commute up to homotopy, and hence commute at the level of $K$-groups.
Second, we show the vertical map ${\rm id}\otimes 1\otimes {\rm id} \colon \mathcal{O}_{F_r}\otimes\mathbb{M}_{(n)}\to\mathcal{O}_{F_r}\otimes\mathbb{M}_{(n)}\otimes\mathbb{M}_{(n)}$ induces an isomorphism of the K-groups. One has an isomorphism $\psi : \mathbb{M}_{(n)}\to\mathbb{M}_{(n)}\otimes\mathbb{M}_{(n)}$. By \cite[Theorem 2.2]{DW}, two maps $1\otimes {\rm id}$ and $\psi$ are homotopic. So the map $K_0({\rm id}\otimes 1\otimes {\rm id})=K_0({\rm id}\otimes \psi)$ is an isomorphism.\\
Finally, we show $(1-[F_r])K^0(X)\otimes \mathbb{Z}_{(n)}=(1-[F_R])K^0(X)\otimes \mathbb{Z}_{(n)}$.
An exact sequence $0\to \mathcal{K}_{F_r}\otimes \mathbb{M}_{(n)}\to\mathcal{T}_{F_r}\otimes\mathbb{M}_{(n)}\to\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}\to 0$ gives a $6$-term exact sequence, and we have the following exact sequence :
$$K_0(C(X))\otimes\mathbb{Z}_{(n)}\xrightarrow{(1-[F_r])\otimes 1}K_0(C(X))\otimes \mathbb{Z}_{(n)}\xrightarrow{K_0(\theta_{F_r}\otimes{\rm id})}K_0(\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}).$$
So we have $\operatorname{Ker}K_0(\theta_{F_r}\otimes {\rm id})=(1-[F_r])K^0(X)\otimes \mathbb{Z}_{(n)}$. This gives the conclusion because the diagram below commutes by the above argument:
$$\xymatrix{
K_0(C(X))\otimes \mathbb{Z}_{(n)}\ar[rr]^{K_0(\theta_{F_r}\otimes {\rm id})}\ar@{=}[d]&&K_0(\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)})\ar[d]^{K_0(\varphi)}\\
K_0(C(X))\otimes \mathbb{Z}_{(n)}\ar[rr]^{K_0(\theta_{F_R}\otimes {\rm id})}&&K_0(\mathcal{O}_{F_R}\otimes \mathbb{M}_{(n)}).
}$$
\end{proof}
We define an equivalence relation $\sim_n$ in $\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}$.
\begin{dfn}
Let $a$ and $b$ be elements in $\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)}$. Then $a\sim_n b$ if there exists $z\in\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)}$ satisfying $(n+a)(1+z)=(n+b)$.
\end{dfn}
All elements of $\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)} $ are nilpotent by \cite[Chap.II, Theorem 5.9]{K}.
The relation $\sim_n$ is therefore well-defined: $1+z$ is invertible, with inverse $\sum_{k=0}^{\infty}(-z)^k$ (a finite sum, since $z$ is nilpotent).
For a vector bundle $E$ of rank $m$, we denote $[\tilde{E}]\colon=[E]-m$.
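As a simple illustration of the relation $\sim_n$ (not needed in the sequel), take $X=S^2$, so that $\tilde{K}^0(S^2)=\mathbb{Z}b$ with $b^2=0$ for the reduced Bott class $b$. For $a=\alpha b$ and $z=\zeta b$ with $\alpha, \zeta\in\mathbb{Z}_{(n)}$ we get
$$(n+\alpha b)(1+\zeta b)=n+(\alpha+n\zeta)b,$$
so $\alpha b\sim_n \beta b$ if and only if $\beta-\alpha\in n\mathbb{Z}_{(n)}$. Hence $\tilde{K}^0(S^2)\otimes\mathbb{Z}_{(n)}/\sim_n\;\cong\mathbb{Z}_{(n)}/n\mathbb{Z}_{(n)}\cong\mathbb{Z}_n$, consistent with $|\tilde{K}^0(S^2)\otimes\mathbb{Z}_n|=n$.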
\begin{lem}\label{equiv}
Let $X$ be a connected compact Hausdorff space, and let $F_r$ and $F_R$ be vector bundles of rank $nr+1$ and $nR+1$ respectively with $(n, r)=(n, R)=1$.
If $(1-[F_r])K^0(X)\otimes \mathbb{Z}_{(n)}=(1-[F_R])K^0(X)\otimes \mathbb{Z}_{(n)}$, we have $[\tilde{F_r}]r^{-1}\sim_n [\tilde{F_R}]R^{-1}$.
\end{lem}
\begin{proof}
By assumption we have $h\in K^0(X)\otimes \mathbb{Z}_{(n)}$ satisfying $(nr+[\tilde{F_r}])h=(nR+[\tilde{F_R}])$. A split exact sequence $0\to\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)}\to K^0(X)\otimes \mathbb{Z}_{(n)}\xrightarrow{{\rm ev}_{pt}}K^0(\{pt\})\otimes \mathbb{Z}_{(n)}\to 0$ yields $h-R/r\in\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)}$. So we have $(n+[\tilde{F_r}]r^{-1})(1+\frac{r}{R}(h-R/r))=n+[\tilde{F_R}]R^{-1}$.
\end{proof}
By Lemma \ref{inv} and Lemma \ref{equiv}, the map $I_n : \mathcal{O}(X)_n\ni [\mathcal{O}_{F_r}\otimes \mathbb{M}_{(n)}]\mapsto [[\tilde{F_r}]r^{-1}]\in \tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}/\sim_n$ is well-defined.
\begin{lem}\label{big}
Let $X$ be a finite dimensional connected compact Hausdorff space. Then the map $I_n$ is surjective, and we have
$$|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|\geq|\mathcal{O}(X)_n|\geq |\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}/\sim_n|.$$
\end{lem}
\begin{proof}
Every element of $\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}$ is of the form $\frac{1}{r}\otimes x$ where $(n, r)=1$ and $x\in\tilde{K}^0(X)$.
By \cite[Section 9, Theorem 1.2]{H}, we have $R\in \mathbb{N}$ satisfying $\tilde{K}^0(X)=\{[\tilde{E}]\in\tilde{K}^0(X)\;\mid {\rm rank}\;E=nR+1\}$. So we have a vector bundle $F_{rR}$ of rank $nrR+1$ with $Rx=[\tilde{F}_{rR}]$. Therefore we have $I_n([\mathcal{O}_{F_{rR}}\otimes\mathbb{M}_{(n)}])=[\frac{1}{r}\otimes x]$.
By \cite[Theorem 1.4]{D2}, one has $\mathcal{O}(X)_n\subset[X, \operatorname{BAut}\mathcal{O}_{n+1}]$. This proves the lemma.
\end{proof}
Let $R$ be a commutative algebra. A filtration of $R$ is a sequence of subalgebras $$\dotsm R_{k+1}\subset R_k\subset\dotsm\subset R_1=R$$ with $R_pR_q\subset R_{p+q}$.
Let $X$ be a finite CW-complex. Then the group $\tilde{K}^0(X)$ is a finitely generated commutative group, by an induction argument over the attached cells.
The algebra $\tilde{K}^0(X)$ has a filtration
$$0=K^0_m(X)\subset\dotsm\subset K^0_1(X)=\tilde{K}^0(X)$$
by \cite[Section 2.1]{AH}.
Consider a sequence of $k$-skeletons $\{pt\}=X_0\subset X_1\subset\dotsm\subset X_m=X$. Then we define $K^0_k(X)$ by $\operatorname{Ker}(K^0(X)\to K^0(X_k))$.
If the cohomology groups of a finite CW-complex $X$ have no torsion, one has $\operatorname{Tor}(K^0_k(X)/K^0_{k+1}(X), \mathbb{Z}_n)=0$ by \cite[Section 2.3]{AH} and \cite[Section 2.4]{AH}.
Moreover, Dadarlat shows in his proof of \cite[Theorem 5.3]{D1} that if the cohomology groups of the space $X$ have no $n$-torsion, one has $\operatorname{Tor}(K_k^0(X)/K^0_{k+1}(X), \mathbb{Z}_n)=0$ for all $k\leq m$.
The proof of the following lemma is the same as in the proof of \cite[Lemma 5.2]{D1}.
\begin{lem}\label{tec}
Let $R$ be a filtered commutative ring with $0=R_m\subset R_{m-1}\dotsm \subset R_1=R$ and such that $R$ is finitely generated as an additive group. If $\operatorname{Tor}(R_k/R_{k+1}, \mathbb{Z}_n)=0$ for every $k$, we have $|(R\otimes\mathbb{Z}_{(n)})/\sim_{n}|\geq |R\otimes \mathbb{Z}_n|$.
\end{lem}
\begin{cor}\label{Syn}
Let $X$ be a finite CW-complex. Suppose $\operatorname{Tor}(H^*(X),\;\mathbb{Z}_n)=0$. Then we have
$$|\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}/\sim_n|\geq |\tilde{K}^0(X)\otimes \mathbb{Z}_n|.$$
\end{cor}
We need the following proposition.
\begin{prop}[{\cite[Proposition 5.1]{D1}}]\label{tower}
Let $X$ be a finite CW-complex. Then we have
$$|\tilde{H}^{even}(X, \mathbb{Z}_n)|\geq|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|,$$
where $\tilde{H}^{even}(X, \mathbb{Z}_n) \colon=\prod_{k\geq 1}H^{2k}(X, \mathbb{Z}_n)$.
\end{prop}
Now we show the following theorem.
\begin{thm}
Let $X$ be a finite CW-complex. Suppose $\operatorname{Tor}(H^*(X),\;\mathbb{Z}_n)=0$. Then the map $I_n : \mathcal{O}(X)_n\to\tilde{K}^0(X)\otimes\mathbb{Z}_{(n)}/\sim_n$ is bijective, and we have
$$|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|=|\mathcal{O}(X)_n|=|\tilde{H}^{even}(X, \mathbb{Z}_n)|.$$
\end{thm}
\begin{proof}
By Corollary \ref{Syn}, we have $|\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}/\sim_n|\geq |\tilde{K}^0(X)\otimes \mathbb{Z}_n|$. By Lemma \ref{big}, we have $|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|\geq |\tilde{K}^0(X)\otimes \mathbb{Z}_{(n)}/\sim_n|$. From Proposition \ref{tower}, we have $|\tilde{H}^{even}(X, \mathbb{Z}_n)|\geq|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|,$ and Theorem \ref{DG} yields
$$|[X, \operatorname{BAut}\mathcal{O}_{n+1}]|=|\mathcal{O}(X)_n|=|\tilde{H}^{even}(X, \mathbb{Z}_n)|.$$
\end{proof}
\section{Computational method}
The Hamiltonian of the studied model reads
\begin{align}
H=&\sum_{ij,\sigma}
\begin{pmatrix} a_{i\sigma}^{\dagger} & b_{i\sigma}^{\dagger} \end{pmatrix}
\!
\begin{pmatrix} t_{aa} &
t_{ab}\\ t_{ab} & t_{bb} \end{pmatrix}
\!
\begin{pmatrix} a_{j\sigma}^{\phantom\dagger} \\ b_{j\sigma}^{\phantom\dagger} \end{pmatrix}
+\frac{\Delta}{2}\sum_{i,\sigma}(n^a_{i\sigma}-n^b_{i\sigma})
\nonumber
\\
&+ U \sum_{i,\alpha}n^\alpha_{i\uparrow}n^\alpha_{i\downarrow}+
\sum_{i,\sigma\sigma'}(U'-J\delta_{\sigma\sigma'}) n^a_{i\sigma}n^b_{i\sigma'},
\label{eq:2bhm}
\end{align}
where $a^{\dag}_{i\sigma}$ and $b^{\dag}_{i\sigma}$ are fermionic
operators that create electrons with the respective orbital flavors
and spin $\sigma$ at site $i$ of a square lattice. The first term
describes the nearest neighbor hopping.
The rest, expressed in terms of local densities
$n^{c}_{i,\sigma} \equiv c^{\dag}_{i\sigma}c_{i\sigma}$,
captures the crystal-field $\Delta$, the Hubbard
interaction $U$ and Hund's exchange $J$ in the Ising approximation. Parameters
$U=4$, $J=1$, $U'=U-2J$,~\footnote{The results are little sensitive to variation of $U'$ and $J$ as long
as the ratio $\Delta/J$ is fixed.} $t_{aa}=0.4118$, $t_{bb}=-0.1882$, $t_{ab}=$0, 0.02, 0.06
with magnitudes (in eV) typical for $3d$ transition metal oxides
were used in previous studies~\cite{Kunes2014a,Kunes2014c,Kunes2016}.
We follow the standard DMFT procedure of self-consistent mapping
the lattice model onto an auxiliary Anderson impurity model (AIM)~\cite{Georges1992,Jarrell1992},
which is solved with the ALPS implementation~\cite{Bauer2011, Shinaoka2016, Gaenko2017}~\footnote{Numerically identical results for the normal state susceptibilities were obtained
with w2dynamics~\cite{w2dynamics}.}
of the matrix version of the strong-coupling continuous-time quantum
Monte-Carlo (CT-QMC) algorithm~\cite{Werner2006a}.
The susceptibilities~\cite{Georges1996,Kunes2011,vanLoon2015,Krien2017} are obtained by solving
the Bethe-Salpeter equation in the particle-hole channel with the DMFT 1P propagators
and 2P-irreducible vertices of AIM using the orthogonal polynomial representation~\cite{Boehnke2011}.
The susceptibilities $\chi^{OO}_{\eta\eta}(\mathbf{k},\omega)$ are obtained by analytic
continuation~\cite{Gubernatis1991,SM} of their Matsubara representations
\begin{equation*}
\chi^{OO}_{\eta\eta}(\mathbf{k},i\nu_n)=
\!
\sum_{\mathbf{R}}
\!
\int_0^{\beta}\!\!\!\!\!\mathrm{d}\tau
\,e^{i(\nu_n\tau+\mathbf{k}\cdot\mathbf{R})}
\langle O^{\eta}_{\mathbf{i}+\mathbf{R}}(\tau)O^{\eta}_\mathbf{i}(0)
\rangle
-\langle O^\eta\rangle^2\!\!,
\end{equation*}
with the observables $O$ of interest being excitonic fields
$R^\eta_i(I^\eta_i)=\sqrt{\pm 1}\sum_{\alpha\beta}
\sigma^\eta_{\alpha\beta}(
a_{i\alpha}^{\dagger} b_{i\beta}^{\phantom\dagger}\pm
b_{i\alpha}^{\dagger} a_{i\beta}^{\phantom\dagger})$, respectively,
with $\eta=x,y$ and the $z$-component of spin moment
${S^z_i=\sum_{\alpha\beta}\sigma^z_{\alpha\beta}
(a_{i\alpha}^{\dagger} a_{i\beta}^{\phantom\dagger}
+
b_{i\alpha}^{\dagger} b_{i\beta}^{\phantom\dagger})}$.
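For concreteness, writing out the Pauli matrix for $\eta=x$, the excitonic fields read explicitly
\begin{align*}
R^x_i&=a_{i\uparrow}^{\dagger} b_{i\downarrow}^{\phantom\dagger}+a_{i\downarrow}^{\dagger} b_{i\uparrow}^{\phantom\dagger}+b_{i\uparrow}^{\dagger} a_{i\downarrow}^{\phantom\dagger}+b_{i\downarrow}^{\dagger} a_{i\uparrow}^{\phantom\dagger},\\
I^x_i&=i\left(a_{i\uparrow}^{\dagger} b_{i\downarrow}^{\phantom\dagger}+a_{i\downarrow}^{\dagger} b_{i\uparrow}^{\phantom\dagger}-b_{i\uparrow}^{\dagger} a_{i\downarrow}^{\phantom\dagger}-b_{i\downarrow}^{\dagger} a_{i\uparrow}^{\phantom\dagger}\right),
\end{align*}
i.e., both are Hermitian combinations of the inter-orbital spin-flip operators.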
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{all_dispersions_ch_0_physical_prod_EM.pdf}
\caption{\label{fig:goldstone}Evolution of the excitonic modes of
dynamical susceptibility in the $U^2(1)$ model
($t_{ab}$=0) across $\Delta$-driven transition ($T=1/40$). The columns correspond to
$-\operatorname{Im}\chi^{OO}_{\gamma\gamma}(\mathbf{k},\omega)$ with $O^\gamma=I^x,I^y,R^x,R^y$ (left to
right) along the high-symmetry lines in the 2D Brillouin zone. The
rows from top to bottom correspond to $\Delta=$3.9, 3.8, 3.65, 3.55,
3.45 with $\Delta_c\approx 3.75$ (Red line separates the normal state
from the PEC phase).}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\columnwidth]{sound.pdf}
\includegraphics[width=0.45\columnwidth]{higgs_gap.pdf}
\caption{\label{fig:gap} (a) The sound velocity $v_s$ of the GMs, the phase mode ($\chi^{RR}_{yy}$, blue symbols) and the spin rotation mode ($\chi^{II}_{xx}$, black symbols)
in the $U^2(1)$ model as a function of the crystal field $\Delta$. The dotted lines show the
corresponding strong-coupling results.
(b) The Higgs gap in the $U(1)$ model with $t_{ab}=$0.02 and 0.06 as a function of $\Delta$. The line
is a guide for eyes.}
\end{figure}
Model (\ref{eq:2bhm}) at half-filling has a rich phase diagram
exhibiting a metal-insulator transition~\cite{Werner2007} as well as
various types of LRO including antiferromagnetism,
spin-state order or
superconductivity~\cite{Kaneko2014,Kaneko2015,Kunes2014a,Hoshino2016}. For
the present parameters it undergoes a temperature- or crystal-field-controlled transition to polar exciton condensate (PEC)~\cite{Kunes2015}, as shown in Fig.~\ref{fig:1p_disp}b.
PEC is characterized by a finite excitonic field. Throughout the paper we choose the orientation $\langle I^y\rangle=\phi$, while $R^y$, $I^x$ and $R^x$ remain fluctuating. This phase is an instance of spin nematic state, which breaks spin-rotation symmetry without appearance of spin polarization.
The behavior of the collective modes depends on the continuous symmetry broken by the LRO~\cite{Watanabe2012}. Here, it is the $U(1)$ spin ($z$-axis) rotation. If $t_{ab}=0$, an additional $U(1)$ gauge symmetry due to conservation of $\sum_{i,\sigma}(n^a_{i,\sigma}-n^b_{i,\sigma})$ makes the total broken symmetry $U(1)\times U(1)$. We will refer to the general $t_{ab}\neq 0$ case as $U(1)$ model and the $t_{ab}=0$ case as $U^2(1)$ model.
{\it $\Delta$-driven transition.} While the system exhibits a sizable 1P gap throughout the studied $\Delta$-range (horizontal line in Fig.~\ref{fig:1p_disp}b), low-energy 2P-excitations show up
in the excitonic susceptibilities, Fig.~\ref{fig:goldstone}. In the normal phase ($\Delta>\Delta_c$), these can be viewed as spinful Frenkel excitons. The spin symmetry ensures the equivalence of $x$ and $y$
directions, while the gauge symmetry leads to equivalence of
the excitonic fields $R$ and $I$ in the $U^2(1)$ model.
Reducing $\Delta$ closes the excitation gap and the system undergoes transition to the PEC phase.
For the excitonic field, which freezes in an arbitrary direction both in the $xy$-plane and the $RI$-plane in the $U^2(1)$ case, we choose the orientation discussed above.
Linear gapless GMs~\cite{SM} corresponding to the spin rotation and phase fluctuation
($RI$-rotation) are observed in $\chi^{II}_{xx}$ and $\chi^{RR}_{yy}$, respectively.
The intensities of both GMs diverge as $1/|\mathbf{k}|$~\cite{SM}.
The corresponding sound velocities are shown in Fig.~\ref{fig:gap}a.
\begin{figure}[t]
\centering
\vspace{-0.4cm}
\includegraphics[width=0.99\columnwidth]{all_dispersions_ch_006_physical_prod_EM_higgs_zoom3.pdf}
\caption{\label{fig:higgs}
The same susceptibilities as in Fig.~\ref{fig:goldstone} ($T=1/40$)
in the vicinity of $\Gamma$-point for $U(1)$ model with cross-hopping
$t_{ab}=0.06$. The rows from top to bottom correspond to
$\Delta=$3.675, 3.60, 3.5, with $\Delta_c\approx 3.65$ (Red line separates the normal state
from the PEC phase).}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{spin.pdf}
\caption{\label{fig:spin} (a) Evolution of dynamical spin susceptibility
$-\operatorname{Im}\chi^{SS}_{zz}(\mathbf{k},\omega)$ across the $\Delta$-driven transition in the $U^2(1)$ model of Fig.~\ref{fig:goldstone} (Asterisk marks the normal phase). (b) The corresponding static susceptibilities $\operatorname{Re}\chi^{SS}_{zz}(\mathbf{k},0)$
throughout the Brillouin zone.}
\end{figure}
Finite cross-hopping $t_{ab}$ leads to a generic $U(1)$ model. The equivalence between the $R$ and $I$
fields is lost, see Fig.~\ref{fig:higgs}. The excitonic field freezes
in the $I$-direction~\cite{Kunes2015,Geffroy2018}, while the $xy$-orientation remains arbitrary.
For the small $t_{ab}$ studied here, the changes to the excitonic spectra~\cite{SM} are concentrated in the
low-energy region shown in Fig.~\ref{fig:higgs}. The spin-rotation GM, visible in
$\chi^{II}_{xx}$, remains gapless and linear. The 'phase' mode acquires a Higgs gap that vanishes at
the transition, Fig.~\ref{fig:gap}b, a behavior observed in the bilayer Heisenberg system TlCuCl$_3$~\cite{Merchant2014}.
Interestingly the character of this mode changes
as we proceed deeper into the ordered phase, Fig.~\ref{fig:higgs}.
Close to the phase boundary, its spectral weight is dominated by $\chi^{II}_{yy}$, i.e.,
amplitude fluctuation of the condensed $I^y$ field. Deeper in the ordered phase
the spectral weight is mostly in $\chi^{RR}_{yy}$, corresponding to phase fluctuation ($RI$-rotation)
as in the $U^2(1)$ model. We offer an interpretation in terms of the relative strength of the
symmetry breaking term ($t_{ab}$) in the Hamiltonian and the spontaneously generated Weiss field. The Weiss field, the off-diagonal $F_{ab}^{\uparrow\downarrow}(\omega)$ part of the hybridization function in the present method,
is in general a fluctuating (frequency dependent) object, which prohibits a direct comparison
to $t_{ab}$. Nevertheless, we can compare their dynamical effects. A Weiss field dominating over the
Hamiltonian term ($t_{ab}$) results in a gapped GM found deep in the ordered phase. A common example of such a situation is a gap in the spin-wave spectra of magnets due to magneto-crystalline anisotropy.
Dominance of the Hamiltonian term ($t_{ab}$) close to the phase boundary, where the Weiss field is small,
results in amplitude fluctuations. This is a generic situation in cases without an approximate symmetry.
This interpretation is supported by the observation that the extent of the amplitude-fluctuation
regime shrinks when $t_{ab}$ is reduced~\cite{SM}. Moreover, the strong-coupling calculations (see SM~\cite{SM}), which make an explicit comparison possible, lead to the same conclusions.
Next, we discuss the impact of exciton condensation on the spin susceptibility $\chi^{SS}_{zz}$, shown in Fig.~\ref{fig:spin}. In the normal phase, $\chi^{SS}_{zz}(\mathbf{k},\omega)$ exhibits no distinct dispersion and
essentially vanishes throughout the Brillouin zone, Fig.~\ref{fig:spin}b, as expected
in a band insulator. In the PEC phase, it develops a sharp spin-wave-like dispersion although
there are no ordered moments present. We point out a similarity of $\chi^{SS}_{zz}(\mathbf{k},\omega)$ to $\chi^{RR}_{xx}(\mathbf{k},\omega)$ that we discuss later. A distinct feature of $\chi^{SS}_{zz}(\mathbf{k},\omega)$
is the suppression of the spectral weight close to the $\Gamma$-point. This suppression can be
overcome by doping, which results in appearance of ferromagnetic exciton condensate~\cite{Kunes2014c}.
\begin{table}[t]
\caption{\label{tab:sw} The parameters of Eq.~\ref{eq:sw}.
The variational parameter $0\leq\alpha^2\leq 1$, corresponding to the LS density,
assumes 1 in the normal phase and $\tfrac{\mu+z(\mathcal{T}+\mathcal{W})+z\mathcal{V}}{2z(\mathcal{T}+\mathcal{W})+z\mathcal{V}}$ in the condensate.}
\centering
\begin{tabular}{c|l}
$\mu_x$ & $\alpha^2\mu+z\alpha^2(1-\alpha^2)(2\mathcal{T}+2\mathcal{W}+\mathcal{V})$ \\
$\mathcal{T}_x$ & $\alpha^2\mathcal{T}-(1-\alpha^2)\mathcal{J}$ \\
$\mathcal{W}_x$ & $\alpha^2\mathcal{W}+(1-\alpha^2)\mathcal{J}$ \\
\hline
$\mu_y$ & $z(\mathcal{T}+\mathcal{W})$; $\mu$ if $\alpha^2=1$ \\
$\mathcal{T}_y$ & $ \mathcal{T}-\alpha^2(1-\alpha^2)(2\mathcal{T}+2\mathcal{W}+\mathcal{V})$ \\
$\mathcal{W}_y$ & $\mathcal{W}-\alpha^2(1-\alpha^2)(2\mathcal{T}-2\mathcal{W}+\mathcal{V})$
\end{tabular}
\end{table}
{\it Strong-coupling limit.} To understand the numerical results, it is instructive to analyze the
strong-coupling limit of (\ref{eq:2bhm}), which
can be expressed in terms of two-flavor hard-core bosons~\cite{Balents2000b,Kunes2015,Nasu2016}
\begin{equation}
\label{CartHam}
\begin{split}
\mathcal{H}=&\mu\sum_{i}
n_i
-\!
\sum_{ij,\nu}
\Big[
\mathcal{T}
d_{i\nu}^{\dagger}d_{j\nu}^{\phantom\dagger}-
\frac{\mathcal{W}}{2}
(
d_{i\nu}^{\dagger}d_{j\nu}^{\dagger}+
d_{i\nu}^{\phantom\dagger}d_{j\nu}^{\phantom\dagger}
)
\Big]
\\
+&\frac{\mathcal{V}}{2}\sum_{ij}
n_in_j
+\frac{\mathcal{J}}{2}\sum_{ij}
S^z_iS^z_j.
\end{split}
\end{equation}
Bosonic operators $d_{i\nu}^\dagger$ ($\nu=x,y$), which create high-spin (HS) states out
of the low-spin (LS) state, are related to the excitonic fields by
$R_i^\nu(I_i^\nu)\rightarrow \sqrt{\pm 1}(d_{i\nu}^\dagger\pm d_{i\nu}^{\phantom\dagger})$.
The number operators $n_i=\sum_{\nu}d_{i\nu}^{\dagger}d_{i\nu}^{\phantom\dagger}$ measure
the HS concentration and ${S^z_i=-i(d_{ix}^{\dagger}
d_{iy}^{\phantom\dagger}-d_{iy}^{\dagger}d_{ix}^{\phantom\dagger})}$
is the $z$-component of the spin operator.
The relations of the coupling constants $\mu$, $\mathcal{T}$, $\mathcal{W}$, $\mathcal{V}$, and $\mathcal{J}$ to the parameters of (\ref{eq:2bhm}) can be
found in SM~\cite{SM} and Ref.~\onlinecite{Kunes2014a}.
Since $\mathcal{W}\sim t_{ab}^2$, the gauge symmetry of the
$U^2(1)$ model reflects conservation of $d$-charge for $\mathcal{W}=0$.
Generalized spin wave treatment~\cite{Sommer2001,Nasu2016}
of the excitations over the variational ground state
${|G\rangle=\prod_i(\alpha+i\sqrt{1-\alpha^2}d^\dagger_{iy})|0\rangle}$,
see SM~\cite{SM} for details, leads to a free boson model
\begin{equation}
\label{eq:sw}
\tilde{\mathcal{H}}_\nu=\mu_\nu\! \sum_{i}
\tilde{n}_{i\nu}
-\!
\sum_{ij}
\Big[
\mathcal{T}_\nu\tilde{d}_{i\nu}^{\dagger}\tilde{d}_{j\nu}^{\phantom\dagger}-
\frac{\mathcal{W}_\nu}{2}
(
\tilde{d}_{i\nu}^{\dagger}\tilde{d}_{j\nu}^{\dagger}+
H.c.
)
\Big].
\end{equation}
Note that the parameters of this effective model in the ordered phase, given in Table~\ref{tab:sw}, depend on the flavor $\nu=x,y$.
The elementary excitations of (\ref{eq:sw}) have the dispersion
$\epsilon_\nu(\mathbf{k})=\sqrt{(\mu_\nu-2\mathcal{T}_\nu\delta(\mathbf{k}))^2-(2\mathcal{W}_\nu\delta(\mathbf{k}))^2}$
with $\delta(\mathbf{k})=\cos k_x+\cos k_y$.
In the $U^2(1)$ case with $\mathcal{W}=0$ both $x$ and $y$
modes are gapless with sound velocities
$v_\nu\equiv\nabla_\mathbf{k}\epsilon_\nu(\mathbf{k}=0)
=\sqrt{8|\mathcal{W}_\nu|(\mathcal{T}_\nu+|\mathcal{W}_\nu|)}$
vanishing at the transition.
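The quoted velocity follows from a small-$\mathbf{k}$ expansion: assuming the gap closes at the $\Gamma$-point, i.e., $\mu_\nu-4\mathcal{T}_\nu=4|\mathcal{W}_\nu|$, and using $\delta(\mathbf{k})\approx 2-|\mathbf{k}|^2/2$, one finds
$$\epsilon^2_\nu(\mathbf{k})\approx\left(4|\mathcal{W}_\nu|+\mathcal{T}_\nu|\mathbf{k}|^2\right)^2-\left(4|\mathcal{W}_\nu|-|\mathcal{W}_\nu||\mathbf{k}|^2\right)^2=8|\mathcal{W}_\nu|(\mathcal{T}_\nu+|\mathcal{W}_\nu|)\,|\mathbf{k}|^2+O(|\mathbf{k}|^4).$$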
Finite $\mathcal{W}$ in the $U(1)$ case leads to opening of a gap for $y$-excitations. The ratio
of the spectral weights of $I-$ and $R-$ propagators corresponding to $\chi^{II}_{yy}$ and
$\chi^{RR}_{yy}$ at $\Gamma$ point is given by~\cite{SM}
\begin{equation*}
\frac{\operatorname{Im}\chi^{II}_{yy}(0,\nu_\text{gap})}{\operatorname{Im}\chi^{RR}_{yy}(0,\nu_\text{gap})}\approx \frac{4\mathcal{W}}{(2\mathcal{T}+\mathcal{V})\phi^2},
\end{equation*}
which supports the interpretation that a dominant Hamiltonian term ($\mathcal{W}$) favors the amplitude fluctuations, while a dominant Weiss field ($\sim\mathcal{T}\phi$) favors the gapped Goldstone fluctuations.
Finally, we address the behavior of the spin susceptibility $\chi^{SS}_{zz}$ in Fig.~\ref{fig:spin}.
We observe that replacing the operator $d_{iy}$
in the strong-coupling expression for $S^z_i$ by its finite PEC value
yields $S^z_i\sim(d_{ix}^{\dagger} +d_{ix}^{\phantom\dagger})\phi/2$.
In the ordered phase, the spin susceptibility $\chi^{SS}_{zz}$ therefore follows $\chi^{RR}_{xx}$, while they are decoupled in the normal phase.
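In more detail, this is a heuristic mean-field replacement: the conventions for $|G\rangle$ above give $\langle d_{iy}\rangle=i\phi/2$, so that
$$S^z_i=-i\left(d_{ix}^{\dagger} d_{iy}^{\phantom\dagger}-d_{iy}^{\dagger}d_{ix}^{\phantom\dagger}\right)\;\longrightarrow\;-i\left(\tfrac{i\phi}{2}\,d_{ix}^{\dagger}+\tfrac{i\phi}{2}\,d_{ix}^{\phantom\dagger}\right)=\tfrac{\phi}{2}\left(d_{ix}^{\dagger}+d_{ix}^{\phantom\dagger}\right),$$
implying $\chi^{SS}_{zz}\approx(\phi^2/4)\,\chi^{RR}_{xx}$ within this approximation.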
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{temp.pdf}
\caption{\label{fig:temp} The same susceptibilities of $U^2(1)$ model as in Fig.~\ref{fig:goldstone}
calculated across the thermally driven transition for $\Delta=$3.55.
The rows from top to bottom correspond to temperatures
$T=1 / 11$, $1 / 16$, $1 / 30$, $1/40$ with $T_c\approx 1 / 13$.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{spin_temp.pdf}
\caption{\label{fig:spin_temp} (a) Evolution of dynamical spin susceptibility
$-\operatorname{Im}\chi^{SS}_{zz}(\mathbf{k},\omega)$ across the thermally driven phase
transition as in Fig.~\ref{fig:temp} for temperatures
$T=1/11$, $1/20$, $1/30$ (Asterisk marks the normal phase).
(b) The corresponding static
susceptibilities $\operatorname{Re}\chi^{SS}_{zz}(\mathbf{k},0)$
throughout the Brillouin zone.}
\end{figure}
{\it Thermally driven transition.}
Since the transition observed in Pr$_{0.5}$Ca$_{0.5}$CoO$_3$~\cite{tsubouchi2002} is driven by temperature,
we investigate the behavior of the $U^2(1)$ model along the vertical trajectory in
Fig.~\ref{fig:1p_disp}b. We observe that the 1P gap in the normal state is closed, Fig.~\ref{fig:1p_disp}d.
The excitonic susceptibilities possess a peak at finite frequency, whose tail extends to zero frequency, Fig.~\ref{fig:temp}.
Cooling is accompanied by a downward shift of the damped dispersive features, i.e.,
the phase transition can be viewed as a mode softening, an observation also made experimentally on TlCuCl$_3$~\cite{Merchant2014}.
The normal state spin susceptibility $\chi^{SS}_{zz}$ in Fig.~\ref{fig:spin_temp} does not
vanish as in Fig.~\ref{fig:spin}.
The presence of thermally excited HS states gives rise to $\mathbf{k}$-featureless susceptibility with spectral
weight concentrated at low energies. Nevertheless, $\chi^{SS}_{zz}(\mathbf{k},\omega)$
changes qualitatively at the transition in this case as well. The
dispersion becomes sharper and its bandwidth increases significantly. As a result, upon cooling below $T_c$, the low-energy region is depleted of spectral weight throughout the Brillouin zone,
except in the vicinity of the $\Gamma$-point.
Recently, this behavior was reported in inelastic neutron scattering
in the putative excitonic material
(Pr$_{1-y}$Y$_y$)$_{1-x}$Ca$_x$CoO$_3$~\cite{Moyoshi2018}.
In conclusion, we used DMFT to study the 2P response
across exciton condensation transition in two-orbital Hubbard model.
We observed the formation of GMs as predicted by symmetry considerations~\cite{Watanabe2012}.
Explicit breaking of continuous symmetry led to appearance
of a gapped mode~\cite{Pekker2015}, characterized by vanishing
of the gap at the phase transitions similar to observations in TlCuCl$_3$~\cite{Merchant2014}.
We have observed that the character of this mode changes from
Higgs-like amplitude fluctuations close to the phase boundary, to Goldstone-like
phase fluctuations deep in the ordered phase. We suggest that this behavior shall
be common to systems with weakly broken symmetry and provide an interpretation in terms of
the relative strengths of the spontaneously generated Weiss field and the explicit
symmetry-breaking term in the Hamiltonian.
Experimental observation of excitonic modes is in principle possible~\cite{Wang2018,Kim2014} using
resonant inelastic x-ray scattering, however, practical limitations in energy resolution
and $\mathbf{k}$-space accessibility~\cite{Wang2018} exist at the moment.
We have shown that the measurement of dynamical spin susceptibility provides an alternative,
that can be used to identify spinful excitonic condensates with current experimental technology.
\begin{acknowledgements}
We thank J. Chaloupka, G. Sangiovanni, G. Khaliullin and B. Hartl for
discussions, H. Shinaoka for help with the ALPS code, K. Steiner and O. Janson for testing
the maximum entropy code, and K. Held and
A. Kauch for critical reading of the manuscript.
This work was supported by the ERC Grant Agreements No. 646807 under
EU Horizon 2020 (J.Ku., A.Har. and D.G.) and No. 306447 under EU
Seventh Framework Programme
(FP7/2007-2013)/ERC (J.Ka.), by the Czech Ministry of Education, Youth
and Sports, project
``IT4Innovations National Supercomputing Center – LM2015070'' (D.G.),
by FWF through SFB ViCoM F41 (J.Ka.), by DFG through SFB1170
“Tocotronics” (A.Hau.) and by the Austrian Federal Ministry of Science, Research and Economy
through the Vienna Scientific Cluster (VSC) Research Center (P.G.). We
gratefully acknowledge the Gauss Centre for Supercomputing
e.V. (www.gauss-centre.eu) for providing computing time on the GCS
Supercomputer SuperMUC at Leibniz Supercomputing Centre (www.lrz.de),
the programme ``Projects of Large Research, Development, and
Innovations Infrastructures'' (CESNET LM2015042) for the access to
computing and storage facilities of the Czech National Grid
Infrastructure MetaCentrum, and the Vienna Scientific Cluster (VSC)
for access to its computing facilities.
\end{acknowledgements}
\section{Introduction}
\label{intro}
The study of dissipative systems in quantum theory is of strong interest and relevance, both for fundamental reasons \cite{Zurek} and for its practical applications \cite{Amir1,NMR}. The explicit time dependence of the Lagrangian and Hamiltonian operators introduces a major difficulty to this study, since the canonical commutation relations are not preserved by time evolution. Then different approaches have been used in order to apply the canonical quantization scheme to dissipative systems (see, for instance, \cite{Flavio,Barone}).
One of these approaches is to focus on an isolated system composed of the original dissipative system plus a reservoir. One starts from the beginning with a Hamiltonian which describes the system, the bath and the system-bath interaction. Subsequently, one eliminates the bath variables, which give rise to both damping and fluctuations, thus obtaining the reduced density matrix \cite{FeynmanVernon,Amir1,Amir2,Amir3,Barone}.
Another way to handle the problem of quantum dissipative systems is to double the phase-space dimensions, so as to deal with an effective isolated system composed of the original system plus its time-reversed copy (indirect representation) \cite{Banerjee,Feshbach}. The new degrees of freedom thus introduced may be represented by a single equivalent (collective) degree of freedom for the bath, which absorbs the energy dissipated by the system.
The study of the quantum dynamics of an accelerated charge is well suited to the indirect representation, since the charge loses the energy, the linear momentum, and the angular momentum carried by the radiation field \cite{Heitler}. The effect of these losses on the motion of the charge is known as radiation damping \cite{Heitler}.
The reaction of a classical point charge to its own radiation was first discussed by Lorentz and Abraham more than one hundred years ago, and never stopped being a source of controversy and fascination \cite{Becker,Lorentz}. Nowadays, it is probably fair to say that the most disputable aspects of the Abraham-Lorentz theory, such as self-acceleration and preacceleration, have been adequately understood. Self-acceleration refers to classical solutions where the charge is under acceleration even in the absence of an external field. Preacceleration means that the charge begins to accelerate before the force is actually applied.
The process of radiation damping is important in many areas of electron accelerator operation \cite{Walker}, as in recent experiments with intense-laser relativistic-electron scattering at laser frequencies and field strengths where radiation reaction forces begin to become significant \cite{Hartemann,Bula}.
The purpose of this letter is to present a Lagrangian formalism for the study of the quantum dynamics of an accelerated charge, yielding an {\it effective} isolated system, where the canonical commutation relations are preserved by time evolution. In Section \ref{sec:1} we briefly review the equation of motion of the radiation damping and aspects of the solutions to the equation of motion. In Section \ref{sec:2} we present a Lagrangian description of the radiation damping by the indirect representation, doubling the phase-space dimensions. Section \ref{sec:3} contains the concluding remarks.
\section{The equation of motion}
\label{sec:1}
The derivation of an exact expression for the radiation damping force has long been an outstanding problem of classical electrodynamics \cite{Flavio,Heitler,Becker,Lorentz,Hartemann,Rohrlich1,Landau,Rohrlich2}. In the classic derivation given by Lorentz and Abraham \cite{Becker,Lorentz}, which relies on energy-momentum conservation, the self-elec\-tro\-mag\-ne\-tic energy and momentum of a charged rigid sphere are derived for an accelerated motion. In this first-order ap\-pro\-xi\-ma\-ti\-on, this de\-ri\-va\-ti\-on yields the well-known Abraham-Lorentz force, which depends on the second time de\-ri\-va\-ti\-ve of the velocity of the particle of mass $m$ and charge $e$:
\begin{equation}\label{01}
m\left( \frac{d \vec v}{dt} -\tau_0 \frac{d^2 \vec v}{dt^2} \right) =\vec F,
\end{equation}
where $\tau_0=2e^2/3mc^3$, $c$ is the velocity of light, $\vec v=d\vec r/dt$ denotes the velocity of the charge, and $\vec F$ is the external force.
A fully relativistic formulation of the equation of motion was only achieved in 1938 by Dirac in his classic paper \cite{Dirac}, where the Lorentz-Dirac equation reads
\begin{equation}\label{02}
ma^{\mu}=\frac{e}{c}F^{\mu\nu}u_{\nu} +\Gamma^{\mu},
\end{equation}
with
\begin{equation}\label{03}
\Gamma^\mu \equiv \frac{2e^2}{3c^3}\left(\dot a^\mu-a^\lambda a_\lambda \frac{u^\mu}{c^2}\right),
\end{equation}
where the charge world line $z_\mu (\tau)$ is parametrized by its proper time $\tau$, and $u_\mu = dz/d\tau$, $a_\mu =du_\mu /d\tau$, and $\dot a_\mu =da_\mu /d\tau$. Greek indices range from 0 to 3, and the diagonal metric of Minkowski space is $(-1,1,1,1).$ The term $(e/c)F^{\mu\nu}u_\nu$ in Eq.(\ref{02}) is the Lorentz force due to the external field $F^{\mu\nu}$. In addition, $\Gamma^\mu$ represents the effect of radiation \cite{Rohrlich1}.
The equation (\ref{01}) can be criticized on the grounds that it is second order in time, rather than first, and therefore runs counter to the well-known requirements for a dynamical equation of motion. This difficulty manifests itself immediately in runaway (self-accelerated) solutions. If the external force is zero, with the help of the integrating factor $e^{t/\tau_0}$, it is obvious that Eq.(\ref{01}) has two possible solutions,
\begin{equation}\label{04}
\dot{\vec v}(t)=\left\{ \begin{array}{l}
0 \\
\dot{\vec v}(0)e^{t/\tau_0}.
\end{array}
\right.
\end{equation}
Only the first solution is reasonable.
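Indeed, with $\vec F=0$, Eq.(\ref{01}) reduces to $\dot{\vec v}=\tau_0\ddot{\vec v}$, a first-order equation for the acceleration $\dot{\vec v}$ whose general solution is
$$\dot{\vec v}(t)=\dot{\vec v}(0)\,e^{t/\tau_0};$$
the trivial solution corresponds to $\dot{\vec v}(0)=0$, the runaway one to $\dot{\vec v}(0)\neq 0$.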
However, there is a particular choice of $\dot{\vec v}(0)$ for which the second solution in Eq.(\ref{04}) disappears, namely Dirac's asymptotic condition on the vanishing of the acceleration for an asymptotically free particle \cite{Dirac}. In this case, the solution of Eq.(\ref{01}), with the help of the integrating factor $e^{t/\tau_0}$, for a rather general time-dependent force $\vec F(t)$ reads
\begin{equation}\label{05}
m\frac{d\vec v}{dt}=\int_0^\infty e^{-s}\vec F(t+\tau_0 s)ds.
\end{equation}
In fact, if $\vec F(t)$ vanishes identically for large $t$, then Eq.(\ref{05}) shows that the acceleration also vanishes for large $t$, and therefore solution (\ref{05}) is not self-accelerating. But, unfortunately, and although mathematically correct, this approach leads to preacceleration. The violation of causality implied by preacceleration is particularly disappointing, since the Lorentz-Dirac equation (\ref{02}) can be derived by using only retarded fields \cite{Villaroel}. The existence of preacceleration is not a consequence of the presence of the time derivative of the acceleration in (\ref{01}), but of the method through which the solution has been obtained \cite{Villaroel1}.
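A direct check makes both statements explicit (assuming $\vec F$ smooth and bounded). Differentiating the right-hand side of Eq.(\ref{05}) and integrating by parts,
$$\frac{d}{dt}\int_0^\infty e^{-s}\vec F(t+\tau_0 s)\,ds=\frac{1}{\tau_0}\int_0^\infty e^{-s}\frac{d}{ds}\vec F(t+\tau_0 s)\,ds=\frac{1}{\tau_0}\left[-\vec F(t)+\int_0^\infty e^{-s}\vec F(t+\tau_0 s)\,ds\right],$$
so that $m(\dot{\vec v}-\tau_0\ddot{\vec v})=\vec F(t)$, i.e., Eq.(\ref{05}) indeed solves Eq.(\ref{01}). At the same time, the integral samples $\vec F$ only at times later than $t$, which is precisely the preacceleration mentioned above.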
\section{Indirect Lagrangian representation of the radiation damping}
\label{sec:2}
The inverse problem of variational calculus is to construct the Lagrangian from the equations of motion. Different Lagrangian representations are obtained from the direct and indirect approaches \cite{Santilli}. In the direct representation as many variables are introduced as there are in the equations of motion. The equation of motion corresponding to a coordinate $q$ is related to the variational derivative of the action with respect to the same coordinate. Whereas, in the indirect representation, the equation of motion is supplemented by its time-reversed image. The equation of motion with respect to the original variable then corres\-ponds to the variational derivative of the action with res\-pect to the image coordinate and vice versa \cite{Feshbach,Bateman}.
In the indirect approach we consider equation (\ref{01}) along with its time-reversed copy
\begin{equation}
\label{06}
m\left( \frac{d \vec {\bar v}}{dt} +\tau_0 \frac{d^2 \vec {\bar v}}{dt^2} \right) =\vec{\bar F},
\end{equation}
where $\vec {\bar v}=d\vec{\bar r}/dt$ is the velocity of the image system, which is in fact the {\it time-reversed} copy $(\tau_0 \rightarrow -\tau_0 )$ of (\ref{01}).
Thus the variation of the action $S$ for equations of motion (\ref{01}) and (\ref{06}), in terms of the coordinates, must then be
\begin{eqnarray}\label{07}
\delta S =\int_{t_1}^{t_2}dt && \left[ m\left(\frac{d}{dt}\dot{\vec r}-\tau_0 \stackrel{\ldots}{\vec r}+\frac{\partial V}{\partial\vec{\bar r}}\right). \delta\vec {\bar r}\right. \nonumber\\
&+& \left. m\left(\frac{d}{dt}\dot{\vec {\bar r}}+\tau_0 \stackrel{\ldots}{\vec {\bar r}}+\frac{\partial V}{\partial\vec r}\right). \delta\vec r \right],
\end{eqnarray}
where $V\equiv V(\vec r, \vec {\bar r})$ is the potential energy with $\frac{\partial V}{\partial\vec r}=-\vec {\bar F}$ and $\frac{\partial V}{\partial\vec{\bar r}}=-\vec F$.
From (\ref{07}), equation (\ref{01}) is obtained by varying $S$ with $\vec {\bar r}$ whereas (\ref{06}) follows from varying $S$ with $\vec r$. Since the equations of motion for $\vec r$ and $\vec {\bar r}$ follow as Euler-Lagrange equations of motion for $\vec {\bar r}$ and $\vec r$ respectively, the method is called the indirect method. By discarding the surface terms, we get from (\ref{07}):
\begin{equation}
\label{08}
\delta S =-\delta \int_{t_1}^{t_2}dt \left[ m{\dot{\vec r}}. {\dot{\vec{\bar r}}} +\frac{\gamma}{2}\left( {\dot{\vec r}}.{\ddot{\vec{\bar r}}}-{\ddot{\vec r}}.{\dot{\vec {\bar r}}}\right)-V(\vec r, \vec {\bar r})\right],
\end{equation}
where $\gamma=m\tau_0=2e^2/3c^3$. It is then possible to identify
\begin{equation}
\label{09}
L= m{\dot{\vec r}}. {\dot{\vec{\bar r}}} +\frac{\gamma}{2}\left( {\dot{\vec r}}.{\ddot{\vec{\bar r}}}-{\ddot{\vec r}}.{\dot{\vec {\bar r}}}\right)-V(\vec r, \vec {\bar r})
\end{equation}
as the appropriate Lagrangian in the indirect representation. So, the system made of the radiation damping and of its time-reversed image globally behaves as a closed system. The Lagrangian (\ref{09}) can be written in a suggestive form by substitution of the hyperbolic coordinates $\vec r_1$ and $\vec r_2$ \cite{Blasone} defined by
\begin{equation}\label{10} \vec r = {1\over{\sqrt{2}}}\left(\vec r_{(1)} +\vec r_{(2)} \right); \;\; \vec {\bar r}
={1\over{\sqrt{2}}}\left(\vec r_{(1)} -\vec r_{(2)} \right).
\end{equation}
We find that the Lagrangian $L$ becomes
\begin{equation}\label{11} L={m\over 2}g_{ij}\; \dot{\vec r}_{(i)} .\dot{\vec r}_{(j)} -{\gamma
\over 2}\epsilon_{ij}\; \dot {\vec r}_{(i)} .\ddot{\vec r}_{(j)}-V[\vec r_{(1)}, \vec r_{(2)}] \end{equation}
where the pseudo-Euclidean metric $g_{ij}$ is given by
$g_{11}=-g_{22}=1$, $g_{12}=0$, and $\epsilon_{12}=-\epsilon_{21}
=1$. This Lagrangian is similar to the one discussed by Lukierski
et al \cite{Lukierski} (a special nonrelativistic limit of the relativistic model of a particle with torsion investigated in \cite{MSP}), but in this case we have a pseudo-Euclidean
metric. The equations of motion corresponding to the Lagrangian
(\ref{11}) are
\begin{equation}\label{12} m\ddot {\vec r}_{(1)} - \gamma \stackrel{\ldots}{\vec r}_{(2)} =-\frac{\partial V}{\partial{\vec r}_{(2)}},\;\;
m\ddot {\vec r}_{(2)} -\gamma \stackrel{\ldots}{\vec r}_{(1)} =-\frac{\partial V}{\partial\vec r_{(1)}}. \end{equation}
On the hyperbolic plane, equations (\ref{12}) show that the dissipative term actually acts as a coupling between the systems $\vec r_{(1)}$ and $\vec r_{(2)}$.
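For readers who want to verify this coupling directly, the following is a minimal Python (\texttt{sympy}) sketch, written in one spatial dimension and restricted to the free case $V=0$ (the variable names are ours). It derives the Euler-Lagrange equations of the component form of (\ref{11}) and recovers, up to an overall sign, the homogeneous part of (\ref{12}):
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, gamma = sp.symbols('m gamma', positive=True)
x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)

# Component form of (11) with g = diag(1, -1), eps_{12} = -eps_{21} = 1, V = 0.
L = (m/2)*(x1.diff(t)**2 - x2.diff(t)**2) \
    - (gamma/2)*(x1.diff(t)*x2.diff(t, 2) - x2.diff(t)*x1.diff(t, 2))

# Higher-order Euler-Lagrange equations; expect (up to sign)
#   m x1'' - gamma x2''' = 0  and  m x2'' - gamma x1''' = 0,
# i.e. the free version of (12): gamma couples the two systems.
for eq in euler_equations(L, [x1, x2], t):
    print(sp.simplify(eq))
\end{verbatim}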
Recently, one of us studied the canonical quantization of the radiation damping \cite{Albert1}. A Hamiltonian analysis was carried out in both commutative and noncommutative scenarios, which leads to the quantization of the system; the dynamical group structure associated with the system is that of $SU(1,1)$. In \cite{Albert2}, a supersymmetrized version of the model for the radiation damping, Eq.(\ref{11}), was developed, and its symmetries and the corresponding conserved Noether charges were discussed. It was shown that this supersymmetric version provides a supersymmetric generalization of the Galilei algebra of the model \cite{Albert1}, where the supersymmetric action can be split into dynamically independent external and internal sectors.
\section{Concluding remarks}
\label{sec:3}
We have shown that, in a pseudo-Euclidean metric,
the system made of a charge interacting with its own radiation and its time-reversed image, introduced by doubling the degrees of freedom as required by the
canonical formalism, actually behaves as a closed system described
by the Lagrangian (\ref{11}).
This formalism represents a new scenario in the study of this
very interesting system. The Lagrangian (\ref{11}) describes, in the hyperbolic plane, the dissipative
system of a charge interacting with its own radiation field, where the 2-labeled system represents the
reservoir or heat bath coupled to the 1-labeled system. Note that this Lagrangian is similar to the one discussed in \cite{Lukierski} (a
special nonrelativistic limit of the relativistic model of a particle with torsion investigated in \cite{MSP}),
but in this case we have a pseudo-Euclidean metric, and the radiation-damping constant $\gamma$ is
the coupling constant of a Chern-Simons-like term.
This formalism is important because it allows us to study the canonical quantization of the model (see Ref.\cite{Albert1}), as well as the symmetries of the model and their supersymmetric version (see Ref.\cite{Albert2}). In future work, we will study the introduction of gauge interactions into the model.
\section{Acknowledgement}
This work is supported by CNPq, the Brazilian
research agency. In particular, ACRM would like to acknowledge
CNPq.
Directed graphs are natural combinatorial objects which are used to model
systems in many areas including
biology (for example~\cite{protein,LC}), the social sciences (for
example~\cite{RPW,JR})
and computer science (for example~\cite{HC,MS}).
In this paper we consider the problem of sampling directed graphs
with a given degree sequence.
For graph-theoretic terminology not introduced here, see~\cite{BJG}.
A directed graph (digraph) $G=(V,A)$ consists of a vertex set $V=V(G)$
and an arc set
\[ A=A(G)\subseteq \{ (v,w)\in V\times V\mid v\neq w\}.\]
Note that digraphs as defined here are simple, which means that they
contain no loops and no multiple arcs.
The arc $(v,w)$ is drawn as an arrow from $v$ to $w$. We refer
to $v$ as the \emph{tail} and $w$ as the \emph{head} of the arc.
For a vertex $v$, the \emph{out-degree} $d^+(v)$ of $v$ is the number of
arcs with tail $v$. Similarly, the \emph{in-degree} $d^-(v)$ of $v$
is the number of arcs with head $v$. For a positive integer $d$,
if $d^+(v)=d^-(v)=d$
for all vertices $v\in V$ then we say that the digraph $G$
is \emph{$d$-regular} (or \emph{$d$-in, $d$-out}).
Let $d = d(n)\geq 1$ be a sequence of positive integers,
and let $\Omega_{n,d}$ be the set of all simple $d$-regular
digraphs on the vertex set $[n]=\{ 1,\ldots, n\}$.
The \emph{configuration model} of Bollob{\' a}s~\cite{bollobas}
(adapted for directed graphs)
gives an expected polynomial-time uniform sampling algorithm
for $\Omega_{n,d}$ when $d=O(\sqrt{\log n})$.
There is a one-to-one correspondence between $\Omega_{n,d}$
and the set of all $d$-regular bipartite graphs on
$\{ 1,2,\ldots, n\}\cup\{n+1,n+2,\ldots, 2n\}$ with no edges
in common with the perfect matching $\{ \{ j,n+j\} : j=1,\ldots, n\}$.
The probability that a $d$-regular bipartite graph on the given vertex
bipartition has no edges in common with this perfect matching is
asymptotic to $e^{-d}$ whenever $d=o(n^{1/3})$,
by ~\cite[Theorem 4.6]{McKay84}. This probability is
polynomially small when $d=O(\log n)$. McKay and Wormald's
algorithm~\cite{MW90} for sampling $d$-regular
graphs runs in expected polynomial time for $d = O(n^{1/3})$,
and hence gives rise to an expected polynomial-time algorithm
for uniformly sampling elements of $\Omega_{n,d}$ when $d=O(\log n)$.
The set of all 1-regular digraphs is in one-to-one correspondence
with the set of all derangements of $[n]$,
and here the configuration model corresponds to repeatedly
sampling uniform
permutations of $n$ until one is obtained without fixed points.
The proportion of permutations which are derangements tends to $1/e$,
so this algorithm has linear expected running time.
Other algorithms for uniformly sampling derangements in
linear expected time but an improved
constant have been proposed, for example~\cite{MPP}.
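For concreteness, a minimal Python sketch of this rejection strategy (the names are ours; the returned permutation $p$ encodes the 1-regular digraph with arcs $(i,p(i))$):
\begin{verbatim}
import random

def random_derangement(n):
    # Rejection sampling: draw uniform permutations of range(n) until one
    # has no fixed point.  The expected number of trials tends to e.
    while True:
        p = list(range(n))
        random.shuffle(p)
        if all(p[i] != i for i in range(n)):
            return p   # arcs of the 1-regular digraph: (i, p[i])

print(random_derangement(10))
\end{verbatim}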
We know of no expected polynomial-time
uniform sampling algorithm for regular digraphs other than those
mentioned above.
Hence we turn our attention to the problem of obtaining approximately uniform
samples from $\Omega_{n,d}$ using a Markov chain.
(Some Markov chain definitions are given in Section~\ref{s:intro-MC};
for others, see~\cite{OH}.)
There is a very natural Markov chain for digraphs which has arisen in
many contexts, which we will call the \emph{switch chain}.
A transition of the switch chain is performed by randomly choosing
two distinct arcs and exchanging their heads, if the two arcs are
non-incident and if the resulting digraph does not
contain any multiple arcs.
See Figure~\ref{f:chain} for a precise description of
the transition procedure of the chain.
A transition of the switch chain is called
\emph{switching along an alternating rectangle}
by Rao, Jana and Bandyopadhyay~\cite{RJB}; we will
simply call it a \emph{switch}. Similar transformations
were used by Ryser~\cite{ryser} to study 0-1 matrices.
Besag and Clifford~\cite{BC} defined a related chain for sampling
0-1 matrices with given row and column sums, while
Diaconis and Sturmfels~\cite{DS} used a similar chain to sample
contingency tables.
Rao, Jana and Bandyopadhyay~\cite{RJB} showed that the switch chain
is not irreducible for general degree sequences. (However, they
mention that degree sequences for which the switch chain is
not irreducible are ``rather rare''.)
For completeness, we prove in Lemma~\ref{connected} that
the switch chain is irreducible for regular digraphs.
This also follows from the existence of the multicommodity flow
defined in Sections~\ref{s:flow},~\ref{s:2circuit}.
The switch chain is aperiodic for $d\geq 1$,
as we prove in Lemma~\ref{aperiodic}.
In their empirical study of methods for generating directed graphs
with given degree sequences, Milo et al.~\cite{MKINA} wrote
that the switch chain ``works well but, as with many Markov chain methods,
suffers because
in general we have no measure of how long we need to wait for it to
mix properly''.
Our main result, Theorem~\ref{main},
partially answers this point by providing
the first rigorous
polynomial bound on the mixing time of the switch chain, in the
special case of regular digraphs.
\begin{theorem}
Let $\Omega_{n,d}$ be the set of all $d$-regular digraphs on
the vertex set $\{ 1,\ldots, n\}$, where $d=d(n)$ is any integer-valued
function which satisfies $1\leq d(n)\leq n-1$ for all $n\geq 4$.
Let $\tau(\epsilon)$ be the mixing time of the Markov chain
$\mathcal{M}$ with state space $\Omega_{n,d}$ and transition procedure
given by Figure~\ref{f:chain}, for $d\geq 1$.
Then
\[ \tau(\epsilon) \leq 50\, d^{25}\, n^{9}\,
\left( dn\log(dn) + \log(\epsilon^{-1})\right).
\]
\label{main}
\end{theorem}
Our proof of this result has two parts. To avoid using a lazy chain
(which stays where it is at each step, with probability at least
$\nfrac{1}{2}$)
we prove and apply a new result which can be used to bound the
smallest eigenvalue of an ergodic reversible Markov chain.
This new bound is based on
Diaconis and Stroock~\cite[Proposition 2]{DS91} and inspired by
Sinclair~\cite[Theorem 5]{sinclair}. To bound the second-largest
eigenvalue of the chain we adapt the multicommodity
flow analysis given in~\cite{CDG} for the undirected case.
While some parts of the proof are very similar to~\cite{CDG},
significant extra technical difficulties arise
in the directed setting.
We expect that the bound on the mixing time given in Theorem~\ref{main}
is far from tight, but proving a substantially tighter bound
seems beyond the reach of known proof techniques.
The \emph{flip chain} is a Markov chain which performs a restricted set
of switches, designed to ensure that the underlying digraph never becomes
disconnected.
The flip chain for undirected graphs was described in~\cite{MS1},
and proposed as a self-randomizing mechanism for peer-to-peer networks.
The mixing time of the flip chain for regular undirected graphs
was analysed in~\cite{CDH,FGMS}, building on the multicommodity flow
analysis of the switch chain~\cite{CDG}.
We expect that Theorem~\ref{main} can be used to show that the flip chain
for digraphs is rapidly mixing for regular degree sequences.
This result would be of interest since
many protocols for communications networks (such as peer-to-peer networks)
use directed communications (see for example~\cite{HC,MS}).
The structure of the rest of the paper is as follows.
The necessary Markov chain definitions are given in the next subsection,
together with the new result (Lemma~\ref{smallest-eigval})
for bounding the smallest eigenvalue of an ergodic, reversible Markov chain.
In Section~\ref{s:markov} we define the switch chain $\mathcal{M}$ and
prove that it is ergodic on $\Omega_{n,d}$ for $d\geq 1$.
A bound on the smallest eigenvalue of the chain is given in
Lemma~\ref{our-smallest-eigval}, and a bound on the second-largest
eigenvalue is stated in
Proposition~\ref{our-second-largest-eigval}.
To conclude Section~\ref{s:markov}, we show how Theorem~\ref{main}
follows from Proposition~\ref{our-second-largest-eigval}, and
give an overview of the main steps of the multicommodity flow
argument which is used to prove
Proposition~\ref{our-second-largest-eigval}. This argument is
presented in Sections~\ref{s:flow}--\ref{s:analysis}.
Finally, a worked example is given in Section~\ref{a:example}
which illustrates several features of the multicommodity flow
construction.
Before we begin our analysis, we mention some recent related work.
In many practical situations, almost uniformly random samples are required
in order to estimate the average value of some observable of
a system. Kim et al.~\cite{KDBT} describe an alternative approach
to this problem in the case of sampling directed graphs with given
in-degrees and out-degrees. Let $\boldsymbol{d}^+$, $\boldsymbol{d}^-$ be two vectors of
nonnegative integers with a common sum. Denote by
$\Omega_{n,\boldsymbol{d}^+,\boldsymbol{d}^-}$ the set of all digraphs on the vertex set
$[n]$ with in-degree sequence $\boldsymbol{d}^+$ and
out-degree sequence $\boldsymbol{d}^-$ (and assume that this set is nonempty).
Kim et al.\ describe
an algorithm which runs in time $O(n^3)$ and produces a
random element of $\Omega_{n,\boldsymbol{d}^+,\boldsymbol{d}^-}$, drawn from
a specific non-uniform distribution.
The samples output by the algorithm are statistically independent,
and the algorithm can calculate the weight of each digraph that
it produces. They then explain how combining their algorithm
with biased sampling allows the average value of any function
on $\Omega_{n,\boldsymbol{d}^+,\boldsymbol{d}^-}$ to be approximated. However, they
do not analyse the running time of the biased sampling procedure,
which could be very inefficient when the output distribution is far
from uniform. (Indeed, in~\cite[Section 4.1]{KDBT} they assume
that the number of samples in the biased sampling is some positive
integer multiple of $|\Omega_{n,\boldsymbol{d}^+,\boldsymbol{d}^-}|$, which is usually
exponentially large.)
We complete this section with a final remark.
Milo et al.~\cite{MKINA} wrote of the
switch chain for directed graphs that ``Theoretical bounds on the mixing
time exist only for specific near-regular degree sequences'', citing
Kannan, Tetali and Vempala~\cite{KTV}.
However, this is not correct, as we now explain.
Two Markov chains are considered in~\cite{KTV}.
The first is an analogue of the switch chain for undirected graphs.
A bound on the mixing time is given in~\cite{KTV} for near-regular bipartite
undirected graphs, but no conclusion can be drawn from this for directed
graphs. The second chain analysed in~\cite{KTV} is a Markov chain for
tournaments with a given score sequence.
(A \emph{tournament} is a digraph obtained by giving an orientation to
each edge in an (undirected) complete graph. Its \emph{score sequence}
is the sequence of out-degrees.) Each transition of the Markov chain
reverses the arcs of a directed 3-cycle, so it is quite different from the
switch chain. Furthermore, tournaments are very special kinds of digraphs.
We know of no rigorous
polynomial bound on the mixing time of the switch chain for digraphs,
other than Theorem~\ref{main}.
\medskip
\noindent \emph{Acknowledgements.}\
I am grateful to Brendan McKay for his suggestion that
it seemed unnecessary to make the switch chain lazy, which led
to the approach taken here.
I am also grateful to the anonymous referee for their helpful comments,
which improved both the content and the structure of this paper.
\subsection{Markov chain definitions and a new bound on the smallest eigenvalue}
\label{s:intro-MC}
Let $\mathcal{M}$ be an ergodic, time-reversible
Markov chain on the finite state space $\Omega$
with transition matrix $P$ and stationary distribution $\pi$.
The \emph{total variation distance}
between two probability distributions $\sigma,\,\sigma'$ on $\Omega$ is
given by
\[ d_{\mathrm{TV}}(\sigma,\sigma') = \tfrac{1}{2} \sum_{x\in \Omega}
|\sigma(x) - \sigma'(x)|.\]
The \emph{mixing time} $\tau(\varepsilon)$ is defined by
\[ \tau(\varepsilon) = \mathrm{max}_{x\in \Omega}\,
\mathrm{min}\left\{T\geq 0 \mid d_{\mathrm{TV}}(P^t_x,\pi)
\leq \varepsilon \mbox{ for all } t\geq T\right\},\]
where $P_x^t$ is the distribution of the state $X_t$ of the Markov
chain after $t$ steps from the initial state $X_0=x$.
Let $\pi^\ast = \min\{ \pi(x) \mid x\in\Omega\}$ be the minimum
stationary probability.
The transition matrix $P$ has real eigenvalues
\[ 1 = \lambda_0 > \lambda_1 \geq \lambda_2 \geq \cdots
\geq \lambda_{N-1} \geq -1,\]
where $N=|\Omega|$, and the Markov chain is aperiodic if
and only if $\lambda_{N-1} > -1$. Let
\begin{equation}
\label{eigenvalues}
\lambda_*= \max\{ \lambda_1, \, |\lambda_{N-1}|\}
\end{equation}
be the second-largest eigenvalue in absolute value.
The following result follows from Sinclair~\cite[Proposition 1]{sinclair},
which is based on a result of
Diaconis and Stroock~\cite[Proposition 3]{DS91}.
\begin{lemma} \emph{(\cite[Proposition 1]{sinclair})}
The mixing time of the Markov chain $\mathcal{M}$ satisfies
\[ \tau(\varepsilon) \leq (1-\lambda_*)^{-1}
\left( \log(1/\pi^\ast) + \log(\varepsilon^{-1})\right).\]
\label{mix-eigvals}
\end{lemma}
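As a numerical illustration (not needed for any proof), the following Python sketch, assuming \texttt{numpy}, checks Lemma~\ref{mix-eigvals} on a small chain: the simple random walk on a 5-cycle, which is reversible and aperiodic (the cycle is odd) with uniform stationary distribution.
\begin{verbatim}
import numpy as np

N = 5
P = np.zeros((N, N))
for x in range(N):
    P[x, (x + 1) % N] = P[x, (x - 1) % N] = 0.5

eigs = np.sort(np.linalg.eigvalsh(P))     # P is symmetric here
lam_star = max(eigs[-2], abs(eigs[0]))    # max(lambda_1, |lambda_{N-1}|)
eps = 1e-3
bound = (np.log(N) + np.log(1 / eps)) / (1 - lam_star)

# TV distance to pi is non-increasing in t, and all starting states are
# equivalent by symmetry, so this loop finds tau(eps) exactly for this chain.
dist, t = np.eye(N)[0], 0
while 0.5 * np.abs(dist - 1 / N).sum() > eps:
    dist, t = dist @ P, t + 1
print(lam_star, bound, t)   # the observed tau(eps) respects the bound
\end{verbatim}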
It has become common practice when applying this bound to first
make the Markov chain lazy (that is, replace the transition matrix $P$
by $(I+P)/2$). This ensures that all eigenvalues of the chain are nonnegative,
so that $\lambda_* = \lambda_1$ and only $(1-\lambda_1)^{-1}$ needs to be bounded.
However, we prefer not to introduce unnecessary laziness and seek
an alternative approach.
Diaconis and Stroock proved a result~\cite[Proposition 2]{DS91} which
provides an upper bound on $(1 + \lambda_{N-1})^{-1}$, where
$\lambda_{N-1}$ is the smallest eigenvalue of a Markov chain
as in (\ref{eigenvalues}). In Lemma~\ref{smallest-eigval} below,
we give a new method for bounding $\lambda_{N-1}$.
The new bound is obtained by modifying~\cite[Proposition 2]{DS91}
in the same way that Sinclair modified~\cite[Proposition 1]{DS91}
to produce~\cite[Theorem 5]{sinclair}.
The modification results in a bound which is more local in character
and seems easier to apply than~\cite[Proposition 2]{DS91}.
(See also the discussion in~\cite[Section 2]{sinclair}.)
To state the new bound we need some notation.
Write $\mathcal{G}$ for the underlying graph of the Markov chain
$\mathcal{M}$, where $\mathcal{G} = (\Omega, \Gamma)$
and each edge $e\in \Gamma$ corresponds to a transition of
$\mathcal{M}$. That is, $e=\{ x,y\}$ is an edge of $\mathcal{G}$
if and only if $P(x,y)>0$.
Define $Q(e) = Q(x,y) = \pi(x)P(x,y)$ for the edge $e=\{x,y\}$.
(If $P(x,x)>0$ then the edge $\{x,x\}$ is called a \emph{self-loop}
at $x$.)
For each $x\in\Omega$, fix a particular cycle
from $x$ to $x$ in $\mathcal{G}$ with an odd number
of edges, and denote it by $\sigma_x$.
(Such a cycle exists for each $x$, since the Markov chain
is aperiodic.) Note that $\sigma_x$ may be a 1-cycle, which is
a walk along a self-loop edge at $x$. Write $|\sigma_x|$ to denote
the length of the cycle $\sigma_x$, which is a positive odd number.
Let $\Sigma = \{ \sigma_x : x\in \Omega\}$ be the set of these odd cycles,
and define the parameter
\[ \eta = \eta(\Sigma) = \operatorname{max}_{e\in\Gamma} \, \frac{1}{Q(e)}
\, \sum_{x\in\Omega,\,\, e\in\sigma_x} |\sigma_x| \pi(x).
\]
\begin{lemma}
Suppose that $\mathcal{M}$ is a reversible, ergodic Markov chain with
state space $\Omega$. Let $N=|\Omega|$ and let the
eigenvalues of $\mathcal{M}$ be given by \emph{(\ref{eigenvalues})}.
Then
\[ (1 + \lambda_{N-1})^{-1} \leq \frac{\eta}{2}.\]
\label{smallest-eigval}
\end{lemma}
\begin{proof}
The proof is very similar to the proof of~\cite[Proposition 2]{DS91},
but using a different application of the Cauchy-Schwarz inequality,
as in the proof of~\cite[Theorem 5]{sinclair}. Assign an arbitrary
orientation to each cycle $\sigma_x$ and let $e = (e^-,e^+)$ under
this orientation. Also define $\ell(e)$ to be the distance from $x$ to $e^-$
along the oriented cycle $\sigma_x$. Then for any function
$\psi:\Omega\to\mathbb{R}$ we have
\[ \psi(x) = \dfrac{1}{2} \sum_{e\in\sigma_x} (-1)^{\ell(e)}\,
(\psi(e^+) + \psi(e^-))\]
for all $x\in\Omega$.
Given $\psi,\varphi:\Omega\to\mathbb{R}$, let
\[
\langle \psi, \varphi\rangle_\pi = \sum_{x\in\Omega} \psi(x)\varphi(x)\pi(x),\quad
\mathrm{\mathbf{E}}_\pi(\psi) = \sum_{x\in\Omega} \psi(x)\pi(x).
\]
Then for any nonzero function $\psi:\Omega\to\mathbb{R}$ we have
\begin{align*}
\mathrm{\mathbf{E}}_\pi(\psi^2) = \sum_{x\in\Omega} \psi(x)^2\pi(x)
&= \sum_{x\in\Omega} \pi(x)\,
\left(\dfrac{1}{2}\, \sum_{e\in \sigma_x}
(-1)^{\ell(e)} (\psi(e^+) + \psi(e^-)) \right)^2\\
&\leq \dfrac{1}{4} \sum_x \pi(x) |\sigma_x| \sum_{e\in\sigma_x}
(\psi(e^+) + \psi(e^-))^2,
\end{align*}
using the Cauchy-Schwarz inequality. Exchanging the order of summation
(and now orienting each edge $e\in\Gamma$ arbitrarily) gives
\begin{align*}
\mathrm{\mathbf{E}}_\pi(\psi^2) &\leq \dfrac{1}{4} \sum_{e\in\Gamma} (\psi(e^+) + \psi(e^-))^2\,
\sum_{x\in\Omega,\, e\in\sigma_x}
|\sigma_x|\pi(x)\\
&\leq \frac{\eta}{4}\, \sum_{e\in\Gamma} (\psi(e^+) + \psi(e^-))^2\, Q(e)\\
&= \frac{\eta}{2}\,
\left(\mathrm{\mathbf{E}}_\pi(\psi^2) + \langle \psi, P\psi\rangle_\pi\right).
\end{align*}
Divide through by $\mathrm{\mathbf{E}}_\pi(\psi^2)$ to obtain
\[ 1\leq \frac{\eta}{2}\, \left(1 + \frac{\langle \psi, P\psi\rangle_\pi}
{\mathrm{\mathbf{E}}_\pi(\psi^2)}\right).\]
Now set $\psi$ equal to any eigenfunction $\psi_{N-1}$
corresponding to $\lambda_{N-1}$. After rearranging this completes the proof,
since
\[ \langle \psi_{N-1}, P\psi_{N-1}\rangle_\pi =
\lambda_{N-1}\, \langle \psi_{N-1}, \psi_{N-1}\rangle_\pi =
\lambda_{N-1}\, \mathrm{\mathbf{E}}_\pi(\psi_{N-1}^2).\]
\end{proof}
This leads to an analogue of~\cite[Corollary 6]{sinclair}.
We also prove a bound for a special case which often arises.
\begin{corollary}
\label{congestion-corollary}
Under the same conditions as Lemma~\emph{\ref{smallest-eigval}}\ we have
\[ (1 + \lambda_{N-1})^{-1} \leq \frac{\eta'(\Sigma) \, \ell(\Sigma) }{2}
\]
where
\[ \eta'(\Sigma) = \operatorname{max}_{e\in \Gamma} \frac{1}{Q(e)}
\sum_{x\in\Omega,\, e\in\sigma_x} \pi(x), \qquad
\ell(\Sigma) = \operatorname{max}_{x \in \Omega} |\sigma_x|.
\]
In particular, if $\ell(\Sigma) = 1$ then
\[ (1+\lambda_{N-1})^{-1} \leq \dfrac{1}{2}\operatorname{max}_{x\in\Omega}
P(x,x)^{-1}
\]
(where $P$ denotes the transition matrix of the Markov chain).
\end{corollary}
\begin{proof}
The first statement follows immediately from Lemma~\ref{smallest-eigval}.
Now suppose that $\ell(\Sigma)=1$. Then each
$\sigma_x\in\Sigma$ is a self-loop.
If $e=(y,y)\in\Gamma$ is a self-loop at $y$ then $e$ is contained
in exactly one element of $\Sigma$, namely $\sigma_y$.
In this case
\[ \frac{1}{Q(e)}\, \sum_{x,\,e\in \sigma_x}\, \pi(x) =
\frac{\pi(y)}{Q(y,y)} = P(y,y)^{-1}. \]
If $e\in\Gamma$ is not a self-loop
then $e$ is not contained in any element of $\Sigma$, and in
this case
\[ \frac{1}{Q(e)}\, \sum_{x,\, e\in \sigma_x} \, \pi(x) = 0.\]
Therefore $\eta'(\Sigma) = \operatorname{max}_{x\in\Omega} P(x,x)^{-1}$
and the second statement follows from the first.
\end{proof}
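The self-loop case of Corollary~\ref{congestion-corollary} is in fact tight for some chains; a quick Python check on the two-state chain with $P(x,x)=p$, where $\lambda_{N-1}=2p-1$ and both sides of the bound equal $1/(2p)$:
\begin{verbatim}
import numpy as np

for p in (0.1, 0.25, 0.4):
    P = np.array([[p, 1 - p], [1 - p, p]])
    lam_min = np.linalg.eigvalsh(P)[0]      # smallest eigenvalue, 2p - 1
    lhs = 1.0 / (1.0 + lam_min)
    rhs = 0.5 * max(1.0 / P[x, x] for x in range(2))
    print(p, lhs, rhs)                      # equal: the bound is tight here
\end{verbatim}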
We will use the \emph{multicommodity flow}
method of Sinclair~\cite{sinclair} to bound the second
eigenvalue $\lambda_1$.
A \emph{flow} in $\mathcal{G}$ is a function
$f:\mathcal{P}\to [0,\infty)$ which satisfies
\begin{equation}
\label{flow-condition}
\sum_{p\in\mathcal{P}_{xy}} f(p) = \pi(x)\pi(y)
\quad \mbox{ for all } x,y\in \Omega, \, x\neq y,
\end{equation}
where $\mathcal{P}_{xy}$ is the set of all simple directed paths from
$x$ to $y$ in $\mathcal{G}$ and $\mathcal{P}= \cup_{x\neq y}\mathcal{P}_{xy}$.
Extend $f$ to a function on oriented edges by setting
\[ f(e) = \sum_{p\ni e} f(p),\]
so that $f(e)$ is the total flow routed through $e$.
Let $\ell(f)$
be the length of the longest path with $f(p)>0$, and let
\[ \rho(e) = f(e)/Q(e)\]
be the \emph{load} of the edge $e$. The
\emph{maximum load} of the flow is
\[ \rho(f) = \max_{e} \rho(e).\]
Sinclair~\cite[Corollary $6'$]{sinclair} proves the following.
\begin{lemma} \emph{(\cite[Corollary $6'$]{sinclair})}
For any reversible ergodic Markov chain $\mathcal{M}$ and any flow $f$,
the second eigenvalue $\lambda_1$ satisfies
\[ (1-\lambda_1)^{-1} \leq \rho(f)\ell(f).\]
\label{second-eigval}
\end{lemma}
\section{The switch chain}\label{s:markov}
Let $d:\{ 4,5,\ldots\}\rightarrow \mathbb{N}$ be any function such that
$1\leq d(n)\leq n-1$ for all $n\geq 4$, and denote by
$\Omega_{n,d(n)}$
the set of all $d(n)$-regular simple
digraphs with vertex set $[n]=\{ 1,2,\ldots, n\}$.
We usually hide the dependence of $d$ on $n$ and just write
$d$ rather than $d(n)$; similarly we write $\Omega_{n,d}$ for
$\Omega_{n,d(n)}$.
We will study the Markov chain $\mathcal{M}$ described in
Figure~\ref{f:chain}, which we call the \emph{switch chain}.
From a given state, an unordered pair of
distinct arcs is chosen uniformly at random. The
two chosen arcs then exchange heads, unless the chosen arcs are incident or
exchanging their heads would create a repeated arc.
Note that two arcs are non-incident if and only if the set of endvertices
of the two arcs contains exactly four vertices.
\begin{figure}[ht]
\begin{center}
\fbox{%
\begin{minipage}{17cm}
\begin{tabbing}
A\=\kill
\> From $G\in\Omega_{n,d}$ do\\
\> AA\= \kill
\> \> choose an unordered pair of
distinct arcs $\{ (i,j)$, $(k,\ell)\}$, u.a.r., \hspace*{2mm}\\
\> \> if $|\{ i,j,k,\ell\}| = 4$ and
$\{ (i,\ell), \, (k,j)\} \cap A(G) = \emptyset$ then\\
\>\> AA\= \kill
\>\>\> delete the arcs $(i,j)$, $(k,\ell)$
and add the arcs $(i,\ell)$, $(k,j)$, \hspace*{2mm}\\
\> \> else\\
\> \>\> do nothing;\\
\> \> end if;\\
\> end.
\end{tabbing}
\end{minipage}}
\end{center}
\caption{The Markov chain on $\Omega_{n,d}$}
\label{f:chain}
\end{figure}
We will write $[ijk\ell]$ as shorthand notation for the switch
that replaces the arcs $(i,j)$, $(k,\ell)$ with the arcs $(i,\ell)$,
$(k,j)$, as in Figure~\ref{f:chain}.
The transition matrix $P$ of the Markov chain
satisfies $P(X,Y) = P(Y,X) = 1/\binom{dn}{2}$ if $X$ and $Y$ differ by
just a switch, with all other non-diagonal entries equal to zero.
Therefore $P$ is symmetric, so the stationary distribution
of the Markov chain is uniform over $\Omega_{n,d}$.
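The transition procedure of Figure~\ref{f:chain} is easy to implement. The following Python sketch (a representation we choose for illustration: the arc set stored as a set of ordered pairs) performs one transition, and the final assertion checks that in- and out-degrees are preserved:
\begin{verbatim}
import random

def switch_step(arcs):
    # One transition of the switch chain of Figure 1.
    (i, j), (k, l) = random.sample(sorted(arcs), 2)  # two distinct arcs, u.a.r.
    if len({i, j, k, l}) == 4 and (i, l) not in arcs and (k, j) not in arcs:
        arcs = (arcs - {(i, j), (k, l)}) | {(i, l), (k, j)}
    return arcs

# Start from the circulant 2-regular digraph with arcs (v, v+1), (v, v+2) mod n.
n, d = 8, 2
Z = {(v, (v + s) % n) for v in range(n) for s in (1, 2)}
for _ in range(1000):
    Z = switch_step(Z)
assert all(sum(1 for a in Z if a[0] == v) == d == sum(1 for a in Z if a[1] == v)
           for v in range(n))
\end{verbatim}
Since the transition matrix is symmetric, iterating \texttt{switch\_step} for long runs samples approximately from the uniform distribution on $\Omega_{n,d}$.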
It is not difficult to see that the switch chain is aperiodic, but
for completeness we give a brief proof.
\begin{lemma}
The switch chain on $\Omega_{n,d}$ is aperiodic
for $n\geq 4$ and $1\leq d\leq n-1$.
\label{aperiodic}
\end{lemma}
\begin{proof}
Fix $G\in \Omega_{n,d}$ and choose an arc $(\alpha,\beta)\in A(G)$.
Since $d\geq 1$, there exists an arc $(\gamma,\alpha)\in A(G)$.
These two arcs are distinct but incident, and if they
are the arcs chosen in the transition procedure then the switch
will be rejected and the chain will remain at $G$. Hence
$P(G,G)\geq 1/\binom{dn}{2} > 0$. So there is a self-loop at every
state of $\Omega_{n,d}$, which proves that the chain is aperiodic.
\end{proof}
Rao, Jana and Bandyopadhyay~\cite{RJB} showed that the switch chain is
not always irreducible
on the set of all digraphs with a given degree sequence.
Characterisations of degree sequences for which the chain is
irreducible were given in~\cite{BM,lamar}.
We will now prove that when $n\geq 4$ and $d\geq 1$
the set $\Omega_{n,d}$ is connected under switches;
that is, that the switch chain is irreducible on $\Omega_{n,d}$.
(This was already known when $d=1$,
see Diaconis, Graham and Holmes~\cite[Remark 2]{DGH}.)
We will use results from LaMar~\cite{lamar}.
For a set $U$ of vertices in a digraph $G$,
define the sets $\mathcal{W}^{(i,j)} = \mathcal{W}^{(i,j)}(U,G)$
for ${(i,j)}\in \mathbb{Z}_2^2$, as follows:
\begin{align*}
\mathcal{W}^{(0,0)} &= \{ x\in V(G)- U : (x, u)\not\in A(G),\,
(u,x)\not\in A(G) \text{ for all } u\in U\},\\
\mathcal{W}^{(0,1)} &= \{ x\in V(G)- U : (x, u)\not\in A(G),\,
(u,x)\in A(G) \text{ for all } u\in U\},\\
\mathcal{W}^{(1,0)} &= \{ x\in V(G)- U : (x, u)\in A(G),\,
(u,x)\not\in A(G) \text{ for all } u\in U\},\\
\mathcal{W}^{(1,1)} &= \{ x\in V(G)- U : (x, u)\in A(G),\,
(u,x)\in A(G) \text{ for all } u\in U\}.
\end{align*}
(In~\cite{lamar} these sets are called $\mathcal{C}^0$, $\mathcal{C}^+$,
$\mathcal{C}^-$, $\mathcal{C}^{\pm}$, respectively.)
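A small Python sketch (our own naming) that computes these classes for a given digraph and vertex set $U$; vertices fitting none of the four patterns are collected separately, and reappear below as ``useful neighbours'':
\begin{verbatim}
def W_sets(V, arcs, U):
    # First index: status of (x, u); second index: status of (u, x).
    W = {(0, 0): set(), (0, 1): set(), (1, 0): set(), (1, 1): set()}
    leftover = set()
    for x in V - U:
        out = {int((x, u) in arcs) for u in U}  # statuses of x -> u over u in U
        inn = {int((u, x) in arcs) for u in U}  # statuses of u -> x over u in U
        if len(out) == 1 and len(inn) == 1:     # uniform over all u in U
            W[(out.pop(), inn.pop())].add(x)
        else:
            leftover.add(x)                     # a 'useful neighbour'
    return W, leftover

# Hypothetical example: U induces the directed 3-cycle 0 -> 1 -> 2 -> 0.
V, U = set(range(6)), {0, 1, 2}
arcs = {(0, 1), (1, 2), (2, 0), (3, 0), (0, 4), (4, 0)}
print(W_sets(V, arcs, U))
\end{verbatim}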
\begin{lemma}
\label{connected}
The space $\Omega_{n,d}$ is connected under switches when $n\geq 4$
and $1\leq d\leq n-1$.
\end{lemma}
\begin{proof}
For a contradiction, assume that $\Omega_{n,d}$ is not connected
under switches. Then by LaMar~\cite[Theorems 3.3 and 3.4]{lamar},
for any digraph $G\in\Omega_{n,d}$ there is a
set of vertices $\{v_0,v_1,v_2\}$ such that the induced digraph
$G[\{v_0,v_1,v_2\}]$ is a directed 3-cycle and, writing
$\mathcal{W}^{(i,j)} = \mathcal{W}^{(i,j)}(\{ v_0,v_1,v_2\},G)$ for all
${(i,j)}\in\mathbb{Z}_2^2$,
\begin{itemize}
\item[(i)] all vertices in $V(G)$ other than $\{ v_0, v_1, v_2\}$ belong to
${\bigcup_{{(i,j)}\in\mathbb{Z}_2^2} \mathcal{W}^{(i,j)}}$,
\item[(ii)] no arcs from $\mathcal{W}^{(0,0)}\cup \mathcal{W}^{(0,1)}$ to
$\mathcal{W}^{(0,0)}\cup\mathcal{W}^{(1,0)}$ are present,
\item[(iii)] all (non-loop) arcs from $\mathcal{W}^{(1,0)}\cup \mathcal{W}^{(1,1)}$
to $\mathcal{W}^{(0,1)}\cup \mathcal{W}^{(1,1)}$ are present.
\end{itemize}
Let
$n^{(i,j)} = |\mathcal{W}^{(i,j)}|$ for ${(i,j)}\in \mathbb{Z}_2^2$.
Considering the
in-degree and out-degree of $v_0$ gives, using (i),
\[ d = n^{(1,0)} + n^{(1,1)} + 1 = n^{(0,1)} + n^{(1,1)} + 1\]
(and in particular, $n^{(0,1)} = n^{(1,0)}$). However, by (ii),
the in-degree of any element of $\mathcal{W}^{(1,0)}$ is at most
$n^{(1,0)} + n^{(1,1)} - 1 = d-2$,
the out-degree of any element of $\mathcal{W}^{(0,1)}$ is at most
$n^{(0,1)} + n^{(1,1)} - 1 = d-2$
and the out-degree of any element of $\mathcal{W}^{(0,0)}$ is at most
$n^{(0,1)} + n^{(1,1)} = d-1$.
This contradicts the assumption that $G\in\Omega_{n,d}$ unless
\[ \mathcal{W}^{(0,0)} \cup\mathcal{W}^{(0,1)}\cup\mathcal{W}^{(1,0)} =\emptyset.\]
But then $\mathcal{W}^{(1,1)} = V - \{ v_0,v_1,v_2\}$, and this set is
nonempty as $n\geq 4$. By (iii), the in-degree of any element of
$\mathcal{W}^{(1,1)}$ is $n-1$. Since the in-degree of $v_0$ is $n-2$,
we obtain a contradiction.
\end{proof}
Suppose that $G\in\Omega_{n,d}$ contains a directed 3-cycle on the
vertices $v_0, v_1, v_2$. Consider the sets
$\mathcal{W}^{(i,j)}=\mathcal{W}^{(i,j)}(\{ v_0, v_1, v_2\},G)$
where ${(i,j)}\in\mathbb{Z}_2^2$.
If $x\in V(G) - \{ v_0,v_1,v_2\}$
does not belong to
$\bigcup_{{(i,j)}\in\mathbb{Z}_2^2}\mathcal{W}^{(i,j)}$
then we say that $x$ is a \emph{useful neighbour}
for the given 3-cycle.
(Note that $x$ must be an in-neighbour or an out-neighbour of at least one
vertex on the 3-cycle, since $x\not\in\mathcal{W}^{(0,0)}$.) Similarly,
$(x,y)$ is called a \emph{useful arc} for the given 3-cycle
if $x\neq y$ and one of the following conditions holds:
\begin{itemize}
\item[(U1)] $(x,y)\in A(G)$, with
$x\in\mathcal{W}^{(0,0)}\cup\mathcal{W}^{(0,1)}$ and
$y\in\mathcal{W}^{(0,0)}\cup\mathcal{W}^{(1,0)}$;
\item[(U2)]
$(x,y)\not\in A(G)$, with
$x\in\mathcal{W}^{(1,0)}\cup\mathcal{W}^{(1,1)}$ and
$y\in\mathcal{W}^{(0,1)}\cup\mathcal{W}^{(1,1)}$.
\end{itemize}
The following result will be used later.
\begin{lemma}
Suppose that $G\in\Omega_{n,d}$ contains a set of three vertices
$\{ v_0,v_1,v_2\}$ such that the induced digraph
$G[\{v_0,v_1,v_2\}]$ is a directed 3-cycle.
Then there exists a useful neighbour or a useful arc for
this 3-cycle.
\label{useful}
\end{lemma}
\begin{proof}
Suppose that there is no useful neighbour for the 3-cycle.
Then condition (i) from the proof of Lemma~\ref{connected} holds.
For a contradiction, assume that there is no useful arc $(x,y)$.
Then all (non-loop) arcs from $\mathcal{W}^{(0,0)}\cup\mathcal{W}^{(0,1)}$ to
$\mathcal{W}^{(0,0)}\cup\mathcal{W}^{(1,0)}$ are absent in $G$, and all
(non-loop) arcs from $\mathcal{W}^{(1,0)}\cup\mathcal{W}^{(1,1)}$ to $\mathcal{W}^{(0,1)}\cup \mathcal{W}^{(1,1)}$
are present in $G$. That is,
conditions (ii) and (iii) from the proof of Lemma~\ref{connected} also hold.
Arguing as in the proof of Lemma~\ref{connected} leads to a contradiction.
\end{proof}
Now we prove a bound on the smallest eigenvalue of the switch chain.
\begin{lemma}
Suppose that $n\geq 4$ and $1\leq d\leq n-1$, and let $N = |\Omega_{n,d}|$.
The smallest eigenvalue $\lambda_{N-1}$ of the switch chain satisfies
\[ (1 + \lambda_{N-1})^{-1} \leq \dfrac{1}{4}\, d^2 n^2.\]
\label{our-smallest-eigval}
\end{lemma}
\begin{proof}
By Lemma~\ref{aperiodic}, there is a self-loop $\sigma_x$
in $\Gamma$ at every $x\in\Omega_{n,d}$.
Let $\Sigma=\{\sigma_x\}$ be the set of these 1-cycles.
Since $\ell(\Sigma) = 1$, the result follows
from the second statement of Corollary~\ref{congestion-corollary}
since
\[ \operatorname{max}_{x\in\Omega_{n,d}} P(x,x)^{-1} \leq \binom{dn}{2}.\]
\end{proof}
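The self-loop probability $P(Z,Z)$ appearing in this proof can be computed exactly for small instances by enumerating all unordered pairs of distinct arcs; a Python sketch, using a circulant test state of our choosing:
\begin{verbatim}
from itertools import combinations

n, d = 8, 2
Z = {(v, (v + s) % n) for v in range(n) for s in range(1, d + 1)}  # circulant
pairs = list(combinations(sorted(Z), 2))
rejected = sum(1 for (i, j), (k, l) in pairs
               if len({i, j, k, l}) < 4 or (i, l) in Z or (k, j) in Z)
# P(Z, Z) = rejected / binom(dn, 2) >= 1 / binom(dn, 2), as used above.
print(rejected, len(pairs))
\end{verbatim}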
We also need a bound on the second-largest eigenvalue of the
switch chain.
\begin{proposition}
Suppose that $n\geq 4$ and $1\leq d\leq n-1$, and let
$\lambda_1$ be the second-largest eigenvalue of the
switch chain on $\Omega_{n,d}$. Then
\[
(1 - \lambda_1)^{-1} \leq 50 d^{25} n^9.
\]
\label{our-second-largest-eigval}
\end{proposition}
The proof of Proposition~\ref{our-second-largest-eigval}
is lengthy and quite technical. We give an outline of the
proof below, and full details in Sections~\ref{s:flow}--\ref{s:analysis}.
But first, we show how Theorem~\ref{main} can be proved
from Proposition~\ref{our-second-largest-eigval}.
\begin{proof}[Proof of Theorem~\ref{main}]\
If the smallest eigenvalue $\lambda_{N-1}$ is nonnegative
then $\lambda_* = \lambda_1$, and by
Proposition~\ref{our-second-largest-eigval} we have
\begin{equation}
(1-\lambda_*)^{-1} \leq 50 d^{25} n^9.
\label{our-next-largest}
\end{equation}
Suppose now that $\lambda_{N-1}$ is negative.
Then $1-|\lambda_{N-1}| = 1+\lambda_{N-1}$
and it follows from Lemma~\ref{our-smallest-eigval} and
Proposition~\ref{our-second-largest-eigval} that
(\ref{our-next-largest}) also holds in this case.
Finally, we note that
\begin{equation}
\label{log-omega}
\log |\Omega_{n,d}| \leq dn \log(dn).
\end{equation}
(This is well-known but for completeness we sketch a proof. Take a
bipartite graph on $n+n$ vertices and
assign $d$ ``half-edges'' to each vertex on each side.
Arbitrarily match each half-edge on the left to a half-edge
on the right. There are at most $(dn)^{dn}$ ways to perform this
matching. Finally, orient each edge from left to right and
identify the $j$'th vertex on each side, giving a digraph on $n$
vertices which may have loops or multiple arcs. As each element of
$\Omega_{n,d}$ can be formed from at least one matching in this
way, we obtain an upper bound.)
Hence, since $\pi$ is uniform,
\[ \log 1/\pi^\ast = \log|\Omega_{n,d}| \leq dn\log (dn).\]
Substituting (\ref{our-next-largest}) and (\ref{log-omega}) into
Lemma~\ref{mix-eigvals} gives the stated bound on the mixing time,
completing the proof.
\end{proof}
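For very small $n$ and $d$ the bound (\ref{log-omega}) can also be confirmed by brute-force enumeration; an illustrative (exponential-time) Python sketch:
\begin{verbatim}
from itertools import combinations, product
from math import log

def count_regular_digraphs(n, d):
    # Choose d out-neighbours for every vertex, then keep the choices
    # in which every in-degree is also d.
    row_choices = list(combinations(range(n - 1), d))
    total = 0
    for rows in product(row_choices, repeat=n):
        indeg = [0] * n
        for v, choice in enumerate(rows):
            for c in choice:
                indeg[c if c < v else c + 1] += 1   # skip the diagonal
        total += (indeg == [d] * n)
    return total

for n, d in ((4, 1), (4, 2), (5, 1), (5, 2)):
    size = count_regular_digraphs(n, d)
    print(n, d, size, log(size) <= d * n * log(d * n))
\end{verbatim}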
Hence it remains to establish Proposition~\ref{our-second-largest-eigval}.
We use a multicommodity flow argument to prove this result.
Before embarking on the proof, we outline the major steps in the
argument. (We note that our proof follows the same general outline
as most canonical path or multicommodity flow arguments, where encodings
are often used. In particular, our proof builds upon
the argument from~\cite{CDG}.)
\begin{itemize}
\item Given distinct digraphs $G, G'\in\Omega_{n,d}$, we define a
finite set $\Psi(G,G')$ of objects, called pairings.
For each $\psi\in\Psi(G,G')$
we will define a canonical path $\gamma_\psi(G,G')$ from $G$ to $G'$,
indexed by $\psi$. Then the flow $f$ is defined on
\[ \bigcup_{(G,G')} \{ \gamma_\psi(G,G')\mid \psi\in\Psi(G,G')\}\]
by
\[ f(\gamma_\psi(G,G')) = \frac{\pi(G)\,\pi(G')}{|\Psi(G,G')|}
= \left(|\Omega_{n,d}|^2\, |\Psi(G,G')|\right)^{-1},
\]
and is set to zero for all other paths.
Note that $f$ satisfies (\ref{flow-condition}).
\item
To define $\gamma_\psi(G,G')$ we work with the symmetric difference
$H = G\triangle G'$ of $G$ and $G'$ (with arcs of $G-G'$ coloured
blue and arcs of $G'-G$ coloured red). In Sections~\ref{s:circuits}
and~\ref{s:circuit}
we show how to decompose $H$ into a sequence of arc-disjoint subdigraphs called
1-circuits and 2-circuits, in a canonical way. The canonical path
$\gamma_\psi(G,G')$
is formed by processing each of these 1-circuits and 2-circuits in
the given order.
\item We can process 1-circuits, and certain 2-circuits, in a way which
is very similar to the method used in~\cite{CDG} for undirected graphs.
The 2-circuits which can be handled in this way are called \emph{normal}.
Section~\ref{s:1circuit} explains how to process a 1-circuit and
Section~\ref{s:normal} describes how to process a normal 2-circuit.
\item The main difficulties in the proof arise from the need to handle
2-circuits which are not normal. We further categorise these as
\emph{eccentric 2-circuits} or \emph{triangles}. Sections~\ref{s:eccentric}
and~\ref{s:triangle} describe how to process these 2-circuits.
\end{itemize}
By this stage, the multicommodity flow is completely defined.
Next we must analyse the flow in order to bound the maximum load of the
flow, and hence the second-largest eigenvalue (using
Lemma~\ref{second-eigval}).
\begin{itemize}
\item Let $(Z,Z')$ be a transition along one of the canonical paths
$\gamma_\psi(G,G')$, and suppose that this transition is performed
while processing the 1-circuit or 2-circuit $S$. A set of
\emph{interesting arcs} for $Z$ with respect to $(G,G',\psi)$
is defined.
These are arcs which have been disturbed during the
processing of $S$ and not yet returned to their original state,
and they will play a key role in our analysis. Lemma~\ref{zoo}
describes the structure of the digraph formed by the interesting arcs
(see also Figure~\ref{f:zoo}).
\item Next we identify $G$, $G'$ and $Z$ with their adjacency matrices
and define a matrix $L$ by $L+Z = G+G'$. Then $L$ is an $n\times n$
matrix with entries in $\{ -1,0,1,2\}$. We say that $L$ is
an \emph{encoding} for $Z$ with respect to $(G,G')$. Lemma~\ref{notquiteunique}
shows that given $(Z,Z')$, $L$ and $\psi$ there are at most four
possibilities for $(G,G')$ such that $(Z,Z')$ is a transition on
$\gamma_\psi(G,G')$ and $L$ is an encoding for $Z$ with respect
to $(G,G')$. Further information about the structure of $L$ is
given in Lemma~\ref{structure}.
\item Now the notion of encoding is broadened to encompass any $n\times n$
matrix with entries in $\{ -1,0,1,2\}$ such that all row sums and column
sums equal $d$. Given $Z\in \Omega_{n,d}$, we say that the
encoding $L$ is $Z$\emph{-valid} if every entry of $L+Z$ belongs
to $\{ 0,1,2\}$ and $L, Z, H$ satisfy the conclusions of
Lemma~\ref{structure}. (Here $H$ is the digraph defined by all
entries of $L+Z$ which equal 1.) Lemma~\ref{Zvalid} proves a
useful fact about $Z$-valid encodings.
\item Next we explain how to apply switches to encodings, and prove
in Lemma~\ref{fix} (using Lemma~\ref{Zvalid})
that any $Z$-valid encoding can be transformed into
an element of $\Omega_{n,d}$ using at most three switches.
Counting the number of ways these switches can be performed in
reverse leads to an upper bound
of the form $\operatorname{poly}(n,d)\, |\Omega_{n,d}|$
on the number of $Z$-valid encodings,
as proved in Lemma~\ref{poly}.
\item Combining all this allows us to prove an upper bound on the
total flow routed through an arbitrary transition of the Markov chain.
This bound, of the form $\operatorname{poly}(n,d)\, |\Omega_{n,d}|^{-1}$,
is proved in Lemma~\ref{load}. With this in hand it is easy to establish
a polynomial bound on the maximum load $\rho(f)$ of the flow, and
hence to prove Proposition~\ref{our-second-largest-eigval}.
\end{itemize}
\section{Defining the flow}\label{s:flow}
We now define the multicommodity flow which will be used
to bound the second largest eigenvalue, and hence the mixing time,
of the switch chain for regular directed graphs.
For $G,G'\in\Omega_{n,d}$, let $H = G\triangle G'$ be
the symmetric difference of $G$ and $G'$, together with
an arc-colouring which colours all arcs of $G - G'$ blue
and all arcs of $G' - G$ red. This arc colouring means
that we can think of $H$ as the symmetric difference of
the \emph{ordered pair} $(G,G')$.
For $v\in V$ let $\theta_v$ be the blue in-degree of $v$,
which equals the red in-degree of $v$, and let $\phi_v$
be the blue out-degree of $v$, which equals the red out-degree
of $v$. Choose a \emph{pairing} of the red and blue arcs
around each vertex as follows: each blue arc with head $v$
is paired with a red arc with head $v$, and
each blue arc with tail $v$ is paired with a red arc with
tail $v$, defining two bijections (one from the set of blue arcs
with head $v$ to the set of red arcs with head $v$, and one
from the set of blue arcs with tail $v$ to the set of red arcs
with tail $v$). Denote the set of all such pairings by
$\Psi(G,G')$. Then
\begin{equation}
\label{number-pairings}
|\Psi(G,G')| = \prod_{v\in V} \theta_v!\, \phi_v!
\end{equation}
is the total number of pairings.
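In code, the count (\ref{number-pairings}) is immediate once the blue arcs of $H$ are known; a small Python sketch with a toy pair of 1-regular digraphs (for which the pairing is unique):
\begin{verbatim}
from math import factorial, prod

def pairing_count(G_arcs, Gp_arcs, n):
    blue = G_arcs - Gp_arcs                   # arcs of G - G'
    theta = [sum(1 for a in blue if a[1] == v) for v in range(n)]  # blue in-deg
    phi   = [sum(1 for a in blue if a[0] == v) for v in range(n)]  # blue out-deg
    return prod(factorial(t) * factorial(p) for t, p in zip(theta, phi))

G  = {(0, 1), (1, 0), (2, 3), (3, 2)}         # two derangements of {0,...,3}
Gp = {(0, 3), (3, 0), (1, 2), (2, 1)}
print(pairing_count(G, Gp, 4))                # theta_v = phi_v = 1, so |Psi| = 1
\end{verbatim}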
Write $\mathcal{G}$ for the underlying graph of the Markov chain
$\mathcal{M}$, where $\mathcal{G} = (\Omega_{n,d}, \Gamma)$
and each edge $e\in \Gamma$ corresponds to a transition of
$\mathcal{M}$.
For each pairing in $\Psi(G,G')$ we construct a canonical
path from $G$ to $G'$ in $\mathcal{G}$.
Each of these paths will carry $1/|\Psi(G,G')|$ of the total
flow from $G$ to $G'$.
We now introduce some terminology. A
\emph{forward circuit}
in $H$ is a string $C = w_0w_1\cdots w_{2k-1}$ over the alphabet $V$
such that the arcs
\begin{equation}
(w_0,w_1),\, (w_2,w_1),\, (w_2,w_3),\, (w_4,w_3),\,
\ldots,\, (w_{2k-2},w_{2k-1}),\, (w_0,w_{2k-1})
\label{arclist}
\end{equation}
are all distinct, all belong to $A_H$ and alternate in colour:
that is,
the arcs in
\[ \{ (w_{2i},w_{2i+1}) : i=0,1,\ldots, k-1\}\]
all have one colour and the arcs in
\[ \{ (w_{2i+2},w_{2i+1}) : i=0,1,\ldots, k - 2\}\cup\{ (w_0,w_{2k-1})\}\]
all have the other colour.
The \emph{converse} of $H$ is the digraph obtained from $H$
by reversing the direction of every arc (but keeping the colours
the same).
A \emph{reverse circuit}
in $H$ is a string $C = w_0w_1\cdots w_{2k-1}$
over the alphabet $V$
which forms a forward circuit in the converse of $H$.
That is, the arcs
\begin{equation}
(w_1,w_0),\, (w_1,w_2),\, (w_3,w_2),\, (w_3,w_4),\,
\ldots,\, (w_{2k-1},w_{2k-2}),\, (w_{2k-1},w_{0})
\label{reversearclist}
\end{equation}
are all distinct, all belong to $A_H$ and alternate in colour,
so that the arcs in
\[ \{ (w_{2i+1},w_{2i}) : i=0,1,\ldots, k-1\}\]
all have one colour and the arcs in
\[
\{ (w_{2i+1}, w_{2i+2}) : i=0,1,\ldots, k-2\}\cup \{ (w_{2k-1},w_0)\}
\]
all have the other colour.
By \emph{circuit} we mean either a forward circuit or a reverse
circuit. For a forward or reverse circuit $C$, denote by $A(C)$
the set of arcs in (\ref{arclist}) or (\ref{reversearclist}),
respectively.
It is important to note that the arcs of a circuit
alternate both in colour and orientation at each step.
While a circuit may contain both the arcs $(x,y)$ and $(y,x)$,
any three consecutive vertices on the circuit are distinct.
We now define two operations on digraphs.
Let $\zeta$ denote the operation which takes a digraph to its converse
(that is, it reverses every arc in the digraph),
and let $\chi$ be the operation which takes a digraph to its complement.
Writing $[n]^{(2)}$ for the set of all ordered pairs of distinct
elements of $[n]$, the complement $\chi G$ of a digraph $G$ has
arc set $[n]^{(2)} - A(G)$.
Note that the operations $\zeta$ and $\chi$
commute and are both involutions.
We can also apply $\zeta$ and $\chi$ to the (arc-coloured) symmetric
difference $H=G\triangle G'$. Here $\zeta H$ is the result of reversing
every arc in $H$, without changing the colour of any arc.
Similarly, $\chi H$ is the result of exchanging the colour of every arc in $H$
(so that blue becomes red and vice-versa), without changing the
orientation of any arc. To see this, note that the set of blue
arcs in $H=G\triangle G'$ equals
\[ A(G) - A(G') = A(\chi G') - A(\chi G),\]
but this equals the set of red arcs in $(\chi G)\triangle (\chi G')$
(and similarly, the set of red arcs in $G\triangle G'$ equals the
set of blue arcs in $(\chi G)\triangle (\chi G')$).
Finally, we generalise these definitions so that they also
apply to (arc-coloured) sub-digraphs $U$ of $H$. That is,
$\zeta U$ is the result of reversing every arc in $U$, without
changing the colour of any arc, while $\chi U$ is the result of
exchanging the colour of every arc in $U$ without changing the
orientation of any arc.
\subsection{Decomposition into circuits}\label{s:circuits}
Fix a pairing $\psi\in\Psi(G,G')$. We decompose $H$ into
a sequence of circuits depending on $\psi$, as
follows. Let $(w_0,w_1)$ be the lexicographically least
arc in $H$. Choose the arc $(w_2,w_1)$ which is paired with
$(w_0,w_1)$ at $w_1$. (Note that if $(w_0,w_1)$ is blue then
$(w_2,w_1)$ is red, and vice-versa. Furthermore, $w_2\neq w_0$
since $H$ is a symmetric difference.)
Next choose the arc $(w_2,w_3)$
which is paired with $(w_2,w_1)$ at $w_2$. (This arc will have the
same colour as $(w_0,w_1)$.) Continue in this fashion. Specifically, for
$i\geq 1$, if $w_{2i}\neq w_0$ then
let $(w_{2i}, w_{2i+1})$ be the arc which is paired with $(w_{2i},w_{2i-1})$
at $w_{2i}$ and let $(w_{2i+2},w_{2i+1})$ be the arc which is paired
with $(w_{2i},w_{2i+1})$ at $w_{2i+1}$. The vertices $w_i$ are
not necessarily distinct, but the arcs are distinct.
The process terminates when $(w_0,w_{2k-1})$ is paired with $(w_0,w_1)$
at $w_0$, giving a forward circuit
$C_1 = w_0w_1\cdots w_{2k-1}$.
If $A(C_1) = A_H$ then $\mathcal{C} = \{ C_1\}$ and we are done.
Otherwise, take the lexicographically least arc not in $C_1$ and
generate a new circuit $C_2$ by the above procedure. Continue generating
circuits until
\[ A_H = A(C_1)\cup A(C_2)\cup\cdots \cup A(C_s).\]
Then $\mathcal{C} = \{ C_1, C_2,\ldots, C_s\}$ and the circuits
$C_1, C_2,\ldots, C_s$ are arc-disjoint.
Note that, once the pairing has been chosen, $\mathcal{C}$ is formed without
regard to the colouring of $H$. This property will be needed later.
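The following Python sketch implements this decomposition for illustration. It fixes one particular pairing $\psi$ (built by matching the blue and red arcs at each head and tail in sorted order) and walks along paired arcs until each circuit closes up; the toy symmetric difference is the one from the previous sketch:
\begin{verbatim}
def build_pairing(blue, red, n):
    # One psi in Psi(G, G'): match blue/red arcs sharing a head (resp. tail).
    pair = {}
    for v in range(n):
        for key, pos in (('head', 1), ('tail', 0)):
            for b, r in zip(sorted(a for a in blue if a[pos] == v),
                            sorted(a for a in red if a[pos] == v)):
                pair[(key, b)], pair[(key, r)] = r, b
    return pair

def circuit_decomposition(blue, red, n):
    pair, used, out = build_pairing(blue, red, n), set(), []
    for start in sorted(blue | red):      # lexicographically least unused arc
        if start in used:
            continue
        C, arcs, a = [start[0], start[1]], [start], start
        while True:
            a = pair[('head', a)]         # arc (w_{2i}, w_{2i-1})
            arcs.append(a)
            nxt = pair[('tail', a)]
            if nxt == start:              # a = (w_0, w_{2k-1}): circuit closed
                break
            C.append(a[0])                # new even vertex w_{2i}
            a = nxt
            arcs.append(a)                # arc (w_{2i}, w_{2i+1})
            C.append(a[1])                # new odd vertex w_{2i+1}
        used.update(arcs)
        out.append(C)
    return out

blue = {(0, 1), (1, 0), (2, 3), (3, 2)}   # G - G'
red  = {(0, 3), (3, 0), (1, 2), (2, 1)}   # G' - G
print(circuit_decomposition(blue, red, 4))  # [[0, 1, 2, 3], [1, 0, 3, 2]]
\end{verbatim}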
Using $\mathcal{C}$, we form a path
\[ G = Z_0, Z_1,\ldots, Z_M = G'\]
from $G$ to $G'$ in the underlying graph of the Markov chain
(that is, to get from $Z_a$ to $Z_{a+1}$ we perform a switch).
The path is defined by processing each circuit $C_i$ in turn.
Processing a circuit changes its arcs
from agreeing with $G$ to agreeing with $G'$, with no other arcs
being permanently altered (though some may be temporarily changed while
processing the circuit $C_i$). The canonical path is defined
inductively. If
\[ G = Z_0, Z_1,\ldots, Z_r\]
is the canonical path obtained by processing the first circuit $C_1$, and
\[ Z_r, Z_{r+1},\ldots, Z_M = G'\]
is the canonical path from $Z_r$ to $G'$ obtained by processing
the circuits $(C_2,\ldots, C_s)$, in order, then the canonical path
from $G$ to $G'$ corresponding to $\psi$ is given by the concatenation
of these two paths.
Thus it suffices to describe the canonical path corresponding to a
particular circuit $C = w_0 w_1 \cdots w_{2k-1}$.
First we may need to decompose the circuit $C$ further.
A \emph{1-circuit} $S = v_0v_1v_2\cdots v_t$ is a string
on the letters $\{ w_0, w_1,\ldots, w_{2k-1}\}$
such that $v_0=w_0$ and $w_0$ appears only once in $S$.
Usually a 1-circuit will be a contiguous substring of $C$
(allowing reversal of direction and/or cyclic wrapping around $C$), but
it may also contain one arc which is not an arc of $C$.
We now show how to decompose a circuit into a sequence of 1-circuits
(and possibly some single switches)
which will then be processed in order (as described in Section~\ref{s:1circuit})
to form the canonical path corresponding to $C$.
\subsection{Decomposition of a circuit }\label{s:circuit}
Given a circuit $C = w_0w_1\cdots w_{2k-1}$, let $v = w_0$
and let $C^{(0)}=C$ (which is the currently unprocessed segment of $C$).
Suppose that the current digraph on the
canonical path from $G$ to $G'$ is $Z_J$. If $w_i\neq v$ for $i=1,\ldots, 2k-1$
then $C^{(0)}$ is
a 1-circuit which we process (using the procedure described in Section~\ref{s:1circuit}),
extending the canonical path as
\begin{equation}
\label{process}
G = Z_0,\ldots, Z_J, Z_{J+1},\ldots, Z_{J+t}.
\end{equation}
This completes the processing of $C$. Otherwise,
$v$ appears $\theta$ times on $C^{(0)}$,
where $2\leq \theta\leq \theta_v+\phi_v$.
Relabel the vertices on $C^{(0)}$ as
\[ C^{(0)} = v x_1\cdots y_1 v x_2 \cdots y_2 v
\cdots v x_\theta \cdots y_\theta.\]
By construction, $C^{(0)}$ is a forward circuit, so $(v,x_1)\in A(C^{(0)})$.
Firstly,
suppose that $S = v x_1 \cdots y_1$ is a 1-circuit. That is, arcs
$(v,x_1)$ and $(v,y_1)$ are present on $S$, with opposite colours.
Process the 1-circuit $S$, extending the canonical path as in
(\ref{process}), leaving the forward circuit
\[ C^{(1)} = v x_2\cdots y_2 v \cdots v x_\theta \cdots y_\theta\]
as the unprocessed section of $C$. Then process $C^{(1)}$ inductively.
Next, suppose that we are not in the above situation (so that
the arcs $(v,x_1)$ and $(y_1,v)$ are present on $S$, with the same colour),
but that $S = vx_\theta \cdots y_\theta$ is a 1-circuit.
That is, the arcs $(v,x_\theta)$ and $(v,y_\theta)$ are present on $S$,
with opposite colours.
Process the 1-circuit $S$, extending the canonical path as in (\ref{process}),
and leaving the forward circuit
\[ C^{(1)} = vx_1\cdots y_1 v \cdots v x_{\theta-1} \cdots y_{\theta-1}\]
to be processed inductively.
Finally, suppose that neither of the two situations above applies.
Then the arcs $(v,x_1)$ and $(y_1,v)$ have one colour
while $(x_\theta, v)$ and $(v, y_\theta)$ have the other colour.
We will process
$S' = v x_1\cdots y_1 v x_\theta \cdots y_\theta $ (which we
call a \emph{2-circuit}),
extending the canonical path as in (\ref{process})
and leaving
\[ C^{(1)} = v x_2 \cdots y_2 v \cdots v x_{\theta-1} \cdots y_{\theta-1}
\]
to be processed inductively. Here $C^{(1)}$ is a reverse circuit
and we process it using the same procedure as described above, but with
all arcs reversed.
All 1-circuits and 2-circuits created by the above procedure are
called \emph{raw}.
The order in which we detect and process raw 1-circuits and raw
2-circuits implies
that both the processed and unprocessed sections of $C$ are contiguous
whenever the processing of a raw 1-circuit or raw 2-circuit is complete.
(That is, these sections form contiguous substrings of $C$, where a substring
is allowed to wrap around in a cyclic fashion.)
Suppose that $S$ is a raw 1-circuit or raw 2-circuit with
substring $abc$. Fix $i\in \{ 0,1\}$ such that the corresponding
arcs are
$\zeta^i(a,b)$ and $\zeta^i(c,b)$. These arcs are called
\emph{successive arcs} along $S$.
Every raw 1-circuit or raw 2-circuit $S$ has the following property:
successive arcs along $S$ are paired under $\psi$ at their
common endvertex $b$, except possibly when
$b=v$ and the arcs are the first and last arcs of $S$.
We call this the \emph{well-paired} property,
which will be used in
Lemma~\ref{simplify} below.
Raw 1-circuits are processed using the method described in
Section~\ref{s:1circuit}. In most cases, raw 2-circuits must be further
decomposed (into a sequence of 1-circuits and/or switches)
before they can be processed, as described in
Section~\ref{s:2circuit}. It is here that extra
difficulties arise when working with directed graphs.
Recall the notation for switches introduced after
Figure~\ref{f:chain}.
Let $Q=\alpha\beta\gamma\delta$ be a circuit in $H$ which is
also a 4-cycle. Set
\[ i=\begin{cases} 0 & \text{ if $Q$ is a forward circuit,}\\
1 & \text{ otherwise}.
\end{cases}
\]
We now define notation for the switch which processes this 4-cycle,
starting from the current digraph $Z_J$ and producing the next digraph
$Z_{J+1}$ on the canonical path.
Let $h=0$ if $\zeta^i (\alpha,\beta)\in Z_J$ and $h=1$ otherwise.
Then define
\[ \zeta^i\chi^h [\alpha\beta\gamma\delta] =
\begin{cases} [\alpha\beta\gamma\delta] & \text{ if $i=0$, $h=0$,}\\
[\alpha\delta\gamma\beta] & \text{ if $i=0$, $h=1$,}\\
[\beta\alpha\delta\gamma] & \text{ if $i=1$, $h=0$,}\\
[\beta\gamma\delta\alpha] & \text{ if $i=1$, $h=1$.}
\end{cases}
\]
If $h=0$ then the switch $\zeta^i\chi^h [\alpha\beta\gamma\delta]$
deletes the arcs
$\zeta^i(\alpha,\beta)$, $\zeta^i(\gamma,\delta)$ and replaces them
with $\zeta^i(\alpha,\delta)$, $\zeta^i(\gamma,\beta)$, while the
opposite occurs if $h=1$.
Finally, we define the \emph{status} of an arc $(x,y)$
in a digraph $Z$ to equal 0 if $(x,y)\not\in A(Z)$ and to equal 1 if
$(x,y)\in A(Z)$. We say that two arcs have
\emph{matching status} if their status is
equal in $Z$, and say that they have
\emph{opposite status} otherwise.
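In code, the four cases of $\zeta^i\chi^h[\alpha\beta\gamma\delta]$ collapse to a single rule; a Python sketch (arc sets as sets of ordered pairs, as in the earlier sketches):
\begin{verbatim}
def apply_switch(Z, i, h, a, b, c, d):
    # zeta^i chi^h [a b c d]: for h = 0, delete zeta^i(a,b), zeta^i(c,d) and
    # add zeta^i(a,d), zeta^i(c,b); for h = 1, do the opposite.
    zeta = (lambda x, y: (x, y)) if i == 0 else (lambda x, y: (y, x))
    old = {zeta(a, b), zeta(c, d)}
    new = {zeta(a, d), zeta(c, b)}
    if h == 1:
        old, new = new, old
    assert old <= Z and not (new & Z)   # the switch must be legal
    return (Z - old) | new
\end{verbatim}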
\subsection{Processing a 1-circuit}\label{s:1circuit}
Let $S$ be a 1-circuit. (If $S$ is not raw then $S$ has resulted
from the decomposition of a raw 2-circuit: see Sections~\ref{s:normal}
and~\ref{s:eccentric}.)
The method for processing a 1-circuit is very similar to
that used in~\cite{CDG}, and some of the discussion and figures given there
may be helpful.
(See also the worked example in Section~\ref{a:example}.)
Label the 1-circuit as $S = x_0 x_1\ldots x_{2k-1}$
where $k\geq 2$, such that $x_0$ is the minimum vertex on $S$
and $x_1 = \min\{ x_1,\, x_{2k-1}\}$. Set
\[ i=\begin{cases} 0 & \text{if $S$ is a forward circuit,}\\
1 & \text{if $S$ is a reverse circuit.}
\end{cases}
\]
Also set
\[ h= \begin{cases} 0 & \text{if $\zeta^i (x_0,x_1)\in A(Z_J)$,}\\
1 & \text{ otherwise.}
\end{cases}
\]
Then
$\zeta^i (x_{2t},x_{2t+1})\in A(\chi^h Z_J)$ and
$\zeta^i (x_{2t+2}, x_{2t+1})\not\in A(\chi^h Z_J)$ for $t=0,1,\ldots, k-1$
(identifying $x_{2k}$ with $x_0$). Note that
any three consecutive vertices on $S$ are distinct.
Define the set
\begin{align*}
\mathcal{B} = \{ t \in \{1,2,\ldots, k-1\} \,\, :\,\, &\zeta^i (x_0,x_{2t+1})\not\in
A(\chi^h Z_J) \\
&\mbox{ and } x_{2\ell+1} \neq x_{2t+1} \mbox{ for all }
\ell = t+1,\ldots, k-1 \}.
\end{align*}
(This definition ensures that exactly one value $t$ is stored for each
distinct vertex $x_{2t+1}$ with $\zeta^i (x_0,x_{2t+1})\not\in A(\chi^h Z_J)$,
ensuring that vertices which are repeated along $S$ are treated correctly.)
Note that $k-1\in\mathcal{B}$ always.
The arcs $\zeta^i (x_0,x_{2t+1})$ are called \emph{odd chords}.
The number of phases in the processing of $S$ will be
$p = |\mathcal{B}|$. For the first phase, choose the minimum $t\in\mathcal{B}$.
There will be $t$ steps in the first phase, which proceeds as follows:
\begin{center}
\begin{tabbing}
for $j:= t, \, t-1, \ldots, 1$ do\\
XXX \= \kill
\> form $Z_{J+t-j+1}$ from $Z_{J+t-j}$ by performing the
switch $\zeta^i\chi^h[x_0x_{2j-1}x_{2j}x_{2j+1}]$;\\
\end{tabbing}
\end{center}
If $t=k-1$ then there is only one phase and the processing of $S$ is complete.
Otherwise, $\zeta^i (x_0, x_{2t+1}) \in A(\chi^h Z_{J+t})$ but all odd
chords $\zeta^i (x_0,x_{2\ell+1})$ with $x_{2\ell+1}\neq x_{2t+1}$
have been reinstated to match their status in $Z_J$ (that is,
they belong to $Z_{J+t}$ if and only if they belong to $Z_J$).
For subsequent phases, if $t$ was the starting point of the previous phase
then choose $q > t$ minimum such that $q\in\mathcal{B}$.
The odd chord $\zeta^i (x_0,x_{2t+1})$ has been switched in the previous
phase but will be restored to its original state by the end of this phase.
There will be
$q-t$ steps in this phase, performing the sequence of switches
\[ \zeta^i\chi^h[x_0 x_{2q-1} x_{2q} x_{2q+1}],\,\,
\zeta^i\chi^h[x_0 x_{2q-3} x_{2q-2} x_{2q-1}],\,\, \ldots
, \,\, \zeta^i\chi^h[x_0 x_{2t+1} x_{2t+2} x_{2t+3}].\]
Note that every switch involves $x_0$, the start-vertex of $S$.
At any point during the processing of the 1-circuit, at most three odd chords
have been switched (that is, temporarily disturbed). This is illustrated
in the worked example in Section~\ref{a:example}.
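Putting the phases together, the following Python sketch processes a 1-circuit, reusing \texttt{apply\_switch} from the previous sketch. For simplicity it assumes the vertices of $S$ are distinct, so the repeated-vertex caveat in the definition of $\mathcal{B}$ is vacuous:
\begin{verbatim}
def process_one_circuit(Z, xs, i=0, h=0):
    # Process S = x_0 x_1 ... x_{2k-1}; returns the digraphs Z_J, Z_{J+1}, ...
    # produced along the canonical path (vertices in xs assumed distinct).
    zeta = (lambda x, y: (x, y)) if i == 0 else (lambda x, y: (y, x))
    k = len(xs) // 2
    # t lies in B iff the odd chord zeta^i(x_0, x_{2t+1}) is absent in chi^h Z.
    B = [t for t in range(1, k)
         if (zeta(xs[0], xs[2*t + 1]) in Z) == (h == 1)]
    path, prev = [Z], 0
    for t in B:                          # one phase per element of B
        for j in range(t, prev, -1):     # steps j = t, t-1, ..., prev+1
            Z = apply_switch(Z, i, h,
                             xs[0], xs[2*j - 1], xs[2*j], xs[2*j + 1])
            path.append(Z)
        prev = t
    return path

# k = 3: forward circuit 012345 with arcs (0,1), (2,3), (4,5) present.
for Z in process_one_circuit({(0, 1), (2, 3), (4, 5)}, [0, 1, 2, 3, 4, 5]):
    print(sorted(Z))
\end{verbatim}
On this example there are two phases ($\mathcal{B}=\{1,2\}$): the odd chord $(0,3)$ is switched in the first phase and restored in the second, exactly as described above.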
\section{Decomposition of a raw 2-circuit }\label{s:2circuit}
We now show how to process a raw 2-circuit $S$, given the method
for processing a 1-circuit described in Section~\ref{s:1circuit}.
Suppose that we have reached the digraph $Z_J$ on the canonical path
from $G$ to $G'$ in $\mathcal{G}$. Note that $Z_J$
agrees with $G$ on the 2-circuit $S$ before the processing of $S$ begins.
We relabel the vertices on $S$ as
\begin{equation}
\label{Slabels}
S = v x_{0,0} \cdots x_{1,0} v x_{1,1} \cdots x_{0,1},
\end{equation}
where $(v,x_{0,0})$ is the lexicographically least arc in $A(S)$.
Treat the indices $(i,j)$ on the vertex labels as elements of
$\mathbb{Z}_2^2$, with addition performed modulo 2.
In the undirected case~\cite{CDG}, the vertices
$x_{0,0}, \, x_{0,1}, \, x_{1,0}, \, x_{1,1}$
are all distinct. However, this is no longer the case in the directed
setting, which complicates the definition of the canonical paths.
Also note that there may be as few as two vertices between two successive
occurrences of $v$ on the 2-circuit $S$. This is explained in more
detail below Figure~\ref{2circuit-lite}, once we have introduced some
useful notation.
Recall that $\chi$ is the complementation operation
for digraphs. Set
\[ h = \begin{cases}
0 & \text{ if the arc $(v,\, x_{0,0})$ is present in $Z_J$,}\\
1 & \text{ if the arc $(v,\, x_{0,0})$ is absent in $Z_J$.}
\end{cases}
\]
Then
\[ (v,\, x_{0,0})\in A(\chi^h Z_J), \,\,\,
(x_{1,0},\, v)\in A(\chi^h Z_J),\,\,\,
(x_{1,1},\, v)\not\in A(\chi^h Z_J),\,\,\,
(v,\, x_{0,1})\not\in A(\chi^h Z_J).
\]
Figure~\ref{2circuit-lite} depicts $\chi^h S$,
where the curved lines (from $x_{0,0}$ to $x_{1,0}$ and from
$x_{0,1}$ to $x_{1,1}$)
represent any odd number of alternating arcs. Solid arcs represent
arcs which are present in $\chi^h Z_J$ and dashed arcs represent arcs
which are absent in $\chi^h Z_J$. That is, if $h=0$ then
solid arcs belong to $Z_J$ and dashed arcs belong to $G'$,
while if $h=1$ then solid arcs belong to $G'$ and dashed arcs belong to
$Z_J$.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x1}{$x_{0,0}$}\psfrag{w1}{$y_{0,0}$}
\psfrag{z1}{$y_{1,0}$}\psfrag{y1}{$x_{1,0}$}\psfrag{x2}{$x_{1,1}$}
\psfrag{w2}{$y_{1,1}$}\psfrag{z2}{$y_{0,1}$}\psfrag{y2}{$x_{0,1}$}
\centerline{\includegraphics[scale=0.6]{2circuit-lite}}
\caption{The 2-circuit $\chi^h S$}
\label{2circuit-lite}
\end{figure}
\end{center}
For $(i,j)\in\mathbb{Z}_2^2$, let $y_{i,j}$ be the unique vertex such that
$v x_{i,j} y_{i,j}$ or $y_{i,j} x_{i,j} v$ is a contiguous substring of $S$
(allowing cyclic wrapping in the case of $y_{0,1}$).
If $y_{0,j}=x_{1,j}$ for some $j$ then
$y_{1,j} = x_{0,j}$ and there is only one arc between $x_{0,j}$
and $x_{1,j}$. This means that the corresponding curved line in
Figure~\ref{2circuit-lite} can be replaced by a single arc.
There are four possibilities for $\chi^h S$ in which
$y_{0,j}= x_{1,j}$ for both $j\in\mathbb{Z}_2$.
These are shown in Figure~\ref{bowtie-family}.
The leftmost 2-circuit involves 5 distinct vertices
and the middle two 2-circuits each involve 4 distinct
vertices, with one coincidence of the form $x_{0,j}=x_{1,j+1}$,
where $j\in \mathbb{Z}_2$.
The rightmost 2-circuit
involves 3 distinct vertices: we will call it a
\emph{triangle}.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x1}{$x_{0,0}$}\psfrag{w1}{$y_{0,0}$}
\psfrag{y1}{$x_{1,0}$}\psfrag{x2}[c]{$x_{1,1}$} \psfrag{y2}[c]{$x_{0,1}$}
\psfrag{w1}[c]{$x_{0,0} = x_{1,1}$}\psfrag{z1}[c]{$x_{0,1} = x_{1,0}$}
\centerline{\includegraphics[scale=0.6]{bowtie-family}}
\caption{The four 2-circuits $\chi^h S$ with at most 5 distinct vertices.
The rightmost 2-circuit is a triangle.}
\label{bowtie-family}
\end{figure}
\end{center}
In the undirected analysis~\cite{CDG}, a critical observation was
that vertex $y_{0,0}$ must be distinct from $x_{0,1}$ (without loss of
generality). This fact underpinned the definition
of the canonical paths in~\cite{CDG}. For directed graphs this
property does not necessarily hold, as can be seen from the
last two 2-circuits in Figure~\ref{bowtie-family}.
We will say that $S$ is \emph{normal} if $y_{i,j}\neq x_{i,j+1}$
for some $(i,j)\in\mathbb{Z}_2^2$.
In Section~\ref{s:normal} we describe how to process a normal 2-circuit.
The procedure is analogous to that used in~\cite{CDG}, which is the motivation
for
the definition of normal 2-circuits. Note that the triangle
(shown at the rightmost of Figure~\ref{bowtie-family}) is not normal but
the remaining 2-circuits in Figure~\ref{bowtie-family} are normal.
For $(i,j)\in\mathbb{Z}_2^2$, let $z_{i,j}$ be the unique vertex
such that $v x_{i,j} y_{i,j} z_{i,j}$ or
$z_{i,j} y_{i,j} x_{i,j} v$
is a contiguous substring of $S$ (allowing cyclic wrapping in the case of $z_{0,1}$).
We will need the following lemma.
\begin{lemma}
Suppose that $S$ is a raw 2-circuit which is not normal and
such that
$v=z_{i,j}$ for some $(i,j)\in\mathbb{Z}_2^2$.
Then $S$ is a triangle.
\label{simplify}
\end{lemma}
\begin{proof}
Without loss of generality (by reversing arcs and/or taking the complement if necessary)
we may suppose that $v=z_{0,0}$.
(This means we cannot assume that $(v,x_{0,0})$ is the lexicographically
least arc in $S$, but we do not need to use that property in this proof.)
Colour the arcs around the 2-circuit orange and purple in an
alternating fashion, starting with the orange arc $(v,x_{0,0})$.
By assumption, $S$ has initial substring
$v\, x_{0,0}\, y_{0,0}\, v$. By the well-paired property of 2-circuits,
we know that the orange arc $(v,x_{0,0})$ is paired with the purple arc
$(y_{0,0},x_{0,0})$ at $x_{0,0}$ under $\psi$, and the purple arc
$(y_{0,0},x_{0,0})$ is paired with the orange arc $(y_{0,0},v)$ at $y_{0,0}$
under $\psi$.
Now $v$ is incident with exactly four arcs of $S$, one of each colour and orientation
(see Figure~\ref{2circuit-lite}.)
Hence the presence of the orange arc $(y_{0,0},v)$ on $S$
shows that $y_{0,0} = x_{1,0}$. But then we obtain
\[ y_{0,0} = x_{1,0} = y_{1,1} = x_{0,1},\]
as $S$ is not normal.
Now $y_{0,0}=x_{1,0}$ and the purple arc $(y_{0,0},x_{0,0})$ is paired with
the orange arc $(v,y_{0,0})$. This implies that $x_{0,0}=y_{1,0}$, and since
$S$ is not normal it follows that
\[ x_{0,0} = y_{1,0} = x_{1,1} = y_{0,1}.\]
This gives all pairing information around the 2-circuit
except for pairings at $v$. (For example, since $y_{0,0}=x_{0,1}$
and $x_{0,0}=y_{0,1}$, we know that the purple arc $(v,y_{0,0})$
is present on $S$ and is paired with the orange arc $(x_{0,0},y_{0,0})$
at $y_{0,0}$. This arc is paired at $x_{0,0}$ with the purple arc
$(x_{0,0},v)$, since $x_{0,0} = x_{1,1}$ and $y_{0,0}=y_{1,1}$.)
But as $v$ is only incident
with four arcs of $S$ there can be no other vertices involved in $S$.
So by the well-paired property, at least one of the pairs of arcs
$(v,x_{0,0}), (v,y_{0,0})$
and $(x_{0,0},v), (y_{0,0},v)$ must be paired at $v$.
It follows that $S$ is a triangle on $\{ v, x_{0,0}, y_{0,0}\}$.
\end{proof}
Call $S$ \emph{eccentric}
if it is not normal and not a triangle.
If $S$ is eccentric then $v\neq z_{i,j}$ for all $(i,j)\in\mathbb{Z}_2^2$,
by Lemma~\ref{simplify}. Hence $\chi^h S$ is as shown in
Figure~\ref{eccentric}. (Remember that arcs must alternate in both
colour and orientation, giving a unique way to navigate around this figure,
or see Figure~\ref{eccentric-detail} below for an unravelled version.)
Again the curved lines represent an odd
number of alternating arcs (from $x_{0,0}$ to $x_{1,0}$ and
from $x_{0,1}$ to $x_{1,1}$). Recall also that the vertices $x_{i,j}$ are
not necessarily distinct.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x1}{$x_{0,0}=y_{0,1}$}\psfrag{w1}{$x_{0,1}=y_{0,0}$}
\psfrag{y1}[r]{$x_{1,0}=y_{1,1}$}\psfrag{z1}[r]{$x_{1,1}=y_{1,0}$}
\psfrag{p1}{$z_{0,0}$} \psfrag{p2}{$z_{1,1}$}\psfrag{q1}{$z_{1,0}$}
\psfrag{q2}{$z_{0,1}$}
\centerline{\includegraphics[scale=0.6]{abnormal-lite}}
\caption{The 2-circuit $\chi^h S$ when $S$ is eccentric}
\label{eccentric}
\end{figure}
\end{center}
We describe how to process an eccentric
2-circuit in Section~\ref{s:eccentric} and in Section~\ref{s:triangle}
we explain how to process a triangle. This will complete the
description of the canonical path from $G$ to $G'$ corresponding
to the pairing $\psi$.
\subsection{Decomposing a normal 2-circuit}\label{s:normal}
Let $S$ be a normal 2-circuit, with vertices labelled
as in (\ref{Slabels}), where $(v,x_{0,0})$ is the
lexicographically least arc in $A(S)$.
Recall the notation $z_{i,j}$ defined before Lemma~\ref{simplify}.
A normal 2-circuit was depicted in Figure~\ref{2circuit-lite} but now
we need a more detailed picture (Figure~\ref{2circuit}).
Recall however that there can be as few as three arcs in the left or right
half of this figure: for example, if there were only three arcs on the
right then $y_{i,0}=x_{i+1,0}$ and $z_{i,0}=v$ for $i\in\mathbb{Z}_2$.
Again the curved lines
in Figure~\ref{2circuit} represent an odd number of alternating arcs.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$}\psfrag{y00}{$y_{0,0}$}
\psfrag{y10}{$y_{1,0}$}\psfrag{x10}{$x_{1,0}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{y11}{$y_{1,1}$}\psfrag{y01}{$y_{0,1}$}\psfrag{x01}{$x_{0,1}$}
\psfrag{a11}{$z_{1,1}$}\psfrag{a01}{$z_{0,1}$}\psfrag{a10}{$z_{1,0}$}
\psfrag{a00}{$z_{0,0}$}
\centerline{\includegraphics[scale=0.6]{2circuit}}
\caption{A normal 2-circuit $\chi^h S$, in more detail}
\label{2circuit}
\end{figure}
\end{center}
Let $(i,j)$ be the lexicographically
least index such that $x_{i,j}\neq y_{i,j+1}$.
(Here we use the ordering $0<1$ on $\mathbb{Z}_2$.)
Define the arc $a_{i,j} = (y_{i,j+1},\, x_{i,j})$. The \emph{shortcut arc}
of $S$ is $\zeta^i a_{i,j}$ (that is, it equals $a_{i,j}$ itself if
$i=0$ and equals the reversal of $a_{i,j}$ if $i=1$).
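As an aside, locating the shortcut arc is purely mechanical. A minimal sketch,
assuming the labels $x_{i,j}$ and $y_{i,j}$ are supplied as dictionaries keyed
by $(i,j)\in\mathbb{Z}_2^2$ (these helpers are illustrative, not part of the
formal construction):
\begin{verbatim}
def shortcut_arc(x, y):
    """Return the lexicographically least (i,j) with x[i,j] != y[i,j+1],
    together with the shortcut arc zeta^i (y[i,j+1], x[i,j]); zeta^0 is
    the identity and zeta^1 reverses the arc."""
    for (i, j) in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # 0 < 1 on Z_2
        if x[i, j] != y[i, (j + 1) % 2]:
            a = (y[i, (j + 1) % 2], x[i, j])
            return (i, j), (a if i == 0 else (a[1], a[0]))
    raise ValueError("no such index: the 2-circuit is not normal")
\end{verbatim}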
Suppose that $Z_J$ is the current digraph on the canonical path from $G$ to $G'$
before we start decomposing $S$.
There are three cases, called (Na), (Nb), (Nc), where the `N' stands for `normal'.
\begin{enumerate}
\item[(Na)] the shortcut arc $\zeta^i a_{i,j}$ belongs to $A(S)$.
\item[(Nb)] the shortcut arc $\zeta^i a_{i,j}$ does not belong to $A(S)$,
and $\zeta^i a_{i,j}$ is not an arc of $\chi^{h + j} Z_J$.
\item[(Nc)] the shortcut arc $\zeta^i a_{i,j}$ does not belong to $A(S)$,
and $\zeta^i a_{i,j}$ is an arc of $\chi^{h + j} Z_J$.
\end{enumerate}
We consider these cases in order. (A more detailed description of the
analogous process in the undirected case, with figures, can be found
in~\cite{CDG} and may also be helpful.)
\begin{enumerate}
\item[(Na)]
In case (Na), the 2-circuit $S$ can be split into two 1-circuits, $S_1$
and $S_2$. There are four
subcases to consider, depending on which ``half'' of the 2-circuit contains
the shortcut arc and whether the shortcut arc belongs to $Z_J$.
In all subcases, the arcs of $S_1$ and $S_2$ form a partition of the arcs of $S$.
Once the two 1-circuits $S_1$ and $S_2$ have been identified, they are processed
in that order,
extending the canonical path from $G$ to $G'$ as
\[ G = Z_0,\ldots, Z_J, Z_{J+1},\ldots, Z_{J+k}\]
after processing $S_1$, and
\[ G = Z_0,\ldots, Z_J, Z_{J+1}, \ldots, Z_{J+k}, Z_{J+k+1},\ldots, Z_{J+k+\ell}\]
after processing $S_2$.
\begin{enumerate}
\item[(Na1)] Suppose that $S$ can be rewritten (allowing cyclic wrapping if necessary)
as
\[
v\, x_{i,j+1} y_{i,j+1} z_{i,j+1} \cdots y_{i,j+1} x_{i,j} \cdots
z_{i+1,j+1} y_{i+1,j+1} x_{i+1,j+1} v x_{i+1,j}
\cdots x_{i,j}\]
and $\zeta^i a_{i,j} \not\in A(\chi^{h+j} Z_J)$. Split $S$ into two 1-circuits
\begin{align*}
S_1 &= v x_{i,j+1}\, y_{i,j+1} z_{i,j+1} \cdots y_{i,j+1} x_{i,j}, \\
S_2 &= v x_{i+1,j+1} y_{i+1,j+1} z_{i+1,j+1}\cdots x_{i,j} y_{i,j}
\cdots y_{i+1,j} x_{i+1,j}.
\end{align*}
\item[(Na2)] Suppose that $S$ can be rewritten (allowing cyclic wrapping if necessary)
as
\[
v\, x_{i,j+1} y_{i,j+1} z_{i,j+1} \cdots x_{i,j}\, y_{i,j+1} \cdots
z_{i+1,j+1} y_{i+1,j+1} x_{i+1,j+1} v\,
x_{i+1,j} \cdots x_{i,j}\]
and $\zeta^i a_{i,j} \in A(\chi^{h+j} Z_J)$. Split $S$ into two 1-circuits
\begin{align*}
S_1 &= v x_{i,j+1} y_{i,j+1} z_{i,j+1} \cdots x_{i,j},\\
S_2 &= v x_{i+1,j+1} y_{i+1,j+1} z_{i+1,j+1}\cdots
y_{i,j+1}\, x_{i,j} y_{i,j} \cdots y_{i+1,j} x_{i+1,j}.
\end{align*}
\item[(Na3)]
Suppose that $S$ can be rewritten (allowing cyclic wrapping if necessary)
as
\[ v x_{i,j+1}\cdots x_{i+1,j+1} v x_{i+1,j} y_{i+1,j}
z_{i+1,j} \cdots y_{i,j+1} x_{i,j}
\cdots z_{i,j+1} y_{i,j+1} x_{i,j+1} \]
and $\zeta^i a_{i,j} \not\in A(\chi^{h+j} Z_J)$. Split $S$ into two 1-circuits
\begin{align*}
S_1 &= v x_{i,j+1} y_{i,j+1} x_{i,j}\cdots
z_{i,j} y_{i,j} x_{i,j},\\
S_2 &= v x_{i+1,j+1} y_{i+1,j+1} \cdots y_{i,j+1} \cdots z_{i+1,j}
y_{i+1,j} x_{i+1,j}.
\end{align*}
\item[(Na4)]
Suppose that $S$ can be rewritten (allowing cyclic wrapping if necessary)
as
\[ v x_{i,j+1}\cdots x_{i+1,j+1} v x_{i+1,j} y_{i+1,j}
z_{i+1,j} \cdots x_{i,j} y_{i,j+1} \cdots z_{i,j} y_{i,j} x_{i,j}\]
and $\zeta^i a_{i,j} \in A(\chi^{h+j} Z_J)$. Split $S$ into two 1-circuits
\begin{align*}
S_1 &= v x_{i,j+1} y_{i,j+1}\cdots z_{i,j} y_{i,j} x_{i,j},\\
S_2 &= v x_{i+1,j+1} y_{i+1,j+1} \cdots y_{i,j+1} x_{i,j} \cdots
z_{i+1,j} y_{i+1,j} x_{i+1,j}.
\end{align*}
\end{enumerate}
\item[(Nb)]
Now suppose that $S$ is a normal 2-circuit, the shortcut arc $\zeta^i a_{i,j}$
is not an arc of $S$
and $\zeta^i a_{i,j}$ is not an arc of $\chi^{h+j} Z_J$. Then we can
use the shortcut arc to give
an alternating 4-cycle $v x_{i,j} y_{i,j+1} x_{i,j+1}$.
First process this alternating 4-cycle using the switch
$\zeta^i\chi^{h+j} [v x_{i,j} y_{i,j+1} x_{i,j+1}]$,
extending the canonical path by one step to give
\[ G = Z_0, \ldots, Z_J, Z_{J+1}. \]
(Call this step the \emph{shortcut switch}.)
Now $\zeta^i a_{i,j}$ is an arc of $\chi^{h+j} Z_{J+1}$ and we
can form a 1-circuit $S_1$ from $S$, specifically
\begin{equation}
\label{normalS1}
S_1 = v x_{i+1,j+1} y_{i+1,j+1} \cdots y_{i,j+1}\, x_{i,j}
y_{i,j} \cdots y_{i+1,j} x_{i+1,j}.
\end{equation}
Process this 1-circuit (as described in Section~\ref{s:1circuit})
to extend the canonical path further, giving
\[ G = Z_0,\ldots, Z_J, Z_{J+1}, Z_{J+2},\ldots, Z_{J+k}.\]
Note that $\zeta^i a_{i,j}$ is not an arc of $\chi^{h + j} Z_{J+k}$
after the 1-circuit $S_1$ has been processed, so it has been
restored to the same state as in $\chi^{h+j} Z_J$, before the processing of
the 2-circuit $S$ began.
\item[(Nc)] Finally assume that $S$ is a normal 2-circuit, the shortcut
arc $\zeta^i a_{i,j}$ is not an arc of $S$ and $\zeta^i a_{i,j}$ is an arc of
$\chi^{h + j} Z_J$. Then the shortcut arc completes the 1-circuit $S_1$
defined in (\ref{normalS1}),
which is processed (as described in Section~\ref{s:1circuit}).
This extends the canonical path to give
\[ G = Z_0, \ldots, Z_J, Z_{J+1}, \ldots, Z_{J+k}.\]
Last we process the alternating 4-cycle $v x_{i,j} y_{i,j+1} x_{i,j+1}$,
using the shortcut switch $\zeta^i\chi^{h+j} [v x_{i,j} y_{i,j+1} x_{i,j+1}]$,
extending the canonical path by one step to give
\[ G = Z_0, \ldots, Z_J, Z_{J+1}, \ldots, Z_{J+k}, Z_{J+k+1}.\]
Note that $\zeta^i a_{i,j}$ is not an arc of $\chi^{h + j} Z_{J+k}$
but it is an arc of $\chi^{h+j} Z_{J+k+1}$, so it has been restored to
the same state as in $\chi^{h+j} Z_J$.
\end{enumerate}
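The control flow of the three cases above can be summarised as follows (an
illustrative Python sketch; the helper arguments are hypothetical stand-ins
for the switch sequences constructed above):
\begin{verbatim}
def normal_2circuit_plan(case, split, shortcut_switch, s1_switches):
    """Order of operations for a normal 2-circuit.
    (Na): S splits into two 1-circuits, processed in order.
    (Nb): the shortcut switch first, then the 1-circuit S1.
    (Nc): the 1-circuit S1 first, then the shortcut switch."""
    if case == "Na":
        s1, s2 = split
        return s1 + s2
    if case == "Nb":
        return [shortcut_switch] + s1_switches
    if case == "Nc":
        return s1_switches + [shortcut_switch]
    raise ValueError("unknown case")
\end{verbatim}
In cases (Nb) and (Nc) this bracketing is what guarantees that the shortcut
arc ends the processing of $S$ in the same state in which it started.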
\subsection{Decomposing an eccentric 2-circuit}\label{s:eccentric}
Now we may assume that $S$ is an eccentric 2-circuit.
Then $y_{i,j}=x_{i,j+1}$ for all $(i,j)\in\mathbb{Z}_2^2$, by definition,
and $v\neq z_{i,j}$ for all $(i,j)\in\mathbb{Z}_2^2$, by Lemma~\ref{simplify}.
Call $(z_{1,0},\, v)$ the \emph{eccentric arc}.
Note that $z_{1,0}\not\in \{ x_{1,0}, \, x_{1,1}\}$
which is the set of in-neighbours of $v$ on $S$.
Hence the eccentric arc is never
an arc of $S$, so that the analogue of Case (Na)
never arises. The remaining possibilities are below, called Case (Ea)
and (Eb) (these are similar to cases (Nb) and (Nc) for
normal 2-circuits, respectively).
\begin{enumerate}
\item[(Ea)] Suppose that $(z_{1,0},\, v)\not\in A(\chi^{h} Z_J)$.
Then $z_{1,0} x_{1,1} x_{1,0} v$ forms an alternating 4-cycle
which we process using the switch $\chi^h [z_{1,0} x_{1,1} x_{1,0} v]$,
extending the canonical path by one step to give
\[ G = Z_0,\cdots, Z_J,\, Z_{J+1}.\]
We call this step the \emph{eccentric switch}.
After performing the eccentric switch we have the 2-circuit
\begin{equation}
\label{Seccentric}
S' = v x_{0,0}\cdots z_{1,0} v x_{1,1} x_{1,0} \cdots x_{0,0} x_{0,1}.
\end{equation}
Indeed, since $z_{1,0}\neq x_{1,0}$ it follows that
$S'$ is a normal 2-circuit, which we can process using the method described
in Section~\ref{s:normal}.
This extends the canonical path as
\[ G = Z_0,\cdots, Z_J,\, Z_{J+1}, \, Z_{J+2}, \cdots , Z_{J+1 + k}.
\]
Note that $(z_{1,0},\, v)\not\in A(\chi^{h} Z_{J+1+k})$, so the eccentric arc
has been restored to the same state as in $\chi^h Z_J$, before the processing
of $S$ began.
\item[(Eb)]
Suppose that $(z_{1,0},\, v)\in A(\chi^{h} Z_J)$.
Then $S'$ defined in (\ref{Seccentric}) is a normal 2-circuit which we
first process using the method described in Section~\ref{s:normal}.
This extends the canonical path as
\[ G = Z_0, \cdots, Z_J,\, Z_{J+1},\cdots, Z_{J+k}.\]
Then $z_{1,0} x_{1,1} x_{1,0} v$ forms an alternating 4-cycle
which we process using the eccentric switch
$\chi^h [z_{1,0} x_{1,1} x_{1,0} v]$,
extending the canonical path by one step to give
\[ G = Z_0, \cdots, Z_J,\, Z_{J+1},\cdots, Z_{J+k},\, Z_{J+k+1}.\]
Now $(z_{1,0},\, v) \in A(\chi^{h} Z_{J+1+k})$, so the eccentric arc has been
restored to the same state as in $\chi^h Z_J$.
\end{enumerate}
This procedure still works even for eccentric
2-circuits with only five vertices. These arise when $z_{i,j}=x_{i+1,j+1}$
for all $(i,j)\in\mathbb{Z}_2^2$ (matching Figure~\ref{eccentric} with
both curved lines replaced by one arc each).
The following information will be needed when analysing the flow.
\begin{lemma}
\label{eccentric-plus-shortcut}
Let $S$ be an eccentric $2$-circuit
with the labelling of (\ref{Slabels})
and let $S'$ be the normal $2$-circuit used to process $S$.
Suppose that $S'$ falls into case (Nb) or (Nc).
Then the following all hold:
\begin{enumerate}
\item[\emph{(i)}]
Neither of the arcs $(v,x_{0,1})$, $(x_{1,1},v)$
are involved in the eccentric switch.
\item[\emph{(ii)}]
Using the labelling from Figure~\ref{eccentric}, the shortcut arc
used to process $S'$ is $(z_{1,0},x_{1,0})$ and the shortcut
switch is $[x_{1,1} x_{1,0} z_{1,0} v]$.
\item[\emph{(iii)}] The
eccentric arc is involved in the shortcut switch and does not lie on the
1-circuit used to process $S'$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that the eccentric arc is $(z_{1,0},v)$.
The first statement is immediate as the eccentric switch processes
the alternating 4-cycle $z_{1,0} x_{1,1} x_{1,0} v$.
For the remainder of the proof, we use
labels $\hat{x}_{i,j}, \hat{y}_{i,j},\ldots$
to denote the labelling of $S'$ obtained as in (\ref{Slabels}).
See Figure~\ref{eccentric-detail}.
As $S$ is eccentric we have $y_{i,j+1}=x_{i,j}$ for all
$(i,j)\in\mathbb{Z}_2^2$.
By choice of the eccentric switch we have $\hat{x}_{i,j}= x_{i,j}$ and
$\hat{y}_{i,j}= y_{i,j}$ for $(i,j)\neq (1,0)$, while
$\hat{x}_{1,0} = z_{1,0}$.
\begin{center}
\begin{figure}[ht]
\psfrag{v}[c]{$v$}
\psfrag{x00}{$x_{0,0}$}\psfrag{x01}{$x_{0,1}$}\psfrag{x10}{$x_{1,0}$}
\psfrag{x11}{$x_{1,1}$} \psfrag{a00}{$z_{0,0}$} \psfrag{a01}{$z_{0,1}$}
\psfrag{a10}{$z_{1,0}$} \psfrag{a11}{$z_{1,1}$}
\psfrag{aa10}{$z_{1,0} = \hat{x}_{1,0}$}
\centerline{\includegraphics[scale=0.6]{eccentric-detail}}
\caption{An eccentric 2-circuit $\chi^h S$ (above) and the normal 2-circuit
$\chi^h S'$ used to process it (below)}
\label{eccentric-detail}
\end{figure}
\end{center}
Now
$z_{1,0}\neq x_{1,0}$ since $z_{1,0} x_{1,1} x_{1,0}$ is a contiguous substring of $S$.
Hence $(1,0)$ is the lexicographically least $(i,j)$ such that
$\hat{x}_{i,j}\neq \hat{y}_{i,j+1}$.
It follows that the shortcut arc is $(\hat{x}_{1,0},\hat{y}_{1,1})
= (z_{1,0},x_{1,0})$. Notice that the eccentric arc is incident
with the shortcut arc at $z_{1,0}$ (with the same orientation).
Furthermore, the shortcut switch involves
a switch to the alternating 4-cycle
\[ v \hat{x}_{1,1} \hat{y}_{1,1} \hat{x}_{1,0} = v x_{1,1} x_{1,0} z_{1,0} \]
which includes the eccentric arc.
Specifically, the switch is $[x_{1,1} x_{1,0} z_{1,0} v]$,
proving (ii).
Since the eccentric arc $(z_{1,0},v)$ is one of the arcs involved
in the shortcut switch,
it does not lie on the 1-circuit used to process $S'$.
This establishes (iii), completing the proof.
\end{proof}
\subsection{Processing a triangle }\label{s:triangle}
Now suppose that $S$ is a triangle, with vertices labelled $v_0,v_1,v_2$
where $v_0$ is the least vertex on $S$ and $(v_0,v_1)$ is an arc in
the current digraph $Z_J$.
Define the sets $\mathcal{W}^{(i,j)} =
\mathcal{W}^{(i,j)}(\{v_0,v_1,v_2\},Z_J)$ for
${(i,j)}\in \mathbb{Z}_2^2$.
There are two cases, depending on whether a useful neighbour of $S$
exists.
\begin{enumerate}
\item[(T1)]
First suppose that there exists a useful neighbour of $S$.
Let $x$ be the minimum useful neighbour of $S$, and set
$(i,h)$ according to the first condition in this list which is
satisfied by $x$:
\[ (i,h) = \begin{cases} (0,0) & \text{ if $x$ is an out-neighbour of
exactly one vertex of $S$,}\\
(0,1) & \text{ if $x$ is an out-neighbour of
exactly two vertices of $S$,}\\
(1,0) & \text{ if $x$ is an in-neighbour of exactly one
vertex of $S$,}\\
(1,1) & \text{ if $x$ is an in-neighbour of exactly two
vertices of $S$.}
\end{cases} \]
Then the sequence of three switches given by
LaMar~\cite[left half of Figure 2]{lamar}
can be used to process $S$. For completeness we describe these switches
here. Relabel the vertices of the triangle with $a$, $b$, $c$ so that
\begin{itemize}
\item $\zeta^i (a,x)\in A(\chi^h Z_J)$,
\item $\zeta^i (b,x)\not\in A(\chi^h Z_J)$, $\zeta^i (c,x)\not\in A(\chi^h Z_J)$,
\item $\zeta^{i}(a,b), \zeta^{i}(b,c), \zeta^{i}(c,a)\in A(\chi^h Z_J)$.
\end{itemize}
(Once $x,i,h$ are chosen using the above procedure, the labelling of
the triangle is uniquely determined.)
Then the sequence of switches
\[ \zeta^i \chi^h [axbc],\qquad \zeta^i \chi^h [bxca],\qquad
\zeta^i \chi^h [abcx]\]
processes the triangle and restores all arcs between $x$ and the
triangle to their original state. See Figure~\ref{useful-nb}
for the case $(i,h)=(0,0)$: the diagram for the other cases
can be obtained by reversing all arcs if $i=1$, and/or by exchanging
solid lines and dashed lines if $h=1$.
Call the arcs $\zeta^i (a,x)$, $\zeta^i (b,x)$,
$\zeta^i (c,x)$ the \emph{auxiliary arcs}.
\begin{center}
\begin{figure}[ht]
\psfrag{v0}{$a$}\psfrag{v1}{$b$}\psfrag{v2}{$c$}
\psfrag{x}{$x$}
\centerline{\includegraphics[scale=0.6]{useful-nb}}
\caption{Processing a triangle using a useful neighbour}
\label{useful-nb}
\end{figure}
\end{center}
Use this sequence of
switches to process the triangle, extending the canonical path as
\[ G = Z_0,\ldots, Z_J, \, Z_{J+1},\, Z_{J+2},\, Z_{J+3}.\]
\item[(T2)]
Suppose that there is no useful neighbour of $S$ in $Z_J$.
Then using Lemma~\ref{useful}, there must exist a useful arc
for $S$. Let $(x,y)$ be the lexicographically least such arc.
Recall that $(x,y)$ satisfies one of the properties (U1), (U2)
given just before Lemma~\ref{useful}. Define
\[ h = \begin{cases} 0 & \text{ if (U1) holds,} \\
1 & \text{ if (U2) holds.}
\end{cases}
\]
Then $(x,y)\in A(\chi^h\, Z_J)$ with
$x\in\mathcal{W}^{(h,h)}\cup\mathcal{W}^{(h,h+1)}$ and
$y\in\mathcal{W}^{(h,h)}\cup\mathcal{W}^{(h+1,h)}$.
Relabel the vertices of the triangle as $a$, $b$, $c$, where
$a=v_0$ and $(a,b)\in A(\chi^h Z_J)$. (Once $h$ is defined,
this labelling is completely determined.)
The sequence of switches given by LaMar~\cite[right side of Figure 2]{lamar}
will be used to process $S$. For completeness we give this
sequence of switches in our notation:
\[ \chi^h [x y a b], \quad \chi^h [a y b c], \quad
\chi^h [b y c a],\quad \chi^h [x b c y].\]
These switches are also displayed in Figure~\ref{useful-arc} in
the case that $h=0$: the diagram for $h=1$ can be obtained by
exchanging solid lines and dashed lines.
The arcs
$(x,y)$, $(x,b)$, $(a,y)$, $(b,y)$, $(c,y)$ are called
\emph{auxiliary arcs}.
Use this sequence of switches to process the triangle, extending
the canonical path as
\[ G = Z_0,\ldots, Z_J, \, Z_{J+1}, \, Z_{J+2}, \, Z_{J+3}, \, Z_{J+4}.\]
\begin{center}
\begin{figure}[ht]
\psfrag{v0}{$a$}\psfrag{v1}{$b$}\psfrag{v2}{$c$}
\psfrag{x}{$x$}\psfrag{y}{$y$}
\centerline{\includegraphics[scale=0.6]{useful-arcb}}
\caption{Processing a triangle using a useful arc}
\label{useful-arc}
\end{figure}
\end{center}
\end{enumerate}
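The effect of these switch sequences can be checked mechanically. The
following Python sketch (illustrative only) verifies case (T1) with
$(i,h)=(0,0)$; here a switch $[pqrs]$ deletes the arcs $(p,q)$, $(r,s)$ and
inserts $(p,s)$, $(r,q)$.
\begin{verbatim}
def apply_switch(arcs, p, q, r, s):
    """The switch [pqrs]: delete (p,q), (r,s); insert (p,s), (r,q)."""
    assert (p, q) in arcs and (r, s) in arcs
    assert (p, s) not in arcs and (r, q) not in arcs
    return (arcs - {(p, q), (r, s)}) | {(p, s), (r, q)}

# Case (T1) with (i,h) = (0,0): the triangle a -> b -> c -> a is
# present, together with the auxiliary arc (a,x); (b,x), (c,x) absent.
a, b, c, x = "a", "b", "c", "x"
arcs = {(a, b), (b, c), (c, a), (a, x)}
for switch in [(a, x, b, c), (b, x, c, a), (a, b, c, x)]:
    arcs = apply_switch(arcs, *switch)

# The triangle is reversed and all arcs between x and the triangle
# are restored to their original state.
assert arcs == {(b, a), (c, b), (a, c), (a, x)}
\end{verbatim}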
\section{Analysing the flow}\label{s:analysis}
We now analyse the multicommodity flow so that we can apply
Lemma~\ref{second-eigval} to give a bound on the second-largest
eigenvalue of the switch chain.
In this section we assume that $1\leq d = d(n)\leq n/2$ for all $n$.
This implies the general result for any $(d(n))$, by complementation
where necessary.
Fix a pairing $\psi\in\Psi(G,G')$ and let $\gamma_\psi(G,G')$ be
the canonical path from $G$ to $G'$ with respect to $\psi$.
Let $(Z,Z')$ be any transition on $\gamma_\psi(G,G')$, and
let $S$ be the raw 1-circuit
or raw 2-circuit which is currently being processed.
(That is, the transition $(Z,Z')$ is performed while processing $S$.)
Let $Z_J$ be the digraph on the canonical
path from $G$ to $G'$ just before the processing of $S$ began.
Any arc which does not belong to $S$ but which has distinct status
in $Z$ and $Z_J$ is called an \emph{interesting arc}
for $Z$ with respect to $(G,G',\psi)$.
(That is, the arc does not belong to $S$ but is present in $Z$ but
absent in $Z_J$, or vice-versa.)
The only arcs that can be interesting are:
\begin{itemize}
\item odd chords which are switched while processing a 1-circuit,
\item the shortcut arc and/or eccentric arc, switched while processing a
normal or eccentric 2-circuit,
\item auxiliary arcs which are switched while processing a triangle.
\end{itemize}
We will label an interesting arc by $-1$ (respectively, 2)
if it is absent (respectively, present) in $Z_J$
but present (respectively, absent) in $Z$. (The reason for this
choice of labels will be made clear shortly.)
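In other words (an illustrative sketch, with digraphs represented as sets of
arcs):
\begin{verbatim}
def interesting_arcs(Z, ZJ, S_arcs):
    """Arcs not on S whose status differs between Z and Z_J, with
    label -1 if absent in Z_J but present in Z, and label 2 if
    present in Z_J but absent in Z."""
    labels = {}
    for arc in Z ^ ZJ:            # symmetric difference of arc sets
        if arc not in S_arcs:
            labels[arc] = -1 if arc in Z else 2
    return labels
\end{verbatim}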
Interesting arcs play a key role in our analysis.
The following lemma describes the possible subdigraphs of $Z$ that
can be formed by interesting arcs in $Z$.
It proves that the labelled digraph consisting of the interesting arcs
is a subdigraph of one of the eight labelled digraphs shown in
Figure~\ref{f:zoo}, up to symmetries.
Here $\{ \mu, \nu\} = \{ -1, 2\}$ and $\{ \xi, \omega\} = \{ -1, 2\}$
independently, giving four symmetries obtained by exchanging these pairs.
Furthermore, $\zeta$ may also be applied to reverse the orientation
of all arcs. Hence each digraph shown in Figure~\ref{f:zoo} represents
up to eight possible digraphs. Note, the label for a given arc is shown
next to the head of that arc.
\begin{center}
\begin{figure}[ht]
\psfrag{a}{$\mu$}\psfrag{b}{$\nu$}
\psfrag{z}{$\xi$} \psfrag{x}{$\omega$}
\centerline{\includegraphics[scale=0.5]{zoo-new}}
\caption{Possible configurations of interesting arcs, up to symmetries}
\label{f:zoo}
\end{figure}
\end{center}
\begin{lemma}
Let $Z$ be a digraph which lies on the canonical path from $G$ to $G'$
with respect to the pairing $\psi\in\Psi(G,G')$.
There are at most five interesting arcs in $Z$ with respect to
$(G,G',\psi)$. The digraph
consisting of the interesting arcs in $Z$
is a subdigraph of one of the digraphs in Figure~\ref{f:zoo}.
If there are five interesting arcs then the following statements all hold:
\begin{enumerate}
\item[\emph{(i)}] There exists a vertex $w$ which
is the head (respectively, tail) of three interesting arcs,
and these three
interesting arcs do not all have the same label.
\item[\emph{(ii)}] There is a fourth interesting arc which has $w$ as tail
(respectively, head). Let $u$ be the head (respectively, tail) of the fourth
interesting arc.
\item[\emph{(iii)}] The fifth interesting arc is not incident with $w$ but
has $u$ as its head (respectively, tail).
\end{enumerate}
\label{zoo}
\end{lemma}
\begin{proof}
While processing a triangle, at most three interesting arcs are used,
namely the two or three auxiliary arcs.
It follows from Figures~\ref{useful-nb},~\ref{useful-arc}
that the auxiliary arcs always form a subdigraph of a configuration
from Figure~\ref{f:zoo}.
When processing a normal 2-circuit, the situation is very similar
to that in~\cite{CDG}, with at most four interesting arcs. Up to three
interesting arcs arise from the processing of a 1-circuit.
They are all odd chords, and hence are all incident
with the start-vertex of the 1-circuit with consistent orientation.
However, they do not all have the
same label. The fourth interesting arc corresponds to the shortcut arc,
which may be labelled $-1$ or 2 and may be incident with none, one or two
of the other interesting arcs (but not incident
with the start vertex of the 1-circuit).
The fifth possible interesting arc is the eccentric arc, in the case that we
are processing an eccentric 2-circuit $S$. Let $S'$ be the normal 2-circuit
containing the eccentric arc which is used to process $S$.
If $S'$ falls into case (Na) then the eccentric arc may be an
interesting arc for part (either the start or end) of the processing of $S_1$,
the 1-circuit which contains it. But the configuration of interesting
arcs in
this case looks just the same as those which may arise from the processing
of an ordinary 1-circuit, since the eccentric arc is involved in either the first
switch of the last phase or the last switch of the first phase, and
hence plays the same role as an interesting arc left over from a previous phase.
However, if $S'$ falls into case (Nb) or (Nc) then
by Lemma~\ref{eccentric-plus-shortcut} (iii),
the eccentric arc does not lie on the 1-circuit $S_1$ which arises from $S'$.
But the eccentric arc may be an interesting arc throughout the processing of $S_1$.
Hence $S_1$ may have up to five interesting arcs, including the
shortcut arc and
the eccentric arc. In this case the eccentric arc is incident with the
start-vertex $v$
of $S_1$ (which equals the start-vertex of $S$) and it has the \emph{opposite}
orientation to the other interesting arcs incident with $v$, if any.
Let $u$ be the endvertex of the eccentric arc which is not $v$.
If the shortcut arc is present then it must be incident
with the eccentric arc at $u$, with consistent orientation.
This completes the proof.
\end{proof}
Now identify a digraph with its $n\times n$ adjacency matrix (which has
zero diagonal), and define the $n\times n$ matrix $L$ by $L+Z=G+G'$.
Entries of $L$ belong to $\{ -1,\, 0,\, 1,\, 2\}$. We may also think of
$L$ as the complete digraph on $[n]$
with each arc labelled by the corresponding entry of $L$.
An arc in $L$ is called \emph{bad} if its label is $-1$ or 2.
Note that $L$ is independent of $\psi$. Call $L$ an \emph{encoding}
for $Z$ with respect to $(G,G')$. Note that an arc receives label
$-1$ if it is absent in both $G$ and $G'$ but is present in $Z$,
while an arc receives label 2 if it is present in both $G$ and $G'$ but
is absent in $Z$. Thus arcs in the symmetric difference $G\triangle G'$
are never bad arcs. Furthermore, every bad arc is an interesting arc,
and an interesting arc is bad if and only if it does not belong to
the symmetric difference $G\triangle G'$. This observation will be
used many times in our analysis. In particular, it means that the
digraph of bad arcs in an encoding $L$ for $Z$ is a subdigraph of
one of the digraphs in Figure~\ref{f:zoo}. This also explains
our choice of labels for interesting arcs, since a bad arc
with label 2 (respectively, $-1$) is also an interesting arc with label 2
(respectively, $-1$).
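In matrix form the encoding and its bad arcs are immediate to compute. A
minimal sketch (NumPy, with $G$, $G'$, $Z$ given as 0/1 adjacency matrices
with zero diagonal):
\begin{verbatim}
import numpy as np

def encoding(G, Gp, Z):
    """The encoding L defined by L + Z = G + G', together with its
    bad arcs (the entries labelled -1 or 2)."""
    L = G + Gp - Z
    bad = list(zip(*np.nonzero((L == -1) | (L == 2))))
    return L, bad
\end{verbatim}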
In the undirected setting~\cite[Lemma 1]{CDG} it is always possible to
uniquely recover $(G,G')$ if $(Z,Z')$, $L$ and $\psi$ are known.
We prove a slightly weaker result in the directed setting.
\begin{lemma}
Given $(Z,Z')$, $L$, $\psi$, there are at most four possibilities for
$(G,G')$ such that $(Z,Z')$ is a transition along the canonical path
from $G$ to $G'$ corresponding to $\psi$ and $L$ is an encoding for
$Z$ with respect to $(G,G')$.
\label{notquiteunique}
\end{lemma}
\begin{proof}
The matrix $G+G'$ equals $Z+L$.
From this matrix we can identify all arcs which are present in both
$G$ and $G'$ (entries with value 2 in $G+G'$) and all arcs which are absent
in both $G$ and $G'$ (entries with value 0 in $G+G'$). We can also identify
the symmetric difference $H=G\triangle G'$, corresponding to entries
with value 1 in $G+G'$.
It remains to assign colours blue and red to the arcs
of $H$ so that blue arcs come from $G$ and red arcs come from $G'$.
From the uncoloured version of $H$ together with $\psi$ we can construct
the circuit decomposition $\mathcal{C}$.
Let $\mathcal{S}$
be the sequence of raw 1-circuits and raw 2-circuits obtained by
decomposing the circuits in $\mathcal{C}$ in order, as described
in Section~\ref{s:circuit}. The elements of $\mathcal{S}$ are
pairwise arc-disjoint and their union is $H$.
Suppose that the transition $(Z,Z')$ deletes the arcs
$(\alpha,\beta)$, $(\delta,\gamma)$ and replaces them with
$(\alpha,\gamma)$, $(\delta,\beta)$.
Call $(\alpha,\beta), (\alpha,\gamma), (\delta,\beta), (\delta,\gamma)$
the \emph{switch arcs}.
We classify transitions along the canonical paths into three types as follows:
\begin{enumerate}
\item[] {\bf Type 1:} the transition is any step in the processing of
a 1-circuit used to process $S\in\mathcal{S}$.
At least one of the switch arcs belong to $S$.
(This includes the case of a raw 1-circuit, in which case the 1-circuit
equals $S$.)
\item[] {\bf Type 2:} the transition is a shortcut switch or
an eccentric switch used while processing the normal or eccentric
2-circuit $S\in \mathcal{S}$.
At least two of the switch arcs belong to $S$.
\item[] {\bf Type 3:} the transition is a step in the processing of
a triangle $S\in\mathcal{S}$.
At least one of the four switch arcs belong to $S$.
\end{enumerate}
In all cases, at least one of the switch arcs belongs to the
element $S\in\mathcal{S}$ currently being processed. Therefore,
there are at most four possibilities for $S$, namely, at most one
possibility for each switch arc. (This follows as elements of
$\mathcal{S}$ are pairwise arc-disjoint.)
Now fix one of the (at most four) possibilities for $S$.
We will show that given this choice (or guess) for $S$,
we can uniquely determine $(G,G')$ by colouring the edges of $H$.
Note that if $S$ is a 2-circuit, its labelling
(as in Figure~\ref{2circuit})
can be determined uniquely. Hence
we can determine whether $S$ is normal, eccentric or a triangle.
Furthermore, in the first two cases we can
identify exactly which arcs will be used as odd chords, shortcut arcs or
eccentric arcs during the processing of $S$.
We now claim that if $S$ is a triangle then we can
uniquely determine the useful neighbour $x$ or the useful arc
$(x,y)$ which is used to process $S$, and hence identify all auxiliary
arcs used while processing $S$. To see this, note that
when processing
a triangle, each switch involves either two or three vertices of the
triangle. If all three vertices of
the triangle are involved in the switch then the other vertex is either
a useful neighbour, or an endvertex of a useful arc.
Fix one orientation around the triangle and call it ``clockwise'',
with the opposite orientation called ``anticlockwise''.
Consider the number of clockwise
and anticlockwise arcs on the triangle in $Z$ and $Z'$: if they are equal
in $Z$ or in $Z'$ then we are using a useful arc
and otherwise we are using a useful neighbour.
(See Figures~\ref{useful-nb},~\ref{useful-arc}.)
In the latter situation it is easy to identify the useful neighbour $x$:
it is the only
vertex involved in the switch which does not belong to $S$.
This determines the auxiliary arcs (their orientation matches
the orientation of the switch arcs at $x$).
Now suppose that we are using a useful arc $(x,y)$.
Then we are in case (T2), which means that no useful neighbour
of $S$ existed at the start of processing $S$. Then $y$ is the only
vertex incident with the switch arcs which does not belong to $S$,
and $x$ and $y$
are the unique vertices in $Z$ which are useful neighbours of $S$.
That is, $x$ and $y$ are the only vertices not in $S$ which
do not belong to the set $\cup_{(i,j)\in\mathbb{Z}_2^2}
\mathcal{W}^{(i,j)}(S,Z)$. If only two
vertices of the triangle are involved in the switch then the unique
switch arc which is not incident with either of these vertices is
the useful arc, and the switch is the first or last in processing
$S$. This shows that
all auxiliary arcs for $S$ can be identified, as claimed.
Suppose that $S$ comes from the decomposition of the circuit
$C_r\in\mathcal{C}$.
The digraph induced by all interesting arcs contains no
circuits, as can be seen from Figure~\ref{f:zoo}.
Hence for any $\ell\neq r$ we can find at least
one arc on $C_\ell$ which is not an interesting arc
for $S$: call this a \emph{helpful arc} for $C_\ell$.
Colour the helpful arc for $C_\ell$ blue if it does not belong
to $Z$ and $\ell < r$, or if it does belong to $Z$ and $\ell>r$; otherwise
colour it red. Then the colouring of the rest of $C_\ell$ is forced,
since colours alternate around the circuit.
In the same way we can assign colours to the
arcs of every raw 1-circuit and raw 2-circuit obtained in
the decomposition of $C_r$, other than the element $S\in\mathcal{S}$
which, by our assumption, is being switched in the current transition $(Z,Z')$.
It remains to explain how to assign colours to the arcs of $S$.
If $S$ is a triangle then $(Z,Z')$ is a Type 3 transition.
By observing the number of clockwise and anticlockwise arcs in $Z$ and
$Z'$, we can determine the orientation of the triangle in $G$ and in
$G'$ and hence assign colours to the arcs in $S$.
Hence for the remainder of the proof we can assume that $S$ is
either a 1-circuit or a normal or eccentric
2-circuit. Therefore the vertices $\alpha, \beta, \gamma, \delta$
all belong to $S$ and
without loss of generality $\alpha = \min\{ \alpha,\beta,\gamma,\delta \}$
is the start-vertex
of $S$ (since the start-vertex is involved in every switch).
The argument for 1-circuits and normal 2-circuits is very
similar to that given in~\cite{CDG}.
First suppose that $(Z,Z')$ is a Type 1 transition, performed while
processing the 1-circuit $S'$.
Now $S'$ may be a raw 1-circuit (in which case $S'=S\in\mathcal{S}$),
or $S'$ may have arisen while processing a raw (normal or eccentric)
2-circuit $S$. Hence $S'$ may contain a shortcut arc
(but note, no 1-circuit contains an eccentric arc, by
Lemma~\ref{eccentric-plus-shortcut}). The arcs of
$S'$ can be partitioned into sections, separated from each other by
two consecutive arcs that are either both in $Z$ or both absent from $Z$.
Each section contains
at least two arcs, and hence at least one arc which is not the shortcut arc.
Then at least
one arc of $S'$ is actually switched in the current transition, which allows
us to label the section containing that arc as switched, and alternately
label the remaining sections around $S'$ as switched or unswitched.
Then colour an arc of $S'$ blue if it belongs to $Z$ and is unswitched or it is
absent from $Z$ and is switched, and colour an arc of $S'$ red if it belongs
to $Z$ and is switched or it is absent from $Z$ and is unswitched.
Finally, if $S'$ is not raw but arose from a 2-circuit $S$, there is
a unique way to colour the remaining arcs of $S$, keeping the
colours alternating.
For the remainder of the proof we assume that $(Z,Z')$ is a Type 2 transition
for $S$; that is, a shortcut switch or an eccentric switch.
Let $Z_J$ denote the digraph on the canonical path from $G$ to $G'$
just before we start decomposing $S$.
We consider three subcases.
Firstly, suppose that $(Z,Z')$ is an eccentric switch. Then we know that
the arcs $(v,x_{0,1})$ and $(x_{1,0},x_{1,1})$ have the same
status in $Z_J$. The former is not involved in the eccentric switch,
by Lemma~\ref{eccentric-plus-shortcut} (i), while the latter is involved
in the eccentric switch. Hence if these two arcs have matching
status in $Z$ then we are in
Case (Ea) and the current transition is the first in processing $S$.
Colour the arcs of $S$ according to $Z$: arcs of $S\cap Z$
should be coloured blue and the remaining arcs of
$S$ should be coloured red.
If these arcs have opposite status in $Z$ then we are in case (Eb)
and the current transition is the last in processing $S$.
Colour the arcs of $S$ according to $Z'$: arcs of $S\cap Z'$
should be coloured red and the remaining arcs of $S$ should be
coloured blue.
We proceed similarly if $(Z,Z')$ is a Type 2 transition for $S$ which is
a shortcut switch. For now, assume that $S$ is a normal 2-circuit,
so the shortcut switch does not involve the eccentric arc (if any).
If the shortcut arc
is $\zeta^i (y_{i,j+1},x_{i,j})$ then the arcs $\zeta^i(x_{i+1,j},v)$
and $\zeta^i (v,x_{i,j})$ have matching status
in $Z_J$. The former arc is not involved in the shortcut switch but the latter arc is.
Hence if these two arcs still have matching status in $Z$ then we are in
Case (Nb) and the current transition is the first in processing $S$.
Colour the arcs of $S$ according to $Z$, as described in the
previous
paragraph. If these two arcs have opposite status in $Z$ (one absent
and one present)
then we have already processed the 1-circuit using the shortcut arc,
so we are in case (Nc) and the current transition is the last in
processing $S$. Colour the arcs of $S$ according to $Z'$, as
described in the previous paragraph.
The third subcase is that $(Z,Z')$ is a shortcut switch
which also involves an eccentric arc. Then $S$ is an eccentric 2-circuit
which has been decomposed into an eccentric switch and a normal
2-circuit $S'$, where $S'$ contains the eccentric arc. The current
transition is the shortcut switch which has arisen while processing
$S'$.
Now the arcs $(v, x_{0,1})$ and $(x_{1,1},v)$
have matching status in $Z_J$.
From Lemma~\ref{eccentric-plus-shortcut} (i) we know that
neither of these arcs are involved in the eccentric switch.
The former arc is not involved in the shortcut switch but the latter arc is,
by Lemma~\ref{eccentric-plus-shortcut} (ii).
Hence, these two arcs also have matching status at the start of processing
$S'$, and we
can colour the arcs of $S$ according to $Z$ if these arcs have matching
status in $Z$, and colour the arcs of $S$ according to $Z'$ otherwise.
This completes the proof.
\end{proof}
Let $L(\alpha,\beta)$ denote the label of arc $(\alpha,\beta)$
in the encoding $L$.
The arc-reversal operator $\zeta$ acts on an
encoding $L$ by mapping $L$ to its transpose $\zeta L = L^T$.
If $\zeta^i L(\alpha,\beta)=2$ and $\zeta^i L(\alpha,\gamma)=-1$ for some
$i\in \{0,1\}$ then $(i,\alpha,\beta,\gamma)$ is called a
\emph{handy tuple} with \emph{centre}
$\alpha$. If $(i,\alpha,\beta,\gamma)$ is handy and at
most one of $\beta$, $\gamma$ is the head, when $i=0$ (respectively, the
tail, when $i=1$)
of two bad arcs with distinct labels then $(i,\alpha,\beta,\gamma)$ is
said to be \emph{very handy}.
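For illustration, handy tuples can be enumerated by brute force; the helper
\texttt{lab} below reads $\zeta^i L$, that is, $L$ itself when $i=0$ and its
transpose when $i=1$ (a sketch only, not used in the proofs):
\begin{verbatim}
def handy_tuples(L):
    """All handy tuples (i, alpha, beta, gamma) of the encoding L:
    zeta^i L(alpha, beta) = 2 and zeta^i L(alpha, gamma) = -1."""
    n = len(L)
    found = []
    for i in (0, 1):
        lab = (lambda a, b: L[a][b]) if i == 0 else \
              (lambda a, b: L[b][a])
        for alpha in range(n):
            twos = [b for b in range(n) if lab(alpha, b) == 2]
            ones = [g for g in range(n) if lab(alpha, g) == -1]
            found += [(i, alpha, b, g) for b in twos for g in ones]
    return found
\end{verbatim}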
We now collect together some structural information about bad arcs in encodings.
\begin{lemma}
\label{structure}
Given $G, G'\in \Omega_{n,d}$ with symmetric difference $H = G\triangle G'$,
suppose that $Z$ is a digraph on the canonical path from $G$ to $G'$
with respect to some pairing $\psi$. Let $L$ be the corresponding
encoding, defined by $L+Z=G+G'$.
Then the following statements all hold.
\begin{enumerate}
\item[\emph{(i)}] Viewed as the arcs of a labelled digraph, the set of
bad arcs in $L$ forms a subdigraph of one of the digraphs given in
Figure~\ref{f:zoo}.
\item[\emph{(ii)}] If $L$ contains a handy tuple then $L$ contains
a very handy tuple.
\item[\emph{(iii)}] If there are five bad
arcs in $L$ then there exists a very handy tuple $(i_1,\alpha_1,\beta_1,\gamma_1)$,
and a handy tuple
$(i_1,\alpha_2,\beta_2,\gamma_2)$ in $L$ such that $\alpha_1\neq \alpha_2$
and
\begin{equation}
\label{independent}
\{ \zeta^{i_1} (\alpha_1,\beta_1),\, \zeta^{i_1} (\alpha_1,\gamma_1)\}
\cap \{ \zeta^{i_2} (\alpha_2,\beta_2),\, \zeta^{i_2} (\alpha_2,\gamma_2) \}
= \emptyset.
\end{equation}
\item[\emph{(iv)}] If there are four bad arcs in $L$ then there is at
least one handy tuple in $L$.
\item[\emph{(v)}] If $d=1$ then
no arc in $H$ is incident with a bad arc with label $2$.
If $L$ has a bad arc with label $2$ which is not the only bad arc in $L$
then $L$ has a handy tuple and $L$ has at most three bad arcs,
exactly one of which has label $2$.
Furthermore, if $L$ has three bad arcs, exactly one of which has label
$2$ then each endvertex of the bad arc with label $2$ is the centre of
a handy tuple in $L$.
\item[\emph{(vi)}] If $d=2$ then every vertex which has nonzero degree in
$H$ is the head of at most one bad arc with label $2$ and is the
tail of at most one bad arc with label $2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Every bad arc is interesting, so (i) follows immediately from Lemma~\ref{zoo}.
Statements (ii)--(iv) follow from (i), by inspection of Figure~\ref{f:zoo}.
Now the head (respectively, tail) of a bad arc is also the head
(respectively, tail) of an arc in $G-G'$ and an arc in $G'-G$,
unless the bad arc is the useful arc used to process a triangle in case
(T2). This follows from the definition of odd chords, shortcut arc,
eccentric arc and auxiliary arcs in
Sections~\ref{s:1circuit} and~\ref{s:normal}--\ref{s:triangle}.
Hence (vi) and the first statement of (v) holds,
since a bad arc with label 2 is present in both $G$ and $G'$.
Furthermore, inspection of Figure~\ref{useful-arc}
shows that the remaining statements of (v) hold, completing the proof.
\end{proof}
The notion of an encoding is now generalised to mean any $n\times n$
matrix $L$ with entries in $\{ -1, 0, 1, 2\}$ such that every row
and column sum equals $d$.
Given $Z\in \Omega_{n,d}$, an encoding $L$ is called
\emph{$Z$-valid} if every entry of $L+Z$
belongs to $\{ 0,1,2\}$ and $L,Z,H$ satisfy
statements (i)--(vi) of Lemma~\ref{structure}, where $H$
is the digraph defined by the entries of $L+Z$ which equal 1.
We also define the set $\mathcal{F}(L)$ of all bad arcs in $L$ by
\[ \mathcal{F}(L) = \{ (i,j)\in [n]^2 \mid L(i,j)\in \{ -1,2\}\}.\]
\begin{lemma}
Let $Z\in\Omega_{n,d}$ and let $L$ be a $Z$-valid encoding.
Suppose that $L'$ is another encoding such that
$\mathcal{F}(L')\subseteq \mathcal{F}(L)$. Then $L'$ is also $Z$-valid.
\label{Zvalid}
\end{lemma}
\begin{proof}
If $L'(i,j) = -1$ then $L(i,j)=-1$ and hence $Z(i,j)=1$, as $L$ is $Z$-valid.
Similarly, if $L'(i,j)=2$ then $L(i,j)=2$ and hence $Z(i,j)=0$.
This shows that every entry of $L'+Z$ belongs to $\{ 0,1,2\}$.
Checking properties (i)--(vi) of Lemma~\ref{structure} we see that
they all hold for $L',Z,H'$, completing the proof.
\end{proof}
Switches can be applied to encodings, as follows.
By definition, the sum of
all labels on arcs with head $v$ add up to $d$, and the sum of all labels
on arcs with tail $v$ add up to $d$, for all vertices $v$.
If $x,y,z,w$ are vertices with $L(x,y) > -1$, $L(w,z) > -1$,
$L(x,z) < 2$ and $L(w,y) < 2$ then we may perform the switch
$[xywz]$ by decreasing
$L(x,y)$ and $L(w,z)$ by one and increasing $L(x,z)$ and $L(w,y)$ by one,
giving a new encoding $L'$.
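A sketch of this operation (with the encoding stored as a NumPy integer
matrix), checking that the row and column sums, and hence the encoding
property, are preserved:
\begin{verbatim}
import numpy as np

def encoding_switch(L, x, y, w, z):
    """Perform the switch [xywz] on an encoding L: decrease L[x,y]
    and L[w,z] by one, increase L[x,z] and L[w,y] by one."""
    assert L[x, y] > -1 and L[w, z] > -1
    assert L[x, z] < 2 and L[w, y] < 2
    L2 = L.copy()
    L2[x, y] -= 1; L2[w, z] -= 1
    L2[x, z] += 1; L2[w, y] += 1
    # Row and column sums are unchanged, so L2 is again an encoding.
    assert (L2.sum(axis=0) == L.sum(axis=0)).all()
    assert (L2.sum(axis=1) == L.sum(axis=1)).all()
    return L2
\end{verbatim}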
\begin{lemma}
Let $Z\in\Omega_{n,d}$.
Given a $Z$-valid encoding, one can obtain a digraph (with no bad arcs) using
at most three switches.
\label{fix}
\end{lemma}
\begin{proof}
Let $L$ be a $Z$-valid encoding and let $H$ be the digraph given by
the entries of $L+Z$ which equal 1.
First suppose that $L$ contains a handy tuple. Then $L$ contains
a very handy tuple, by Lemma~\ref{structure} (ii).
If $L$ contains a very handy tuple $(i_1,\alpha_1,\beta_1,\gamma_1)$
and a handy tuple $(i_2,\alpha_2,\beta_2,\gamma_2)$ such that
$\alpha_1\neq \alpha_2$
and (\ref{independent}) holds, then we choose
$(i,\alpha,\beta,\gamma)$ to be the very handy tuple
$(i_1,\alpha_1,\beta_1,\gamma_1)$.
Otherwise, let $(i,\alpha,\beta,\gamma)$ be any
very handy tuple in $L$.
If $i=0$ (respectively, $i=1$) then the sum of the labels
on the bad arcs with $\beta$ as head (respectively, tail) is strictly greater
than the sum of the labels on the bad arcs with $\gamma$ as head (respectively,
tail).
By construction, each row of $L$ adds up to $d$ and each column of $L$ adds up to $d$.
Hence $\gamma$ is the head (respectively, tail)
of strictly more good arcs (with label 1) than $\beta$.
It follows that there exists a vertex $\delta$ such that
$\zeta^i L(\delta,\gamma) = 1$ and $\zeta^i L(\delta, \beta)= 0$.
Now we can perform the switch $\zeta^i [\alpha\beta\delta\gamma]$
to give an encoding $L'$ with
$\zeta^i L'(\alpha,\beta)=\zeta^i L'(\delta,\beta)=1$
and $\zeta^i L'(\alpha,\gamma)=\zeta^i L'(\delta,\gamma)=0$.
Note that $L'$ is a $Z$-valid encoding by Lemma~\ref{Zvalid}, and that
$|\mathcal{F}(L')|=|\mathcal{F}(L)|-2$.
We call this operation a
$(-1,2)$-\emph{switch}.
Next suppose that no vertex is the head (respectively, tail) of two bad
arcs with distinct labels, but that an arc exists in $L$ with label 2.
By Lemma~\ref{structure} (iii), (iv), there are at most three bad arcs in $L$.
Choose vertices $\alpha,\beta$ such that for some $i\in\{ 0,1\}$
we have $\zeta^i L(\alpha,\beta)=2$
and if $i=0$ (respectively, $i=1$) then $\alpha$ is the tail
(respectively, head) of exactly one bad arc.
(That such an $\alpha$ exists follows from Lemma~\ref{structure} (i).)
Let $U$ be the set of vertices $x\neq \alpha$ with $\zeta^i L(\alpha,x)=0$.
Since $\alpha$ is the tail (respectively, head) of exactly $d-2$ arcs
labelled 1 and one arc labelled 2, it follows that $|U| = n-d\geq d$.
We claim that there exists a vertex $\gamma\in U$ which is not the
head (respectively, tail) of a bad arc with label 2.
If $d\geq 3$ then there are at most
3 vertices which are at the head (respectively, tail)
of a bad arc labelled 2, and one of these is $\beta$.
Since $\beta\not\in U$ and $|U|\geq 3$, we can choose a vertex
$\gamma\in U$ which is not the head of a bad arc with label 2, as claimed.
If $d=2$ then by Lemma~\ref{structure} (vi),
each vertex in $H$ is
head (respectively, tail) of at most one bad arc labelled 2. Hence there are
at most 2 bad arcs labelled 2 in $L$, by Lemma~\ref{structure} (i).
Therefore at most one vertex other than $\beta$ is the
head (respectively, tail) of a bad arc labelled 2 in $L$.
The claim then follows since $\beta\not\in U$ and $|U|\geq 2$.
If $d=1$ then by Lemma~\ref{structure} (v) there is
exactly one bad arc in $L$, namely $\zeta^i (\alpha,\beta)$.
Hence we can let $\gamma$ be any element of $U$ since $\beta\not\in U$,
and the claim follows as $|U|\geq 1$ in this case.
Now $\beta$ is the head (respectively, tail) of at most $d-2$ good arcs
and $\gamma$ is the head (respectively, tail) of at least $d$ good arcs.
Hence we can choose a vertex $\delta$ such that $\zeta^i L(\delta,\beta)=0$
and $\zeta^i L(\delta,\gamma)=1$. Perform the switch
$\zeta^i[\alpha \beta\delta\gamma]$ to produce an
encoding $L'$ with $\zeta^i L'(\alpha,\beta)=\zeta^i L'(\alpha,\gamma)
=\zeta^i L'(\delta,\beta)=1$ and $\zeta^i L'(\delta,\gamma)=0$.
Then $L'$ is $Z$-valid by Lemma~\ref{Zvalid}, and
$|\mathcal{F}(L')| = |\mathcal{F}(L)| - 1$.
Call this operation a 2-\emph{switch}.
Finally, suppose that the only remaining bad arcs are labelled $-1$.
Let $\alpha$ and $\gamma$ be vertices such that $\zeta^i L(\alpha,\gamma)=-1$
for some $i\in \{0,1\}$,
choosing $\alpha$ to be a vertex at the tail, when $i=0$
(respectively head, when $i=1$) of
two bad arcs with label $-1$, if such a vertex exists.
Note that when $L$ contains three bad arcs with label $-1$ then
such a choice of $\alpha$ exists, by Lemma~\ref{structure} (i).
We claim that there exists a vertex $\beta$ such that
$\zeta^i L(\alpha,\beta)=1$ but $\beta$ is not the head, when $i=0$
(respectively tail, when $i=1$) of any bad arc.
To see this, note that there are at least $d+1\geq 2$ choices for $\beta$,
and there is at most one vertex which is at the head (respectively, tail)
of a bad arc which is not incident with $\alpha$, by choice
of $\alpha$. Hence we can avoid this vertex when choosing $\beta$,
proving the claim.
Then $\beta$ is the head (respectively, tail) of exactly $d$ good arcs,
while $\gamma$ is the head (respectively, tail) of at least $d+1$ good arcs.
Hence there
is at least one way to choose a vertex $\delta$ such that
$\zeta^i L(\delta,\beta)=0$ and $\zeta^i L(\delta,\gamma)=1$. Perform the
switch $\zeta^i [\alpha\beta\delta\gamma]$
to produce an encoding $L'$, with $\zeta^i L'(\alpha,\beta) =
\zeta^i L'(\delta,\gamma) = \zeta^i L'(\alpha,\gamma)=0$,
$\zeta^i L'(\delta,\beta)=1$.
Again Lemma~\ref{Zvalid} shows that $L'$ is $Z$-valid, and
$|\mathcal{F}(L')| = |\mathcal{F}(L)| - 1$.
Call this operation a $(-1)$-\emph{switch}.
If the original encoding $L$ has five bad arcs then by
Lemma~\ref{structure} (iii),
we can find a very handy tuple $(i_1,\alpha_1,\beta_1,\gamma_1)$
in $L$ and perform the $(-1,2)$-switch $\zeta^{i_1} [\alpha_1\beta_1\delta_1\gamma_1]$,
where $\delta_1$ is a vertex found using the procedure above.
It follows from (\ref{independent}) and Lemma~\ref{structure} (i)
that $(i_2,\alpha_2,\beta_2,\gamma_2)$
is a very handy tuple in the resulting $Z$-valid encoding $L'$. Hence we may
perform the $(-1,2)$-switch $\zeta^{i_2} [\alpha_2\beta_2\delta_2\gamma_2]$
to transform $L'$ into the $Z$-valid encoding $L''$ with at most one bad arc.
At most one further switch is required to transform $L''$ into an encoding
with no bad arcs. Thus at most 3 switches are needed to process $L$
when $L$ has five bad arcs.
Similarly, if $L$ has four bad arcs then by Lemma~\ref{structure} (iv),
we can transform $L$ into a $Z$-valid encoding $L'$ with at
most two bad arcs, using a $(-1,2)$-switch. At most two further switches
are needed to produce an encoding with no bad arcs. Thus at most 3 switches
are needed to process $L$ when $L$ has four bad arcs. Clearly,
if $L$ has at most
3 bad arcs then at most 3 switches are required. This completes the proof.
\end{proof}
For $Z\in\Omega_{n,d}$ let $\mathcal{L}(Z)$ be the set of all
$Z$-valid encodings.
We obtain the following upper bound on $|\mathcal{L}(Z)|$ using
a relatively simple proof. It is possible that an improved bound
can be found using a more careful analysis, probably saving a
factor of $n$.
\begin{lemma}
For any $Z\in\Omega_{n,d}$ we have
\[ |\mathcal{L}(Z)| \leq 25\, d^6 n^6\, |\Omega_{n,d}|.\]
\label{poly}
\end{lemma}
\begin{proof}
Fix $Z\in\Omega_{n,d}$ and let $L\in\mathcal{L}(Z)$ be a $Z$-valid encoding.
By Lemma~\ref{fix} there exists a sequence
\[ L = L_0, L_1, \ldots, L_r = A\]
where $A\in \Omega_{n,d}$ is a digraph with no bad arcs, $r\leq 3$
and each of
$L_1,\ldots, L_r$ is $Z$-valid.
We can turn this into a function
$\varphi:\mathcal{L}(Z)\to\Omega_{n,d}$ by performing these switches
in a canonical way: as in Lemma~\ref{fix} perform all $(-1,2)$-switches first,
then all 2-switches, then all $(-1)$-switches, following the extra conditions
described in Lemma~\ref{fix} and breaking ties using lexicographic ordering
on the 5-tuple $(i,\alpha,\beta,\gamma,\delta)$.
It suffices to prove that $|\varphi^{-1}(A)|\leq 25\, d^6\, n^6$
for all $A\in\Omega_{n,d}$.
Now fix $A\in\Omega_{n,d}$. Define a
\emph{reverse $X$-switch}
to be the reverse of an $X$-switch, for $X \in \{(-1,2),\, -1,\, 2\}$.
For an upper bound we count all encodings
which can be obtained from $A$ using at most three reverse switches,
regardless of whether $A$ is the canonical image of that encoding under
$\varphi$. We will perform the reverse switchings in order: first
the reverse $(-1)$-switches, if any, then any reverse 2-switches and
finally any reverse $(-1,2)$-switches.
Note that a reverse switching alters four entries of the current encoding,
none of which are bad entries. So a bad arc created by
a reverse switch will never be changed by a later reverse switch.
Fix an encoding $B\in\mathcal{L}(Z)$ (which may not have any bad arcs).
Let $N_X(B)$ be the number of distinct 5-tuples
$(i,\alpha,\beta,\gamma,\delta)$ which define a
reverse $X$-switch that may be performed in $B$,
for $X\in\{ (-1,2),\, -1,\, 2\}$.
The result of each of the reverse switches counted by $N_X(B)$ is
a $Z$-valid encoding.
Our next task is to calculate upper bounds on $N_X(B)$ which hold for
all encodings $B\in\mathcal{L}(Z)$.
We only perform $(-1)$-switches on encodings $B\in\mathcal{L}(Z)$ which
have no bad arc with label 2. For such encodings we claim that
\begin{equation}
\label{N1}
N_{-1}(B) \leq 2 d^2 n(n-2).
\end{equation}
With notation as in Lemma~\ref{fix}, the factor of 2 counts the two
choices of orientation $i\in \{ 0,1\}$.
We prove the bound assuming that $i=0$, and the proof for $i=1$
follows by symmetry. There are $n$ choices for vertex $\alpha$, and
$d$ choices for
$\gamma$ since $\zeta^i (\alpha,\gamma) \in A(Z)$ as $B$ is $Z$-valid.
Then choose $\beta\neq \alpha$
so that $B(\alpha,\beta)=0$ and $\beta$ is not the head of any bad arc.
There are at most $n-2$ choices for $\beta$ since
$\beta\not\in\{ \alpha,\gamma\}$.
Then there are $d$ choices for $\delta$
such that $B(\delta,\beta)=1$,
since $\beta$ is the head of exactly $d$ good arcs.
This gives the claimed bound on $N_{-1}(B)$ when $B$ has no bad
arcs labelled 2.
Now suppose that $B\in\mathcal{L}(Z)$
may contain bad arcs with distinct labels, but no vertex is the
head (respectively, tail) of two bad arcs with distinct labels in $B$.
We also ensure that the reverse 2-switchings that we perform
never create any such pair of bad arcs, in order to maintain the
canonical order in which forward switches are performed.
We claim that
\begin{equation}
\label{N2}
N_2(B) \leq 2d (d-1)^2 n.
\end{equation}
The factor of 2 counts the two choices of orientation $i\in \{0,1\}$.
We prove the bound assuming that $i=0$, and the proof for $i=1$
follows by symmetry.
There are at most $n$ choices for $\alpha$ which is not the tail of a
bad arc labelled $-1$.
Then distinct out-neighbours
$\beta$, $\gamma$ of $\alpha$ in $B$ can be chosen in at most $d(d-1)$ ways
such that $\beta$ is not the head of a bad arc labelled $-1$
and $\gamma$ is not the head of a bad arc labelled 2.
(Note, $\alpha$ is the tail of at most $d$ good arcs, since $\alpha$
is not the tail of any bad arc labelled $-1$.)
Then there are at most $d-1$ choices for
a neighbour $\delta$ of $\beta$ in $B$,
since $\beta$ is the head of at most $d$
good arcs. This gives the claimed bound on $N_2(B)$.
Finally, we claim that for all $B\in\mathcal{L}(Z)$ we have
\begin{equation}
\label{N12}
N_{(-1,2)}(B) \leq 2d^2(d+1) n.
\end{equation}
Again, the factor of 2 counts the two choices of orientation $i\in\{0,1\}$
and we assume $i=0$ below, without loss of generality.
There are $n$ ways to choose a vertex $\alpha$ which may be the
tail of at most one bad arc in $B$.
There are $d$ choices for $\gamma$, as $B$ is $Z$-valid so
$\zeta^i(\alpha,\gamma)\in A(Z)$.
Then there are at most $d+1$ choices for $\beta$ such that
$\beta$ is an out-neighbour of $\alpha$ and is not the head of any arc
labelled $-1$. (There are at most $d$
choices for $\beta$ if there is no bad arc incident with $\alpha$ in $B$.)
Finally, there are at most $d$ choices for $\delta\neq\alpha$
such that $B(\delta,\beta)=1$,
since $\beta$ is the head of at most $d$ good arcs.
(The $d$ here arises since $\beta$ may itself be the head of at most one
bad arc in $B$,
and the bad arc may be labelled $-1$.)
This gives the claimed bound on $N_{(-1,2)}(B)$.
Each sequence of reverse switches which may arise is given a type,
defined by the corresponding sequence of labels in $\{ -1, 2, (-1,2)\}$.
It follows from the proof of Lemma~\ref{fix}
that the only types of reverse switchings which occur are given by
the following 9 sequences and all distinct subsequences of these
(including the empty sequence):
\[
\begin{array}{lll}
[\ -1,\ (-1,2),\ (-1,2)\ ], & [\ 2,\ (-1,2),\ (-1,2)\ ], & [\ -1,\ -1,\ (-1,2)\ ], \\{}
[\ -1,\ 2,\ (-1,2)\ ], & [ \ 2,\ 2,\ (-1,2)\ ], & [\ -1,\ -1,\ -1\ ],\\{}
[\ -1,\ -1,\ 2\ ], & [\ -1,\ 2,\ 2\ ], & [\ 2,\ 2,\ 2\ ].
\end{array}
\]
This gives 19 possible types in all: the nine sequences above, six
distinct subsequences of length 2, three of length 1, and the empty sequence.
We calculate the contribution of a type by simply multiplying the upper
bounds obtained in (\ref{N1})--(\ref{N12}) corresponding to each reverse
switch in the sequence.
(It is at this step that a more careful analysis may lead to an
improved bound, but we are satisfied by the bound given by this simple
calculation.)
For example, the contribution from the type $[\ -1,\ (-1,2),\ (-1,2)]$ is
\[ 2d^2 n(n-2) (2d^2(d+1)n)^2 = 8 d^6 (d+1)^2 n^3(n-2).\]
Finally we simply sum the contribution from each of the 19 types
and find that the resulting expression is bounded above by
$25 d^6 n^6$, using the inequalities $1\leq d\leq n/2$.
This shows that
\[ |\varphi^{-1}(A)| \leq 25\, d^6 n^6, \]
completing the proof.
\end{proof}
For each pair $(G,G')$ of distinct digraphs in $\Omega_{n,d}$, let
$\mathcal{P}_{G,G'}$ be the set of $|\Psi(G,G')|$ canonical paths
which we have defined from $G$ to $G'$, one for each pairing
$\psi\in\Psi(G,G')$. Let $\mathcal{P} = \cup_{G\neq G'} \mathcal{P}_{G,G'}$.
Define
\[ f(\gamma) = |\Omega_{n,d}|^{-2}\, |\Psi(G,G')|^{-1}\]
for each path $\gamma\in\mathcal{P}_{G,G'}$. Then
\[ \sum_{\gamma\in\mathcal{P}_{G,G'}} f(\gamma) = |\Omega_{n,d}|^{-2}
= \pi(G)\,\pi(G')\]
where $\pi$ is the stationary distribution of the Markov chain,
which is uniform on $\Omega_{n,d}$. Thus $f:\mathcal{P}\to [0,\infty)$
is a flow. We want to apply Lemma~\ref{second-eigval}. First we bound
$f(e)$ for all transitions $e$ of the Markov chain.
\begin{lemma}
For any transition $e=(Z,Z')$ of the Markov chain,
\[ f(e) \leq 100\, d^{22}\, n^6\, |\Omega_{n,d}|^{-1}.\]
\label{load}
\end{lemma}
\begin{proof}
Fix a transition $e=(Z,Z')$ of the Markov chain.
Let $(G,G')$ be a pair of distinct digraphs in $\Omega_{n,d}$
and suppose that $e$ lies on $\gamma_\psi(G,G')$, the canonical
path from $G$ to $G'$ corresponding to the pairing $\psi\in\Psi(G,G')$.
From $Z$ and $(G,G')$ we can construct $L$ and the digraph
$H=Z\triangle L = G\triangle G'$. We colour arcs of $H$ green
if they belong to $Z$ and yellow if the corresponding entry in
$L$ is 1.
(Recall that the symmetric
difference $H$ consists of those arcs with entry 1 in $L+Z = G+G'$.)
From the pairing $\psi$ we obtain the circuit decomposition
$\mathcal{C}$ of $H$, with colours alternating green, yellow
almost everywhere.
A vertex $x$ is \emph{bad} with respect
to $\psi$ if two arcs of the same colour are paired at $x$ under $\psi$.
If a vertex is not bad it is called \emph{good}.
Every bad vertex
lies on the circuit currently being processed. Specifically, bad
vertices may only be found incident to interesting arcs.
Lemma~\ref{zoo} shows that there are at most
five interesting arcs and at most six potentially bad vertices.
A yellow-yellow
or green-green pair at a bad vertex $x$ is called a \emph{bad pair}
with respect to $\psi$.
Careful consideration of the possibilities reveals that
there can be at most 16 bad pairs with respect to $\psi$.
In the worst case, there are five interesting arcs which all belong
to $H$. An interesting arc $e$ which belongs to $H$
creates two bad pairs in the circuit containing $e$, one at each
endvertex of $e$ (both of the same colour).
A bad pair is also created in the current circuit $C$
incident with each endvertex of each interesting arc,
giving at most six further bad pairs.
(The worked example in Section~\ref{a:example} gives an example of
a digraph, $Z_4$, containing the maximum number of bad pairs:
see Figure~\ref{example5}.)
Note also that a bad vertex may be the head (respectively, tail)
of at most two bad pairs of each colour. This follows from
Lemma~\ref{zoo} since no vertex is head (respectively, tail)
of more than two interesting arcs with the same label. Hence a bad vertex may
be the head (respectively, tail) of at most four bad pairs in total.
This is true even if there are some coincidences between the bad
vertices, which may occur when the interesting arcs have one of the
configurations other than the first one in Figure~\ref{f:zoo}.
To see this, note that for all the configurations in Figure~\ref{f:zoo},
the only vertex which is the head (or tail) of more than two interesting
arcs
is $v$, the start-vertex of the current circuit, and $v$ is always
distinct from all other bad vertices.
Given the uncoloured digraph $H$, we can form a pairing $\psi$ by pairing
up all in-arcs around $v$ and pairing up all out-arcs around $v$,
for each vertex $v$. Let the set of all these pairings be $\Psi(H)$.
Say that a pairing $\psi\in\Psi(H)$ is \emph{consistent with} $L$
if there are at most 16 bad pairs in the yellow-green colouring of
$H$ with respect to $L$, and at each vertex $u$
and for each choice of orientation there are
at most two bad pairs of each colour with that orientation at $u$.
Let $\Psi'(H,L)$ be the set of all pairings
$\psi$ of $H$ which are consistent with $L$. Given any $(G,G')$
with $G\triangle G'=H$, any pairing $\psi\in\Psi(G,G')$ is consistent
with the yellow-green colouring of $H$, as proved above. Therefore each triple
$(G,G',\psi)$ with $\psi\in\Psi(G,G')$ and $e\in\gamma_\psi(G,G')$
gives rise to at least one pair $(L,\psi)$ with $L\in\mathcal{L}(Z)$
and $\psi\in\Psi'(H,L)$.
Conversely, we can start with $L\in\mathcal{L}(Z)$ and find an
upper bound for $|\Psi'(H,L)|$. Once $\psi$ and $(Z,Z')$ are given,
there are at most four possibilities for $(G,G')$ with
$e\in\gamma_\psi(G,G')$, by Lemma~\ref{notquiteunique}.
Recall from (\ref{number-pairings}) that
\[ |\Psi(G,G')| = \prod_{v\in V} \theta_v!\, \phi_v!\]
where $2\theta_v$ is the in-degree of $v$ in $H$ and $2\phi_v$ is the
out-degree of $v$ in $H$.
Similarly, each good vertex $v$ contributes a
factor $\theta_v!\, \phi_v!$ to $|\Psi'(H,L)|$, but a bad vertex may
contribute more. The contributions from in-arcs and out-arcs are
independent, so we consider only in-arcs below.
Recall that no vertex can be the head of more than two bad pairs of
a given colour.
First suppose that a vertex $v$ is the head of
$\theta_v+2$ green arcs and $\theta_v-2$ yellow arcs.
Then $v$ must be bad, with two bad green pairs and no bad yellow pairs.
The number of ways to pair up the in-arcs around $v$
is
\[ 3 \, \binom{\theta_v+2}{4}\, (\theta_v-2)!
= \frac{(\theta_v + 2)(\theta_v+1)}{8}\, \theta_v!
\leq \theta_v^2\cdot \theta_v! \leq d^2 \theta_v!.
\]
Next suppose that $v$ is the head of $\theta_v+1$ green arcs
and $\theta_v-1$ yellow arcs. Then $v$ must be a bad vertex.
Now $v$ may be the head of two bad green pairs and one bad yellow pair,
or $v$ may be the head of one bad green pair and no bad yellow pairs.
The number of ways to pair up the in-arcs around $v$ with two bad
green pairs and one bad yellow pair is
\[ 3\, \binom{\theta_v+1}{4}\,\binom{\theta_v-1}{2}\, (\theta_v-3)!
= \frac{(\theta_v+1)(\theta_v-1)(\theta_v-2)}{16}\, \theta_v!
\leq \theta_v^3\, \theta_v!
\leq d^3\, \theta_v!,
\]
while the number of pairings of in-arcs around $v$ with one bad green pair
and no bad yellow pairs is
\[ \binom{\theta_v+1}{2}\, (\theta_v-1)! =
\frac{\theta_v + 1}{2}\, \theta_v! \leq \theta_v\, \theta_v!
\leq d\, \theta_v!.
\]
Finally, suppose that $v$ is the head of $\theta_v$ arcs of each colour.
Then $v$ may be good, or it may be the head of one bad pair of each colour,
or the head of two bad pairs of each colour.
The number of pairings of in-arcs around $v$ with two bad pairs of in-arcs
of each colour is
\[ 9\, \binom{\theta_v}{4}^2 \, (\theta_v - 4)!
=
\frac{\theta_v(\theta_v-1)(\theta_v-2)(\theta_v-3)}{64}\, \theta_v!
\leq \theta_v^4\, \theta_v!
\leq d^4\, \theta_v!,
\]
while the number of pairings of in-arcs around $v$ with one bad pair of
each colour is
\[ \binom{\theta_v}{2}^2\, (\theta_v-2)!
= \frac{\theta_v(\theta_v-1)}{4}\, \theta_v! \leq \theta_v^2\,\theta_v!
\leq d^2\, \theta_v!.
\]
By symmetry, the same bounds hold for out-arcs and also hold after
exchanging green and yellow; in every case the extra factor, compared
with a good vertex, is at most $d$ per bad pair.
Since there are at most 16 bad pairs, it follows that
\begin{equation}
|\Psi'(H,L)|\leq d^{16}\, |\Psi(G,G')|.
\label{psi-prime}
\end{equation}
Now write $\mathbf{1}(e\in\gamma_\psi(G,G'))$ to denote the indicator
variable which is 1 if $e\in\gamma_\psi(G,G')$ and is 0 otherwise,
for $(G,G')\in\Omega_{n,d}$ and $\psi\in\Psi(G,G')$.
Then
\begin{align*}
|\Omega_{n,d}|^2 f(e)
&= \sum_{(G,G')}\,\, \sum_{\psi\in\Psi(G,G')}\,
\mathbf{1}(e\in\gamma_\psi(G,G'))\, |\Psi(G,G')|^{-1}\\
&\leq 4\, \sum_{L\in\mathcal{L}(Z)}\,\, \sum_{\psi\in\Psi'(H,L)}\,
\mathbf{1}(e\in\gamma_\psi(G,G'))\, |\Psi(G,G')|^{-1}\\
&\leq 4\, \sum_{L\in\mathcal{L}(Z)}\,\, \sum_{\psi\in\Psi'(H,L)}\,
|\Psi(G,G')|^{-1}\\
&\leq 4\, \sum_{L\in\mathcal{L}(Z)} \, d^{16}\\
&\leq 100\, d^{22}\, n^6\, |\Omega_{n,d}|.
\end{align*}
The first inequality follows by Lemma~\ref{notquiteunique},
the third inequality follows from (\ref{psi-prime}),
and applying Lemma~\ref{poly} gives the last inequality.
This completes the proof.
\end{proof}
We can now complete our argument by proving
Proposition~\ref{our-second-largest-eigval}.
\begin{proof}[Proof of Proposition~\ref{our-second-largest-eigval}]\
For any transition $e=(Z,Z')$ of the switch chain
we have
\[ 1/Q(e) = |\Omega_{n,d}|/P(Z,Z') = \binom{dn}{2}\, |\Omega_{n,d}|.\]
Therefore, by Lemma~\ref{load},
\begin{equation}
\label{rho-bound}
\rho(f) \leq 50 d^{24}\, n^8.
\end{equation}
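Indeed, writing $\rho(f) = \max_{e} f(e)/Q(e)$ for the congestion of
the flow, Lemma~\ref{load} gives
\[
\rho(f) \leq 100\, d^{22}\, n^6\, |\Omega_{n,d}|^{-1}\cdot
\binom{dn}{2}\, |\Omega_{n,d}|
= 100\binom{dn}{2}\, d^{22}\, n^6
\leq 50\, d^{24}\, n^8,
\]
using $\binom{dn}{2}\leq d^2 n^2/2$.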
Next, observe that $\ell(f) \leq dn$, since each step along a canonical path
replaces at least one arc of $G$ by an arc of $G'$.
The result follows from Lemma~\ref{second-eigval}.
\end{proof}
\section{An illustrative example}\label{a:example}
Let $(G,G')\in\Omega_{n,d}$ be any pair of digraphs
with the symmetric difference $H$ given in
Figure~\ref{example1}, where vertices of degree 0 in $H$ are not shown.
To avoid congestion in the figure, some vertices are depicted as black
rectangles.
Solid arcs belong to $G$ and dashed arcs belong to $G'$, so they play
the role of blue and red arcs.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.48]{example1b}}
\caption{The symmetric difference $H$ of $G$ and $G'$.}
\label{example1}
\end{figure}
\end{center}
\vspace*{-\baselineskip}
Let $\psi$ be the pairing which produces the forward circuits
\begin{equation}
\label{circuits}
\begin{split}
& v x_{0,0} x_{0,1} z_{0,0} w_1 w_2 z_{1,0} x_{1,1} x_{1,0} v x_{1,1} x_{1,0} z_{1,1} z_{0,1} x_{0,0} x_{0,1},
\quad v p_2 p_1 z_{0,1},\quad
v x_{1,0} q_1 q_2, \\
& \hspace*{2cm} z_{1,0} x_{1,0} r_1 r_2,\quad
z_{1,0} v s_2 s_1,\quad
v w_2 t_1 t_2,\quad
v u_2 u_1 z_{0,0},
\end{split}
\end{equation}
in the given order.
Set $Z_0=G$ and start processing $H$. The first circuit to process is
the eccentric 2-circuit
\[ S =
v x_{0,0} x_{0,1} z_{0,0} w_1 w_2 z_{1,0} x_{1,1} x_{1,0} v x_{1,1} x_{1,0} z_{1,1} z_{0,1} x_{0,0} x_{0,1}.
\]
We have $(i,h)=(0,0)$, and the eccentric arc $(z_{1,0},v)$
does not belong to $A(S)$. Hence $S$ falls into case (Ea) and
we must first perform the eccentric switch $[z_{1,0} x_{1,1} x_{1,0} v]$.
This produces the next digraph $Z_1$ in the canonical path $\gamma_\psi(G,G')$.
The eccentric arc $(z_{1,0},v)$ has been used in the eccentric switch,
so it is now an interesting arc.
Initially it belonged to $G'-G$,
and now it belongs to $Z_1\cap G'$, so it does not belong to the
current symmetric difference $Z_1\triangle G'$. However, we include
all interesting arcs in our figures, denoted by thicker arcs
(either solid or broken, as appropriate).
Hence Figure~\ref{example2} shows the symmetric difference of $Z_1$ and $G'$,
together with the eccentric arc.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.48]{example2b}}
\caption{The symmetric difference of $Z_1$ and $G'$, together with the
single interesting arc (the eccentric arc), after the eccentric switch.}
\label{example2}
\end{figure}
\end{center}
The arcs $(z_{1,0},x_{1,1}), (x_{1,0},x_{1,1}), (x_{1,0},v)$ have
disappeared because they have now been switched to agree with $G'$.
They play no further part in the formation of the canonical path.
Next, we must process the normal 2-circuit
\[ S' =
v x_{0,0} x_{0,1} z_{0,0} w_1 w_2 z_{1,0} v x_{1,1} x_{1,0} z_{1,1} z_{0,1} x_{0,0} x_{0,1}
\]
which the eccentric switch has produced (see Figure~\ref{eccentric-detail}).
From Lemma~\ref{eccentric-plus-shortcut} we know that the shortcut arc is
$(z_{1,0},x_{1,0})$, and again $(i,h)=(0,0)$.
Now $(z_{1,0},x_{1,0})\in A(Z_1)$ so $S'$ falls into case (Nc), and we
will perform the shortcut switch last. Our next task is to process the
1-circuit
\[ S_1 =
v x_{0,0} x_{0,1} z_{0,0} w_1 w_2 z_{1,0} x_{1,0} z_{1,1} z_{0,1}
x_{0,0} x_{0,1}.
\]
The set $\mathcal{B}$ of end-vertices of odd chords which are absent in $Z_1$
is $\mathcal{B} = \{ x_{0,1},\, z_{0,1},\, z_{0,0}\}$.
Now $z_{0,1}x_{0,0}x_{0,1}$ is a contiguous substring of $S$, so these
vertices are all distinct, and hence $\mathcal{B}$ has three elements.
Thus there will be three phases in the processing of $S_1$.
The first phase is over after just one switch, namely
$[v x_{0,0} x_{0,1} z_{0,0}]$.
This produces the next digraph
$Z_2$ on the canonical path: see Figure~\ref{example3}.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example3b}}
\caption{The symmetric difference of $Z_2$ and $G'$,
together with the two interesting arcs (the eccentric arc and one odd chord),
after Phase 1.}
\label{example3}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
The odd chord $(v,z_{0,0})$ has become an interesting arc, so it is included
in Figure~\ref{example3} together with the eccentric arc. Both belong to
$Z_2\cap G'$, and hence they are depicted by a thick unbroken arc.
The arcs $(v,x_{0,0})$, $(x_{0,1},x_{0,0})$, $(x_{0,1},z_{0,0})$ have now
been switched to agree with $G'$, so they play no further role. Hence
we have omitted these arcs from Figure~\ref{example3}.
We now start Phase 2 with the switch $[v x_{1,0} z_{1,1} z_{0,1}]$,
producing the next digraph $Z_3$ on the canonical path.
See Figure~\ref{example4}.
Note that there are four interesting arcs in $Z_3$, namely three odd chords
and the eccentric arc. The vertex $z_{1,1}$ is omitted
from Figure~\ref{example4} since it has degree zero in the symmetric
difference of $Z_3$ and $G'$.
(We will make no further comments on the inclusion of interesting arcs or the
omission of isolated vertices for the remaining figures.)
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example4b}}
\caption{The symmetric difference of $Z_3$ and $G'$,
together with the four interesting arcs (the eccentric arc and three
odd chords), after the first step of Phase 2.}
\label{example4}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
The next step in Phase 2 is the switch $[v w_2 z_{1,0} x_{1,0}]$,
which involves the shortcut arc. This produces the digraph $Z_4$ on
the canonical path. See Figure~\ref{example5}.
Note that $Z_4$ has five interesting arcs, namely
\[ (z_{1,0},v),\,\, (z_{1,0},x_{1,0}),\,\, (v,z_{0,1}),\,\, (v,w_2),\,\,
(v,z_{0,0}).
\]
This is the maximum possible, by Lemma~\ref{zoo}.
Later we will show that $Z_4$ also has the maximum number of bad
pairs.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example5b}}
\caption{The symmetric difference of $Z_4$ and $G'$
together with the five interesting arcs (the eccentric arc, the shortcut arc
and three odd chords), after the second step of Phase 2.}
\label{example5}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
The final step in Phase 2 is the switch $[vz_{0,0}w_1 w_2]$,
producing the digraph $Z_5$.
See Figure~\ref{example6}. Now only one odd chord is interesting, as two
have been restored to their original state.
\begin{center}
\begin{figure}[ht]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example6b}}
\caption{The symmetric difference of $Z_5$ and $G'$,
together with the three interesting arcs (the eccentric arc, the shortcut arc
and one odd chord), after Phase 2.}
\label{example6}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
Then we perform Phase 3, which consists of one step: the switch
$[vz_{0,1}x_{0,0} x_{0,1}]$. This produces the digraph $Z_6$ which has
no interesting odd chords, but still has two interesting arcs, namely the
eccentric arc and shortcut arc. See Figure~\ref{example7}.
\begin{center}
\begin{figure}[ht!]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example7b}}
\caption{The symmetric difference of $Z_6$ and $G'$ together with
the two interesting arcs (the eccentric arc and the shortcut arc),
after Phase 3: the processing of the 1-circuit is complete.}
\label{example7}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
This completes the processing of the 1-circuit $S_1$. To complete the
processing of the normal 2-circuit $S'$ we must perform the shortcut switch
$[x_{1,1} x_{1,0} z_{1,0} v]$. This produces the digraph $Z_7$ as in
Figure~\ref{example8}, with no interesting arcs.
\begin{center}
\begin{figure}[ht!]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example8b}}
\caption{The symmetric difference of $Z_7$ and $G'$.}
\label{example8}
\end{figure}
\end{center}
This completes the processing of the normal 2-circuit $S'$, and hence it also
completes the processing of the eccentric 2-circuit $S$.
It remains to process the remaining circuits in the given order.
Each remaining circuit is an alternating 4-cycle, which
is processed by a single switch, removing it from the symmetric difference.
This gives 6 more switches, specifically
\[ [v p_2 p_1 z_{0,1}], \,\, [v x_{1,0} q_1 q_2],\,\,
[z_{1,0} x_{1,0} r_1 r_2],\,\,
[z_{1,0} s_1 s_2 v],\,\,
[v w_2 t_1 t_2],\,\, [v u_2 u_1 z_{0,0}].
\]
The switches are performed in this order,
producing digraphs $Z_8,\ldots, Z_{13}$ where $Z_{13}=G'$. This
completes the construction of
the canonical path $\gamma_\psi(G,G')$ from $G$ to $G'$ corresponding to
$\psi$.
\bigskip
Now let us return to the digraph $Z_4$. We now show that there are
16 bad pairs in $Z_4$ with respect to $\psi$.
We redraw $H$
in Figure~\ref{example9}, where now solid lines show arcs in $H\cap Z_4$
and dashed lines show arcs in $H - Z_4$. Hence solid and dashed
arcs play the role of green and yellow arcs, in the terminology of
Lemma~\ref{load}.
Interesting arcs are still shown with thicker lines.
\begin{center}
\begin{figure}[ht!]
\psfrag{v}{$v$}\psfrag{x00}{$x_{0,0}$} \psfrag{x01}{$x_{0,1}$}\psfrag{x11}{$x_{1,1}$}
\psfrag{x10}{$x_{1,0}$} \psfrag{z00}{$z_{0,0}$} \psfrag{z01}{$z_{0,1}$}
\psfrag{z11}{$z_{1,1}$} \psfrag{z10}{$z_{1,0}$}
\psfrag{u1}{$u_1$} \psfrag{u2}{$u_2$} \psfrag{w1}{$w_1$} \psfrag{w2}{$w_2$}
\psfrag{r1}{$r_1$} \psfrag{r2}{$r_2$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$}
\psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{q1}{$q_1$} \psfrag{q2}{$q_2$}
\psfrag{s1}{$s_1$} \psfrag{s2}{$s_2$}
\centerline{\includegraphics[scale=0.5]{example9b}}
\caption{The symmetric difference $H$, where now solid lines are arcs in
$H\cap Z_4$ and dashed lines are arcs in $H - Z_4$.}
\label{example9}
\end{figure}
\end{center}
\vspace*{-2\baselineskip}
By tracing around this figure using the circuits given in (\ref{circuits})
determined by the pairing $\psi$, we find that there are 16 bad pairs in
$Z_4$ with respect to $\psi$. This is the maximum possible number
of bad pairs, as proved in Lemma~\ref{load}. Table~\ref{badpairs}
shows the bad vertices and the bad pairs of arcs incident with each one.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|r|l|l|}
\hline
bad vertex & bad green pairs & bad yellow pairs \\
\hline
$v$ & $(z_{1,0},v),\ (s_2,v)$ & $(x_{1,1},v),\ (x_{1,0},v)$ \\
& $(v,z_{0,0}),\ (v,u_2)$ & $(v,x_{0,0}),\ (v,x_{0,1})$ \\
& $(v,z_{0,1}),\ (v,p_2)$ & $(v,w_2),\ (v,t_2)$\\
\hline
$z_{1,0}$ & $(z_{1,0},v), \ (z_{1,0},s_1)$ &
$(z_{1,0},x_{1,0}),\ (z_{1,0},r_2)$ \\
\hline
$x_{1,0}$ & $(x_{1,1},x_{1,0}),\ (z_{1,1},x_{1,0})$ &
$(z_{1,0},x_{1,0}),\ (r_1,x_{1,0})$ \\
\hline
$z_{0,0}$ & $(v,z_{0,0}),\ (u_1,z_{0,0})$ &
$(x_{0,1},z_{0,0}),\ (w_1,z_{0,0})$ \\
\hline
$z_{0,1}$ & $(v,z_{0,1}),\ (p_1,z_{0,1})$
& $(z_{1,1},z_{0,1}),\ (x_{0,0},z_{0,1})$ \\
\hline
$w_2$ & $(w_1,w_2),\ (z_{1,0},w_2)$ &
$(v,w_2),\ (t_1,w_2)$ \\
\hline
\end{tabular}
\caption{The bad vertices and bad pairs of arcs in $Z_4$ with respect to
$\psi$.}
\label{badpairs}
\end{center}
\end{table}
We now make two final comments.
\begin{enumerate}
\item[(i)] In this relatively small example, not many coincidences between the
bad vertices are possible. For instance, we know that $w_2\neq z_{0,0}$ since
$z_{0,0}w_1w_2$ is a contiguous substring of $S$, while $w_2\neq x_{1,0}$ since
$(v,w_2)$ is a blue arc in $H$ and $(v,x_{1,0})$ is a red arc in $H$.
In our example, the only coincidences that may occur are that $z_{1,0}$ may equal
$z_{0,0}$ or it may equal $z_{0,1}$. If either holds then
the coinciding vertex is incident with four bad pairs in $Z_4$, one of each colour and
orientation.
\item[(ii)] This example was constructed to produce a digraph with the maximum
number of bad pairs (namely $Z_4$, with 16 bad pairs). This was
achieved by letting the interesting arcs all belong to $H$, so that they
did not become bad arcs when they became interesting, but instead they
created extra bad pairs with respect to $\psi$.
If instead $H$ just
consisted of the arcs of the eccentric 2-circuit $S$, then
any interesting arc would also be a bad arc.
(For example, if the eccentric arc had not been
an arc of $H$ but was absent in both $G$ and $G'$ then in $Z_1$ it would
become a bad arc with label $-1$.)
Then the analogue of $Z_4$ would be an example of a digraph
with the maximum number of bad arcs.
\end{enumerate}
\section{Static Analysis}
\label{analysis}
XLA operators are relatively low-level compared to those in front-end frameworks like TensorFlow, which has two implications. First, much of the structural information about the model is lost when it is lowered to XLA (e.g., which parts of the graph are weight updates), so we must use static analysis to identify the operators to shard. Second, the set of operators is small, which makes the analysis easier. We use static analysis to guarantee correctness and to identify transforms that are beneficial to performance.
\subsection{Correctness: cross-replica redundancy}
Weight update is a subset of the training graph that is redundant across replicas: since it does not have a batch dimension, all replicas repeat the same computation on the same data. Redundancy is the property that an operator must produce the same value on all replicas; as long as an operator is redundant across replicas, it is safe to shard it across replicas.
\paragraph{Sources of redundancy.} There are three types of operators that are known to produce the same results, thus can be regarded as the sources of the analysis.
\begin{itemize}
\item {\bf Constants.} Because all replicas are executing the same program, compile-time constants must be the same across replicas.
\item {\bf Output of all-reduce.} By definition, an all-reduce operator produces the same output on the participating replicas. An exception is all-reduce operators with subgroups, where each group of replicas performs its own all-reduce; this could still be used for partial sharding within those subgroups, but we skip that case for simplicity.
\item {\bf Annotated parameters.} The above obvious sources are insufficient to identify the weight update computation, because the initial values for the weight variables are passed in as parameters to the XLA graph, and XLA does not assume all replicas to have the same parameter values. Fortunately, in practical use cases of data-parallel training, these initial weight values are set to the same across replicas in order to keep the weights in sync. In our approach, the front-end framework, e.g., TensorFlow, needs to annotate the corresponding parameters in XLA to indicate that they will receive the same values during execution.
\end{itemize}
\paragraph{Propagation.} With the initial source set of redundant operators, we can run a propagation pass to identify other redundant operators. The analysis pass visits one operator at a time in topological order, i.e., producer before consumer.
For an operator that does not involve control flow, the analysis checks whether it has side effects or randomness, and whether all of its inputs are redundant. If all checks pass, this operator is marked as redundant.
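As an illustration, the straight-line core of this pass can be sketched in a few lines of Python over a toy operator graph; the attribute and function names here are ours, not XLA's.
\begin{verbatim}
def propagate_redundancy(ops, sources):
    """Mark operators guaranteed to produce the same value on all
    replicas. `ops` is topologically sorted (producer first); each op
    has .inputs, .has_side_effect and .is_random. `sources` holds the
    known-redundant roots: constants, all-reduce outputs and
    annotated parameters."""
    redundant = set(sources)
    for op in ops:
        if op in redundant:
            continue  # already known from the source set
        if op.has_side_effect or op.is_random:
            continue  # may differ across replicas on equal inputs
        if all(inp in redundant for inp in op.inputs):
            redundant.add(op)
    return redundant
\end{verbatim}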
Control flow in XLA is represented as special operators that call nested computations.
\begin{itemize}
\item {\bf A conditional} has a predicate and multiple branch computations. To determine a conditional's redundancy, the analysis first checks whether the predicate is redundant, then runs on all the branches to check their return values' redundancy. If all checks pass, the conditional can be marked as redundant.
\item {\bf A while loop} has a body computation and a condition computation, which share the same input. The body's output is passed as the next iteration's input, so the output's redundancy must be used to determine the input's redundancy. To model this back edge, the analysis maintains tentative results for the operators in the loop, and runs iteratively on the condition and body until a fixed point is reached. During each run, if the condition's result is determined to be non-redundant, all operators must be marked as non-redundant as well, since the control flow may differ across replicas; this is unlikely to happen on the main training loop, since all replicas are expected to execute the same number of steps. (A sketch of this fixed-point iteration follows the list.)
\end{itemize}
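The fixed-point iteration for a \texttt{While} loop can be sketched as follows, again over toy objects of our own: \texttt{loop.carried} is the number of loop-carried positions, and \texttt{propagate} reruns the straight-line pass above on the condition and body for a given tentative set of redundant input positions.
\begin{verbatim}
def analyze_while(loop, propagate):
    """Fixed-point redundancy analysis of a While loop (a sketch).
    propagate(tentative) returns (output_positions_redundant,
    condition_result_is_redundant)."""
    redundant = set(range(loop.carried))  # optimistic start
    while True:
        out_redundant, cond_redundant = propagate(redundant)
        if not cond_redundant:
            return set()  # replicas may iterate differently
        nxt = redundant & out_redundant  # outputs feed next iteration
        if nxt == redundant:
            return nxt  # fixed point reached
        redundant = nxt
\end{verbatim}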
\subsection{Performance: sharding profitability}
\label{perf-analysis}
We analyze whether efficient sharding can be applied to each part of the identified redundant computation. Since an all-reduce operator precedes the update of a weight and the associated auxiliary variables, our analysis is centered around the all-reduce operators. The analysis first finds the cluster of redundant operators connected to each all-reduce, using simple propagation. Because weight updates are usually distinct parts of the training graph that do not have much interaction with the forward and backward passes, such propagation does not need to be overly sophisticated. Figure~\ref{f:weight-update-around-ar} shows a typical example of the identified clusters.
For each weight update, the impact on performance is primarily determined by two factors: the size reduction in the local weight update and the requirement for communication. If the effect of the size reduction outweighs the communication overhead, sharding can be applied. The calculation can be based on a fairly conservative cost model, since in typical cases (Figure~\ref{f:sharding-aux-loop}) sharding should give an obvious speedup.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{figs/weight-update-around-ar.pdf}
\caption{\label{f:weight-update-around-ar} Weight update operators around all-reduce. If this is in a loop, Input 1 and Output 1 can be sharded across iterations, and there will be just one all-gather needed for Input 0 and Output 0 (either before Output 0 or before the matmul).}
\end{figure}
Size reduction in local weight update must consider fusion; a good estimate is to use the combined size of the non-fusible inputs and outputs, instead of the number of operators sharded.
Because we always need only one reduce-scatter (Figure~\ref{f:sharding-aux-loop}), the communication requirement is determined by the all-gathers needed for unshardable operators with sharded inputs. An unshardable operator can be part of the output of the program, a non-redundant operator, or an operator with an unimplemented sharding transformation. There are also conditionally shardable operators, i.e., those supported only for certain sharding formats of the tensor. For example, a reduce operator along specific dimensions may not be supported for arbitrary reformatting. See Section~\ref{sharding-rep} for detailed discussion.
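The following sketch shows the shape of such a conservative estimate; the bandwidth and latency parameters are placeholders of ours, not values used by the actual compiler.
\begin{verbatim}
def sharding_is_profitable(update_bytes, num_shards,
                           in_loop_all_gather_bytes,
                           mem_bw, link_bw, msg_latency):
    """Conservative cost check for one weight-update cluster (sketch).
    update_bytes: combined size of the non-fusible inputs/outputs of
    the update; in_loop_all_gather_bytes: only the all-gathers that
    cannot be hoisted out of the training loop."""
    saved = update_bytes * (1.0 - 1.0 / num_shards) / mem_bw
    overhead = in_loop_all_gather_bytes / link_bw + msg_latency
    return saved > overhead
\end{verbatim}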
The placement of all-gather operators is heavily affected by control flow. In Figure~\ref{f:sharding-aux-loop}, only one all-gather is inside the loop, so that the amortized overhead of the extra all-gather operators is very small. The analysis accounts for this effect by marking the corresponding input/output pair of the loop as shardable, with the requirement that they must be sharded in the same way.
The training loop is critical to amortize all-gather cost for auxiliary variables. However, it is also common that there is not a compiler-visible loop, where the XLA graph only represents a single step, and the training loop could be entirely implemented by the user as a Python loop. We will discuss such cases in Section~\ref{transform-on-loop}.
We have seen models that transfer the on-device tensors to the host once every certain number of steps, as a way to checkpoint, summarize, or debug the current training state. This is typically done using a conditional operator after the weight update, which contains an outfeed operator of the full weight and auxiliary tensors in one branch. For such cases, we have an analysis that estimates the frequency of different branches, and if the full tensors are only needed in an infrequent branch, we can place an all-gather inside that branch without adding much overhead. We implemented the frequency analysis by checking the conditional predicate's use of the loop induction variable, which is capable of recognizing the pattern described above.
\section{Efficient Communication}
\label{comm}
Efficient reduce-scatter and all-gather implementations are important for performance, even if the theoretical amount of communication is comparable to that of the all-reduce without weight-update sharding. There are two challenges: matching the sharding representation specified on the tensor (Section~\ref{sharding-rep}), and avoiding latency-bound communication on small shards.
\subsection{Fusion with data formatting}
The formatting steps chosen for each tensor in the sharding representation are needed to determine how it is divided into shards. If we pad the gradient before reduce-scatter, it would require each replica to perform local read and write on the full data. To avoid such inefficiency, we fuse the formatting operators into the reduce-scatter and all-gather. With the fusion representation, we can express flexible sharding without introducing complex configurations on the operators; in fact, we do not even need to define dedicated reduce-scatter or all-gather operators, because they can be expressed using all-reduce as shown in Figure~\ref{f:ar-fusion}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.36\textwidth]{figs/ar-fusion.pdf}
\caption{\label{f:ar-fusion} Reduce-scatter and all-gather represented as fusion with reformatting and all-reduce.}
\end{figure}
In a classic algorithm of reduce-scatter and all-gather on $N$ replicas, the data is partitioned into $N$ pieces, and replicas form a logical ring and exchange pieces with neighbors in multiple rounds~\cite{thakur05}. In our fusion implementation, the boundaries of these pieces must exactly match the sharding format, and the padding is done in-place when preparing the data pieces.
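For concreteness, the classic single-phase ring can be simulated in NumPy as below (a sketch: shard boundaries are assumed to follow the chosen sharding format, and the fused in-place padding is omitted).
\begin{verbatim}
import numpy as np

def ring_reduce_scatter(data):
    """Simulate reduce-scatter over N replicas. `data` is a list of
    N equal-length arrays, one per replica; replica r ends up holding
    the fully reduced shard (r + 1) mod N, matching its position in
    the logical ring."""
    n = len(data)
    shards = [list(np.array_split(x, n)) for x in data]
    for t in range(n - 1):
        # Round t: replica r sends shard (r - t) mod n to replica
        # r + 1, which accumulates it into its copy of that shard.
        for r in range(n):
            c = (r - t) % n
            shards[(r + 1) % n][c] = shards[(r + 1) % n][c] + shards[r][c]
    return [shards[r][(r + 1) % n] for r in range(n)]
\end{verbatim}
All-gather runs the same ring with copying in place of accumulation, and the multi-phase variants discussed below compose this pattern along the axes of the device array.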
The implementation of the fusion operators also guarantees that the shard assigned to a replica matches the location of it in the logical ring, so that the classic algorithm will produce the desired shard on each replica at the end. Because it is critical for the logical ring to utilize the bandwidth of the physical network's links, we choose shard ID based on the network topology, not the other way around. In practice, reduce-scatter and all-gather can be implemented in multiple phases in order to leverage specific network topology~\cite{cho19}. For instance, for an $N\times M$ array of devices, reduce-scatter with data size $D$ can be done first for each row with $D/M$ as the shard size, then for each column with $D/(MN)$ as the shard size. In such cases, the shard ID is calculated based on the topology of all phases.
\subsection{Utilizing network bandwidth for large topology}
In large-scale training where the number of replicas is large, the shard size of a weight or gradient tensor can be very small. For instance, a Cloud TPUv3 pod has 2048 cores (with 2 cores sharing a chip), so if a 4~MB tensor is partitioned in 2048 ways the shard size will be just 2~KB. First, an obvious problem is that the communication can easily become latency-bound; second, the small shard itself might require a significant amount of padding in a tiled memory layout, so that the effective transferred data size could be much larger than the full tensor.
\paragraph{Partial sharding.} In practice, sharding the weight update in 2048 ways does not provide observable savings compared to sharding in 64 ways, because the sharded weight-update time is already small compared to the rest of the training step. Therefore, we can choose to organize the replicas into independent groups, where each group performs its own sharding. However, a per-group reduce-scatter produces only a partial result, since it does not accumulate the data from the other groups. Therefore, an all-reduce across groups is needed after the reduce-scatter.
For an $N\times M$ array of replicas, the sharding groups can be defined as the $N$ rows, and the all-reduce will be performed on each of the $M$ columns (Figure~\ref{f:partial-sharding}). It may appear that the communication will still be latency-bound, because the all-reduce happens on the already sharded output of reduce-scatter, so the internal shard size of the all-reduce is still $D/(MN)$. In fact, as we show next, the grouping helps by enabling more aggressive batching of small data transfers.
\begin{figure}[ht]
\centering
\includegraphics[width=0.46\textwidth]{figs/partial-sharding.pdf}
\caption{\label{f:partial-sharding} Partial sharding and batched communication.}
\end{figure}
\paragraph{Batched communication operators.} The weight update computations for different weight variables are typically independent of each other, so we can combine their communication operators. This is possible because they share the same assignment of groups and shards, determined by the network topology.
A combined reduce-scatter or all-gather must maintain the original shard assignment for each tensor. To achieve this, each combined shard consists of one shard from every tensor. If there is excessive padding on one tensor's shards, it is likely to remain in the combined shard. Also, tracking these sharding boundaries is challenging in multi-phase reduce-scatter and all-gather.
By contrast, a combined all-reduce does not need to respect any sharding for individual input tensors, because its internal sharding does not need to be exposed. This makes implementing all-reduce on combined small tensors much more tractable and efficient. The input tensors can be conceptually concatenated together in full shapes, and the internal shards are partitions of the concatenated shape, as the right-hand-side graph in Figure~\ref{f:partial-sharding} shows. In addition to the all-reduce after a subgroup reduce-scatter, the all-reduce in Figure~\ref{f:sharding-rep} for the partial scalar reduce result can also be combined with other similar all-reduce operators.
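A sketch of this batching, with the sum over the list standing in for the single combined all-reduce launch:
\begin{verbatim}
import numpy as np

def combined_all_reduce(per_replica_tensors):
    """Batch several small tensors into one all-reduce (a sketch).
    per_replica_tensors: list over replicas, each a list of arrays
    with matching shapes across replicas. Returns the summed tensors
    in their original shapes (identical on every replica)."""
    flat = [np.concatenate([t.ravel() for t in ts])
            for ts in per_replica_tensors]
    summed = sum(flat)  # stand-in for one combined all-reduce
    shapes = [t.shape for t in per_replica_tensors[0]]
    sizes = np.cumsum([int(np.prod(s)) for s in shapes])[:-1]
    return [p.reshape(s)
            for p, s in zip(np.split(summed, sizes), shapes)]
\end{verbatim}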
As a result, partial sharding defers most of the handling of small shards to the combined all-reduce operators, while the reduce-scatter and all-gather run in just a single phase. This largely avoids latency-bound communication for small shards. The batching of small communication operators is done automatically by a compiler pass.
\section{Conclusion}
\label{conclusion}
This paper presents a set of analyses and transformations for data-parallel deep learning training, which reduces the weight-update time by sharding across replicas. To minimize overhead of sharding, the approach carefully chooses communication patterns and data formatting, based on the training loop structure and the network topology of devices. It achieves significant speedups on language and large-scale image models, without requiring any additional devices.
\section{Evaluation}
\label{eval}
Automatic weight-update sharding is a key technique that enabled the state-of-the-art training performance in Google's MLPerf-0.6 submission~\cite{mlperf0.6, google-mlperf0.6}.
We evaluated the performance improvements of several models with automatic weight-update sharding enabled. The models include ResNet-50~\cite{resnet}, Transformer~\cite{transformer} and NCF~\cite{ncf}. ResNet-50 and Transformer are based on the same configuration as in the MLPerf 0.6 submission. The test platform is Cloud TPUv3~\cite{tpu} with different topology configurations: 16 and 1024 chips in a 2D mesh, where each chip contains two processing cores. We use data-parallel training for all models, where each replica occupies a single core.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c|c}\toprule
\textbf{Model} & \textbf{Core count} & \textbf{Batch size} & \textbf{Optimizer} \\\hline
\multirow{2}{*}{ResNet-50} & 32 & 4096 & LARS \\\cline{2-4}
& 2048 & 32768 & LARS \\\hline
\multirow{2}{*}{Transformer} & 32 & 512 & ADAM \\\cline{2-4}
& 2048 & 2048 & ADAM \\\hline
NCF & 32 & 98304 & ADAM \\\bottomrule
\end{tabular}
\caption{Optimizer and batch size of evaluated models}
\label{t:eval_configs}
\end{table}
\subsection{Performance}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{figs/perf-eval.pdf}
\caption{\label{f:perf_eval} Step time reduction when automatic weight-update sharding is enabled.}
\end{figure}
Figure~\ref{f:perf_eval} shows the performance improvements of automatic weight-update sharding against the replicated weight update for different models.
At small scale (16 TPUv3 chips), we keep the per-replica batch size as large as possible to maximize TPU utilization. In this setup, the step time is relatively long and the constant weight-update time is amortized. As a result, for models with small weight sizes, the performance impact is small. But we still observe improvements of around 9\% for language models like Transformer, where weight sizes are large.
At large scale (1024 TPUv3 chips), we decrease the per-replica batch size to keep the global batch size reasonably small. As a result, the step time drops and the performance impact of automatic weight-update sharding becomes much larger. Even for image models like ResNet, where weight sizes are small, it gives a 6\% speedup. For Transformer, the step time reduces from 46.5ms to 25.6ms when we enable this optimization.
\subsection{Memory saving}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{figs/mem-eval.pdf}
\caption{\label{f:mem_eval} Activation memory saving when automatic weight-update sharding is enabled.}
\end{figure}
With auxiliary variables sharded during the training steps (Figure~\ref{f:sharding-aux-loop}), their live ranges are split into two small segments before and after the training loop. Therefore, in the training loop body, their buffers can be reused by activations or intermediate results, which reduces peak memory usage. The actual saving is determined by the memory allocator, which is subject to problems like fragmentation.
Figure~\ref{f:mem_eval} shows the activation memory saving from this optimization. For models like ResNet, where weights are small compared with activations and SGD keeps only one full-shape copy of the auxiliary variables, the reusable memory is small. For models like NCF, where the activation size is comparable to the weight size and the optimizer (e.g., Adam) creates two full-shape copies of the auxiliary variables, there is more memory to reuse and thus the savings are larger.
\section{Introduction}
\label{intro}
With increasing complexity and data size in deep neural networks, it has become a common practice to leverage distributed, heterogeneous computing devices to parallelize the training process. In the recent MLPerf~0.6 results~\cite{mlperf0.6}, multiple submitters have used over 1000 devices to dramatically reduce the training time.
Data parallelism~\cite{krizhevsky12} is the most commonly used synchronous distributed training strategy due to its simplicity and efficiency. Participating devices are called replicas, which run the same training program that contains the entire neural network, but each replica receives a different partition of the training data batch. Replicas compute their local gradients with their own training data, then communicate to get the combined gradients and apply the same update to their copies of the weight variables. The weight update computation is repeated on all replicas, because the weights and gradients do not have a batch dimension to partition. The cost of weight update on each replica stays constant, even if more devices are added to reduce the per-replica batch size. Due to Amdahl's law, weight update can be a significant overhead for training performance and limit scalability for models with large weights (typical in language models), or small per-replica batch size (typical in large-scale training).
It is natural to think about sharding the weight update computation across replicas, instead of having them all execute the full computation. However, naively sharding the weight update could dramatically increase the data formatting and communication overhead across replicas. First, partitioning a tensor efficiently is non-trivial on modern accelerators with tiled memory layouts~\cite{tpu-perf-guide}. Second, because the forward and backward passes are already partitioned along the batch dimension across replicas, they must receive the full weight in the next training step. In addition to general challenges for efficient communication primitives, a complication is that modern optimizers~\cite{adam,momentum} often require a few auxiliary variables for each weight variable, such as moving average and momentum, each of which has the same size as the weight itself. These auxiliary variables also need to be updated along with the weight. Without weight-update sharding, replicas only need to communicate the gradients; with weight-update sharding, replicas need to communicate the weights and the auxiliary variables, so it is critical to reduce this overhead.
In this paper, we show that with static analysis and proper graph transformations, efficient weight-update sharding can be achieved without any change to the model. Static analysis is used for two purposes, correctness and performance. The correctness analysis identifies parts of the computation graph that are repeated in all replicas, which are the candidate operations for sharding. The performance analysis uses the knowledge of control flow in the graph to find the best places for communication, and estimates the profit of sharding for each part of the repeated computation. With the analysis result, the graph can be transformed with sharded operations, and communication primitives can be added in the proper places. We also found that the transformation often enables more advanced optimizations due to the reduced live ranges of the full weight tensors.
The efficiency of communication primitives can be highly affected by the way we shard a multi-dimensional weight tensor, as well as the topology of the training cluster. Our graph transformation carefully chooses the sharding format for each tensor so that it can be efficiently sharded and unsharded. We use different sharding strategies for small- and large-scale training: for small-scale training, we prioritize reducing the shard size since the number of replicas is small; for large-scale training, we instead prioritize reducing the latency of communication.
We have implemented this approach in TensorFlow~\cite{tensorflow} with XLA~\cite{xla}, where most of the analysis and transformation passes are in XLA. XLA is a functional representation where side effects only exist in a few operations, which greatly reduces the complexity of analysis and transformation. We have evaluated the performance improvements for several common image and language models on Cloud TPUs~\cite{tpu}.
\section{Overview}
\label{overview}
Although our approach can be applied to other systems, we use XLA as the basis for static analysis and graph transforms. We briefly go over some key concepts in XLA, and then give an overview of our approach.
\subsection{XLA background}
XLA is an intermediate representation and its optimizing compiler for linear algebra operations targeting multiple backends (e.g., GPU, TPU, and other accelerators). XLA is currently deployed as a compiler for TensorFlow.
An XLA computation is modeled as a data-flow graph (HLO IR), where nodes represent operators and edges represent the inputs and outputs of each operator. The ordering of operators is solely enforced by data dependencies between the operators~\footnote{A token-type data edge is used to order side-effect operators, if needed.}. Most operators are pure, except for those side-effecting operators to interact between host and device or between devices.
Operator shapes in XLA are static, and this restriction enables aggressive compiler optimizations such as buffer assignment, tiling, and rematerialization. These are key optimizations for accelerators in general, as accelerators are often designed with vector/tiled compute units and have a limited amount of memory. XLA runs a set of target-independent optimizations (e.g., common-subexpression elimination) as well as target-specific optimizations (e.g., layout assignment, fusion). After running all HLO-level optimizations, the backend-compiler component \textit{lowers} each operator to a lower-level, target-specific representation, and eventually generates low-level machine code for the target.
Here we list the operators involved during the transformation for the weight-update sharding optimization we present in this paper. Please refer to XLA operational semantics document~\cite{xla} for the full list.
\paragraph{Control flow.} XLA represents control flow as special operators that call nested computations. There are two control-flow operators: \texttt{While} and \texttt{Conditional}. \texttt{While} takes a single operand (of shape \texttt{T}) as the initial value of the loop carried variable and repeats executing its \texttt{body} computation (\texttt{T} $\Rightarrow$ \texttt{T}) until its \texttt{condition} computation (\texttt{T} $\Rightarrow$ \texttt{bool}) returns false. The result of the \texttt{While} is the loop variable (shape \texttt{T}) of the last iteration.
\texttt{Conditional} with N branches takes an operand for the branch index (or a boolean predicate for a 2-way branch) and an operand for the arguments of each branch. \texttt{Conditional} also takes a computation for each branch, where the argument shape of each computation must match the shape of the corresponding operand. The result shape must be the same for all branch computations and this is the result shape of the \texttt{Conditional} operator.
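Their semantics can be stated as ordinary Python; this is a reference description of our own, not how XLA executes them.
\begin{verbatim}
def xla_while(cond, body, init):
    """While: repeat `body` on the loop-carried value of shape T
    while `cond` holds; the result is the final loop value."""
    val = init
    while cond(val):
        val = body(val)
    return val

def xla_conditional(branch_index, branches, operands):
    """Conditional: run the selected branch on its own operand."""
    return branches[branch_index](operands[branch_index])

# e.g. a counted loop over the state (step, acc):
# xla_while(lambda s: s[0] < 8, lambda s: (s[0] + 1, 2.0 * s[1]),
#           (0, 1.0))
\end{verbatim}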
\paragraph{All-reduce.}
\texttt{All-reduce} has the same semantics as MPI All-reduce~\cite{MPI-2.2}: it reduces a tensor across participating devices based on the provided binary reduction computation. All-reduce can optionally take subgroup information, so that the reduction is only applied within each subgroup of devices. For example, a subgroup assignment of $\left\{\left\{0,1,2,3\right\}, \left\{4,5,6,7\right\}\right\}$ combines values among devices 0-3 and devices 4-7 separately.
\paragraph{Data formatting operators.}
\texttt{Transpose} and \texttt{Reshape} are operators used to change the logical shape of tensors. \texttt{Transpose} reorders the dimensions of a tensor based on the permutation pattern (e.g., F32[5,3,8] $\Rightarrow$ F32[3,8,5]), and \texttt{Reshape} changes the shape to a new configuration (e.g., F32[5,3,8] $\Rightarrow$ F32[15,8]). It is important to note that data-formatting semantics is applied to the logical shape, which is not necessarily the same as how the data is laid out physically in memory. For example, a \texttt{Transpose} from a F32[5,3] tensor laid out as row-major to a F32[3,5] tensor laid out as column-major does not need any data movement, and the compiler converts it into a \texttt{Bitcast} operator which is effectively a no-op.
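NumPy views give a rough analogue of this rewrite; the analogy is ours, since XLA reasons about physical layouts directly.
\begin{verbatim}
import numpy as np

a = np.arange(15, dtype=np.float32).reshape(5, 3)  # row-major [5,3]
b = a.T   # logical [3,5]; only the strides change, no data moves
assert b.base is a  # a view of the same buffer, like XLA's Bitcast
\end{verbatim}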
\begin{figure}[ht]
\centering
\includegraphics[width=0.26\textwidth]{figs/fusion.pdf}
\caption{\label{f:fusion} A \texttt{Fusion} operator with element-wise operators. Edges in blue represent data transfers from/to the global memory, and all intermediate results are stored in local memory.}
\end{figure}
\paragraph{Fusion operators.}
A \texttt{Fusion} operator represents a group of operators that can be emitted as a unit of computation by the backend of the target device. The fusion optimization pass groups operators that are fusible and replaces them with a fusion operator along with a fusion sub-computation. In the common case, this means that the intermediate results of fused operators are stored in registers or scratchpad memory, without moving data from/to the global memory to save memory bandwidth. Figure~\ref{f:fusion} shows an example of several element-wise operators fused into a single operator.
A more advanced use of \texttt{Fusion} operators would be for the backend compiler to pattern match on the operators within the fusion sub-computation and generate a custom implementation that is semantically equivalent to the original one. Fusion operators used for the weight-update sharding optimization (reduce-scatter and all-gather fusion) correspond to this use case.
\paragraph{Side-effecting operations.}
A small number of operators in XLA are marked as side-effecting and compiler passes need extra care when applying optimizations around the side-effecting operators such that visible side-effects remain the same. Examples are operators used for data transfers between different address spaces, such as \texttt{Infeed} (host to device), \texttt{Outfeed} (device to host), and \texttt{Send/Recv} (device to device).
\subsection{Sharding weight update}
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{figs/unsharded-replication.pdf}
\caption{\label{f:replication} Synchronous data-parallel training with 2 replicas.}
\end{figure}
Figure~\ref{f:replication} shows a typical synchronous training scenario in data parallelism with two replicas. In every training step, each replica computes its local gradients with its own partition of the training input batch, then all replicas use an all-reduce operator to get the summed gradients. At the end of the training step, all replicas apply the same summed gradients to their copies of the weights, which ensures them to always have the weights in sync as long as they start with the same initial weights.
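The step structure can be summarized by the following simulation, where \texttt{grad\_fn} and the learning rate are placeholder inputs of ours.
\begin{verbatim}
def data_parallel_step(replica_weights, batch_shards, grad_fn, lr):
    """One synchronous step over N simulated replicas (a sketch)."""
    # Each replica computes gradients on its own shard of the batch.
    grads = [grad_fn(w, b)
             for w, b in zip(replica_weights, batch_shards)]
    summed = sum(grads)  # stand-in for the all-reduce
    # The identical update is repeated on every replica,
    # keeping the weight copies in sync.
    return [w - lr * summed for w in replica_weights]
\end{verbatim}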
The training step can spend a non-trivial amount of time in weight update. This is typically true in models with large weights, such as language models like Transformer~\cite{transformer}; in image models like ResNet~\cite{resnet}, although the weight size is usually smaller, when they are trained in large-scale setups with many devices, the per-core batch size is usually set to a small value to avoid excessively large global batch size, making weight update relatively expensive as well. Weight update is memory bound: the compute is mostly simple elementwise operations, but for every weight variable it needs to read the gradient, the original weight and the auxiliary variables, then write back the updated weight and auxiliary variables. In our experiments, Transformer training can spend more than 40\% of the step time in weight update on 1024 TPUv3 chips.
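A rough traffic estimate for Adam illustrates the point; the accounting below is ours, assuming float32 values and no reuse across the listed tensors.
\begin{verbatim}
def adam_update_traffic_bytes(num_params, bytes_per_elem=4):
    """Approximate memory traffic of one Adam weight update.
    Reads: gradient, weight, first and second moments (4 tensors);
    writes: weight, first and second moments (3 tensors)."""
    return 7 * num_params * bytes_per_elem

# A 100M-parameter model moves ~2.8 GB per step in weight update
# alone, on every replica, regardless of per-replica batch size.
\end{verbatim}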
Weight update is not sharded in data parallelism because the weights and gradients do not have a batch dimension to be partitioned. Our goal is to enable sharded weight update across the replicated devices as an optimization, without using more devices.
\paragraph{Sharding with decomposed all-reduce.}
A typical efficient implementation of all-reduce has two phases~\cite{cho19}: reduce-scatter and all-gather. In the {\bf reduce-scatter} phase, replicas exchange data in several rounds on different shards of the data, and at the end, each replica has one shard of the fully reduced data from all replicas. In the {\bf all-gather} phase, they perform new exchanges to broadcast their own fully reduced shards to all other replicas.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{figs/basic-sharding.pdf}
\caption{\label{f:basic-sharding} Sharding with reduce-scatter and all-gather.}
\end{figure}
As shown in Figure~\ref{f:basic-sharding}, we could use reduce-scatter to produce per-replica shards of the summed gradients, so that each replica can perform weight update on a shard. After that, we could use all-gather to broadcast the updated weight shards to all replicas. The reduce-scatter and all-gather combined should have similar performance as the original all-reduce.
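The sketch below simulates this flow with NumPy, assuming the weight size divides evenly by the replica count (the general case needs the reformatting of Section~\ref{transform}); \texttt{reduce\_scatter} and \texttt{all\_gather} are stand-ins for the collective primitives:
\begin{verbatim}
import numpy as np

R, W = 4, 8                        # replicas; weight elements (W % R == 0)
w = [np.arange(W, dtype=np.float64) for _ in range(R)]
grads = [np.full(W, r + 1.0) for r in range(R)]
shard = W // R

def reduce_scatter(values):
    # Replica r ends up with shard r of the fully summed values.
    total = sum(values)
    return [total[r * shard:(r + 1) * shard] for r in range(len(values))]

def all_gather(shards):
    full = np.concatenate(shards)
    return [full.copy() for _ in shards]

lr = 0.1
grad_shards = reduce_scatter(grads)            # phase 1 of all-reduce
new_w_shards = [w[r][r * shard:(r + 1) * shard] - lr * grad_shards[r]
                for r in range(R)]             # sharded weight update
w = all_gather(new_w_shards)                   # broadcast updated shards

expected = np.arange(W) - lr * sum(range(1, R + 1))
assert all(np.allclose(w[r], expected) for r in range(R))
\end{verbatim}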
A complication is the use of auxiliary variables in the optimizer. For example, for each weight, the Adam optimizer~\cite{adam} maintains two variables for exponential moving averages of the gradients and squared gradients. These variables are part of the training state and are included in model checkpoints, so typically the updated values are also part of the training step's output. If we do all-gather on every auxiliary variable at the end of each training step, the communication overhead would be too large. However, these variables are only used by the optimizer at the weight-update time, and not needed by the next iteration's forward and backward passes that compute the gradients. Therefore, an optimized solution could keep the auxiliary variables sharded across iterations until they are needed by checkpointing or summary. In practice, there are different patterns that could affect the placements of the all-gathers.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{figs/sharding-aux-loop.pdf}
\caption{\label{f:sharding-aux-loop} Two ways of sharding auxiliary variables with a loop. Left: only keep auxiliary sharded across iterations. Right: keep auxiliary and weight sharded across iterations, and all-gather the weight before forward/backward passes. Details will be discussed in Section~\ref{transform-on-loop}.}
\end{figure}
\begin{itemize}
\item {\bf Compiler-visible loop.} If the compiler (graph optimization) could see a training loop in the graph, it could perform all-gather of auxiliary variables after the loop, amortizing the cost (see Figure~\ref{f:sharding-aux-loop}). If not, it would require additional help from the runtime system.
\item {\bf Other uses of the auxiliary variables.} Although auxiliary variables are only used at weight-update time for the purpose of training, models in practice often include custom logic such as getting a summary of the current training progress which may use the complete state of variables. Such operations may be inside the training loop body, but often guarded by a conditional so that they only happen every ${k}$ steps.
\end{itemize}
We will discuss these issues in Section~\ref{analysis} and Section~\ref{transform}. The rest of the paper also addresses the following challenges that are critical to performance.
\begin{itemize}
\item {\bf Sharding format.} How a tensor is divided across different replicas can be tricky on accelerators with tiled memory layouts~\cite{tpu-perf-guide}, since data formatting can be expensive. Additionally, individual dimensions on a tensor can be too small or not evenly shardable among the replicas. To make the sharding of tensors efficient, our system chooses a set of cheap reformatting steps that can be efficiently fused into the sharding/unsharding operations.
\item {\bf Non-elementwise optimizers.} With some model optimizers, the weight update computation may include non-elementwise operations. For example, some optimizers~\cite{adafactor,lars} use the weight norm or root-mean-square which involves \texttt{reduce} operators. We will discuss solutions of running non-elementwise computations on sharded data.
\item {\bf Communication in large topology.} When the number of replicas is large, the shard size of a tensor can be very small such that the reduce-scatter and all-gather would become latency-bound. In such cases, our system will choose to partially shard the weight update computation among subgroups of replicas, and use batched communication operations to reduce the latency on large network topology.
\end{itemize}
\section{Related Work}
\label{related}
\paragraph{Partitioned parameter servers.} In asynchronous training, parameter servers \cite{distblief,li14-param-server} are often used for weight update, where partitions of weights can be sharded across different server instances. The major difference from our approach is that these systems partition weights across multiple server instances, while our approach shards weight update across the existing workers (replicas) without using extra resources. Also, asynchronous training is a very different setting from the synchronous training we focus on.
\paragraph{Parallel programming frameworks.}
Mesh-TensorFlow \cite{mtf} is a Single-Program-Multiple-Data (SPMD) framework that allows users to write programs with tensor dimensions split across dimensions of a multi-dimensional processor mesh. Although weight update can be sharded using Mesh-TensorFlow, it is orthogonal to our work: it requires each mesh dimension to have a specific meaning, and if a mesh dimension is assigned to the batch (replica) dimension, it cannot also be used to split the weight update. In our work, the weight update is split across replicas without using more processors than pure data parallelism. GPipe~\cite{gpipe} is a library for implementing pipeline parallelism for sequence models, which is orthogonal to our approach since it does not parallelize across replicas.
\paragraph{Automated parallelism.}
FlexFlow~\cite{jia19} uses automated search to discover the optimal partition of operations in a graph, which has a flexible search space to cover model and data parallelism. While it focuses on determining the partition strategy for every operation, our system also leverages global graph transformation for the training loop in order to amortize all-gather cost.
\paragraph{Manually designed parallelism.}
There are custom parallel training strategies designed for specific models \cite{gnmt,one-weird-trick}, which typically mix data and model parallelism. In contrast, our approach focuses on the general weight-update pattern in synchronous data-parallel training, which benefits a wide range of use cases.
\section{Graph Transformation}
\label{transform}
After the analysis passes, whether to shard each weight update is determined. This section discusses issues in implementing the sharded weight update efficiently, including how a weight tensor is sharded and the placement of all-gather operators in different scenarios. Performance of reduce-scatter and all-gather will be discussed later in Section~\ref{comm}.
\subsection{Sharding representation}
\label{sharding-rep}
For a set of weight-update operators (Figure~\ref{f:weight-update-around-ar}), all the inputs (gradients and the original weights and auxiliary variables) must be sharded in the same way, because they are consumed by the same set of elementwise operators during weight update. Without weight-update sharding, although all-reduce is also implemented as a reduce-scatter phase and an all-gather phase, it can choose arbitrary sharding internally because the sharding does not need to be exposed to other operators; in contrast, with sharded weight update, the sharding format used by the communication primitives must match the sharding on the inputs.
A weight tensor is represented as a multi-dimensional array. In processors like Cloud TPUs which have tiled memory layouts~\cite{tpu-perf-guide}, splitting some dimensions can be more expensive than splitting other dimensions. The chosen sharding must also be supported by the reduce-scatter and all-gather operators. Therefore, we always choose a dimension that is efficient for sharding and easy to support in reduce-scatter and all-gather.
\paragraph{Data formatting.} One common problem is that the desired sharding dimensions are not evenly divisible by the number of shards (replicas). For example, ResNet~\cite{resnet} has weights with shape [3,3,256,256], where [3,3] are the desired sharding dimensions, but the shard count can be 8. To address such problems, we allow a tensor to be reformatted before being sharded across replicas. Therefore, the sharding of a tensor is represented as a sequence of data-formatting operators, followed by a dynamic-slice operator, as shown in Figure~\ref{f:sharding-rep}. The dynamic-slice specifies the dimensions to shard, and uses the replica-id to calculate the offset of the shard for each replica.
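The sketch below walks through this representation in NumPy for the ResNet example with a hypothetical replica count of 10: reshape [3,3,256,256] to [9,256,256], pad the leading dimension to 10, then let each replica dynamic-slice its shard by replica id:
\begin{verbatim}
import numpy as np

num_replicas = 10
w = np.zeros((3, 3, 256, 256), dtype=np.float32)

# Formatting: combine the [3,3] dims, then pad 9 -> 10 so the leading
# dimension divides evenly by the replica count.
reshaped = w.reshape(9, 256, 256)                    # trivial reshape
padded = np.pad(reshaped, ((0, 1), (0, 0), (0, 0)))  # zero padding

def dynamic_slice(full, replica_id):
    # The shard offset is computed from the replica id, as in the
    # dynamic-slice operator.
    shard_size = full.shape[0] // num_replicas       # = 1 here
    start = replica_id * shard_size
    return full[start:start + shard_size]

shards = [dynamic_slice(padded, r) for r in range(num_replicas)]
assert shards[0].shape == (1, 256, 256)
\end{verbatim}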
\begin{figure}[ht]
\centering
\includegraphics[width=0.46\textwidth]{figs/sharding-rep.pdf}
\caption{\label{f:sharding-rep} Sharding and unsharding with reformatting. The right graph shows an example of handling non-elementwise operators on a shard with reformatting.}
\end{figure}
The formatting operators can include reshapes that combine dimensions, and pads that make the dimensions divisible by the shard count. Combining dimensions usually happens before padding, which helps minimize the amount of padding. For example, [3,3,256,256] can be reshaped to [9,256,256], which allows it to be padded into [10,256,256] if the replica count is 10, instead of padding to [10,3,256,256] or [4,5,256,256]. The reformatting must be efficient to implement for the platform. In practice, we only choose reshapes that are trivial, i.e., that do not require any data movement. For Cloud TPUs, reshaping [3,3,256,256] into [9,256,256] can be trivial, but reshaping it to [589824] may be expensive due to the tiled memory layout.
There is another platform-dependent reformatting operator, bitcast, which reinterprets the on-device memory as a different shape, as long as the new shape's on-device representation does not go out of bounds. Bitcast does not have consistent semantics in platform-agnostic XLA, but it is consistent for a specific platform, which is sufficient to guarantee that different inputs of the weight update are sharded in the same way. For example, for Cloud TPU, we can bitcast [3,3,256,256] into [576,8,128], making it shardable across 64 replicas without any padding.
In addition, we only choose reformatting operators that could be efficiently fused into operators around them. For example, the pad operator should be fused into the dynamic-slice so it does not access the entire memory buffer of the full shape.
\paragraph{Non-elementwise operators.} While most operators in weight update are simple elementwise arithmetic ones, some optimizers~\cite{lars,adafactor} also include non-elementwise operators, with the most common one being reduce.
Non-elementwise operators may impose restrictions on how a tensor can be reformatted. In a reduce operator, some dimensions are collapsed using the reduction function, and others are passed through to the result; the reformatting is not allowed to combine a collapsed dimension with a pass-through dimension, through either reshape or bitcast. This does not restrict a reduce-to-scalar operator since all dimensions are collapsed.
Another restriction is for padding. Padded data elements in the collapsed dimensions could affect the result of the reduce, so they must be masked off with the identity value (e.g., 0 for addition and 1 for multiplication). This requires that the locations of the padded data remain identifiable after reformatting. If the padding is introduced as an explicit reformatting step without a reshape or bitcast following it, the locations are identifiable as specified in the pad operator; the restriction instead applies mostly to implicit padding in bitcast: the tiled memory layout already implies padding, so reinterpreting a memory buffer could lose some of the padding information. Therefore, depending on the platform's memory layouts for tensors, certain bitcasts may introduce complexity when supporting reduce operators. The exact restriction depends on the implementation, which should avoid choosing such unsupported reformatting.
If sharding affects the collapsed dimensions, extra handling is required for the reduce operator. First, each replica needs to mask off the padded data. The padding areas on different replicas are different, depending on their shards' locations in the full shape, which requires the masking to be dynamic in the same training program. As shown in Figure~\ref{f:sharding-rep}, this can be achieved by comparing the elements' locations (iota + start offset) with the padding areas' locations on the full shape, then selecting between the shard data and the identity value based on the comparison results. Second, replicas need to combine their reduce results using an all-reduce. This is because the collapsed dimensions are lost in the reduce result, so they cannot be sharded, but each replica's local result is different from others and only captures data from its own input shard.
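A sketch of this handling for a sum reduce over a sharded, padded dimension (NumPy simulation with illustrative shapes): each replica builds a mask from an iota plus its shard's start offset, selects the identity value in padded positions, reduces locally, and the local results are then combined with an all-reduce (a plain sum here):
\begin{verbatim}
import numpy as np

full_len, num_replicas, pad_len = 9, 2, 10    # pad 9 -> 10, shard size 5
data = np.arange(full_len, dtype=np.float64)
padded = np.pad(data, (0, pad_len - full_len))

def local_masked_sum(replica_id):
    shard_size = pad_len // num_replicas
    start = replica_id * shard_size
    shard = padded[start:start + shard_size]
    # iota + start offset gives each element's position in the full shape.
    positions = np.arange(shard_size) + start
    valid = positions < full_len
    # Select the identity value (0 for addition) in padded positions.
    return np.where(valid, shard, 0.0).sum()

# Each local reduce covers only one shard; the all-reduce combines them.
total = sum(local_masked_sum(r) for r in range(num_replicas))
assert total == data.sum()
\end{verbatim}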
\subsection{Transform the training graph}
\label{transform-on-loop}
As discussed in Section~\ref{perf-analysis}, the placement of all-gather operators is critical to performance. With the help of a training loop, we often need only one all-gather inside the loop.
\paragraph{Out-of-loop all-gather placement.} With a compiler-visible training loop, the all-gather operator for auxiliary variables can be placed after the loop, followed by required reverse formatting operators. Correspondingly, the original auxiliary variable values need to be sharded before the loop starts, using the reformatting operators and dynamic-slice as in Figure~\ref{f:sharding-rep}.
If there is no compiler-visible loop, it is still possible to benefit from weight-update sharding by moving the sharding and unsharding of auxiliary variables outside of the training step program. One solution is to generate three separate programs after graph transformation: a sharding program, a main program, and an unsharding program. The sharding program contains the sharding operators of the variables before the training loop; the main program contains the training step with sharded weight update; the unsharding program contains the all-gather operators to reconstruct the full variables. It is the run-time system's responsibility to invoke each program at the right time. For instance, if the run-time system manages the training loop, it can invoke the sharding/unsharding programs before and after the loop; even if the run-time does not see a loop structure, it can still maintain states that track whether each variable is sharded, and conditionally invoke the sharding/unsharding programs when there is a state mismatch.
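A sketch of this run-time bookkeeping (the program handles and names are hypothetical, not an existing API): the run-time tracks whether the variables are currently sharded and invokes the sharding/unsharding programs only on a state mismatch.
\begin{verbatim}
class ShardedVariableRuntime:
    """Tracks sharding state and invokes the three compiled programs."""

    def __init__(self, sharding_prog, main_prog, unsharding_prog):
        self.sharding_prog = sharding_prog      # full -> per-replica shards
        self.main_prog = main_prog              # step with sharded update
        self.unsharding_prog = unsharding_prog  # all-gather back to full
        self.is_sharded = False

    def train_step(self, state):
        if not self.is_sharded:                 # mismatch: shard first
            state = self.sharding_prog(state)
            self.is_sharded = True
        return self.main_prog(state)

    def checkpoint(self, state):
        if self.is_sharded:                     # full values needed here
            state = self.unsharding_prog(state)
            self.is_sharded = False
        return state
\end{verbatim}
With this structure, a long run of training steps pays the sharding cost once, and the all-gathers for auxiliary variables run only when a checkpoint or summary needs full values.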
\paragraph{In-loop all-gather placement.} In Figure~\ref{f:sharding-aux-loop}, we have shown two potential ways to place the all-gather for the weight to be consumed by the forward and backward passes. The left graph shows the obvious way where the all-gather is at the end of the training step, and the weight is already in full shape when the next iteration starts. The right graph instead keeps the weight sharded across loop iterations, like for auxiliary variables, but performs the all-gather right before it is needed by the forward and backward passes.
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{figs/ag-bf16.pdf}
\caption{\label{f:ag-bf16} By keeping the weight sharded across iterations (right graph), the full weight is only needed in bfloat16 precision.}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\textwidth]{figs/peak-memory.pdf}
\caption{\label{f:peak-memory} Reduced buffer live ranges of auxiliary variables allow ADAM to have similar peak memory as SGD.}
\end{figure*}
It may appear that the first approach is better for performance since it does not need the all-gather for the weight after the loop, even though that should be only a small amortized cost. However, we found in practice the second approach often enables more advanced optimizations. The main difference is that in the second approach the weight update no longer depends on the full weight. Weight update only requires the sharded data that is given when the step starts, and the full data after all-gather is only consumed by the forward and backward passes. In many image and language models, the forward and backward passes use the weight as an input to convolutions or matrix multiplies, which often have lower precision requirements on their inputs. For example, in typical training with Cloud TPUs, the precision of the input to a convolution is reduced to bfloat16~\cite{bfloat16}, while the weight update is often required to be in float32. With the second approach, the all-gather for the full weight can be performed in bfloat16 as shown in Figure~\ref{f:ag-bf16}, which dramatically reduces the amount of memory access and communication. This precision optimization is done automatically by a dataflow-based precision propagation pass.
\paragraph{Memory saving.} With the above transformation, the live ranges of weights and auxiliary variables are reduced. Especially for auxiliary variables, the full buffer is only required outside the training loop. Therefore, their buffers can be reused to store activations and gradients in the forward and backward passes. As shown in Figure~\ref{f:peak-memory}, this allows optimizers with different auxiliary variable sizes to have similar peak memory usage. More precisely, suppose the total size of weights is $W$, total size of auxiliary variables is $V$ (optimizer-specific), and the peak size of live activations and gradients in forward and backward passes is $P$, then our technique reduces peak memory usage from $W+V+P$ to $\max(W+V/N+P, W+V)$ where $N$ is the number of shards. This allows the ADAM optimizer to be as efficient as SGD in terms of both performance and memory.
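As a rough worked example of this formula (sizes are illustrative, not measurements): with $W=1$, $P=6$, and $N=64$, ADAM ($V=2W$) drops from $9$ to about $7.03$, close to SGD's $W+P=7$.
\begin{verbatim}
def peak_memory(W, V, P, N, sharded):
    # W: weights, V: auxiliary variables, P: peak activations/gradients.
    if not sharded:
        return W + V + P
    return max(W + V / N + P, W + V)

W, P, N = 1.0, 6.0, 64
print(peak_memory(W, 0.0, P, N, sharded=False))    # SGD:            7.0
print(peak_memory(W, 2 * W, P, N, sharded=False))  # ADAM unsharded: 9.0
print(peak_memory(W, 2 * W, P, N, sharded=True))   # ADAM sharded:  ~7.03
\end{verbatim}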
The concern over the potential link between the black hole and the particle has a long and continuous history because it may provide us useful information about the connection between general relativity and quantum mechanics.
In 1935, in search of a geometric model for elementary particles, Einstein and Rosen \cite{einstein_particle_1935} proposed a speculative structure now known as the Einstein–Rosen bridge.
In 1968, Carter \cite{carter_global_1968} found that the Kerr–Newman solution \cite{newman_metric_1965} has a gyromagnetic ratio g=2 like the Dirac electron.
Since then, the Kerr–Newman electron has received continued attention \cite{debney_solutions_1969,israel_source_1970,d._ivanenko_gravitational_1975,barut_zitterbewegung_1981,lopez_extended_1984,lopez_internal_1992,burinskii_string-like_1993,israelit_classical_1995,finster_particlelike_1999,burinskii_super-kerr-newman_1999,burinskii_gravitational_2003,arcos_kerrnewman_2004,burinskii_dirac-kerr-newman_2008,burinskii_source_2016,burinskii_new_2017}
and has obtained support from string theory \cite{holzhey_black_1992,sen_rotating_1992,sen_extremal_1995,nishino_stationary_1995,horowitz_rotating_1996}.
Moreover, there have also been suggestions that black holes should be treated as elementary particles \cite{t_hooft_black_1990,hawking_gravitational_1971,susskind_speculations_1993,susskind_black_1994,russo_asymptotic_1995,duff_new_1994,duff_massive_1995,sen_black_1995,hull_unity_1995,townsend_eleven-dimensional_1995,witten_string_1995,strominger_massless_1995,greene_black_1995}.
The complex metric, introduced by Newman and his co-workers in their derivation of the Kerr-Newman metric \cite{newman_metric_1965}, has been found to be a useful mathematical tool in various problems \cite{newman_maxwells_1973,gibbons_cosmological_1977,brown_complex_1991,burinskii_kerr_1998,burinskii_kerr-schild_2000,newman_classical_2002,burinskii_complex_2003}. Recently, a quantum picture of black holes as Bose-Einstein condensates (BEC) of gravitons was proposed
\cite{dvali_black_2013-3}.
In this picture, we find that the complex Kerr-Newman metric has a deep physical meaning rather than being just a mathematical model.
In a 6-D complex space, both common black holes and elementary particles are found to be special cases of the complex Kerr-Newman black holes, which can turn into each other through a phase transition.
By analysing the metric of a particle in the imaginary space, we find that the wave-like nature of a particle in 4-D spacetime is a result of the self-gravitational interaction of the particle as a black hole in the imaginary space.
\section{Phase transition of complex black hole}
\subsection{Common black hole as a particular solution of complex black hole}
The Kerr-Newman metric describes a general black hole with both charge and spin \cite{newman_metric_1965}.
The radii of its two horizons ($r_\pm$) are
\begin{equation}
{r_ \pm } = m \pm \sqrt {{m^2} - {a^2} - {Q^2}}
\label{lab1}
\end{equation}
where $m$ is its mass, $a$ is its angular momentum per unit mass, and $Q$ is its charge. Natural units $c = \hbar = G = {k_B} = 1$ are used in this work ($c$ will appear explicitly where the speed of light needs to be stressed).
Equation (\ref{lab1}) seems to lose its physical meaning when $m^2< a^2+Q^2$.
However, if a horizon can have a complex radius, the physical meaning of this equation can be extended.
Re-writing equation (\ref{lab1}), we can obtain
\begin{equation}
{r_ \pm } = m \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab2}
\end{equation}
The real radius of the complex horizon ($r_R$) is
\begin{equation}
r_R=m
\label{lab3}
\end{equation}
In the 3-D real space, an elementary particle described by equation (\ref{lab2}) will appear as a 0-D point at low energy, because its $r_R$ is much smaller than the Planck length and too small to be measured by current technology. This agrees with the standard model.
The imaginary radius of the complex horizon ($r_I$) is
\begin{equation}
{r_I} = \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab4}
\end{equation}
As its $m$ increases, the $r_I$ of a particle decreases continuously to $0i$ and is then realized, which means that the particle as a complex black hole changes into a common black hole.
The phase transition point of the complex black hole is an extreme black hole, $r_I=0$. At the same time, $r_R$ of a particle increases continuously.
$r_R$ not only characterizes the size of the particle, but also defines the boundaries of the 3-D real space for other observers.
Therefore, in the rest frame of the particle, the increasing $r_R$ can be understood as an expansion of the coordinate origin from a 0-D point to a 2-D spherical surface with radius of $r_R$ (as shown in Fig.\ref{fig:fig1}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{phase_transition.eps}
\caption{\textbf{Phase transition of complex black hole.}
After the point of phase transition, $r_I=0$, the imaginary radius is realized and the particle changes into a black hole.
The objects in $a$ correspond to the objects in $b$: 0-D origin point vs 2-D origin surface, 1-D time dimension vs 3-D imaginary space.
The black hole is a Bose-Einstein condensate of $N$ gravitons. For any graviton, its origin is a point on the origin surface.
}
\label{fig:fig1}
\end{figure}
In this way, the inner space of a common black hole bordered by its inner and outer horizons is in fact a realized imaginary space embedded in the 3-D real space (while the space within its inner horizon is imaginary space).
All the points in this realized imaginary space share the same real radius although their imaginary coordinates can be different.
If we consider the rotational symmetry, these points can be considered indistinguishable points in the real space.
This agrees with the quantum picture of black holes as Bose-Einstein condensates of $N$ gravitons \cite{dvali_black_2013-3}.
\subsection{Gravitons' motions in a BEC black hole}
The origin surface of a complex black hole, one of the most important physical concepts in this work, may seem counter-intuitive.
In the quantum picture of black holes as Bose-Einstein condensates of $N$ gravitons \cite{dvali_black_2013-3}, this concept is easier to understand.
According to \cite{dvali_black_2013-3}, the number of gravitons of a Schwarzschild black hole in BEC with mass of $M$ is
\begin{equation}
N=M^2
\label{lab5}
\end{equation}
The mass of every graviton is
\begin{equation}
m_g=1/M
\label{lab6}
\end{equation}
The origin surface with many points can be regarded as a collection of the origins of $N$ gravitons (as shown in Fig.\ref{fig:fig1}).
The de Broglie wavelength of any graviton
\begin{equation}
\lambda = \frac{{2\pi }}{{{m_g}}} = 2\pi M
\label{lab7}
\end{equation}
is found to be the circumference of a circle with radius of $r_I$ ($=M$ for this Schwarzschild black hole with mass of $M$), which implies that a graviton of the Schwarzschild black hole in BEC may be a standing wave centered on its origin on the origin surface.
In fact, a graviton without rest mass moves at the speed of light.
If it undergoes uniform circular motion with a radius of $r_I$ around its origin, it will complete one cycle in one period.
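As a quick numeric check of equations (\ref{lab5})--(\ref{lab7}) in Planck units (a sketch for scale only; the solar-mass and Planck-mass figures are standard values):
\begin{verbatim}
import math

m_sun_kg = 1.989e30           # solar mass
m_planck_kg = 2.176e-8        # Planck mass

M = m_sun_kg / m_planck_kg    # black hole mass in Planck units, ~9.1e37
N = M**2                      # graviton number, ~8.4e75
m_g = 1.0 / M                 # mass of each graviton in Planck units

# de Broglie wavelength of a graviton vs. circumference of radius M:
wavelength = 2 * math.pi / m_g
circumference = 2 * math.pi * M
assert math.isclose(wavelength, circumference)
\end{verbatim}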
In the 4-D spacetime, the origin of a particle’s rest frame moves along the time dimension at the speed of light ($a$ of Fig.\ref{fig:fig1}).
After the phase transition of space, the time dimension is unfolded into the 3-D imaginary space.
The distance moved by the origin along the time dimension grows without bound over time, which seems impossible in the limited realized imaginary space.
A reasonable solution is that the motion of the origin of each graviton is a uniform circular motion on the origin surface at the speed of light.
The clockwise and counterclockwise rotations correspond to $t>0$ and $t<0$, respectively.
In this way, the resultant of the graviton's circular motion and the motion of its origin is a uniform circular motion on the horizon at the speed of light.
In this subsection, combining complex metrics \cite{newman_metric_1965} and the quantum picture of black holes as BECs of gravitons
\cite{dvali_black_2013-3}, we provide a possible picture of how gravitons move in a black hole.
This picture is found to be a key to understanding the geometric origin of the de Broglie waves (discussed in detail later).
\section{Particle as imaginary black hole in AdS}
\subsection{Singularity as the origin of time}
What kind of geometry does a particle have in the hidden 3-D imaginary space?
Penrose's idea about the singularity \cite{penrose_gravitational_1965} provides a useful clue.
According to Penrose \cite{penrose_gravitational_1965}, the singularity is the origin of time.
In the 6-D complex space, the counterpart of the origin of time in 4-D spacetime is the origin of the 3-D imaginary space.
The singularity of a common Kerr–Newman black hole appears as a ring on its equatorial plane with a radius of
\begin{equation}
{r_{s,R}} = a
\label{lab8}
\end{equation}
Any direction of rotation is mathematically equivalent because of the rotational symmetry of the 3-D real space.
Therefore, the ring singularity can be regarded as a special solution of a sphere singularity after the direction of the rotary axis is locked.
The singularity of a common real black hole is completely enclosed by the event horizon.
From the above section, we know the space within the event horizon is in fact an imaginary space.
Therefore, the origin surface should have an imaginary radius with a modulus of $a$.
In this way, the origin surface of the particle in imaginary space has a radius of
\begin{equation}
{r_0} = ia
\label{lab9}
\end{equation}
and the horizon of a particle in the 3-D imaginary space has a radius of $r_I$ (as shown in $I$ of Fig.~\ref{fig:fig2}), which means that a particle appears as an imaginary black hole, the horizon radii of which are
\begin{equation}
{r_ \pm } = ia \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab10}
\end{equation}
Equation (\ref{lab10}) can be rearranged as
\begin{equation}
{r_ \pm } = ia \pm \sqrt {{{(ia)}^2} - {{(im)}^2} - {Q^2}}
\label{lab11}
\end{equation}
Comparing equations (\ref{lab1}) and (\ref{lab11}), we can obtain the equivalent mass and angular momentum per unit mass of the particle's imaginary black hole as
\begin{equation}
\left\{ {\begin{array}{*{20}{c}}
{{M_i} = ia}\\
{{a_i} = im}
\end{array}} \right.
\label{lab12}
\end{equation}
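As a consistency check (spelled out here for clarity), substituting $M_i = ia$ and $a_i = im$ back into equation (\ref{lab1}) recovers equation (\ref{lab10}):
\[
r_\pm = M_i \pm \sqrt{M_i^2 - a_i^2 - Q^2}
      = ia \pm \sqrt{(ia)^2 - (im)^2 - Q^2}
      = ia \pm i\sqrt{a^2 + Q^2 - m^2} .
\]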
\subsection{Hawking temperature of a particle as imaginary black hole}
The imaginary black hole has a Hawking temperature of
\begin{equation}
{T_i} = \frac{1}{{2\pi }}\frac{{{r_ + } - {r_ - }}}{{2(r_ + ^2 + a_i^2)}}
\label{lab13}
\end{equation}
which has an imaginary value.
A black hole can harvest energy from its environment and lose energy through Hawking radiation.
Therefore, energy balance is a necessary condition for a stable black hole.
The Hawking temperature of a particle as an imaginary black hole is therefore a good marker of the energy level of its local imaginary space.
According to the work of Deser and Levin \cite{deser_accelerated_1997}, an inertial observer in a de Sitter ($dS$) or anti-de Sitter ($AdS$) spaces with cosmological constant $\Lambda$ will measure a temperature of
\begin{equation}
{T_\Lambda } = \frac{1}{{2\pi }}\sqrt {\frac{\Lambda }{3}}
\label{lab14}
\end{equation}
When $\Lambda<0$, this temperature will have an imaginary value.
Therefore, from the view of an observer in the real space, the imaginary space of our universe is an $AdS$ space.
According to the symmetry of complex space, the time dimension of the $AdS$ space is folded from the 3-D real space.
It should be emphasized that the imaginary values of physical quantities in the imaginary space, including mass and length, are only relative to observers in the real space.
From the view of any observer in the imaginary space, the imaginary black hole is just a common real one and the imaginary space is a $dS$ space.
\subsection{Evolution of a complex black hole}
The presence of complex black holes (including common black holes and particles) makes the coordinate origin of the complex space expand from the point $0+0i$ to a complex spherical surface with a radius of
\begin{equation}
{R_0} = m + ai
\label{lab15}
\end{equation}
The imaginary black hole of a particle also has a ring singularity with a radius of
\begin{equation}
{r_{s,I}} = a_i=im
\label{lab16}
\end{equation}
The modulus of $r_{s,I}$ equals the radius of its origin surface in the real space.
The ring singularity of a common real black hole also has this characteristic.
Therefore, the ring singularity of a real or imaginary black hole is in fact a section
of the origin surface in its complex conjugate space after the direction of the rotary axis is locked. In the complex space, the singularity is covered by the horizon.
The evolution of a complex black hole in complex space is shown in Fig.~\ref{fig:fig2}.
We find that the two special cases of complex black holes, particles and common black holes, are complex conjugates of each other.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{evolution.eps}
\caption{\textbf{Evolution of a complex black hole.} $I$: the particle appears as a point-like particle in the real space and as an imaginary black hole in the imaginary space; $II$: the point expands while the imaginary black hole shrinks; $III$: extreme black hole and extreme imaginary black hole; $IV$: real black hole and imaginary point-like particle. Red circles or spherical surfaces represent origin surfaces, while the black circles represent inner or outer horizons.}
\label{fig:fig2}
\end{figure}
\section{Geometric origin of de Broglie wave}
\subsection{Wave-like nature as a result of gravitons' motion}
For all elementary particles except the Higgs boson in the standard model, the following equation
\begin{equation}
{r_I} = i\sqrt {{a^2} + {Q^2} - {m^2}} \approx ia
\label{lab17}
\end{equation}
is a sufficiently accurate approximation. Therefore, the imaginary Kerr-Newman black hole of the particle is approximately an imaginary Schwarzschild black hole.
According to \cite{dvali_black_2013-3}, the imaginary component of the mass of every graviton of the imaginary black hole in Bose-Einstein condensate is
\begin{equation}
{m_{g - i}} = \frac{i}{{\left| {{M_i}} \right|}} = \frac{i}{a}
\label{lab18}
\end{equation}
while the number of the gravitons is
\begin{equation}
N = {\left| {{M_i}} \right|^2} = {a^2}
\label{lab19}
\end{equation}
From the view of an imaginary stationary observer in the rest frame of the particle, the motions of the gravitons of its imaginary black hole in BEC are the superposition of the motions of their origins and their circular motions around their origins.
When the 3-D imaginary space folds to the 1-D time dimension of the 4-D spacetime, these circular motions make particles obtain their wave-like nature.
The real part of the complex wave function is the component in a certain direction of the 3-D imaginary space which acts as the time dimension of the 4-D spacetime, while the imaginary part is the component perpendicular to this direction.
In the following, we will derive the geometric origin of the plane wave of a free Dirac fermion ($L=1/2$). The problem is discussed in the free particle's inertial coordinate frame.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{phase_angle_1.eps}
\caption{\textbf{Geometric origin of wave nature of a Dirac fermion.}
The motions of the gravitons of the imaginary black hole in Bose-Einstein condensate make the particles obtain their wave-like nature.
The red, blue, and green bold short arcs
represent the displacement of a graviton's origin,
the displacement caused by the circular motion around its origin, and its total displacement, respectively.
}
\label{fig:fig3}
\end{figure}
For the imaginary black hole of a free Dirac fermion, we assume that the two circular motions of its gravitons lie in one plane (Fig.\ref{fig:fig3}).
During a time interval of $0.5t_0$, the displacement of its origin ($\Delta l_{i-0}$, the red bold short arc in Fig.\ref{fig:fig3}) is
\begin{equation}
\Delta {l_{i - 0}} = ic \times 0.5t_0
\label{lab20}
\end{equation}
During the same time, the displacement caused by the circular motion around its origin ($\Delta l_{i-g}$, the blue bold short arc in Fig.\ref{fig:fig3}) is
\begin{equation}
\Delta {l_{i - g}} = ic \times 0.5t_0
\label{lab21}
\end{equation}
The phase angle of the graviton, $\theta$, will be
\begin{equation}
\theta = \frac{{\Delta {l_{i - 0}}}}{{{r_0}}} = \frac{{\Delta {l_{i - g}}}}{{{r_I}}}
\label{lab22}
\end{equation}
In the free particle's inertial coordinate frame, the particle is stationary at the coordinate origin, which means that $v=0$. Therefore,
\begin{equation}
{r_0}(v = 0) = {r_I}(v = 0) = \frac{iL}{m_0}
\label{lab23}
\end{equation}
where $m_0$ is the rest mass of the particle.
Substituting $L=1/2$ and equation (\ref{lab23}) into equation (\ref{lab22}) yields
\begin{equation}
\theta =m_0ct_0
\label{lab24}
\end{equation}
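Explicitly, combining equations (\ref{lab20})--(\ref{lab23}) with $L=1/2$ (a step spelled out for clarity):
\[
\theta = \frac{\Delta l_{i-0}}{r_0}
       = \frac{ic \times 0.5\,t_0}{iL/m_0}
       = \frac{0.5\,m_0 c\,t_0}{L}
       = m_0 c\,t_0 .
\]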
From the view of the observer in the 3-D imaginary space, the resultant motion of the graviton happens on the outer horizon of the imaginary black hole, a spherical surface with a radius of $r_+$ ($\approx 2ia$).
The total displacement of the graviton ($\Delta l_i$, the green bold short arc in Fig.\ref{fig:fig3}) is
\begin{equation}
\Delta {l_i} = \theta \times r_+= ict_0
\label{lab25}
\end{equation}
The uniform energy of every graviton of the imaginary black hole in BEC means that the phase difference between them remains the same.
All the original positions of their origins are components of the starting point of the particle in the time dimension.
Therefore, the phase angle of the particle is the $\theta$ given in equation (\ref{lab24}).
The clockwise and counterclockwise rotations of the gravitons on the horizon of their imaginary black hole correspond to the two signs of the spin, respectively.
\subsection{Lorentz transformation as a conformal transformation}
The phase angle of the wave function of a stationary particle ($v=0$) is given above. Substituting the Lorentz transformation
\begin{equation}
{t_0} = \gamma (t - \frac{{vx}}{{{c^2}}})
\label{lab26}
\end{equation}
where $\gamma$ is the Lorentz factor
\begin{equation}
\gamma = \frac{c}{{\sqrt {{c^2} - {v^2}} }}
\label{lab27}
\end{equation}
into equation (\ref{lab24}), we can get the phase angle of a free particle with speed $v$ as
\begin{equation}
\theta ' = mct - px
\label{lab28}
\end{equation}
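Written out, the substitution of equation (\ref{lab26}) into equation (\ref{lab24}) proceeds as follows (with $m = \gamma m_0$, $p = mv$, and $c=1$ absorbing the factor $1/c$ on the momentum term):
\[
\theta' = m_0 c\,t_0
        = m_0 c\,\gamma\!\left(t - \frac{vx}{c^2}\right)
        = mct - \frac{mv}{c}\,x
        = mct - px .
\]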
From equation (\ref{lab28}), we can get some interesting information about the motion of the particle as a black hole in the imaginary space.
First, when $x=0$,
\begin{equation}
\theta ' = mct = m_0ct_0 = \theta
\label{lab29}
\end{equation}
which means that the phase angle doesn't change when the imaginary black hole shrinks (as shown in Fig.\ref{fig:fig4}, which gives a visual display of time dilation in special relativity).
Therefore, Lorentz transformation is a conformal transformation for observers in the imaginary space.
Being an imaginary black hole as a BEC of gravitons, a particle will have an internal clock, as conjectured by de Broglie, because of the motions of its gravitons.
This internal clock is hidden in the imaginary space.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{time_dilation.eps}
\caption{\textbf{de Broglie’s internal clock.}
When a particle gets more energy, its imaginary black hole shrinks but the phase angle does not change,
which will result in time dilation if the speed of light is a constant.
}
\label{fig:fig4}
\end{figure}
Then, let us analyze the physical meaning of the term $-px$ in equation (\ref{lab28}).
As a matter wave, a particle can appear at different locations of the 3-D real space with certain probabilities simultaneously.
From Section 2, we know that all the gravitons of a black hole in BEC share an equivalent real coordinate.
If the points in the 3-D real space at which the particle can appear at the same time share an equivalent imaginary coordinate, the
wave nature will be a natural result.
From the view of any graviton of a particle, the uniform linear motion of the particle with speed $v$ in the 3-D real space appears as the motion of its imaginary black hole's origin with speed $iv$ in the 3-D imaginary space.
We assume that the imaginary black hole undergoes a uniform circular motion on an origin surface with a radius of
\begin{equation}
{r_{0-p}} = \frac{i}{p}
\label{lab30}
\end{equation}
where $p=mv$ is the momentum of the particle (as shown in Fig.\ref{fig:fig5}).
In this way, we can get the contribution of a particle's momentum to the phase angle.
The phase angle caused by the motion of the imaginary black hole on the origin surface with a speed of $iv$ is
\begin{equation}
{\theta _0} = \frac{{ivt}}{{{r_{0-p}}}}
\label{lab31}
\end{equation}
The final net phase angle is
\begin{equation}
\theta ' = \theta - {\theta _0} = mct - pvt = mct - px
\label{lab32}
\end{equation}
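As a check (spelled out for clarity), with $r_{0-p} = i/p$ from equation (\ref{lab30}) and the particle at $x = vt$, equation (\ref{lab31}) gives
\[
\theta_0 = \frac{ivt}{r_{0-p}} = \frac{ivt}{i/p} = pvt = px\big|_{x=vt},
\]
so the net phase angle agrees exactly with equation (\ref{lab28}).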
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{traveling_wave.eps}
\caption{\textbf{Contribution of momentum to phase angle.}
The imaginary black hole of a particle performs a uniform circular motion on an origin surface with a radius of $1/(mv)$, which makes different points in the 3-D real space share an equivalent imaginary coordinate.
In this way, a particle can appear at these points simultaneously.
}
\label{fig:fig5}
\end{figure}
\subsection{Negative energy solutions of Dirac equation}
Since its origin is on the origin surface with radius $r_{0-p}$, an imaginary black hole with a negative radius $-r_+$ still has physical meaning.
These possible states are found to correspond to the negative-energy solutions of the Dirac equation, the four solutions of which are shown in Fig.~\ref{fig:fig6}.
Comparing the motion of a graviton in a positive-energy black hole in BEC ($a$ of Fig.\ref{fig:fig6}) with that in a negative-energy black hole in BEC ($b$ of Fig.\ref{fig:fig6}), we find that there is a central symmetry between them with respect to the center of the imaginary black hole.
Similarly, there is a mirror symmetry between two spin states of the particle
($a$ vs $c$, $b$ vs $d$).
In a space where the speed of a particle is smaller than that of light ($v<c$), the following relationship is always satisfied
\begin{equation}
{r_{0 - p}} > \left| {{r_ + }} \right|
\label{lab33}
\end{equation}
Therefore, any relativistic wave equation that satisfies the Lorentz transformation will have negative energy solutions.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{four_Dirac.eps}
\caption{\textbf{Four solutions of Dirac equation.}
Blue arcs indicate the displacement of the imaginary black hole as Bose Einstein condensate of gravitons, while red arcs indicate the displacement of one of its gravitons (hollow circle indicates the starting positions and a solid circle indicates the current positions).
}
\label{fig:fig6}
\end{figure}
\section{Conclusion and Discussion}
\subsection{Conclusion}
Thanks to the recent quantum picture of black holes as Bose-Einstein condensates of gravitons \cite{dvali_black_2013-3}, we are able to re-examine the ``ancient'' complex metric \cite{newman_metric_1965} in this work.
A conjugate symmetry between a common black hole and an elementary particle in a 6-D complex space is found.
For any observer in the 3-D real (or imaginary) part of the complex space, he (or she) can’t observe the imaginary (or real) space directly because of the barrier of the horizon.
The three imaginary (or real) dimensions will fold into a time dimension, which makes the 6-D complex space appear as a 4-D spacetime.
An elementary particle with spin appears as an imaginary black hole with a mass of $ia$ in an AdS space ($a$ is the spin per unit mass).
In the quantum picture of black hole, this imaginary black hole consists of $N=a^2$ gravitons.
The motions of these component gravitons give the particle its wave-like nature.
Therefore, the quantum properties of the particles we observe in 4-D spacetime are a result of the gravitational effect in the 3-D imaginary space.
With a time dimension folded from the 3-D real space, the 4-D imaginary spacetime is found to be an AdS space for a real observer.
This agrees with the AdS/CFT correspondence proposed by Maldacena \cite{maldacena_large-n_1999}.
\subsection{Discussion}
This work provides a new perspective for understanding some problems in quantum mechanics and general relativity, such as the black hole information paradox.
The black hole information paradox shows the conflict between quantum mechanics and general relativity.
An important recent development in this area is AMPS firewall \cite{almheiri_black_2013}.
In order to resolve the AMPS firewall paradox, Maldacena and Susskind \cite{maldacena_cool_2013} provided the ER=EPR hypothesis that entangled objects may be connected through the interior via a wormhole, or Einstein-Rosen bridge.
In the picture provided in this work, we find that the creation of an entangled particle pair is indeed the creation of entangled black holes, as conjectured in ER=EPR.
In addition, according to Hawking's picture about black hole's Hawking radiation \cite{hawking_black_1974}, we know that when a particle with positive energy escapes from the horizon, its antiparticle with negative energy will fall to the singularity.
The singularity of the imaginary black hole is the origin of the real space.
Therefore, if these associated negative-energy particles of the Hawking radiation of the imaginary black hole cross its singularity rather than end at it, the clouds of virtual particles around a particle in the real space will have a gravitational origin.
\section*{Acknowledgements}
This work is supported by the National Science Foundation of China (No. 51736004 and No.51776079).
\bibliographystyle{unsrt}
\section{Introduction}
The concern over the potential link between the black hole and the particle has a long and continuous history because it may provide us useful information about the connection between general relativity and quantum mechanics.
In 1935, trying to in search of a geometric model for elementary particles, Einstein and Rosen \cite{einstein_particle_1935} provided a speculative structure now known as the Einstein–Rosen bridge.
In 1968, Carter \cite{carter_global_1968} found that the Kerr–Newman solution \cite{newman_metric_1965} has a gyromagnetic ratio g=2 like the Dirac electron.
Then, the Kerr–Newman electron has received constant attention \cite{debney_solutions_1969,israel_source_1970,d._ivanenko_gravitational_1975,barut_zitterbewegung_1981,lopez_extended_1984,lopez_internal_1992,burinskii_string-like_1993,israelit_classical_1995,finster_particlelike_1999,burinskii_super-kerr-newman_1999,burinskii_gravitational_2003,arcos_kerrnewman_2004,burinskii_dirac-kerr-newman_2008,burinskii_source_2016,burinskii_new_2017}
and obtained supports from string theory \cite{holzhey_black_1992,sen_rotating_1992,sen_extremal_1995,nishino_stationary_1995,horowitz_rotating_1996}.
What’s more, there also have been suggestions that black holes should be treated as elementary particles \cite{t_hooft_black_1990,hawking_gravitational_1971,susskind_speculations_1993,susskind_black_1994,russo_asymptotic_1995,duff_new_1994,duff_massive_1995,sen_black_1995,hull_unity_1995,townsend_eleven-dimensional_1995,witten_string_1995,strominger_massless_1995,greene_black_1995}.
Complex metric, provided by Newman and and his co-workers in their derivation of the Kerr-Newman metric \cite{newman_metric_1965}, has been found to be a useful mathematical tool in various problems \cite{newman_maxwells_1973,gibbons_cosmological_1977,brown_complex_1991,burinskii_kerr_1998,burinskii_kerr-schild_2000,newman_classical_2002,burinskii_complex_2003}. Recently, a quantum picture of black holes as Bose-Einstein condensates of gravitons
\cite{dvali_black_2013-3}.
In this picture, we found that complex Kerr-Newman metric has a deep physical meaning rather than just a mathematical model.
In a 6-D complex space, both common black holes and elementary particles are found to be special cases of the complex Kerr-Newman black holes, which can turn into each other through a phase transition.
By analysing the metric of a particle in the imaginary space, we obtain the geometric origin of the de Broglie waves.
\section{Phase transition of complex black hole}
The Kerr-Newman metric describes a general black hole with both charge and spin \cite{newman_metric_1965}.
The radius of its two horizons ($r_\pm$) are
\begin{equation}
{r_ \pm } = m \pm \sqrt {{m^2} - {a^2} - {Q^2}}
\label{lab1}
\end{equation}
where $m$ is its mass, $a$ is its angular momentum per unit mass, and $Q$ is its charge,$c = \hbar = G = {k_B} =1$ is used in this work ($c$ will appear where the speed of light needs to be stressed).
Equation (\ref{lab1}) seems to lose its physical meaning when $m^2< a^2+Q^2$.
However, if a horizon can have complex radius, the physical meaning of this equation can be further expanded.
Re-writing equation (\ref{lab1}), we can obtain
\begin{equation}
{r_ \pm } = m \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab2}
\end{equation}
The real radius of the complex horizon ($r_R$) is
\begin{equation}
r_R=m
\label{lab3}
\end{equation}
In the 3-D real space, an elementary particle will appear as 0-D point in low energy if it can be described by equation (\ref{lab2}) because its $r_R$ is much smaller than Planck length and too small to be measured, which agrees with standard model.
The imaginary radius of the complex horizon ($r_I$) is
\begin{equation}
{r_I} = \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab4}
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{phase_transition.eps}
\caption{\textbf{Phase transition of complex black hole.}
After the point of phase transition, $r_I=0$, the imaginary radius is realized and the particle change into a black hole.
The objects in a are corresponding to the objects in b: 0-D origin point vs 2-D origin surface, 1-D time dimension vs 3-D imaginary space.
The black holes is Bose-Einstein condensates of $N$ gravitons.
The blue circle in b is the sub-horizon of one graviton, the origin of which is a point at the origin surface.}
\label{fig:fig1}
\end{figure}
With the increase of its $m$, $r_I$ of a particle reduces continuously to 0i and then be realized, which means that the particle as a complex black hole changes into a common black hole.
The phase transition point of the complex black hole is an extreme black hole, $r_I=0$. At the same time, $r_R$ of a particle increases continuously.
$r_R$ not only characterizes the size of the particle, but also defines the boundaries of the 3-D real space for other observers.
Therefore, in the rest frame of the particle, the increasing $r_R$ can be understood as an expansion of the coordinate origin from a 0-D point to a 2-D spherical surface with radius of $r_R$ (shown in Fig.1).
In this way, the inner space of a common black hole bordered by its inner and outer horizons is in fact a realized imaginary space embedded in the 3-D real space(while the space within its inner horizon is imaginary space).
All the points in this realized imaginary space share the same real radius although their imaginary coordinates can be different.
If we consider the rotational symmetry, these points can be considered indistinguishable points in the real space. This agrees with quantum picture of black holes as Bose-Einstein condensates of gravitons\cite{dvali_black_2013-3}.
In the 4-D spacetime, the origin of a rest particle’s rest frame moves along the time dimension at the speed of light ($a$ of Fig.\ref{fig:fig1}). After the phase transition, what form does this motion of origin appear? A quantum black hole is composed of many gravitons. For each graviton, its coordinate origin is a point at the origin surface. The moving distance of the origin in the time dimension will be a large value after a long time, which seems an impossible motion in the limited realized imaginary space. A reasonable solution is that the motion of the origin of each graviton is an uniform circular motion on the origin surface at the speed of light. The clockwise and counterclockwise rotation correspond to $t>0$ and $t<0$, respectively. Similarly, the graviton without rest mass also move at the speed of light, which should also be a uniform circular motion on its sub-horizon with a radius of $r_I$ ($b$ of Fig.\ref{fig:fig1}).
According to Dvali and Gomez’s work \cite{dvali_black_2013-3}, the energy of every graviton, $m_g$, in the BEC Schwarzschild black hole with mass of $M$ is $1/M$. The de Broglie wavelength of a graviton
\begin{equation}
\lambda = \frac{{2\pi }}{{{m_g}}} = 2\pi M
\label{lab5}
\end{equation}
is found to be the circumference of the great circle of its sub-horizon ($r_I=M$ for this Schwarzschild black hole with mass of $M$). This means that every graviton of a BEC black hole is a standing wave on its own sub-horizon. The barycenter of the standing wave of a graviton is its origin on the origin surface.
As a product of the combination of quantum pictures of black holes and complex metrics, this idea is an important key to understand the geometric origin of the de Broglie waves.
\section{Particle as imaginary black hole}
What kind of geometry does a particle have in the hidden 3-D imaginary space? According to Penrose \cite{penrose_gravitational_1965}, the singularity is the origin of time. In the 6-D complex space, the counterpart of the origin of time in 4-D spacetime is the origin of the 3-D imaginary space. The singularity of a common Kerr–Newman black hole appear as a ring on its equatorial plane with a radius of a. Any direction of rotation is mathematically equivalent because of the rotational symmetry of the 3-D real space. Therefore, the ring singularity can be regarded as a special solution of a sphere singularity after the direction of the rotary axis is locked.
Inspired by this and the "key" we found in the previous section, we assumed that a particle can have many energy components as a quantum black hole and for each energy component, its origin is on a sphere surface with radius of
\begin{equation}
{r_0} = ia
\label{lab6}
\end{equation}
and the horizon in the 3-D imaginary space has a radius of $r_I$ (as shown in $I of Fig.2$), which means that a particle appears as an imaginary black hole with a radius of
\begin{equation}
{r_ \pm } = ia \pm i\sqrt {{a^2} + {Q^2} - {m^2}}
\label{lab7}
\end{equation}
Equation (\ref{lab7}) can be rearranged as
\begin{equation}
{r_ \pm } = ia \pm \sqrt {{{(ia)}^2} - {{(im)}^2} - {Q^2}}
\label{lab8}
\end{equation}
Comparing equations (\ref{lab1}) and (\ref{lab8}), we can obtain the equivalent mass, angular momentum per unit mass of the imaginary black hole as
\begin{equation}
\left\{ {\begin{array}{*{20}{c}}
{{M_i} = ia}\\
{{a_i} = im}
\end{array}} \right.
\label{lab9}
\end{equation}
The imaginary black hole has a Hawking temperature of
\begin{equation}
{T_i} = \frac{1}{{2\pi }}\frac{{{r_ + } - {r_ - }}}{{2(r_ + ^2 + a_i^2)}}
\label{lab10}
\end{equation}
which has an imaginary value.
A black hole can harvest energy from its environment and lose energy through Hawking radiation. Therefore, energy balance is a necessary condition for a stable black hole. The Hawking temperature of a particle as an imaginary black hole is therefore a good mark of the energy level of its local imaginary space. According to the work of Deser and Levin \cite{deser_accelerated_1997}, an inertial observer in a de Sitter ($dS$) or anti-de Sitter ($AdS$) spaces with cosmological constant $\Lambda$ will measure a temperature of
\begin{equation}
{T_\Lambda } = \frac{1}{{2\pi }}\sqrt {\frac{\Lambda }{3}}
\label{lab11}
\end{equation}
When $\Lambda<0$, this temperature will have an imaginary value. Therefore, from the view of an observer in the imaginary space, our universe is an $AdS$ space. According to the symmetry of complex space, the time dimension of the $AdS$ space is folded from the 3-D real space.
\begin{figure}
\centering
\includegraphics[scale=0.8]{evolution.eps}
\caption{\textbf{Evolution of a complex black hole} $I$: the particle appears as a point-like particle in the real space while as an imaginary black hole in the imaginary space; $II$: the point expands while the imaginary black hole shrinks; $III$: extreme black hole and extreme imaginary black hole; $IV$: real black hole and imaginary point-like particle.}
\label{fig:fig2}
\end{figure}
For all elementary particles except Higgs boson in the standard model,
\begin{equation}
{r_I} = i\sqrt {{a^2} + {Q^2} - {m^2}} \approx ia
\label{lab12}
\end{equation}
is a sufficiently accurate approximation. Therefore, the imaginary black hole of the particle is an approximate Schwarzschild black hole. In this way, in the quantum picture of black hole, the energy components of the particle is in fact gravitons in the Bose-Einstein condensate. According to \cite{dvali_black_2013-3}, the imaginary component of every graviton’s mass is
\begin{equation}
{m_{g - i}} = \frac{i}{{\left| {{M_i}} \right|}} = \frac{i}{a}
\label{lab13}
\end{equation}
while the number of the gravitons is
\begin{equation}
N = {\left| {{M_i}} \right|^2} = {a^2}
\label{lab14}
\end{equation}
According to equation (\ref{lab5}), the accurate value of $m_{g-i}$ is $i/r_I$. $m_g$ also have a real component of $m/N$. Therefore, the mass carried by each graviton is
\begin{equation}
{m_g} = \frac{m}{{{a^2}}} + \frac{i}{a}
\label{lab15}
\end{equation}
From the view of graviton of complex black hole in Bose-Einstein condensate, the presence of a complex black hole makes the origin of the coordinates of the complex space expands from the point of $0+0i$ to a complex spherical surface with a radius of
\begin{equation}
{R_0} = m + ai
\label{lab16}
\end{equation}
and only $r_I$ is the intrinsic radius of its sub-horizon.
\section{Geometric origin of de Broglie wave}
From the view of an observer who is stationary at the origin of the coordinate in the rest frame of the particle, the motions of the gravitons of its imaginary black hole are the synthesis of the motions of their origins and their circular motions around their origins. When the 3-D imaginary space folds to the 1-D time dimension of the 4-D spacetime, these circular motions make particles obtain their wave-like nature. In the following, we will derive the plane wave of a free particle.
First, we analyze the case where the two circular motions of any graviton lie in one plane (a of Fig.3). For any graviton of a particle as an imaginary black hole, during a time interval of $0.5t$, the displacement of its origin ($\Delta l_{i-0}$, the red bold short arc in a of Fig.3) and the displacement caused by the circular motion around its origin ($\Delta l_{i-g}$, the blue bold short arc in a of Fig.3) are
\begin{equation}
\Delta {l_{i - 0}} = \Delta {l_{i - g}} = ic \times 0.5t
\label{lab17}
\end{equation}
From the view of the observer who is stationary at the origin of the coordinates of the 3-D imaginary space, the resultant motion of the graviton happens on the outer horizon of the imaginary black hole, a spherical surface with a radius of $r_+$,
\begin{equation}
{r_ + } \approx 2ia
\label{lab18}
\end{equation}
The total displacement of the graviton ($\Delta l_i$, the green bold short arc in a of Fig.3) is the sum of $\Delta l_{i-0}$ and $\Delta l_{i-g}$
\begin{equation}
\Delta {l_i} = \Delta {l_{i - 0}} + \Delta {l_{i - g}} = ic \times 0.5t \times 2 = ict
\label{lab19}
\end{equation}
If $L=1/2$ (fermions in the standard model), the phase angle of the graviton, $\theta$, will be
\begin{subequations}
\begin{equation}
\theta (L = 1/2) = \frac{{\Delta {l_i}}}{{{r_ + }}} = \frac{{ict}}{{2ia}} = \frac{{ct}}{{2a}} = mct
\label{lab20}
\end{equation}
Then, we analyze the case where the two circular motions of any graviton are in two mutually perpendicular planes ($b$ of Fig.3). In this case, only the circular motion of every graviton's origin contributes to the phase angle. If $L=1$ (gauge bosons in the standard model), the phase angle of the graviton, $\theta$, will be
\begin{equation}
\theta (L = 1) = \frac{{\Delta {l_{i - 0}}}}{{{r_0}}} = mct
\label{lab21}
\end{equation}
\end{subequations}
The uniform energy of every graviton of the imaginary black hole in the Bose-Einstein condensate means that the phase differences between them remain the same. All the original positions of their origins are components of the starting point in time of the particle. Therefore, the phase angle of the particle is the $\theta$ described in equations (\ref{lab20}) and (\ref{lab21}).
The phase angle of the wave function of a stationary particle ($v=0$) is given above. What about the phase angle of a particle with a non-zero speed? From the view of any graviton of a particle, the uniform linear motion of the particle with a speed of $v$ appears as the motion of its origin with a speed of $iv$. Inspired by the geometric characteristics of Minkowski space and Einstein's idea of free motion along a geodesic in a gravitational field, we assume this motion is also a uniform circular motion, like the motion in the time dimension with speed $c$ (c of Fig.3).
We take the radius of curvature of this motion to be
\begin{equation}
{r_0} = \frac{i}{p}
\label{lab22}
\end{equation}
where $p=mv$ is the momentum of the particle. The phase angle caused by the speed of $iv$ is
\begin{equation}
{\theta _0} = \frac{{ivt}}{{{r_0}}}
\label{lab23}
\end{equation}
The final net phase angle is
\begin{equation}
\theta ' = \theta - {\theta _0} = mct - pvt
\label{lab24}
\end{equation}
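To connect this with the standard formulation (a reading we add here, assuming natural units $\hbar = c = 1$ and the non-relativistic identification $E \approx m$, $p = mv$), the net phase angle is exactly the de Broglie plane-wave phase evaluated at the particle position $x = vt$:
\[
\theta ' = mct - pvt = \left( Et - px \right)\big|_{x = vt},
\]
which reproduces the phase of the usual free-particle plane wave up to sign conventions.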
The real part of the complex wave function is the component in a certain direction of the 3-D imaginary space which acts as the time dimension of the 4-D spacetime, while the imaginary part is the component perpendicular to this direction. When the direction of rotation of the gravitons reverses, we obtain the negative-energy solutions of the wave functions.
Why does the uniform linear motion of a free particle with a speed of $v+ic$ in the 4-D spacetime appear as the synthesis of uniform circular motions in the imaginary space? The imaginary black hole curves the imaginary space and makes the surrounding observers feel an equivalent central potential. The path of an inertial particle in the 4-D spacetime should be a geodesic for an observer in the 3-D imaginary space. In other words, the quantum properties of the particles we observe in the 4-D spacetime are a result of the gravitational effect in a 3-D imaginary space. With a time dimension folded from the 3-D real space, the 4-D imaginary spacetime is found to be an AdS space for an inner observer. This agrees with the AdS/CFT correspondence proposed by Maldacena \cite{maldacena_large-n_1999}.
\section{Conclusion and Discussion}
Thanks to the recent quantum picture of black holes as Bose-Einstein condensates of gravitons, we are able to take a fresh look at the long-known complex metric in this work. The conjugate symmetry between a common black hole and an elementary particle in a 6-D complex space is found. Any observer in the 3-D real (or imaginary) part of the complex space cannot observe the imaginary (or real) space directly because of the barrier of the horizon. The three imaginary (or real) dimensions will fold into a time dimension, which makes the 6-D complex space appear as a 4-D spacetime.
An elementary particle with spin appears as an imaginary black hole with a mass of $ia$ in an AdS space. In the quantum picture of black holes, this imaginary black hole consists of $N=a^2$ gravitons. The motions of these component gravitons give the particle they compose its wave-like nature. We found that there are two different motion patterns for these component gravitons, which correspond to the fermions ($L=1/2$) and gauge bosons ($L=1$) in the standard model, respectively. Therefore, the de Broglie wave of a particle has a geometric origin and is a result of its self-gravitational interaction in the imaginary space.
This work provides a new perspective for understanding some problems in quantum mechanics and general relativity, such as the generations of leptons and the black hole information paradox.
As a stable elementary particle, the electron's imaginary black hole should be a good marker of the energy level of the imaginary space of our universe. If a lepton has a greater rest mass than the electron, its smaller imaginary black hole will have a higher temperature, which means that the original thermal equilibrium between it and the imaginary space is destroyed. Therefore, the decays of muons and tauons may be caused by the evaporation of their imaginary black holes in the imaginary space.
The black hole information paradox shows the conflict between quantum mechanics and general relativity. An important recent development in this area is the AMPS firewall \cite{almheiri_black_2013}. In order to resolve the AMPS firewall paradox, Maldacena and Susskind \cite{maldacena_cool_2013} proposed the ER=EPR hypothesis that entangled objects may be connected through the interior via a wormhole, or Einstein-Rosen bridge. In the picture provided in this work, we find that the creation of an entangled particle pair is indeed the creation of an entangled black hole pair, as conjectured in ER=EPR.
\section*{Acknowledgements}
This work is supported by the National Science Foundation of China (No. 51736004 and No.51776079).
\bibliographystyle{unsrt}
\section{Acknowledgments}
This work is partially funded by the German Federal Ministry of Education and Research under project ScaDS Dresden/Leipzig (BMBF 01IS14014B).
\begin{small}
\bibliographystyle{abbrv}
\section{Introduction}
Mining frequent structural patterns from a collection of graphs, usually referred to as \textit{frequent subgraph mining} (FSM), has found much research interest in the last two decades, for example, to identify significant patterns from chemical or biological structures and protein interaction networks \cite{jiang2013survey}. Besides these typical application domains, graph collections are generally a natural representation of partitioned network data such as knowledge graphs \cite{cyganiak2008n}, business process executions \cite{petermann2014biiig} or communities in a social network \cite{junghanns2016epgm}. We identified two requirements for FSM on such data that are not satisfied by existing approaches: First, such data typically describes directed multigraphs, i.e., the direction of an edge has a semantic meaning and there may exist multiple edges between the same pair of vertices.
Second, single-machine solutions will not be sufficient for big data scenarios where input data volume or the size of intermediate results can exceed main memory, or where achievable runtimes are not satisfactory.
An established approach to speed up or even enable complex computations on very large data volumes is data-centric processing on clusters without shared memory.
The rise of this approach was strongly connected with the MapReduce \cite{dean2008mapreduce} programming paradigm, which has also been applied to the FSM problem \cite{hill2012iterative, lu2013efficiently, aridhi2014novel, lin2014large, bhuiyan2015iterative}. However, none of the approaches provides support for directed multigraphs. Further on, MapReduce is not well suited for complex iterative problems like FSM as it leads to a massive overhead of disk access.
In recent years, a new generation of advanced cluster computing systems like Apache Spark \cite{zaharia2012resilient} and Apache Flink \cite{carbone2015apache}, in the following denoted by \textit{distributed in-memory dataflow systems}, has appeared. In contrast to MapReduce, these systems provide a larger set of operators and support holding data in main memory between operators as well as during iterative calculations.
In this work, we propose DIMSpan, an advanced approach to distributed FSM based on this kind of system. Our contributions can be summarized as follows:
\vspace{-1mm}
\begin{itemize}[leftmargin=4mm]
\itemsep0.2em
\item We propose DIMSpan, the first approach to parallel FSM based on distributed in-memory dataflow systems (Section \ref{sec:algorithm}). It adapts all pruning features of the popular gSpan \cite{yan2002gspan} algorithm to the dataflow programming model. Further on, it supports directed multigraphs and its data structures are optimized for pruning and compression.
\item We provide a comparison to existing MapReduce based approaches (Section \ref{sec:mr}) and show that DIMSpan not only requires less disk access but also shuffles less data over the network and can reduce the total number of expensive isomorphism resolutions to a minimum.
\item We present results of experimental evaluations (Section \ref{sec:eval}) based on real and synthetic datasets to show the scalability of our approach as well as the runtime impact of single pruning and optimization techniques.
\item Our implementation is practicable and works for arbitrary string-labeled graphs. We provide its source code to the community as part of the \textsc{Gradoop} framework \cite{petermann2016graph} under an Open Source licence.
\end{itemize}
\vspace{-1mm}
In addition, we provide background knowledge and discuss related work in Section \ref{sec:background}. Finally, we conclude and give a preview of future work in Section \ref{sec:conclusion}.
\section{Background \& Related Work}
\label{sec:background}
In this section, we introduce the distributed dataflow programming model, define the frequent subgraph mining problem and discuss related work.
\input{211_unary}
\input{221_lattice}
\input{21_dataflow}
\input{22_fsm}
\input{23_related}
\subsection{Distributed Dataflow Model}
Distributed dataflow systems like MapReduce \cite{dean2008mapreduce}, Apache Spark \cite{zaharia2012resilient} or Apache Flink \cite{carbone2015apache} are designed to implement data-centric algorithms on shared nothing clusters without handling the technical aspects of parallelization. The fundamental programming abstractions are datasets and transformations among them. A \textit{dataset} is a set of data objects partitioned over a cluster of computers. A \textit{transformation} is an operation that is executed on the elements of one or two input datasets. The output of a transformation is a new dataset. Transformations can be executed concurrently on $W = \{w_0,w_1,..,w_n\}$ available \textit{worker threads}, where every thread executes the transformation on an associated dataset partition.
There is no shared memory among threads.
Depending on the number of input datasets we distinguish \textit{unary} and \textit{binary} transformations. Table \ref{tab:unary} shows example unary transformations. We further divide them into those transformations processing \textit{single elements} and those processing \textit{groups of elements}. All of the shown functions require the user to provide a \textit{transformation function} $\tau$ that needs to be executed for each element or group. A simple transformation is \textit{filter}, where $\tau$ is a predicate function and only those elements for which $\tau$ evaluates to true will be added to the output. Another simple transformation is \textit{map}, where $\tau$ describes how exactly one output element is derived from an input element. \textit{Flatmap} is similar to map but allows an arbitrary number of output elements. MapReduce provides only one single-element transformation (denoted by \textit{MRMap} in Table \ref{tab:unary}) which is a variant of flatmap that requires input and output elements to be key-value pairs.
The most important element group transformation is \textit{reduce}. Here, input as well as output are key-value pairs and for each execution all elements sharing the same key are aggregated and $\tau$ describes the generation of a single output pair with the same key. Since input pairs with the same key may be located in different partitions they need to be \textit{shuffled} among threads, which typically causes network traffic among physical machines. If $\tau$ is associative (e.g., summation), an additional combine transformation can be used to reduce this traffic. \textit{Combine} is equivalent to reduce but skips shuffling, i.e., in the worst case one output pair is generated for each key and thread. Afterwards, these partial aggregation results can be passed to a reduce transformation. A minimal sketch of this pattern is shown below.
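For illustration, the following minimal Flink DataSet sketch (our own simplified example, not taken from any of the cited systems; \texttt{patterns} is assumed to be a \texttt{DataSet<String>} of reported labels) expresses frequency counting as a map followed by a grouped, combinable aggregation:
\begin{verbatim}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;

DataSet<Tuple2<String, Integer>> frequencies = patterns
  .map(new MapFunction<String, Tuple2<String, Integer>>() {
    public Tuple2<String, Integer> map(String pattern) {
      return new Tuple2<>(pattern, 1);  // one key-value pair per report
    }
  })
  .groupBy(0)  // shuffle pairs with equal key to the same worker
  .sum(1);     // combinable aggregation: partial sums before shuffling
\end{verbatim}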
As map and filter can also be expressed using MRMap, MapReduce and the new generation of \textit{distributed in-memory dataflow systems} (DIMS) like Spark and Flink have the same expressive power in terms of unary transformations. However, in the case of successive or iterative MRMap-reduce phases intermediate results need to be read from disk at the beginning and written to disk at the end of each phase. Thus, MapReduce is not well suited to solve iterative problems and problem-specific distributed computing models arose, for example, to process very large graphs \cite{mccune2015thinking}. In contrast, MapReduce and DIMS are general purpose platforms and not dedicated to a specific problem. However, DIMS support more complex programs including iterations, binary transformations (e.g., set operators like \textit{union} and \textit{join}) and are able to hold datasets in main memory during the whole program execution.
\subsection{Frequent Subgraph Mining}
\label{sec:problem}
Frequent subgraph mining (FSM) is a variant of frequent pattern mining \cite{aggarwal2014frequent} where patterns are graphs. There are two variants of the FSM problem. \textit{Single graph FSM} identifies patterns occurring at least a given number of times within a single graph, while \textit{graph transaction FSM} searches for patterns occurring in a minimum number of graphs in a collection. Our proposed approach belongs to the second setting. Since there exist many variations of this problem we first define our problem precisely before discussing related work and introducing our algorithm.
\pagebreak
\begin{definition}
\label{def:graph}\textsc{(Graph)}.
Given two global label sets $\mathcal{L}_v, \mathcal{L}_e$, then a \textit{directed labeled multigraph}, in the following simply referred to as \textit{graph}, is defined to be a hextuple $G = \langle V,E, s, t, \lambda_v, \lambda_e \rangle$, where $V = \{v\}$ is the set of vertices (vertex identifiers), $E = \{e\}$ is the set of edges (edge identifiers), the functions $s : E \rightarrow V\ /\ t : E \rightarrow V$ map a \textit{source} and a \textit{target} vertex to every edge and $\lambda_v : V \rightarrow \mathcal{L}_v\ /\ \lambda_e : E \rightarrow \mathcal{L}_e$ associate labels to vertices and edges. An edge $e \in E$ is \textit{directed} from $s(e)$ to $t(e)$. A multigraph supports loops and parallel edges.
\end{definition}
\begin{definition}
\label{def:subgraph}\textsc{(Subgraph).}
Let $S, G$ be graphs then $S$ will be considered to be a \textit{subgraph} of $G$, in the following denoted by $S \sqsubseteq G$, if $S$ has subsets of vertices $S.V \subseteq G.V$ and edges $S.E \subseteq G.E$ and $\forall e \in S.E : s(e), t(e) \in S.V$ is true.
\end{definition}
\hspace{-4mm}
On the bottom of Figure \ref{fig:lattice}, a collection of directed multigraphs $\mathcal{G} = \{G_1, G_2,G_3\}$ and an example subgraph $S_0 \sqsubseteq G_1$ are illustrated. Identifiers and labels of vertices and edges are encoded in the format \texttt{id:label}, e.g., \texttt{1:A}.
\begin{definition}
\label{def:isomorphism}\textsc{(Isomorphism).}
Two graphs $G,H$ will be considered to be isomorphic ($G \simeq H$) if two bijective mappings exist for vertices $\iota_v : G.V \leftrightarrow H.V$ and edges $\iota_e : G.E \leftrightarrow H.E$ with matching labels, sources and targets, i.e., $\forall v \in G.V : G.\lambda_v(v) = H.\lambda_v(\iota_v(v))$ and $\forall e \in G.E : G.\lambda_e(e) = H.\lambda_e(\iota_e(e)) \wedge G.s(e) = H.s(\iota_e(e)) \wedge G.t(e) = H.t(\iota_e(e))$.
\end{definition}
\begin{definition}
\label{def:lattice}\textsc{(Pattern Lattice).}
A \textit{pattern} is a connected graph isomorphic to a subgraph $P \simeq S$. Let $\mathcal{P} = \{P^{-1}, P_0,.., P_n\}$ be the set of all patterns isomorphic to any subgraph in a graph collection, then patterns form a \textit{lattice} based on parent-child relationships. $P_p$ will be a parent of $P_c$ if $P_p \sqsubset P_c \wedge |P_p.E| = |P_c.E| - 1$.
Based on edge count $k$ there are disjoint \textit{levels} $\mathcal{P}^{-1},.. ,\mathcal{P}^k \subseteq \mathcal{P}$. Root level $\mathcal{P}^{-1} = \{P^{-1}\}$ contains only the empty pattern $P^{-1}$ which is the parent of all patterns with $k=0$. For all other patterns $\forall P^k \in \mathcal{P}, k > 0 \ \exists\ P^{k-1}\in \mathcal{P} : P^{k-1} \sqsubset P^{k}$ is true.
\end{definition}
\hspace{-4mm}
Figure \ref{fig:lattice} shows the lattice of patterns $\mathcal{P} = \{P_{00},..,P_{20}\}$ occurring in the example graph collection $\mathcal{G}$.
\begin{definition}
\label{def:embedding}\textsc{(Embedding).}
Let $G$ be a graph and $P$ be a pattern, then an \textit{embedding} is defined to be a pair $m(G,P) = \langle \iota_v, \iota_e \rangle$ of isomorphism mappings describing a subgraph $S \sqsubseteq G$ isomorphic to $P$. As a graph may contain $n$ subgraphs isomorphic to the same pattern (e.g., subgraph automorphisms), we use $\mu : \mathcal{P} \rightarrow M^n$ to denote an \textit{embedding map}, which associates $n$ elements of an embedding set $M$ with every pattern $P \in \mathcal{P}$. If $\mu$ maps to an empty tuple, the graph will not contain a pattern.
\end{definition}
\hspace{-4mm}
Figure \ref{fig:lattice} shows three differently colored edge mappings of example embeddings $m_0(G_1, P_{20}), m_1(G_2, P_{20})$ and $m_2(G_2, P_{20})$.
\begin{definition}
\label{def:support}\textsc{(Frequency/Support).}
Let $\mathcal{G} = \{G_0,..,G_n\}$ be a graph collection and $P$ be a pattern, then the \textit{frequency} $\phi : \mathcal{P} \rightarrow \mathbb{N}$ of a pattern is the number of graphs containing at least one subgraph isomorphic to the pattern. The term \textit{support} describes the frequency of a pattern relative to
the number of graphs $\sigma(P) = \phi(P) / |\mathcal{G}|$.
\end{definition}
\begin{definition}
\label{def:fsm}\textsc{(Frequent Subgraph Mining).}
Let $\mathcal{G}$ be a graph collection, $\mathcal{P}$ the set of all contained patterns and $s_{min}$ be the minimum support with $0 \leq s_{min} \leq 1$, then the problem of \textit{frequent subgraph mining} is to identify the complete set of patterns $\mathcal{F} \subseteq \mathcal{P}$ where $\forall P \in \mathcal{P} : P \in \mathcal{F}\Leftrightarrow \sigma(P) \geq s_{min}$ is true.
\end{definition}
\hspace{-4mm}
Frequent subgraph mining for the example graph collection $\mathcal{G} = \{G_1, G_2,G_3\}$ with $s_{min} = 50\% / f_{min} = 2$
results in the five non-empty patterns with $\phi(P) \geq 2$ in the lattice of Figure \ref{fig:lattice}.
\subsection{Related Work}
A recent survey \cite{jiang2013survey} by Jiang et al. provides an extensive overview about frequent subgraph mining (FSM). Due to limited space and the vast amount of work related to this problem we only discuss approaches matching Definition \ref{def:fsm}. Thus, we omit the single-graph setting \cite{bringmann2008frequent, elseidy2014grami, teixeira2015arabesque} as well as graph-transaction approaches with incomplete results like maximal \cite{thomas2010margin}, closed \cite{yan2003closegraph} or significant \cite{ranu2009graphsig} frequent subgraph mining.
The first exact FSM algorithms, e.g., AGM \cite{inokuchi2000apriori} and FSG \cite{kuramochi2001frequent}, followed an \textit{a priori} approach. These algorithms implement a level-wise breadth-first search (BFS, illustrated by Figure \ref{fig:bfs}) in the pattern lattice, i.e., candidate patterns $\mathcal{P}^k$ are generated and the support is calculated by subgraph isomorphism testing. In a subsequent pruning step frequent patterns $\mathcal{F}^k \subseteq \mathcal{P}^k$ are filtered and joined to form children $\mathcal{P}^{k+1}$ (next round's candidates). The search is stopped as soon as $\mathcal{F}^k = \emptyset$. The disadvantage of these algorithms is that they face the subgraph isomorphism problem during both candidate generation and support counting. Further on, it is possible that many generated candidates might not even appear.
\input{222_searches}
Thus, the next generation of \textit{pattern-growth} based FSM algorithms appeared and outperformed the a priori ones. Popular representatives of this category are MOFA \cite{borgelt2002mining}, gSpan \cite{yan2002gspan}, FFSM \cite{huan2003efficient} and Gaston \cite{nijssen2005gaston}. In comparison to the a priori ones, these algorithms traverse the lattice in a depth-first search (DFS, illustrated by Figure \ref{fig:dfs}) and skip certain links in the lattice (dotted lines in Figure \ref{fig:lattice}) to avoid visiting child patterns multiple times. A key concept of these algorithms are canonical labels generated during DFS. However, if labels are generated without recalculation (e.g., gSpan) they won't totally prevent false positives (non-canonical labels) and thus an additional isomorphism-based verification will be required. Comparative work \cite{worlein2005quantitative, nijssen2006frequent} has shown that runtime can be decreased by fast label generation and holding embeddings in main memory.
While the most popular exact FSM algorithms are from the first half of the 2000s, more recent work focuses on problem variations \cite{jiang2013survey} as well as parallelization, for example, using GPUs \cite{kessl2014parallel}, FPGAs \cite{stratikopoulos2014hpc} and multithreading \cite{vo2015parallel}. All existing approaches of graph transaction FSM on shared nothing clusters \cite{hill2012iterative, lu2013efficiently, aridhi2014novel, lin2014large, bhuiyan2015iterative} are based on MapReduce \cite{dean2008mapreduce} and will be further discussed in comparison to DIMSpan in Section \ref{sec:mr}. Graph transaction FSM cannot benefit from vertex-centric graph processing approaches \cite{mccune2015thinking} as partitioning a single graph poses different problems than partitioning a graph collection.
\section{Algorithm}
\label{sec:algorithm}
In the following, we provide details about the DIMSpan algorithm including its concept (\ref{sec:concept}), the respective dataflow program (\ref{sec:flow}) as well as pruning and optimization techniques (\ref{sec:branch} - \ref{sec:compression}).
\input{31_concept}
\input{32_flow}
\input{321_flow}
\input{33_gspan}
\input{341_pseudocode}
\input{34_growth}
\input{35_validation}
\input{36_dictionary}
\input{371_element}
\input{37_compression}
\subsection{Concept}
\label{sec:concept}
In an FSM algorithm that follows the distributed dataflow programming model the input graph collection $\mathcal{G}$ is represented by a dataset of graphs equally partitioned into disjoint subsets $\mathcal{G}_1, \mathcal{G}_2, ..,\mathcal{G}_n$ corresponding to the available \textit{worker threads} $W = \{w_1,w_2,..,w_n\}$. Thus, transformations can be executed on $\left| W \right|$ graphs in parallel but every exchange of global knowledge (e.g., local pattern frequencies) requires synchronization barriers in the dataflow program which cause network traffic. Our major optimization criteria were minimizing delays dependent on exchanged data volume and, as FSM contains the NP-complete subgraph isomorphism problem, minimizing the number of isomorphism resolutions.
To achieve the latter, we adapted approaches of two efficient pattern-growth algorithms, gSpan \cite{yan2002gspan} and Gaston \cite{nijssen2005gaston}. These algorithms basically are iterations of pattern growth, counting and filter operations but differ in detail. gSpan allows fast append-only generation of canonical labels representing patterns but records only pattern-graph occurrence lists. This requires subgraph isomorphism testing to recover embeddings. In contrast, Gaston has a more complex label generation tailored to the characteristics of molecular databases but stores complete pattern-embedding maps. For the design of DIMSpan, we combine the strong parts of both algorithms. In particular, we use a derivative of gSpan canonical labels (Section \ref{sec:gspan}) but also store embedding maps to avoid subgraph isomorphism testing at the recovery of previous iterations' embeddings. To minimize the additional memory usage, we use optimized data structures and compression (Section \ref{sec:compression}).
With regard to the absence of shared memory in distributed dataflows, the DFS search of pattern growth algorithms is not optimal as it requires $|\mathcal{P}|$ iterations (one for each visited pattern) while a BFS search only takes $k^{max}$ iterations (maximum edge count). Thus, we decided to perform a \textit{level-wise depth-first search} (LDFS, illustrated by Figure \ref{fig:ldfs}), which can be abstracted as a set of massive parallel constrained DFSs with level-wise forking. This approach allows us to benefit from the efficiency of pattern growth algorithms and to apply level-wise frequency pruning at the same time. For example, in Figure \ref{fig:lattice} we apply the frequency pruning of $P_{10}, P_{11}, P_{12}$ in parallel within the same iteration but use search constraints (Section \ref{sec:branch}) to grow only from $P_{10}$ to $P_{20}$.
By using a distributed in-memory dataflow system instead of MapReduce, DIMSpan further benefits from the capability to hold graphs including supported patterns and their embeddings in main memory between iterations and to exchange global knowledge by sending complete copies of each iteration's $k$-edge frequent patterns to every worker without disk access. In Apache Spark and Apache Flink this technique is called broadcasting\footnote{http://spark.apache.org/docs/latest/programming-guide.html\#broadcast-variables}\footnote{https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html \#broadcast-variables}.
\subsection{Distributed Dataflow}
\label{sec:flow}
Algorithm \ref{alg:flow} shows the distributed dataflow of DIMSpan. Inputs are a dataset of graphs $\mathcal{G}$ and the minimum frequency threshold $f_{min}$. The output is the dataset of frequent patterns $\mathcal{F}$. For each graph, supported 1-edge patterns $\mathcal{P}^1$ and the embedding map $\mu^1$ are already computed in a preprocessing step (see Section \ref{sec:dictionary}). Our algorithm is iterative and per iteration one level of the pattern lattice is processed until no more frequent patterns exist (line 12). In the following, we describe transformations and intermediate datasets of the iteration body (lines 4 to 11) in more detail:
\textbf{Line 4 - Report:} In the beginning of each iteration every graph reports all $k$-edge ($k\geq1$) supported patterns, i.e., the keys of the last iteration's embedding map $\mu^k$, through a \textit{flatmap} transformation.
\textbf{Line 5 - Combine:} The partition frequency of patterns $\phi_w : \mathcal{P} \times W \rightarrow \mathbb{N}$ is counted in a \textit{combine} transformation. As this is the last operation before data is shuffled among workers, the execution cardinality of pattern operations (e.g., verification, see Section \ref{sec:validation}) here is already reduced from $\left| \mathcal{P} \times \mathcal{G} \right|$ to $\left| \mathcal{P} \times W \right|$.
\textbf{Line 6 - Reduce:} The global frequency of patterns $\phi : \mathcal{P} \rightarrow \mathbb{N}$ is calculated in a \textit{reduce} transformation. Therefore, partition frequencies are shuffled among workers and summed up.
\textbf{Line 7 - Frequency pruning:} After global frequencies of all patterns are known, a \textit{filter} transformation is used to determine the frequent ones. Executing pattern operations here further reduces their cardinality from $\left| \mathcal{P} \times W \right|$ to $\left| \mathcal{F} \right| \leq \left| \mathcal{P} \right|$.
\textbf{Line 8 - Broadcasting:} After $\mathcal{F}^k$ is known, a complete copy is sent to the main memory of all workers using \textit{broadcasting}.
\textbf{Line 9 - Pattern growth:} Here, the previously broadcasted set $\mathcal{F}^k$ is used to filter each graph's embeddings $\mu^k$ to those of frequent patterns. For each of the remaining embeddings, the constrained pattern growth (Section \ref{sec:branch}) is performed to generate $\mu^{k+1}$.
\textbf{Line 10 - Obsolescence filter:} After pattern growth, we apply another \textit{filter} operation and only graphs with non-empty $\mu^{k+1}$ will pass. Thus, $\mathcal{G}$ can potentially shrink in each iteration if only a subset of graphs accumulates frequent patterns.
\textbf{Line 11 - Result storage:} Finally, we use a binary \textit{union} transformation to add the iteration's results to the final result set.
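To sketch how this dataflow maps to a concrete API, the following simplified Flink excerpt is a hedged illustration (type and function names such as \texttt{GraphWithEmbeddings} or \texttt{Report} are our own placeholders, not the actual DIMSpan sources; the dynamic termination criterion and the per-iteration result collection of lines 11-12 are omitted):
\begin{verbatim}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.operators.IterativeDataSet;

// one bulk iteration per lattice level, up to the maximum edge count
IterativeDataSet<GraphWithEmbeddings> it = graphs.iterate(maxEdgeCount);

DataSet<WithCount<Pattern>> frequent = it
  .flatMap(new Report())                   // line 4: report patterns
  .groupBy(new PatternKey())               // lines 5-6: combine + reduce
  .reduce(new SumCounts())
  .filter(new MinFrequency(fMin));         // line 7: frequency pruning

DataSet<GraphWithEmbeddings> grown = it
  .map(new PatternGrowth())                // line 9: constrained growth
  .withBroadcastSet(frequent, "frequent")  // line 8: broadcast F^k
  .filter(new HasEmbeddings());            // line 10: obsolescence filter

DataSet<GraphWithEmbeddings> result = it.closeWith(grown);
\end{verbatim}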
\subsection{Canonical Labels for Directed Multigraphs}
\label{sec:gspan}
We use a derivative of the gSpan minimum DFS code \cite{yan2002gspan} as canonical labels for directed multigraph patterns:
\begin{definition}
\label{def:dfscode}\textsc{(DFS Code)}.
A \textit{DFS code} representing a pattern of $j$ vertices and $k$ edges ($j,k \geq 1$) is defined to be a $k$-tuple $C = \langle x_1,x_2,..,x_k \rangle$ of extensions, where each \textit{extension} is a hextuple $x =\langle t_a, t_b, l_a, d, l_e, l_b \rangle$ representing the traversal of an edge $e$ with label $l_e \in \mathcal{L}_e$ from a \textit{start} vertex $v_a$ to an \textit{end} vertex $v_b$. $d \in \{in, out\}$ indicates if the edge was traversed in or against its direction. A traversal will be considered to be in direction, if the start vertex is the source vertex, i.e., $v_a(x) = s(e)$. The fields $l_a, l_b \in \mathcal{L}_v$ represent the respective labels of both vertices and their initial discovery times $t_a, t_b \in T \mid T = \langle 0, .., j \rangle$ where the vertex at $t=0$ is always the start vertex of the first extension. A DFS code $C_p$ will be considered to be the parent of a DFS code $C_c$, iff $\forall i \in \langle 1,..,k-1 \rangle : C_c.x_i = C_p.x_i$.
\end{definition}
According to this definition, child DFS codes can be easily generated by adding a single traversal to their parent. Further on, DFS codes support multigraphs since extension indexes can be mapped to edge identifiers to describe embeddings.
However, there may exist multiple DFS codes representing the same graph pattern. To use DFS codes as a canonical form, gSpan uses a lexicographic order to determine a minimum one among all possible DFS codes \cite{yan2002tr}. This order is a combination of two linear orders. The first is defined on start and end vertex times of extensions $T \times T$; for example, a backwards growth to an already discovered vertex is smaller than a forwards growth to a new one.
The second order is defined on the labels of start vertex, edge and end vertex $\mathcal{L}_v \times \mathcal{L}_e \times \mathcal{L}_v$, i.e., if a comparison cannot be made based on vertex discovery times, labels and their natural order (e.g., alphabetical) are compared from left to right. To support directed graphs, we extended this order by direction $D = \{in, out\}$ with $out < in$ resulting into an order over $\mathcal{L}_v \times D \times \mathcal{L}_e \times \mathcal{L}_v$, i.e., in the case of two traversals with same start vertex labels, a traversal of an outgoing edge will always be considered to be smaller.
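For concreteness, the label part of this extended order can be expressed by the following minimal comparison (a sketch of ours with integer-coded labels as produced by the dictionary coding of Section \ref{sec:dictionary}; the direction is encoded as $0 = out$, $1 = in$, so $out < in$ holds by integer comparison):
\begin{verbatim}
// x and y each hold the label part of one extension: {la, d, le, lb}
int compareLabels(int[] x, int[] y) {
  for (int i = 0; i < 4; i++) {
    if (x[i] != y[i]) {
      return Integer.compare(x[i], y[i]);
    }
  }
  return 0; // equal label parts; vertex discovery times decide (not shown)
}
\end{verbatim}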
\begin{definition}
\label{def:mindfs}\textsc{(Minimum DFS Code)}.
There exists an order among DFS codes such that $\forall C_1, C_2 : C_1 < C_2 \vee C_1 = C_2 \vee C_1 > C_2$ is true. Let $\mathcal{C}_P$ be the set of all DFS codes describing a pattern $P$ and $C_{min}$ be its minimum DFS code, then $\nexists\ C_i \in \mathcal{C}_P : C_i < C_{min}$ is true.
\end{definition}
\subsection{Constrained Pattern Growth}
\label{sec:branch}
Besides gSpan's canonical labels we also adapted the growth constraints to skip parent-child relationships in the pattern lattice (dotted lines in Figure \ref{fig:lattice}). However, in contrast to gSpan, we don't perform a pattern-centric DFS (Figure \ref{fig:dfs}) but a level-wise DFS (Figure \ref{fig:ldfs}), i.e., we perform highly concurrent embedding-centric searches. Due to limited space, we refer to \cite{yan2002tr} for the theoretical background and focus on our adaptation to the distributed dataflow programming model.
There are two constraints for growing children of a parent embedding. The first, in the following denoted by \textit{time constraint}, dictates that forwards growth is only allowed starting from the rightmost path and backwards growth only from the rightmost vertex, where \textit{forwards} means an extension to a vertex not contained in the parent, \textit{backwards} means an extension to a contained one, the \textit{rightmost vertex} is the parent's latest discovered vertex and the \textit{rightmost path} is the path of forward growths from the initial start vertex to the rightmost one. The second constraint, in the following denoted by \textit{branch constraint}, requires that the minimum DFS code of an edge $C^1(e)$ be greater than or equal to the parent's \textit{branch} $C^1(P)$, which is the 1-edge code described by only the initial extension of the pattern's minimum DFS code.
Algorithm \ref{alg:pg} shows our adaptation of these constraints to the distributed dataflow programming model, in particular, a map function $\tau$ that executes pattern growth for all embeddings of frequent patterns in a single graph (line 9 of Algorithm \ref{alg:flow}). Therefore, we hold not only $G$ but also the embedding map $\mu^k$ for each element of $\mathcal{G}$ and give $\tau$ access to $\mathcal{F}^k$ as it was received by every worker in the broadcasting step (line 8 of Algorithm \ref{alg:flow}).
In an embedding-centric approach, a naive solution would be testing possible growth for the cross of supported frequent patterns' embeddings and the graph's edges. As an optimization, we use a merge strategy based on the branch constraint to reduce the number of these tests; a sketch follows below. Therefore, $\mathcal{F}^k$ in Algorithm \ref{alg:pg} is an $n$-tuple sorted in ascending order by minimum DFS code. When executing the map function, we keep a current minimum branch $C^1_{min}$ and a current edge candidate set $E_{\geq min}$ (lines 1,2). Then, for every supported frequent pattern (line 3) we compare its branch to the current minimum (line 4) and only if it is greater, the current minimum will be updated (line 5) and the set of growth candidates can be shrunk (line 6). Thus, only for the cross of embeddings and branch-validated edges (line 8) parent containment and the time constraint need to be checked (line 9). In the case of a successful growth (line 10) the resulting pattern and its embedding will be added to $\mu^{k+1}$, the output of the map function (line 14). Sorting and rightmost path calculation are not part of the map function and executed only $|W \times \mathcal{F}|$ times at broadcast reception.
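The core of this merge strategy can be sketched as follows (our own simplified illustration; names do not match the actual sources). Because edges are sorted by their 1-edge minimum DFS codes (see Section \ref{sec:dictionary}), shrinking the candidate set reduces to advancing an index:
\begin{verbatim}
int firstValid = 0;                  // index of first branch-valid edge
for (Pattern pattern : sortedFrequentPatterns) { // ascending DFS codes
  int[] branch = pattern.branch();   // initial 1-edge extension
  while (firstValid < edgeCodes.length
      && compare(edgeCodes[firstValid], branch) < 0) {
    firstValid++;                    // skip edges below the branch
  }
  for (Embedding m : embeddings.get(pattern)) {
    for (int e = firstValid; e < edgeCodes.length; e++) {
      // check parent containment and the time constraint, then grow
    }
  }
}
\end{verbatim}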
\subsection{False Positive Verification}
\label{sec:validation}
\vspace{-2mm}
Although the constrained pattern growth described previously helps to skip links in the pattern lattice (dotted lines in Figure \ref{fig:lattice}), it gives no guarantee for visiting every pattern only once. In the case of multiple ($n$) visits, $n-1$ non-minimal DFS codes (\textit{false positives}) will be generated. Thus, they need to be verified, e.g., by turning the label into a graph and recalculating the minimum DFS code. This is the only part of the algorithm facing the isomorphism problem and reducing its cardinality may reduce total runtime \cite{yan2002tr}. Thus, we evaluated moving the verification step to three different steps in the dataflow, in particular before reporting (line 4 in Algorithm \ref{alg:flow}), after partition frequency counting (line 5) and after frequency pruning (line 7). In the first case, false positives won't be counted and shuffled but verification is executed $|\mathcal{P} \times \mathcal{G}|$ times; in the second case, false positives are counted but not shuffled with $| \mathcal{P} \times W|$ verifications and in the last case, they will be counted and shuffled but only $|\mathcal{F}|$ verifications are required. By experimental evaluation we found that the first option is always slow while the others lead to similar runtimes (see Section \ref{sec:config}).
\subsection{Preprocessing and Dictionary Coding}
\label{sec:dictionary}
\vspace{-1mm}
Before executing the dataflow shown by Algorithm \ref{alg:flow}, we apply preprocessing that includes label-frequency based pruning, string-integer dictionary coding and sorting edges according to their 1-edge minimum DFS codes. The original gSpan algorithm already used these concepts but we improved the first two and adapted the third to our level-wise DFS strategy. In the first preprocessing step, we determine frequent vertex labels and broadcast a dictionary to all workers. Afterwards, we drop all vertices with infrequent labels as well as their incident edges. Then, we determine frequent edge labels, in contrast to the original, only based on the remaining edges. Thus, we can potentially drop more edges; for example, $e_1$ of $G_1$ in Figure \ref{fig:lattice} would be removed. This would not be the case if we just evaluated its edge label: without first dropping $e_2$ of $G_0$ (because $v_2$ has the infrequent label \texttt{C}), the frequency of edge label \texttt{b} would be $2$, i.e., it would be considered frequent.
After dictionaries for vertex and edge labels are made available to all workers by broadcasting, we not only replace string labels by integers to save memory and to accelerate comparison but also sort edges according to their minimum DFS code, i.e., we use n-tuples instead of sets to store edges. We benefit from the resulting sorted order in every execution of the constrained pattern growth (see Section \ref{sec:branch}) as the effort of determining branch-valid edge candidates (line 6 of Algorithm \ref{alg:pg}) is reduced from a set filter operation to a simple increase of the minimum edge index. A minimal sketch of the dictionary construction is shown below.
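The following is a hedged, single-machine illustration of the frequency-based dictionary coding (the actual implementation operates on distributed datasets; all names are ours):
\begin{verbatim}
import java.util.*;
import java.util.stream.Collectors;

// frequent labels are ranked and mapped to integers;
// infrequent labels are simply absent from the dictionary (pruned)
Map<String, Integer> buildDictionary(Map<String, Long> labelFrequencies,
                                     long minFrequency) {
  Map<String, Integer> dictionary = new HashMap<>();
  // deterministic order so all workers derive identical codes
  List<String> frequent = labelFrequencies.entrySet().stream()
    .filter(e -> e.getValue() >= minFrequency)
    .map(Map.Entry::getKey)
    .sorted()
    .collect(Collectors.toList());
  int id = 0;
  for (String label : frequent) {
    dictionary.put(label, id++);
  }
  return dictionary;
}
\end{verbatim}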
\subsection{Data Structures and Compression}
\label{sec:compression}
We not only use minimum DFS codes as canonical labels but also a data structure based thereon to support all pattern operations (counting, growth and verification) without format conversions. We further store graphs as sorted lists of 1-edge DFS codes to allow a direct comparison at the lookup for the first valid edge of a branch in the pattern growth process (line 6 of Algorithm \ref{alg:pg}). Figure \ref{fig:element} illustrates a single element of $\mathcal{G}$ in Algorithm \ref{alg:flow} representing $G_2$ from Figure \ref{fig:lattice} and its embedding map $\mu^k$ in the $k=2$ iteration. Graphs and patterns are stored according to Definition \ref{def:dfscode} but encoded in integer arrays where each group of 6 elements stores a graph's edge or a pattern's extension. For the sake of readability we use alphanumerical characters in Figure \ref{fig:element}. $\mu^k$ is stored as a pair of nested integer arrays $\langle \mathcal{P}^k, \mathcal{M}^k \rangle$ where equal indexes map embeddings to patterns. All embeddings of the same pattern are encoded in a single multiplexed integer array where each group of $|P.V|+|P.E|$ elements stores a single embedding. Here, indexes relative to their offset relate vertex ids to their initial discovery time and edge ids to extension numbers. The encoding sketch below illustrates this layout.
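To illustrate the layout (with our own illustrative field and class names), a pattern or graph is multiplexed into a flat integer array as follows:
\begin{verbatim}
// 6 ints per edge/extension: (ta, tb, la, d, le, lb) as in Definition 7;
// direction is encoded as 0 = out, 1 = in
int[] encode(Extension[] extensions) {
  int[] mux = new int[extensions.length * 6];
  for (int k = 0; k < extensions.length; k++) {
    Extension x = extensions[k];
    int o = k * 6;               // offset of the k-th extension
    mux[o]     = x.fromTime;
    mux[o + 1] = x.toTime;
    mux[o + 2] = x.fromLabel;
    mux[o + 3] = x.outgoing ? 0 : 1;
    mux[o + 4] = x.edgeLabel;
    mux[o + 5] = x.toLabel;
  }
  return mux;
}
\end{verbatim}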
This data structure not only allows fast pattern operations but also enables lightweight and effective integer compression. Therefore, we exploit the predictable value ranges of our integer arrays. As we use dictionary coding and vertex discovery times are bounded by the maximum edge count $k_{max}$, the array's values may only range from $0..(\max(k_{max},l_v, l_e)-1)$ where $l_v, l_e$ are the numbers of distinct vertex and edge labels. In the context of FSM, the maximum value will typically be much less than the integer range of $2^{32}$. There are compression techniques benefiting from low-valued integer arrays \cite{lemire2015decoding}. In preliminary experiments we found that Simple16 \cite{zhang2008performance} allows very fast compression and gives an average compression ratio of about 7 over all patterns found in our synthetic test dataset (see Section \ref{sec:data}). We apply integer compression not only to patterns but also to graphs and embeddings, which also have low maximum values, to decrease memory usage. Embeddings and graphs are only decompressed on demand and at most for one graph at a time. All equality-based operations (map access and frequency counting) are performed on compressed values. Our experimental evaluation results show a significant impact of this compression strategy (see Section \ref{sec:config}).
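As a usage sketch of the codec (hedged: we assume the \texttt{Simple16} class and \texttt{IntegerCODEC} interface of JavaFastPFOR, the library our evaluation uses for Simple16; buffer sizing is simplified for illustration):
\begin{verbatim}
import me.lemire.integercompression.IntWrapper;
import me.lemire.integercompression.Simple16;

int[] compress(int[] mux) {
  Simple16 codec = new Simple16();
  int[] out = new int[mux.length + 16];  // generous headroom
  IntWrapper inPos = new IntWrapper(0);
  IntWrapper outPos = new IntWrapper(0);
  codec.compress(mux, inPos, mux.length, out, outPos);
  return java.util.Arrays.copyOf(out, outPos.get());
}
\end{verbatim}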
\section{Implementation}
\label{sec:impl}
Our approach to distributed FSM was implemented as an extension of the \textsc{Gradoop} framework \cite{junghanns2016epgm}, a 3rd party library of the distributed dataflow framework Apache Flink \cite{carbone2015apache}. In the following, we will briefly introduce \textsc{Gradoop} and Flink and provide implementation details.
\input{41_gradoop}
\subsection{Apache Flink}
\input{43_structures}
\subsection{Preprocessing}
\subsection{Distributed Iterative Mining}
\subsection{\textsc{Gradoop} Framework}
\textsc{Gradoop} \cite{junghanns2016epgm} is a system for declarative graph analytics. It allows the combination of multiple graph operators and graph mining algorithms in a single analytical program. To enable distributed execution of analytical programs on shared nothing clusters, \textsc{Gradoop}'s graph representation is mapped to the dataset abstraction and operators as well as algorithms are implemented using Flink's transformations. The source code of \textsc{Gradoop} is available online under the GPL license\footnote{\url{www.gradoop.com}}.
A \textsc{Gradoop} program can be represented by a directed acyclic graph (DAG) where vertices are graphs or graph collections and edges are operators or algorithms. Besides general-purpose operators (e.g., the union of two graph collections) the framework provides a generic \textit{call} operator and fitting Java interfaces to make arbitrary graph mining algorithms applicable in \textsc{Gradoop} analytical programs. Our approach to distributed FSM was designed to be applicable with the call operator; a hypothetical invocation is sketched below. However, we also provide a standalone program\footnote{\url{github...}} to use our implementation without the framework. In contrast to other FSM implementations, we support directed or undirected multigraphs with arbitrary string labels as input and require no upfront integer relabeling or sorting.
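As a usage illustration (a hypothetical call of ours; the operator and parameter names below are not the framework's verbatim API):
\begin{verbatim}
// hypothetical invocation of DIMSpan via the generic call operator
GraphCollection frequentPatterns = graphCollection
  .callForCollection(new DIMSpan(0.8f /* min support */,
                                 true /* directed mode */));
\end{verbatim}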
\subsection{Data Structures}
\label{sec:model}
We performed preliminary experiments to find data structures that are space efficient, show a good performance for lexicographic comparison (branch restriction, false positive validation) and equality testing (counting) and can be serialized and deserialized quickly between Flink transformations. Therefore, we evaluated the performance of Java objects, strings, byte arrays, nested integer arrays and multiplexed integer arrays. We found that multiplexed integer arrays meet all three criteria best, in particular with regard to the fast but effective compression techniques available for integer arrays \cite{lemire2015decoding}.
The main data structures used in our approach are graphs, patterns and their embeddings. We use integer arrays of length $6 \times k$ to represent $k$-edge patterns and graphs, in particular $G/P = \langle v^1_a, v^1_b, l^1_a, d^1, l^1_e, l^1_b,..,v^k_a, v^k_b, l^k_a, d^k, l^k_e, l^k_b \rangle$. In graphs, edges are already stored in their minimal traversal direction according to the gSpan lexicographic order.
\section{Evaluation}
\label{sec:eval}
In this section we present the results of a performance evaluation of DIMSpan based on a real molecular dataset of simple undirected graphs and a synthetic dataset of directed multigraphs. We evaluate scalability for increasing volume of input, decreasing minimum support and variable cluster size. Further on, we analyze the runtime impact of the discussed pruning and optimization techniques.
\input{51_impl}
\input{541_scalability}
\input{52_data}
\input{551_config}
\input{53_size}
\input{54_scalability}
\input{55_config}
\subsection{Implementation and Setup}
\label{sec:setup}
We evaluated DIMSpan using Java 1.8.0\_102, Apache Flink 1.1.2 and Hadoop 2.6.0. More precisely we used Flink's DataSet API\footnote{\url{https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html}} for all transformations and its \textit{bulk iteration} for the iterative part. We further used the Simple16 implementation from JavaFastPFOR\footnote{\url{https://github.com/lemire/JavaFastPFOR}} for compression. The source code is available on GitHub\footnote{\url{https://github.com/dbs-leipzig/gradoop}; org.gradoop.examples.dimspan} under GPL v3 licence. All experiments were performed on our in-house cluster with 16 physical machines equipped with an Intel E5-2430 2.5 Ghz 6-core CPU, 48 GB RAM, two 4 TB SATA disks and running openSUSE 13.2. The machines are connected via 1 Gigabit Ethernet.
\subsection{Comparison to Dataflow-based Approaches}
\subsection{Data Sets}
\label{sec:data}
We evaluated three data-related dimensions that impact the runtime of a distributed FSM algorithm: structural characteristics, \textit{input size} $|\mathcal{G}|$ and \textit{result size} $|\mathcal{F}|$. To show scalability in relation to one of the dimensions, the other two need to be fixed. While $|\mathcal{F}|$ can be increased by decreasing the minimum support threshold, varying the other two dimensions separately is less trivial. Thus, we decided to use two base datasets with divergent structural characteristics and just copy every graph several times to increase $|\mathcal{G}|$ while preserving structural characteristics and $|\mathcal{F}|$.
The first base dataset is \textit{yeast-active}\footnote{\url{https://www.cs.ucsb.edu/~xyan/dataset.htm}}, in the following denoted by \textit{molecular}, a real dataset from anti-cancer research. It was chosen to represent molecular databases because structural characteristics among them do not fundamentally differ due to the rules of chemistry. For example, all molecular databases describe simple undirected graphs with only two different edge labels (single and double bond) and most frequent patterns are paths or trees \cite{nijssen2005gaston}. The base dataset contains around 10K graphs (9567) and is scaled up to datasets containing around 100K to 10M graphs. We did not use an optimized version of DIMSpan for undirected graphs but provide an according parameter. If the parameter is set to undirected, the direction indicator (see Section \ref{sec:gspan}) will just be ignored. Dedicated application logic is only used when it is unavoidable; for example, a 1-edge DFS code describing a non-loop edge with two equal vertex labels (automorphism) leads to two embeddings in undirected mode.
The second category of datasets, in the following denoted by \textit{synthetic}, was created by our own data generator\footnote{org.gradoop.flink.datagen.transactions.predictable}. It generates unequally sized connected directed multigraphs where each 10th graph has a different size ranging from $|V| = 10, |E| = 14$ to $|V| = 91, |E| = 140$. There are 11 distinct vertex and $5 + |\mathcal{G}|/1000$ distinct edge labels. The result is predictable and contains 702 frequent patterns with 1 to 13 edges for each min support decrement of 10\% (i.e., 702 for 100\%, 1404 for 90\%, ...). The patterns contain loops, parallel edges (in and against direction) and different subgraph automorphisms (e.g., "rotated" and "mirrored") separately as well as in all combinations. The data generator was not only designed for benchmarking but also for testing the correctness of implementations. To verify the number of contained frequent patterns we implemented a simple pruning-free brute-force FSM algorithm and manually verified all patterns of sizes 1..4, 12 and 13.
\subsection{Input and Result Size}
\label{sec:size}
\vspace{-2mm}
Table \ref{tab:size} and Figure \ref{fig:size} show measurement results for increasing input $|\mathcal{G}|$ and result size $|\mathcal{F}|$ (decreasing minimum support $s_{min}$) for both datasets on a cluster with 16 machines, i.e., 96 worker threads ($|W| = 96$). Table \ref{tab:size} shows absolute runtimes in minutes while Figure \ref{fig:size} illustrates relative runtimes for processing a portion of 100K graphs in seconds, i.e., $t(100K) = t(|\mathcal{G}|) * \nicefrac{100K}{|\mathcal{G}|}$.
In more detail, Figure \ref{fig:size} shows runtimes for an increasing number of graphs (left hand side) and for increasing result sizes (right hand side) for molecular as well as synthetic datasets. We observe that DIMSpan achieves an excellent scalability with regard to both dimensions since for nearly all configurations runtime grows less than the input size or result size.
The charts on the left hand side of Figure \ref{fig:size} indicate that the time to process 100K graphs constantly decreases with an increasing input size for both workloads. The reason is our optimization strategy that verifies DFS codes after counting (see Section \ref{sec:validation}), which makes the number of isomorphism resolutions independent of the input size. However, we see a decrease of this effect with decreasing threshold, which indicates that pattern growth becomes more time consuming for lower thresholds.
The charts on the right hand side of Figure \ref{fig:size} show the time to process 100K graphs for decreasing support thresholds, i.e., increasing result sizes. The shapes of both charts fundamentally differ as the result size of the molecular dataset increases exponentially for decreasing thresholds while the synthetic dataset (by design) shows a linear growth.
For the synthetic datasets we observe near-perfect linearly increasing runtimes. On the other hand, for our real-world molecular datasets there are non-linear effects for low support thresholds. While runtime grows less than the result size down to a minimum support $s_{min}$ of 5\%, a further reduction of $s_{min}$ causes a higher increase in runtimes. For example, for 10M graphs the runtime goes up by factor $3.5$ for $s_{min}=3\%$ compared to $s_{min}=5\%$ while the result size only increases by factor $2.7$. Additionally, we see again the positive effect of post-counting verification as scalability becomes better with increased input volume.
\subsection{Cluster Size}
\label{sec:scalability}
\vspace{-2mm}
Figure \ref{fig:speedup} shows measured runtimes and gained speedup for varying cluster sizes with fixed $|\mathcal{G}|$ and $s_{min}$. The speedup is measured over cluster size 2, as Flink chooses an alternative execution strategy for a single machine, which would lead to a superlinear speedup from 1 to 2 machines on the synthetic dataset.
We see that DIMSpan scales sublinear but achieves notable speedups on both datasets for an increasing number of machines which justifies adding machines to decrease absolute runtime in big data scenarios.
\subsection{Configuration Slowdown}
\label{sec:config}
\vspace{-1mm}
To analyze the impact of the proposed optimizations, we evaluated to what degree response times slow down when single optimization and pruning techniques are omitted.
Table \ref{tab:slowdown} shows the observed slowdowns compared to the default DIMSpan algorithm including all optimizations for 16 machines and fixed data parameters. We see that there are some differences between the two datasets, but that pattern compression (1, 2) is the most effective optimization technique. Its effectiveness is primarily due to faster counting enabled by smaller data objects rather than lower network traffic. As our integer array representation is already memory efficient, performing compression dedicatedly before shuffling even led to a slowdown for the synthetic dataset, as the compression effort is higher than its benefit. Embedding compression (3) and graph compression (4) not only lower the memory footprint but also improve runtime, as data passed among iterations can be serialized faster by Apache Flink.

Moving the verification to the end of the pattern growth process (5) shows, as expected, a notable slowdown, even if false positives are counted otherwise (see Sections \ref{sec:validation} and \ref{sec:mr}). Moving the verification after the filter step (6) has no notable impact. The effect of our preprocessing (7) highly depends on the dataset. It is very high for our synthetic dataset with many infrequent edge labels but still effective on the real molecular data. Finally (8), we disabled the branch constraint check (line 4 of Algorithm \ref{alg:pg}). Note that the result remains correct as the branch pruning also applies automatically at verification. For the synthetic dataset, this even leads to an improved runtime as we avoid sorting edges in the preprocessing, don't perform the check in pattern growth and consequently never execute lines 5 and 6 of Algorithm \ref{alg:pg}. By contrast, we can benefit from this technique for the molecular dataset as it has a much lower number of distinct minimum 1-edge DFS codes.
\section{Conclusions \& Future Work}
\label{sec:conclusion}
\vspace{-1mm}
We proposed DIMSpan, the first approach to parallel transactional FSM that combines the effective search space pruning of a leading single-machine algorithm with the technical advantages of state-of-the-art distributed in-memory dataflow systems. DIMSpan is part of \textsc{Gradoop} \cite{junghanns2016epgm, petermann2016graph}, an open-source framework for distributed graph analytics. A functional comparison to approaches based on MapReduce (Section \ref{sec:mr}) has shown that DIMSpan is superior in terms of network traffic, disk access and the number of isomorphism resolutions. Our experimental evaluation showed the high scalability of DIMSpan for very large datasets and low support thresholds, not only on molecular data but also on directed multigraphs.
We found that its runtime benefits most from optimized data structures as well as cheap and effective compression based thereon. In future work we will further optimize data representation and compression to count and exchange only as few bits as possible. Further on, we will investigate adaptive partitioning strategies to optimize load balancing among threads.
\section{Comparison to Approaches based on MapReduce}
\label{sec:mr}
To the best of our knowledge, only five approaches to transactional FSM based on shared nothing clusters exist \cite{hill2012iterative, lu2013efficiently, aridhi2014novel, lin2014large, bhuiyan2015iterative}. They are all based on MapReduce. In this section, we compare three of these approaches to DIMSpan, since \cite{aridhi2014novel, bhuiyan2015iterative} show relaxed problem definitions in comparison to Definition \ref{def:fsm}. The comparison focuses on our optimization criteria, in particular upper bounds of shuffled data volume, required disk access and the number of isomorphism resolutions. Isomorphisms are resolved either when counting patterns by subgraph isomorphism testing or at the generation of canonical labels from scratch as well as during their verification (see Section \ref{sec:validation}), as both require enumerating all permutations of a certain graph representation.
\subsection{Comparison}
Table \ref{tab:comp} shows a comparison of I-FSM \cite{hill2012iterative}, MR-FSE \cite{lu2013efficiently}, DIMSpan and the filter-refinement (F\&R) approach of \cite{lin2014large} with regard to the stated dimensions. While the first three are iterative (i.e., level-wise search), F\&R is partition-based and requires only a single phase. All approaches including DIMSpan can be represented by two map-reduce (MR) phases, where upper bounds of iterative approaches express the union of single iterations. On top of Table \ref{tab:comp} we provide orders among data volumes and cardinalities.
\textbf{I-FSM} uses complete subgraphs $\mathcal{S}$ as its main data structure. In Map 1, $k$-edge subgraphs of the previous iteration are read from disk and shuffled by graph id. In Reduce 1, graphs are reconstructed by a union of all subgraphs. Afterwards, $k+1$-edge subgraphs are generated and written to disk. In Map 2, they are read again and a canonical label (not further specified in \cite{hill2012iterative}) is calculated for every subgraph. Thus, the isomorphism problem is resolved with maximum cardinality $|\mathcal{S}|$. Then, all subgraphs are shuffled again according to the added label. In Reduce 2, label frequencies are counted. Finally, all subgraphs showing a frequent label are written to disk.
\textbf{MR-FSE} uses embedding maps $\mathcal{M}$ as its main data structure, i.e., a version of $\mathcal{S}$ that is irredundant with regard to vertex and edge labels and describes subgraphs by patterns and embeddings (see Section \ref{sec:problem}). In Map 1, $k$-edge maps of the previous iteration are read from disk. Additionally, all $k$-edge frequent patterns are read by each worker ($W \times \mathcal{P}$). Then, graphs are reconstructed based on embeddings, pattern growth is applied and updated maps are written back to disk. MR-FSE uses DFS codes like DIMSpan, but \cite{lu2013efficiently} clearly states that no verification is performed at any time. Instead, false positives are detected by enumerating all DFS code permutations of each distinct edge set (subgraph) to choose the minimal one. Consequently, isomorphisms among DFS codes are resolved $|\mathcal{S}|$ times. Reduce 1 is not used. In Map 2, the grown maps are read again and a tuple for each pattern and supporting graph ($\mathcal{I}_\mathcal{G} \times \mathcal{P}$) is shuffled. In Reduce 2, pattern frequencies are counted, filtered and written to disk.
\textbf{F\&R} reads graphs from disk and runs Gaston \cite{nijssen2005gaston}, an efficient single-machine algorithm, on each partition in Map 1. Then, a statistical model is used to report partition frequencies of patterns. Thus, every pattern is verified and shuffled only once per partition ($W \times \mathcal{P}$). In Reduce 1, local frequencies are evaluated for each pattern and a set of candidate patterns $\mathcal{P}$ including some frequency information is written to disk. In Map 2, graphs and information about candidate patterns are read from disk. For some partitions, local pattern frequencies may be unknown at this stage. Thus, they are refined by a-priori-like subgraph isomorphism testing. The upper bound is not fully $|\mathcal{G} \times \mathcal{P}|$ as it is guaranteed that the exact frequency is known for at least one partition. In Reduce 2, refined pattern frequencies are summed up, filtered and written to disk.
\input{531_size}
\textbf{DIMSpan} reads graphs only once from disk before the iterative part and writes patterns only once to disk afterwards. Map and Reduce 1 are similar to MR-FSE, but DIMSpan keeps graphs as well as embedding maps in main memory and requires no disk access to keep state. In its default configuration, DIMSpan verifies patterns after a combine operation at the end of Map 2 and thus faces isomorphism resolution only $|W \times \mathcal{P}|$ times. By configuration, verification can be moved to Reduce 2, which further reduces this cardinality to $|\mathcal{P}|$ (see Section \ref{sec:validation}). The second effect of the combine operation is that only partition frequencies are shuffled, like in F\&R. To make frequent $k$-edge patterns available to all workers, DIMSpan uses broadcasting, which requires only network traffic but no disk access.
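To make the counting scheme concrete, the following Python sketch mimics the combine-based counting and per-partition verification described above. It is only an illustration of the idea; the function names (\texttt{count\_and\_verify}, \texttt{is\_minimal}) are hypothetical and the sketch does not reflect Gradoop's actual Apache Flink implementation.
\begin{verbatim}
from collections import defaultdict

def count_and_verify(partitions, is_minimal, support_threshold):
    """Schematic frequency counting with a combine step.

    partitions: one list of (pattern, graph_id) pairs per worker,
    with graph ids assumed unique across partitions.
    Only one (pattern, partition_frequency) pair per pattern leaves
    a worker, and each pattern is verified once per partition.
    """
    partition_counts = []
    for part in partitions:                   # per-worker "Map 2"
        local = defaultdict(set)
        for pattern, graph_id in part:
            local[pattern].add(graph_id)      # graph support, not embeddings
        combined = {}
        for pattern, graphs in local.items(): # combine: frequencies only
            if is_minimal(pattern):           # verify once per partition
                combined[pattern] = len(graphs)
        partition_counts.append(combined)

    totals = defaultdict(int)                 # "Reduce 2": global filter
    for counts in partition_counts:
        for pattern, freq in counts.items():
            totals[pattern] += freq
    return {p: f for p, f in totals.items() if f >= support_threshold}
\end{verbatim}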
\subsection{Summary}
DIMSpan reduces disk access to a minimum as it is based on a distributed in-memory system. However, beyond the technology-based advantage, DIMSpan is also superior in the number of isomorphism resolutions, as it moves pattern verification after counting and does not apply a-priori-like operations at any time. Further on, DIMSpan shuffles partition frequencies only once, which is less than F\&R (twice) and much less than complete subgraphs (I-FSM) or graph-pattern supports (MR-FSE). Besides the discussed dimensions, only F\&R and DIMSpan use compression. DIMSpan is the only approach that supports directed multigraphs and applies preprocessing (see Section \ref{sec:dictionary}).
Our comparison clearly shows that I-FSM is inefficient, as it is the only approach that reads, shuffles and writes full subgraphs twice per iteration. On the other hand, it would have been interesting to reproduce the evaluation results of MR-FSE and F\&R on our own cluster. Unfortunately, MR-FSE is not publicly available. Regarding F\&R, only binaries\footnote{\url{https://sourceforge.net/projects/mrfsm/}} are available. However, there is no sufficient English documentation, and they rely on an outdated non-standard Hadoop installation. Thus, we were not able to execute the binaries without errors despite notable effort and support from the author.
\section*{Appendix - Disproof of the Isomorphism-free Verification of MR-FSE}
In the following, we provide a disproof by counterexample showing that the isomorphism-free verification of gSpan's minimum DFS codes \cite{yan2002gspan} proposed by \cite{lu2013efficiently} is, according to the information provided in the paper, not correct, as it fails for subgraphs containing automorphisms. The author did not respond to a source code request.
\subsection*{Definitions}
We first define graph, DFS code and embedding according to \cite{lu2013efficiently}:
\begin{small}
\begin{definition}
\textsc{(Graph)}.
A simple undirected labeled graph, in the following simply denoted by \textit{graph}, is a triple $G=\langle V, E, \lambda \rangle$ of vertices $V$, edges $E \subseteq \mathcal{P}(V) \mid \forall e \in E : |e| \in \{1,2\}$ and a labeling function $\lambda : (V \cup E) \rightarrow L$ associating a label $l \in L$ with every vertex and edge.
\end{definition}
\begin{definition}
\label{def:dfscode}\textsc{(DFS Code)}.
A \textit{DFS code} representing a pattern of $j$ vertices and $k$ edges ($j,k \geq 1$) is defined to be a $k$-tuple $C = \langle x_1,x_2,..,x_k \rangle$ of extensions, where each \textit{extension} is a quintuple $x =\langle t_a, t_b, l_a, l_e, l_b \rangle$ representing the traversal of an edge $e$ with label $l_e \in L$ from a \textit{start} vertex $v_a$ with label $l_a \in L$ to an \textit{end} vertex $v_b$ with label $l_b \in L$ and their initial discovery times $t_a, t_b \in T \mid T = \langle 0, .., j \rangle$, where the vertex at $t=0$ is always the start vertex of the first extension. A DFS code $C_p$ is considered to be the parent of a DFS code $C_c$ iff $\forall i \in \langle 1,..,k-1 \rangle : C_c.x_i = C_p.x_i$.
\end{definition}
A \textit{minimum DFS code} is defined according to Definition \ref{def:mindfs}.
\begin{definition}
\textsc{(Embedding)}.
Given a graph $G$ and a DFS code $C$, an \textit{embedding} is defined to be an $n$-tuple $m = \langle v_t \mid t \in T \rangle$ of vertices whose indices correspond to the initial discovery times.
\end{definition}
\end{small}
\subsection*{Proposition}
Due to the condition in line 10 of Algorithm 2 and Lemma V of \cite{lu2013efficiently}, a pair of DFS code and embedding $g^{s}_{e}$ will only be added to the output if there does not already exist a pair in hashset $genG$ covering exactly the same edge set. Consequently, only the first $g^s_e$ for each distinct edge set will be added to the output. Thus, the authors propose that for every graph $G$ and every minimal $k$-edge DFS code $C_{min}$ it is possible to generate the complete set of minimal $k+1$-edge children based on exactly one embedding which maps a subgraph of $G$ to $C_{min}$.
\subsection*{Counterexample}
Given the two graphs $G_1,G_2$ of Figure \ref{fig:disprove}, in the 3rd iteration the two black-lined subgraphs are both represented by a single minimum DFS code $C^3_{min}$. After extending both by the red edges, the resulting minimal DFS code is $C^4_{min}$. Both minimum DFS codes are listed on top of Table \ref{tab:disprove}. The table further lists all 6 possible embeddings $m_{11},..,m_{16}$ and $m_{21},..,m_{26}$ for each of the two graphs before and after pattern growth. We see that not all possible extensions lead to a minimum DFS code (e.g., $m_{11}$ does not). These are the ones that have to be filtered out in a verification step, so that they are neither added to the result nor extended in the subsequent iteration.
In \cite{lu2013efficiently}, only the first discovered embedding for each distinct edge subset and minimum DFS code is stored, i.e., only one of $m_{11},..,m_{16}$ and one of $m_{21},..,m_{26}$. Let $m_{11}$ and $m_{21}$ be the stored ones; then the frequency of $C^4_{min}$ will be incorrect, as the false positive code of $m_{11}$ will be counted instead. Let $m_{11}$ and $m_{22}$ be the stored ones; then the correct minimum DFS code will never be generated. We see that a single embedding per distinct edge set and DFS code cannot guarantee to generate all minimal children. Our counterexample shows that extending DFS codes using an isomorphism-free append-only approach based on only a single embedding cannot guarantee a correct result.
\subsection*{Contradiction}
The isomorphism-free verification of \cite{lu2013efficiently} potentially fails for all subgraphs containing at least one automorphism. Subgraphs similar to our counterexample occur inter alia in molecular databases, for example, cycloalkanes\footnote{\url{https://en.wikipedia.org/wiki/Cycloalkane}}.
\begin{figure}[t]
\vspace{-3mm}
\caption{Illustration of our counterexample showing two graphs $G_1,G_2$, each with one $3$-edge subgraph containing automorphisms (black lines) and an extension to a $4$-edge subgraph (red lines). Roman numbers are vertex identifiers.}
\label{fig:disprove}
\centering
\includegraphics[width=0.45\textwidth]{eps/disprove.eps}
\end{figure}
\renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{Embeddings and DFS codes during the pattern growth from $3$-edge subgraphs (black lines) to $4$-edge subgraphs (red lines) in the graphs of Figure \ref{fig:disprove}.}
\label{tab:disprove}
\begin{center}
\begin{tabular}{lllll}
$C^3_{min}$ & \multicolumn{4}{l}{ $\langle \langle 0,1,A,a,A \rangle, \langle 1,2,A,a,A \rangle, \langle 2,0,A,a,A \rangle \rangle$} \\
$C^4_{min}$ & \multicolumn{4}{l}{ $\langle \langle 0,1,A,a,A \rangle, \langle 1,2,A,a,A \rangle, \langle 2,0,A,a,A \rangle, \langle 2,3,A,b,B \rangle \rangle$} \\
\\
\hline
& $k$-edge emb. & $k+1$-edge emb. & extension & minimal \\
\hline
$G_1$ : \\
$m_{11}$ & $\langle i,ii,iii \rangle$ & $\langle i,ii,iii, iv \rangle$ & $\langle 1,3,A,b,B \rangle$ & no\\
$m_{12}$ & $\langle i,iii,ii \rangle$ & $\langle i,iii,ii, iv \rangle$ & $\langle 2,3,A,b,B \rangle$ & yes\\
$m_{13}$ & $\langle ii,i,iii \rangle$ & $\langle ii,i,iii, iv \rangle$ & $\langle 0,3,A,b,B \rangle$ & no\\
$m_{14}$ & $\langle ii,iii,i \rangle$ & $\langle ii,iii,i, iv \rangle$ & $\langle 0,3,A,b,B \rangle$ & no\\
$m_{15}$ & $\langle iii,i,ii \rangle$ & $\langle iii,i,ii, iv \rangle$ & $\langle 2,3,A,b,B \rangle$ & yes\\
$m_{16}$ & $\langle iii,ii,i \rangle$ & $\langle iii,ii,i, iv \rangle$ & $\langle 1,3,A,b,B \rangle$ & no\\
\hline
$G_2$ : \\
$m_{21}$ & $\langle i,ii,iii \rangle$ & $\langle i,ii,iii, iv \rangle$ & $\langle 2,3,A,b,B \rangle$ & yes\\
$m_{22}$ & $\langle i,iii,ii \rangle$ & $\langle i,iii,ii, iv \rangle$ & $\langle 1,3,A,b,B \rangle$ & no\\
$m_{23}$ & $\langle ii,i,iii \rangle$ & $\langle ii,i,iii, iv \rangle$ & $\langle 2,3,A,b,B \rangle$ & yes\\
$m_{24}$ & $\langle ii,iii,i \rangle$ & $\langle ii,iii,i, iv \rangle$ & $\langle 1,3,A,b,B \rangle$ & no\\
$m_{25}$ & $\langle iii,i,ii \rangle$ & $\langle iii,i,ii, iv \rangle$ & $\langle 0,3,A,b,B \rangle$ & no\\
$m_{26}$ & $\langle iii,ii,i \rangle$ & $\langle iii,ii,i, iv \rangle$ & $\langle 0,3,A,b,B \rangle$ & no\\
\hline
\end{tabular}
\end{center}
\end{table}
\section*{Dicke simulators with emergent collective quantum computational abilities\\ supporting material}
\subsection*{Pietro Rotondo, Marco Cosentino Lagomarsino, and Giovanni Viola }
In this material, we give more details on the derivations of the results presented in the main text.
\section{Derivation of Equation (3)}
We begin with the derivation of Eq. (3) of the main text. We consider the partition function $Z=\mathrm{Tr}\,e^{-\beta H}$ with $H$ given in Eq. (2) for $\Delta = 0$. In this fully commuting limit we can evaluate the partition function straightforwardly.
We introduce a new set of bosonic operators:
\begin{equation}
b^{\dagger}_k = a^{\dagger}_k + \frac{\Omega}{\omega\sqrt N}\sum_{i=1}^N g_{ik} \sigma^x_i\,, \qquad b_k = a_k + \frac{\Omega}{\omega\sqrt N}\sum_{i=1}^N g_{ik} \sigma^x_i \,.
\end{equation}
with $[b_{k'},b^{\dagger}_k]=\delta_{k,k'}$. By means of these, $H$ can be written as the sum of two commuting operators:
\begin{equation}
H = \omega \sum_{k=1}^M b^{\dagger}_k b^{\phantom{\dagger}}_k - \frac{\Omega^2}{N \omega} \sum_{i,j=1}^N \sum_{k=1}^M g_{ik}g_{jk} \sigma^x_i \sigma^x_j
\end{equation}
As a byproduct we obtain the factorization of the full partition function \cite{Rotondo:PRB:15s}:$
Z = Z_B \, Z_{H}
$, where $Z_B$ is an overall free boson partition function that we can safely ignore in the thermodynamic limit. On the other hand
\begin{equation}\label{eq:sup:ZF}
Z_H=\mathrm{Tr}_{\sigma}\exp\left(\beta\sum_{i,j}J_{ij} \sigma_i \sigma_j \right)
\end{equation}
is an Ising contribution with both spin and mode dependent couplings $J_{ij} $ of the form given in Eq. (3) of the main text.
In Eq.\eqref{eq:sup:ZF} $\mathrm{Tr}_{\sigma}$ indicates the trace on the spins only.
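As a small illustration of these couplings, the following Python sketch (the parameter values are hypothetical and chosen only for the example) builds the matrix $J_{ij}=\frac{\Omega^2}{N\omega}\sum_{k=1}^M g_{ik}g_{jk}$ entering Eq.~\eqref{eq:sup:ZF} for random binary patterns $g_{ik}=\pm 1$:
\begin{verbatim}
import numpy as np

def coupling_matrix(g, omega_rabi, omega):
    """Spin-spin couplings J_ij = (Omega^2/(N*omega)) * sum_k g_ik g_jk,
    as read off from the effective Ising Hamiltonian above."""
    n_spins = g.shape[0]
    return (omega_rabi**2 / (n_spins * omega)) * (g @ g.T)

rng = np.random.default_rng(0)
N, M = 200, 3                              # spins and modes (hypothetical)
g = rng.choice([-1.0, 1.0], size=(N, M))   # random binary mode couplings
J = coupling_matrix(g, omega_rabi=1.0, omega=1.0)
print(J.shape, J[0, :5])
\end{verbatim}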
\section{Derivation of Equation (5)}
In this section we report the derivation of Eq. (5) of the main text, which essentially follows the derivation of Wang and Hioe~\cite{Wang:PRA:73s}, proved to be rigorous by Hepp and Lieb \cite{Lieb:PRA:73s}.
The authors of Ref.~\cite{Wang:PRA:73s} have shown explicitly that, in the thermodynamic limit, the convenient way to calculate the trace over the Hilbert space of bosons in the partition function is to evaluate it on the set of coherent states $\vert \{\alpha\} \rangle$.
The photonic matrix element in the partition function of Eq. (4) equals in the thermodynamic limit ($\alpha_k = x_k + i y_k$):
\begin{equation}
\bra{\{\alpha\}} e^{-\beta H} \ket{\{\alpha\}} \simeq \exp\left(-\beta \omega \sum_{k=1}^M (x_k^2+y_k^2) - \beta \Delta \sum_{i=1}^N \sigma^z_i - \beta\frac{\Omega}{\sqrt N}\sum_{i=1}^N \sum_{k=1}^M g_{ik} x_k \sigma^x_i \right)\,.
\end{equation}
The atomic trace thus factorizes and it can be calculated:
\begin{equation}
Z = \int \prod_{k=1}^M \,\frac{dx_k\,dy_k}{\pi} \, e^{-\beta \omega \sum_{k=1}^M (x_k^2+y_k^2)} \prod_{i=1}^N \cosh \left(\beta \sqrt{\Delta^2 + \frac{\Omega^2}{N} \left(\sum_{k=1}^M g_{ik}x_k\right)^2}\right) = \int \prod_{k=1}^M \,\frac{dm_k}{\pi} e^{-N f(\mathbf{m},\beta)}\,.
\end{equation}
In the last term we introduced the vectorial notation defined in the main text. The final expression for the free energy is:
\begin{equation}
f(\mathbf{m},\beta) = \beta \mathbf{m} \cdot \mathbf{m} -
\frac{1}{N} \sum_{i=1}^N \log \cosh\left(\beta \sqrt{\Delta^2 + \frac{\Omega^2}{N} \left(\mathbf{g}_i \cdot \mathbf m\right)^2}\right)\,,
\label{freeenergy}
\end{equation}
By minimizing the free energy above and using the self-averaging property of Eq. (\ref{freeenergy}), we obtain the exact mean-field equations:
\begin{equation}
\mathbf{m} = \frac{\Omega^2}{2} \Braket{\frac{(\mathbf{g}\cdot\mathbf{m})\, \mathbf{g}}{\mu(\mathbf{g})}\tanh \left(\beta \mu(\mathbf{g})\right)}_{\mathbf{g}}
\label{meanfield}
\end{equation}
To locate the critical point it suffices to expand Eqs.~(\ref{freeenergy}) and (\ref{meanfield}) in a Taylor series:
\begin{align}
f(\mathbf{m})- f(\mathbf{0}) &= \beta\left(1- \frac{\Omega^2}{2\Delta} \tanh (\beta \Delta)\right) \mathbf{m} \cdot \mathbf{m} + O\left(m_k^4\right)\,,\qquad
m_k = \frac{\Omega^2}{2\Delta} \tanh (\beta \Delta)\,m_k + O(m_k^3)\,.
\label{saddle_tay}
\end{align}
As in the conventional Dicke model, a temperature-independent
threshold $\Omega_c^2 = 2 \Delta$ emerges. For $\Omega < \Omega_c$,
the phase transition is inhibited at all temperatures. Whenever the
magnitude of the coupling exceeds this threshold value, the
critical temperature is located at $T_c = \Delta/\arctanh
\left(2\Delta/\Omega^2\right)$.
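For a quick numerical check of these expressions (the parameter values below are arbitrary and only illustrative), the threshold and the critical temperature can be evaluated as follows:
\begin{verbatim}
import numpy as np

def critical_temperature(delta, omega_rabi):
    """T_c = Delta / arctanh(2*Delta/Omega^2); defined above threshold."""
    if omega_rabi**2 <= 2.0 * delta:      # Omega_c^2 = 2*Delta
        return None                       # transition inhibited at all T
    return delta / np.arctanh(2.0 * delta / omega_rabi**2)

print(critical_temperature(delta=1.0, omega_rabi=1.0))  # below threshold
print(critical_temperature(delta=1.0, omega_rabi=2.0))  # 1/arctanh(0.5)
\end{verbatim}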
Solutions to Eq. (\ref{meanfield}) can be classified according to
the number of non-zero components $n$ of the order parameter
$\mathbf{m}$ \cite{Amit:PRA:85s}:
\begin{equation}
\mathbf{m}_n = m^{(n)}(\underbrace{1,1,\cdots,1}_{n \,\,times},\underbrace{0,0,\cdots,0}_{M-n \,\,times})\,,
\end{equation}
where all permutations are also possible. In particular, solutions for
$n=1$ are the ones with the lowest free energy. There are $2M$ such
solutions, corresponding to the $\mathbb{Z}_2 \times S_M$ symmetry
breaking of our multimode Dicke model.
For completeness we report the explicit expression for the Hessian matrix of the free energy, omitted in the main text for space limitations:
\begin{align}&
\frac{\partial^2{f}}{\partial{m_k} \partial{m_l}} =
2 \delta_{kl} - \Omega^2 \Braket{\frac{g_k g_l}{\mu(\mathbf{g})}\tanh \left(\beta\mu(\mathbf{g})\right)}_{\mathbf{g}}
+\Omega^4 \Braket{\frac{(\mathbf{g} \cdot \mathbf{m})^2\,g_k g_l}{\mu(\mathbf{g})^2}\left[\frac{\tanh \left(\beta\mu(\mathbf{g})\right)}{\mu(\mathbf{g})}+\beta \mathrm{sech}^2\left(\beta\mu(\mathbf{g})\right)\right]}_{\mathbf{g}}\,.
\end{align}
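As a numerical cross-check of this expression (a sketch only; we assume uncorrelated binary patterns $g_{ik}=\pm 1$ and units with $k_B=1$), the smallest eigenvalue of the Hessian evaluated at $\mathbf{m}=\mathbf{0}$ changes sign at the critical temperature found above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
delta, omega_rabi, M, n_samples = 1.0, 2.0, 3, 10**5

g = rng.choice([-1.0, 1.0], size=(n_samples, M))  # pattern average <...>_g
gg = g.T @ g / n_samples                          # <g_k g_l>, close to identity

def min_hessian_eig(T):
    """Hessian at m=0: 2*delta_kl - Omega^2*tanh(Delta/T)/Delta*<g_k g_l>."""
    H = 2.0 * np.eye(M) - omega_rabi**2 * np.tanh(delta / T) / delta * gg
    return np.linalg.eigvalsh(H)[0]

Tc = delta / np.arctanh(2.0 * delta / omega_rabi**2)
print(min_hessian_eig(1.1 * Tc), min_hessian_eig(0.9 * Tc))  # + above, - below
\end{verbatim}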
\section{Introduction}
Random cell movement plays a very important role in embryonic morphogenesis,
wound healing, cancer cell proliferation, and many other physiological and
pathological processes \cite{R}. The microscopic theory of the migration of
cells and bacteria towards a favorable environment (chemotaxis) is based on
random walk models \cite{EO,Hillen,Alt,OS1}. The \textquotedblleft
velocity-jump\textquotedblright\ models concern self-propelled motion
involving runs and tumbles, while \textquotedblleft
space-jump\textquotedblright\ models deal with cells making jumps in
space. Much of the literature on theoretical studies of cell motility
has been concerned with Markov random walk models (see, for example,
\cite{Baker1,EO}). However, the analysis of the random movement of wild-type
and mutated epithelial cells shows anomalous dynamics of cell migration
\cite{D} (see also \cite{M}). Over the past few years there have been several
attempts to model non-Markovian anomalous cell transport involving
subdiffusion and superdiffusion \cite{D,Fed0,Fed00,Fed1,Fed2,HL}. In this
paper we shall deal with a non-Markovian \textquotedblleft
space-jump\textquotedblright\ model that describes subdiffusive transport of
cells which is non-homogeneous in space.
\subsection{Markov random walk model.}
First let us consider a Markov model for random cell movement along a
one-dimensional lattice such that all steps are of equal length $1$. We
define the probability
\begin{equation}
p(k,t)=\Pr \left\{ X(t)=k\right\}
\end{equation}
that the position of the cell $X(t)$ is at point $k\in \mathbb{Z}$ at time
$t$. We introduce at each point $k$ the rate of jump to the left $\mu (k)$
and the rate of jump to the right $\lambda (k)$. This random walk is called a
generalized birth-death process \cite{Cox}. The master equation for $p(k,t)$
can be written as
\begin{equation}
\frac{\partial p(k,t)}{\partial t}=\lambda (k-1)p(k-1,t)+\mu
(k+1)p(k+1,t)-\left( \lambda (k)+\mu (k)\right) p(k,t). \label{Master11}
\end{equation}
This model corresponds to the case when intervals between jumps at point $k$
are exponentially distributed with parameter $\lambda (k)+\mu (k)$. When the
cell makes a jump from position $k$, it jumps to the right with
probability $\lambda (k)/(\lambda (k)+\mu (k))$ and to the left with
probability $\mu (k)/(\lambda (k)+\mu (k))$ \cite{Cox}.
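This jump rule translates directly into a simulation algorithm. A minimal Python sketch (the constant rate functions below are hypothetical choices for illustration) draws exponential waiting times and jump directions exactly as described:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate_markov_walk(k0, lam, mu, t_max):
    """Birth-death walk: exponential waiting time with rate lam(k)+mu(k),
    then a jump right with probability lam(k)/(lam(k)+mu(k)), else left."""
    k, t = k0, 0.0
    while True:
        total = lam(k) + mu(k)
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return k
        k += 1 if rng.random() < lam(k) / total else -1

# hypothetical rates: constant jump rates with a slight leftward bias
final = simulate_markov_walk(k0=0, lam=lambda k: 0.9, mu=lambda k: 1.0,
                             t_max=100.0)
print(final)
\end{verbatim}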
The dependence of $\mu (k)$ and $\lambda (k)$ on space can be introduced in
different ways depending on how cells sense the surrounding environment. For
the local chemotaxis models, the rates $\lambda (S(k))$\ and $\mu (S(k))$
are functions of the local concentration of the chemotactic substance
$S(k)$. There exist several non-local and barrier models that are different
in terms of the dependence of rate functions on the chemotactic substance
\cite{Baker1,OS1}. For example, the rates $\mu (k)$ and $\lambda (k)$ can
depend on the concentration of the chemotactic substance at neighbouring
positions $k-1$ and $k+1$ as in (\ref{nonlocal}). In the continuous limit,
the master equation (\ref{Master11}) can be reduced to the classical
advection-diffusion equation in which the cell flux involves the standard
diffusion term and the advection term due to chemotaxis.
If we consider only positive values of $k,$ we need to implement boundary
conditions at the point $k=1$. Here we assume that if the cell hits the wall
on the boundary, it is reflected with probability $1-\chi $ and absorbed by
the wall with probability $\chi $. Then one can write $p(1,t+\Delta
t)=(1-\lambda (1)\Delta t-\mu (1)\Delta t)p(1,t)+$ $\mu (1)(1-\chi
)p(1,t)\Delta t+\mu (2)p(2,t)\Delta t+o(\Delta t).$ In the limit $\Delta
t\rightarrow 0$ we obtain
\begin{equation}
\frac{\partial p(1,t)}{\partial t}=-\chi \mu (1)p(1,t)+\mu (2)p(2,t)-\lambda
(1)p(1,t), \label{bound}
\end{equation}
where $0\leq \chi \leq 1.$
A non-uniform stationary solution of the master equation (\ref{Master11})
can be interpreted as a cell aggregation phenomenon \cite{OS1}. In
particular, if there is no absorption on the boundary ($\chi =0$), the
stationary solution $p_{st}(k)$ can be easily found from (\ref{Master11})
and (\ref{bound}). We obtain
\begin{equation}
p_{st}(k)=p_{st}(1)\prod\limits_{i=1}^{k-1}\frac{\lambda (i)}{\mu (i+1)},\qquad k>1, \label{ma}
\end{equation}
where
\begin{equation*}
p_{st}(1)=\left( 1+\sum\limits_{k=2}^{\infty }\prod\limits_{i=1}^{k-1}\frac{\lambda (i)}{\mu (i+1)}\right) ^{-1}
\end{equation*}
provided the series is convergent.
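A short numerical sketch evaluates this stationary distribution directly from (\ref{ma}); the rate functions below are hypothetical, and the infinite series is truncated at a finite $k_{max}$:
\begin{verbatim}
import numpy as np

def stationary_distribution(lam, mu, k_max):
    """p_st(k) = p_st(1)*prod_{i=1}^{k-1} lam(i)/mu(i+1), truncated."""
    weights = np.ones(k_max)
    for k in range(2, k_max + 1):
        weights[k - 1] = weights[k - 2] * lam(k - 1) / mu(k)
    return weights / weights.sum()

# hypothetical rates with a leftward drift so that the series converges
p_st = stationary_distribution(lam=lambda k: 0.8, mu=lambda k: 1.0,
                               k_max=50)
print(p_st[:5])   # geometric-like decay away from the wall
\end{verbatim}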
\subsection{Anomalous random walks}
It is tempting to generalize the master equation (\ref{Master11}) for the
anomalous case by replacing the time derivative with the Caputo derivative
\cite{Klages,Mee,Met2}
\begin{equation}
\frac{\partial ^{\nu }p\left( k,t\right) }{\partial t^{\nu }}=\frac{1}{\Gamma (1-\nu )}\int_{0}^{t}\frac{\partial p\left( k,u\right) }{\partial u}\frac{du}{(t-u)^{1-\nu }} \label{Caputo}
\end{equation}
as it is done in \cite{FBD} for a fractional linear birth--death process.
Here $\nu $ is the anomalous exponent: $0<\nu <1.$ Although this
generalization is very attractive from a mathematical point of view, it is
not appropriate for a non-homogeneous medium for which the exponent $\nu $
depends on $k$. The non-homogeneous fractional equation for $p(k,t)$ can be
written as
\begin{equation}
\frac{\partial p(k,t)}{\partial t}=a(k-1)\mathcal{D}_{t}^{1-\nu (k-1)}p(k-1,t)+b(k+1)\mathcal{D}_{t}^{1-\nu (k+1)}p(k+1,t)-(a(k)+b(k))\mathcal{D}_{t}^{1-\nu (k)}p(k,t), \label{dis1}
\end{equation}
where $\mathcal{D}_{t}^{1-\nu (k)}$ is the Riemann-Liouville fractional
derivative with varying order
\begin{equation}
\mathcal{D}_{t}^{1-\nu (k)}p\left( k,t\right) =\frac{1}{\Gamma (\nu (k))}\frac{\partial }{\partial t}\int_{0}^{t}\frac{p\left( k,u\right) du}{(t-u)^{1-\nu (k)}}. \label{RL}
\end{equation}
Here $\nu (k)$ is the anomalous exponent corresponding to the site $k$ and
the anomalous rate coefficients $a(k)$ and $b(k)$ have to be determined, see
(\ref{ab}). The crucial point here is that the anomalous exponent $\nu (k)$
depends on the site $k$. The fractional equation (\ref{dis1}) cannot be
rewritten in terms of the Caputo derivative (\ref{Caputo}). It turns out that
even small non-homogeneous variations of the exponent $\nu $ lead to a
drastic change of $p(k,t)$ in the limit $t\rightarrow \infty $ \cite{Fed3}.
It means that the subdiffusive fractional equations with constant
anomalous exponent $\nu $ are \textit{not structurally stable}. If, for
example, the point $k=M$ has the property that $\nu (M)<\nu (k)$ for all
$k\neq M$ and $\chi =0,$ one can find that
\begin{equation}
p(k,t)\rightarrow 0\quad (k\neq M),\qquad p(M,t)\rightarrow 1,\qquad 1\leq k\leq N
\end{equation}
as $t\rightarrow \infty .$ This result has been interpreted as anomalous
aggregation of cells at the point $k=M$ \cite{Fed1}. In this paper we shall
find the conditions for anomalous aggregation for the semi-infinite interval
$1\leq k<\infty .$ It should also be noted that non-homogeneous variations
of the exponent $\nu $ destroy the Gibbs-Boltzmann distribution as a long
time limit of the fractional Fokker-Planck equation \cite{Fed3}. Of course,
for a constant value of $\nu ,$ the formulations in terms of Caputo and
Riemann-Liouville operators are equivalent, as long as proper care is taken
of the initial values \cite{Klages,Mee,Met2}.
\subsection{Anomalous diffusion with reaction.}
Another extension of traditional Markov random walk models is the
non-Markovian theory of anomalous transport with reaction dynamics
\cite{Fed000,MFH,Nep,Sokolov,Sh,Vo}. In particular, this theory has been used
for the analysis of the proliferation and migration dichotomy of cancer cells
\cite{Fed0,Fed00,Fed2}. In this paper we consider the inhibition of cell
growth by anticancer therapeutic agents. To model this inhibition we
introduce a random death process with a non-uniform death rate parameter. We
assume that during the time interval $\left( t,t+\Delta t\right) $ at point
$k$ each cell has a chance $\theta (k)\Delta t+o(\Delta t)$ of dying, where
$\theta (k)$ is the death rate \cite{Iomin}. It is easy to take into account
this process for Markov models. We just add the term $-\theta (k)p(k,t)$ to
the right hand side of the master equation (\ref{Master11}). On the
contrary, the anomalous master equation involves a non-trivial combination
of transport and death kinetic terms because of memory effects
\cite{HLW,MFH,Abad}. In this paper we shall derive the following fractional
equation
\begin{eqnarray}
\frac{\partial p(k,t)}{\partial t} &=&a(k-1)e^{-\theta (k-1)t}\mathcal{D}_{t}^{1-\nu (k-1)}\left[ p(k-1,t)e^{\theta (k-1)t}\right] \notag \\
&&+b(k+1)e^{-\theta (k+1)t}\mathcal{D}_{t}^{1-\nu (k+1)}\left[
p(k+1,t)e^{\theta (k+1)t}\right] \notag \\
&&-\left( a(k)+b(k)\right) e^{-\theta (k)t}\mathcal{D}_{t}^{1-\nu (k)}\left[
p(k,t)e^{\theta (k)t}\right] -\theta (k)p(k,t). \label{master30}
\end{eqnarray}
\subsection{Mean field master equation for the density of cells.}
Instead of the probability $p(k,t)$ for an individual cell one can consider
the mean density of cells $\rho (x,t)$ as a function of space $x$ and time
$t$. The master equation (\ref{Master11}) can be rewritten as the equation
for the density $\rho (x,t)$ by changing the variables as $k\rightarrow x$
and $k\pm 1\rightarrow x\pm l$:
\begin{equation}
\frac{\partial \rho (x,t)}{\partial t}=\lambda (x-l)\rho (x-l,t)+\mu
(x+l)\rho (x+l,t)-(\lambda (x)+\mu (x))\rho (x,t)-\theta (x)\rho (x,t),
\label{Master10}
\end{equation}
where $l$ is the jump size, $\theta (x)$ is the death rate. The advantage of
this equation is that one can easily take into account various non-linear
effects by assuming the dependence of the rate functions $\lambda (x),\mu
(x) $ and $\theta (x)$ on the average density $\rho (x,t).$
In the anomalous subdiffusive case, the master equation for mean field $\rho
(x,t)$ can be obtained from (\ref{master30}). It can be written as a mass
balance equation
\begin{equation}
\frac{\partial \rho (x,t)}{\partial t}=-I(x,t)+I(x-l,t)-\theta (x)\rho (x,t),
\label{balance}
\end{equation}
where $I(x,t)$ is the total flow of cells from the point $x$ to $x+l$
\begin{equation}
I(x,t)=a(x)e^{-\theta (x)t}\mathcal{D}_{t}^{1-\nu (x)}\left[ e^{\theta
(x)t}\rho (x,t)\right] -b(x+l)e^{-\theta (x+l)t}\mathcal{D}_{t}^{1-\nu (x+l)}\left[ e^{\theta (x+l)t}\rho (x+l,t)\right] \label{flux}
\end{equation}
and $I(x-l,t)$ is the total flow of cells from the point $x-l$ to $x$
\begin{equation}
I(x-l,t)=a(x-l)e^{-\theta (x-l)t}\mathcal{D}_{t}^{1-\nu (x-l)}\left[
e^{\theta (x-l)t}\rho (x-l,t)\right] -b(x)e^{-\theta (x)t}\mathcal{D}_{t}^{1-\nu (x)}\left[ e^{\theta (x)t}\rho (x,t)\right] .
\end{equation}
Here $a(x)$ and $b(x)$ are the anomalous rate functions, see (\ref{ab}). One
can see that the flow of cells $I(x,t)$ depends on the death rate $\theta
(x) $. It means that in the anomalous case one cannot separate the transport
of cells from the death process \cite{HLW}. This phenomenon does not exist
in the Markovian case. For the Markov model (\ref{Master10}) the flux
$I(x,t)$ is independent of $\theta (x)$:
\begin{equation}
I(x,t)=\lambda (x)\rho (x,t)-\mu (x+l)\rho (x+l,t). \notag
\end{equation}
When the density $\rho (x,t)$ is conserved ($\theta =0$), the master
equation (\ref{balance}) can be approximated by the fractional Fokker-Planck
equation \cite{Klages,Met1,Met2}
\begin{equation}
\frac{\partial \rho (x,t)}{\partial t}=-\frac{\partial }{\partial x}\left[
l(a(x)-b(x))\mathcal{D}_{t}^{1-\nu (x)}\rho (x,t)\right] +\frac{\partial ^{2}}{\partial x^{2}}\left[ \frac{l^{2}}{2}(a(x)+b(x))\mathcal{D}_{t}^{1-\nu (x)}\rho (x,t)\right] . \label{FFP}
\end{equation}
This is an example of the fractional equation with varying anomalous
exponent \cite{Ch}. Note that $a(x)-b(x)\sim l$ as $l\rightarrow 0$, see
(\ref{dif2}).
The purpose of the next section is to set up a non-Markovian discrete-space
random walk model describing cell motility involving memory effects, the
death process and subdiffusive transport.
\section{Non-Markovian discrete-space random walk model}
\subsection{ Random cell motility}
There exist numerous mechanisms that facilitate random cell movement
\cite{R}. In this paper we adopt the following random model of cell motility. When
the cell makes a jump to position $k$, the time the cell spends here before
it makes a jump to point $k-1$ or $k+1$ is random. It is called the
residence time or waiting (holding) time. We define the residence time at
position $k$ as
\begin{equation}
T_{k}=\min \left( T_{k}^{\mu },T_{k}^{\lambda }\right) , \label{min}
\end{equation}
where $T_{k}^{\mu }$ and $T_{k}^{\lambda }$ are the independent random times
of jump to the left and right respectively. The idea here is that there
exist internal cellular signals involving two \textquotedblleft
hidden\textquotedblright\ independent random alarm clocks. If one of the
clocks goes off first, say $T_{k}^{\lambda }<T_{k}^{\mu }$, the cell moves to
the right to the point $k+1$. The other clock \textquotedblleft
tells\textquotedblright\ the cell to move left to the point $k-1$ if it goes
off first ($T_{k}^{\mu }<T_{k}^{\lambda }$). Note that the migration of cells is a highly
complicated dynamic process which is regulated by both intercellular signals
and the surrounding environment. Since we do not know the exact mechanism of
cell motility we use a stochastic approach involving two random times
T_{k}^{\mu }$ and $T_{k}^{\lambda }$ for jumping to the left and right. Note
that if the random times $T_{k}^{\mu }$ and $T_{k}^{\lambda }$ are
exponentially distributed with the rates $\mu (k)$ and $\lambda (k)$
respectively, we have a classical Markov model with the master equation (\re
{Master11}). If the random variables $T_{k}^{\mu }$ and $T_{k}^{\lambda }$
are not exponentially distributed, the standard Markov approach does not
work. In this section we consider the non-Markovian case when $T_{k}^{\mu }$
and $T_{k}^{\lambda }$ are independent positive random variables with
general survival functions
\begin{equation}
\Psi _{\mu }(k,\tau )=\Pr \left\{ T_{k}^{\mu }>\tau \right\} ,\qquad \Psi
_{\lambda }(k,\tau )=\Pr \left\{ T_{k}^{\lambda }>\tau \right\} .
\label{sur55}
\end{equation}
The Markov model (\ref{Master11}) corresponds to the following choice
\begin{equation}
\Psi _{\mu }(k,\tau )=e^{-\mu (k)\tau },\qquad \Psi _{\lambda }(k,\tau
)=e^{-\lambda (k)\tau }.
\end{equation}
It is convenient to introduce the rate of escape (hazard function) $\gamma
(k,\tau )$ from the point $k$ as
\begin{equation}
\gamma (k,\tau )=\lim_{h\rightarrow 0}\frac{\Pr \left\{ \tau <T_{k}<\tau
+h\;|_{T_{k}>\tau }\right\} }{h}. \label{tran}
\end{equation}
If we denote the survival function at the point $k$ as
\begin{equation*}
\Psi (k,\tau )=\Pr \left\{ T_{k}>\tau \right\}
\end{equation*}
and the residence time probability density function as
\begin{equation*}
\psi (k,\tau )=-\frac{\partial \Psi (k,\tau )}{\partial \tau },
\end{equation*}
then \cite{Cox}
\begin{equation}
\gamma (k,\tau )=\frac{\psi (k,\tau )}{\Psi (k,\tau )}. \label{def}
\end{equation}
Now we determine this rate function in terms of statistical characteristics
of random residence times $T_{k}^{\mu }$ and $T_{k}^{\lambda }.$ It follows
from the definition of the residence time $T_{k}$ at position $k$
(\ref{min}) that the survival function $\Psi (k,\tau )$ can be written as a product
\begin{equation*}
\Psi (k,\tau )=\Psi _{\lambda }(k,\tau )\Psi _{\mu }(k,\tau ),
\end{equation*}
where $\Psi _{\lambda }(k,\tau )$ and $\Psi _{\mu }(k,\tau )$ are defined by
(\ref{sur55}). Differentiation of this equation with respect to $\tau $ gives
\begin{equation}
\psi (k,\tau )=\psi _{\lambda }(k,\tau )+\psi _{\mu }(k,\tau ), \label{wtd}
\end{equation}
where the transition densities $\psi _{\lambda }(k,\tau
) $ and $\psi _{\mu }(k,\tau )$ are defined as
\begin{equation}
\psi _{\lambda }(k,\tau )=-\frac{\partial \Psi _{\lambda }(k,\tau )}{\partial \tau }\Psi _{\mu }(k,\tau ),\qquad \psi _{\mu }(k,\tau )=-\frac{\partial \Psi _{\mu }(k,\tau )}{\partial \tau }\Psi _{\lambda }(k,\tau ).
\end{equation}
The formula (\ref{wtd}) is a particular case of the general expression for
the residence time PDF in terms of the transition densities (see
formula (5) in the classical paper by van Kampen \cite{Kampen}). These
transition densities have a clear probabilistic meaning. For example, $\psi
_{\mu }(k,\tau )\Delta \tau $ is the probability that the cell's jump to the
left occurs in the time interval $(\tau ,\tau +\Delta \tau )$ since the cell
arrived at point $k$. If we divide both sides of (\ref{wtd}) by the survival
function $\Psi (k,\tau )$ and use the formula (\ref{def}), we obtain
\begin{equation}
\gamma (k,\tau )=\lambda (k,\tau )+\mu (k,\tau ), \label{rate}
\end{equation}
where the rate of jump to the right $\lambda (k,\tau )$ and the rate of jump
to the left $\mu (k,\tau )$ are defined as
\begin{equation}
\lambda (k,\tau )=\frac{\psi _{\lambda }(k,\tau )}{\Psi (k,\tau )},\qquad
\mu (k,\tau )=\frac{\psi _{\mu }(k,\tau )}{\Psi (k,\tau )}. \label{sur5}
\end{equation}
Note that the transition rates $\lambda (k,\tau )$ and $\mu (k,\tau )$ can
be introduced from the very beginning as it is done in \cite{Kampen}. By
using (\ref{def}) and (\ref{rate}), we write the survival function $\Psi
(k,\tau )$ as
\begin{equation}
\Psi (k,\tau )=e^{-\int_{0}^{\tau }\lambda (k,\tau )d\tau -\int_{0}^{\tau
}\mu (k,\tau )d\tau }. \label{s3}
\end{equation}
The residence time probability density function $\psi (k,\tau )$ takes the form
\begin{equation*}
\psi (k,\tau )=(\lambda (k,\tau )+\mu (k,\tau ))e^{-\int_{0}^{\tau }\lambda
(k,\tau )d\tau -\int_{0}^{\tau }\mu (k,\tau )d\tau }.
\end{equation*}
For the Markov case for which $\lambda (k)$ and $\mu (k)$ are independent of
the residence time variable $\tau ,$ we obtain from (\ref{s3}) the standard
exponential survival function
\begin{equation*}
\Psi (k,\tau )=e^{-\lambda (k)\tau -\mu (k)\tau }
\end{equation*}
corresponding to the Markov master equation (\ref{Master11}).
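The two-clock mechanism is easy to check numerically. The following Python sketch (with hypothetical exponential clocks; the factorization itself holds for any independent clocks) verifies by Monte-Carlo sampling that the survival function of $T_{k}=\min (T_{k}^{\mu },T_{k}^{\lambda })$ factorizes as $\Psi =\Psi _{\lambda }\Psi _{\mu }$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam, mu, tau = 0.7, 1.3, 0.5              # hypothetical rates, test time
n = 10**6

t_right = rng.exponential(1.0 / lam, n)   # T^lambda: right-jump clock
t_left = rng.exponential(1.0 / mu, n)     # T^mu: left-jump clock
t_res = np.minimum(t_right, t_left)       # residence time T = min(...)

empirical = (t_res > tau).mean()
product = np.exp(-lam * tau) * np.exp(-mu * tau)  # Psi_lambda * Psi_mu
print(empirical, product)                 # the two values agree closely
\end{verbatim}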
\subsection{ Structured probability density function}
If the residence time probability density function $\psi $ is not
exponential, the random walk is non-Markovian. The standard method to deal
with non-Markovian stochastic processes is to add auxiliary variables to the
definition of the random walk to make the process Markovian \cite{Cox}. Here
we introduce the structured probability density function $\xi (k,t,\tau )$
involving residence time $\tau $ as auxiliary variable. The structural
density gives the probability that the cell position $X(t)$ at time $t$ is
at the point $k$ and its residence time $T_{k}$ at point $k$ is in the
interval $(\tau ,\tau +d\tau ).$ This is a standard way to deal with
non-Markovian random walks \cite{Cox,MFH}. Suppose that cells die at random
at rate $\theta (k)$ that depends on $k.$ The density $\xi (k,t,\tau )$
obeys the balance equation
\begin{equation}
\frac{\partial \xi }{\partial t}+\frac{\partial \xi }{\partial \tau }=-\lambda (k,\tau )\xi -\mu (k,\tau )\xi -\theta (k)\xi . \label{basic}
\end{equation}
We consider only the case when the residence time of the random walker at $t=0$
is equal to zero, so the initial condition is
\begin{equation}
\xi (k,0,\tau )=p_{0}(k)\delta (\tau ), \label{initial}
\end{equation}
where $p_{0}(k)=$ $\Pr \left\{ X(0)=k\right\} $. The boundary condition in
terms of residence time variable ($\tau =0)$ can be written as \cite{Cox}
\begin{equation}
\xi (k,t,0)=\int_{0}^{t}\lambda (k-1,\tau )\xi (k-1,t,\tau )d\tau
+\int_{0}^{t}\mu (k+1,\tau )\xi (k+1,t,\tau )d\tau . \label{arr}
\end{equation}
In what follows we consider only positive values of $k$. In this case, we
have to specify the boundary condition for $k=1$. We write
\begin{equation}
\xi (1,t,0)=(1-\chi )\int_{0}^{t}\mu (1,\tau )\xi (1,t,\tau )d\tau
+\int_{0}^{t}\mu (2,\tau )\xi (2,t,\tau )d\tau . \label{k=0}
\end{equation}
This equation tells us that when cells escape from the point $k=1$ and move
to the left with the rate $\mu (1,\tau )$, they are absorbed by the wall
with probability $\chi $, and reflected back to the position $k=1$ with the
probability $1-\chi .$ Note that this boundary condition can be written in
many different ways, for example, the cells can be reflected to state $k=2$.
One can also introduce a residence time PDF for a wall such that the
reflection is not instantaneous.
We solve (\ref{basic}) by the method of characteristics:
\begin{equation}
\xi (k,t,\tau )=\xi (k,t-\tau ,0)e^{-\int_{0}^{\tau }\lambda (k,\tau )d\tau
-\int_{0}^{\tau }\mu (k,\tau )d\tau -\theta (k)\tau },\quad \tau <t,\quad
k\geq 1. \label{s1}
\end{equation}
The structural density $\xi $ can be rewritten in terms of the survival
function $\Psi (k,\tau )$ (\ref{s3}) and the integral arrival rate
\begin{equation*}
j(k,t)=\xi (k,t,0)
\end{equation*}
as
\begin{equation}
\xi (k,t,\tau )=j\left( k,t-\tau \right) \Psi (k,\tau )e^{-\theta (k)\tau
},\quad \tau <t,\quad k\geq 1. \label{je}
\end{equation}
Our purpose now is to derive the master equation for the probability
$p(k,t)=\Pr \left\{ X(t)=k\right\} $:
\begin{equation}
p(k,t)=\int_{0}^{t}\xi (k,t,\tau )d\tau ,\quad k\geq 1. \label{denG}
\end{equation}
Let us introduce the integral escape rate to the right $i_{\lambda }(k,t)$
and the integral escape rate to the left $i_{\mu }(k,t)$ as
\begin{equation}
i_{\lambda }(k,t)=\int_{0}^{t}\lambda (k,\tau )\xi (k,t,\tau )d\tau ,\qquad
i_{\mu }(k,t)=\int_{0}^{t}\mu (k,\tau )\xi (k,t,\tau )d\tau . \label{i1}
\end{equation}
Then the boundary conditions (\ref{arr}) and (\ref{k=0}) can be written in a
very simple form:
\begin{equation}
j(k,t)=i_{\lambda }(k-1,t)+i_{\mu }(k+1,t),\quad k>1 \label{j}
\end{equation}
and
\begin{equation}
j(1,t)=(1-\chi )i_{\mu }(1,t)+i_{\mu }(2,t).
\end{equation}
It follows from (\ref{sur5}), (\ref{initial}), (\ref{je}) and (\ref{i1})
that
\begin{equation}
i_{\lambda }(k,t)=\int_{0}^{t}\psi _{\lambda }(k,\tau )j(k,t-\tau
)e^{-\theta (k)\tau }d\tau +\psi _{\lambda }(k,t)p_{0}(k)e^{-\theta (k)t},
\label{i5}
\end{equation}
\begin{equation}
i_{\mu }(k,t)=\int_{0}^{t}\psi _{\mu }(k,\tau )j(k,t-\tau )e^{-\theta
(k)\tau }d\tau +\psi _{\mu }(k,t)p_{0}(k)e^{-\theta (k)t}. \label{i6}
\end{equation}
Substitution of (\ref{initial}) and (\ref{je}) into (\ref{denG}) gives
\begin{equation}
p(k,t)=\int_{0}^{t}\Psi (k,\tau )j(k,t-\tau )e^{-\theta (k)\tau }d\tau +\Psi
(k,t)p_{0}(k)e^{-\theta (k)t}. \label{p1}
\end{equation}
It is convenient to introduce the integral escape rate $i(k,t)$ as the sum
of the escape rate to the right $i_{\lambda }(k,t)$ and the escape rate to
the left $i_{\mu }(k,t)$
\begin{equation}
i(k,t)=i_{\lambda }(k,t)+i_{\mu }(k,t). \label{bau1}
\end{equation}
The balance equation for $p(k,t)$ can be written as
\begin{equation}
\frac{\partial p(k,t)}{\partial t}=-i(k,t)+j(k,t)-\theta (k)p(k,t),\quad k>1.
\label{balan}
\end{equation}
To obtain a closed equation for $p(k,t)$ we need to express $i(k,t)$ and
$j(k,t)$ in terms of $p(k,t).$ By applying the Laplace transform
$\hat{\psi}(k,s)=\int_{0}^{\infty }\psi (k,\tau )e^{-s\tau }d\tau $ to
(\ref{i5}), (\ref{i6}) and (\ref{p1}), we obtain
\begin{equation*}
\hat{\imath}_{\lambda }(k,s)=\hat{\psi}_{\lambda }(k,s+\theta (k))\left[
\hat{\jmath}(k,s)+p_{0}(k)\right] ,
\end{equation*}
\begin{equation*}
\hat{\imath}_{\mu }(k,s)=\hat{\psi}_{\mu }(k,s+\theta (k))\left[ \hat{\jmath}(k,s)+p_{0}(k)\right]
\end{equation*}
and
\begin{equation*}
\hat{p}(k,s)=\hat{\Psi}(k,s+\theta (k))\left[ \hat{\jmath}(k,s)+p_{0}(k)\right] .
\end{equation*}
In the Laplace space we have the following expressions for escape rates
\begin{equation}
\hat{\imath}_{\lambda }(k,s)=\frac{\hat{\psi}_{\lambda }(k,s+\theta (k))}{\hat{\Psi}(k,s+\theta (k))}\hat{p}(k,s),\qquad \hat{\imath}_{\mu }(k,s)=\frac{\hat{\psi}_{\mu }(k,s+\theta (k))}{\hat{\Psi}(k,s+\theta (k))}\hat{p}(k,s).
\end{equation}
Using the inverse Laplace transform and the shift theorem, we find
\begin{eqnarray}
i_{\lambda }(k,t) &=&\int_{0}^{t}K_{\lambda }(k,t-\tau )e^{-\theta
(k)(t-\tau )}p(k,\tau )d\tau , \notag \\
i_{\mu }(k,t) &=&\int_{0}^{t}K_{\mu }(k,t-\tau )e^{-\theta (k)(t-\tau
)}p(k,\tau )d\tau , \label{iii}
\end{eqnarray}
where $K_{\lambda }(k,t)$ and $K_{\mu }(k,t)$ are the memory kernels defined
by Laplace transforms
\begin{equation}
\hat{K}_{\lambda }\left( k,s\right) =\frac{\hat{\psi}_{\lambda }(k,s)}{\hat{\Psi}\left( k,s\right) },\qquad \hat{K}_{\mu }\left( k,s\right) =\frac{\hat{\psi}_{\mu }(k,s)}{\hat{\Psi}\left( k,s\right) }. \label{new5}
\end{equation}
It follows from (\ref{j}), (\ref{bau1}), (\ref{balan}) and (\ref{iii}) that
the master equation for the probability $p(k,t)$ is
\begin{eqnarray}
\frac{\partial p(k,t)}{\partial t} &=&\int_{0}^{t}K_{\lambda }(k-1,t-\tau
)p(k-1,\tau )e^{-\theta (k-1)(t-\tau )}d\tau \notag \\
&&+\int_{0}^{t}K_{\mu }(k+1,t-\tau )p(k+1,\tau )e^{-\theta (k+1)(t-\tau
)}d\tau \notag \\
&&-\int_{0}^{t}[K_{\lambda }(k,t-\tau )+K_{\mu }(k,t-\tau )]p(k,\tau
)e^{-\theta (k)(t-\tau )}d\tau -\theta (k)p(k,t) \label{master99}
\end{eqnarray}
for $k>1.$ The balance equation for $k=1$ is
\begin{equation}
\frac{\partial p(1,t)}{\partial t}=-\chi i_{\mu }(1,t)-i_{\lambda
}(1,t)+i_{\mu }(2,t)-\theta (1)p(1,t) \notag
\end{equation}
or
\begin{eqnarray}
\frac{\partial p(1,t)}{\partial t} &=&-\chi \int_{0}^{t}K_{\mu }(1,t-\tau
)p(1,\tau )e^{-\theta (1)(t-\tau )}d\tau -\int_{0}^{t}K_{\lambda }(1,t-\tau
)p(1,\tau )e^{-\theta (1)(t-\tau )}d\tau \notag \\
&&+\int_{0}^{t}K_{\mu }(2,t-\tau )p(2,\tau )e^{-\theta (2)(t-\tau )}d\tau
-\theta (1)p(1,t),
\end{eqnarray}
where $0\leq \chi \leq 1.$ The master equation for $p(k,t)$ can be rewritten
in terms of the probability flux $I(k,t)$ from the point $k$ to $k+1$
\begin{equation}
I(k,t)=\int_{0}^{t}K_{\lambda }(k,t-\tau )p(k,\tau )e^{-\theta (k)(t-\tau
)}d\tau -\int_{0}^{t}K_{\mu }(k+1,t-\tau )p(k+1,\tau )e^{-\theta
(k+1)(t-\tau )}d\tau \label{I}
\end{equation}
as
\begin{equation}
\frac{\partial p(k,t)}{\partial t}=-I(k,t)+I(k-1,t)-\theta (k)p(k,t).
\end{equation}
In the next section we shall derive the fractional master equation for $p(k,t).$
\section{\ Anomalous subdiffusion in heterogeneous media}
We now turn to the anomalous subdiffusive case. We assume that the longer a
cell survives at point $k$, the smaller the transition probability from $k$
becomes. It means that the transition rates $\lambda (k,\tau )$ and $\mu
(k,\tau )$ are decreasing functions of residence time $\tau .$ We assume
that
\begin{equation}
\lambda (k,\tau )=\frac{\nu _{\lambda }(k)}{\tau _{0}(k)+\tau },\qquad \mu
(k,\tau )=\frac{\nu _{\mu }(k)}{\tau _{0}(k)+\tau }, \label{rates}
\end{equation}
where $\tau _{0}(k)$ is a parameter with units of time. Both $\nu _{\lambda
}(k)$ and $\nu _{\mu }(k)$ play a very important role in what follows. From
(\ref{s3}) and (\ref{rates}) we find that the survival function has a
power-law dependence
\begin{equation*}
\Psi (k,\tau )=\left( \frac{\tau _{0}(k)}{\tau _{0}(k)+\tau }\right) ^{\nu
(k)},
\end{equation*}
where the exponent
\begin{equation}
\nu (k)=\nu _{\lambda }(k)+\nu _{\mu }(k) \label{exp}
\end{equation}
depends on the state $k.$ The residence time probability density function
$\psi (k,\tau )=-\partial \Psi (k,\tau )/\partial \tau $ has the Pareto form
\begin{equation}
\psi (k,\tau )=\frac{\nu (k)\left( \tau _{0}(k)\right) ^{\nu (k)}}{(\tau _{0}(k)+\tau )^{1+\nu (k)}}. \label{Pareto}
\end{equation}
The anomalous subdiffusive case \cite{Klages,MFH} corresponds to
\begin{equation*}
\nu (k)=\nu _{\lambda }(k)+\nu _{\mu }(k)<1.
\end{equation*}
We can notice from (\ref{rates}) that the ratios of $\lambda (k,\tau )$ and
$\mu (k,\tau )$ to $\lambda (k,\tau )+\mu (k,\tau )$ are independent of the
residence time variable $\tau $ that is
\begin{equation*}
\frac{\lambda (k,\tau )}{\lambda (k,\tau )+\mu (k,\tau )}=\frac{\nu
_{\lambda }(k)}{\nu _{\lambda }(k)+\nu _{\mu }(k)},\qquad \frac{\mu (k,\tau )}{\lambda (k,\tau )+\mu (k,\tau )}=\frac{\nu _{\mu }(k)}{\nu _{\lambda }(k)+\nu _{\mu }(k)}.
\end{equation*}
In this case it is convenient to introduce the probabilities of jumping to
the right
\begin{equation}
p_{\lambda }(k)=\frac{\nu _{\lambda }(k)}{\nu _{\lambda }(k)+\nu _{\mu }(k)}
\label{pr1}
\end{equation}
and to the left
\begin{equation}
p_{\mu }(k)=\frac{\nu _{\mu }(k)}{\nu _{\lambda }(k)+\nu _{\mu }(k)}.
\label{pr2}
\end{equation}
Note that these jump probabilities are completely determined by the
anomalous exponents $\nu _{\lambda }(k)$ and $\nu _{\mu }(k).$ In the
standard CTRW theory, these jump probabilities are given independently
\cite{Klages,MFH}.
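The Pareto survival function above can be sampled by inverse transform: setting $\Psi (k,T)=U$ with $U$ uniform on $(0,1)$ gives $T=\tau _{0}(k)(U^{-1/\nu (k)}-1)$. A minimal Python sketch (parameter values are illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def pareto_residence_time(nu, tau0, size):
    """Inverse-transform sampling of the Pareto survival function
    Psi(tau) = (tau0 / (tau0 + tau))**nu."""
    u = rng.random(size)
    return tau0 * (u ** (-1.0 / nu) - 1.0)

samples = pareto_residence_time(nu=0.5, tau0=1.0, size=10**5)
# the theoretical mean is infinite for nu < 1, so the empirical
# mean keeps growing with the sample size
print(samples.mean())
\end{verbatim}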
Let us consider the non-local model for which the jump probabilities
$p_{\lambda }(k)$ and $p_{\mu }(k)$ depend on the chemotactic substance $S(k)$
as follows
\begin{equation}
p_{\lambda }(k)=Ae^{-\beta \left( S(k+1)-S(k)\right) },\qquad p_{\mu
}(k)=Ae^{-\beta \left( S(k-1)-S(k)\right) }, \label{nonlocal}
\end{equation}
where the parameter $A$ is determined from $p_{\lambda }(k)+p_{\mu }(k)=1$.
These jump probabilities describe the bias of cells with respect to the
spatial gradient $S(k+1)-S(k)$ \cite{Baker1,OS1}. One can obtain \cite{HL}
\begin{equation}
p_{\lambda }(k)-p_{\mu }(k)=\frac{e^{-\beta S(k+1)}-e^{-\beta S(k-1)}}{e^{-\beta S(k+1)}+e^{-\beta S(k-1)}}. \label{diff}
\end{equation}
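A small Python helper (a sketch with an arbitrary test profile $S$) computes these normalized jump probabilities directly from (\ref{nonlocal}):
\begin{verbatim}
import numpy as np

def jump_probabilities(S, k, beta):
    """p_lambda, p_mu of the non-local model, normalized to sum to 1."""
    right = np.exp(-beta * (S(k + 1) - S(k)))
    left = np.exp(-beta * (S(k - 1) - S(k)))
    total = right + left                  # fixes the constant A
    return right / total, left / total

S = lambda k: 2.0 * k                     # hypothetical linear profile
p_right, p_left = jump_probabilities(S, k=5, beta=1e-2)
print(p_right, p_left)                    # slight bias down the gradient
\end{verbatim}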
The transition PDF's $\psi _{\lambda }(k,\tau )=\lambda (k,\tau )\Psi
(k,\tau )$ and $\psi _{\mu }(k,\tau )=\mu (k,\tau )\Psi (k,\tau )$ can be
rewritten in terms of $\psi (k,\tau ),p_{\lambda }(k)$ and $p_{\mu }(k)$ as
\begin{equation}
\psi _{\lambda }(k,\tau )=p_{\lambda }(k)\psi (k,\tau ),\qquad \psi _{\mu
}(k,\tau )=p_{\mu }(k)\psi (k,\tau ). \label{new3}
\end{equation}
The asymptotic approximation for the Laplace transform of the waiting time
density $\psi (k,\tau )$ of the Pareto form (\ref{Pareto})\ can be found
from the Tauberian theorem \cite{Feller}
\begin{equation*}
\hat{\psi}\left( k,s\right) \simeq 1-g(k)s^{\nu (k)},\qquad s\rightarrow 0
\end{equation*}
with
\begin{equation}
g(k)=\Gamma (1-\nu (k))\left( \tau _{0}(k)\right) ^{\nu (k)}.
\end{equation}
We obtain from (\ref{new5}) and (\ref{new3}) the Laplace transforms of the
memory kernels
\begin{equation}
\hat{K}_{\lambda }\left( k,s\right) \simeq \frac{p_{\lambda }(k)s^{1-\nu (k)}}{g(k)},\qquad \hat{K}_{\mu }\left( k,s\right) \simeq \frac{p_{\mu }(k)s^{1-\nu (k)}}{g(k)},\qquad s\rightarrow 0.
\end{equation}
Therefore, the integral escape rates to the right $i_{\lambda }(k,t)$ and to
the left $i_{\mu }(k,t)$ in the subdiffusive case are
\begin{eqnarray}
i_{\lambda }(k,t) &=&a(k)e^{-\theta (k)t}\mathcal{D}_{t}^{1-\nu (k)}\left[
p(k,t)e^{\theta (k)t}\right] , \notag \\
i_{\mu }(k,t) &=&b(k)e^{-\theta (k)t}\mathcal{D}_{t}^{1-\nu (k)}\left[
p(k,t)e^{\theta (k)t}\right] .
\end{eqnarray}
Here $\mathcal{D}_{t}^{1-\nu (k)}$ is the Riemann-Liouville fractional derivative with
varying order defined by (\ref{RL}). The anomalous rate functions $a(k)$\
and $b(k)$ are
\begin{eqnarray}
a(k) &=&\frac{p_{\lambda }(k)}{g(k)}=\frac{\nu _{\lambda }(k)}{\nu (k)\Gamma
(1-\nu (k))\left( \tau _{0}(k)\right) ^{\nu (k)}},\qquad \notag \\
b(k) &=&\frac{p_{\mu }(k)}{g(k)}=\frac{\nu _{\mu }(k)}{\nu (k)\Gamma (1-\nu
(k))\left( \tau _{0}(k)\right) ^{\nu (k)}} \label{ab}
\end{eqnarray}
with the anomalous exponent $\nu (k)$ defined in (\ref{exp}). The master
equation (\ref{master99}) takes the form of a non-homogeneous fractional
equation
\begin{eqnarray}
\frac{\partial p(k,t)}{\partial t} &=&a(k-1)e^{-\theta (k-1)t}\mathcal{D}_{t}^{1-\nu (k-1)}\left[ p(k-1,t)e^{\theta (k-1)t}\right] \notag \\
&&+b(k+1)e^{-\theta (k+1)t}\mathcal{D}_{t}^{1-\nu (k+1)}\left[ p(k+1,t)e^{\theta (k+1)t}\right] \notag \\
&&-\left( a(k)+b(k)\right) e^{-\theta (k)t}\mathcal{D}_{t}^{1-\nu (k)}\left[ p(k,t)e^{\theta (k)t}\right] -\theta (k)p(k,t) \label{fractional}
\end{eqnarray}
for $k>1.$
For $k=1$ with $\theta (k)=\chi =0$, we obtain
\begin{equation}
\frac{\partial p(1,t)}{\partial t}=b(2)\mathcal{D}_{t}^{1-\nu (2)}p(2,t)-a(1)\mathcal{D}_{t}^{1-\nu (1)}p(1,t). \label{f1}
\end{equation}
The fractional probability flux $I_{\nu }(k,t)$ from the point $k$ to $k+1$
is
\begin{equation}
I_{\nu }(k,t)=a(k)e^{-\theta (k)t}\mathcal{D}_{t}^{1-\nu (k)}\left[ p(k,t)e^{\theta (k)t}\right] -b(k+1)e^{-\theta (k+1)t}\mathcal{D}_{t}^{1-\nu (k+1)}\left[ p(k+1,t)e^{\theta (k+1)t}\right] . \label{If}
\end{equation}
The equation (\ref{fractional}) can be rewritten in terms of the probability
flux $I_{\nu }(k,t)$ as
\begin{equation}
\frac{\partial p(k,t)}{\partial t}=-I_{\nu }(k,t)+I_{\nu }(k-1,t)-\theta
(k)p(k,t).
\end{equation}
\subsection{Fractional Fokker-Planck equation for cell density and
chemotaxis}
In this subsection we consider the continuous case ($k\rightarrow x$) and
find the drift $l(a(x)-b(x))$ together with diffusion coefficient in the
fractional Fokker-Planck equation (\ref{FFP}). It follows from (\ref{ab})
that the drift is proportional to the difference in the anomalous exponents
$\nu _{\lambda }(x)$ and $\nu _{\mu }(x)$, since
\begin{equation}
a(x)-b(x)=\frac{p_{\lambda }(x)-p_{\mu }(x)}{g(x)}=\frac{\nu _{\lambda
}(x)-\nu _{\mu }(x)}{\nu (x)\Gamma (1-\nu (x))\left( \tau _{0}(x)\right)
^{\nu (x)}}. \label{dif1}
\end{equation}
The difference $\nu _{\lambda }(x)-\nu _{\mu }(x)$ can be approximated in
different ways. In the case of chemotaxis this difference is
proportional to the gradient of the local concentration of the chemotactic
substance $S(x).$ Using (\ref{diff}), we obtain
\begin{equation*}
p_{\lambda }(x)-p_{\mu }(x)=\frac{e^{-\beta S(x+l)}-e^{-\beta S(x-l)}}{e^{-\beta S(x+l)}+e^{-\beta S(x-l)}}.
\end{equation*}
In the limit $l\rightarrow 0$, we have the standard chemotaxis model
\begin{equation}
a(x)-b(x)=\frac{p_{\lambda }(x)-p_{\mu }(x)}{g(x)}=-\frac{\beta l}{g(x)}\frac{\partial S}{\partial x}+o\left( l\right) , \label{dif2}
\end{equation}
where the case $\beta >0$ corresponds to negative taxis: the drift is in the
direction of decreasing values of the chemotactic substance $S(x)$. The
fractional Fokker-Planck equation (\ref{FFP}) takes the form
\begin{equation}
\frac{\partial \rho (x,t)}{\partial t}=\frac{\partial }{\partial x}\left[
\frac{l^{2}\beta }{g(x)}\frac{\partial S}{\partial x}\mathcal{D}_{t}^{1-\nu
(x)}\rho (x,t)\right] +\frac{\partial ^{2}}{\partial x^{2}}\left[ \frac{l^{2}}{2g(x)}\mathcal{D}_{t}^{1-\nu (x)}\rho (x,t)\right] . \label{newFFP}
\end{equation}
\begin{figure}[tbp]
\includegraphics[scale=0.4]{temporary2}
\caption{Monte-Carlo simulation of the stationary solution to equation
(\protect\ref{newFFP}) and the analytical solution (\protect\ref{Bo}) for the
linear distribution of the chemotactic substance $S(x)=mx$ with $m=2$,
$\protect\beta =10^{-2}$.}
\label{fig1}
\end{figure}
As an example, let us consider the case when the anomalous exponent $\nu $
and time parameter $\tau _{0}$ are constants. Then the fractional
Fokker-Planck equation (\ref{newFFP}) can be rewritten as follows
\begin{equation}
\frac{\partial \rho (x,t)}{\partial t}=2\beta D_{\nu }\mathcal{D}_{t}^{1-\nu
}\frac{\partial }{\partial x}\left[ \frac{\partial S}{\partial x}\rho (x,t)\right] +D_{\nu }\mathcal{D}_{t}^{1-\nu }\frac{\partial ^{2}\rho (x,t)}{\partial x^{2}}, \label{Caputo2}
\end{equation}
where $D_{\nu }$ is the fractional diffusion coefficient
\begin{equation*}
D_{\nu }=\frac{l^{2}}{2\Gamma (1-\nu )\tau _{0}^{\nu }}.
\end{equation*}
In the case of the reflective boundary conditions at $x=0$, the fractional
equation (\ref{Caputo2}) admits the stationary solution $\rho _{st}(x)$ in
the semi-infinite domain $[0,\infty ).$ It obeys the equation
\begin{equation}
2\beta \frac{\partial }{\partial x}\left[ \frac{\partial S(x)}{\partial x}\rho _{st}(x)\right] +\frac{\partial ^{2}\rho _{st}(x)}{\partial x^{2}}=0
\end{equation}
and has the form of the Boltzmann distribution \cite{Met1,Met2}
\begin{equation}
\rho _{st}(x)=N^{-1}\exp \left[ -2\beta S(x)\right] , \label{Bo}
\end{equation}
where $N=\int_{0}^{\infty }\exp \left[ -2\beta S(x)\right] dx.$ This
distribution describes the aggregation of cells due to a nonuniform
distribution of the chemotactic substance $S(x).$ Fig. 1 illustrates the
stationary profile of the cell density $\rho _{st}(x)=2\beta m\exp \left[
-2\beta mx\right] $ for the linear distribution $S(x)=mx$ and $m=2$, $\beta
=10^{-2}$.
We use the Monte-Carlo method to simulate a stationary solution to equation
(\ref{newFFP}). We select the following parameters: $\nu =0.5$, $\tau _{0}=1$,
and $l=0.01$. Fig. 1 shows the result of $10^{6}$ simulated random
walk trajectories, with jump probabilities given by (\ref{nonlocal}), Pareto
waiting time distribution (\ref{Pareto}), and terminal time $T=10^{6}$.
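A condensed version of this simulation is sketched below in Python. It is not the exact code used for Fig. \ref{fig1}: the walker count is reduced, and the reflecting boundary is implemented by keeping the cell at the wall, which is one of several equivalent conventions here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
nu, tau0, l, beta, m = 0.5, 1.0, 0.01, 1e-2, 2.0
T, n_walkers = 1e6, 10**3       # far fewer walkers than in the text

S = lambda x: m * x             # linear chemotactic profile S(x) = m*x
positions = np.empty(n_walkers)

for i in range(n_walkers):
    x, t = 0.0, 0.0
    while True:
        t += tau0 * (rng.random() ** (-1.0 / nu) - 1.0)  # Pareto time
        if t > T:
            break
        w_right = np.exp(-beta * (S(x + l) - S(x)))      # non-local bias
        w_left = np.exp(-beta * (S(x - l) - S(x)))
        if rng.random() < w_right / (w_right + w_left):
            x += l
        else:
            x = max(x - l, 0.0)  # reflecting wall: cell stays at x = 0
    positions[i] = x

# a histogram of positions approximates rho_st(x)=2*beta*m*exp(-2*beta*m*x)
\end{verbatim}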
In the next section we will be concerned with the asymptotic behavior of the
solution of the master equation $p(k,t)$ as $t\rightarrow \infty $ for
$\theta (k)=0$. In particular, we will show that the stationary distribution
(\ref{Bo}) is not structurally stable with respect to the spatial variations
of the anomalous exponent.
\section{Structural instability and anomalous aggregation}
It has recently been shown that the subdiffusive fractional equations with
constant anomalous exponent $\nu $ in a bounded domain $\left[ 0,L\right] $\
are not structurally stable with respect to the non-homogeneous variations
of parameter $\nu $ \cite{Fed3}. It turns out that the spatial variations of
the anomalous exponent lead to a drastic change in the asymptotic behavior of
$p(k,t)$ for large $t.$ The purpose of this section is to find the conditions
of this structural instability in semi-infinite domain $1\leq k<\infty .$ We
consider the case when $\theta (k)=\chi =0$ for which the total probability
is conserved
\begin{equation}
\sum\limits_{k=1}^{\infty }p(k,t)=1 \label{conse}
\end{equation}
and the fractional probability flux $I_{\nu }(k,t)$ from the point $k$ to
k+1$ is
\begin{equation}
I_{\nu }(k,t)=a(k)\mathcal{D}_{t}^{1-\nu (k)}p(k,t)-b(k+1)\mathcal{D}_{t}^{1-\nu (k+1)}p(k+1,t). \label{If0}
\end{equation}
For simplicity we assume that the initial conditions are $p_{0}(1)=1$ and
$p_{0}(k)=0$ for $k\neq 1.$ Taking the Laplace transform of (\ref{dis1}) and
(\ref{conse}) we obtain
\begin{eqnarray}
s\hat{p}(k,s) &=&a(k-1)s^{1-\nu (k-1)}\hat{p}(k-1,s)+b(k+1)s^{1-\nu (k+1)}\hat{p}(k+1,s) \notag \\
&&-\left( a(k)+b(k)\right) s^{1-\nu (k)}\hat{p}(k,s) \label{L2}
\end{eqnarray}
and
\begin{equation}
\sum\limits_{k=1}^{\infty }s\hat{p}(k,s)=1. \label{norm}
\end{equation}
Since there is no flux of cells outside the left border, we have for $k=1$
\begin{equation}
s\hat{p}(1,s)-1=b(2)s^{1-\nu (2)}\hat{p}(2,s)-a(1)s^{1-\nu (1)}\hat{p}(1,s).
\label{L1}
\end{equation}
In the limit $s\rightarrow 0,$ one can obtain from (\ref{L1}) a simple
formula expressing $\hat{p}(2,s)$ in terms of $\hat{p}(1,s)$:
\begin{equation*}
\hat{p}(2,s)\simeq \frac{a(1)s^{\nu (2)-\nu (1)}}{b(2)}\hat{p}(1,s),\qquad
s\rightarrow 0.
\end{equation*
In general, we find from (\ref{L2}) and (\ref{L1}) $\hat{p}(k,s)$ in terms
of $\hat{p}(k-1,s)$:
\begin{equation}
\hat{p}(k,s)\simeq \frac{a(k-1)s^{\nu (k)-\nu (k-1)}}{b(k)}\hat{p}(k-1,s),\qquad k>1,\qquad s\rightarrow 0. \label{re1}
\end{equation}
This formula has a very simple probabilistic meaning: the flux $I_{\nu }(k-1,t)\rightarrow 0$ as $t\rightarrow \infty .$ If we take the Laplace
transform of $I_{\nu }(k-1,t)$ from (\ref{If0}), we obtain
\begin{equation}
\hat{I}_{\nu }(k-1,s)=a(k-1)s^{1-\nu (k-1)}\hat{p}(k-1,s)-b(k)s^{1-\nu (k)}\hat{p}(k,s).
\end{equation}
It follows from (\ref{re1}) that $\hat{I}_{\nu }(k,s)\simeq 0$ as
$s\rightarrow 0.$
\subsection{Stationary solution to the fractional equation with a constant
anomalous exponent}
Let us assume that the anomalous exponent $\nu (k)$ is independent of the
position $k$, that is, $\nu =\mathrm{const}.$ Let us find the stationary solution to the
fractional master equation (\ref{dis1}):
\begin{equation}
p_{st}(k)=\lim_{t\rightarrow \infty }p(k,t)=\lim_{s\rightarrow 0}s\hat{p}(k,s). \label{st}
\end{equation}
It follows from (\ref{re1}) that
\begin{equation*}
\hat{p}(k,s)\simeq \frac{a(k-1)}{b(k)}\hat{p}(k-1,s),\qquad k>1,\qquad
s\rightarrow 0,
\end{equation*}
or
\begin{equation}
\hat{p}(k,s)\simeq \prod\limits_{j=1}^{k-1}\frac{a(j)}{b(j+1)}\hat{p}(1,s),\qquad k>1,\qquad s\rightarrow 0.
\end{equation}
Using the normalization condition (\ref{norm}) and (\ref{st}), we obtain the
stationary solution of equation (\ref{dis1}):
\begin{equation}
p_{st}(k)=p_{st}(1)\prod\limits_{j=1}^{k-1}\frac{a(j)}{b(j+1)},\qquad k>1,
\label{an}
\end{equation}
where
\begin{equation*}
p_{st}(1)=\left( 1+\sum\limits_{k=2}^{\infty }\prod\limits_{j=1}^{k-1}\frac{a(j)}{b(j+1)}\right) ^{-1}.
\end{equation*}
If the sum
\begin{equation*}
\sum\limits_{k=2}^{\infty }\prod\limits_{j=1}^{k-1}\frac{a(j)}{b(j+1)}
\end{equation*}
is divergent, the stationary solution does not exist. In particular, if we
assume that the anomalous rate functions $a$ and $b$ are equal, that is,
$a(k)=b(k+1),$ then for a finite domain with $k=1,2,...,N,$ we obtain the uniform
distribution $p_{st}(k)=1/N$ for every $k.$ The stationary solution (\ref{an})
is very similar to (\ref{ma}), corresponding to the Markov birth-death
model. However, this similarity is very deceptive, because (\ref{an}) is not
structurally stable with respect to non-homogeneous variations of the
parameter $\nu $. The aim of the next subsection is to show this structural
instability.
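The stationary formula (\ref{an}) is easy to test numerically. The following
sketch (with illustrative rate functions, not taken from the text) evaluates
the product formula on a truncated domain and also checks the degenerate case
$a(k)=b(k+1)$, which must return the uniform distribution.
\begin{verbatim}
from fractions import Fraction

def stationary(a, b, K):
    # p_st(k) for k = 1..K from the product formula with rates a(k), b(k)
    w = [Fraction(1)]
    for k in range(2, K + 1):
        w.append(w[-1] * Fraction(a(k - 1), b(k)))  # prod_{j=1}^{k-1} a(j)/b(j+1)
    Z = sum(w)
    return [wk / Z for wk in w]

# a(k) = k, b(k) = k+1: the product equals 2/(k(k+1)), which is summable,
# so a stationary distribution exists on the semi-infinite domain
p = stationary(lambda k: k, lambda k: k + 1, K=20)
print(p[:4])

# degenerate case a(k) = b(k+1): the uniform distribution 1/N on k = 1..N
q = stationary(lambda k: 1, lambda k: 1, K=10)
assert all(x == Fraction(1, 10) for x in q)
\end{verbatim}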
\subsection{Anomalous aggregation}
Now we consider the non-homogeneous case for which the anomalous exponent
depends on $k.$ We assume that the point $k=M$ has the property that $\nu
(M)<\nu (k)$ for all $k\neq M$. Our purpose now is to find the conditions
under which
\begin{equation}
\lim_{t\rightarrow \infty }p(M,t)=1,\qquad \lim_{t\rightarrow \infty
}p(k,t)=0,\qquad k\neq M. \label{ag}
\end{equation}
It means that the total probability concentrates just at one point $k=M.$
This phenomenon is called an anomalous aggregation \cite{Fed1}. This
asymptotic behavior of cells was observed in experiments on phagotrophic
protists when \textquotedblleft cells become immobile in attractive patches,
which will then eventually trap all cells\textquotedblright\ \cite{AA}. In
the Laplace space, (\ref{ag}) takes the form
\begin{equation}
\lim_{s\rightarrow 0}s\hat{p}(M,s)=1,\qquad \lim_{s\rightarrow 0}s\hat{p}(k,s)=0,\qquad k\neq M. \notag
\end{equation}
We can rewrite the normalization condition (\ref{norm}) as
\begin{equation}
s\hat{p}(M,s)+\sum\limits_{k=1}^{M-1}s\hat{p}(M-k,s)+\sum\limits_{k=1}^{\infty }s\hat{p}(M+k,s)=1. \label{cons}
\end{equation}
By using (\ref{re1}), we express $\hat{p}(M+k,s)$ in terms of $\hat{p}(M,s)$
as follows:
\begin{equation}
\hat{p}(M+k,s)\simeq \hat{p}(M,s)\prod\limits_{j=1}^{k}\frac{a(M+j-1)}{b(M+j)}s^{\nu (M+k)-\nu (M)},\qquad k\geq 1,\qquad s\rightarrow 0. \label{M1}
\end{equation}
Next we write the formula for $\hat{p}(M-k,s)$ in terms of $\hat{p}(M,s)$:
\begin{equation}
\hat{p}(M-k,s)\simeq \hat{p}(M,s)s^{\nu (M-k)-\nu (M)}\prod\limits_{j=1}^{k}\frac{b(M-j+1)}{a(M-j)},\qquad k=1,...,M-1,\qquad s\rightarrow 0. \label{M2}
\end{equation}
Now we substitute (\ref{M1}) and (\ref{M2}) into (\ref{cons}) and use $s\hat{p}(M,s)$ as a common factor:
\begin{equation}
s\hat{p}(M,s)\left( 1+\sum\limits_{k=1}^{M-1}s^{\nu (M-k)-\nu
(M)}\prod\limits_{j=1}^{k}\frac{b(M-j+1)}{a(M-j)}+\sum\limits_{k=1}^{\infty
}s^{\nu (M+k)-\nu (M)}\prod\limits_{j=1}^{k}\frac{a(M+j-1)}{b(M+j)}\right)
\simeq 1. \notag
\end{equation}
Since $\nu (M)<\nu (k)$ for any $k\neq M$, we have $s^{\nu (M+k)-\nu
(M)}\rightarrow 0$ and $s^{\nu (M-k)-\nu (M)}\rightarrow 0$ as $s\rightarrow
0.$ We conclude that if
\begin{equation}
\sum\limits_{k=1}^{\infty }s^{\nu (M+k)-\nu (M)}\prod\limits_{j=1}^{k}\frac{a(M+j-1)}{b(M+j)}\rightarrow 0 \notag
\end{equation}
as $s\rightarrow 0,$ then $s\hat{p}(M,s)\rightarrow 1,$ while $s\hat{p}(k,s)\rightarrow 0$ for $k\neq M$. It means that in the limit $t\rightarrow \infty ,$ we
obtain (\ref{ag}). If instead of the probability $p(k,t)$ we consider the mean
density of cells $\rho \left( x,t\right) ,$ the formula (\ref{ag}) can be
rewritten as $\rho \left( x,t\right) \rightarrow \delta (x-x_{\min })$ as
$t\rightarrow \infty ,$ where $x_{\min }$ is the point of the interval
$[0,\infty )$ at which $\nu (x)$ takes its minimum value. Note that this
result was obtained for a symmetrical random walk in the context of
chemotaxis and anomalous aggregation \cite{Fed1}.
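The aggregation criterion can also be probed numerically in Laplace space.
The sketch below evaluates the bracketed factor multiplying $s\hat{p}(M,s)$
on a truncated domain, with an illustrative exponent profile $\nu (M)=0.5$,
$\nu (k)=0.8$ for $k\neq M$, and constant rates (our choices, not the
text's); the output illustrates $s\hat{p}(M,s)\rightarrow 1$ as
$s\rightarrow 0$.
\begin{verbatim}
import numpy as np

M, K = 5, 60                        # aggregation point and truncation length
nu = lambda k: 0.5 if k == M else 0.8
a = b = lambda k: 1.0

def bracket(s):
    total = 1.0
    for k in range(1, M):           # sites to the left of M, cf. (M2)
        prod = np.prod([b(M - j + 1) / a(M - j) for j in range(1, k + 1)])
        total += s ** (nu(M - k) - nu(M)) * prod
    for k in range(1, K - M + 1):   # sites to the right of M, cf. (M1)
        prod = np.prod([a(M + j - 1) / b(M + j) for j in range(1, k + 1)])
        total += s ** (nu(M + k) - nu(M)) * prod
    return total

for s in [10.0 ** (-e) for e in (2, 6, 10, 14)]:
    print(s, 1.0 / bracket(s))      # converges (slowly) to 1 as s -> 0
\end{verbatim}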
\section{Conclusions}
We have studied a random walk model, non-homogeneous in space and non-local in
time, describing anomalous subdiffusive transport of cells. Using a Markov
model with structured probability density function, we have derived
non-local in time and fractional master equations for the probability of
cell position. The advantage of our probabilistic approach is that it allows
us to take into account the death process within the general non-Markovian
random walk. The main feature of our fractional model is that the transition
probabilities for jumping on the left and right depend inversely on the
residence time variable. This dependence induces power-law residence time
distribution and ultimately the anomalous subdiffusion of cells. It has
recently been shown that subdiffusive fractional equations with a constant
anomalous exponent are not structurally stable in a bounded domain with
respect to non-homogeneous variations of that exponent. In this paper we
have extended and complemented our previous results to the semi-infinite
domain and found exact conditions under which the structural instability
takes place. Our model can be generalized in many ways, e.g., by modelling
the residence time through internal chemical reactions governed by stochastic
or ordinary differential equations instead of the simple equation
$d\tau /dt=1$ for the residence time.
It would be interesting to take into account the density-dependent dispersal
\cite{Campos2}, including the non-linear exclusion process with cell-to-cell
adhesion \cite{Baker, Khain}.
\section*{Acknowledgements}
The authors gratefully acknowledge the support of the Federal Programme
N 14.A18.21.0867. SF acknowledges the warm hospitality of Department
of Mathematical Physics, Ural Federal University. SF also acknowledges the
support of the EPSRC Grant EP/J019526/1. The authors wish to thank S.
Falconer for interesting discussions.
\section{Introduction}
The Erd\H{o}s conjecture on distinct distances in the Euclidean plane $\mathbb{R}^2$ says that $d(P):=|\{d(p,q)\mid p,q\in P\}|\gtrsim \frac{|P|}{\sqrt{\log|P|}}$ for any finite set $P\subset\mathbb{R}^2$. Here ``$\gtrsim$" means ``$\geq C\cdot$" for some absolute constant $C>0$, and $d(\cdot,\cdot)$ is the Euclidean distance. In Guth-Katz \cite{GK}, the nearly optimal bound $d(P)\gtrsim\frac{|P|}{\log|P|}$ was established. The authors used a group theoretic framework, called the Elekes-Sharir framework, to reduce the problem of enumerating distinct distances to that of estimating line-line incidences in $\mathbb{R}^3$.
The essential object they studied is a type of energy, which they call \textit{distance quadruples}, i.e. $Q(P):=\{(p_1,q_1,p_2,q_2)\in P^4\mid d(p_1,q_1)=d(p_2,q_2)\}$. We call $|Q(P)|$ the \textbf{distance energy} of $P$, denoted by $E_{2}(P)$. Moreover, we can define $E_{k}(P):=|\{(p_1,q_1,\dots,p_k,q_k)\in P^{2k}\mid d(p_1,q_1)=\cdots=d(p_k,q_k)\}|$, and call it the $k$-th distance energy of $P$.
In this paper, we consider the higher distance energies $E_k(P)$ for $k\geq 3$ and investigate higher moments in the Elekes-Sharir framework, with the expectation that they might be more efficient than just estimating $E_2(P)$ as in \cite{GK}. Due to technical reasons, we only consider the example of square grids, which already shows that the expectation is in vain for the Euclidean plane. Moreover, we also study distance energies in $\mathbb{R}^n$, $n\geq3$, for the square grid example.
\\
{\bf Acknowledgements.} ZL is supported by Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515110206). XM is supported by the National Natural Science Foundation of China (NSFC, Grant No. 12201346) and Shandong Provincial Foundation (Grant No. 2022HWYQ-046).
\section{Higher moments in the Elekes-Sharir framework}
We may define distance energy in any general metric space $(M,d)$. Let $P\subset M$ be a set of $N$ points and $d(P)$ the number of distinct distances between points of $P$. For each distance $d_i,i=1,\cdots,d(P)$, let $n_i$ be the number of pairs of points from $P$ at distance $d_i$. Clearly $\sum_{i=1}^{d(P)}n_i=2\binom{N}{2}=N(N-1)$. Then we use H\"{o}lder's inequality to get the following
\begin{lemma}\label{lem-Holder}For any positive integer $k\geq 2$,
\[d(P)\geq\dfrac{(N^2-N)^{\frac{k}{k-1}}}{(\sum_{i=1}^{d(P)}n_i^k)^{\frac{1}{k-1}}}.\]
\end{lemma}
\begin{proof}
By H\"{o}lder's inequality for $\frac{k}{k-1}+\frac{1}{k-1}=1$,
\[N^2-N=\sum_{i=1}^{d(P)}n_i\leq \left(\sum_{i=1}^{d(P)}1^{\frac{k}{k-1}}\right)^{\frac{k-1}{k}}\left(\sum_{i=1}^{d(P)}n_i^k\right)^{1/k}={d(P)}^{\frac{k-1}{k}}\left(\sum_{i=1}^{d(P)}n_i^k\right)^{1/k}.\]
Rearrange we get the desired inequality.
\end{proof}
By our definition, $E_k(P)=\sum_{i=1}^{d(P)}n_i^k$. In order to prove Erd\H{o}s' conjecture in this setting, we need to show
\begin{equation}\label{eq-conjecture}
E_k(P)\lesssim N^{k+1}\left(\log N\right)^{\frac{k-1}{2}}, \quad \forall P\subset\mathbb{R}^2 \text{ with } |P|=N,
\end{equation}
at least for some $k\geq2$.
Guth and Katz \cite{GK} already showed that this is not true for $k=2$. Actually, they proved $E_2(P)\lesssim N^3\log N$. They also calculated for the example where $P$ is a square grid with $N$ points, $E_2(P)\gtrsim N^3\log N$, see appendix of \cite{GK}. Thus, to verify or refute (\ref{eq-conjecture}), we need to estimate $E_k(P)$ at least for the square grid example.
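For small $N$ the square grid computation can be checked by brute force. The
following Python sketch (an $O(N^2)$ pair enumeration, so only small grids
are feasible) tabulates $E_2(P)/(N^3\log N)$, whose rough stability reflects
the asymptotics above.
\begin{verbatim}
from collections import Counter
from itertools import product
from math import log

def distance_energy(n, k=2):
    # E_k of the n x n grid: count ordered pairs of distinct points per
    # squared distance, then sum n_i^k over the distinct distances
    pts = list(product(range(n), repeat=2))
    c = Counter((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p in pts for q in pts if p != q)
    return sum(m ** k for m in c.values())

for n in (10, 20, 40):
    N = n * n
    print(N, distance_energy(n) / (N ** 3 * log(N)))
\end{verbatim}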
\section{Rough estimate on higher distance energy of square grids}
In this section, we use a number theoretic method, specifically the calculation of Dirichlet series attached to the number of representations as a sum of two squares, to estimate the higher distance energies $E_k(P)$. Note that in the appendix of \cite{GK}, $E_2(P)$ was estimated by counting line-line incidences in $\mathbb{R}^3$.
Let $P=[\sqrt{N}]\times[\sqrt{N}]$ be the square grid of size $N$, where $[x]$ denotes the set of integers ranging from $1$ to $\lfloor x\rfloor$. Then a piece of energy in $E_k(P)$ is afforded by $a_1^2+b_1^2=\cdots=a_k^2+b_k^2$ for some $a_i,b_i\in[\sqrt{N}], i=1,\dots,k$, and some $p_i,q_i\in P$ such that $p_i-q_i=(\pm a_i,\pm b_i)$. Note that for $a_i,b_i\leq\frac{\sqrt{N}}{2}$, the number of such pairs $(p_i,q_i)$ is $\gtrsim \sqrt{N}\cdot\sqrt{N}=N$. Denoting $r(n):=|\{(a,b)\in\mathbb{Z}^2\mid a^2+b^2=n\}|$, we get the rough estimate written as
\begin{align}\label{eq-energy estimate}
N^{k}\sum_{n\leq 2N}r(n)^k\gtrsim E_k(P)\gtrsim N^k\sum_{n\leq\frac{N}{2}}r(n)^k.
\end{align}
On average, the number of representations as a sum of two squares has the following estimate.
\begin{proposition}\label{prop-sq sum est.}
For any positive integer $k$ and $x\in\mathbb{R}_+$, we have the following asymptotics
\[\sum_{n\leq x}r(n)^k\sim xP_{2^{k-1}-1}(\log x),\]
where $P_{2^{k-1}-1}$ is a polynomial of degree $2^{k-1}-1$.
\end{proposition}
Note that more precisely for $k=2$, \begin{equation}\label{eq-k=2}
\sum_{n\leq x}r^2(n)\sim 4x\log x+O(x)
\end{equation}
(see Wilson \cite{Wilson}).
The proposition is a consequence of the following two results.
\begin{lemma}[(7.20) of \cite{Wilson}]\label{lem-Wilson}
For any positive integer $k$, there is the following expression of Dirichlet series:
\[\sum_{n=1}^\infty\frac{r(n)^k}{n^s}=4^k(1-2^{-s})^{2^{k-1}-1}\left(\zeta(s)\eta(s)\right)^{2^{k-1}}\phi(s), \quad\forall\Re(s)>1,\]
where $\zeta(s)$ is the Riemann zeta function, $\eta(s)=1^{-s}-3^{-s}+5^{-s}-7^{-s}+\cdots$, and $\phi(s)=\prod_{p}\left(1+\sum_{\nu=2}^\infty a_\nu p^{-\nu s}\right)$ is absolutely convergent for $\Re(s)>\frac{1}{2}$.
\end{lemma}
To get the average of $r^k(n)$, we rely on the following form of Perron's formula:
\begin{lemma}[Theorem 1 of Chapter V in Karatsuba \cite{Karatsuba}]\label{lem-Perron}
Assume that the Dirichlet series $f(s)=\sum\limits_{n=1}^\infty\frac{a_n}{n^s}$ converges absolutely for $\Re(s)>1$, $|a_n|\leq A(n)$ for some monotonically increasing function $A(x)>0$, and
\[\sum_{n=1}^\infty\frac{|a_n|}{n^\sigma}=O((\sigma-1)^{-\alpha}), \alpha>0,\]
as $\sigma\rightarrow 1_+$. Then for any $b_0\geq b>1$, $T\geq 1$, and $x=N+\frac{1}{2}$, we have
\[\sum_{n\leq x}a_n=\frac{1}{2\pi i}\int_{b-iT}^{b+iT}f(s)\frac{x^s}{s}ds+O\left(\frac{x^b}{T(b-1)^{\alpha}}\right)+O\left(\frac{xA(2x)\log x}{T}\right).\]
\end{lemma}
\begin{proof}[Proof of Proposition \ref{prop-sq sum est.}]
First, $r(n)$ is indeed always of order $o(n^\epsilon)$ for any $\epsilon>0$ (see, for instance, Theorem 338 of Hardy and Wright \cite{HW}). Thus the condition of Lemma \ref{lem-Perron} is easily satisfied. Also note that $\eta(s)$ is holomorphic, so the poles come only from $\zeta(s)$. Then Wilson's calculation of the Dirichlet series of $r(n)^k$ as in Lemma \ref{lem-Wilson} shows, by estimating the residue of the contour integral, that the order of $\sum_{n\leq x}r(n)^k$ is $x(\log x)^{2^{k-1}-1}$, where $T$ may be taken larger than any power of $\log x$ due to the freedom in the choice of contour.
\end{proof}
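Proposition \ref{prop-sq sum est.} is easy to probe empirically. The sketch
below tabulates $\sum_{n\leq x}r(n)^k/\big(x(\log x)^{2^{k-1}-1}\big)$; for
$k=2$ the ratio drifts toward Wilson's constant $4$ from (\ref{eq-k=2}),
though the convergence in $x$ is slow.
\begin{verbatim}
from collections import Counter
from math import isqrt, log

def r_counts(x):
    # r(n) for all n <= x; signs and order of (a, b) are counted, as in the text
    c = Counter()
    m = isqrt(x)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            n = a * a + b * b
            if 0 < n <= x:
                c[n] += 1
    return c

x = 200_000
c = r_counts(x)
for k in (1, 2, 3):
    s = sum(v ** k for v in c.values())
    print(k, s / (x * log(x) ** (2 ** (k - 1) - 1)))
\end{verbatim}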
Proposition \ref{prop-sq sum est.} together with (\ref{eq-energy estimate}) immediately implies the following
\begin{theorem}
For $k\geq2$ and large positive integer $N$ ($\gg k$), we have the following estimate on the $k$-th distance energy:
\[N^{k+1}(\log N)^{2^{k-1}-1}\gtrsim E_k([\sqrt{N}]\times[\sqrt{N}])\gtrsim_k N^{k+1}(\log N)^{2^{k-1}-1}.\]
\end{theorem}
The result turns against the expectation of (\ref{eq-conjecture}) by a big log factor. Moreover, if $N^{k+1}(\log N)^{2^{k-1}-1}$ is the right order for $E_k(P)$ in general, then by Lemma \ref{lem-Holder}, in the Elekes-Sharir framework we may get the most efficient bound $d(P)\gtrsim\frac{N}{\log N}$ only when we study the second moment.
\section{Distance energy of square grids in higher dimensions}
In addition, we notice that the distance energy of square lattices in higher dimensions is optimal. Consider $P=[\sqrt[m]{N}]^m$, the square grid of size $N$ in $\mathbb{R}^m$, $m\geq 3$, and let $r_m(n)=|\{(x_1,\dots,x_m)\in\mathbb{Z}^m\mid x_1^2+\cdots+x_m^2=n\}|$. Then similarly we have
\[N^2\sum\limits_{n\leq \frac{N^{2/3}}{2}}r_3^2(n)\leq E_2(P)\leq N^2\sum\limits_{n\leq N^{2/3}}r_3^2(n).\]
To this end, we introduce a more general result as follows
\begin{lemma}[Theorem 6.1 of M\"{u}ller \cite{Muller}]\label{lem-Muller}
Let $q(\mathbf{x})=\frac{1}{2}\mathbf{x}^TQ\mathbf{x}$ be a primitive positive definite integral quadratic form in $m\geq 3$ variables and $r_Q(n)=|\{\mathbf{x}\in\mathbb{Z}^m\mid q(\mathbf{x})=n\}|$. Then
\[\sum_{n\leq x}r_Q^2(n)=Bx^{m-1}+O\left(x^{(m-1)\frac{4m-5}{4m-3}}\right),\]
for some constant $B>0$ depending on $Q$. For $m=2$,
\[\sum_{n\leq x}r_Q^2(n)=A_Qx\log x+O(x),\]
where \[A_Q=12\frac{A(q)}{q}\prod_{p\mid q}\left(1+\frac{1}{p}\right)^{-1},\ q=\det(Q).\]
Here $A(q)$ denotes the multiplicative function defined by $A(p^e)=2+(1-\frac{1}{p})(e-1)$ for odd prime $p$, and
\[A(2^e)=\begin{cases}
1, \text{ if }e\leq 1,\\
2, \text{ if } e=2,\\
e-1, \text{ if }e\geq 3.
\end{cases}\]
\end{lemma}
This immediately implies
\begin{corollary}\label{cor-3 d}
For $P=[\sqrt[m]{N}]^m$ the square grid of size $N$ in $\mathbb{R}^m$, $m\geq 3$, we have
\[N^{2+\frac{2m-2}{m}}\lesssim_m E_2(P)\lesssim_m N^{2+\frac{2m-2}{m}}.\]
\end{corollary}
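A brute-force check of Corollary \ref{cor-3 d} in the smallest case $m=3$
(small grids only, as the pair enumeration is again quadratic in the number
of points):
\begin{verbatim}
from collections import Counter
from itertools import product

def E2_grid(n, m=3):
    # E_2 of the grid [n]^m, which has N = n^m points
    pts = list(product(range(n), repeat=m))
    c = Counter(sum((p[i] - q[i]) ** 2 for i in range(m))
                for p in pts for q in pts if p != q)
    return sum(v ** 2 for v in c.values())

for n in (4, 6, 8):
    N = n ** 3
    print(N, E2_grid(n) / N ** (2 + 4 / 3))   # the ratio should stay bounded
\end{verbatim}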
Note that by Legendre's three-square theorem, $n=x^2+y^2+z^2\leq N^{\frac{2}{3}}$ is solvable whenever $n\neq 4^a(8k+7)$, which amounts to $cN^{\frac{2}{3}}$ numbers for some $c>0$, i.e. $d(P)\sim c'N^{\frac{2}{3}}$ for some $c'>0$, as Erd\H{o}s noted for the distinct distances conjecture in $\mathbb{R}^3$. For $m\geq 4$, by Lagrange's four-square theorem, each positive integer can be expressed as a sum of $m$ squares, i.e. $d(P)\sim cN^{\frac{2}{m}}$. Thus for any $m\geq3$, we can conclude that
\begin{corollary}
For any $m\geq3$, the estimate by distance energy of distinct distances in $\mathbb{R}^m$ is optimal, i.e.
\[d(P)\lesssim\frac{|P|^4}{E_2(P)},\]
for certain examples like $P=[\sqrt[m]{N}]^m$ the square grid of size $N$.
\end{corollary}
This seems to indicate the following
\begin{conjecture}
$E_{2}(P)\lesssim |P|^{2+\frac{2m-2}{m}}$ for any finite set $P\subset\mathbb{R}^m$.
\end{conjecture}
A proof of the above estimate for the distance energy suffices to solve the Erd\H{o}s conjecture in higher dimensions. However, similarly as in $\mathbb{R}^2$, we believe that estimates by higher distance energies would not be optimal.
\section{The distance energy for general lattices and Epstein zeta functions}
In this section, we consider general lattices in $\mathbb{R}^2$ and compare their distance energies. Due to technical reasons, we only deal with the pointwise distance energy, i.e. $E_{L,k}(N):=|\{(p_1,\dots,p_k)\in L^k\mid \|p_1\|^2=\cdots=\|p_k\|^2\leq N\}|$ for any lattice $L\subset\mathbb{R}^2$ and $k\in\mathbb{Z}_{\geq 0}$. Let $r_L(n)=|\{p\in L\mid \|p\|^2=n\}|$. Then
\begin{equation}
E_{L,k}(N)=\sum_{n\leq N}r_L(n)^k.
\end{equation}
We have already seen the estimates of $E_{L,k}(N)$ for the square lattice $L$ in the last section. Note that $E_{L,0}(N)$ counts the distinct distances.
A general lattice $L\subset\mathbb{R}^2$ of covolume 1, after rotation, may be written as $\mathbb{Z} (a,0)\oplus\mathbb{Z}(b,\frac{1}{a})$ for some $a,b>0$. To estimate its distance energy, we need to study the value distribution of the quadratic form $Q_L(x,y)=(ax+by)^2+\frac{1}{a^2}y^2=a^2\left(x^2+2\frac{b}{a}xy+\left(\frac{1}{a^4}+\frac{b^2}{a^2}\right)y^2\right)$. For example, $a=\sqrt{\frac{2}{\sqrt{3}}},b=\frac{1}{2}\sqrt{\frac{2}{\sqrt{3}}}$ correspond to the hexagonal lattice (which we will always denote by $\Sigma$) and the quadratic form $\frac{2}{\sqrt{3}}(x^2+xy+y^2)$. If $\frac{b}{a}$ or $\frac{1}{a^4}+\frac{b^2}{a^2}$ is irrational, then integer solutions to $Q_L(x,y)=Q_L(x',y')$ would be very few, i.e. the lattice has small distance energy.
We will only concern about the lattices with $Q_L$ similar to norms of imaginary quadratic number fields, i.e. \textit{arithmetic} lattices, due to the following K\"{u}hnlein's criterion:
\begin{lemma}[K\"{u}hnlein \cite{Kuhn}]\label{lem-Kuhnlein}
Let $L\subset\mathbb{R}^2$ be a lattice. Then $L$ is arithmetic if and only if there are $\geq 3$ pairwise linearly independent vectors in $L$ which have the same length.
This immediately implies
\begin{corollary}\label{cor-arithmetic}
Any distance in a non-arithmetic lattice is repeated at most 4 times. Hence the number of distinct distances in a non-arithmetic lattice of $N$ points is $\gtrsim N$ and its distance energy is $O(N^2)$.
\end{corollary}
\begin{remark}
For $L$ arithmetic, one always has $E_{L,0}(N)\lesssim\frac{N}{\sqrt{\log N}}$, as for the square grids; see Moree and Osburn \cite{MO} for details. Moreover, they proved that in $\mathbb{R}^2$ the hexagonal lattice attains the minimal number of distinct distances, i.e. the minimum of $E_{L,0}(N)$ for $N$ large.
\end{remark}
Now that arithmetic lattices are nothing but submodules of rings of integers of imaginary quadratic fields, it suffices to consider the norms of those rings. For any negative square-free integer $D$, if $D\equiv1\mod 4$, the ring of integers is $\mathcal{O}_D=\mathbb{Z}\left[\frac{1+\sqrt{D}}{2}\right]$ with \textit{discriminant} $D$; otherwise, $\mathcal{O}_D=\mathbb{Z}[\sqrt{D}]$ with discriminant $4D$. Hence we define
\begin{equation}\label{eq-Q_D}
Q_{D}(x,y)=\begin{cases}
(x+\frac{1+\sqrt{D}}{2}y)(x+\frac{1-\sqrt{D}}{2}y)=x^2+xy+\frac{1-D}{4}y^2,\ D\equiv1\mod 4;\\
(x+\sqrt{D}y)(x-\sqrt{D}y)=x^2-Dy^2,\ \text{otherwise}.
\end{cases}
\end{equation}
Note that the discriminant is just that of $Q_{D}$, and the $Q_D$ are all positive definite. For example, if $D=-1$, it is the Gaussian ring $\mathbb{Z}[i]$ with norm $Q_{-1}(x,y)=x^2+y^2$; if $D=-3$, it is the Eisenstein ring $\mathbb{Z}\left[\frac{1+\sqrt{3}i}{2}\right]$ with norm $Q_{-3}(x,y)=x^2+xy+y^2$. To have covolume 1, the lattices need to be scaled by $S_D:=\sqrt{2}(-D)^{-\frac{1}{4}}$ or $(-D)^{-\frac{1}{4}}$ in the two cases of (\ref{eq-Q_D}), respectively, as was already seen in the case of the hexagonal lattice. This ensures that there are always $\sim \pi N$ lattice points in a disc of radius $\sqrt{N}$.
For arithmetic lattices, we may write $r_D(n)$ for $r_L(n)$. Note that $r_{-1}(n)=r(n)$ and $r_{-3}(n)=|\{(x,y)\in\mathbb{Z}^2\mid x^2+xy+y^2=n\}|$ are the only two cases counting integral points on circles, while the others count integral points on ellipses. Then we also write \begin{equation}E_{D,k}(N)=\sum_{n\leq N/S_D^2}r_{D}^k(n).\end{equation}
Then we can use Lemma \ref{lem-Muller} to give the asymptotics of $E_{D,2}(N)$. By calculation we see that
\begin{align}
E_{-3,2}(N)&=3\sqrt{3}N\log N+O(N),
\end{align}
which is larger than $E_{-1,2}(N)=4N\log N+O(N)$ as we have seen from (\ref{eq-k=2}). Note that the $2\times 2$ matrices as of Lemma \ref{lem-Muller} are
\[\begin{cases}
\begin{pmatrix}
2&1\\
1&\frac{1-D}{2}
\end{pmatrix}, \text{ if }D\equiv 1\mod{4},\\
{}\\
\begin{pmatrix}
2&0\\
0&-2D
\end{pmatrix}, \text{ otherwise}.
\end{cases}\]
By a more careful calculation of the coefficients of the main terms, we see the following.
\begin{theorem}\label{thm-energy for lattices}
Let $D$ be any square-free negative integer and $N$ large. Then $E_{D,2}(N)<E_{-3,2}(N)$ for $D\equiv 1\mod{4}$, $D\neq -3$, and $E_{D,2}(N)<E_{-1,2}(N)$ otherwise. In all, the pointwise distance energy $E_{L,2}(N)$ attains the maximum only when $L$ is the hexagonal lattice in $\mathbb{R}^2$.
\end{theorem}
One may also be interested in higher pointwise energies $E_{D,k}(N)$ for $k\geq 3$. Explicit formulas of $r_D(n)$ may be found in Huard, Kaplan and Williams \cite{HKW} or Sun and Williams \cite{SW}, but estimating $E_{D,k}(N)$ from those formulae is hardly possible.
On the other hand, in general $r_Q(n)=|\{(x,y)\in\mathbb{Z}^2\mid Q(x,y)=n\}|$ is used to define the Epstein zeta function:
\begin{equation}\label{eq-Epstein}
Z_{Q}(s)=\sum_{(m,n)\neq (0,0)}\frac{1}{Q(m,n)^s}=\sum_{n=1}^\infty\frac{r_Q(n)}{n^s},
\end{equation}
which converges for $\Re{s}>1$. Moreover, it can be analytically continued to the whole complex plane with a simple pole at $s=1$ and satisfies the functional equation ($D=\operatorname{disc}(Q)$)
\begin{equation}\label{eq-Epstein functional}
\left(\frac{\sqrt{|D|}}{2\pi}\right)^s\Gamma(s)Z_Q(s)=\left(\frac{\sqrt{|D|}}{2\pi}\right)^{1-s}\Gamma(1-s)Z_Q(1-s),
\end{equation}
see for instance Zhang and Williams \cite{ZW}. There is also a closed formula by Chowla and Selberg (see \cite{CS}), which states, for $Q(x,y)=ax^2+bxy+cy^2$ and $D=b^2-4ac$,
\begin{align}\label{eq-Selberg Chowla formula}
Z_Q(s)&=a^{-s}\zeta(2s)+a^{-s}\sqrt{\pi}\frac{\Gamma(s-\frac{1}{2})}{\Gamma(s)}\zeta(2s-1)l^{1-2s}+R_Q(s),\\
R_Q(s)&=\frac{4a^{-s}l^{-s+\frac{1}{2}}}{\pi^{-s}\Gamma(s)}\sum_{n=1}^\infty n^{s-\frac{1}{2}}(\sum_{d\mid n}d^{1-2s})K_{s-\frac{1}{2}}(2\pi nl)\cos(\frac{n\pi b}{a}),\notag
\end{align}
where $K_\nu(z)$ is a modified Bessel function, $l=\frac{\sqrt{|D|}}{2a}$.
To investigate the distribution of the higher distance energies $E_{D,k}(N)$, we initiate the study of higher moments of the Epstein zeta functions, i.e.
\begin{equation}\label{eq-Epstein higher moments}
Z_{Q,k}(s):=\sum_{n=1}^\infty\frac{r_Q(n)^k}{n^s},\ k\geq 3.
\end{equation}
\textit{Question}: Do these higher moments satisfy any functional equation or have closed formulae as in (\ref{eq-Epstein functional}) or (\ref{eq-Selberg Chowla formula})?\\
If they do, then we should be able to derive asymptotics for the average of $r_D^k(n)$ by Perron's formula as in Lemma \ref{lem-Perron}. It has been shown that $Z_{Q}(s)$ attains the minimum only for forms equivalent to $Q_{-3}$, i.e. for the hexagonal lattice, whenever $s\geq 0$; see Cassels \cite{Ca}. Thus, we wonder if this is true for all the higher moments and suggest the following.
\begin{conjecture}
For all $k\geq 1$, $Z_{Q,k}(s)\geq Z_{Q_{-3},k}(s), \forall s>1$. After analytic continuation (if there is), this should be true for all $s\geq0$.
\end{conjecture}
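As a purely numerical probe of this conjecture, one can compare truncated
lattice sums for the two extremal forms. In the sketch below the forms are
rescaled to covolume 1 as at the beginning of this section (an assumption on
how the conjectural comparison should be normalized); the case $k=1$
reproduces Cassels' minimality of the hexagonal form.
\begin{verbatim}
import math

def moment_series(Qint, scale, s, k, cutoff):
    # truncated sum of r^k / (scale*n)^s over 0 < scale*n <= cutoff, where r
    # is the multiplicity of the integer value n = Q(x, y)
    counts = {}
    B = int(2 * (cutoff / scale) ** 0.5) + 2   # crude but safe search box
    for x in range(-B, B + 1):
        for y in range(-B, B + 1):
            n = Qint(x, y)
            if n > 0 and scale * n <= cutoff:
                counts[n] = counts.get(n, 0) + 1
    return sum(r ** k / (scale * n) ** s for n, r in counts.items())

Q_hex = lambda x, y: x * x + x * y + y * y     # Q_{-3}, rescaled by 2/sqrt(3)
Q_sq  = lambda x, y: x * x + y * y             # Q_{-1}, already of covolume 1
for k in (1, 2, 3):
    zh = moment_series(Q_hex, 2 / math.sqrt(3), 2.0, k, 5000.0)
    zs = moment_series(Q_sq, 1.0, 2.0, k, 5000.0)
    print(k, round(zh, 6), round(zs, 6))       # compare the two moment series
\end{verbatim}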
\section{Introduction}
At the heart of the definition of quantum cluster algebras is a desire to understand nice bases in quantum algebras arising from the representation theory of non-associative algebras. Of particular interest is the dual canonical basis in the quantized coordinate ring of a unipotent group, or more generally in the quantized coordinate ring of a double Bruhat cell. Through a meticulous study of these algebras and their bases the notion of a cluster algebra was discovered by Fomin and Zelevinsky \cite{fomin-zelevinsky1} with the notion of a quantum cluster algebra following in the work \cite{quantum} of Berenstein and Zelevinsky. Underlying the definition of quantum cluster algebras is a deep conjecture that the quantized coordinate rings described above in fact have the structure of a quantum cluster algebra and that the cluster monomials arising from these cluster structures belong to the dual canonical bases of the quantum algebras. The most pressing questions in the theory are thus related to understanding bases of a (quantum) cluster algebra.
Several bases are already known for both classical and quantum cluster algebras. Our main interest in bases of classical cluster algebras is related to the positivity phenomenon observed in the dual canonical basis \cite{Lu}. The constructions of interest originated with the atomic bases in finite types and affine type $A$ (in the sense of \cite{fomin-zelevinsky2}) which were discovered and constructed explicitly through the works of several authors. These atomic bases consist of all indecomposable positive elements of the cluster algebra, they are ``atomic" in the sense that an indecomposable positive element cannot be written as a sum of two nonzero positive elements.
This line of investigation was initiated by Sherman and Zelevinsky in \cite{sz-Finite-Affine} where atomic bases were constructed in type $A_1^{(1)}$, the so called Kronecker type. In the works \cite{cerulli-irelli1,cerulli-irelli2} atomic bases were constructed by Cerulli Irelli for finite types and type $A_2^{(1)}$ respectively. The construction of atomic bases for affine type $A$ was completed by Dupont and Thomas in \cite{dupont-thomas} using triangulations of surfaces with marked points on the boundary. The representation theory of quivers also played prominently in these works but since we will not pursue this direction here we refer the reader to the previously cited works for more details.
Pushing beyond affine types it was shown by Lee, Li, and Zelevinsky in \cite{LLZ2} that for wild types the set of all indecomposable positive elements can be linearly dependent and therefore do not form a basis. In contrast, these authors in \cite{llz} constructed for rank 2 coefficient-free cluster algebras a combinatorially defined ``greedy basis" which consists of a certain subset of the indecomposable positive elements.
Our main goal in this note is to establish the existence of a quantum lift of the greedy basis.
\subsection{Rank 2 cluster algebras and their greedy bases}\label{sec:greedy}
Fix positive integers $b,c>0$. The commutative cluster algebra $\mathcal{A}(b,c)$ is the subring of $\mathbb{Q}(x_1,x_2)$ generated by the \emph{cluster variables} $x_m$ ($m\in\mathbb{Z}$), where the $x_m$ are rational functions in $x_1$ and $x_2$, defined recursively by the \emph{exchange relations}
\[x_{m+1}x_{m-1}=\begin{cases} x_m^b+1 & \text{ if $m$ is odd;}\\ x_m^c+1 & \text{ if $m$ is even.}\end{cases}\]
The cluster variables are organized into clusters $\{x_m,x_{m+1}\}$ ($m\in\mathbb{Z}$).
It is a fundamental result of Fomin and Zelevinsky \cite{fomin-zelevinsky1} that, although the exchange relations appear to produce rational functions, one always obtains Laurent polynomials. They actually showed the following slightly stronger result which asserts in addition that the Laurent Phenomenon does not depend on the choice of an initial cluster.
\begin{theorem}[{\cite[Theorem 3.1]{fomin-zelevinsky1}, Laurent Phenomenon}]
For any $m\in\mathbb{Z}$ we have $\mathcal{A}(b,c)\subset\mathbb{Z}[x_m^{\pm1},x_{m+1}^{\pm1}]$.
\end{theorem}
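The Laurent Phenomenon is easy to observe experimentally. The following
sketch iterates the exchange relations symbolically, here for the arbitrary
choice $(b,c)=(2,3)$, and prints the denominator of each cluster variable,
which always simplifies to a monomial in $x_1,x_2$.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
b, c = 2, 3                            # any positive integers may be used
xs = {1: x1, 2: x2}
for m in range(2, 7):
    expo = b if m % 2 == 1 else c      # the exponent depends on the parity of m
    xs[m + 1] = sp.cancel((xs[m] ** expo + 1) / xs[m - 1])

for m in range(3, 8):
    num, den = sp.fraction(sp.cancel(xs[m]))
    print(m, 'denominator:', den)      # a monomial: x_m is a Laurent polynomial
\end{verbatim}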
The cluster algebra $\mathcal{A}(b,c)$ is of \emph{finite type} if the collection of all cluster variables is a finite set. In \cite{fomin-zelevinsky2} Fomin and Zelevinsky classified cluster algebras of finite type.
\begin{theorem}[{\cite[Theorem 1.4]{fomin-zelevinsky2}}]
The cluster algebra $\mathcal{A}(b,c)$ is of finite type if and only if $bc\le3$.
\end{theorem}
The proof of this theorem establishes a connection to the theory of rank 2 Kac-Moody root systems. Thus we say $\mathcal{A}(b,c)$ is of \emph{affine} (resp. \emph{wild}) type if $bc=4$ (resp. $bc\ge5$).
An element $x\in\bigcap_{m\in\mathbb{Z}}\mathbb{Z}[x_m^{\pm1},x_{m+1}^{\pm1}]$ is called \emph{universally Laurent} since the expansion of $x$ in each cluster is a Laurent polynomial. If the coefficients of the Laurent expansions of $x$ in each cluster are positive integers, then $x$ is called \emph{universally positive}. A nonzero universally positive element in $\mathcal{A}(b,c)$ is said to be \emph{indecomposable} if it cannot be expressed as a sum of two nonzero universally positive elements.
One of the main results of \cite{sz-Finite-Affine} is the following: if the commutative cluster algebra $\mathcal{A}(b,c)$ is of finite or affine type, then the indecomposable universally positive elements form a $\mathbb{Z}$-basis in $\mathcal{A}(b,c)$, moreover this basis contains all \emph{cluster monomials} $x_m^{a_1}x_{m+1}^{a_2}$ ($m\in\mathbb{Z}$, $a_1,a_2\in\mathbb{Z}_{\ge0}$). However, in the wild types the situation becomes much more complicated; in particular, it is shown by Lee, Li, and Zelevinsky in \cite{LLZ2} that for $bc\ge5$ the indecomposable universally positive elements of $\mathcal{A}(b,c)$ are linearly dependent. The \emph{greedy basis} of $\mathcal{A}(b,c)$ introduced in \cite{llz} is a subset (a proper subset in wild types) of the indecomposable positive elements which has a manifestly positive combinatorial description.
The elements of the greedy basis take on a particular form, which is motivated by a well-known pattern in the initial cluster expansion of cluster monomials. An element $x \in \mathcal{A}(b,c)$ is \emph{pointed} at $(a_1, a_2) \in \mathbb{Z}^2$ if it can be written in the form
\begin{equation}
\label{eq:pointed-expansion}
x=x_1^{-a_1} x_2^{-a_2} \sum_{p,q \geq 0} e(p,q) x_1^{bp} x_2^{cq}
\end{equation}
with $e(0,0)=1$ and $e(p,q):=e_{a_1,a_2}(p,q) \in \mathbb{Z}$ for all $p,q\ge0$.
The following two theorems summarize the results from \cite{llz}.
\begin{theorem}
\label{proposition 1.6 of llzp}
For each $(a_{1},a_{2})\in\mathbb{Z}^{2}$, there exists a unique element in $\mathcal{A}(b,c)$ pointed at $(a_1,a_2)$ whose coefficients $e(p,q)$ satisfy the following recurrence relation:
\begin{equation}\label{eq:classical recurrence}
e(p,q)= \begin{cases}
\sum\limits_{k=1}^p(-1)^{k-1}e(p-k,q){ [a_2-cq]_++k-1\choose k} & \text{ if $ca_1q\le ba_2p$;}\\
\sum\limits_{\ell=1}^q(-1)^{\ell-1}e(p,q-\ell){[a_1-bp]_++\ell-1\choose \ell} & \text{ if $ca_1q\ge ba_2p$.}
\end{cases}
\end{equation}
where we use the standard notation $[a]_+=\max(a,0)$.
\end{theorem}
We define the {\it greedy element} pointed at $(a_1,a_2)$, denoted $x[a_1,a_2]$, as the unique element determined by Theorem
\ref{proposition 1.6 of llzp}.
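The recurrence (\ref{eq:classical recurrence}) is effective, and the greedy
support lies in $0\le p\le [a_2]_+$, $0\le q\le [a_1]_+$. A direct
implementation (when both inequalities of the dichotomy hold, either branch
may be used, as they agree):
\begin{verbatim}
from functools import lru_cache
from math import comb

def greedy_coeffs(a1, a2, b, c):
    @lru_cache(maxsize=None)
    def e(p, q):
        if p < 0 or q < 0:
            return 0
        if (p, q) == (0, 0):
            return 1
        if c * a1 * q <= b * a2 * p:   # first branch of the recurrence
            return sum((-1) ** (k - 1) * e(p - k, q)
                       * comb(max(a2 - c * q, 0) + k - 1, k)
                       for k in range(1, p + 1))
        return sum((-1) ** (l - 1) * e(p, q - l)
                   * comb(max(a1 - b * p, 0) + l - 1, l)
                   for l in range(1, q + 1))
    return {(p, q): e(p, q)
            for p in range(max(a2, 0) + 1)
            for q in range(max(a1, 0) + 1) if e(p, q) != 0}

# Kronecker type b = c = 2: x[1,1] = (x1^2 + x2^2 + 1)/(x1*x2)
print(greedy_coeffs(1, 1, 2, 2))   # {(0, 0): 1, (0, 1): 1, (1, 0): 1}
\end{verbatim}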
\begin{theorem}
\label{main theorem-commutative}
Fix positive integers $b,c$.
{\rm(a)} The greedy elements $x[a_1,a_2]$ for $(a_1, a_2) \in \mathbb{Z}^2$ form a $\mathbb{Z}$-basis in $\mathcal{A}(b,c)$, which we refer to as the \emph{greedy basis}.
{\rm(b)} The greedy basis is independent of the choice of an initial cluster.
{\rm(c)} The greedy basis contains all cluster monomials.
{\rm(d)} Greedy elements are universally positive and indecomposable.
\end{theorem}
Our goal in this work is to generalize the above two theorems to the setting of rank 2 quantum cluster algebras. The proof of Theorem~\ref{main theorem-commutative} given in \cite{llz} uses combinatorial objects called \emph{compatible pairs} in an essential way (cf. \cite[Theorem 1.11]{llz}). Unfortunately this method has limitations in generalizing to the study of quantum cluster algebras. More precisely, if one could construct quantum greedy elements by assigning powers of $q$ to each compatible pair then the quantum greedy elements would have to be universally positive, which is unfortunately false (for instance, see \cite[Section 3]{llrz}). So we shall take a completely different approach to prove the quantum analogue of this theorem.
\subsection{Rank 2 quantum cluster algebras}
We now define our main objects of study, namely quantum cluster algebras, and recall important fundamental facts related to these algebras. We restrict our attention to rank 2 quantum cluster algebras where we can describe the setup in very concrete terms. We follow (as much as possible) the notation and conventions of \cite{llz,triangular}.
We work in the quantum torus
$$\mathcal{T}:=\mathbb{Z}[v^{\pm 1}]\langle X_1^{\pm1},X_2^{\pm1}: X_2 X_1=v^2 X_1 X_2\rangle$$
(this setup is related to the one in \cite{rupel1} which uses the formal variable $q$ instead of $v$ by setting $q = v^{-2}$). There are many choices for quantizing cluster algebras, to rigidify the situation we require the quantum cluster algebra to be invariant under a certain involution.
The \emph{bar-involution} is the $\mathbb{Z}$-linear anti-automorphism of $\mathcal{T}$ determined by $\overline{f}(v)=f(v^{-1})$ for $f\in\mathbb{Z}[v^{\pm1}]$ and
\[\overline{fX_1^{a_1}X_2^{a_2}}=\overline{f}X_2^{a_2}X_1^{a_1}=v^{2a_1a_2}\overline{f}X_1^{a_1}X_2^{a_2}\quad\text{($a_1,a_2\in\mathbb{Z}$)}.\]
An element which is invariant under the bar-involution is said to be \emph{bar-invariant}.
Let $\mathcal{F}$ be the skew-field of fractions
of $\mathcal{T}$. The \emph{quantum cluster algebra} $\mathcal{A}_v(b,c)$ is the $\mathbb{Z}[v^{\pm1}]$-subalgebra of $\mathcal{F}$ generated by the \emph{cluster variables} $X_m$ ($m \in\mathbb{Z}$) defined recursively by the \emph{exchange relations}
\[X_{m+1}X_{m-1}=\begin{cases} v^{b}X_m^b+1 & \text{ if $m$ is odd;}\\ v^{c}X_m^c+1 & \text{ if $m$ is even.}\end{cases}\]
By a simple induction one can easily check the following (quasi-)commutation relations between neighboring cluster variables $X_m,X_{m+1}$ in $\mathcal{A}_v(b,c)$:
\begin{equation}\label{eq:commutation-A11}
X_{m+1} X_m = v^2 X_m X_{m+1} \quad (m \in \mathbb{Z}).
\end{equation}
It then follows from an easy induction that all cluster variables are bar-invariant, therefore $\mathcal{A}_v(b,c)$ is also stable under the bar-involution. Equation \eqref{eq:commutation-A11} implies that each \emph{cluster} $\{X_m, X_{m+1}\}$ generates a quantum torus
\[\mathcal{T}_m:=\mathbb{Z}[v^{\pm1}]\langle X_m^{\pm1},X_{m+1}^{\pm1}: X_{m+1} X_m = v^2 X_m X_{m+1}\rangle.\]
It is easy to see that the bar-involution does not depend on the choice of an initial quantum torus $\mathcal{T}_m$.
The appropriate quantum analogues of cluster monomials for $\mathcal{A}_v(b,c)$ are the (bar-invariant) \emph{quantum cluster monomials} which are normalized monomials in the quantum tori $\mathcal{T}_m$, more precisely they are
\[X^{(a_1,a_2)}_m = v^{a_1 a_2} X_m^{a_1} X_{m+1}^{a_2} \quad (a_1, a_2 \in \mathbb{Z}_{\geq 0}, \,\, m \in \mathbb{Z}).\]
For convenience we will often abbreviate $X^{(a_1,a_2)}:=X^{(a_1,a_2)}_1$.
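Concretely, the quantum torus is easy to model on a computer: store a Laurent
polynomial as a map from exponent vectors to coefficients in
$\mathbb{Z}[v^{\pm 1}]$, and multiply normalized monomials by the rule
$X^{(a_1,a_2)}X^{(b_1,b_2)}=v^{a_2b_1-a_1b_2}X^{(a_1+b_1,a_2+b_2)}$, which
follows from (\ref{eq:commutation-A11}). A minimal sketch:
\begin{verbatim}
import sympy as sp

v = sp.symbols('v')

def mult(f, g):
    # product in the quantum torus, written in the basis of normalized
    # monomials: X^(a) * X^(b) = v^(a2*b1 - a1*b2) * X^(a + b)
    h = {}
    for (a1, a2), fc in f.items():
        for (b1, b2), gc in g.items():
            key = (a1 + b1, a2 + b2)
            term = sp.expand(fc * gc * v ** (a2 * b1 - a1 * b2))
            h[key] = sp.expand(h.get(key, 0) + term)
    return {k: c for k, c in h.items() if c != 0}

X1, X2 = {(1, 0): 1}, {(0, 1): 1}
assert mult(X2, X1) == {(1, 1): v}        # equivalently X2*X1 = v^2 * X1*X2
assert mult(X1, X2) == {(1, 1): v**-1}

# the exchange relation X3*X1 = v^b * X2^b + 1 with b = 2 gives
b = 2
X3 = mult({(0, b): v**b, (0, 0): 1}, {(-1, 0): 1})
print(X3)   # {(-1, 2): 1, (-1, 0): 1}: the pointed expansion of X_3
\end{verbatim}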
The following quantum analogue of the Strong Laurent Phenomenon was proven by Berenstein and Zelevinsky in \cite{quantum}.
\begin{theorem}[{\cite[Theorem 5.1, Corollary 5.2 and Theorem 7.5]{quantum}}]\label{thm1}
For any $m\in\mathbb{Z}$ we have $\mathcal{A}_v(b,c)\subset\mathcal{T}_m$. Moreover, $$\mathcal{A}_v(b,c)=\bigcap\limits_{m\in\mathbb{Z}}\mathcal{T}_m=\bigcap\limits_{m=0}^{2}\mathcal{T}_m.$$
\end{theorem}
A nonzero element of $\mathcal{A}_v(b,c)$ is called {\it universally positive} if it lies in $$\bigcap\nolimits_{m\in\mathbb{Z}}\mathbb{Z}_{\geq 0}[v^{\pm1}][X_m^{\pm1},X_{m+1}^{\pm1}].$$
A universally positive element in $\mathcal{A}_v(b,c)$ is {\it indecomposable} if it cannot be expressed as a sum of two universally positive elements.
As a very special case of the results of \cite{Q, Ef, DMSS, N-quiver, KQ} one may see that cluster monomials are universally positive Laurent polynomials.
Explicit combinatorial expressions for the positive coefficients can be obtained from the results of \cite{ls, rupel2, LL:qgrass}.
\subsection{Quantum greedy bases}
This section contains the main result of the paper. Here we introduce the quantum greedy basis and present its nice properties.
Similar to the greedy elements, the elements of the quantum greedy basis take on the following particular form.
An element $X \in \mathcal{T}$ (resp. $X\in\mathcal{T}\otimes_\mathbb{Z} \mathbb{Q}$) is said to be {\it pointed} at $(a_1, a_2) \in \mathbb{Z}^2$ if it has the form
\begin{equation}
\label{eq:pointed-expansion-q}
X=\sum\limits_{p,q\ge0} e(p,q)X^{(-a_1+bp,-a_2+cq)}
\end{equation}
with $e(0,0)=1$ and $e(p,q)\in\mathbb{Z}[v^{\pm1}]$ (resp. $e(p,q)\in\mathbb{Q}[v^{\pm1}]$) for all $p$ and $q$.
All quantum cluster variables, hence all quantum cluster monomials, are pointed.
For $n,k\in\mathbb{Z}$ and for $w=v^{b}$ or $v^{c}$, define the bar-invariant quantum numbers and quantum binomial coefficients by
\[
\aligned
{}[n]_w&=\frac{w^n-w^{-n}}{w-w^{-1}}=\operatorname{sgn}(n)\big(w^{|n|-1}+w^{|n|-3}+\cdots+w^{-|n|+1}\big);\\
{n\brack k}_w&=\frac{[n]_w[n-1]_w\cdots[n-k+1]_w}{[k]_w[k-1]_w\cdots[1]_w}.
\endaligned
\]
Note that ${n\brack k}_w$ is a Laurent polynomial in $w$ and hence in $v$ as well. It is immediate that $[-n]_w=-[n]_w$ and thus ${-n\brack k}_w=(-1)^k{n+k-1\brack k}_w$.
\begin{definition}\label{le,ge} A pointed element $\sum e(p,q)X^{(-a_1+bp,-a_2+cq)}$ is said to satisfy the recurrence relation $(\le,\ge)$ if the following recursion is satisfied for $(p,q)\ne(0,0)$:
\begin{equation}\label{eq:recurrence}
e(p,q)= \begin{cases}
\sum\limits_{k=1}^p(-1)^{k-1}e(p-k,q){[a_2-cq]_++k-1\brack k}_{v^b} & \text{ if }ca_1q\le ba_2p;\\
\sum\limits_{\ell=1}^q(-1)^{\ell-1}e(p,q-\ell){[a_1-bp]_++\ell-1\brack \ell}_{v^c} & \text{ if }ca_1q\ge ba_2p.\\
\end{cases}
\end{equation}
We denote by $(\le,>)$ the recurrence relation obtained from \eqref{eq:recurrence} by replacing ``$\ge$'' by ``$>$'', and $(<,\ge)$ the recurrence relation obtained from \eqref{eq:recurrence} by replacing ``$\le$'' by ``$<$''.
\end{definition}
\begin{theorem}\label{main theorem1}
For each $(a_{1},a_{2})\in\mathbb{Z}^{2}$, there exists a unique element pointed at $(a_1,a_2)$ whose coefficients $e(p,q)$ satisfy the recurrence relation $(\le,\ge)$.
\end{theorem}
\noindent Such an element is called a {\it quantum greedy element} and will be denoted by $X[a_{1},a_{2}]$.
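To illustrate the recurrence \eqref{eq:recurrence} in the smallest imaginary case, take $b=c=2$ and $(a_1,a_2)=(1,1)$. A direct computation gives $e(1,0)=e(0,1)=1$, while every other coefficient with $(p,q)\neq(0,0)$ vanishes (for instance $e(1,1)=e(0,1){[1-2]_+\brack 1}_{v^2}=0$), so that
\[X[1,1]=X^{(-1,-1)}+X^{(1,-1)}+X^{(-1,1)},\]
which, in accordance with Theorem \ref{main theorem2}(e) below, specializes at $v=1$ to the commutative greedy element $x[1,1]=(1+x_1^2+x_2^2)/(x_1x_2)$.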
\begin{corollary}\label{cor:greedy in cluster algebra}
Each quantum greedy element $X[a_{1},a_{2}]$ is in $\mathcal{A}_v(b,c)$.
\end{corollary}
The following theorem asserts that the quantum greedy elements possess all the desired properties described in Theorem \ref{main theorem-commutative} except for the positivity in part (d).
\begin{theorem}[Main Theorem]\label{main theorem2}
Fix positive integers $b,c$.
{\rm(a)} The quantum greedy elements $X[a_1,a_2]$ for $(a_1, a_2) \in \mathbb{Z}^2$ form a $\mathbb{Z}[v^{\pm1}]$-basis in $\mathcal{A}_v(b,c)$, which we refer to as the \emph{quantum greedy basis}.
{\rm(b)} The quantum greedy basis is bar-invariant and independent of the choice of an initial cluster.
{\rm(c)} The quantum greedy basis contains all cluster monomials.
{\rm(d)} If $X[a_1,a_2]$ is universally positive, then it is indecomposable.
{\rm(e)} The quantum greedy basis specializes to the commutative greedy basis by the substitution $v=1$.
\end{theorem}
The proofs of Theorems \ref{main theorem1} and \ref{main theorem2} are given at the end of \S5. Regarding Theorem \ref{main theorem2}(d), we propose the following conjecture based on extensive computation.
\begin{conjecture}
If $b|c$ or $c|b$, then all quantum greedy elements are universally positive, hence indecomposable.
\end{conjecture}
The structure of the paper is as follows. Section 2 studies certain $\mathbb{Z}$-linear automorphisms on $\mathcal{A}_v(b,c)$. Section 3 gives a non-recursive characterization of quantum greedy elements which highlights the importance of the observations of Section 2. Section 4 defines the weaker version of quantum greedy elements, namely the upper (and lower) quasi-greedy elements, and gives a non-recursive characterization for them similar to that given in Section 3. We also show in Section 4 that an upper or lower quasi-greedy element is sent to an upper or lower quasi-greedy element by the automorphisms defined in Section 2. Section 5 studies the quasi-greedy elements; in particular, we show that the upper and lower quasi-greedy elements are indeed identical and consequently prove the main theorems (Theorems \ref{main theorem1} and \ref{main theorem2}). In the Appendix (Section 6) we remind the reader of the quantum binomial theorem.\\
{\bf Acknowledgments:}
Many of the ideas that D.~R. and A.~Z. contributed to this work arose during their stay at the Mathematical Sciences Research Institute as part of the Cluster Algebras Program. They would like to thank the MSRI for its hospitality and support.
Research supported in part by NSF grants DMS-0901367 (K.~L.) and DMS-1103813 (A.~Z.), and by Oakland University URC Faculty Research Fellowship Award (L.~L.).
The authors would like to thank F.~Qin for valuable discussions.
Sadly Andrei Zelevinsky passed away in the early stages of writing. We hope to have achieved the clarity of exposition that Andrei's artful eye would have provided.
\section{Automorphisms acting on rank 2 quantum cluster algebras}\label{sec:symmetries}
We consider in this section a quantum analogue of the group of automorphisms of $\mathcal{A}(b,c)$ introduced in \cite{llz}.
\begin{lemma}\label{automorphism group:q}
For each integer $\ell$, there is a well-defined $\mathbb{Z}$-linear involutive automorphism $\sigma_\ell$ on $\mathcal{T}$ satisfying
$$\sigma_\ell(X_m) = X_{2 \ell-m},\quad \sigma_\ell(f(v))=f(v^{-1}).$$
It restricts to a $\mathbb{Z}$-linear automorphism of $\mathcal{A}_v(b,c)$, and specializes at $v=1$ to the corresponding automorphism of $\mathcal{A}(b,c)$. The group $W$ generated by $\{\sigma_\ell\}_{\ell\in\mathbb{Z}}$ is a dihedral group generated by $\sigma_1$ and $\sigma_2$, and is finite if and only if $bc\le 3$.
\end{lemma}
\begin{proof}
It suffices to check that $\sigma_\ell(X_{m+1}X_m)=v^{-2}\sigma_\ell(X_mX_{m+1})$. The proof is straightforward.
\end{proof}
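Concretely, $\sigma_\ell\sigma_{\ell'}(X_m)=\sigma_\ell(X_{2\ell'-m})=X_{2(\ell-\ell')+m}$, so the composition $\sigma_2\sigma_1$ acts on cluster variables as the shift $X_m\mapsto X_{m+2}$; this shift has finite order precisely when the sequence $(X_m)_{m\in\mathbb{Z}}$ is periodic, i.e. in the finite types $bc\le3$, which accounts for the last assertion of Lemma \ref{automorphism group:q}.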
Let $Y=\sum\limits_{p,q\ge0} d(p,q)X^{(-a_1+bp,-a_2+cq)}$ be an element in $\mathcal{T}$ or $\mathcal{T}\otimes_\mathbb{Z}\mathbb{Q}$ pointed at $(a_1,a_2)$. We say that $Y$ satisfies the {\it divisibility condition} if
\begin{equation}\label{eq:divisibility}
\; \left\{\begin{array}{ll} \sum\limits_{k\ge0}{a_2-cq\brack k}_{v^b}t^k\ \bigg|\ \sum\limits_{p\ge0} d(p,q)t^p,\text{\; for every }0\le q< a_2/c;\\
\sum\limits_{\ell\ge0}{a_1-bp\brack \ell}_{v^c}t^\ell\ \bigg|\ \sum\limits_{q\ge0} d(p,q)t^q,\text{\; for every }0\le p<a_1/b. \end{array}\right.
\end{equation}
\begin{lemma}\label{lem:divisibility}
Let $Y$ be as above.
{\rm(1)} $\sigma_1(Y)\in\mathcal{T}$ if and only if the first condition of \eqref{eq:divisibility} holds.
{\rm(2)} $\sigma_2(Y)\in\mathcal{T}$ if and only if the second condition of \eqref{eq:divisibility} holds.
As a consequence, $Y\in \mathcal{A}_v(b,c)$ if and only if the divisibility condition \eqref{eq:divisibility} holds.
\end{lemma}
\begin{proof}
We only prove (2), the proof of (1) is similar. For every $0\le p\le a_2$, set $p'=a_2-p$, $a'_1=-a_1+ba_2$, and define $\{d'(p',q)\}_{q\ge0}$ using the power series expansion
\begin{equation}\label{eq:d'}
\sum_{q\ge0} d'(p',q)t^q=\Bigg(\sum\limits_{\ell\ge0}{-a_1+bp\brack \ell}_{v^c}t^\ell\Bigg)\Bigg(\sum_{q\ge0} d(p,q)t^q\Bigg).
\end{equation}
We claim that the following identity holds for every $p\ge0$:
\begin{equation}\label{eq:sigma2:q}
\sigma_2\Big(\sum_{q\ge0} d(p,q)X^{(-a_1+bp,-a_2+cq)}\Big)=
\sum_{q\ge0} d'(p',q)X^{(-a'_1+bp',-a_2+cq)}.
\end{equation}
Indeed, since $X_3=(v^cX_2^c+1)X_1^{-1}$, by Lemma~\ref{le:quantum binomial theorem} we have
\begin{align}\label{eq:variable power}
X_3^n&=\sum\limits_{\ell\ge0}{n\brack \ell}_{v^c} v^{-nc\ell}X_1^{-n}X_2^{c\ell}
\end{align}
so that the left hand side of \eqref{eq:sigma2:q} is
$$\aligned
&=\sum_{q\ge0} d(p,q)v^{-(-a_1+bp)(-a_2+cq)}X_3^{-a_1+bp}X_2^{-a_2+cq}\\
&=\Bigg(\sum\limits_{\ell\ge0}{-a_1+bp\brack \ell}_{v^c}v^{(a_1-bp)c\ell}X_1^{a_1-bp}X_2^{c\ell}\Bigg)\Bigg(\sum_{q\ge0} d(p,q)v^{(a_1-bp)(-a_2+cq)}X_2^{-a_2+cq}\Bigg)\\
&=X_1^{a_1-bp}(v^{a_1-bp}X_2)^{-a_2}\Bigg(\sum\limits_{\ell\ge0}{-a_1+bp\brack \ell}_{v^c}\big(v^{(a_1-bp)c}X_2^c\big)^\ell\Bigg)\Bigg(\sum_{q\ge0} d(p,q)\big(v^{(a_1-bp)c}X_2^c\big)^q\Bigg)\\
&\stackrel{\eqref{eq:d'}}{=}X_1^{a_1-bp}(v^{a_1-bp}X_2)^{-a_2}\sum_{q\ge0} d'(p',q)\big(v^{(a_1-bp)c}X_2^c\big)^q\\
&=\sum_{q\ge0} d'(p',q)v^{(a_1-bp)(-a_2+cq)}X_1^{a_1-bp}X_2^{-a_2+cq},\\
\endaligned
$$
which is equal to the right hand side of \eqref{eq:sigma2:q}.
Combining the equalities \eqref{eq:sigma2:q} for all $p$ we get
$$\sigma_2(Y)=\sum_{p',q\ge0}d'(p',q)X^{(-a'_1+bp',-a_2+cq)}.$$
It follows that $\sigma_2(Y)$ is in $\mathcal{T}$ if and only if all but finitely many elements in $\{d'(p',q)\}_{p',q\ge0}$ are 0, in particular the left hand side of \eqref{eq:d'} must be finite. Note that this finiteness is trivial for $bp\ge a_1$. By Lemma~\ref{cor:quantum convolution} we have
\[\sum\limits_{\ell\ge0}{-a_1+bp\brack \ell}_{v^c}t^\ell=\Bigg(\sum\limits_{\ell\ge0}{a_1-bp\brack \ell}_{v^c}t^\ell\Bigg)^{-1}\]
and so finiteness is then equivalent to the divisibility condition in (2).
The final claim follows immediately from Theorem \ref{thm1}.
\end{proof}
\section{An equivalent statement to Theorem~\ref{main theorem1}}
By analogy with the greedy elements $x[a_1,a_2]$ we will use the following definition in describing the support of our quantum greedy elements $X[a_1,a_2]$.
\begin{definition}\cite{llz}\label{df:PSR}
For integers $a_1,a_2$, the \emph{pointed support region} $R_{\text{greedy}}[a_1,a_2]$ of $x[a_1,a_2]$ is a subset of $\mathbb{R}_{\ge0}^2$ defined in the following six cases.
\begin{enumerate}
\item If $a_1 \leq 0$ and $a_2 \leq 0$, then $R_{\text{greedy}}[a_1,a_2] := \{(0,0)\}$.
\item If $a_1 \leq 0 < a_2$, then $R_{\text{greedy}}[a_1,a_2] := \{(p,0)\in\mathbb{R}_{\ge0}^2\big|\; 0 \leq p \leq a_2\}$.
\item If $a_2 \leq 0 < a_1$, then $R_{\text{greedy}}[a_1,a_2] := \{(0,q)\in\mathbb{R}_{\ge0}^2\big|\; 0 \leq q \leq a_1\}$.
\item If $0<ba_2\leq a_1$, then
$$
R_{\text{greedy}}[a_{1},a_{2}]:=\big\{(p,q)\in\mathbb{R}_{\ge0}^2\big|\;
p\le a_{2},\; q\le a_{1}-bp\big\}.
$$
\item If $0<ca_1\leq a_2$, then
$$
R_{\text{greedy}}[a_{1},a_{2}]:=\big\{(p,q)\in\mathbb{R}_{\ge0}^2\big|\;
q\le a_{1},\; p\le a_{2}-cq\big\}.
$$
\item If $0 < a_1 < ba_2$ and $0 < a_2 < ca_1$, then
$$\aligned
&R_{\text{greedy}}[a_{1},a_{2}]:=
\bigg\{(p,q)\in\mathbb{R}_{\ge0}^2\bigg|\; 0\le p< \frac{a_1}{b},\; q+\Big(b-\frac{ba_2}{ca_1}\Big)p<a_1\bigg\}\\
&\hspace{1.06in}\bigcup\bigg\{(p,q)\in\mathbb{R}_{\ge0}^2\bigg|\; 0\le q< \frac{a_2}{c},\; p+\Big(c-\frac{ca_1}{ba_2}\Big)q<a_2\bigg\}
\bigcup\Big\{(0,a_1),(a_2,0)\Big\}.
\endaligned
$$
\end{enumerate}
\end{definition}
\begin{remark}
(1) Here is a more geometric description (see Figure \ref{fig:PSR}). Denote
$O=(0,0), A=(a_2,0), B=(a_1/b, a_2/c), C=(0,a_1), D_{1}=(a_{2},a_{1}-ba_{2}), D_{2}=(a_{2}-ca_{1},a_{1}).$ Denote by $OAD_{1}C$ and $OAD_{2}C$ the corresponding closed trapezoidal regions, and denote by $OABC$ the region that includes the closed segments $OA$ and $OC$ but excludes the rest of the boundary. Then
$R_{\text{greedy}}[a_{1},a_{2}]$ is: $O$ in case (1); $OA$ in case (2); $OC$ in case (3); $OAD_{1}C$ in case (4); $OAD_{2}C$ in case (5); $OABC$ in case (6). Also note that $D_{1}\in OABC$ in case (4), and $D_{2}\in OABC$ in case (5).
(2) We compare some similar definitions we use in this paper and \cite{llz}. For an element $x \in \mathcal{A}(b,c)$ pointed at $(a_1, a_2)$, express $x$ in two ways:
$$
x = \sum_{p,q}d(p,q)x_1^px_2^q\; =\; x_1^{-a_1} x_2^{-a_2} \sum_{p,q \geq 0} e(p,q) x_1^{bp} x_2^{cq}
$$
The set $\{(p,q)\in\mathbb{Z}^{2}\; |\; d(p,q)\neq0 \}$ (resp. $\{(p,q)\in\mathbb{Z}_{\ge0}^{2}\; |\; e(p,q)\neq0 \}$)
is called the \emph{support} (resp. the \emph{pointed support}) of $x$. The support of $x$ is the image of the pointed support of $x$ under the transition map
$$\varphi: \mathbb{R}^2\to\mathbb{R}^2,\quad (p,q)\mapsto(-a_1+bp,-a_2+cq).$$
It was shown in \cite{llz} that the pointed support of the greedy element $x[a_1,a_2]$ is contained in the pointed support region $R_{\text{greedy}}[a_1,a_2]$, or equivalently, that the support of $x[a_1,a_2]$ is contained in the \emph{support region} of $x[a_1,a_2]$ defined as
$$S[a_1,a_2]:=\varphi(R_{\text{greedy}}[a_1,a_2])\subseteq \mathbb{R}^{2}.$$
\end{remark}
\begin{figure}[h]
\begin{tikzpicture}[scale=.7]
\begin{scope}[shift={(.5, 5.5)}]
\usetikzlibrary{patterns}
\draw (0,0) node[anchor=east] {\tiny $O$};
\draw[->] (0,0) -- (2.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,2.5)
node[left] {\tiny $q$};
\fill (0,0) circle (2pt);
\draw (-1,-.5) circle (1.5pt);
\draw (-1,-.5) node[anchor=south] {\tiny $B$};
\draw (0.5,-.6) node[anchor=north] {\footnotesize (1) $a_1,a_2\le0$};
\end{scope}
\begin{scope}[shift={(6, 5.5)}]
\usetikzlibrary{patterns}
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2,0) node[anchor=south] {\tiny$A$};
\draw[->] (0,0) -- (2.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,2.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (2,0) circle (1.5pt);
\draw [very thick] (0,0) -- (2,0);
\draw (-1,.5) circle (1.5pt);
\draw (-1,.5) node[anchor=south] {\tiny $B$};
\draw (1,-.6) node[anchor=north] {\footnotesize (2) $a_1\le 0<a_2$};
\end{scope}
\begin{scope}[shift={(11.5,5.5)}]
\usetikzlibrary{patterns}
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (0,2) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (2.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,2.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (0,2) circle (1.5pt);
\draw [very thick] (0,0) -- (0,2);
\draw (1,-.3) circle (1.5pt);
\draw (1,-.3) node[anchor=west] {\tiny $B$};
\draw (1,-.6) node[anchor=north] {\footnotesize (3) $a_2\le0<a_1$};
\end{scope}
\usetikzlibrary{patterns}
\draw (0,3)--(1.5,1.5)--(1.5,0);
\draw[dashed](0,3)--(2.15,1.25) (2.15,1.15)--(1.5,0);
\draw (2.2,1.2) circle (2pt);
\draw (2.3,1.2) node[anchor=west] {\tiny$B$};
\fill [black!10] (0,3)--(1.5,1.5)--(1.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (1.5,-.1) node[anchor=south west] {\tiny$A$};
\draw (1.4,1.5) node[anchor= south west] {\tiny$D_{1}$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (0,3) circle (1.5pt);
\fill (1.5,0) circle (1.5pt);
\fill (1.5,1.5) circle (1.5pt);
\draw (1.5,-.5) node[anchor=north] {\footnotesize (4) $0<ba_2\leq a_1$};
\begin{scope}[shift={(5.5,0)}]
\usetikzlibrary{patterns}
\draw (3,0)--(1.3,1)--(0,1);
\draw[dashed](3,0)--(0.95,1.45) (0.85,1.47)--(0,1);
\draw (0.9,1.5) circle (2pt);
\draw (0.9,1.5) node[anchor=south] {\tiny$B$};
\fill [black!10] (3,0)--(1.3,1)--(0,1)--(0,0)--(3,0);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.7,0) node[anchor=south west] {\tiny$A$};
\draw (1.3,.9) node[anchor=south west] {\tiny$D_{2}$};
\draw (0,1) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (0,1) circle (1.5pt);
\fill (3,0) circle (1.5pt);
\fill (1.3,1) circle (1.5pt);
\draw (1.5,-.5) node[anchor=north] {\footnotesize (5) $0<ca_1\leq a_2$};
\end{scope}
\begin{scope}[shift={(11,0)}]
\usetikzlibrary{patterns}
\draw[dashed] (0,3)--(1.5,1.8)--(2.5,0);
\fill [black!10] (0,3)--(1.5,1.77)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.2,0) node[anchor=south west] {\tiny$A$};
\draw (1.5,1.8) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (0,3) circle (1.5pt);
\fill (2.5,0) circle (1.5pt);
\draw (1.5,-.5) node[anchor=north] {\footnotesize (6) $0<a_1<ba_2$,};
\draw (1.5,-1) node[anchor=north] {\footnotesize \hspace{10pt} $0<a_2<ca_1$,};
\draw (1.5,-1.5) node[anchor=north] {\footnotesize \hspace{10pt} \tiny{$(a_1,a_2):$ non-imaginary root}};
\end{scope}
\begin{scope}[shift={(16,0)}]
\usetikzlibrary{patterns}
\draw[dashed] (0,3)--(.7,.78) (.78,.7)--(2.5,0);
\fill [black!10] (0,3)--(.7,.7)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.2,0) node[anchor=south west] {\tiny$A$};
\draw (.6,.6) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[above] {\tiny $p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {\tiny $q$};
\fill (0,0) circle (1.5pt);
\fill (0,3) circle (1.5pt);
\fill (2.5,0) circle (1.5pt);
\draw (1.5,-.5) node[anchor=north] {\footnotesize (6) $0<a_1<ba_2$,};
\draw (1.5,-1) node[anchor=north] {\footnotesize \hspace{10pt} $0<a_2<ca_1$,};
\draw (1.5,-1.5) node[anchor=north] {\footnotesize \hspace{10pt} \tiny{imaginary root}};
\end{scope}
\end{tikzpicture}
\caption{\thickmuskip=0mu\small $R_{\text{greedy}}[a_1,a_2]$.}
\label{fig:PSR}
\end{figure}
We aim to show that the quantum greedy elements defined by \eqref{eq:recurrence} satisfy the same support conditions. The following general result will be useful for many of the cases. In the following proposition and throughout the paper, the value of a vacuous product is by convention always equal to $1$.
\begin{proposition}\label{gen_recursion}
Let $m,n$ be nonnegative integers, $w=v^{d}\in\mathbb{Z}[v^{\pm1}]$ for a fixed integer $d$, and
$\{e_p\}_{p\ge0}$ be a sequence of elements in $\mathbb{Z}[v^{\pm1}]$ satisfying
\begin{equation}\label{general_recursion}
e_p=\sum\limits_{k=1}^p(-1)^{k-1}e_{p-k}{m+k-1\brack k}_w
\end{equation}
for all $p>n$. Then the sequence satisfies the following two conditions, and is uniquely determined by them together with the first $n+1$ terms $e_{0},\dots,e_{n}$:
\smallskip
{\rm(a)} $e_p=0$ for all $p>n+m$,
\smallskip
{\rm(b)} $\sum\limits_{k=0}^m{m\brack k}_w t^k\ \bigg|\sum\limits_{p=0}^{m+n}e_pt^p.$
\end{proposition}
\begin{proof}
For $m=0$ there is nothing to show, so we assume $m\ge1$ for the rest of the proof.
Rewrite \eqref{general_recursion} as
\begin{equation}\label{new_eq:alternating sum=0}
\sum_{k=0}^p(-1)^k
e_{p-k}{m+k-1\brack k}_w=0
\end{equation}
Consider two power series in $\mathbb{Z}[v^{\pm1}][[t]]$:
$$F(t):=\sum_{k\ge0}(-1)^k{m+k-1\brack k}_w t^k=\Bigg(\sum\limits_{k=0}^m{m\brack k}_w t^k\Bigg)^{-1}, \quad E(t):=\sum_{p\ge0} e_p t^p,$$
where the second equality defining $F(t)$ follows from Lemma~\ref{cor:quantum convolution}. The left hand side of \eqref{new_eq:alternating sum=0} is the coefficient of $t^p$ in $F(t)E(t)$.
Thus \eqref{new_eq:alternating sum=0} being true for all $p>n$ is equivalent to saying that $F(t)E(t)$ is a polynomial in $t$ of degree at most $n$, which implies that
$\Big(\sum\limits_{k=0}^m{m\brack k}_w t^k\Big)F(t)E(t)=E(t)$ is a polynomial in $t$ of degree at most $n+m$. Therefore $e_{p}=0$ for all $p>n+m$ and
$\sum\limits_{k=0}^m{m\brack k}_w t^k$ divides $E(t)$.
Next we show the uniqueness. If there is another sequence $\{e'_p\}_{p=0}^\infty$ with $e'_p=e_p$ for $0\le p\le n$ which also satisfies (a) and (b), then $\sum\limits_{k=0}^m{m\brack k}_w t^k$ divides
$$\sum\limits_{p\ge0}(e_p-e'_p)t^p=t^{n+1}\sum\limits_{p=n+1}^{n+m}(e_p-e'_p)t^{p-n-1}.$$
But $\sum\limits_{k=0}^m{m\brack k}_w t^k$ and $t^{n+1}$ are coprime, so $\sum\limits_{k=0}^m{m\brack k}_w t^k$ divides $\sum\limits_{p=n+1}^{n+m}(e_p-e'_p)t^{p-n-1}$ which has degree less than $m$ and hence must be 0, i.e. $e_p=e'_p$ for $p>n$.
\end{proof}
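To see the proposition at work in the smallest nontrivial case, take $m=1$ and $n=0$. Since ${k\brack k}_w=1$ for all $k\ge0$, the recursion \eqref{general_recursion} gives $e_1=e_0$ and $e_p=0$ for $p\ge2$; correspondingly, conditions (a) and (b) assert that $e_p=0$ for $p>1$ and that $1+t$ divides $e_0+e_1t$, which again forces $e_1=e_0$.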
Using this result we may compute the coefficients $e(p,0)$ and $e(0,q)$ for any $a_1$ and $a_2$.
\begin{corollary}\label{cor:baseline}
For any $(a_1,a_2)\in\mathbb{Z}^2$ the baseline greedy coefficients can be computed as
\[e(p,0)={[a_2]_+\brack p}_{v^b}\quad\text{and}\quad e(0,q)={[a_1]_+\brack q}_{v^c}\quad\text{for all $p,q\ge0$.}\]
\end{corollary}
\begin{proof}
If $a_1\le0$, then for $p=0$ the first recurrence in \eqref{eq:recurrence} always applies and, since this summation is empty, we see that $e(0,q)=0={0\brack q}_{v^c}$ for $q>0$. Since $e(0,0)=1={0\brack0}_{v^c}$, the equality $e(0,q)={[a_1]_+\brack q}_{v^c}$ holds in this case. A similar argument establishes the equality $e(p,0)={[a_2]_+\brack p}_{v^b}$ when $a_2\le0$.
Now assume $a_2>0$. Then $ca_1q\le ba_2p$ always holds for $q=0$ so in computing $e(p,0)$ only the first recurrence in \eqref{eq:recurrence} applies. To prove the equality $e(p,0)={a_{2}\brack p}_{v^{b}}$ we apply Proposition~\ref{gen_recursion} with $m=a_2$, $n=0$, $w=v^b$, and $e_p=e(p,0)$. From part (a) we obtain $e(p,0)=0$ for $p>a_2$. Now, since $e_0=e(0,0)=1$, part (b) gives $\sum\limits_{p=0}^{a_2}e_pt^p=\sum\limits_{p=0}^{a_2}{a_2\brack p}_{v^b}t^p$ and so $e(p,0)=e_p={a_2\brack p}_{v^b}$. A similar argument proves $e(0,q)={a_1\brack q}_{v^c}$ for $a_1>0$.
\end{proof}
For certain classes of $(a_1,a_2)\in\mathbb{Z}^2$ the recursion \eqref{eq:recurrence} can be computed very explicitly.
\begin{proposition}\label{prop:a1 a2 not positive}
Let $(a_1,a_2)\in\mathbb{Z}^2$ and define $e(p,q)$ as in \eqref{eq:recurrence}. We have the following:
{\rm(1)} If $a_{1}, a_{2}\le0$, then $e(p,q)=0$ for $(p,q)\neq(0,0)$.
{\rm(2)} If $a_{1}\le 0<a_{2}$, then $e(p,0)={a_{2}\brack p}_{v^{b}}$ and $e(p,q)=0$ for $q>0$.
{\rm(3)} If $a_{2}\le 0<a_{1}$, then $e(0,q)={a_{1}\brack q}_{v^{c}}$ and $e(p,q)=0$ for $p>0$.
{\rm(4)} If $0<ba_2\leq a_1$, then $e(p,q)={a_2\brack p}_{v^b}{a_1-bp\brack q}_{v^c}$.
{\rm(5)} If $0<ca_1\leq a_2$, then $e(p,q)={a_2-cq\brack p}_{v^b}{a_1\brack q}_{v^c}$.
\end{proposition}
\begin{proof}
In Corollary~\ref{cor:baseline} we have already established all of the desired formulas for $e(p,0)$ and $e(0,q)$.
For $a_{1}, a_{2}\le0$ both summations in \eqref{eq:recurrence} are empty and thus $e(p,q)=0$ for $(p,q)\ne(0,0)$. For $a_{1}\le 0<a_{2}$, Corollary~\ref{cor:baseline} gives $e(0,q)=0$ for $q>0$ and then a simple induction on $p$ shows that $e(p,q)=0$ whenever $q>0$. For $a_{2}\le 0<a_{1}$, we may similarly conclude that $e(p,q)=0$ whenever $p>0$.
Suppose $0<ba_2\leq a_1$. We aim to show that $e(p,q)={a_2\brack p}_{v^b}{a_1-bp\brack q}_{v^c}$ satisfies \eqref{eq:recurrence}. Indeed, substituting this expression into \eqref{eq:recurrence} gives
\begin{equation}\label{eq:case 4 recurrence}
{a_2\brack p}_{v^b}{a_1-bp\brack q}_{v^c} = \begin{cases}
\sum\limits_{k=1}^p(-1)^{k-1}{a_2\brack p-k}_{v^b}{a_1-b(p-k)\brack q}_{v^c}{[a_2-cq]_++k-1\brack k}_{v^b} & \text{ if }ca_1q\le ba_2p;\\
\sum\limits_{\ell=1}^q(-1)^{\ell-1}{a_2\brack p}_{v^b}{a_1-bp\brack q-\ell}_{v^c}{[a_1-bp]_++\ell-1\brack \ell}_{v^c} & \text{ if }ca_1q\ge ba_2p.\\
\end{cases}
\end{equation}
For $ca_1q\ge ba_2p$ this can be shown directly:
\begin{align*}
&\sum\limits_{\ell=1}^q(-1)^{\ell-1}{a_2\brack p}_{v^b}{a_1-bp\brack q-\ell}_{v^c}{a_1-bp+\ell-1\brack \ell}_{v^c}\\
&\quad\quad\quad\quad=-{a_2\brack p}_{v^b}\sum\limits_{\ell=1}^q{a_1-bp\brack q-\ell}_{v^c}{bp-a_1\brack \ell}_{v^c}={a_2\brack p}_{v^b}{a_1-bp\brack q}_{v^c},
\end{align*}
where the last equality follows from Corollary~\ref{cor:quantum convolution} with $m=a_1-bp$ and $n=-m$.
However, for $ca_1q\le ba_2p$ it appears to be quite mysterious that these quantities coincide: we were unable to pass directly from one to the other as we did in the preceding case. We will leave this as a challenge for the eager reader and proceed by another method; that is, we explicitly compute an element of $\mathcal{A}_v(b,c)$ having these coefficients and utilize the symmetries introduced in Section~\ref{sec:symmetries}.
As in \eqref{eq:variable power} we have
\begin{align*}
X_3^n=\sum\limits_{k=0}^n{n\brack k}_{v^c}X^{(-n,ck)}\quad\text{and}\quad X_4^n=\sum\limits_{p=0}^n\sum\limits_{\ell=0}^{bp}{n\brack p}_{v^b}{bp\brack \ell}_{v^c}X^{(-bp,-n+c\ell)}
\end{align*}
and thus we may expand $X_3^{(a_1-ba_2,a_2)}$ as follows:
\begin{align*}
&v^{a_2(a_1-ba_2)}\sum\limits_{k=0}^{a_1-ba_2}\sum\limits_{p=0}^{a_2}\sum\limits_{\ell=0}^{bp}{a_1-ba_2\brack k}_{v^c}{a_2\brack p}_{v^b}{bp\brack \ell}_{v^c}X^{(-a_1+ba_2,ck)}X^{(-bp,-a_2+c\ell)}\\
&=\sum\limits_{p=0}^{a_2}\sum\limits_{k=0}^{a_1-ba_2}\sum\limits_{\ell=0}^{bp}{a_2\brack p}_{v^b}{a_1-ba_2\brack k}_{v^c}{bp\brack \ell}_{v^c}v^{c\ell(a_1-ba_2)-bckp}X^{(-a_1+b(a_2-p),-a_2+c(k+\ell))}\\
&=\sum\limits_{p=0}^{a_2}\sum\limits_{q=0}^{a_1-b(a_2-p)}{a_2\brack p}_{v^b}\left(\sum\limits_{k+\ell=q}v^{c\ell(a_1-ba_2)-ckbp}{a_1-ba_2\brack k}_{v^c}{bp\brack \ell}_{v^c}\right)X^{(-a_1+b(a_2-p),-a_2+cq)}\\
&=\sum\limits_{p=0}^{a_2}\sum\limits_{q=0}^{a_1-b(a_2-p)}{a_2\brack p}_{v^b}{a_1-b(a_2-p)\brack q}_{v^c}X^{(-a_1+b(a_2-p),-a_2+cq)}\\
&=\sum\limits_{p=0}^{a_2}\sum\limits_{q=0}^{a_1-bp}{a_2\brack p}_{v^b}{a_1-bp\brack q}_{v^c}X^{(-a_1+bp,-a_2+cq)},
\end{align*}
where the third equality used Lemma \ref{cor:quantum convolution}. Applying the symmetry $\sigma_1$ to this element we get $\sigma_1\big(X_3^{(a_1-ba_2,a_2)}\big)=X_{-2}^{(a_2,a_1-ba_2)}=\sum\limits_{p,q'\ge0}d'(p,q')X^{(-a_1+bp,a_2-ca_1+cq')}$ whose pointed support region is $R_{\rm greedy}[a_1,ca_1-a_2]$, an example of the non-imaginary case (6) from Definition~\ref{df:PSR}. In particular, its coefficient $d'(p,q')$ is zero for $q'+\Big(b-\frac{b(ca_1-a_2)}{ca_1}\Big)p\ge a_1$, which is equivalent to $ca_1q\le ba_2p$ for $q'=a_1-q$. But notice that, as in \eqref{eq:d'}, the coefficient $d'(p,q')$ is exactly $\sum\limits_{k=0}^p(-1)^k{a_2\brack p-k}_{v^b}{a_1-b(p-k)\brack q}_{v^c}{a_2-cq+k-1\brack k}_{v^b}$ and the claim follows.
The claim for $0<ca_1\le a_2$ is established by a similar argument involving $X_{-1}^{(a_{1},a_2-ca_{1})}$.
\end{proof}
\begin{corollary}\label{cor:explicit cluster monomials}
Let $(a_1,a_2)\in\mathbb{Z}^2$ and define $e(p,q)$ as in \eqref{eq:recurrence}. We may compute quantum greedy elements as follows:
{\rm(1)} If $a_{1}, a_{2}\le0$, then $X[a_{1},a_{2}]=X_1^{(-a_{1},-a_{2})}$.
{\rm(2)} If $a_{1}\le 0<a_{2}$, then $X[a_{1},a_{2}]=X_{0}^{(a_{2},-a_{1})}$.
{\rm(3)} If $a_{2}\le 0<a_{1}$, then $X[a_{1},a_{2}]=X_{2}^{(-a_{2},a_{1})}$.
{\rm(4)} If $0<ba_2\leq a_1$, then $X[a_{1},a_{2}]=X_{3}^{(a_1-ba_{2},a_{2})}$.
{\rm(5)} If $0<ca_1\leq a_2$, then $X[a_{1},a_{2}]=X_{-1}^{(a_{1},a_2-ca_{1})}$.
\end{corollary}
\begin{proof}
The first case is immediate from Proposition~\ref{prop:a1 a2 not positive}.\\
To see that $X[a_{1},a_{2}]=X_{0}^{(a_{2},-a_{1})}$ for $a_{1}\le 0<a_{2}$ we expand the right hand side using Lemma~\ref{le:quantum binomial theorem} and see that the pointed coefficients are given as in Proposition~\ref{prop:a1 a2 not positive}:
\[X_{0}^{(a_{2},-a_{1})}=v^{-a_1a_2}X_0^{a_2}X_1^{-a_1}=v^{a_1a_2}X_1^{-a_1}(X_2^{-1}+v^{-b}X_1^bX_2^{-1})^{a_2}=\sum\limits_{p=0}^{a_2}{a_2\brack p}_{v^b}X^{(-a_1+bp,-a_2)}.\]
By a similar calculation we see for $a_{2}\le 0<a_{1}$ that
\[X_2^{(-a_2,a_1)}=\sum\limits_{q=0}^{a_1}{a_1\brack q}_{v^c}X^{(-a_1,-a_2+cq)}\]
in accord with Proposition~\ref{prop:a1 a2 not positive}. The claims for $0<ba_2\leq a_1$ and $0<ca_1\leq a_2$ were established in the course of proving Proposition~\ref{prop:a1 a2 not positive}.
\end{proof}
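For a concrete instance of case (4), take $b=c=2$ and $(a_1,a_2)=(2,1)$, so that $X[2,1]=X_3^{(0,1)}=X_4$. Proposition~\ref{prop:a1 a2 not positive}(4) gives $e(p,q)={1\brack p}_{v^2}{2-2p\brack q}_{v^2}$, whence
\[X[2,1]=X^{(-2,-1)}+[2]_{v^2}X^{(-2,1)}+X^{(-2,3)}+X^{(0,-1)},\]
whose specialization at $v=1$ is the familiar Laurent expansion $x_4=(1+2x_2^2+x_2^4+x_1^2)/(x_1^2x_2)$.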
Let $Y=\sum\limits_{p,q\ge0} d(p,q)X^{(-a_1+bp,-a_2+cq)}$ be an element in $\mathcal{T}$ or $\mathcal{T}\otimes_\mathbb{Z}\mathbb{Q}$ pointed at $(a_1,a_2)$ and $R$ be a region in $\mathbb{R}^2$. We say that $Y$ satisfies the {\it pointed support condition for} $R$ if
\begin{equation}\label{eq:support}
\textrm{ $d(p,q)=0$\; if $(p,q)\notin R$.}
\end{equation}
The following lemma gives a non-recursive characterization of quantum greedy elements.
\begin{proposition}\label{prop:equivalent}
Theorem~\ref{main theorem1} is equivalent to the following statement:
for any $(a_1,a_2)\in\mathbb{Z}^2$, there exists a unique element pointed at $(a_1,a_2)$ that satisfies the divisibility condition \eqref{eq:divisibility} and the pointed support condition \eqref{eq:support} for $R_{\text{greedy}}[a_1,a_2]$.
\end{proposition}
\begin{proof}
We consider the six cases of Definition \ref{df:PSR} separately. For $(a_1,a_2)$ in cases (1)-(5), the greedy element uniquely exists and is explicitly given in Corollary~\ref{cor:explicit cluster monomials}, in particular we note that it is in $\mathcal{A}_v(b,c)$. Thus by Lemma~\ref{lem:divisibility} the greedy element satisfies the divisibility condition. We only need to show that it is the unique element pointed at $(a_1,a_2)$ that satisfies the pointed support condition for $R_{\text{greedy}}[a_1,a_2]$, but this follows directly from Proposition \ref{prop:a1 a2 not positive} and Proposition \ref{gen_recursion}.
Now take $0 < a_1 < ba_2$ and $0 < a_2 < ca_1$ and suppose $\{e(p,q)\}$ satisfies $(\le,\ge)$.
For $q>0$ and $p>\lceil ca_{1}q/(ba_{2})\rceil-1$ the sequence $\{e(p,q)\}_{p\ge0}$ is determined by the first recurrence relation of $(\le,\ge)$. So for $0<q< a_2/c$ we apply Proposition \ref{gen_recursion} with $m=a_{2}-cq$, $n=\lceil ca_{1}q/(ba_{2})\rceil-1$, $w=v^{b}$ to get
(i) $e(p,q)=0$ for $p>m+n$\,, and
(ii) $\sum\limits_{k=0}^{a_2-cq}{a_2-cq\brack k}_{v^b} t^k\ \Big|\ \sum\limits_{p\ge0} e(p,q)t^p$ holds for $0<q< a_2/c$.
Combining (ii) and its symmetric counterpart with Corollary \ref{cor:baseline} (which settles the boundary cases $q=0$ and $p=0$), we see that the divisibility condition \eqref{eq:divisibility} holds. Meanwhile, the inequality $p>m+n$ is equivalent to $p\ge a_{2}-cq+ca_{1}q/(ba_{2})$. So (i) is equivalent to the condition that $e(p,q)=0$ if $p\ge a_{2}-cq+ca_{1}q/(ba_{2})$ and $0<q< a_2/c$. By the symmetric argument we see that $e(p,q)=0$ if $q\ge a_1-bp+ba_2p/(ca_1)$ and $0<p< a_1/b$. Combining these observations with Corollary \ref{cor:baseline}, we see that $e(p,q)=0$ if $(p,q)\notin OABC$. This proves \eqref{eq:support} for $R=OABC$. The above argument can be easily reversed to complete the proof of the claim.
Next, we claim that there is at most one element satisfying \eqref{eq:divisibility} and \eqref{eq:support} for $R=OABC$. It suffices to show that for $(p,q)\in OABC$, $e(p,q)$ is determined by those $e(i,j)$ with $(i,j)\le(p,q)$ and $(i,j)\neq(p,q)$. To verify this, assume $ba_2p\ge ca_1q$ without loss of generality. Then (i) and (ii) hold. Proposition \ref{gen_recursion} implies that $e(p,q)$ is determined by those $e(i,q)$ with $i\le n=\lceil ca_{1}q/(ba_{2})\rceil-1$. Such integers $i$ satisfy $(i,q)\le (p,q)$ and $(i,q)\neq (p,q)$.
The above two claims immediately conclude case (6).
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:greedy in cluster algebra}]
Since each quantum greedy element satisfies the divisibility condition \eqref{eq:divisibility}, Lemma~\ref{lem:divisibility} shows that it is in $\mathcal{A}_v(b,c)$.
\end{proof}
\section{Definitions of upper and lower quasi-greedy elements}
In this section we define and study two variations of the quantum greedy elements, $\overline{X}[a_1,a_2]$ and $\underline{X}[a_1,a_2]$.
\begin{theorem}\label{recursive-definition_u}
For each $(a_{1},a_{2})\in\mathbb{Z}^{2}$, there exists a unique element in the quantum torus $\mathcal{T}$ pointed at $(a_1,a_2)$ whose coefficients $\bar{e}(p,q)$ (resp. $\underline{e}(p,q)$) satisfy the recurrence relation $(\le,>)$ (resp. $(<,\ge)$) in Definition \ref{le,ge}.
\end{theorem}
\begin{proof}
We prove the claim for $\bar{e}(p,q)$, the proof for $\underline{e}(p,q)$ is similar. If $a_1<0$ or $a_2<0$, then $\bar{e}(p,q)=e(p,q)$ as given in Proposition \ref{prop:a1 a2 not positive}. Thus for the remainder of the proof we assume $a_1,a_2>0$.
For $p\ge a_{1}/b$ and $q\ge a_{2}/c$ we use the fact that ${k-1\brack k}_{w}=0$ for all $k\ge 1$ to conclude that $\bar{e}(p,q)=0$. If $0\le p< a_{1}/b$, then
Proposition~\ref{gen_recursion} implies that $\bar{e}(p,q)=0$ for $q>a_1-(b-\frac{ba_2}{ca_1})p$, that is, when $(p,q)$ is above $BC$. Similarly, if $0< q<a_2/c$, then $\bar{e}(p,q)=0$ when $(p,q)$ is on or to the right of $AB$. For $q=0$, we have $\bar{e}(p,0)={a_2\brack p}_{v^b}$, which vanishes for $p>a_2$. Thus $\bar{e}(p,q)=0$ outside the region $\overline{OABC}$.
In particular, all but finitely many $\bar{e}(p,q)$ are 0. This proves the existence.
The uniqueness is obvious.
\end{proof}
\begin{definition}\label{df:X}
Define the {\it upper quasi-greedy element} $\overline{X}[a_1,a_2]$ and {\it lower quasi-greedy element} $\underline{X}[a_1,a_2]$ in $\mathcal{T}$ as
$$\overline{X}[a_1,a_2]=\sum_{(p,q)}\bar{e}(p,q) X^{(-a_1+bp,-a_2+cq)}, \quad
\underline{X}[a_1,a_2]=\sum_{(p,q)}\underline{e}(p,q) X^{(-a_1+bp,-a_2+cq)}.$$
Define $\bar{\underline{e}}(p,q)=(\bar{e}(p,q)+\underline{e}(p,q))/2$ and define the {\it quasi-greedy element} in $\mathcal{T}\otimes_\mathbb{Z}\mathbb{Q}$ as
$$\overline{\underline{X}}[a_1,a_2]=(\overline{X}[a_1,a_2]+\underline{X}[a_1,a_2])/2=\sum_{(p,q)}\bar{\underline{e}}(p,q) X^{(-a_1+bp, -a_2+cq)}.$$
\end{definition}
\begin{corollary}
For $(a_1,a_2)$ from cases {\rm(1)--\rm(5)} of Definition~\ref{df:PSR} we have
\[\overline{X}[a_1,a_2]=\underline{X}[a_1,a_2]=\overline{\underline{X}}[a_1,a_2]=X[a_1,a_2].\]
\end{corollary}
\begin{proof}
This is immediate from Corollary~\ref{cor:explicit cluster monomials}.
\end{proof}
\subsection{Characterization of various quasi-greedy elements by axioms}
Now we give a non-recursive characterization of $\overline{X}[a_1,a_2]$, $\underline{X}[a_1,a_2]$ and $\overline{\underline{X}}[a_1,a_2]$ analogous to those given in Proposition~\ref{prop:equivalent}.
First we define the support regions $\overline{R}_{\text{greedy}}[a_{1},a_{2}]$, $\underline{R}_{\text{greedy}}[a_{1},a_{2}]$ and $\overline{\underline{R}}_{\text{greedy}}[a_{1},a_{2}]$. Outside case (6) in Definition \ref{df:PSR}, we set
$$\overline{R}_{\text{greedy}}[a_1,a_2]=\underline{R}_{\text{greedy}}[a_1,a_2]=\overline{\underline{R}}_{\text{greedy}}[a_1,a_2]=R_{\text{greedy}}[a_1,a_2],$$
i.e. for all $(a_1,a_2)$ except when $0 < a_1 < ba_2$ and $0 < a_2 < ca_1$. In the final case, we define
$$\aligned
&\overline{R}_{\text{greedy}}[a_{1},a_{2}]={R}_{\text{greedy}}[a_{1},a_{2}]\cup\bigg\{(p,q)\in\mathbb{R}_{\ge0}^2\bigg|\; 0\le p< \frac{a_1}{b},\; q+\Big(b-\frac{ba_2}{ca_1}\Big)p= a_1\bigg\};\\
&\underline{R}_{\text{greedy}}[a_{1},a_{2}]={R}_{\text{greedy}}[a_{1},a_{2}]\cup\bigg\{(p,q)\in\mathbb{R}_{\ge0}^2\bigg|\; 0\le q< \frac{a_2}{c},\; p+\Big(c-\frac{ca_1}{ba_2}\Big)q= a_2\bigg\};\\
&\overline{\underline{R}}_{\text{greedy}}[a_{1},a_{2}]=\overline{R}_{\text{greedy}}[a_{1},a_{2}]\cup\underline{R}_{\text{greedy}}[a_{1},a_{2}].
\endaligned
$$
In other words, $\overline{R}_{\text{greedy}}[a_{1},a_{2}]=\overline{OABC}$, the region that excludes the interior of $AB$ and the point $B$ but includes the rest of the boundary; $\underline{R}_{\text{greedy}}[a_{1},a_{2}]=\underline{OABC}$, the region that excludes the interior of $BC$ and the point $B$ but includes the rest of the boundary; $\overline{\underline{R}}_{\text{greedy}}[a_1,a_2]=\overline{\underline{OABC}}$, the region that contains all the boundary except the point $B$ (see Figure \ref{fig:OABC}).
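For instance, for $b=c=2$ and $(a_1,a_2)=(1,1)$ we have $B=(1/2,1/2)$, and all three regions contain exactly the lattice points $(0,0)$, $(1,0)$ and $(0,1)$; this matches the three-term expansion of $X[1,1]$ given after Theorem~\ref{main theorem1}.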
\begin{figure}[h]
\begin{tikzpicture}[scale=.9]
\begin{scope}[shift={(0,0)}]
\usetikzlibrary{patterns}
\draw[thick,dashed] (.75,.7)--(2.5,0);
\draw[thick] (0,3)--(.7,.75);
\fill [black!10] (0,3)--(.7,.7)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.2,0) node[anchor=south west] {\tiny$A$};
\draw (.6,.6) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[below right] {$p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {$q$};
\draw (.7,.7) circle (2pt);
\fill (0,3) circle (2pt);
\fill (2.5,0) circle (2pt);
\end{scope}
\begin{scope}[shift={(5,0)}]
\usetikzlibrary{patterns}
\draw [thick] (.75,.7)--(2.5,0);
\draw[thick, dashed] (0,3)--(.7,.75);
\fill [black!10] (0,3)--(.7,.7)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.2,0) node[anchor=south west] {\tiny$A$};
\draw (.6,.6) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[below right] {$p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {$q$};
\draw (.7,.7) circle (2pt);
\fill (0,3) circle (2pt);
\fill (2.5,0) circle (2pt);
\end{scope}
\begin{scope}[shift={(10,0)}]
\usetikzlibrary{patterns}
\draw [thick] (.75,.7)--(2.5,0);
\draw[thick] (0,3)--(.7,.75);
\fill [black!10] (0,3)--(.7,.7)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.2,0) node[anchor=south west] {\tiny$A$};
\draw (.6,.6) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[->] (0,0) -- (3.5,0)
node[below right] {$p$};
\draw[->] (0,0) -- (0,3.5)
node[left] {$q$};
\draw (.7,.7) circle (2pt);
\fill (0,3) circle (2pt);
\fill (2.5,0) circle (2pt);
\end{scope}
\end{tikzpicture}
\caption{Left: $\overline{OABC}$,\quad Center: $\underline{OABC}$,\quad Right: $\overline{\underline{OABC}}$}
\label{fig:OABC}
\end{figure}
\begin{proposition}\label{quasi-greedy_axiom}
Let $(a_1,a_2)\in\mathbb{Z}^2$.
{\rm(1)} $\overline{X}[a_1,a_2]$ (resp. $\underline{X}[a_1,a_2]$) is the unique element in the quantum torus $\mathcal{T}$
pointed at $(a_1,a_2)$ that satisfies the divisibility condition \eqref{eq:divisibility} and the support condition \eqref{eq:support} for $\overline{R}_{\text{greedy}}[a_1,a_2]$ (resp. for $\underline{R}_{\text{greedy}}[a_1,a_2]$).
{\rm(2)} $\overline{\underline{X}}[a_1,a_2]\in \mathcal{T}\otimes_\mathbb{Z}\mathbb{Q}$ is pointed at $(a_1,a_2)$ and satisfies the divisibility condition \eqref{eq:divisibility} and the support condition \eqref{eq:support} for $\overline{\underline{R}}_{\text{greedy}}[a_1,a_2]$.
As a consequence, $\overline{X}[a_1,a_2]$ and $\underline{X}[a_1,a_2]$ are in $\mathcal{A}_v(b,c)$ and $\overline{\underline{X}}[a_1,a_2]$ is in $\mathcal{A}_v(b,c)\otimes_\mathbb{Z}\mathbb{Q}$.
\end{proposition}
\begin{proof}
The proof of (1) is similar to Proposition~\ref{prop:equivalent}.
(2) follows from (1). The consequence follows from Lemma \ref{lem:divisibility}.
\end{proof}
Next we show that upper and lower quasi-greedy elements behave nicely under automorphisms $\sigma_1$ and $\sigma_2$ of $\mathcal{A}_v(b,c)$.
\begin{proposition}\label{prop:mutation invariant:q}
The automorphisms $\sigma_1$ and
$\sigma_2$ act on the upper and lower quasi-greedy elements as follows:
$$\aligned
&\sigma_1(\overline{X}[a_1,a_2])=\underline{X}[a_1,c[a_1]_+-a_2],\quad \sigma_1(\underline{X}[a_1,a_2])=\overline{X}[a_1,c[a_1]_+-a_2],\\
&\sigma_2(\overline{X}[a_1,a_2])=\underline{X}[b[a_2]_+-a_1,a_2],\quad \sigma_2(\underline{X}[a_1,a_2])=\overline{X}[b[a_2]_+-a_1,a_2].
\endaligned
$$
\end{proposition}
\begin{proof}
Since the proofs of the four identities are similar,
we only prove the third identity
$$\sigma_2(\overline{X}[a_1,a_2])=\underline{X}[b[a_2]_+-a_1,a_2].$$
In cases (1)--(5) in Definition \ref{df:PSR}, the proof is straightforward using Proposition \ref{prop:a1 a2 not positive}.
For example in case (1),
$$\aligned
\sigma_2(\overline{X}[a_1,a_2])&=\sigma_2(X^{(-a_1,-a_2)})=v^{-a_1a_2}X_3^{-a_1}X_2^{-a_2}=v^{a_1a_2}X_2^{-a_2}X_3^{-a_1}\\
&=X_2^{(-a_2,-a_1)}=\underline{X}[-a_1,a_2]=\underline{X}[b[a_2]_+-a_1,a_2].\\
\endaligned
$$
So in the rest of the proof we focus on case (6), that is, $0 < a_1 < ba_2$ and $0 < a_2 < ca_1$.
Write $a'_1=ba_2-a_1$ and define
$$Z:=\sigma_2(\overline{X}[a_1,a_2])=\sum_{p,q\ge0} d'(p,q) X^{(bp-a'_1,cq-a_2)}.$$
It follows from Proposition~\ref{quasi-greedy_axiom} and Lemma~\ref{lem:divisibility} that $\overline{X}[a_1,a_2]\in\mathcal{A}_v(b,c)$ and thus $Z$ is an element in $\mathcal{A}_v(b,c)$ since the quantum cluster algebra is stable under all symmetries. By the same reasoning both $\sigma_1(Z)$ and $\sigma_2(Z)$ are in $\mathcal{A}_v(b,c)$ and it then follows from Lemma~\ref{lem:divisibility} that $Z$ satisfies the divisibility condition \eqref{eq:divisibility}. To prove $\underline{X}[a'_1,a_2]=Z$ it only remains to verify that $Z$ satisfies the support condition \eqref{eq:support} for $\underline{R}_{\text{greedy}}[a_1',a_2]$.
The same argument as in the proof of Corollary~\ref{cor:baseline} computes $\bar{e}(p,q)$ and $\underline{e}(p,q)$ when $p=0$ or $q=0$, so we focus on the interesting case when $p>0$. It suffices to show that
(i) if $1\le p\le a'_1/b$, then $d'(p,q)=0$ for $q\ge a'_1-\Big(b-\displaystyle\frac{ba_2}{ca'_1}\Big)p$;
(ii) if $a'_1/b<p<a_2$, then $d'(p,q)=0$ for $q>(a_2-p)\Big/\Big(c-\displaystyle\frac{ca'_1}{ba_2}\Big)$.\\
\noindent To prove (i) we fix $p$ and compute the degree of the polynomial $\sum_{q\ge0} d'(p,q)t^q$. Indeed, using \eqref{eq:d'} and the support condition for $\overline{X}[a_1,a_2]$ we get
$$\aligned
\deg\Big(\sum_{q\ge0} d'(p,q)t^q\Big)&=\deg\Big(\sum_{q\ge0}\bar{e}(a_2-p,q)t^q\Big)-\big(a_1-b(a_2-p)\big)\\
&=\deg \Big(\sum_{q\ge0}\bar{e}(a_2-p,q)t^q\Big)+(a'_1-bp)\\
&<p\Big/\Big(c-\frac{ca_1}{ba_2}\Big)+(a'_1-bp)\\
&=\frac{ba_2}{ca'_1}p+a'_1-bp=a'_1-\Big(b-\frac{ba_2}{ca'_1}\Big)p.
\endaligned
$$
\noindent
The proof of (ii) is similar and we leave it as an exercise for the reader.
\end{proof}
Recall that positive imaginary roots are lattice points in the set
\[\Phi^{im}_+:=\Big\{(a_1,a_2)\in\mathbb{Z}_{>0}^2: ca_1^2-bca_1a_2+ba_2^2\le 0\Big\}.\]
The real roots are lattice points in $\mathbb{Z}^2\setminus \Phi^{im}_+$.
It is well-known that the set of denominator vectors of all cluster monomials is exactly $\mathbb{Z}^2\setminus\Phi^{im}_+$.
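For instance, for $b=c=2$ the quadratic form above equals $2(a_1-a_2)^2$, so that $\Phi^{im}_+=\{(n,n):n\in\mathbb{Z}_{>0}\}$; for $bc\le3$ the form is positive definite and $\Phi^{im}_+$ is empty.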
\begin{corollary}\label{cor:cluster monomials:q}
Let $k$ be an integer and write $P_k,P_{k+1}\in\mathbb{Z}^2$ for the denominator vectors of $x_k$ and $x_{k+1}$ respectively. Then for $(a_1,a_2)=mP_k+nP_{k+1}$ with $m,n\in\mathbb{Z}_{\ge0}$, we have
$$\overline{X}[a_1,a_2]=\underline{X}[a_1,a_2]=X[a_1,a_2]=X_k^{(m,n)}.$$
\end{corollary}
\begin{proof}
The statement obviously holds for $k=1$.
All other cluster monomials can be obtained from $X_1^{(m,n)}$ by iteratively applying $\sigma_1$ and $\sigma_2$. Therefore the statement follows from the $k=1$ case and Proposition \ref{prop:mutation invariant:q}.
\end{proof}
\section{$\overline{\underline{X}}[a_1,a_2]$: basic properties}
Here we study the quasi-greedy elements $\overline{\underline{X}}[a_1,a_2]$ introduced in Definition \ref{df:X}.
\begin{proposition}\label{mean invariant:q}
The quasi-greedy elements form a mutation invariant $\mathbb{Q}[v^{\pm1}]$-basis of the cluster algebra $\mathcal{A}_v(b,c)\otimes_\mathbb{Z}\mathbb{Q}$. The automorphisms $\sigma_1$ and
$\sigma_2$ act on quasi-greedy elements as follows:
$$\sigma_1(\overline{\underline{X}}[a_1,a_2])=\overline{\underline{X}}[a_1,c[a_1]_+-a_2],\quad \sigma_2(\overline{\underline{X}}[a_1,a_2])=\overline{\underline{X}}[b[a_2]_+-a_1, a_2].$$
\end{proposition}
\begin{proof}
The second statement follows immediately from Proposition \ref{prop:mutation invariant:q}.
The proof of the first statement follows an argument similar to that in \S6 of \cite{llz} by comparing the family of quasi-greedy elements with the
standard monomial basis and showing that the transition matrix is invertible, so we omit the proof.
\end{proof}
Now we introduce the notion of special region which will be used in Lemma \ref{lemma:support:q}.
\begin{definition}\label{df:special region}
We call $R\subseteq\mathbb{R}^2$ a {\it special region} if it is the closed region bounded by a polygon $R_0R_1\cdots R_n$, where $R_i=(u_i,v_i)$ for $i=1,\dots, n$ with
$$u_1\le u_2\le \cdots \le u_n,\quad v_1\ge v_2\ge \cdots\ge v_n,$$
$R_0=(u_1,v_n)$,
and if $R$ is \emph{origin convex}, i.e. $R$ satisfies the property that for any point $p\in R$, the whole line segment connecting the origin $(0,0)$ and $p$ is in $R$.
\end{definition}
\begin{remark}
It follows from the definition of a special region that $u_1\le 0\le u_n$ and $v_1\ge 0\ge v_n$. Thus any special region can be pictured as in Figure \ref{fig:special region}.
\end{remark}
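Note also that origin convexity is preserved under Minkowski sums: if $tP\in R$ and $tP'\in R'$ for all $t\in[0,1]$, then $t(P+P')=tP+tP'\in R+R'$. This observation guarantees, in particular, that the region $R_{\rm prod}$ appearing in the proof of Lemma \ref{lem:linear} below is again origin convex, so that Lemma \ref{lemma:support:q}(iii) may be applied to it.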
\begin{figure}[h]
\begin{tikzpicture}[scale=.8]
\begin{scope}[shift={(11,0)}]
\usetikzlibrary{patterns}
\draw[black] (0,5)--(1,3.5)--(2,3.2)--(5,.5)--(6.5,0)--(0,0)--(0,5);
\fill [black!10] (0,5)--(1,3.5)--(2,3.2)--(5,.5)--(6.5,0)--(0,0)--(0,5);
\fill (0,0) circle (1pt);
\fill (0,5) circle (1pt);
\fill (1,3.5) circle (1pt);
\fill (2,3.2) circle (1pt);
\fill (5,.5) circle (1pt);
\fill (6.5,0) circle (1pt);
\fill (1.5,1) circle (1pt);
\draw[black,dashed] (1.5,1)--(0,0) (1.5,1)--(0,5) (1.5,1)--(1,3.5) (1.5,1)--(2,3.2) (1.5,1)--(5,.5) (1.5,1)--(6.5,0);
\draw (0,0) node[anchor=east] {\tiny$R_0$};
\draw (0,5) node[anchor=east] {\tiny$R_1$};
\draw (0.9,3.7) node[anchor=west] {\tiny$R_2$};
\draw (2,3.3) node[anchor=west] {\tiny$R_3$};
\draw (3.1,2.3) node[anchor=west] {\tiny$\ddots$};
\draw (3.5,1.93) node[anchor=west] {\tiny$\ddots$};
\draw (4.8,.8) node[anchor=west] {\tiny$R_{n-1}$};
\draw (6.5,0) node[anchor=west] {\tiny$R_n$};
\draw (1.5,1) node[anchor=east] {\tiny$(0,0)$};
\end{scope}
\end{tikzpicture}
\caption{A special region $R$}
\label{fig:special region}
\end{figure}
Define $\overline{\underline{S}}[a_{1},a_{2}]=\varphi(\overline{\underline{R}}_{\textrm{greedy}}[a_{1},a_{2}])$. Note that it is a closed region in cases (1)--(5) of Definition \ref{df:PSR}, while in case (6) it is obtained by removing the origin from the boundary of a closed region.
\begin{lemma}\label{lemma:support:q}\mbox{}
\begin{enumerate}[\upshape (i)]
\item If $\overline{\underline{S}}[a_1,a_2]$ contains a point $(u,v)\in\mathbb{R}_{\ge0}^2$, then
$$a_1=-u,\quad a_2=-v, \quad
\overline{\underline{S}}[a_1,a_2]=\{(u,v)\}.$$
\item If $(a_1,a_2)\neq(a_1',a_2')$ and $\overline{\underline{S}}[a_1,a_2]\subseteq\overline{\underline{S}}[a_1',a_2']$, then
$$0 < a_1 < ba_2, \; 0 < a_2 < ca_1, \;0 < a_1' < ba_2', \; 0 < a_2' < ca_1', \; \textrm{ and }\; a_1:a_2=a_1':a_2'.$$
\item Let $Z$ be an element in the quantum cluster algebra $\mathcal{A}_v(b,c)$ with the linear expansion
$$Z=\sum_{a_1,a_2}d_{a_1,a_2}\overline{\underline{X}}[a_1,a_2], \quad \textrm{ where } d_{a_1,a_2}\in\mathbb{Q}[v^{\pm1}].$$
If the support of $Z$ is contained in a special region $R$,
then the support region $\overline{\underline{S}}[a_{1},a_{2}]$ of any quasi-greedy element $\overline{\underline{X}}[a_1,a_2]$ corresponding to a nonzero $d_{a_1,a_2}$ is contained in $R$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) and (ii) follow from an easy case-by-case study of the six cases in Definition \ref{df:PSR} and we skip the proof. (It is helpful to look at Figure \ref{fig:PSR} and observe that $\varphi(B)=(0,0)$.)
(iii) Aiming at a contradiction we assume that there are integers $a_1,a_2$ with $d_{a_1,a_2}\neq0$ such that there is a point $(u,v)\in\overline{\underline{S}}[a_1,a_2]$ with $(u,v)\notin R$. Without loss of generality, we may assume that there is no other $d_{a_1',a_2'}\neq0$ with $-a'_1\le -a_1$ and $-a'_2\le -a_2$. Then, since each quasi-greedy element is pointed, the monomial $X^{(-a_1,-a_2)}$ of $\overline{\underline{X}}[a_1,a_2]$ cannot appear in any other $\overline{\underline{X}}[a'_1,a'_2]$, and the point $(-a_1,-a_2)$ is in the support of $Z$; thus the point $(-a_1,-a_2)$ is in $R$. It follows that
\begin{equation}\label{R_0}
(-a_1,-a_2)\ge (u_1,v_n)=R_0.
\end{equation}
Since $(-a_1,-a_2)\le(u,v)$, we see that for $u<u_1$ or $v<v_n$ we have $(-a_1,-a_2)\notin R$, a contradiction. In particular, $u,v\le0$ implies $u<u_1$ or $v<v_n$, which we have seen is impossible. If $u,v\ge0$, then (i) asserts that $(-a_1,-a_2)=(u,v)\notin R$, again a contradiction. Thus we must have either $u_1<u<0$ and $v>0$, or $u>0$ and $v_n<v<0$. In the rest we assume without loss of generality that $u_1<u<0$ and $v>0$; this is only possible in cases (3), (4) and (6) of Definition \ref{df:PSR}.
We assert that $(-a_1,-a_2+ca_1)\notin R$. To show this, we consider each case separately:
\smallskip
\noindent Case (3): $a_2 \leq 0 < a_1$: The support region $\overline{\underline{S}}[a_1,a_2]$ is the vertical segment $\varphi(OC)$ connecting $\varphi(O)=(-a_1,-a_2)$ and $\varphi({C})=(-a_1,-a_2+ca_1)$. Since both $\varphi(O)$ and $\varphi({C})$ are weakly to the northeast of $R_0$, from the shape of $R$ we may conclude that $\varphi({C})\notin R$. Indeed, if $\varphi({C})$ is in $R$, then the whole segment $\varphi(OC)$ is in $R$, in particular $(u,v)\in\varphi(OC)$ is in $R$, a contradiction.
\smallskip
\noindent Case (4): $0<ba_2\le a_1$. The support region $\overline{\underline{S}}[a_1,a_2]$ is the trapezoid $\varphi(OAD_1C)$ which lies strictly to the west of the vertical line through $(0,0)$. The points $\varphi(O)$ and $\varphi(A)$ are in $R$, to the southwest of $(0,0)$, while the line through $\varphi(C)$ and $\varphi(D_1)$ passes below $(0,0)$, intersecting the vertical line through $(0,0)$ at $(0,-a_2)$ (see Figure \ref{fig:PSR}). By the shape of the special region $R$ we see that if $\varphi({C})$ is in $R$, then the whole segment $\varphi(OC)$ is in $R$. Since $v>0$, the line through $(0,0)$ and $(u,v)$ would then meet $\varphi(OC)$ at a point $P\in R$ with $(u,v)$ on the segment connecting $(0,0)$ and $P$, so the origin convexity of $R$ would force $(u,v)\in R$, a contradiction. Hence $\varphi(C)\notin R$.
\smallskip
\noindent Case (6): $0 < a_1 < ba_2$, and $0 < a_2 < ca_1$. The proof of this case is similar to Case (4) using that $\varphi(B)=(0,0)$.
\smallskip
Now consider all pairs $(a_1,a_2)$ with $d_{a_1,a_2}\neq0$ and $(-a_1,-a_2+ca_1)\notin R$ (by assumption and the above considerations there exists at least one such pair), and take one with maximal $-a_2+ca_1$. Then the monomial $X^{(-a_1,-a_2+ca_1)}$ of $\overline{\underline{X}}[a_1,a_2]$ cannot appear in any other $\overline{\underline{X}}[a'_1,a'_2]$, and the point $(-a_1,-a_2+ca_1)$ is in the support of $Z$; thus the point $(-a_1,-a_2+ca_1)$ is in $R$. This contradiction completes the proof of (iii).
\end{proof}
The next lemma studies certain coefficients in the quasi-greedy elements. Recall the region $OABC$ in case (6): $0<a_1<ba_2$ and $0<a_2<ca_1$ of Figure \ref{fig:PSR}, which we reproduce here in Figure \ref{fig:p'q'} (for convenience we only draw one of the two figures). A lattice point $(p,q)$ is on the interior of the segment $OB$ if $0<p<a_1/b$ and $p:q=ca_{1}:ba_{2}$. It is easy to check for such a point $(p,q)$ that $(p,q+a_1-bp)$ is on the edge $BC$ while $(p+a_2-cq,q)$ is on the edge $AB$.
\begin{figure}[h]
\begin{tikzpicture}[scale=1.2]
\begin{scope}[shift={(2,0)}]
\usetikzlibrary{patterns}
\draw[thick] (0,3)--(0,0)--(2.5,0);
\draw[thick] (0,3)--(.7,.7)--(2.5,0);
\fill [black!10] (0,3)--(.7,.7)--(2.5,0)--(0,0)--(0,3);
\draw (0,0) node[anchor=east] {\tiny$O$};
\draw (2.4,0) node[anchor=west] {\tiny$A$};
\draw (.6,.6) node[anchor=south west] {\tiny$B$};
\draw (0,3) node[anchor=east] {\tiny$C$};
\draw[thick, black!50] (0,0)--(.7,.7);
\draw[thick, black!50] (1.7,.3)--(.3,.3)--(.3,2);
\fill (.3,.3) circle (1pt);
\fill (.3,2) circle (1pt);
\fill (1.75,.3) circle (1pt);
\draw (.28,.15) node[anchor=west] {\tiny$(p,q)$};
\draw (.3,2) node[anchor=west] {\tiny$(p,q')=(p,q+a_1-bp)$};
\draw (1.8,.4) node[anchor=west] {\tiny$(p',q)=(p+a_2-cq,q)$};
\draw (-2,.4) node[anchor=west] {};
\end{scope}
\end{tikzpicture}
\caption{}
\label{fig:p'q'}
\end{figure}
\begin{lemma}\label{lem:pq':q} Assume $0<a_1<ba_2$ and $0<a_2<ca_1$. Consider a lattice point $(p,q)$ on the interior of the segment $OB$. Denote $q':=q+a_1-bp$ and $p':=p+a_2-cq$.
{\rm(i)} The following equalities hold:
$$\bar{\underline{e}}(p,q')=\bar{e}(p,q')/2,\quad \bar{\underline{e}}(p',q)=\underline{e}(p',q)/2.$$
{\rm(ii)} Assume $\bar{e}(i,j)=\underline{e}(i,j)$ for every lattice point $(i,j)$ on the interior of the segment $OB$ with $i<p$. Then $\bar{e}(i,j)=\underline{e}(i,j)$ for every lattice point $(i,j)\neq(p,q)$ satisfying $0\le i\le p$ and $0\le j\le q$. Moreover,
$$\bar{\underline{e}}(p,q')=\frac{\bar{e}(p,q)-\underline{e}(p,q)}{2},\quad \bar{\underline{e}}(p',q)=\frac{\underline{e}(p,q)-\bar{e}(p,q)}{2}.$$
As a consequence, if $(p,q)$ is the highest interior point in $BC$ with $\bar{\underline{e}}(p,q)\neq0$, then
$$\bar{\underline{e}}(p,q)=\frac{\bar{e}(p,q-a_{1}+bp)-\underline{e}(p,q-a_{1}+bp)}{2}.$$
\end{lemma}
\begin{proof}
By considering the support regions of $\overline{X}[a_1,a_2]$ and $\underline{X}[a_1,a_2]$ we immediately see (i).
The first claim $\bar{e}(i,j)=\underline{e}(i,j)$ of (ii) follows from the definition of $\bar{e}(i,j)$ and $\underline{e}(i,j)$. To prove the second claim of (ii), observe that by the divisibility condition \eqref{eq:divisibility}, $\sum\limits_{k=0}^{a_1-bp}{a_1-bp\brack k}_{v^c}t^k$ divides
$$\sum_{i=0}^{q'}\bar{e}(p,i)t^i-\sum_{i=0}^{q'}\underline{e}(p,i)t^i=\sum_{i=q}^{q'}(\bar{e}-\underline{e})(p,i)t^i=t^q\sum_{i=0}^{a_1-bp}(\bar{e}-\underline{e})(p,q+i)t^i,$$ but the last sum has degree at most $a_1-bp$, so it is a constant multiple of $\sum\limits_{k=0}^{a_1-bp}{a_1-bp\brack k}_{v^c}t^k$. Since ${a_1-bp\brack 0}_{v^c}={a_1-bp\brack a_1-bp}_{v^c}=1$, the coefficients of the lowest and highest degrees of this sum are equal, which gives
$$(\bar{e}-\underline{e})(p,q)=(\bar{e}-\underline{e})(p,q+a_1-bp)=(\bar{e}-\underline{e})(p,q')=\bar{e}(p,q'),$$
where we have used $\underline{e}(p,q')=0$ in the last equality. By (i) the first equality in the second claim of (ii) follows. The equality $\bar{\underline{e}}(p',q)=\frac{\underline{e}(p,q)-\bar{e}(p,q)}{2}$ can be proved similarly.
\end{proof}
We denote $\bar{e}(i,j)_{a_1,a_2}$ to illustrate the dependence of $\bar{e}(i,j)$ on $(a_1,a_2)$.
The meaning of $\underline{e}(i,j)_{a_1,a_2}$, $\bar{\underline{e}}(i,j)_{a_1,a_2}$ and $(\bar{e}-\underline{e})(i,j)_{a_1,a_2}$ is similar.
\begin{lemma}\label{lem:linear}
Let $(a_{1},a_{2})$ be a positive imaginary root. Consider a lattice point $(p,q)$ on the interior of the segment $OB$ and assume $\bar{e}(i,j)_{a_1,a_2}=\underline{e}(i,j)_{a_1,a_2}$ for every lattice point $(i,j)$ on the interior of the segment $OB$ with $i<p$. Then the following are true for every positive integer $n$:
\smallskip
{\rm(i)} for every lattice point $(i,j)$ on the interior of the segment $OB$ with $i<p$, we have
$$\bar{e}(i,j)_{na_1,na_2}=\underline{e}(i,j)_{na_1,na_2}.$$
{\rm(ii)} $\bar{e}(p,q)_{na_1,na_2}-\underline{e}(p,q)_{na_1,na_2}=n(\bar{e}(p,q)_{a_1,a_2}-\underline{e}(p,q)_{a_1,a_2}).$
\end{lemma}
\begin{proof}
We first introduce some notation and restate (i) and (ii). For simplicity we write $X^{P}=X^{(i,j)}$ for $P=(i,j)$. For a positive integer $k$, we describe lattice points necessary for understanding the support region of the product $\overline{\underline{X}}[a_1,a_2]\cdot \overline{\underline{X}}[ka_1,ka_2]$:
\begin{equation}\label{OACD}
\aligned
&O=(0,0),\\
&A_k=k\cdot(-a_1+ba_2,-a_2),\quad C_k=k\cdot(-a_1,-a_2+ca_1),\quad D_k=k\cdot(-a_1,-a_2),\\
&E_{k}=A_k+C_1=(kba_2-(k+1)a_1,ca_1-(k+1)a_2),\\
&F=A_1+C_1=(ba_2-2a_1,ca_1-2a_2),\\
&G_{k}=A_1+C_k=(ba_2-(k+1)a_1,kca_1-(k+1)a_2),\\
&P_k=(-ka_1+bp,-ka_2+cq),\\
&P'_k=(-ka_1+bp,-ka_2+c(q+ka_1-bp)),\\
&P''_k=(-ka_1+b(p+ka_2-cq),-ka_2+cq).
\endaligned
\end{equation}
Then the support region
$\overline{\underline{S}}[ka_1,ka_2]$ is the region $D_kA_kOC_k\setminus\{O\}$. Note that the region is concave because $(ka_1,ka_2)$ is a positive imaginary root.
The support of the product $\overline{\underline{X}}[a_1,a_2]\cdot \overline{\underline{X}}[ka_1,ka_2]$ is
contained in the Minkowski sum of $D_1A_1OC_1$ and $D_kA_kOC_k$, which is
the closed region
$$R_{\rm prod}:=D_{k+1}A_{k+1}A_kE_{k}FG_{k}C_kC_{k+1}D_{k+1}.$$
\begin{figure}[h]
\begin{tikzpicture}[scale=1.2]
\begin{scope}[shift={(0,0)}]
\usetikzlibrary{patterns}
\draw[blue!80] (-.3,-.3)--(.9,-.3)--(0,0)--(-.3,.6)--(-.3,-.3);
\draw[red!80] (-1,-1)--(3,-1)--(0,0)--(-1,2)--(-1,-1);
\draw[black] (-1.3,-1.3)--(3.9,-1.3)--(3,-1)--(2.7,-.4)--(.6,.3)--(-.1,1.7)--(-1,2)--(-1.3,2.6)--(-1.3,-1.3);
\draw[dashed] (.9,-.3)--(.6,.3)--(-.3,.6);
\draw[blue] (.1,-.45) node[anchor=east] {\tiny$D_1$};
\draw[red] (-1.1,-1.35) node[anchor=south west] {\tiny$D_k$};
\draw[blue] (.85,-.35) node[anchor=south west] {\tiny$A_1$};
\draw[blue] (-.35,.55) node[anchor=south west] {\tiny$C_1$};
\draw (-1,-1.25) node[anchor=north] {\tiny$D_{k+1}$};
\draw (3.9,-1.25) node[anchor=north] {\tiny$A_{k+1}$};
\draw[red] (2.95,-1.05) node[anchor=south west] {\tiny$A_k$};
\draw (2.65,-.45) node[anchor=south west] {\tiny$E_{k}$};
\draw (.55,.25) node[anchor=south west] {\tiny$F$};
\draw (-.15,1.65) node[anchor=south west] {\tiny$G_{k}$};
\draw[red] (-1.05,1.95) node[anchor=south west] {\tiny$C_k$};
\draw (-1.3,2.6) node[anchor=west] {\tiny$C_{k+1}$};
\draw (-.05,-.05) node[anchor=south west] {\tiny$O$};
\draw (1,1) node[anchor=west] {$R_{\rm prod}$};
\end{scope}
\begin{scope}[shift={(6,0)}]
\usetikzlibrary{patterns}
\draw[blue!80] (-.3,-.3)--(.9,-.3)--(0,0)--(-.3,.6)--(-.3,-.3);
\draw[red!80] (-1,-1)--(3,-1)--(0,0)--(-1,2)--(-1,-1);
\draw[black] (-1.3,-1.3)--(3.9,-1.3)--(3,-1)--(2.7,-.4)--(.6,.3)--(-.1,1.7)--(-1,2)--(-1.3,2.6)--(-1.3,-1.3);
\draw[black!80] (-1.3,-1.3)--(0,0);
\filldraw[black] (-1.1,-1.1) circle(1pt);\draw (-1.15,-1.15) node[anchor=west] {\tiny$P_{k+1}$};
\filldraw[black] (-1.1,2.2) circle(1pt);\draw (-1.15,2.25) node[anchor=west] {\tiny$P'_{k+1}$};
\filldraw[black] (3.3,-1.1) circle(1pt);\draw (3.25,-1.025) node[anchor=west] {\tiny$P''_{k+1}$};
\filldraw[red] (-0.8,-0.8) circle(1pt);\draw[red] (-0.85,-0.85) node[anchor=west] {\tiny$P_{k}$};
\filldraw[red] (-.8,1.6) circle(1pt);\draw[red] (-.85,1.65) node[anchor=west] {\tiny$P'_{k}$};
\filldraw[red] (2.4,-0.8) circle(1pt);\draw[red] (2.35,-0.725) node[anchor=west] {\tiny$P''_{k}$};
\filldraw[blue] (-.1,-.1) circle(1pt);\draw[blue] (-.15,-.15) node[anchor=west] {\tiny$P_{1}$};
\filldraw[blue] (-.1,0.2) circle(1pt);\draw[blue] (-.15,0.25) node[anchor=west] {\tiny$P'_{1}$};
\filldraw[blue] (0.3,-.1) circle(1pt);\draw[blue] (0.25,-.025) node[anchor=west] {\tiny$P''_{1}$};
\end{scope}
\end{tikzpicture}
\caption{Minkowski sum $R_{\rm prod}$}
\label{fig:Minkowski sum}
\end{figure}
\noindent Note that by our choice of $(p,q)$ each point $P_k$ lies on the interior of the segment $OD_k$ while $P'_k$ (resp.~$P''_k$) is the intersection of the line $OC_k$ (resp.~$OA_k$) with the vertical (resp.~horizontal) line passing through $P_k$ (see Figure \ref{fig:Minkowski sum}).
For convenience, write $d^{(k)}_P$ for the coefficients in
$$\overline{\underline{X}}[ka_1,ka_2]=\sum_{P\in\mathbb{Z}^{2}}d^{(k)}_PX^{P}.$$
In other words, $d^{(k)}_{-ka_1+bi,-ka_2+cj}=\bar{\underline{e}}(i,j)_{ka_1,ka_2}$. Then using Lemma \ref{lem:pq':q}(ii), we can restate conditions (i) and (ii) as follows:
{\rm(i')} for every lattice point $P$ in the interior of $C_{n}P_{n}'$, we have $d^{(n)}_{P}=0$;
{\rm(ii')} $d^{(n)}_{P'_{n}}=n\, d^{(1)}_{P'_{1}}$.
We prove (i') and (ii') by induction on $n$. For $n=1$, (i') holds by our assumptions and Lemma \ref{lem:pq':q}(ii), while (ii') is trivial. So we assume that they hold for $n\le k$.
Denote
\begin{equation}\label{eq:product Laurent:q}
\overline{\underline{X}}[a_1,a_2]\cdot \overline{\underline{X}}[ka_1,ka_2]=\sum_{i,j}g_{i,j}X^{(i,j)}=\sum_{i,j}h_{i,j}\overline{\underline{X}}[i,j],
\end{equation}
where $g_{i,j},\; h_{i,j}\in\mathbb{Q}[v^{\pm1}]$. Note that $g_{i,j}\ne0$ implies $i=-(k+1)a_1+bs$ and $j=-(k+1)a_2+ct$ for some $s,t\in\mathbb{Z}_{\ge0}$.
We first aim to compute the coefficient $g_{P'_{k+1}}$. We will need the simple observation that $X^{(a,b)}\cdot X^{(c,d)}=v^{bc-ad}X^{(a+c,b+d)}$ for any integers $a,b,c,d$. In particular, if $a:b=c:d$ then
$$X^{(a,b)}\cdot X^{(c,d)}=X^{(a+c,b+d)}=X^{(c,d)}\cdot X^{(a,b)},$$
in other words, $X^{(a,b)}$ and $X^{(c,d)}$ commute if the lattice points $(a,b), (c,d), (0,0)$ are collinear. Note that by assumption $d^{(1)}_P=0$ for every lattice point $P$ on the interior of $C_1P'_1$ and $d^{(k)}_P=0$ for every lattice point $P$ on the interior of $C_kP'_k$. It follows that $g_P=0$ for any lattice point $P$ on the interior of $C_{k+1}P'_{k+1}$ and that, to understand $g_{P'_{k+1}}$, we only need to consider two decompositions of ${P'_{k+1}}$ as the sum of a point in the support of $\overline{\underline{X}}[ka_1,ka_2]$ and a point in the support of $\overline{\underline{X}}[a_1,a_2]$, namely $C_1+P_k$ and $P'_1+C_k$.
Thus
$$\aligned
g_{P'_{k+1}}X^{P'_{k+1}}&=d^{(1)}_{C_{1}}X^{C_{1}}\cdot d^{(k)}_{P'_{k}}X^{P'_{k}}+d^{(1)}_{P'_{1}}X^{P'_{1}}\cdot d^{(k)}_{C_{k}}X^{C_k}\\
&=d^{(1)}_{C_{1}}d^{(k)}_{P'_{k}}X^{P'_{k+1}}+d^{(1)}_{P'_{1}}d^{(k)}_{C_{k}}X^{P'_{k+1}}
\endaligned
$$
where we used the fact that $C_{k}, P'_{k}, C_{1}, P'_{1}$ and $(0,0)$ are collinear.
Since $d^{(k)}_{C_{k}}=d^{(1)}_{C_{1}}=1$ (for example, $d^{(1)}_{C_{1}}=\bar{\underline{e}}(0,a_1)_{a_1,a_2}={a_1\brack a_1}_{v^c}=1$), we have
\begin{equation}\label{eq:d}
g_{P'_{k+1}}=d^{(k)}_{P'_{k}}+d^{(1)}_{P'_{1}}.
\end{equation}
Meanwhile by the induction assumption and Lemma \ref{lem:pq':q}(ii), for $n\le k$ we have
\begin{equation}\label{eq:dn}
d^{(n)}_{P'_{n}}=\bar{\underline{e}}(p,q+na_1-bp)_{na_1,na_2}=(\bar{e}-\underline{e})(p,q)_{na_1,na_2}/2.
\end{equation}
So \eqref{eq:d} and \eqref{eq:dn}, together with the inductive assumption (ii'), imply
\begin{equation}\label{eq:2d P'}
2g_{P'_{k+1}}=(\bar{e}-\underline{e})(p,q)_{ka_1,ka_2}+(\bar{e}-\underline{e})(p,q)_{a_1,a_2}=(k+1)(\bar{e}-\underline{e})(p,q)_{a_1,a_2}.
\end{equation}
Similarly,
\begin{equation}\label{eq:2d P''}
2g_{P''_{k+1}}=(\underline{e}-\bar{e})(p,q)_{ka_1,ka_2}+(\underline{e}-\bar{e})(p,q)_{a_1,a_2}=(k+1)(\underline{e}-\bar{e})(p,q)_{a_1,a_2}.
\end{equation}
By Lemma \ref{lemma:support:q}(iii), all quasi-greedy elements that appear in the right hand side of \eqref{eq:product Laurent:q} with nonzero coefficients have their support regions lying inside $R_{\rm prod}$. Therefore, if $h_{i,j}\neq0$ (recall that $h_{i,j}$ is defined in \eqref{eq:product Laurent:q}) and either $i\ge ka_{1}$ or $j\ge ka_{2}$, then
$(i,j)=(\lambda a_{1},\lambda a_{2})$ where $\lambda\in\mathbb{Q}$ satisfies $k\le \lambda\le k+1$. Indeed, suppose $i\ge ka_{1}$ and write $i=(k+1)a_1-bs$, so the point of $OC_{k+1}$ above $-i$ is $\Big(-i,-(k+1)a_2+c\big(\frac{ba_2s}{ca_1}+i\big)\Big)$, where we note that $\Big(-i,-(k+1)a_2+\frac{ba_2s}{a_1}\Big)$ lies on $OD_{k+1}$. If $j<(k+1)a_2-\frac{ba_2s}{a_1}$ (i.e. $(-i,-j)$ lies above $OD_{k+1}$), then the point $(-i,-j+ci)$ from the support of $\overline{\underline{X}}[i,j]$ is not contained in $R_{\rm prod}$, a contradiction. A similar argument gives the claim for $j\ge ka_{2}$.
Clearly, $g_{P_{k+1}}=1$ and so we must have $h_{(k+1)a_1,(k+1)a_2}=1$. It follows that the support of $\overline{\underline{X}}[a_1,a_2]\cdot \overline{\underline{X}}[ka_1,ka_2]-\overline{\underline{X}}[(k+1)a_1,(k+1)a_2]$ must be contained in the region obtained from $R_{\rm prod}$ by removing a strip of width 1 from the west and south boundaries. Since $g_P=0$ for any lattice point $P$ on the interior of $C_{k+1}P'_{k+1}$ we see that $h_{i,j}$ must be zero for any $(-i,-j)$ strictly between $D_{k+1}P_{k+1}$, indeed for such a point the quasi-greedy element $\overline{\underline{X}}[i,j]$ contains the point $(-i,-j+ci)$ from the interior of $C_{k+1}P'_{k+1}$ in its support.
The claim (i') now follows for $k+1$. Indeed, the argument above implies $d^{(k+1)}_P=g_P=0$ for any lattice point $P$ in the interior of the line segment $C_{k+1}P'_{k+1}$. But then Lemma \ref{lem:pq':q}(ii) gives
\begin{align*}
d^{(k+1)}_{P'_{k+1}}&=\bar{\underline{e}}(p,q+(k+1)a_1-bp)_{(k+1)a_1,(k+1)a_2}=(\bar{e}-\underline{e})(p,q)_{(k+1)a_1,(k+1)a_2}/2;\\
d^{(k+1)}_{P''_{k+1}}&=\bar{\underline{e}}(p+(k+1)a_2-cq,q)_{(k+1)a_1,(k+1)a_2}=(\underline{e}-\bar{e})(p,q)_{(k+1)a_1,(k+1)a_2}/2.
\end{align*}
We also see that only $\overline{\underline{X}}[(k+1)a_1,(k+1)a_2]$ and $\overline{\underline{X}}[(k+1)a_1-bp,(k+1)a_2-cq]$ (recall that $P_{k+1}=(-(k+1)a_1+bp,-(k+1)a_2+cq)$\; ) contribute to the coefficients $g_{P'_{k+1}}$ and $g_{P''_{k+1}}$, i.e.
\[g_{P'_{k+1}}=d^{(k+1)}_{P'_{k+1}}+h_{-P_{k+1}}\quad\text{ and }\quad g_{P''_{k+1}}=d^{(k+1)}_{P''_{k+1}}+h_{-P_{k+1}}.\]
Adding these expressions and recalling \eqref{eq:2d P'} and \eqref{eq:2d P''} gives $0=0+2h_{-P_{k+1}}$, i.e. $h_{-P_{k+1}}=0$. From this we get (ii') for $k+1$, i.e.
\[d^{(k+1)}_{P'_{k+1}}=g_{P'_{k+1}}=(k+1)(\bar{e}-\underline{e})(p,q)_{a_1,a_2}/2=(k+1)d^{(1)}_{P'_1}.\]
This completes the inductive proof of (i') and (ii') and thus proves the lemma.
\end{proof}
\begin{lemma}\label{linear implies 0}\mbox{}
{\rm(i)} Both $\bar{e}(p,q)$ and $\underline{e}(p,q)$ are of the form
\begin{equation}\label{star}
\sum_{i,j=-N}^{N} c_{ij}(v)v^{ba_{2}i+ca_{1}j},
\end{equation}
where $N\in\mathbb{N}$ and $c_{ij}(v)\in \mathbb{Q}(v)$ depend on $b,c,p,q$ but do not depend on $a_{1}, a_{2}$.
Therefore
$$f_n:=f_n(v)=(\bar{e}-\underline{e})(p,q)_{na_1,na_2}/2=\sum_{i=-M}^{M}d_{i}(v)v^{ni},$$
where $M\in\mathbb{N}$ and $d_{i}(v)\in\mathbb{Q}(v)$ does not depend on $n$.
{\rm(ii)} For every positive integer $n$ we have $f_n(v)\equiv 0$.
\end{lemma}
\begin{proof}
(i) $\bar{e}(p,q)$ (resp. $\underline{e}(p,q)$) is a linear combination of products of quantum binomial coefficients of the form
$${a_{2}-cq'+k-1\brack k}_{v^{b}}\quad \textrm{ or } \quad {a_{1}-bp'+\ell-1\brack \ell}_{v^{c}}$$
where $p'\ge0,q'\ge0,k>0,\ell>0$ are integers. We may compute the first quantum binomial coefficient as
$${a_{2}-cq'+k-1\brack k}_{v^{b}}=\frac{\prod_{i=1}^{k}[a_{2}-cq'+k-i]_{v^{b}}}{\prod_{i=1}^{k}[i]_{v^{b}}}
=\frac{\prod_{i=1}^{k}(v^{b(a_{2}-cq'+k-i)}-v^{-b(a_{2}-cq'+k-i)})}{\prod_{i=1}^{k}(v^{bi}-v^{-bi})}
$$
which is of the form
$\sum_{i=-k}^{k} c_{i}(v)v^{ba_{2}i}$
where each $c_{i}(v)\in\mathbb{Q}(v)$ does not depend on $a_{1}, a_{2}$. Similarly the second quantum binomial coefficient is of the form
$\sum_{j=-\ell}^{\ell} c_{j}(v)v^{ca_{1}j}$.
So $\bar{e}(p,q)$ (resp. $\underline{e}(p,q)$) is of the given form \eqref{star}. The second claim of (i) follows immediately.
(ii) Multiplying by a common denominator we can assume all $d_{i}(v)$ are polynomials in $v$. Without loss of generality we assume $d_{M}(v)\neq0$. If $M>0$, then as $n$ grows to infinity, so does the degree of $v^{nM}$. But we have seen in Lemma~\ref{lem:linear}(ii) that $f_n=n\cdot f_1$ for all $n$ and, by the argument above, for $n$ sufficiently large the degree of $f_n(v)$ will be strictly larger than the degree of $f_1(v)$, a contradiction. Thus $M=0$ and $f_n(v)$ does not depend on $n$, in particular $f_1(v)=f_n(v)=nf_1(v)$ for all $n$, but this implies $f_1(v)\equiv 0$.
\end{proof}
\begin{theorem}\label{theorem:greedy exists:q}
For any $(a_1,a_2)\in\mathbb{Z}^2$ we have
\[\overline{X}[a_1,a_2]=\underline{X}[a_1,a_2]=\overline{\underline{X}}[a_1,a_2]=X[a_1,a_2].\]
\end{theorem}
\begin{proof}
If $(a_1,a_2)$ is not a positive imaginary root then the result was already proven in Corollary~\ref{cor:cluster monomials:q}. Let $(a_{1},a_{2})$ be a positive imaginary root. By Lemma~\ref{lem:linear}(ii) and Lemma~\ref{linear implies 0}(ii) we see that $\bar{e}(p,q)_{a_1,a_2}=\underline{e}(p,q)_{a_1,a_2}$ for every lattice point $(p,q)$ on the interior of the segment $OB$. Now Lemma~\ref{lem:pq':q} gives $\bar{\underline{e}}(p,q)=\bar{e}(p,q)=\underline{e}(p,q)=0$ for every $(p,q)$ on the boundary $\overline{\underline{R}}_{\text{greedy}}[a_1,a_2]\setminus R_{\text{greedy}}[a_1,a_2]$, i.e. each of $\overline{X}[a_1,a_2]$, $\underline{X}[a_1,a_2]$, $\overline{\underline{X}}[a_1,a_2]$ satisfies the pointed support condition for $R_{\text{greedy}}[a_1,a_2]$. The result now follows from Proposition~\ref{quasi-greedy_axiom} and Proposition~\ref{prop:equivalent}.
\end{proof}
We may now prove our main theorem.
\begin{proof}[Proof of Theorem \ref{main theorem1} and \ref{main theorem2}]
Theorem \ref{main theorem1} follows immediately from Theorem \ref{theorem:greedy exists:q} and Theorem~\ref{recursive-definition_u}.
The five parts in Theorem \ref{main theorem2} can be seen as follows. Claim (a) follows from Proposition~\ref{mean invariant:q} and Theorem~\ref{theorem:greedy exists:q}. For (b), we note that $X[a_1,a_2]$ is bar-invariant because, by definition, each coefficient $e(p,q)$, as well as each monomial $X^{(bp-a_1,cq-a_2)}$, is bar-invariant. The quantum greedy basis is independent of the choice of an initial cluster because $X[a_1,a_2]=\overline{\underline{X}}[a_1,a_2]$ by Theorem \ref{theorem:greedy exists:q} and $\overline{\underline{X}}[a_1,a_2]$ is mutation invariant by Proposition \ref{mean invariant:q}. Claim (c) is the content of Corollary \ref{cor:cluster monomials:q}. To see (d), assume that $X[a_1,a_2]$ is the sum of two universally positive elements in $\mathcal{A}_v(b,c)$. Since the specialization (by substituting $v=1$) of any universally positive element in $\mathcal{A}_v(b,c)$ is universally positive in $\mathcal{A}(b,c)$, the specialization $x[a_1,a_2]$ of $X[a_1,a_2]$ is also decomposable, in contradiction to Theorem \ref{main theorem-commutative} (d). Claim (e) follows immediately by comparing the recurrence relations \eqref{eq:classical recurrence} and \eqref{eq:recurrence} defining the commutative greedy basis and quantum greedy basis respectively.
\end{proof}
\section{Appendix: quantum binomial theorem}\label{ap:symmetric}
Here we recall some useful facts about quantum binomial coefficients. Let $w$ be an indeterminate.
\begin{lemma} \label{le:quantum binomial theorem}
Let $X$ and $Y$ be quasi-commuting variables with $YX=w^2XY$. Then for any integer $n$ we have
\begin{equation}
(X+Y)^n=\sum\limits_{k\ge0}{n\brack k}_ww^{k(n-k)}X^kY^{n-k}.
\end{equation}
\end{lemma}
\begin{proof}
We work by induction on $|n|$, the case $n=0$ being trivial. For $n>0$ the following calculation accomplishes the induction step:
\begin{align*}
(X+Y)^n&=(X+Y)(X+Y)^{n-1}\\
&=(X+Y)\sum\limits_{k\ge0}{n-1\brack k}_ww^{k(n-1-k)}X^kY^{n-1-k}\\
&=\sum\limits_{k\ge0}\left({n-1\brack k-1}_ww^{(k-1)(n-k)}+{n-1\brack k}_ww^{k(n-1-k)+2k}\right)X^kY^{n-k}\\
&=\sum\limits_{k\ge0}\left({n-1\brack k-1}_ww^{-n+k}+{n-1\brack k}_ww^k\right)w^{k(n-k)}X^kY^{n-k}\\
&=\sum\limits_{k\ge0}{n\brack k}_ww^{k(n-k)}X^kY^{n-k}.
\end{align*}
But notice that reading the same calculation backwards proves the result for $n<0$.
\end{proof}
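For example, when $n=2$ the formula reads
$$(X+Y)^2=X^2+(w+w^{-1})\,w\,XY+Y^2=X^2+(1+w^{2})XY+Y^2,$$
in agreement with expanding directly: $(X+Y)^2=X^2+XY+YX+Y^2=X^2+(1+w^{2})XY+Y^2$.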
\begin{lemma}\label{cor:quantum convolution}
For $m,n\in\mathbb{Z}$ and $k\ge0$ we have
\begin{equation}
{m+n\brack k}_w=\sum\limits_{r+s=k}w^{nr-ms}{m\brack r}_w{n\brack s}_w.
\end{equation}
\end{lemma}
\begin{proof}
Let $X$ and $Y$ be quasi-commuting variables with $YX=w^2XY$. The left hand side of the desired equality is the coefficient of $w^{k(m+n-k)}X^kY^{m+n-k}$ in the product $(X+Y)^{m+n}$ while the right hand side is its coefficient in the product $(X+Y)^m(X+Y)^n$.
\end{proof}
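For example, taking $k=1$ the identity reduces to
$$[m+n]_w=w^{n}[m]_w+w^{-m}[n]_w,$$
which is immediate from $[r]_w=\frac{w^{r}-w^{-r}}{w-w^{-1}}$.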
\section{Acknowledgments}
\label{sec:ack}
This research work was supported by the National Key Research and Development Program of China under Grant No. 2018YFB1004300, the National Natural Science Foundation of China under Grant Nos. U1836206, U1811461 and 61773361, and the Project of Youth Innovation Promotion Association CAS under Grant No. 2017146.
\section{Conclusion}
\label{sec:conclusion}
To overcome the problem of aligning the distributions of only a single representation from the source and target domains, we proposed Multi-Representation Adaptation (MRA) to align the distributions of multiple representations. Along this line, we proposed a framework of Multi-Representation Adaptation Networks (MRAN) to learn multiple domain-invariant representations which might contain more information. In particular, we proposed a hybrid neural structure named Inception Adaptation Module (IAM) to extract multiple representations from images. Note that our framework can be adapted to different networks. Moreover, we extended the marginal distribution discrepancy measure MMD to conditional MMD, which is effectively incorporated into our model. Finally, extensive experiments were conducted on three datasets to demonstrate the effectiveness of the proposed model.
\section{Experiments}\label{experiment}
We evaluate the Multi-Representation Adaptation Network (MRAN) against state-of-the-art domain adaptation methods on three datasets: \textbf{ImageCLEF-DA}, \textbf{Office-31} and \textbf{Office-Home}.
\subsection{Experimental setup}
\subsubsection{Datasets}
\textbf{ImageCLEF-DA}\footnote{http://imageclef.org/2014/adaptation.} is a benchmark dataset for ImageCLEF 2014 domain adaptation challenge, which is organized by selecting the 12 common categories shared by the following three public datasets, each is considered as a domain: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P). There are 50 images in each category and 600 images in each domain. We use all domain combinations and build 6 transfer tasks: I $\rightarrow$ P, P $\rightarrow$ I, I $\rightarrow$ C, C $\rightarrow$ I, C $\rightarrow$ P, P $\rightarrow$ C.
\textbf{Office-31}~\cite{saenko2010adapting} is a benchmark for domain adaptation, comprising 4,110 images in 31 classes collected from three distinct domains: Amazon(A), which contains images downloaded from amazon.com, Webcam(W) and DSLR(D), which contain images taken by web camera and digital SLR camera with different photographical settings. The images in each domain are unbalanced across the 31 classes. To enable unbiased evaluation, we evaluate all methods on all six transfer tasks A $\rightarrow$ W, D $\rightarrow$ W, W $\rightarrow$ D, A $\rightarrow$ D, D $\rightarrow$ A, W $\rightarrow$ A as in~\cite{long2016deep,tzeng2014deep,ganin2015unsupervised}.
\textbf{Office-Home}~\cite{venkateswara2017deep} is a new dataset which consists 15,588 images larger than Office-31 and ImageCLEF-DA. It consists of images from 4 different domains: Artistic images (A), Clip Art (C), Product images (P) and Real-World images (R). For each domain, the dataset contains images of 65 object categories collected in office and home settings.
\subsubsection{Baselines}
We compare MRAN with various kinds of competitors, including Transfer Component Analysis (TCA)~\cite{pan2011domain}, Geodesic Flow Kernel (GFK)~\cite{gong2012geodesic}, Deep Convolutional Neural Network ResNet~\cite{he2016deep}, Deep Domain Confusion (DDC)~\cite{tzeng2014deep}, Deep Adaptation Network (DAN)~\cite{long2015learning}, Deep CORAL (D-CORAL)~\cite{sun2016deep}, Reverse Gradient (RevGrad)~\cite{ganin2015unsupervised}, Joint Adaptation Networks (JAN)~\cite{long2016deep}, Multi-Adversarial Domain Adaptation (MADA)~\cite{pei2018multi} and Collaborative and Adversarial Network (CAN)~\cite{zhang2018collaborative}.
To further validate the effectiveness of conditional distribution adaptation and IAM, we also evaluate several variants of MRAN: (1) MRAN (CMMD), which adds the CMMD module to ResNet; (2) MRAN (IAM), which uses IAM without adaptation loss; (3) MRAN (CMMD+IAM), which uses IAM with CMMD as the adaptation loss. Note that MRAN (CMMD) improves DAN~\cite{long2015learning} by replacing the multiple MMD penalties in DAN with the CMMD penalty. Besides, MRAN (IAM) improves ResNet~\cite{he2016deep} by replacing the global average pooling layers with IAM. Inspired by GoogLeNet~\cite{szegedy2015going}, we use four substructures ($n_r=4$) in this work, although any number of substructures could be used for other applications (substructure1: conv1$\times$1, conv5$\times$5; substructure2: conv1$\times$1, conv3$\times$3, conv3$\times$3; substructure3: conv1$\times$1; substructure4: pool, conv1$\times$1).
\subsubsection{Implementation Details}
We employ ResNet~(50 layers) to learn transferable deep representations and use the activations of the last feature layer $pool5$ as image representation for baselines~\cite{long2016deep}. Following standard evaluation protocols for unsupervised domain adaptation~\cite{long2015learning,ganin2015unsupervised}, we use all labeled source examples as the source domain and all unlabeled target examples as the target domain. The average classification accuracy and standard error over three random trials are reported for comparison. For all baseline methods, we either follow their original model selection procedures or conduct transfer cross-validation~\cite{zhong2010cross} if their model selection strategies are not specified. For MMD-based methods (TCA, DDC, DAN, RTN, JAN, MRAN), we adopt Gaussian kernel with bandwidth set to median pairwise squared distances on the training data~\cite{gretton2012kernel}.
All deep methods are implemented based on the PyTorch framework, and fine-tuned from PyTorch-provided models of ResNet~\cite{he2016deep}. We fine-tune all convolutional and pooling layers and train the classifier layer via back propagation. Since the classifier is trained from scratch, we set its learning rate to be 10 times that of the other layers. We use mini-batch stochastic gradient descent (SGD) with momentum of 0.9 and the learning rate annealing strategy in RevGrad~\cite{ganin2015unsupervised}: the learning rate is not selected by a grid search due to high computational cost; it is adjusted during SGD using the following formula: ${\eta}_p = \frac{\eta_0}{(1+\alpha p)^\beta}$, where $p$ is the training progress linearly changing from $0$ to $1$, $\eta_0 = 0.01$, $\alpha = 10$ and $\beta = 0.75$, which is optimized to promote convergence and low error on the source domain. To suppress noisy activations at the early stages of training, instead of fixing the adaptation factor $\lambda$, we gradually change it from $0$ to $1$ by a progressive schedule: $\lambda_p = \frac{2}{1+\exp(-\gamma p)} - 1$, where $\gamma = 10$ is fixed throughout the experiments~\cite{ganin2015unsupervised}. This progressive strategy significantly stabilizes parameter sensitivity and eases model selection for MRAN.
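In code, the two schedules read as follows (a minimal sketch with the constants quoted above; \texttt{p} is the training progress from $0$ to $1$):
\begin{verbatim}
import math

def lr_schedule(p, eta0=0.01, alpha=10.0, beta=0.75):
    # eta_p = eta0 / (1 + alpha * p)^beta
    return eta0 / (1.0 + alpha * p) ** beta

def lambda_schedule(p, gamma=10.0):
    # lambda_p ramps from 0 (p = 0) toward 1 (p = 1)
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, lr_schedule(p), lambda_schedule(p))
\end{verbatim}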
\subsection{Results}\label{results}
All the results of three datasets are shown in Tables~\ref{tab:Image-CLEF}, ~\ref{tab:office31} and~\ref{tab:officehome}, respectively. From these results, we have the following insightful observations:
$\bullet$ MRAN (CMMD+IAM) outperforms all comparison methods on most transfer tasks. Particularly, MRAN (CMMD+IAM) substantially improves the accuracy by large margins on ImageCLEF-DA dataset, which have the same number of images in different domains and different classes. The encouraging results indicate the importance of incorporating CMMD and IAM and validate that MRAN (CMMD+IAM) is able to learn better transferable representations.
$\bullet$ Comparing DAN with MRAN (CMMD) with the same Gaussian kernel, the only difference is that MRAN (CMMD) aligns the conditional distributions, while DAN aligns the marginal distributions. MRAN (CMMD) is better than DAN, and the reason may be that the data samples from the same category should lie in the same subspace, even if they belong to different domains~\cite{elhamifar2013sparse}.
$\bullet$ MRAN (CMMD+IAM) substantially outperforms MRAN (CMMD), which shows the importance of aligning distributions of multiple representations rather than a single representation.
$\bullet$ MRAN (CMMD+IAM) performs better than MRAN (IAM) while other deep transfer learning methods perform better than ResNet, which indicates the importance of transfer learning.
Note that, different from all the previous deep transfer learning methods that only align the marginal distributions of representations extracted by a single structure, our model aligns the conditional distributions of multiple representations extracted by a hybrid structure (IAM), which implies that MRAN (CMMD+IAM) has a more powerful transferable ability.
\begin{table}[!th]
\small
\centering
\caption{Accuracy (\%) on ImageCLEF-DA for unsupervised domain adaptation (ResNet)} \label{tab:Image-CLEF}
\begin{tabular}{@{}cccccccc@{}}
\toprule
Method & I $\rightarrow$ P & P $\rightarrow$ I & I $\rightarrow$ C & C $\rightarrow$ I & C $\rightarrow$ P & P $\rightarrow$ C & Avg \\
\midrule
ResNet~\cite{he2016deep} & 74.8$\pm$0.3 & 83.9$\pm$0.1 & 91.5$\pm$0.3 & 78.0$\pm$0.2 & 65.5$\pm$0.3 & 91.2$\pm$0.3 & 80.7\\
DDC~\cite{tzeng2014deep} & 74.6$\pm$0.3 & 85.7$\pm$0.8 & 91.1$\pm$0.3 & 82.3$\pm$0.7 & 68.3$\pm$0.4 & 88.8$\pm$0.2 & 81.8 \\
DAN~\cite{long2015learning} & 75.0$\pm$0.4 & 86.2$\pm$0.2 & 93.3$\pm$0.2 & 84.1$\pm$0.4 & 69.8$\pm$0.4 & 91.3$\pm$0.4 & 83.3\\
RevGrad~\cite{ganin2015unsupervised} & 75.0$\pm$0.6 & 86.0$\pm$0.3 & \textbf{96.2}$\pm$0.4 & 87.0$\pm$0.5 & 74.3$\pm$0.5 & 91.5$\pm$0.6 & 85.0\\
D-CORAL~\cite{sun2016deep} & 76.9$\pm$0.2 & 88.5$\pm$0.3 & 93.6$\pm$0.3 & 86.8$\pm$0.6 & 74.0$\pm$0.3 & 91.6$\pm$0.3 & 85.2\\
JAN~\cite{long2016deep} & 76.8$\pm$0.4 & 88.0$\pm$0.2 & 94.7$\pm$0.2 & 89.5$\pm$0.3 & 74.2$\pm$0.3 & 91.7$\pm$0.3 & 85.8\\
MADA~\cite{pei2018multi} & 75.0$\pm$0.3 & 87.9$\pm$0.2 & 96.0$\pm$0.3 & 88.8$\pm$0.3 & 75.2$\pm$0.2 & 92.2$\pm$0.3 & 85.8\\
CAN~\cite{zhang2018collaborative} & 78.2 & 87.5 & 94.2 & 89.5 & 75.8 & 89.2 & 85.8\\
\midrule
MRAN (IAM) & 76.2$\pm$0.7 & 88.4$\pm$0.5 & 91.4$\pm$0.2 & 84.2$\pm$0.1 & 69.2$\pm$0.2 & 88.6$\pm$0.3 & 83.0\\
MRAN (CMMD) & 78.7$\pm$0.2 & 91.1$\pm$0.2 & 94.2$\pm$0.4 & 88.9$\pm$0.1 & 75.1$\pm$0.3 & \textbf{93.1}$\pm$0.1 & 86.9\\
MRAN (CMMD+IAM) & \textbf{78.8}$\pm$0.3 & \textbf{91.7}$\pm$0.4 & 95.0$\pm$0.5 & \textbf{93.5}$\pm$0.4 & \textbf{77.7}$\pm$0.5 & \textbf{93.1}$\pm$0.3 & \textbf{88.3}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!th]
\small
\centering
\caption{Accuracy (\%) on Office-31 for unsupervised domain adaptation (ResNet)} \label{tab:office31}
\begin{tabular}{@{}cccccccc@{}}
\toprule
Method & A $\rightarrow$ W & D $\rightarrow$ W & W $\rightarrow$ D & A $\rightarrow$ D & D $\rightarrow$ A & W $\rightarrow$ A & Avg \\
\midrule
ResNet~\cite{he2016deep} & 68.4$\pm$0.5 & 96.7$\pm$0.5 & 99.3$\pm$0.1 & 68.9$\pm$0.2 & 62.5$\pm$0.3 & 60.7$\pm$0.3 & 76.1\\
TCA~\cite{pan2011domain} & 74.7$\pm$0.0 & 96.7$\pm$0.0 & 99.6$\pm$0.0 & 76.1$\pm$0.0 & 63.7$\pm$0.0 & 62.9$\pm$0.0 & 79.3\\
GFK~\cite{gong2012geodesic} & 74.8$\pm$0.0 & 95.0$\pm$0.0 & 98.2$\pm$0.0 & 76.5$\pm$0.0 & 65.4$\pm$0.0 & 63.0$\pm$0.0 & 78.8\\
DDC~\cite{tzeng2014deep} & 75.8$\pm$0.2 & 95.0$\pm$0.2 & 98.2$\pm$0.1 & 77.5$\pm$0.3 & 67.4$\pm$0.4 & 64.0$\pm$0.5 & 79.7\\
DAN~\cite{long2015learning} & 83.8$\pm$0.4 & 96.8$\pm$0.2 & 99.5$\pm$0.1 & 78.4$\pm$0.2 & 66.7$\pm$0.3 & 62.7$\pm$0.2 & 81.3\\
D-CORAL~\cite{sun2016deep} & 77.7$\pm$0.3 & 97.6$\pm$0.2 & 99.7$\pm$0.1 & 81.1$\pm$0.4 & 64.6$\pm$0.3 & 64.0$\pm$0.4 & 80.8\\
RevGrad~\cite{ganin2015unsupervised} & 82.0$\pm$0.4 & 96.9$\pm$0.2 & 99.1$\pm$0.1 & 79.7$\pm$0.4 & 68.2$\pm$0.4 & 67.4$\pm$0.5 & 82.2\\
JAN~\cite{long2016deep} & 85.4$\pm$0.3 & 97.4$\pm$0.2 & 99.8$\pm$0.2 & 84.7$\pm$0.3 & 68.6$\pm$0.3 & 70.0$\pm$0.4 & 84.3\\
MADA~\cite{pei2018multi} & 90.0$\pm$0.1 & 97.4$\pm$0.1 & 99.6$\pm$0.1 & \textbf{87.8}$\pm$0.2 & \textbf{70.3}$\pm$0.3 & 66.4$\pm$0.3 & 85.2\\
CAN~\cite{zhang2018collaborative} & 81.5 & \textbf{98.2} & 99.7 & 85.5 & 65.9 & 63.4 & 82.4\\
\midrule
MRAN (IAM) & 77.4$\pm$0.5 & 96.1$\pm$0.5 & 99.5$\pm$0.1 & 81.9$\pm$0.8 & 64.2$\pm$0.4 & 64.8$\pm$0.9 & 80.7\\
MRAN (CMMD) & 87.0$\pm$0.4 & 97.7$\pm$0.2 & \textbf{100.0}$\pm$0.0 & 85.8$\pm$0.5 & 67.3$\pm$0.1 & 66.2$\pm$0.2 & 84.0\\
MRAN (CMMD+IAM) & \textbf{91.4}$\pm$0.1 & 96.9$\pm$0.3 & 99.8$\pm$0.2 & 86.4$\pm$0.6 & 68.3$\pm$0.5 & \textbf{70.9}$\pm$0.6 & \textbf{85.6}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!th]
\small
\centering
\caption{Accuracy (\%) on Office-Home for unsupervised domain adaptation (ResNet)} \label{tab:officehome}
\begin{tabularx}{\textwidth}{cXXXXXXXXXXXXX}
\toprule
Method & A$\rightarrow$C & A$\rightarrow$P & A$\rightarrow$R & C$\rightarrow$A & C$\rightarrow$P & C$\rightarrow$R & P$\rightarrow$A & P$\rightarrow$C & P$\rightarrow$R & R$\rightarrow$A & R$\rightarrow$C & R$\rightarrow$P & Avg \\
\midrule
ResNet~\cite{he2016deep} & 48.5 & 68.3 & 75.4 & 53.8 & 64.4 & 66.1 & 52.7 & 42.8 & 74.1 & 65.3 & 49.6 & 79.7 & 61.1\\
DDC~\cite{tzeng2014deep} & 50.5 & 66.5 & 75.0 & 53.6 & 62.6 & 65.1 & 53.2 & 44.8 & 73.7 & 64.1 & 50.8 & 78.2 & 61.5\\
D-CORAL~\cite{sun2016deep} & 51.5 & \textbf{68.9} & \textbf{76.3} & 55.8 & 65.1 & 67.2 & 54.7 & 45.3 & 75.2 & 67.0 & 53.6 & 80.3 & 63.4\\
DAN~\cite{long2015learning} & 53.3 & 68.8 & 75.9 & 56.9 & 64.8 & 66.5 & 56.0 & 49.7 & 75.0 & 68.2 & 56.5 & 80.3 & 64.3\\
JAN~\cite{long2016deep} & 52.6 & \textbf{68.9} & \textbf{76.3} & \textbf{57.7} & 66.0 & 67.6 & 56.3 & 48.5 & 76.0 & 68.1 & 55.7 & 81.2 & 64.6\\
\midrule
MRAN (IAM) & 49.6 & 68.3 & 75.0 & 51.1 & 62.6 & 64.3 & 53.4 & 43.6 & 73.9 & 65.2 & 50.7 & 79.1 & 61.4\\
MRAN (CMMD) & 53.5 & 68.7 & 76.1 & 57.5 & 66.1 & 68.2 & 57.3 & 51.4 & 75.9 & 68.3 & 57.4 & 81.1 & 65.1\\
MRAN (CMMD+IAM) & \textbf{53.8} & 68.6 & 75.0 & 57.3 & \textbf{68.5} & \textbf{68.3} & \textbf{58.5} & \textbf{54.6} & \textbf{77.5} & \textbf{70.4} & \textbf{60.0} & \textbf{82.2} & \textbf{66.2}\\
\bottomrule
\end{tabularx}
\end{table}
\subsection{Analysis}
To study the learnt representations of our model, we denote by MRAN (r1) the representations extracted by substructure1 (conv1$\times$1, conv5$\times$5), by MRAN (r2) the representations extracted by substructure2 (conv1$\times$1, conv3$\times$3, conv3$\times$3), by MRAN (r3) the representations extracted by substructure3 (conv1$\times$1), and by MRAN (r4) the representations extracted by substructure4 (pool, conv1$\times$1). In addition, MRAN denotes the combined representations after the fully connected layer.
\begin{figure}[t!]
\centering
\subfigure[MRAN]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/MRANoutput5_all.eps}
\label{fig:4a}
\end{minipage}
}
\subfigure[MRAN(r1)]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/MRANoutput1_all.eps}
\label{fig:4b}
\end{minipage}
}
\subfigure[MRAN(r2)]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/MRANoutput2_all.eps}
\label{fig:4c}
\end{minipage}
}
\subfigure[MRAN(r3)]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/MRANoutput3_all.eps}
\label{fig:4d}
\end{minipage}
}
\subfigure[MRAN(r4)]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/MRANoutput4_all.eps}
\label{fig:4e}
\end{minipage}
}
\subfigure[DAN]{
\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=1.5in,height=1.5in]{figure/DDCall.eps}
\label{fig:5a}
\end{minipage}
}
\caption{Feature visualization: t-SNE of representations on source and target (a) MRAN; (b) MRAN (r1); (c) MRAN (r2); (d) MRAN (r3); (e) MRAN (r4); (f) DAN.}\label{fig:4}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[$\mathcal{A}$-distance]{
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=2.5in, height=1.5in]{figure/distance.eps}
\label{fig:5c}
\end{minipage}
}
\subfigure[Accuracy w.r.t $\lambda$]{
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=1.8in,height=1.5in]{figure/lingmindu.eps}
\label{fig:5b}
\end{minipage}
}
\caption{(a) $\mathcal{A}$-distance; (b) parameter sensitivity of $\lambda$. }
\label{fig:5}
\end{figure}
\textbf{Feature Visualization: } We visualize the network representations of task A $\rightarrow$ W learned by MRAN and DAN using t-SNE embeddings~\cite{donahue2014decaf} in Figure~\ref{fig:4} and Figure~\ref{fig:5a}.
Comparing the representations given by MRAN (r1), MRAN (r2), MRAN (r3), and MRAN (r4) in Figures~\ref{fig:4b}-~\ref{fig:4e}, the multiple representations extracted by different substructures have different distributions and different numbers of wrongly clustered points. All of these demonstrate that different neural structures have the ability to extract different representations and that the multiple representations carry different information. Comparing the representations given by MRAN and DAN in Figure~\ref{fig:4a} and Figure~\ref{fig:5a}, the combined representations given by MRAN in Figure~\ref{fig:4a} show that the target categories are discriminated much more clearly.
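Such embeddings can be reproduced along the following lines (a minimal sketch using scikit-learn; \texttt{src\_feats} and \texttt{tgt\_feats} are hypothetical arrays of extracted representations):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(src_feats, tgt_feats):
    # Embed source and target features jointly into 2-D.
    feats = np.vstack([src_feats, tgt_feats])
    emb = TSNE(n_components=2).fit_transform(feats)
    n_s = len(src_feats)
    plt.scatter(emb[:n_s, 0], emb[:n_s, 1], s=5, label="source")
    plt.scatter(emb[n_s:, 0], emb[n_s:, 1], s=5, label="target")
    plt.legend()
    plt.show()
\end{verbatim}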
\textbf{Distribution Discrepancy:} The domain adaptation theory~\cite{ben2010theory,mansour2009domain} suggests that the distribution discrepancy measure $\mathcal{A}$-distance, together with the source risk, will bound the target risk. Specifically, the proxy $\mathcal{A}$-distance is defined as $d_\mathcal{A} = 2(1 - 2\epsilon)$, where $\epsilon$ is the generalization error of a classifier (e.g. kernel SVM) trained on the binary problem of discriminating the source and target domain data. Figure~\ref{fig:5c} shows the results of $d_\mathcal{A}$ on tasks A $\rightarrow$ W, W $\rightarrow$ D with learnt representations of CNN, DAN and MRAN. The $d_\mathcal{A}$ values of the combined representations in Figure~\ref{fig:5c} are smaller than those of CNN, DAN, MRAN (r1), MRAN (r2), MRAN (r3) and MRAN (r4). All of these results demonstrate that the combined representations from multiple representations are more transferable than a single representation. In addition, these results also show that MRA, which aligns distributions of multiple representations extracted by a hybrid structure, can achieve better performance than previous single-representation adaptation methods.
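The proxy $\mathcal{A}$-distance is straightforward to estimate from pre-extracted features (a minimal sketch; we substitute a linear SVM for the kernel SVM for speed):
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def proxy_a_distance(src_feats, tgt_feats):
    # Label the domains, train a domain classifier, and
    # convert its test error eps into d_A = 2 * (1 - 2 * eps).
    X = np.vstack([src_feats, tgt_feats])
    y = np.hstack([np.zeros(len(src_feats)), np.ones(len(tgt_feats))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5)
    eps = 1.0 - LinearSVC().fit(X_tr, y_tr).score(X_te, y_te)
    return 2.0 * (1.0 - 2.0 * eps)
\end{verbatim}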
\textbf{Parameter Sensitivity:} We check the sensitivity of multiple adaptation loss parameter $\lambda$, which controls the relative weight for multiple adaptation loss. Figure~\ref{fig:5b} shows the performance of MRAN based on ResNet on tasks A $\rightarrow$ W and C $\rightarrow$ P by varying $\lambda \in \{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2 \}$. The accuracy of MRAN first increases and then decreases as $\lambda$ varies and displays as a bell-shaped curve. From the results, we can find a proper trade-off value about 0.5 to achieve good transfer performance.
\textbf{Time Complexity:} The CMMD loss involving pseudo labels and the IAM indeed need some extra computation. We conduct additional experiments to record the time consumption of ResNet, DAN, MRAN(IAM) and MRAN(CMMD) for each iteration. All the experiments are conducted on a GeForce GTX 1080 Ti GPU, and the average time of each iteration over 100 iterations is recorded. The results are listed as follows: ResNet (0.147s), DAN (0.277s), MRAN(IAM) (0.173s) and MRAN(CMMD) (0.291s). Comparing MRAN(IAM) with ResNet, we find that the IAM takes 0.025s more per iteration, which is very small. Comparing DAN with MRAN(CMMD), the only difference is that DAN uses MMD to align distributions while MRAN(CMMD) uses CMMD. MRAN(CMMD) spends 0.014s more per iteration than DAN, which is a common property of conditional alignment methods~\cite{pei2018multi}. Overall, though the use of IAM and CMMD slightly increases the time complexity, this is reasonable as it can greatly improve the performance.
\textbf{Insightful Findings:}
We get some findings from the experiments. (\textbf{1}) The IAM could extract multiple representations from images. The different representations could represent diverse abstract (underlying) views, and we can see that the t-SNE embeddings of the different representations differ in Figure 3. (\textbf{2}) The MRA method (our MRAN) could achieve better performance than the single-representation adaptation methods, as shown in Section 4.2, which also demonstrates the effectiveness of MRA. (\textbf{3}) Aligning conditional distributions is more effective than aligning marginal distributions.
\section{Introduction}\label{sec:intro}
As one of the fundamental technologies in computer vision, image classification has been widely researched. There are many applications of image classification, such as face recognition~\cite{parkhi2015deep}, handwritten recognition~\cite{lecun1990handwritten}, and human activity recognition~\cite{bulling2014tutorial}. To successfully construct an image classification system, a sufficient number of manually annotated images for each specific target domain is required beforehand. With a large amount of labeled training data and substantial computation resources, satisfying performance has recently been achieved by deep neural networks~\cite{simonyan2015very,he2016deep}.
Nevertheless, in real situations, it is usually impractical to obtain sufficient manually labeled training data for every new scenario. Moreover, it is often prohibitively difficult and expensive to obtain enough labeled data. To alleviate this problem, domain adaptation~\cite{pan2010survey,zhuang2015supervised}, which aims to adapt the feature representation learned in the source domain with rich label information to the target domain with less or even no label information, has received much attention in recent years.
Recent domain adaptation methods achieve remarkable results by embedding domain adaptation modules in the pipeline of deep feature learning to extract domain-invariant representations. This can generally be achieved by optimizing some measures of domain shift~\cite{quionero2009dataset,pan2010survey}, e.g., maximum mean discrepancy~\cite{tzeng2014deep,long2015learning}, correlation distances~\cite{sun2016return,sun2016deep}, or minimizing an approximate domain discrepancy distance through an adversarial objective with respect to a domain discriminator~\cite{ganin2015unsupervised,tzeng2017adversarial}.
Most of the recent deep domain adaptation methods are based on convolutional neural networks which have the ability to extract abstract representations from high-dimensional images. However, this feature extraction process might lose some important information. Hence, compared to the original images, the representations may only contain partial instead of complete information, e.g., only part of the saturation, brightness, and hue information. An intuitive example is given in Figure~\ref{fig1}. Figure~\ref{fig1}(a) is the original image, while Figures~\ref{fig1}(b)$\sim$(d) are the transformed forms. More importantly, we find that all transformed images only contain partial information, thus they may give us wrong or distorted facts of the real image. Therefore, we need to observe the objects from multiple points of view to get a comprehensive understanding.
\begin{figure}[t!]
\centering
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=1\linewidth]{figure/motivation.eps}
\end{minipage}
\caption{(a) is the original image, while the parts of information in (b), (c), (d) are captured from (a) by different structures. (b), (c), (d) only contain part of the saturation, brightness and hue information, respectively. (The original image is from the Office31 dataset~\cite{saenko2010adapting}.)}\label{fig1}
\end{figure}
The previous deep domain adaptation methods mainly align the distributions of representations extracted from the source and target domain data by a single structure, i.e., they are single-representation adaptation methods. Similar to the transformed images in the example, the representations only contain partial information, so the alignment also focuses on partial information, which might lead to unsatisfying transfer learning performance. To fully understand the objects, more representations should be considered when aligning the distributions. To this end, different structures of convolutional neural networks provide an option to extract multiple representations from images. Along this line, we propose Multi-Representation Adaptation (MRA), which tries to align the distributions of the source and target domains using multiple representations extracted by a hybrid neural structure.
Specifically, we propose a Multi-Representation Adaptation Networks (MRAN) to align distributions of multiple representations in a domain-specific layer across domains for unsupervised domain adaptation. To enable MRA, we propose a hybrid neural structure named Inception Adaptation Module (IAM) to extract multiple representations from images. A key novelty over previous single-representation adaptation methods is the capability of MRAN to learn multiple domain-invariant representations which contain more information. Furthermore, the nonparametric Maximum Mean Discrepancy~\cite{gretton2012kernel} (MMD) is extended to compute the adaptation loss based on conditional distribution, and integrated into deep neural networks. The IAM method can be implemented by most feed-forward models and trained efficiently using standard back-propagation. Extensive experiments performed on three benchmark datasets show that MRAN can achieve remarkable performance compared with state-of-the-art competitors.
The contributions of this paper are summarized as follows. (1) To the best of our knowledge, we are the first to learn multiple different domain-invariant representations by Inception Adaptation Module (IAM) for cross-domain image classification. (2) A novel Multi-Representation Adaptation Network (MRAN) is proposed to align distributions of multiple different representations which might contain more information about the images. (3) MMD is extended to measure the discrepancy of conditional distributions across different domains in deep neural networks. (4) Finally, we conduct extensive experiments to validate the effectiveness of MRAN.
\section{Multi-Representation Adaptation Networks}
\label{sec:model}
In unsupervised domain adaptation, we are given a source domain $\mathcal{D}_s=\{(\mathbf{x}^s_i,y^s_i)\}^{n_s}_{i=1}$ of $n_s$ labeled examples where $y^s_i \in \{ 1, 2, \dots, C \}$ and a target domain $\mathcal{D}_t=\{ \mathbf{x}^t_j\}^{n_t}_{j=1}$ of $n_t$ unlabeled examples. The source domain and the target domain are sampled from different probability distributions $P$ and $Q$ respectively, and $P \neq Q$. The goal is to design a deep neural network $y=f(\mathbf{x})$ that formally reduces the shifts of the distributions across domains and enables learning multiple transferable representations, such that the target risk $R_t(f)=\mathbb{E}_{(\mathbf{x}, y)\sim Q}[f(\mathbf{x}) \neq y]$ can be minimized by minimizing the source risk and domain discrepancy.
In recent years, deep transfer networks have achieved remarkable results~\cite{long2015learning,tzeng2017adversarial}. We call these methods single-representation adaptation methods since they only align the distributions of representations extracted by a single structure. However, single-representation adaptation methods focus on the partial information of the samples, as mentioned above, and thus might not work well for diverse scenarios. Compared to single-representation methods, which only contain partial information, multi-representation models might cover more information. In other words, we aim to learn multiple domain-invariant representations. Along this line, we propose a hybrid structure named Inception Adaptation Module (IAM) which contains multiple substructures to extract multiple representations from images.
To achieve MRA, it is necessary to minimize the discrepancy between the distributions of the multiple representations extracted from the source and target domains. To this end, maximum mean discrepancy (MMD)~\cite{tzeng2014deep,long2015learning} is extended to conditional maximum mean discrepancy (CMMD), which computes the discrepancy of the conditional distributions for multiple representations. Based on IAM and CMMD, we propose the Multi-Representation Adaptation Network (MRAN). Note that, different from previous methods minimizing the discrepancy between the distributions of a single representation, MRAN can align the distributions of multiple representations.
\subsection{Inception Adaptation Module}
Similar structures are adopted for recent convolutional neural networks, e.g., ResNet~\cite{he2016deep}, DenseNet~\cite{huang2017densely}, and generally the structure $y = f(\mathbf{x})$ is divided into three parts $g(\cdot)$, $h(\cdot)$, $s(\cdot)$. The first part is the convolutional neural network $g(\cdot)$, which is used to convert high-pixel images to low-pixel ones; the second part $h(\cdot)$ is the global average pooling to extract representations from low-pixel images; the third part is the classifier $s(\cdot)$ to predict labels. Hence, $y = f(\mathbf{x})$ is reformulated as $y = (s \circ h \circ g)(\mathbf{x})$ ( $(h \circ g)(\mathbf{x}) = h( g (\mathbf{x}) )$ ).
Some recent deep transfer methods~\cite{long2016deep,pei2018multi} use the activations of the global average pooling layer as image representations and then align the distributions of the single representation. However, this single-representation adaptation manner might miss some important information for further performance improvement. Thus it is necessary to learn multiple domain-invariant representations by minimizing the discrepancy between the distributions of multiple representations.
To learn multiple different domain-invariant representations, the easiest way is to train multiple different convolutional neural networks. However, it would be very time-consuming to train multiple convolutional neural networks. It is well known that different structures could extract different representations from images. Hence, we use a hybrid structure, IAM, consisting of multiple substructures to extract multiple representations from low-pixel images. As an intuitive example shown in Figure~\ref{network}, IAM has multiple substructures $h_1(\cdot), \dots, h_{n_r}(\cdot)$ ($n_r$ is the number of substructures), which are different from each other. With the IAM replacing the global average pooling, multiple representations $(h_1 \circ g)(\mathbf{X}), \dots, (h_{n_r} \circ g)(\mathbf{X})$ can be obtained.
Comparing to the single representation, the multiple representations could cover more information. Hence, aligning the distributions of the multiple representations with more information could achieve better performance.
The adaptation task could be achieved by minimizing the discrepancy of distributions based on the multiple representations:
\begin{equation}
\min_{f} \sum_i^{n_r} \hat{d}((h_i \circ g)(\mathbf{X}_s), (h_i \circ g)(\mathbf{X}_t) ),
\label{adaploss}
\end{equation}
where $\mathbf{X}$ is the set of $\mathbf{x}$ and $\hat{d}(\cdot, \cdot)$ is an estimator of discrepancy between two distributions. To achieve the classification task, the concatenated vectors $[(h_1 \circ g)(\mathbf{X}); \dots; (h_{n_r} \circ g)(\mathbf{X})]$ are put into the classifier $s(\cdot)$ which contains a fully connected layer and a softmax layer. The fully connected layer is mainly used to recombine the multiple representations, and the softmax layer is used to output the predicted labels. Finally, the neural network $y = f(\mathbf{x})$ with IAM is reformulated as:
\begin{equation}
y = f(\mathbf{x}) = s( [(h_1 \circ g)(\mathbf{X}); \dots; (h_{n_r} \circ g)(\mathbf{X})] ).
\end{equation}
Different from previous single-representation adaptation networks, the deep transfer networks with IAM are capable of learning multiple domain-invariant representations; the IAM is a multi-representation extractor. Moreover, the multiple domain-invariant representations can cover more information.
It is worth noting that the IAM can be implemented in most feed-forward models: one simply replaces the last global average pooling layer with IAM.
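As an illustration, a minimal PyTorch sketch of IAM with four substructures of the kind used in this work could look as follows (channel widths, paddings and the final pooling are illustrative assumptions, not the exact configuration):
\begin{verbatim}
import torch.nn as nn

class IAM(nn.Module):
    # Four parallel substructures, each followed by global pooling.
    def __init__(self, in_ch=2048, out_ch=256):
        super().__init__()
        self.h1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1),
                                nn.Conv2d(out_ch, out_ch, 5, padding=2))
        self.h2 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1),
                                nn.Conv2d(out_ch, out_ch, 3, padding=1),
                                nn.Conv2d(out_ch, out_ch, 3, padding=1))
        self.h3 = nn.Conv2d(in_ch, out_ch, 1)
        self.h4 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, out_ch, 1))
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # One pooled representation vector per substructure.
        return [self.pool(h(x)).flatten(1)
                for h in (self.h1, self.h2, self.h3, self.h4)]
\end{verbatim}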
\begin{figure}[t!]
\centering
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=4.5in]{figure/networks.eps}
\end{minipage}
\caption{Multi-Representation Adaptation Network (MRAN) aligns the conditional distributions of multiple representations. Inception Adaptation Module (IAM) could extract multiple representations from low-pixel images. By minimizing CMMD loss, the conditional distributions between the source and target domains are drawn close.}\label{network}
\end{figure}
\subsection{Conditional Maximum Mean Discrepancy}
To compute Equation~\ref{adaploss}, a major issue is to choose a proper distance measure. We first introduce the non-parametric distance estimator Maximum Mean Discrepancy (MMD)~\cite{gretton2012kernel}, which has been widely used to measure the discrepancy of marginal distributions:
\begin{equation}
\begin{split}
\hat{d}_\mathcal{H}(\mathbf{X}_s,\mathbf{X}_t)=\left\| \frac{1}{n_s} \sum_{\mathbf{x}_i\in \mathcal{D}_{\mathbf{X}^s}}\phi (\mathbf{x}_i)-\frac{1}{n_t} \sum_{\mathbf{x}_j\in \mathcal{D}_{\mathbf{X}^t}}\phi (\mathbf{x}_j)\right\|^2_\mathcal{H}.
\label{unbiased-mmd}
\end{split}
\end{equation}
By minimizing Equation~\ref{unbiased-mmd}, the marginal distributions between the source and target domains are drawn close.
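In practice the feature map $\phi$ is never formed explicitly; with the kernel trick, Equation~\ref{unbiased-mmd} can be estimated as in the following sketch (a single Gaussian kernel with fixed bandwidth; the median-distance heuristic used in our experiments would replace the fixed \texttt{sigma}):
\begin{verbatim}
import torch

def gaussian_kernel(a, b, sigma):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    # Squared MMD between samples xs (n_s, d) and xt (n_t, d).
    k_ss = gaussian_kernel(xs, xs, sigma).mean()
    k_tt = gaussian_kernel(xt, xt, sigma).mean()
    k_st = gaussian_kernel(xs, xt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
\end{verbatim}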
According to~\cite{elhamifar2013sparse}, the data samples from the same class should lie in the same subspace, even if they belong to different domains. Hence, we reduce the difference in the conditional distributions instead of the marginal distributions. Indeed, minimizing the discrepancy between the conditional distributions $P_s(y_s|\mathbf{x}_s)$ and $Q_t(y_t|\mathbf{x}_t)$ is crucial for robust distribution adaptation~\cite{sun2011two}. Unfortunately, it is nontrivial to match the conditional distributions, even by exploring sufficient statistics of the distributions, since there are no labeled data in the target domain, i.e., $Q(y_t,\mathbf{x}_t)$ cannot be modeled directly.
Fortunately, the output of the deep neural network $\hat{y}_i^t=f(\mathbf{x}_i^t)$ could be used as the pseudo label for data in target domain. Since the posterior probabilities $P(y_s|\mathbf{x}_s)$ and $Q(y_t|\mathbf{x}_t)$ are hard to represent~\cite{long2013transfer}, we resort to explore the sufficient statistics of class-conditional distributions $P(\mathbf{x}_s|y_s = c)$ and $Q(\mathbf{x}_t|y_t = c)$ instead w.r.t. each class $c\in \{1,\dots,C\}$. Now with the true labels of source domain data and pseudo labels of target domain data, we can essentially match the class-conditional distributions $P(\mathbf{x}_s|y_s = c)$ and $Q(\mathbf{x}_t|y_t = c)$. Here we modify MMD to measure the distance between the class-conditional distributions $P(\mathbf{x}_s|y_s = c)$ and $Q(\mathbf{x}_t|y_t = c)$, called CMMD:
\begin{equation}
\begin{split}
\hat{d}_\mathcal{H}(\mathbf{X}_s,\mathbf{X}_t)=\frac{1}{C} \sum^C_{c=1} \left\| \frac{1}{n_s^{(c)}} \sum_{\mathbf{x}_i^{s(c)}\in \mathcal{D}_{\mathbf{X}^s}^{(c)}}\phi (\mathbf{x}_i^{s(c)})-\frac{1}{n_t^{(c)}} \sum_{\mathbf{x}_j^{t(c)}\in \mathcal{D}_{\mathbf{X}^t}^{(c)}}\phi (\mathbf{x}_j^{t(c)})\right\|^2_\mathcal{H}.
\label{cmmd}
\end{split}
\end{equation}
By minimizing Equation~\ref{cmmd}, the conditional distributions between the source and target domains are drawn close. Though we adopt the pseudo labels of the target domain, we expect to iteratively improve the labeling quality during the optimization.
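Reusing the \texttt{mmd2} helper sketched above, CMMD amounts to a class-wise average (a minimal sketch; skipping classes absent from either mini-batch is an implementation assumption):
\begin{verbatim}
def cmmd2(xs, ys, xt, yt_pseudo, num_classes, sigma=1.0):
    # Average MMD^2 over classes present in both domains.
    total, used = 0.0, 0
    for c in range(num_classes):
        xs_c, xt_c = xs[ys == c], xt[yt_pseudo == c]
        if len(xs_c) > 0 and len(xt_c) > 0:
            total = total + mmd2(xs_c, xt_c, sigma)
            used += 1
    return total / max(used, 1)
\end{verbatim}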
\subsection{Multi-Representation Adaptation Network}
To enable effective unsupervised domain adaptation, we propose the Multi-Representation Adaptation Network (MRAN) as shown in Figure~\ref{network}, which aligns the distributions of multiple representations extracted by IAM in an end-to-end deep learning model. Note that the features in the lower layers of the network are transferable and hence will not require further distribution matching~\cite{yosinski2014transferable}. The loss of MRAN is formulated as:
\begin{equation}
\min_{f} \frac{1}{n_s} \sum^{n_s}_{i=1} J(f( \mathbf{x}^s_i ), \mathbf{y}^s_i) + \lambda \sum_i^{n_r} \hat{d}( (h_i \circ g)(\mathbf{X}_s), (h_i \circ g)(\mathbf{X}_t) ),
\end{equation}
where $J(\cdot,\cdot)$ is the cross-entropy loss function (classification loss), $\hat{d}(\cdot,\cdot)$ is the domain adaptation loss calculated by Equation~\ref{cmmd}, and $\lambda > 0$ is the trade-off parameter. We implement MRAN based on ResNet and replace the global average pooling by IAM. Specifically, these layers in the network are tailored to task-specific structures, which are adapted by minimizing classification error and CMMD.
Note that, training deep CNN requires a large amount of labeled data, which is prohibitive for many domain adaptation applications, so we start with the CNN pre-training on ImageNet2012 data and fine-tune it as~\cite{long2016deep}. The training of MRAN mainly follows standard mini-batch stochastic gradient descent(SGD) algorithm. In each mini-batch, we sample the same number of source domain data and target domain data to eliminate the bias caused by domain size.
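One SGD step of MRAN can then be sketched as follows (assuming hypothetical \texttt{backbone}, \texttt{iam} and \texttt{classifier} modules together with the \texttt{cmmd2} helper above):
\begin{verbatim}
import torch
import torch.nn.functional as F

def mran_step(backbone, iam, classifier, xs, ys, xt, lam, num_classes):
    reps_s = iam(backbone(xs))          # n_r source representations
    reps_t = iam(backbone(xt))          # n_r target representations
    logits_s = classifier(torch.cat(reps_s, dim=1))
    logits_t = classifier(torch.cat(reps_t, dim=1))
    yt_pseudo = logits_t.argmax(dim=1)  # pseudo labels for the target
    adapt = sum(cmmd2(rs, ys, rt, yt_pseudo, num_classes)
                for rs, rt in zip(reps_s, reps_t))
    loss = F.cross_entropy(logits_s, ys) + lam * adapt
    loss.backward()                     # optimizer.step() follows
    return loss
\end{verbatim}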
\section{Related Work}\label{sec:relatedWork}
Our work mainly belongs to domain adaptation, and we will introduce the related work in three aspects: image classification, domain adaptation, and multi-view learning.
\textbf{Image Classification}. As one of the fundamental technologies in computer vision, image classification has been widely researched. On the basis of the assumptions made about the parameters of the data, image classifiers can be divided into parametric and non-parametric classifiers. For parametric classifiers, such as maximum likelihood and linear discriminant analysis, parameters like the mean vector and covariance matrix are generated from training samples. Non-parametric classifiers, such as neural networks, SVMs, and decision trees, do not make use of statistical parameters to calculate class separation. Recently, deep neural networks~\cite{simonyan2015very,he2016deep,wang2018scene,wang2018locality} have achieved remarkable performance for image classification. With the guidance of the human visual system (HVS), Wang et al.~\cite{wang2018scene} explore the attention mechanism and propose a novel end-to-end attention recurrent convolutional network (ARCNet) for scene classification. LSLRR~\cite{wang2018locality} improves the classical low-rank representation with a locality constraint criterion and a structure preserving strategy. However, these methods assume the training and test sets have the same distributions, and hence they cannot solve cross-domain problems.
\textbf{Domain Adaptation}. Recent years have witnessed many approaches to solve the visual domain adaptation problem, which is also commonly framed as the visual dataset bias problem~\cite{quionero2009dataset,pan2010survey,zhuang2015survey}. Previous shallow methods for unsupervised adaptation include re-weighting the training data so that they could more closely reflect those in the test distribution~\cite{jiang2007instance}, and finding a transformation in a lower-dimensional manifold that draws the source and target subspaces closer~\cite{gong2012geodesic,pan2011domain}.
Most existing methods learn a shallow representation model to minimize domain discrepancy, which can not suppress domain-specific explanatory factors of variations. Deep networks learn abstract representations that disentangle the explanatory factors of variations behind data~\cite{bengio2013representation} and extract transferable factors underlying different populations~\cite{glorot2011domain,oquab2014learning}. Thus deep neural networks have been explored for domain adaptation~\cite{tzeng2014deep,ganin2015unsupervised,long2015learning,zhuang2015supervised,sun2016deep,tzeng2017adversarial,zhu2019adaptively}, where significant performance gains have been witnessed compared to prior shallow transfer learning methods.
The main strategy of deep transfer networks is to guide feature learning by minimizing the difference between the source and target distributions.
Some recent works bridge deep learning and domain adaptation~\cite{long2015learning,long2016deep,ganin2015unsupervised,pei2018multi,tzeng2015simultaneous,tzeng2017adversarial}, which extend deep convolutional neural networks (CNNs) to domain adaptation. These works are mainly divided into two classes: embedding methods, which add adaptation layers through which the embeddings of distributions are matched~\cite{tzeng2014deep,long2015learning,long2016deep,zhu2019aligning}, and adversarial methods, which add a subnetwork as a domain discriminator while the deep features are learned to confuse the discriminator in a domain-adversarial training paradigm~\cite{bousmalis2017unsupervised,ganin2016domain,hoffman2017cycada,liu2016coupled,saito2018maximum,tzeng2017adversarial,zhang2018collaborative,kang2018deep,kumar2018co}. Recent related work extends the adversarial methods in a generative adversarial way~\cite{bousmalis2017unsupervised}. Besides these two mainstreams, there are diverse methods to learn domain-invariant features: DRCN~\cite{ghifary2016deep} reconstructs features to images and makes the transformed images similar to the original images; D-CORAL~\cite{sun2016deep} ``recolors'' whitened source features with the covariance of features from the target domain. All of these methods focus on aligning distributions of representations extracted by a single structure. However, the representations might only contain partial information. Our MRA could cover more information by aligning distributions of multiple representations extracted by a hybrid structure. Therefore, the representation capability can be enhanced.
\textbf{Multi-view learning} is concerned with the problem of machine learning from data represented by multiple distinct feature sets. The recent emergence of this learning mechanism is largely motivated by the property of data from real applications where examples are described by different feature sets or different `views', and multi-view learning has attracted considerable interest over the past decades~\cite{blum1998combining,gonen2011multiple,yarowsky1995unsupervised}. Different from multi-view learning, which needs data represented by multiple distinct feature sets, multi-representation learning focuses on extracting multiple representations from a single view of data by a hybrid structure.
\section{Introduction}
While the broad emission-line region (BELR) in active galactic nuclei (AGN) remains an enigma, in recent years there has been considerable focus on radiation line driven and magnetohydrodynamically driven winds from an accretion disk \citep[e.g.,][]{bp82,kk94,mcgv95,psk00,everett05}
as a key component. The existence of such winds has been demonstrated for accretion disks in cataclysmic variable star systems and hydrodynamic simulations show that the phenomenon scales to the much larger quasar black hole masses \citep[e.g.,][]{psk00}; however, the detailed gas and kinematic structure of the wind remains uncertain. This is an acute problem, which is important to our understanding of black hole growth and its influence on the evolution of the host galaxy, especially if such winds are a significant source of AGN ``feedback'' \citep[e.g.,][]{hhc+06,cop10}.
Perhaps the most important/most common wind diagnostic is the emission-line blueshift \citep{gas82}, which we have studied for a large sample of quasars \citep{rvr+02,Richards2011} and in the context of the narrow-line Seyfert 1 (NLS1) class \citep{lei01,Leighly04}. Specifically, high-ionization emission lines, like \ion{C}{4}\ $\lambda$\,1549 are found to peak at wavelengths blueward of their expected laboratory wavelength when referenced to low ionization lines (which are more indicative of the quasar systemic redshift; e.g., \citealt{tf92,HW10}). As far back as the early 1980s, this ``\ion{C}{4} blueshift'' had been suggested as the signature of an outflow \citep{gas82,Wilkes84}. While the exact physical origin of this blueshift is not agreed upon \citep{gaskell09}, one possibility is that it is related to the strength of the radiation pressure compared with the black hole mass and is sensitive to the Eddington accretion rate \citep{sbm+07}.
It may be that NLS1s are the low-luminosity, local analogues of large \ion{C}{4}\ blueshift quasars. If the typically steep hard-band photon indices of NLS1s ($\Gamma>2$) are signatures of high Eddington accretion rates \citep{pdo95,bme97,Leighly99},
then the same might be the case for large \ion{C}{4}\ blueshift quasars \citep[e.g.,][]{wwz+11} since a high accretion rate for a given mass is expected to be more conducive to wind driving through radiation pressure on spectral lines \citep{psd98}. Indeed, the common link between large \ion{C}{4}\ blueshift quasars, broad absorption line quasars (BALQSOs; \citealt{wmf+91}), and traditionally defined NLS1s \citep{op85} is likely a strong radiation-pressure driven wind as all three categories exhibit wind features in emission and/or absorption \citep{rvr+02,rrh+03,Leighly04} and may tend toward high $L/L_{\rm Edd}$ \citep[e.g.,][]{bg92,bor02,gbc+07}.
The premise of a radiation line driven wind \citep[e.g.,][]{ls70,cak75} is fairly simple and has applications to quasars \citep[e.g.,][]{mcgv95,psk00}. The integrated opacity in ultra-violet (UV) bound-bound transitions accelerates gas as photons at specific wavelengths are scattered by ions in the gas. As the gas velocity increases, the Doppler shift makes photons in new wavelength ranges able to contribute to the acceleration. This process allows a large amount of momentum to be extracted from continuum and emission-line photons, thus driving an outflow from the accretion disk.
For this process to work, the quasar must reduce its X-ray continuum such that
the ions which absorb in the UV are not destroyed by over-ionization.
To some extent, we know that outflows driven at least in part by radiation pressure must be occurring in BALQSOs because we see large equivalent width absorption troughs where both energy and momentum have been removed from the continuum \citep{wmf+91,akb+95}.
In this paper, we follow-up on the work of \citet{rvr+02}, \citet{grh+05}, and \citet{Richards2011} using SDSS quasars at $z>1.54$ where the presence of \ion{C}{4}\ emission in the optical provides a powerful diagnostic of accretion disk physics. In particular, \citet{Richards2011} showed that a number of different emission-line features were consistent with a two-component disk+wind model of the BELR \citep[e.g.,][]{cdm+88,Leighly04,Collin06}. Emission-line features suggestive of a weaker than average ionizing spectrum (relative to the UV) were found in quasars with weak, highly blueshifted \ion{C}{4}\ emission. On the other hand, quasars with emission-line features that indicate a strong ionizing spectrum were found to have very strong \ion{C}{4}\ emission lines at the systemic redshift.
These extrema can be crudely categorized respectively as being dominated by a ``wind'' component and by a ``disk'' component of the BELR, although it seems that both components may be present in all objects. While some of the differences seen in the BELR are likely derived from intrinsic differences in the spectral energy distribution (SED; e.g., a large {\em intrinsic} UV to X-ray flux ratio is needed to produce a strong radiation line driven wind; \citealt{mcgv95,pk04}), processing of the ionizing continuum {\em through} the wind can have a significant effect on the emission-line features that arise within the disk component \citep{Leighly04,lc07}. Here we explore the X-ray properties of quasars (such as the optical/UV to X-ray flux ratio, $\alpha_{\rm ox}$) over the full range of \ion{C}{4}\ properties to test whether the extrema in \ion{C}{4}\ emission-line properties indeed trace extrema in the ionizing continuum.
While our work focuses specifically on the \ion{C}{4}\ emission line, our results should be viewed in the context of the broader ``eigenvector 1 (EV1)'' literature that stemmed from the ground-breaking work of \citet{bg92}. It is beyond the scope of this paper to provide a full review of this literature, but a newcomer to the field would be well served by a review of \citet{smd00} and \citet{bf99} and references therein; Table~2 in the latter nicely summarizes a broad range of correlated emission and continuum properties of quasars. While much of the EV1 work is founded in low-redshift quasars and investigations of optical emission lines, investigations that bridge the gap to high-redshift quasars and UV emission lines include \citet{msd+96}, \citet{wzg96}, and \citet{wlb+99}. More recent work focused on \ion{C}{4}\ includes \citet{bms+04}, \citet{bl04}, \citet{bl05}, \citet{sbm+07}, \citet{msn+10}, and \citet{wwz+11}. Lastly, \citet{Green96} and \citet{lfe+97} are among those notable for making connections between optical/UV emission lines and X-ray properties of quasars.
This paper is organized as follows. In Section~\ref{sec:data} we describe the data used in our analysis. In Section~\ref{sec:analysis} we investigate the range of X-ray properties as a function of \ion{C}{4}\ emission-line parameters. In Section~\ref{sec:discussion} we discuss the resulting SEDs. Finally, our conclusions are presented in Section~\ref{sec:conclusions}. Equivalent widths are given in units of rest-frame \AA, and velocities in km\,s$^{-1}$ with a sign convention such that positive velocities represent outflows in the quasar frame. Spectral indices are given according to $f_{\nu} \propto \nu^{\alpha}$ throughout, such that $\alpha$ represents the local slope of the SED in a log-log plot.
\section{Data}
\label{sec:data}
Our investigation is based on archival data described in Section~\ref{sec:archive} and new {\em Chandra} data described in Section~\ref{sec:newdata}. Many of the selection criteria used are common to all samples and are summarized here. In general, we have limited ourselves to quasars included in the \citet{srh+10} quasar catalog from the 7th Data Release (DR7; \citealt{aaa+09}) of the Sloan Digital Sky Survey (SDSS; \citealt{yaa+00}) such that we can make use of the SDSS spectroscopy of these sources. The exception is for some PG/BQS \citep{sg83} quasars where there exists space-based UV spectra of the \ion{C}{4}\ region.
We use the rest-frame \ion{C}{4}\ EQW and \ion{C}{4}\ blueshift as our main diagnostics. We have chosen not to include the commonly-used \ion{C}{4}\ FWHM measurement in our analysis as that parameter is difficult to interpret in the disk+wind model of the \ion{C}{4}\ emission region that we have adopted in \citet{Richards2011}. This concern is supported by the work of \citet{wwz+11}, who found that \ion{C}{4}\ has contributions from both an outflowing and a gravitationally bound component (the relative contributions of which may be dependent on the Eddington ratio). \citet{sgs+08} showed that \ion{C}{4}\ FWHM provides a biased estimator of the black hole mass, but argued that this bias can be calibrated out in the ensemble average.
Except for the PG quasars, we matched the objects to the database that we used in \citet{Richards2011} and extracted our own values for the \ion{C}{4}\ line parameters and corrected SDSS redshifts as given by \citet{HW10}\footnote{Redshifts for all the quasars included in the Schneider et al.\ (2010)
DR7 quasar catalogue can be obtained from http://www.sdss.org/dr7/products/value\_added/index.html\#quasars .}. BALQSOs and radio-loud (RL) quasars were excluded as the former are known to be absorbed in the X-ray \citep[e.g.,][]{gm96,gbc+02} and the latter are known to have jet-enhanced X-ray emission \citep[e.g.,][]{wtg+87,mbs+11}. BALQSOs were taken from \citet{Allen10} and RL quasars were defined according to $\log L_{\rm 20\,cm} >32.0\; {\rm ergs\,s^{-1}\,cm^{-2}\,Hz}$ \citep[e.g.,][]{gkm+99}, or (for the PG quasars) $\log(R)>1$; see \citet{Richards2011}.
The following criteria were applied to limit the samples to objects with robust \ion{C}{4}\ measurements from SDSS that are not significantly reddened (see \citealt{Richards2011} for more details): $z_{\rm em}>1.54$, EQW$_{CIV} > 5$\,\AA\ (rest frame), $\alpha_{\rm UV} >-9$ (an error code), $\sigma_{\lambda CIV} < 10$\,\AA, FWHM$_{CIV} > 1000$\,km\,s$^{-1}$, FWHM$_{CIV} > 2\sigma_{\rm FWHM_{CIV}}$, EQW$_{CIV} > 2\sigma_{EQW_{CIV}}$, and $\Delta (g-i)\le0.3$ \citep{rhv+03}. We further excluded objects with error codes as noted in the papers below.
Finally, given the heterogeneous nature of the data set, we do not attempt to include objects with X-ray non-detections using partial correlation analysis \citep[e.g.,][]{ifn86}; however, this choice imposes a bias that should not be completely neglected.
Table~\ref{tab:tab1} lists all of the data sources, the number of objects matched to our database of SDSS-DR7 spectral properties, the number removed because they are BALs, RL, outside of the redshift range covered (\ion{C}{4}\ must be visible in the SDSS spectra), or have optical properties that do not meet the above criteria. Objects can be rejected for more than one reason.
Thus, we also tabulate the final number of objects kept (after resolving duplicate entries [in favor of the most recent data] between the samples).
Finally, we tabulate the redshift and $l_{\rm uv}$ range of the objects kept from each sample. The objects from \citet{ssb+06} are not tabulated in Table~\ref{tab:tab1} as they are not matched to the SDSS database; see below.
In the end, we have a sample of 409 unique quasars.
Figure~\ref{fig:distribution} shows the distribution of the objects in \ion{C}{4}\ EQW-blueshift parameter space.
Of these objects, 164 have values for the X-ray photon index, $\Gamma$; their distribution is shown in Figure~\ref{fig:gamdistrb}.
\subsection{Archival Data}
\label{sec:archive}
In \citet{grh+05} we used {\em Chandra} to explore the X-ray properties of SDSS quasars at the extrema of the \ion{C}{4}\ blueshift distribution; archival observations filled in the blueshift distribution and provided additional objects at the extrema. In the interim many more investigations of (largely) serendipitous quasar observations have been conducted by a number of groups (culminating in the {\em Chandra} Source Catalog\footnote{http://cxc.harvard.edu/csc} project), greatly simplifying the use of archival data. As such, in addition to new {\em Chandra} observations described below, this investigation takes advantage of archival X-ray data from the following sources: \citet{grh+05}, \citet{sbs+05}, \citet{ssb+06}, \citet{kbs+07}, \citet{jbs+07}, \citet{gbs08}, \citet{gar+09}, and \citet{yer09}. See \citet{wbh+11} for an investigation of objects even more extreme than considered herein.
The values taken from the following papers were $L_{2500}$, $\alpha_{\rm ox}$, $\Delta \alpha_{\rm ox}$, and $\Gamma$, where $\alpha_{\rm ox} \equiv 0.384 \times \log(f_{2\,{\rm keV}}/f_{2500})$ parametrizes the flux ratio between the X-ray portion of the SED (at $2\,{\rm keV}$) and the optical/UV portion (at $2500\,{\rm \AA}$). We will follow standard convention and abbreviate $\log L_{2500}$ as $l_{\rm uv}$. $\Delta \alpha_{\rm ox}$\ is the luminosity-corrected value of $\alpha_{\rm ox}$, defined by $\Delta \alpha_{\rm ox} \equiv \alpha_{\rm ox} - \alpha_{\rm ox}(L_{2500})$ using the $L_{\rm UV}$--$\alpha_{\rm ox}$ relationship from \citet{jbs+07}. Our sign convention is such that for both $\alpha_{\rm ox}$\ and $\Delta \alpha_{\rm ox}$, more negative values indicate (relatively) weaker X-ray sources. $\Gamma$ is the X-ray photon index, which is related to the standard spectral index in the X-ray regime by $\Gamma\equiv1-\alpha_{\rm x}$.
If $L_{2500}$ was not given, we instead used $L_{2800}$ from our database. For a power-law spectral index of $\alpha_{\rm opt}=-0.44$ \citep{vrb+01}, $\log L_{2500} = \log L_{2800}-0.02$ (and $\log L_{2500} = \log L_{1550}+0.09$). While our 2800\,\AA\ luminosities are continuum luminosities and $L_{2500}$ typically includes emission-line flux, the difference is negligible when it comes to computing $\alpha_{\rm ox}$\ (a 10\% line contribution changes $\log L_{2500}$ by only 0.04 dex, and hence $\alpha_{\rm ox}$\ by $\sim0.02$). If $\Delta \alpha_{\rm ox}$\ was not given, we derived it from $L_{2500}$ and $\alpha_{\rm ox}$.
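For concreteness, the following minimal Python sketch (with our own function names; only the constants quoted above are used) collects these conversions:
\begin{verbatim}
import math

def alpha_ox(log_L2keV, log_L2500):
    # 0.384 = 1/2.605, the inverse of the ~2.6 decade
    # frequency baseline between 2500 A and 2 keV
    return 0.384 * (log_L2keV - log_L2500)

def log_L2500_from(log_L, lam_A, alpha_opt=-0.44):
    # shift a monochromatic luminosity at wavelength lam_A
    # (Angstroms) to 2500 A, assuming f_nu ~ nu^alpha_opt
    return log_L + alpha_opt * math.log10(lam_A / 2500.0)

# reproduces the offsets quoted above:
print(round(log_L2500_from(31.0, 2800.0) - 31.0, 2))  # -0.02
print(round(log_L2500_from(31.0, 1550.0) - 31.0, 2))  # +0.09
\end{verbatim}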
Only some of the samples included measurements of $\Gamma$.
\citet{sbs+05} cataloged 155
SDSS-DR2 quasars in {\em ROSAT} medium-deep fields.
In all, 86\% were detected in the X-ray. The {\em ROSAT} objects had an average exposure time of 16.7\,ks and were not targeted by SDSS solely because of their X-ray detections; only four {\em ROSAT} detections were from pointed observations of the quasars themselves. From their Table 1 we extracted $L_{2500}$ and $\alpha_{\rm ox}$. There is no information on $\Gamma$ for these data. All other parameters (e.g., \ion{C}{4}\ and $\Delta \alpha_{\rm ox}$) were determined as described above.
\citet{ssb+06} extended the work of \citet{sbs+05} to lower luminosities using data from the COMBO-17 survey in the E-CDF-S \citep{wmk+04} and with 46 low-$z$ quasars from the PG sample. As \citet{bl05} provide the \ion{C}{4}\ measurements for the PG sample, we were able to include the objects from the PG sample in our analysis; however, these are not matched to the SDSS spectral database and thus are not tabulated in Table~\ref{tab:tab1}. Table~2 from \citet{ssb+06} provided values for $L_{\rm 2500}$ and $\alpha_{\rm ox}$; no information on $\Gamma$ is available.
No SDSS \ion{C}{4}\ data exist for the COMBO-17 objects and they are not included here. In addition, while \citet{ssb+06} also utilize 19 $z>4$ quasars, many of these are outside of the SDSS footprint --- as such, they lack other parameters needed for our analysis and they are also excluded.
\citet{kbs+07} report X-ray properties for seven new high-redshift quasars and 167 archival quasars with $z<4$; of these, 157 match to our SDSS-DR7 spectral database. The values for $\alpha_{\rm ox}$, $\Gamma$, and $L_{2500}$ were extracted from their Table 4. The energy range for $\Gamma$ was 0.3--7.0\,keV; most sources had insufficient counts to fit simultaneously for intrinsic $N_{\rm H}$. Using these data, \citet{kbs+07} confirmed (at the $3.5\sigma$ level) the anti-correlation between $\Gamma$ and $\alpha_{\rm UV}$ seen by \citet{grh+05}.
\citet{jbs+07} examine 59 highly luminous quasars with $z>1.5$, including 21 objects with new {\em Chandra} observations; 32 objects are SDSS-DR3 quasars.
From their Table 4, we extracted $L_{2500}$, $\alpha_{\rm ox}$, and $\Delta \alpha_{\rm ox}$.
$\Gamma$ values are taken from their Table 3; the energy range was 0.3--8.0\,keV and the signal-to-noise ratio was not high enough to justify additional constraints on $N_H$ for individual sources.
For most sources no intrinsic absorption was allowed when fitting for $\Gamma$; however, sources with over 200 counts were simultaneously fit for $\Gamma$ and $N_H$.
\citet{gbs08} catalog {\em Chandra} and {\em XMM-Newton} observations of 536 SDSS-DR5 quasars covering $1.7<z<2.7$. Most of the observations are serendipitous; less than 9\% of the quasars were the targets of the X-ray observations. We obtained $L_{2500}$, $\alpha_{\rm ox}$ and $\Delta \alpha_{\rm ox}$\ from their Table 1. No X-ray spectral indices ($\Gamma$) are available. Using these data \citet{gbs08} argue that the spread of $\Delta \alpha_{\rm ox}$\ is mostly due to variability (see also \citealt{hgr+06}) and that the fraction of quasars that are intrinsically X-ray weak by a factor of 10 or more is $<2$\%.
\citet{gar+09} present {\em Chandra} data for 1135 spectroscopic and photometric SDSS-DR6 quasars analyzed by the Chandra Multiwavelength Project (ChaMP; \citealt[][]{gsc+04}); we include the fraction of these that have SDSS spectroscopic redshifts. From their Table~2 we extract values for $\Gamma$, $L_{2500}$, and $\alpha_{\rm ox}$\ (flipping the sign of $\alpha_{\rm ox}$\ to agree with our convention). For objects with more than 200 counts, $N_H$ was fit in addition to $\Gamma$; the energy range was 0.5--8.0\,keV.
Finally, \citet{yer09} tabulate serendipitous {\em XMM-Newton} observations of 792 SDSS-DR5 quasars. 473 of these quasars had sufficient S/N to determine $\Gamma$ (and $N_H$, if warranted). The spectral range for fitting was 0.5--10\,keV. $\Gamma$ and $\alpha_{\rm ox}$\ values are taken from their Table~2.
\subsection{New Data}
\label{sec:newdata}
While our goal is to understand the physics of quasars in the ensemble average, we often gain critical insight by first trying to understand the extrema. This is as true for \ion{C}{4}\ blueshifts as it is for any other quasar parameter. As outliers in any distribution are rare, we have expanded the sample of high-blueshift quasars with X-ray data by observing seven new radio-quiet sources in {\em Chandra's} Cycle 9. We also obtained additional time on three sources from \citet{grh+05} to improve the S/N for those sources. The new targets were chosen to be the highest blueshift quasars ($>1500{\rm \,km\,s}^{-1}$) from the SDSS-DR5 sample that met the following criteria. Redshift was limited to $1.6<z<2.2$ such that both \ion{C}{4}\ and \ion{Mg}{2} are seen. BALQSOs and RL quasars were excluded as discussed above.
Finally, to maximize our X-ray counts, we further limited the sample to $i<17.5$ and Galactic $E(B-V)<0.04$.
The targets are summarized in Table~\ref{tab:chandra}. Exposure times were set so as to achieve 100 total counts for each object. All targets were detected with between 63 and 140 counts, and fluxes, luminosities, and photon indices were derived from the soft- and hard-band photometry following the methods described in \citet{gbc+06}. Spectra with sufficient counts were analyzed with XSPEC \citep{XSPEC} using unbinned data and C-statistics to constrain $N_H$ and $\Gamma$. The X-ray properties of the sample are summarized in Table~\ref{tab:xcalc} and these objects are combined with the archival data above in our analysis.
Three sources in the \citet{grh+05} sample, J0051$-$0102, J0147$+$0001, and J0208$+$0022, were reobserved in our new program with the goal of increasing the counts in the combined spectra for each object. Between the observations, the sources showed significant flux variability of $-$52\%, $+$26\%, and $+$45\%, respectively. While the $\Gamma_{\rm HR}$ values between epochs were consistent within the 1$\sigma$ error bars, the low counts in the individual spectra could obscure significant spectral variability as well. Combined fitting of the new and old data together was inconclusive (in terms of the presence of absorption), likely because of the substantial variability between epochs.
\section{Analysis}
\label{sec:analysis}
The key result of \citet{Richards2011} and \citet{grh+05} was to provide a bridge between the properties of the broad emission lines in quasars and their spectral energy distributions. With this in mind, we further consider the X-ray properties of the heterogeneous data set described above. We specifically consider the distributions of $\alpha_{\rm ox}$, $\Delta \alpha_{\rm ox}$, and $\alpha_{\rm x}$\ in the \ion{C}{4}\ EQW-blueshift parameter space as defined by \citet{Richards2011}. If the differences seen across the range of \ion{C}{4}\ emission-line properties are indicative of a range of abilities to drive a strong accretion disk wind, and if radiation line driving plays a significant role in driving such a wind, then we might expect to see differences in X-ray properties of quasars across the \ion{C}{4}\ EQW-blueshift parameter space. Ultimately, our hope is that this work is a first step toward improving the effective resolution of quasar SEDs in the ``unseen'' part of the continuum that covers nearly two decades in frequency between the optical and the X-ray.
\subsection{$\alpha_{\rm ox}$}
In a disk-wind picture, quasars with radiation-pressure dominated winds should be {\em intrinsically} X-ray weak relative to the optical/UV and are likely to display X-ray warm absorbers \citep{grh+05} that may {\em additionally} reduce the soft X-ray flux along our line of sight. Our working hypothesis is that an object's location in the \ion{C}{4}\ EQW-blueshift parameter space reflects its intrinsic ability to drive a strong accretion disk wind. Assuming that radiation line driving dominates the wind component, we expect to see a reduction in $\alpha_{\rm ox}$\ across the \ion{C}{4}\ EQW-blueshift parameter space (from top left to bottom right in Figure~\ref{fig:distribution}) as the hypothetical wind strength increases. See \citet[Figure~7]{lm04}, for evidence of this behavior in NLS1s.
Figure~\ref{fig:aoxhist} gives the range of $\alpha_{\rm ox}$\ values in the sample, while
Figure~\ref{fig:aox} shows how $\alpha_{\rm ox}$\ changes across the \ion{C}{4}\ EQW-blueshift parameter space using two different diagnostics. First we show the location of the individual quasars in our heterogeneous sample color-coded by $\alpha_{\rm ox}$\ (circles) as indicated by the color-bar on the right-hand side. The dashed lines indicate the divisions used in \citet{Richards2011} to create composite spectra as a function of \ion{C}{4}\ emission-line properties (see also Section~\ref{sec:seds}). There appears to be a general trend towards more negative $\alpha_{\rm ox}$\ values (objects relatively weaker in the X-ray) from the top-left to the lower-right. However, as there is considerable scatter in the diagram, we also bin the results to better discern the general trend. Specifically the colored squares indicate the median values of \ion{C}{4}\ EQW and blueshift after sorting the values of $\alpha_{\rm ox}$\ and grouping them into 10 bins.
Although the median values span a much smaller range in \ion{C}{4}\ properties than the raw sample, they clearly reveal a general trend towards X-ray weaker objects in the lower right-hand corner.
Further evidence for a trend in $\alpha_{\rm ox}$\ with \ion{C}{4}\ properties comes from performing a Student's $t$-test for different means \citep[e.g.,][]{ptv+92} using the objects in the upper-left and lower-right regions of the \ion{C}{4}\ parameter space in Figure~\ref{fig:aox}. We find that the mean values for $\alpha_{\rm ox}$\ in those bins are $-1.497$ and $-1.700$, respectively, with a standard deviation of $0.021$, yielding a $t$ value of $9.85$, indicating with very high significance that the two samples have different means.
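For reproducibility, the comparison can be sketched as follows (a minimal Python illustration; the two input arrays are placeholders for the $\alpha_{\rm ox}$\ values of objects in the upper-left and lower-right regions of Figure~\ref{fig:aox}, and the values shown are invented for the example):
\begin{verbatim}
import numpy as np
from scipy import stats

aox_ul = np.array([-1.48, -1.52, -1.45, -1.51])  # illustrative
aox_lr = np.array([-1.68, -1.73, -1.70, -1.71])  # illustrative

# two-sample Student's t-test for different means
# (equal-variance form, as in Press et al. 1992)
t, p = stats.ttest_ind(aox_ul, aox_lr)

# median trend: sort by alpha_ox, split into 10 bins, and take
# the median C IV EQW and blueshift in each bin (the colored
# squares plotted in the figures)
def binned_medians(aox, eqw, shift, nbins=10):
    order = np.argsort(aox)
    return [(np.median(eqw[i]), np.median(shift[i]))
            for i in np.array_split(order, nbins)]
\end{verbatim}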
This trend of decreasing relative X-ray strength as we move from the upper-left (large EQW, small blueshift) to the lower-right
(small EQW, large blueshift) in Figure~\ref{fig:aox} is consistent with the expectations of a model of the BELR where a radiation line driven wind is influencing the properties of \ion{C}{4}. When the X-ray flux is high, a strong radiation line driven wind cannot form (though perhaps there is still an MHD-driven wind; \citealt{Proga03}) and the ionizing continuum photons can create a large population of triply ionized carbon atoms in the outer part of the accretion disk. If instead, the X-ray flux is weaker, a strong radiation line driven wind can form, shielding the outer accretion disk from an already weaker X-ray flux. In this case the \ion{C}{4}\ line is formed mostly in the wind itself and may be blueshifted due to the outflowing nature of the source and lower in equivalent width due to the overall reduction in ionizing photons \citep{Leighly04}. It is less clear what is happening in the lower-left corner (small EQW, and small blueshifts), but it would appear that those objects have more heterogeneous $\alpha_{\rm ox}$\ values than the objects in the upper-left or lower-right corners. Third parameters, such as covering fraction, orientation, absorption, and/or variability effects could explain objects in the lower left-hand corner (\citealt[e.g.,][]{hc10}; Bowler, Allen, \& Hewett, in preparation). As noted in \citet{Richards2011}, objects with both large \ion{C}{4}\ blueshift and large \ion{C}{4}\ EQW do not seem to exist and this is unlikely to be a selection effect.
Our results are consistent with those of \citet{wvb+09}, who performed a multivariate regression of EQW against both $l_{\rm uv}$ and $\alpha_{\rm ox}$. They find that \ion{C}{4}\ EQW correlates with $\alpha_{\rm ox}$\ such that harder spectra have stronger \ion{C}{4}. Our work further separates low EQW objects along another dimension. While there is no degeneracy for large EQW quasars, those with smaller \ion{C}{4}\ EQW values can have a range of \ion{C}{4}\ blueshifts. Since $\alpha_{\rm ox}$\ is also correlated with \ion{C}{4}\ blueshift (in a way that is not trivially related to the \ion{C}{4}\ EQW), it is important to explore trends in $\alpha_{\rm ox}$\ in this 2-D parameter space.
\subsection{$\Delta \alpha_{\rm ox}$}
As it is well known that $\alpha_{\rm ox}$\ is correlated with UV luminosity \citep[e.g.,][]{at82,jbs+07}, the above trends with $\alpha_{\rm ox}$\ in the previous section might instead be ascribed to luminosity. Indeed, we find that UV luminosity is increasing from the upper-left to the lower-right in Figure~\ref{fig:distribution}. As such, it has become common to reference $\alpha_{\rm ox}$\ values to the mean $\alpha_{\rm ox}$\ values observed for objects with the same UV luminosity according to $\Delta \alpha_{\rm ox} \equiv \alpha_{\rm ox} - \alpha_{\rm ox}(L_{2500})$. Here we adopt the expected value of $\alpha_{\rm ox}$\ from \citet{jbs+07}, specifically $\alpha_{\rm ox}(L_{2500}) = (-0.140 \pm 0.007) \times \log(L_{2500}) + (2.705 \pm 0.212)$. $\Delta \alpha_{\rm ox}$\ indicates the relative amount of X-ray emission, whereby negative values of $\Delta \alpha_{\rm ox}$\ indicate a relative deficit of X-rays, while positive values indicate an X-ray surplus. Figure~\ref{fig:daoxhist} shows the distribution of $\Delta \alpha_{\rm ox}$\ values in our sample.
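In code form (a sketch using only the \citet{jbs+07} coefficients quoted above; the worked numbers at the end are our own illustration):
\begin{verbatim}
def alpha_ox_expected(log_L2500):
    # mean alpha_ox at a given UV luminosity (Just et al. 2007)
    return -0.140 * log_L2500 + 2.705

def delta_alpha_ox(aox, log_L2500):
    # negative: X-ray weak relative to quasars of the same
    # L_2500; positive: a relative X-ray surplus
    return aox - alpha_ox_expected(log_L2500)

# e.g., at log L_2500 = 31.0 the expected alpha_ox is -1.635,
# so a measured alpha_ox = -1.80 gives delta_alpha_ox ~ -0.17
print(round(delta_alpha_ox(-1.80, 31.0), 3))  # -0.165
\end{verbatim}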
By using $\Delta \alpha_{\rm ox}$\ instead of $\alpha_{\rm ox}$, luminosity is marginalized and any remaining SED effects must instead be due to the {\em shape} of the SED rather than its absolute value. It is therefore significant that Figure~\ref{fig:daox} shows the same general trend as does Figure~\ref{fig:aox}. While taking out the luminosity dependence has apparently reduced the trend for individual objects, the median values of \ion{C}{4}\ EQW and blueshift (using 10 bins sorted in $\Delta \alpha_{\rm ox}$) again show a significant trend from more X-ray luminous objects in the upper left-hand corner to relatively weaker X-ray sources in the lower right-hand corner.
\citet{gbs08} similarly find that low EQW objects and large blueshift objects have more negative values of $\Delta \alpha_{\rm ox}$\ (their Figures 8 and 10), although it should be noted that half of our sample comes from their paper, so our results are not independent.
As with $\alpha_{\rm ox}$, a Student's $t$-test confirms that the mean values of $\Delta \alpha_{\rm ox}$\ in the upper-left and lower-right parts of the \ion{C}{4}\ distribution in Figure~\ref{fig:daox} are significantly different. We find respective mean values of $0.103$ and $-0.030$ with a standard deviation of $0.021$, yielding $t=6.55$ with high significance.
Unfortunately our understanding of the X-ray SED data is in roughly the same state as our understanding of the optical/UV SED was in the 1990s. \citet{fhf+91} were able to establish that the optical/UV spectral index was $\sim-0.4$ with an error of $\sigma_{\alpha}\sim0.5$; however, it was not until \citet{rhv+03} that the photometric errors were small enough that the $\alpha_{\rm uv}$ distribution could be shown to have an intrinsic spread rather than a single characteristic value smeared by measurement errors. In the case of the X-ray, variability likely plays more of a role than measurement errors, especially since both the optical and X-ray measurements are subject to changes over time and are non-simultaneous. Indeed, the majority of the width in the $\Delta \alpha_{\rm ox}$\ distribution can be attributed to variability \citep{hgr+06,gbs08}. The width of the $\Delta \alpha_{\rm ox}$\ distribution notwithstanding, there does appear to be a trend towards relatively weaker X-ray sources in objects with large \ion{C}{4}\ blueshifts.
\subsection{$\Gamma$}
While the $\alpha_{\rm ox}$\ and $\Delta \alpha_{\rm ox}$\ trends with \ion{C}{4}\ EQW and blueshift are expected in a radiation line driven wind model, we have less intuition regarding how the X-ray spectral indices ($\alpha_{\rm x} = 1- \Gamma$) should behave. At low redshift, where X-ray data sample the soft X-ray part of the spectrum, it is observed that $\Gamma_{\rm soft}$ correlates with the strength of \ion{Fe}{2} (relative to H$\beta$) and anti-correlates with the FWHM of H$\beta$ \citep{szm+00}; X-ray softer objects thus have stronger \ion{Fe}{2} and narrower H$\beta$ \citep{lfe+97}. In the context of black hole binaries, such soft-spectrum objects (including NLS1s; \citealt{bbf96}) are thought to have high accretion rates \citep{pdo95}.
Unfortunately there are relatively few objects with spectral coverage of both the H$\beta$ and \ion{C}{4}\ line, so it is hard to know how these trends would propagate to higher redshift objects. However, using {\em Hubble Space Telescope} data on \ion{C}{4}\ for low-$z$ quasars, \citet{sbm+07} have shown that large blueshift objects tend towards softer $\Gamma_{\rm soft}$. The $\Gamma$ values tabulated here (Fig.~\ref{fig:gammahist}) generally refer to the harder part of the X-ray spectrum from 2\,keV to 10\,keV, so it does not necessarily follow that large blueshift objects should tend towards softer $\Gamma_{\rm hard}$; however, that does seem to be the case. If higher UV luminosity results in more Compton cooling of the corona, then one indeed might expect $\Gamma_{\rm hard}$ to follow $\Gamma_{\rm soft}$ \citep{pdo95,lfe+97} and that quasars with large blueshift might have high accretion rates.
Figure~\ref{fig:gamma} shows a weak trend of increasing (softer) $\Gamma$ with increasing blueshift and decreasing \ion{C}{4}\ EQW. As with the $\alpha_{\rm ox}$\ and $\Delta \alpha_{\rm ox}$\ figures the median \ion{C}{4}\ properties as a function of $\Gamma$ show a clearer trend than can be discerned from looking at the distribution of individual objects.
We find a Student's $t$ of $-3.221$ with significance of $0.002$, showing that there is a clear difference when comparing the means of objects in the top-left ($\Gamma=1.829$) and the lower-right ($\Gamma=2.052$) parts of Figure~\ref{fig:gamma}; the standard deviation is $0.069$. Clearly further work is needed to better understand whether the harder $\Gamma$ values for strong \ion{C}{4}\ quasars are indicative of the intrinsic X-ray spectrum.
\section{Discussion}
\label{sec:discussion}
\subsection{Empirical SEDs}
\label{sec:seds}
The values of $\alpha_{\rm ox}$, $\Delta \alpha_{\rm ox}$, and $\Gamma$ are perhaps more meaningful within the context of the overall SED. As such, in Figure~\ref{fig:sed} we create typical SEDs spanning the \ion{C}{4}\ EQW-blueshift bins used by \citet{Richards2011}; the \ion{C}{4}\ properties of each SED are shown in Figure~\ref{fig:sedpos}.
For this analysis, we imposed a $z<2.2$ limit such that both \ion{C}{4}\ and \ion{Mg}{2} are observed and a UV spectral index between the continua underlying those lines could be determined empirically. The SEDs were created using the {\em median} values of L$_{1550}$, $\alpha_{\rm opt}$, $\alpha_{\rm ox}$ and $\Gamma$ for each sub-sample, where the first two quantities were taken from our own work and the last two quantities were taken from the literature as described above (with the exception of our new {\em Chandra} data). To make the SEDs, the median value of $\log(L_{1550}$) was used with the median $\alpha_{\rm opt}$ to calculate a value for log(L$_{\nu}$) at 2500\,\AA. In turn, log(L$_{2500}$) was used with $\alpha_{\rm ox}$ to determine a value for log(L$_{2\, \rm keV}$). Then $\alpha_{\rm x}$ was used to plot a point at $L_{10\,\rm keV}$. In addition to our optical and X-ray results, we have included the UV results of \citet{Scott2004} who found that the UV spectral index over $\sim$650--1100\,\AA\ is a function of luminosity. Specifically, we have used the results of Table~4 and Figure~18 from \citet{Scott2004} and our value of $L_{1550}$ (as a proxy for $L_{1100}$ used by \citealt{Scott2004}) to estimate the spectral slope over $500<\lambda ({\rm \AA})<1216$ [$15.78>\log({\rm freq [Hz]})>15.39$]. The luminosity dependences of $\alpha_{\rm UV}$ and $\alpha_{\rm ox}$\ appear to be fairly comparable as the break at 500\,\AA\ is difficult to discern, which is consistent with the findings of \citet{lfe+97}.
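The anchor-point construction described above can be summarized in a short sketch (Python; the constants and function names are ours, and the SED is treated as piecewise power laws in $L_\nu$):
\begin{verbatim}
import math

C_A   = 2.998e18   # speed of light in Angstrom/s
H_KEV = 4.136e-18  # Planck constant in keV s

def log_nu_A(lam_A):    return math.log10(C_A / lam_A)
def log_nu_keV(E_keV):  return math.log10(E_keV / H_KEV)

def sed_anchors(logL1550, a_opt, a_ox, gamma):
    a_x = 1.0 - gamma                    # Gamma = 1 - alpha_x
    logL2500  = logL1550 + a_opt * (log_nu_A(2500)
                                    - log_nu_A(1550))
    logL2keV  = logL2500 + a_ox / 0.384  # defn. of alpha_ox
    logL10keV = logL2keV + a_x * (log_nu_keV(10)
                                  - log_nu_keV(2))
    return {'1550A': logL1550, '2500A': logL2500,
            '2keV': logL2keV, '10keV': logL10keV}
\end{verbatim}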
We emphasize that there is little information on the shape of the SED between 500\,\AA\ and $\sim0.2$\,keV for luminous quasars; $\alpha_{\rm ox}$\ only parametrizes the UV to X-ray flux ratio, not the actual shape of the SED, which has implications for bolometric corrections; see Section~\ref{sec:bc}.
Interestingly, those objects that are stronger at 2\,keV relative to 2500\,\AA\ (i.e., have a harder ionizing spectrum) are also apparently harder (i.e., flatter) over 2--10\,keV.
Although the differences between the SEDs are subtle, the effects are clearly systematic. Relative to their 2500\,\AA\ luminosities, the large \ion{C}{4}\ EQW objects have ``harder'' SEDs than the large \ion{C}{4}\ blueshift objects. We illustrate this more clearly in the inset of Figure~\ref{fig:sed} by normalizing all of the SEDs to the same 2500\,\AA\ luminosity. It is important to realize that while the $\alpha_{\rm ox}$\ distribution for these SDSS quasars comes nowhere near spanning the extrema characterized by the weak-lined quasar PHL 1811 ($\alpha_{\rm ox}=-2.3$; \citealt{lhj+07}) and the narrow-line Seyfert 1 galaxy RE 1034+39 ($\alpha_{\rm ox}=-1.2$; \citealt{clb06}), nevertheless, for the same UV luminosity, a change in $\alpha_{\rm ox}$\ of just $0.2$ corresponds to a change in flux at 2\,keV of a factor of $\sim$3. This difference is simply a reflection of the large lever arm between 2500\,\AA\ and 2\,keV (2.6 decades in frequency). As such, it is not unreasonable to expect significant differences in the ability of quasars to drive a wind through radiation line pressure as we move across the \ion{C}{4}\ EQW-blueshift parameter space.
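To make the lever-arm arithmetic explicit (this simply inverts the definition of $\alpha_{\rm ox}$\ quoted in Section~\ref{sec:archive}):
\begin{displaymath}
\Delta \log f_{2\,{\rm keV}} = \frac{\Delta \alpha_{\rm ox}}{0.384} = \frac{0.2}{0.384} \simeq 0.52\;{\rm dex}, \qquad 10^{0.52} \simeq 3.3 ,
\end{displaymath}
at fixed $L_{2500}$.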
\subsection{Comparison with Model SEDs}
Since there are few {\em empirical} constraints on the 500\,\AA\ to 0.2\,keV part of the SED, it is useful to compare our empirical SEDs to some example model SEDs. We specifically consider model SEDs similar to those described in \citet{clb06} in accordance with the AGN spectrum in the CLOUDY package \citep{fer03}. The optical/UV portion of the SED is modeled as a power law with exponential cutoffs at high and low energies as parametrized by a cutoff temperature (the temperature of the inner edge of the accretion disk for the high energy cutoff). The optical spectral index is taken to be $\alpha_{\rm opt}=-0.33$ and the X-ray spectral index is fixed to $\alpha_{\rm x}=-1\; (\Gamma = 2)$. The optical/UV part of the spectrum is related to the X-ray part of the spectrum by the well-known $L_{\rm UV}$--$\alpha_{\rm ox}$\ relationship \citep[e.g.,][]{jbs+07}. Note that, in this model, there is a degeneracy between $L_{\rm disk}$, $L_{\rm UV}$, $L_{\rm X}$, and $T_{\rm cut}$ that means that the SEDs are uniquely described by a single parameter that can be characterized by $\alpha_{\rm ox}$.
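As a concrete toy version of this parametrization (a minimal Python sketch in the spirit of the CLOUDY AGN continuum; the variable names and normalization convention are ours, not the actual CLOUDY implementation):
\begin{verbatim}
import numpy as np

E_2500A = 4.96    # eV, photon energy at 2500 A
E_2KEV  = 2000.0  # eV

def model_sed(E_eV, T_cut_eV, aox, a_opt=-0.33, a_x=-1.0,
              E_ir_eV=0.01):
    # disk: power law with exponential cutoffs at the
    # inner-disk (high-energy) and infrared (low-energy) ends
    def disk(E):
        return (E**a_opt * np.exp(-E / T_cut_eV)
                * np.exp(-E_ir_eV / E))
    E = np.asarray(E_eV, dtype=float)
    # X-ray power law, normalized so the 2 keV flux obeys
    # alpha_ox = 0.384 log10(f_2keV / f_2500)
    f_2keV = disk(E_2500A) * 10**(aox / 0.384)
    return disk(E) + f_2keV * (E / E_2KEV)**a_x
\end{verbatim}
In this toy form the disk and X-ray components are tied together by $\alpha_{\rm ox}$\ alone, mirroring the single-parameter degeneracy noted above.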
In Figure~\ref{fig:bb} we show a model SED (solid black line) that best represents the average quasar in our sample. Two examples that are meant to bracket the range of SEDs spanned by our \ion{C}{4}\ sample (in terms of $L_{\rm UV}$ and $\alpha_{\rm ox}$) are also shown.
The two extreme SEDs shown have cutoff temperatures of 20\,eV and 50\,eV and have, respectively, $\alpha_{\rm ox} = -1.74$ and $\alpha_{\rm ox} = -1.5$, and
$l_{\rm uv}$ of 31.8 and 30.0.
As noted by \citet{lfe+97}, such model SEDs with very ``hot'' accretion disks are apparently inconsistent with the UV part of empirical SEDs \citep[e.g.,][]{ewm+94,rls+06}. Indeed, the extreme models shown in Figure~\ref{fig:bb} have spectral indices of $-1.24$ and $-0.69$ at 800\,\AA, respectively, which are within the range observed by \citet{Scott2004}, but are too blue/hard for these luminosities as can be seen by comparison with our empirical SEDs (dotted lines). However, there is evidence that the SED the emission-line gas sees may not be the same as the SED that we see \citep{kfb97}, and our work herein and in \citet{Richards2011} might be taken as further evidence of such a situation. If the observer and the emitting gas {\em do} see the same SED, then (assuming this model parametrization is accurate) our empirical data would suggest that a lower cutoff temperature might be more appropriate for the luminous quasars in our sample. We are currently pursuing a more detailed characterization of the near-UV part of the spectrum for SDSS quasars (Krawczyk et al.\ 2011, in preparation).
Aside from the near-UV slope mismatch, the 20\,eV peaked spectrum is representative of the most luminous quasars in our sample (in terms of $\alpha_{\rm ox}$). The relatively soft spectrum (X-ray weak relative to UV) would be conducive to driving a strong wind through radiation line driving, yielding a large \ion{C}{4}\ blueshift. On the other hand, the 50\,eV peaked spectrum is more representative of the less luminous quasars in our sample. For such quasars, the relatively hard spectrum is more likely to overionize the accretion disk atmosphere and has less UV flux, both of which would tend to inhibit a radiation line driven wind. This harder SED is one that we associate with quasars that have strong \ion{C}{4}\ emission that is dominated by the disk component of the BELR.
Although there is a lack of data to characterize the shape of the SED over much of the crucial ionizing part of the spectrum, our model SEDs represent extrema characterizing the ionizing SED for a given $\alpha_{\rm ox}$\ (cf.\ the simple power-law model, dotted lines in Figure~\ref{fig:bb}) in that they have as much far-UV flux as one could imagine. Thus, these extrema are particularly interesting to consider in terms of bolometric corrections; see Section~\ref{sec:bc}. An intermediate model might be one that has a cooler accretion disk with a Comptonized extension to soft X-ray energies \citep[e.g.,][]{lfe+97,gd04}, with another, hotter Comptonized region producing the hard X-ray spectrum. For example, even if the accretion disk is not as hot as depicted by the models in Figure~\ref{fig:bb}, \citet{kfb97} suggest that quasars with strong \ion{He}{2} (which tend to have strong \ion{C}{4}\ at the systemic redshift; \citealt{Richards2011}) would require a factor of a few higher flux at $\sim50$\,eV than most empirical SEDs (including ours) would suggest.
Comparison of the shape of the model SEDs in Figure~\ref{fig:bb} is relevant not only for bolometric corrections, but may also relate to the origin
of soft X-ray excesses \citep[e.g.,][]{abc+85,wf93,gd04,cfg+06,dc07,sd07}. Specifically, since the UV through X-ray SEDs of radio-quiet quasars appear to change significantly from one corner of \ion{C}{4}\ parameter space to the other, it is interesting to ask where soft X-ray excess objects lie in the \ion{C}{4}\ EQW-blueshift parameter space. Unfortunately the bandpasses of {\em Chandra} and {\em XMM-Newton} are such that it is difficult to identify soft X-ray excesses in high-redshift quasars. However, the 21 low-redshift PG quasars studied by \citet{pro+04} have UV measurements of \ion{C}{4}\ from \citet{bl04}, \citet{bl05}, and \citet{sbm+07} and can be placed in our \ion{C}{4}\ parameter space.
We find that the \citet{pro+04} objects are generally in the upper-left of the \ion{C}{4}\ parameter space (i.e., they tend to have large EQW and small blueshift, which, ironically, suggests that soft excess objects are actually X-ray hard [relative to the UV]). Although the \citet{pro+04} objects are much less luminous than the average quasars in our sample ($l_{\rm uv}\sim29.7$), they are actually well matched to the lowest luminosity (and generally hard-spectrum) objects of our sample (being bright objects at low redshift as opposed to faint objects at high redshift).
As such, it would be valuable to explore the possibility of correlations between the \ion{C}{4}\ properties of quasars and the presence of a soft X-ray excess.
We specifically predict that the average high-luminosity, high-$z$ quasar does not have an X-ray soft excess based on the relative locations in the 2D \ion{C}{4}\ parameter space of low-$z$ soft excess objects.
Lastly, we note that
a strict linear interpolation between 2500\,\AA\ and 2\,keV would predict a {\em significant} soft X-ray excess in all of our sources (compare the empirical and model SEDs in Figure~\ref{fig:bb} for energies higher than 0.2\,keV). A lack of such a strong soft X-ray excess in luminous quasars would mean that while $\alpha_{\rm ox}$\ may accurately indicate the 2\,keV flux density in quasars, it is a poor characterization of the shape of the true ionizing SED over $\sim10$--1000\,eV and we reiterate that the models shown in Figure~\ref{fig:bb} essentially represent the opposite extrema. Here we close by merely pointing out that the apparent hard-spectrum nature of disk-dominated BELRs and soft-spectrum nature of wind-dominated BELRs suggests that these two extrema may have different soft X-ray excess properties and that the presence or lack of a soft X-ray excess might provide a way to reverse engineer the detailed shape of the ionizing SED (which is important to radiation line driving).
\subsection{Bolometric Corrections}
\label{sec:bc}
AGN luminosities are often derived from observations in one, or at best a few, narrow bandpasses, whereas one wants the luminosity integrated
over the full SED (or perhaps just the accretion disk's contribution; \citealt{mrg+04}), i.e., the bolometric luminosity. For example, the bolometric luminosity is the relevant quantity needed to estimate the accretion rate through $L=\eta\dot{M}c^2$.
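As an illustrative round-number example (with a commonly adopted radiative efficiency of $\eta=0.1$; the numbers here are ours, for scale only):
\begin{displaymath}
\dot{M} = \frac{L_{\rm bol}}{\eta c^{2}} \approx \frac{10^{46}\,{\rm ergs\,s^{-1}}}{0.1\,(3\times10^{10}\,{\rm cm\,s^{-1}})^{2}} \approx 1.1\times10^{26}\,{\rm g\,s^{-1}} \approx 1.8\,M_{\odot}\,{\rm yr^{-1}} ,
\end{displaymath}
so any systematic error in $L_{\rm bol}$ propagates directly into the inferred accretion rate.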
Considering our empirical SEDs and the range of $\alpha_{\rm ox}$\ values in our sample, we can see from Figure~\ref{fig:sed} that while the difference in SEDs may not appear to be very large, it is clearly {\em systematic}. If we assume the same bolometric correction for all quasars, regardless of their SEDs, then there will be a systematic error in $L_{\rm bol}$ as a function of their location in the \ion{C}{4}\ EQW-blueshift parameter space. Indeed, a change of 0.2 in $\alpha_{\rm ox}$\ corresponds to a 35\%
change in the integrated luminosity between 2500\,\AA\ and 2\,keV (assuming that the SED is described only by $\alpha_{\rm ox}$).
If there are additional features in the SED at far-UV wavelengths \citep[e.g.,][]{Scott2004} or if the shape of the EUV spectrum is more like the models in Figure~\ref{fig:bb} than our power-law fits, then that is an additional source of systematic error in the bolometric correction that arises through the adoption of a single mean SED (or even one parameterized by the $L_{\rm UV}$--$\alpha_{\rm ox}$ relation).
Indeed, \citet{rls+06} found that there is no single bolometric correction, but rather a roughly factor of four range around the mean value. While \citet{rls+06} were unable to determine which classes of quasars should use lower/higher bolometric corrections, our work here would suggest that bolometric corrections may be systematically underestimated for large EQW-small blueshift quasars (i.e., those with disk-dominated BELRs) and systematically overestimated for small EQW-large blueshift quasars (i.e., those with wind-dominated BELRs). It is beyond the scope of this paper to specifically parameterize the bolometric correction as a function of the indirectly measured ionizing flux in the EUV; however, we hope to pursue that in a future publication (Krawczyk et al.\ 2011, in preparation). Given the trends with UV luminosity in the EQW-blueshift parameter space, we might expect that more accurate bolometric corrections would result in a narrower distribution of $L_{\rm bol}$. As $L_{\rm bol}$ is generally used to estimate the Eddington ratio (i.e., mass weighted accretion rate), $L_{\rm bol}/M_{\rm virial}$, those values may also have systematic errors.
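The sensitivity quoted above can be checked numerically (a Python sketch assuming, as stated, a single power law between the two anchor frequencies; the baseline $\alpha_{\rm ox}$\ values are illustrative):
\begin{verbatim}
NU_2500A = 1.199e15  # Hz
NU_2KEV  = 4.836e17  # Hz, 2.6 decades above 2500 A

def L_int(aox, logL2500=31.0):
    # integral of L_nu = L_2500 (nu/nu_2500)^aox over
    # [nu_2500, nu_2keV], in ergs/s
    r = NU_2KEV / NU_2500A
    return (10**logL2500 * NU_2500A
            * (r**(aox + 1) - 1) / (aox + 1))

# fractional change for a 0.2 shift in alpha_ox at fixed
# L_2500; ~26% for this baseline, rising toward ~35% for
# harder baseline slopes
print(1 - L_int(-1.7) / L_int(-1.5))  # ~0.26
\end{verbatim}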
\subsection{Nomenclature}
Given our findings on the systematic changes in SED over \ion{C}{4}\ EQW-blueshift parameter space, we consider the issue of nomenclature for these differences.
In \citet{Richards2011} we referred to the RQ quasars with small blueshifts and large EQWs as ``RL-like'' because RL quasars are predominantly found in that part of \ion{C}{4}\ parameter space.
Based on our results above, we suggest a simpler, yet more descriptive nomenclature. As RQ quasars in the RL part of parameter space have harder (more ionizing) spectra, such objects could be broadly categorized as hard-spectrum quasars (HSQs). The high-blueshift, low-EQW quasars on the other hand have softer (less ionizing) spectra and could be considered to be soft-spectrum quasars (SSQs). These distinctions are not dissimilar from the Population A/B terminology used by \citet{szm+00}, where the most extreme Population A objects would correspond roughly to SSQ and Population B to the HSQ objects. The hard/soft terminology simply has the advantage of being physically motivated.
It is less clear what is happening in the low-EQW, low-blueshift quasars (occupying the other part of the ``Population A'' parameter space). Weak lines
could indicate
a relatively weak ionizing continuum, but the lack of a strong blueshift might indicate that the ionizing continuum must be intrinsically weak rather than $\alpha_{\rm ox}$\ being soft. Another possibility is that $\alpha_{\rm ox}$\ could be soft, but the UV flux is insufficient to drive a wind. Further {\em Chandra} and/or {\em XMM-Newton} observations are needed to better understand the differences between sources with weak \ion{C}{4}\ that is systemic and weak \ion{C}{4}\ that is highly blueshifted.
If the winds in high-blueshift, low-EQW quasars (i.e., SSQs) are predominantly radiatively driven, their soft-spectrum nature cannot be generally due
to absorption along our line of sight, despite the fact that these objects appear to be the parent sample of BALQSOs. That is because the SED needs to be {\em intrinsically} soft in order to drive a strong radiation pressure driven wind. (MHD driving could also be at work, and may be a source of scatter in our diagrams.) Rather, it must be that for BALQSOs, there is additional absorption of an SED that already intrinsically lies at the soft (X-ray weaker) extrema of the range of normal SEDs spanned by quasars
\citep[e.g.,][]{gm96,gbc+02}.
\section{Conclusions}
\label{sec:conclusions}
Much as the FWHM of H$\beta$, strength of \ion{Fe}{2} (relative to H$\beta$), and soft X-ray properties can be used to divide low-$z$ quasars into extrema in the ``Eigenvector 1'' context \citep{bg92,lfe+97,sbm+07}, so can the properties of the \ion{C}{4}\ emission line at high redshift. In \citet{Richards2011}, we showed that the extrema of large (small) \ion{C}{4}\ EQW and small (large) \ion{C}{4}\ blueshift can be attributed to different components of the BELR: a disk and a wind, respectively. Here, we consider the UV through X-ray properties of radio-quiet quasars in the \ion{C}{4}\ EQW-blueshift parameter space and argue that these extrema are likely due to a hard ionizing spectrum in the ``disk''-dominated systems and a soft ionizing spectrum in the ``wind''-dominated systems (Figures~\ref{fig:aox}, \ref{fig:daox}, and \ref{fig:sed}). Indeed these extrema could instead be considered as hard-spectrum quasars (HSQ) and soft-spectrum (SSQ) quasars. More work is needed to understand the nature of quasars with weak \ion{C}{4}\ that appears at the systemic redshift and the relationship between the optical to X-ray flux ratio ($\alpha_{\rm ox}$) and both the soft and hard X-ray spectral indices. We argue that differences in the \ion{C}{4}\ properties of quasars could be used to better trace the structure of the SED in the two decades of frequency space between UV and X-ray measurements, allowing for more accurate bolometric corrections for individual quasars (Figures~\ref{fig:sed} and \ref{fig:bb}).
The \ion{C}{4}\ emission line may further allow a quantification of the differences in the ``unseen'' part of the ionizing SED (e.g., an excess in the helium continuum above the nominal level assumed from $\alpha_{\rm ox}$) and may help identify which objects are likely to show a soft X-ray excess feature.
\acknowledgments
Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number G08-9103X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. GTR acknowledges support from an Alfred P. Sloan Research Fellowship and NASA grant 07-ADP07-0035. SCG thanks the National Science and Engineering Research Council of Canada and an Ontario Early Research Award for support. KML acknowledges support by NSF AST-0707703. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, Cambridge University, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\dataset [ADS/Sa.CXO#obs/9223] {Chandra ObsId 9223}
\dataset [ADS/Sa.CXO#obs/9224] {Chandra ObsId 9224}
\dataset [ADS/Sa.CXO#obs/9225] {Chandra ObsId 9225}
\dataset [ADS/Sa.CXO#obs/9226] {Chandra ObsId 9226}
\dataset [ADS/Sa.CXO#obs/9227] {Chandra ObsId 9227}
\dataset [ADS/Sa.CXO#obs/9228] {Chandra ObsId 9228}
\dataset [ADS/Sa.CXO#obs/9229] {Chandra ObsId 9229}
\dataset [ADS/Sa.CXO#obs/9230] {Chandra ObsId 9230}
\dataset [ADS/Sa.CXO#obs/9322] {Chandra ObsId 9322}
\dataset [ADS/Sa.CXO#obs/9323] {Chandra ObsId 9323}
\clearpage
\input{ms.bbl}
\clearpage
\begin{figure}[h!]
\epsscale{0.75}
\plotone{f1.eps}
\caption{Distribution of quasars in the \ion{C}{4} blueshift vs.\ log(EQW) parameter space. There are 409 objects in all; color-coding is by the source reference given in the legend.}
\label{fig:distribution}
\end{figure}
\begin{figure}[h!]
\epsscale{0.75}
\plotone{f2.eps}
\caption{As per Fig.~\ref{fig:distribution}, but limited to sources with measured values of $\Gamma\equiv 1-\alpha_{\rm x}$. There are 164 objects in all.}
\label{fig:gamdistrb}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f3.eps}
\caption{Histogram of $\alpha_{\rm ox}$\ values in the sample and used in Figure~\ref{fig:aox}. The mean and standard deviation are $-1.619\pm0.143$.
}
\label{fig:aoxhist}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f4.eps}
\caption{\ion{C}{4} blueshift vs.\ log(EQW) colored by $\alpha_{\rm ox}$\ values (transparent filled circles) as indicated by the color bar on the right. Also shown are the median values of \ion{C}{4} blueshift and log(EQW$_{CIV}$) after sorting the $\alpha_{\rm ox}$\ values and dividing them into 10 bins (colored squares). Objects with large \ion{C}{4}\ blueshift and small EQW are seen to have relatively X-ray weak SEDs. Dashed lines refer to the eight regions used in Section~\ref{sec:seds} to make composite SEDs.
}
\label{fig:aox}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f5.eps}
\caption{Histogram of $\Delta \alpha_{\rm ox}$\ values in the sample used in Figure~\ref{fig:daox}. The mean and standard deviation are $0.008\pm0.126$.
}
\label{fig:daoxhist}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f6.eps}
\caption{\ion{C}{4} blueshift vs.\ log(EQW) colored by $\Delta \alpha_{\rm ox}$\ values (transparent filled circles) as indicated by the color bar on the right. Also shown are the median values of \ion{C}{4} blueshift and log(EQW$_{CIV}$) after sorting the $\Delta \alpha_{\rm ox}$\ values and dividing them into 10 bins (colored squares). Thus, even accounting for luminosity effects, objects with large \ion{C}{4}\ blueshift and small EQW are seen to have relatively X-ray weak SEDs. Dashed lines refer to the eight regions used in Section~\ref{sec:seds} to make composite SEDs.
}
\label{fig:daox}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f7.eps}
\caption{Histogram of $\Gamma$ values in the sample used in Figure~\ref{fig:gamma}. The mean and standard deviation are $2.005\pm0.326$.
}
\label{fig:gammahist}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f8.eps}
\caption{
\ion{C}{4} blueshift vs.\ log(EQW) colored by $\Gamma$ values (transparent filled circles) as indicated by the color bar on the right. Also shown are the median values of \ion{C}{4} blueshift and log(EQW) after sorting the $\Gamma$ values and dividing them into 10 bins (colored squares). Dashed lines refer to the eight regions used in Section~\ref{sec:seds} to make composite SEDs.
}
\label{fig:gamma}
\end{figure}
\begin{figure}[h!]
\plotone{f9.eps}
\caption{SEDs for each of the eight \ion{C}{4}\ EQW-blueshift bins defined in Fig.~\ref{fig:sedpos}. The legend shows the bin number and plotting color each line corresponds to. We also tabulate the median values for $L_{1550 \rm \AA}$, $\alpha_{\rm uv}$, $\alpha_{\rm ox}$ and $\alpha_{\rm x}$, the calculated value of $L_{2500\,{\rm \AA}}$ and the number of objects in each bin. (Inset:) SEDs normalized to the same $L_{2500\,{\rm \AA}}$ to show the differences in ionizing flux.}
\label{fig:sed}
\end{figure}
\begin{figure}[h!]
\plotone{f10.eps}
\caption{Distribution of individual objects that make up the eight SEDs shown in Fig.~\ref{fig:sed}. The color of the objects in each bin match the color of the SED they correspond to in Fig.~\ref{fig:sed}.}
\label{fig:sedpos}
\end{figure}
\begin{figure}[h!]
\epsscale{0.7}
\plotone{f11.eps}
\caption{Modified black-body SEDs. The dark gray/black/light gray curves show SEDs with cutoff temperatures of 20/30/50\,eV. Their luminosities are normalized in a way that is consistent with the $L_{\rm UV}$--$\alpha_{\rm ox}$ relationship from \citet{jbs+07}. Note that the change in UV luminosity between the SEDs is greater than the change in X-ray luminosity. The gray curves represent the extrema in terms of $\alpha_{\rm ox}$, while the black curve represents the median. We show the SEDs from Figure~\ref{fig:sed} (dotted curves) to highlight the differences in the far-UV spectral slope between the empirical and model SEDs and to emphasize the possible range of spectral shapes in the EUV.
As these are composite SEDs, their range is smaller than the extrema indicated by the gray model SEDs. Vertical dashed lines indicate 2500\,\AA, 500\,\AA ($=24.8$\,eV), 0.2\,keV and 2\,keV.
}
\label{fig:bb}
\end{figure}
\clearpage
\begin{deluxetable}{ccccccccc}
\rotate
\tablewidth{0pt}
\tablecaption{Archival Data\label{tab:tab1}}
\tablehead{
\colhead{Reference} &
\colhead{N Match} &
\colhead{N BAL} &
\colhead{N RL} &
\colhead{N $z$ cut} &
\colhead{N other cut} &
\colhead{N Kept\tablenotemark{a}} &
\colhead{$z_{\rm em}$} &
\colhead{$l_{\rm uv}$}
}
\startdata
Chandra C9 & 10 & 0 & 1 & 0 & 0 & 6 & 1.65--1.90 & 31.48--31.68 \\
Gallagher '05 & 14 & 1 & 2 & 0 & 3 & 10 & 1.69--2.17 & 31.00--31.73 \\
Strateva '05 & 155 & 0 & 10 & 94 & 89 & 23 & 1.55--2.24 & 30.35--31.54 \\
Kelly '07 & 157 & 1 & 9 & 95 & 112 & 6 & 1.57--2.03 & 30.07--30.93 \\
Just '07 & 32 & 3 & 10 & 26 & 21 & 5 & 1.91--2.27 & 32.05--32.23 \\
Gibson '08 & 536 & 52 & 71 & 100 & 210 & 181 & 1.70--2.31 & 30.34--32.11 \\
Green '09 & 281 & 11 & 29 & 146 & 182 & 46 & 1.54--2.26 & 30.49--31.79 \\
Young '09 & 792 & 33 & 78 & 423 & 604 & 91 & 1.55--2.27 & 30.10--31.75 \\
\enddata
\tablenotetext{a}{The number kept is determined by removing all of the BALs, low $z$ objects, and RL (or unmeasured radio) objects, but also by removing any duplicates between samples and objects failing to meet other cuts.
Thus the number of objects kept can be significantly smaller than the number of objects matched.}
\end{deluxetable}
\begin{deluxetable}{lcccccrl}
\tablewidth{0pt}
\tablecaption{New large \ion{C}{4}\ blueshift quasars observed with {\em Chandra}.\label{tab:chandra}}
\tablehead{
\colhead{Name} &
\colhead{$z_{\rm em}$} &
\colhead{$i$} &
\colhead{$\log L_{2800{\rm\AA}}$} &
\colhead{CIV shift} &
\colhead{CIV EQW} &
\colhead{Exptime} &
\colhead{OBSID} \\
\colhead{(SDSS J)} &
\colhead{} &
\colhead{} &
\colhead{(ergs/s/Hz)} &
\colhead{(km/s)} &
\colhead{(\AA)} &
\colhead{(ks)} &
\colhead{}
}
\startdata
005102.42$-$010244.4 & 1.889 & 17.366 & 31.571 & 1597& 20.3 & 3.5 & 9224 \\
014812.23+000153.2 & 1.708 & 17.386 & 31.411 & 1590 & 15.2 & 10.5 & 9225 \\
020845.53+002236.0 & 1.898 & 16.723 & 31.854 & 1555 & 23.1 & 3.5 & 9223 \\
090007.14+321921.9 & 1.851 & 17.095 & 31.650 & 1955 & 21.6 & 11.0 & 9323 \\
100401.28+423123.0 & 1.666 & 16.733 & 31.725 & 2281 & 16.8 & 8.1 & 9322 \\
102907.06+651024.6 & 2.171 & 16.730 & 31.968 & 1560 & 17.2 & 10.6 & 9228 \\
115351.11+113649.2 & 1.681 & 17.262 & 31.463 & 2026 & 22.9 & 11.0 & 9230 \\
141949.39+060654.0 & 1.649 & 17.119 & 31.385 & 1941 & 23.5 & 9.9 & 9226 \\
150313.63+575151.6 & 1.721 & 17.075 & 31.593 & 1555 & 15.9 & 10.0 & 9227 \\
162622.06+295237.4 & 1.902 & 17.017 & 31.700 & 1501 & 20.6 & 10.9 & 9229
\enddata
\end{deluxetable}
\begin{deluxetable}{lcccccccr}
\tabletypesize{\footnotesize}
\rotate
\tablewidth{0pt}
\tablecaption{X-ray Properties
\label{tab:xcalc}
}
\tablehead{
\colhead{Name (SDSS J)} &
\colhead{$\Gamma_{\rm HR}$\tablenotemark{a}} &
\colhead{$\log(F_{\rm X})$\tablenotemark{b}} &
\colhead{$\log(f_{\rm 2 keV})$\tablenotemark{c}} &
\colhead{$\log(f_{\rm 2500})$\tablenotemark{c}} &
\colhead{$\log(L_{\rm 2500})$\tablenotemark{d}} &
\colhead{$\alpha_{\rm ox}$} &
\colhead{$\Delta \alpha_{\rm ox}$}
}
\startdata
005102.42$-$010244.4 & 2.75$^{+0.85}_{-0.62}$ & $-13.354\pm0.085$ & $-30.754\pm0.085$ & $-26.425$ & 31.510 & $-1.66$ & $-0.01$ \\
014812.23+000153.2 & 2.13$^{+0.32}_{-0.28}$ & $-13.340\pm0.052$ & $-30.980\pm0.052$ & $-26.426$ & 31.434 & $-1.75$ & $-0.09$ \\
020845.53+002236.0 & 1.62$^{+0.28}_{-0.26}$ & $-12.818\pm0.055$ & $-30.678\pm0.055$ & $-26.170$ & 31.768 & $-1.73$ & $-0.03$ \\
090007.14+321921.9 & 2.20$^{+0.25}_{-0.25}$ & $-13.229\pm0.043$ & $-30.818\pm0.043$ & $-26.286$ & 31.635 & $-1.74$ & $-0.05$ \\
100401.28+423123.0 & 1.65$^{+0.27}_{-0.27}$ & $-13.181\pm0.056$ & $-31.050\pm0.056$ & $-26.158$ & 31.677 & $-1.88$ & $-0.19$ \\
102907.06+651024.6 & 2.10$^{+0.23}_{-0.20}$ & $-13.083\pm0.038$ & $-30.662\pm0.038$ & $-26.074$ & 31.971 & $-1.76$ & $-0.03$ \\
115351.11+113649.2 & 2.10$^{+0.23}_{-0.20}$ & $-13.111\pm0.039$ & $-30.770\pm0.039$ & $-26.361$ & 31.484 & $-1.69$ & $-0.02$ \\
141949.39+060654.0 & 2.65$^{+0.35}_{-0.28}$ & $-13.252\pm0.044$ & $-30.742\pm0.044$ & $-26.295$ & 31.533 & $-1.71$ & $-0.04$ \\
150313.63+575151.6 & 2.30$^{+0.23}_{-0.23}$ & $-13.101\pm0.038$ & $-30.676\pm0.038$ & $-26.292$ & 31.574 & $-1.68$ & $0.00$ \\
162622.06+295237.4 & 2.60$^{+0.40}_{-0.30}$ & $-13.373\pm0.048$ & $-30.811\pm0.048$ & $-26.264$ & 31.679 & $-1.75$ & $-0.06$
\enddata
\tablenotetext{a}{$\Gamma_{\rm HR}$\ is a coarse measure of the hardness of the X-ray spectrum determined by comparing the observed {HR}\ to a simulated {HR}\ that takes into account spatial and temporal variations in the instrument response.}
\tablenotetext{b}{The full-band X-ray flux, $F_{\rm X}$, has units of \mbox{\,ergs~cm$^{-2}$~s$^{-1}$}\ and is calculated by integrating the power-law spectrum given by $\Gamma$ and normalized by the full-band count rate from 0.5--8.0~keV. The errors are derived from the 1$\sigma$ errors in the full-band count rate.}
\tablenotetext{c}{X-ray and optical flux densities were measured at rest-frame 2~keV and 2500\,\AA, respectively; units are \mbox{\,ergs~cm$^{-2}$~s$^{-1}$~Hz$^{-1}$}.}
\tablenotetext{d}{The 2500\,\AA\ luminosity density, $L_{\rm 2500}$, has units of \mbox{\,ergs~s$^{-1}$}~Hz$^{-1}$.}
\end{deluxetable}
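For reference, $\alpha_{\rm ox}$\ follows from the tabulated flux densities as the two-point slope
$\Delta\log f_{\nu}/\Delta\log\nu$ between rest-frame 2500\,\AA\ ($\log\nu=15.079$) and
2\,keV ($\log\nu=17.684$). A minimal Python sketch is given below; the
$L_{\rm UV}$--$\alpha_{\rm ox}$ coefficients used to illustrate $\Delta \alpha_{\rm ox}$\ are the best-fit
values of \citet{jbs+07} and are an assumption here; the exact relation adopted for
the tabulated $\Delta \alpha_{\rm ox}$\ values may differ.
\begin{verbatim}
LOG_NU_2KEV = 17.684   # log10(nu/Hz) at rest-frame 2 keV
LOG_NU_2500 = 15.079   # log10(nu/Hz) at rest-frame 2500 Angstroms

def alpha_ox(log_f2kev, log_f2500):
    # Two-point spectral index between 2500 A and 2 keV.
    return (log_f2kev - log_f2500) / (LOG_NU_2KEV - LOG_NU_2500)

def delta_alpha_ox(aox, log_L2500, a=-0.140, b=2.705):
    # aox minus the value expected at this luminosity from an assumed
    # linear L_UV--alpha_ox relation (a, b: Just et al. 2007 fit values).
    return aox - (a * log_L2500 + b)

# First row of the X-ray properties table:
aox = alpha_ox(-30.754, -26.425)
print(round(aox, 2))   # -1.66, as tabulated
\end{verbatim}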
\end{document}
\begin{acknowledgments}
We thank J. Bouma for technical support. This work was supported by the Space
Research Organization Netherlands (SRON), Grant No. MG-051, the ``Cold Atoms''
program of the Dutch Foundation for Fundamental Research on Matter (FOM) and
the European Union, Grant No. HPRN-CT-2000-00125.
\end{acknowledgments}
\section{INTRODUCTION}
The electronic and optical properties of nanomaterials such as carbon
nanotubes (CNTs), graphene, and nanowires show unique behavior due to their
reduced dimensions. For example, the electronic properties of CNTs depend
strongly on their diameter\cite{leonard}, and many-body effects are known to
significantly increase CNT bandgaps compared to density functional theory
(DFT)\cite{Spataru1}. In addition, CNT optical properties are dominated by
excitons \cite{Ando1,Spataru1,Chang1,perebeinos,Mazumdar1,Heinz1} due to a
combination between weak electrostatic screening \cite{leonard} and enhanced
Coulomb effects in quasi-one-dimensional systems \cite{Ando1,Spataru1}.
While the electronic and optical properties of isolated CNTs are well
understood, external factors such as the dielectric
environment\cite{perebeinos,Ando2}, electrostatic
doping\cite{spataru2,spataru3}, and nanotube-nanotube
interactions\cite{rohlfing}, have recently been shown to modify CNT electronic
and optical properties. Another important external factor that impacts CNT
properties is strain. Indeed, it was recognized early on\cite{heyd,tombler}
that strain can significantly modify CNT electronic properties, and this has
been exploited to realize new types of nanoelectromechanical
devices\cite{zheng}. However, to date theoretical studies of the impact of
strain on CNT electronic properties have been mostly limited to tight-binding
models and DFT; given the importance of many-body effects in unstrained CNTs,
a question to address is the role of many-body effects in strained CNTs.
The role of many-body effects on the optical properties of strained CNTs has
received even less attention. Experimental reports of strain modulation of CNT
optical properties have recently emerged\cite{maki,huang,kaniber}, and
indicate that optical transition energies can shift by tens of meVs per
percent strain. However, interpretation of these results has relied on
non-interacting models developed for electronic transitions, which do not
capture excitonic effects that dominate the optical response in CNTs. Progress
in developing exciton-based models for the optical properties of strained CNTs
has focused on approaches relying on the tight-binding or the $\mathbf{k\cdot
p}$ methods \cite{yu1,yu,yu2,ando}. However, a full many-body \textit{ab
initio} calculation of CNT optical properties under strain is still missing.
In this paper, we present such calculations by combining the GW approach with
the Bethe-Salpeter (BSE) equation to study the electronic and optical
properties of strained semiconducting CNTs. We find that the dependence of the
electronic bandgaps on strain is more complex than previously predicted based
on tight-binding models or density-functional theory. In addition, we show
that the exciton energy and exciton binding energy depend significantly on
strain, with variations of tens of meVs per percent strain. Furthermore, the
absorbance is found to be nearly independent of strain as a consequence of the
increase in transition dipole matrix elements with increasing strain.
This paper is organized as follows. After this Introduction, section II
describes the methodology and results for the electronic properties of
strained CNTs. Section III discusses the methodology and results for the
optical spectra, exciton energies, and exciton binding energies. A summary is
presented in Section IV.
\section{Impact of many-body effects on electronic properties}
We perform our \textit{ab initio} calculations on the semiconducting (11,0)
and (17,0) CNTs for uniaxial strains from 0\% to 5\%. We start by
investigating the ground-state properties (e.g. relaxed atomic structure,
electron density) within DFT. The DFT calculations are performed using the
Quantum Espresso package\cite{espresso} within the Local Density Approximation
(LDA), using \textit{ab initio} pseudopotentials in combination with a
plane-wave basis set with a kinetic energy cutoff of 60 Ryd, in a supercell
geometry with tube separation (center to center) of more than double the
nanotube diameter. The atomic structure is relaxed until forces are smaller
than 5 meV/\AA \ for both the strained and unstrained cases. In the unstrained
case the (11,0) and (17,0) CNTs have diameters of 8.6 and 13.2
\AA \ respectively. The strain is applied by stretching (with respect to the
unstrained case) the nanotube unit cell lattice vector along the tube axis
followed by atomic relaxation. This leads to a decrease in nanotube diameter
and a Poisson ratio $\nu\approx0.15$.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig1.eps}
\caption{DFT (LDA) bandstructure
of the (17,0) and (11,0) CNTs at 0\% and 2\% strain. The electronic bandgaps
and optical transitions studied in this paper are indicated with arrows. $b_{3}$ is the length of the Brillouin zone.}
\end{figure}
Figure 1 shows the DFT bandstructures at 0\% and 2\% strain, with the
electronic bandgaps and optical transitions studied in this paper labelled in
the figure. We note that for the unstrained (17,0) CNT the $E_{33}$ transition
has higher energy than the $E_{44}$ transition due to trigonal
warping\cite{trigonal}. Within LDA (Table 1), the (17,0) fundamental bandgap is
found to decrease with strain with a change $\Delta E_{g}^{11}$ of -125
meV/\%, in agreement with previous DFT calculations\cite{sreekala,valavala}.
The higher energy gaps show changes $\Delta E_{g}^{22}=+108$ meV/\%, $\Delta
E_{g}^{33}=-136$ meV/\%, and $\Delta E_{g}^{44}=+83$ meV/\%. These values can
be compared with those obtained from the simple tight-binding (TB) expression
for small strain\cite{yang} applied to zigzag CNTs
\begin{equation}
\Delta E_{g}^{kk}=(-1)^{k}3\gamma(1+\nu)\sigma,
\end{equation}
where $\gamma$ is the tight-binding overlap integral and $\sigma$ is the
strain. Reasonable agreement with the LDA values can be obtained if one uses
$\gamma=3.3$ eV and $\nu=0.15$, giving $\Delta E_{g}/\sigma=$ $\pm$114 meV/\%.
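A one-line numerical check of Eq.~(1) with the parameters just quoted (sketch):
\begin{verbatim}
gamma, nu = 3.3, 0.15              # eV; dimensionless Poisson ratio
slope = 3 * gamma * (1 + nu)       # |dE_g/d(strain)| in eV per unit strain
print(1000 * slope / 100)          # ~113.9 meV per 1% strain, sign (-1)^k
\end{verbatim}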
While LDA and TB calculations agree to a large extent, an open question is
whether many-body effects can change the above picture. To address this
question, we performed quasiparticle calculations using the many-body GW
approach\cite{HL}. The electron self-energy $\Sigma=iGW$ is obtained within
the $G_{0}W_{0}$ approximation, i.e. using the LDA eigenvalues and
wavefunctions to construct the 1-particle Green's function $G$. The screened
Coulomb interaction $W$ is evaluated within the Random Phase Approximation and
extended at non-zero frequencies using the Plasmon-Pole approximation\cite{HL}.
We consider empty states up to an energy cutoff of $\sim$60 eV, and use the
`static-remainder' technique\cite{statrem} to ensure convergence with respect
to the number of empty states. Convergence with respect to k-point sampling is
achieved with 128 k-points in the one-dimensional Brillouin zone. Also, the
Coulomb potential is truncated\cite{Sohrab,ApplPhysASpataru} in order to
prevent tube-tube interactions or periodic image effects due to the use of a
periodic supercell.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig2.eps}
\caption{Strain dependence of the DFT, GW, and BSE gaps for the (17,0) CNT.}
\end{figure}
Figure 2 and Table 1 show the calculated GW gaps as a function of strain for
the (17,0) CNT. The qualitative dependence on strain is similar to that
obtained within LDA, with the gaps increasing or decreasing with strain
depending on the band index. However, we find that strain effects are much
more complex within GW: for example, $E_{11}$ and $E_{33}$ decrease by 213
meV/\% and 231 meV/\%, a much stronger change compared to LDA; on the other
hand, $E_{22}$ and $E_{44}$ increase by 86 meV/\% and 46 meV/\%, much weaker
than LDA predicts.
\begin{table}[tbp] \centering
\begin{tabular}
[c]{cccccc}\hline\hline
& Strain (\%) & E$_{11}$(eV) & E$_{22}$(eV) & E$_{33}$(eV) & E$_{44}$(eV)\\\hline
LDA & 0 & 0.606 & 0.968 & 2.495 & 2.153\\
& 2 & 0.356 & 1.184 & 2.223 & 2.329\\\hline
GW & 0 & 1.291 & 1.761 & 3.741 & 3.385\\
& 2 & 0.864 & 1.934 & 3.278 & 3.478\\\hline
BSE & 0 & 0.717 & 1.180 & 2.985 & 2.755\\
& 2 & 0.427 & 1.435 & 2.670 & 2.905\\\hline
$E_{b}$ & 0 & 0.574 & 0.581 & 0.756 & 0.630\\
& 2 & 0.437 & 0.499 & 0.608 & 0.573\\\hline\hline
\end{tabular}
\caption{Calculated transition energies for the (17,0) CNT using LDA, GW, and BSE. The exciton binding energy $E_b$ is
calculated as the difference between the transition energies from GW and BSE.}\label{TableKey}
\end{table}
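The strain rates and binding energies quoted in the text follow directly from the
tabulated values; a short Python sketch (data transcribed from Table 1):
\begin{verbatim}
# (17,0) transition energies in eV at 0% and 2% strain (Table 1).
gw  = {'E11': (1.291, 0.864), 'E22': (1.761, 1.934),
       'E33': (3.741, 3.278), 'E44': (3.385, 3.478)}
bse = {'E11': (0.717, 0.427), 'E22': (1.180, 1.435),
       'E33': (2.985, 2.670), 'E44': (2.755, 2.905)}

for k in gw:
    rate = 1000 * (gw[k][1] - gw[k][0]) / 2      # GW slope, meV per % strain
    eb = [g - b for g, b in zip(gw[k], bse[k])]  # E_b = E_GW - E_BSE
    print(k, round(rate, 1), [round(e, 3) for e in eb])
# E11: about -213 meV/%, with E_b = 0.574 -> 0.437 eV, etc.
\end{verbatim}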
The above results for the (17,0) CNT are limited to two strain values due to
the computational demands of the calculations. Tight-binding models predict a
linear dependence of the bandgaps on strain for relatively small strains with
changes in bandgaps independent of the tube diameter. To check whether this
trend holds when many-body effects are included, we performed GW calculations
for three different values of the strain for the $E_{11}$ gap of the (11,0)
CNT. As shown in Fig. 3 and Table 2, we obtain a linear dependence of the
bandgap on strain in agreement with the tight-binding prediction for small
strains and DFT. However, we find $\Delta E_{g}^{11}=-191$ meV/\%, a value much
larger than the LDA value of $-127$ meV/\%; thus, similar to the (17,0) CNT, we
find that many-body effects can significantly impact the electronic properties
of the strained (11,0) CNT.
\begin{table}[tbp] \centering
\begin{tabular}
[c]{lllll}\hline\hline
Strain (\%) & LDA & GW & BSE & $E_{b}$\\\hline
0 & 0.950 & 1.922 & 1.062 & 0.860\\
2 & 0.694 & 1.540 & 0.775 & 0.764\\
5 & 0.319 & 0.837 & 0.348 & 0.488\\\hline\hline
\end{tabular}
\caption{Energies (in eV) for the $E_{11}$ transition in the (11,0) CNT calculated using LDA, GW, and BSE. The exciton binding
energy $E_b$ is calculated as the difference between the GW and BSE energies.}\label{TableKey2}
\end{table}
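The quoted slopes are the 0--2\% secants of the tabulated energies; a short sketch
(values transcribed from Table 2):
\begin{verbatim}
strain = (0, 2, 5)                       # percent
lda = (0.950, 0.694, 0.319)              # eV (Table 2)
gw  = (1.922, 1.540, 0.837)              # eV (Table 2)

for name, e in (('LDA', lda), ('GW', gw)):
    # secant slopes in meV per % strain
    s02 = 1000 * (e[1] - e[0]) / (strain[1] - strain[0])
    s05 = 1000 * (e[2] - e[0]) / (strain[2] - strain[0])
    print(name, round(s02), round(s05))
# Rounded table entries give about -128 (LDA) and -191 (GW) meV/% over 0-2%.
\end{verbatim}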
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig3.eps}
\caption{Strain dependence of the $E_{11}$ transition for the (11,0) CNT.}
\end{figure}
The differences between the GW and LDA results stem from the fact that with
reduction (augmentation) of the fundamental bandgap there is an increase
(decrease) in the dielectric screening $\varepsilon$ of the CNT. Consider the
case where the fundamental bandgap decreases with strain: we plot in Fig. 4
the dielectric screening $\varepsilon^{-1}\left( q,\omega=0\right) $ for the
(11,0) CNT for strains of 0\%, 2\%, and 5\%. The increased screening affects
the screened Coulomb interaction $W=\varepsilon^{-1}v$ and hence the electron
self-energy $\Sigma=iGW$ present in the many-body calculations. More exactly,
the contribution $\Sigma_{g}$ of the electron self-energy to the quasiparticle
bandgap $E_{g}^{GW}=E_{g}^{LDA}-V_{g}^{xc}+\Sigma_{g}$, decreases appreciably
(same order of magnitude as the change in the LDA bandgap) upon strain:
$\delta\Sigma_{g}\equiv\Sigma_{g}(\sigma)-\Sigma_{g}(\sigma=0)<0$. This is a
many-body effect not captured by the LDA exchange-correlation Kohn-Sham
potential $V^{xc}$, and as expected we find $\delta V_{g}^{xc}/\delta
\Sigma_{g}\ll1$. Thus, the change in the fundamental quasiparticle bandgap
upon strain as obtained within GW is more pronounced than the one obtained
within a mean-field (LDA) theory, with appreciable contribution from
self-energy corrections: $\delta E_{g}^{GW}\approx\delta E_{g}^{LDA
+\delta\Sigma_{g}$. For $E_{11}$ and $E_{33}$ this leads to a larger decrease
in the bandgap compared to LDA since both $\delta E_{g}^{LDA}$ and
$\delta\Sigma_{g}$ are negative; in contrast, $\delta E_{g}^{LDA}$ is positive
for $E_{22}$ and $E_{44}$, and the still negative $\delta\Sigma_{g}$ leads to
a smaller increase of the bandgap with strain.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig4.eps}
\caption{Strain dependence of the dielectric screening $\varepsilon^{-1}$ for
the (11,0) CNT. $b_{3}$ is the length of the unit cell.}
\end{figure}
\section{Impact of many-body effects on optical properties}
We next turn to the optical properties. To calculate these, we start from the
GW results and couple them with the BSE. Both the BSE and GW calculations were
performed using the BerkeleyGW package\cite{BerkeleyGW}. We solve the BSE for
excitons within the static approximation for the dielectric screening and
within the Tamm-Dancoff approximation for excitons \cite{Tamm}. Having
obtained the excitonic properties, one can then compute the optical response of
CNTs using the standard approach \cite{Spataru1,BerkeleyGW}. The optical
bandgap is equal to the quasiparticle bandgap minus the binding energy of the
lowest bright exciton, a quantity which results from the overall attractive
electron-hole interaction between the (quasi)electron and the (quasi)hole
forming the exciton.
Figure 5 shows the optical absorbance for the (17,0) CNT calculated within GW
(no excitonic effects) and calculated within BSE (i.e. including excitonic
effects) for the two lowest optical transitions. Here the absorbance is
obtained from $A(\omega)\sim\omega\varepsilon_{2}\left( \omega\right) $
where $\varepsilon_{2}\left( \omega\right) $ is the imaginary part of
$\varepsilon$. The peaks in the figure for the BSE results indicate the lowest
energy bright exciton for light polarization parallel to the nanotube axis. At
zero strain (Fig. 5a), the $E_{11}$ and $E_{22}$ transitions show strong
many-body effects, with exciton binding energies of 574 meV and 581 meV.
Similar results are obtained for the $E_{33}$ and $E_{44}$ transitions (Table
1) with $E_{b}^{33}=756$ meV and $E_{b}^{44}=630$ meV.
Upon application of strain (Fig. 5b) one can see that the $E_{11}$ exciton
energy $\Omega_{11}$ decreases by 145 meV/\% while $\Omega_{22}$ increases by
128 meV/\%. Results for $\Omega_{33}$ and $\Omega_{44}$ (Table 1) give values
of -157 meV/\% and +75 meV/\%, respectively. Thus, the qualitative trends
observed from the GW calculations are maintained with the optical properties;
however, because the quasiparticle bandgap and the exciton energy have a
different dependence on strain, $dE_{g}^{GW}/d\sigma\neq d\Omega/d\sigma$, one
can deduce that the exciton binding energy $E_{b}=E_{g}^{GW}-\Omega$ depends
on strain. This can be seen in Fig. 5 and in Table 1 where all of the exciton
binding energies are decreased under strain by amounts ranging from 28 meV/\%
for the $E_{44}$ transition to 74 meV/\% for the $E_{33}$ transition. Much
like the changes in the quasiparticle gap, the decrease in binding energy also
stems from the change in dielectric screening upon applied strain. Indeed, the
attractive interaction between the electron and hole forming the exciton is
mediated by the screened Coulomb interaction $W=\varepsilon^{-1}v$, and
because $\varepsilon$ is always determined by the lowest energy electronic
bandgap, all of the optical transitions will be affected in the same way
leading to the common decrease in binding energy.
It should also be noted that the exciton oscillator strength shows very small
variation with strain. Since the oscillator strength is $\sim\Omega\mu_{a}^{2}/a$,
with $\mu_{a}^{2}/a$ the squared exciton transition dipole matrix
element per unit tube length \cite{Spataru2}, the implication is that
$\mu_{a}^{2}/a$ strongly increases with increasing strain. Indeed, for the (17,0)
CNT, we find that $\mu_{a}^{2}/a$ increases from $\sim$3.9 a.u. at zero strain
to $\sim$6.8 a.u. at 2\% strain.
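This is consistent with the near constancy of the oscillator strength: if
$\Omega\mu_{a}^{2}/a$ is strain independent, then $\mu_{a}^{2}/a$ should scale as
$1/\Omega$. A quick check with the $E_{11}$ numbers (sketch):
\begin{verbatim}
omega = {0: 0.717, 2: 0.427}   # E11 exciton energy in eV (Table 1, BSE)
mu2a_0 = 3.9                   # mu_a^2/a in a.u. at 0% strain (quoted above)
print(round(mu2a_0 * omega[0] / omega[2], 1))
# ~6.5 a.u., close to the ~6.8 a.u. found at 2% strain
\end{verbatim}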
\begin{figure}[h]
\centering
\includegraphics[width=3in]{Fig5.eps}
\caption{Optical absorption spectrum of the (17,0) CNT calculated without the
electron-hole interaction (GW) and with the electron-hole interaction (BSE).
Panel (a) is for the unstrained case and panel (b) is for 2\% strain.}
\end{figure}
The optical results for the (17,0) CNT can be generalized to the (11,0) CNT as
well, at least for the lowest optical transition (Fig. 3 and Table 2). Indeed
we find $d\Omega^{11}/d\sigma=-142$ meV/\% and a reduction of the exciton
binding energy by several tens of meV/\%. (At 2\% the reduction in $E_{b}$ is
about 3.5 times as large as that obtained using a tight-binding approach for
excitons\cite{yu} for the (11,0) CNT.) Furthermore, the exciton energy is
found to depend linearly on strain, and turns out to be relatively close to
the DFT result. As we discussed above, with decreasing bandgap the dielectric
screening gets enhanced, and thus the binding between electron and hole
decreases. This effect also explains why the change in optical gap $\Omega$
upon applied strain is similar to that obtained at the LDA level: it is due to
cancellation effects between quasiparticle self-energy corrections and
excitonic effects.
\section{Summary}
In summary, we performed many-body \textit{ab initio} calculations of the
electronic and optical properties of semiconducting zigzag CNTs under uniaxial
strain. We find that the fundamental electronic bandgap depends more strongly
on strain than previously predicted by non-interacting models. In addition, we
find that self-energy corrections generally decrease the bandgaps, which
enhances or reduces the impact of strain compared to DFT depending on which
transition is considered. Furthermore, the optical transitions are also found
to be affected by many-body effects. In particular, the exciton binding energy
decreases with increasing strain regardless of the transition, with variations
of several tens of meVs per percent strain. More generally, our results
indicate that quasiparticle and excitonic effects are strongly tied, and that
the interpretation of optomechanical experiments in CNTs requires a more
in-depth consideration of many-body effects. This is further supported by
other many-body calculations on strained bulk\cite{thulin, lambrecht} and
two-dimensional materials\cite{shi,liang} where material-specific and
dimensionality phenomena have been observed.
\textbf{ACKNOWLEDGEMENTS}
Work supported by the Laboratory Directed Research and Development program at
Sandia National Laboratories, a multiprogram laboratory managed and operated
by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin
Corporation, for the United States Department of Energy's National Nuclear
Security Administration under Contract DE-AC04-94AL85000.
\section{Introduction}
Cylindrical systems in Einstein's theory have puzzled relativists since
Levi-Civita found their vacuum solution in 1919 \cite{Levi}. The precise
meaning of its two independent parameters
is still hard to grasp and, in particular, the one that describes the
Newtonian energy per unit length $\sigma$ looks the most elusive. The fact
that there are two parameters, while
the Newtonian counterpart has only one, seems
sufficient justification for further research. But the importance of
this research goes further if one notices
the close link between Levi-Civita, $\gamma$ and Schwarzschild spacetimes \cite{Herrera4}
and its peculiar properties.
Besides, there has been renewed interest in cylindrically symmetric sources
in relation with different, classical and quantum, aspects of gravitation
(see \cite{1} and references
therein). Such sources may serve as test--bed for numerical relativity,
quantum gravity and for probing cosmic censorship and hoop conjecture,
among other important issues, and
represent a natural tool to seek the physics that lies behind the two
independent parameters in the Levi-Civita metric.
The purpose of this work is twofold. On the one hand we would like to
present systematically the field equations as well as all regularity and
junction conditions required to ensure
the correct behaviour of a source of a cylindrically symmetric spacetime
(Levi-Civita). On the other hand we want to bring out the relationship
between the Weyl tensor and different
aspects of the source. This last question is in turn motivated by the very
conspicuous link existing in the spherically symmetric case between the
Weyl tensor, the inhomogeneity of
the energy density and the anisotropy of pressure \cite{est}.
The paper is organized as follows: in section 2 we present the general form
of the energy momentum tensor, the line element, the Einstein equations,
the active gravitational mass and
the Weyl tensor. The exterior space-time as well as junction and
regularity conditions are discussed in section 3. In section 4 the
consequences derived from the condition of conformal
flatness are obtained. The non existence of conformally flat models
satisfying Darmois conditions is given in section 5. Finally, some
conclusions are presented in the last section.
\section{Interior spacetime}
We consider a static cylindrically symmetric anisotropic non-dissipative
fluid bounded by a cylindrical surface $\Sigma$ and with energy momentum
tensor given by
\begin{equation}
T_{\alpha\beta}=(\mu + P_r)V_{\alpha}V_{\beta}+P_rg_{\alpha\beta}+
(P_{\phi}-P_r)K_{\alpha}K_{\beta}+(P_z-P_r)S_{\alpha}S_{\beta}, \label{1}
\end{equation}
where, $\mu$ is the energy density, $P_r$, $P_z$ and $P_{\phi}$ are the
principal stresses and $V_{\alpha}$, $K_{\alpha}$ and $S_{\alpha}$ are
vectors satisfying
\begin{equation}
V^{\alpha}V_{\alpha}=-1, \;\; K^{\alpha}K_{\alpha}=S^{\alpha}S_{\alpha}=1, \;\;
V^{\alpha}K_{\alpha}=V^{\alpha}S_{\alpha}=K^{\alpha}S_{\alpha}=0. \label{2}
\end{equation}
We assume for the interior metric to $\Sigma$ the general static
cylindrically symmetric form, which can be written as
\begin{equation}
ds^2=-A^2dt^2+B^2(dr^2+dz^2)+C^2d\phi^2, \label{3}
\end{equation}
where $A$, $B$ and $C$ are all functions of $r$. To represent cylindrical
symmetry, we impose the following ranges on the coordinates
\begin{equation}
-\infty\leq t\leq\infty, \;\; 0\leq r, \;\; -\infty<z<\infty, \;\;
0\leq\phi\leq 2\pi. \label{3a}
\end{equation}
We number the coordinates $x^0=t$, $x^1=r$, $x^2=z$ and $x^3=\phi$ and we
choose the fluid being at rest in this coordinate system, hence from
(\ref{2}) and (\ref{3}) we have
\begin{equation}
V_{\alpha}=-A\delta_{\alpha}^0, \;\; S_{\alpha}=B\delta_{\alpha}^2, \;\;
K_{\alpha}=C\delta_{\alpha}^3. \label{4}
\end{equation}
For the Einstein field equations, $G_{\alpha\beta}=\kappa T_{\alpha\beta}$
with (\ref{1}), (\ref{3}) and (\ref{4}) we have the non null components
\begin{eqnarray}
G_{00}=-\left(\frac{A}{B}\right)^2\left[
\left(\frac{B^{\prime}}{B}\right)^{\prime}+\frac{C^{\prime\prime}}{C}\right]
=\kappa\mu A^2, \label{5} \\
G_{11}=\frac{A^{\prime}}{A}\frac{C^{\prime}}{C}+\left(\frac{A^{\prime}}{A}+\frac{C^{\prime}}{C}
\right)\frac{B^{\prime}}{B}=\kappa P_rB^2, \label{6} \\
G_{22}=\frac{A^{\prime\prime}}{A}+\frac{C^{\prime\prime}}{C}+\frac{A^{\prime
}}{A}
\frac{C^{\prime}}{C}-\left(\frac{A^{\prime}}{A}+\frac{C^{\prime}}{C}\right)\frac{B^{\prime}}{B}
=\kappa P_zB^2, \label{7} \\
G_{33}=\left(\frac{C}{B}\right)^2\left[\frac{A^{\prime\prime}}{A}+
\left(\frac{B^{\prime}}{B}\right)^{\prime}\right]=\kappa P_{\phi} C^2, \label{8}
\end{eqnarray}
where the primes stand for differentiation with respect to $r$. Since we
have four equations for seven unknown functions, three additional
constraints (e.g. equations of state) should
be given in order to uniquely determine a solution.
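Equations (\ref{5}-\ref{8}) can be reproduced symbolically; the following sympy
sketch (assuming the $-+++$ signature and standard curvature conventions, which may
differ from the text by an overall sign) computes, e.g., $G_{00}$ for the metric (\ref{3}):
\begin{verbatim}
import sympy as sp

t, r, z, phi = sp.symbols('t r z phi')
A, B, C = (sp.Function(f)(r) for f in ('A', 'B', 'C'))
x = [t, r, z, phi]
g = sp.diag(-A**2, B**2, B**2, C**2)
gi = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(gi[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
              - sp.diff(g[b, c], x[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

def ric(b, c):  # Ricci tensor R_{bc}
    return sp.simplify(sum(sp.diff(Gam[a][b][c], x[a])
        - sp.diff(Gam[a][a][b], x[c])
        + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][a][b]
              for d in range(4)) for a in range(4)))

R = [[ric(b, c) for c in range(4)] for b in range(4)]
Rs = sp.simplify(sum(gi[a, b]*R[a][b] for a in range(4) for b in range(4)))
print(sp.factor(R[0][0] - Rs*g[0, 0]/2))   # G_00; compare with Eq. (5)
\end{verbatim}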
There are two compact expressions that can be obtained from (\ref{6}-\ref{8}),
\begin{eqnarray}
\kappa(P_r+P_z)B^2=\frac{(AC)^{\prime\prime}}{AC}, \label{8d} \\
\kappa(P_z-P_{\phi})B^2=\frac{h^{\prime\prime}}{h}+\left(\frac{A^{\prime}}{A}+
\frac{B^{\prime}}{B}\right)\frac{h^{\prime}}{h}, \label{8e}
\end{eqnarray}
where
\begin{equation}
h=\frac{C}{B}. \label{8f}
\end{equation}
The conservation equation, $T_{r;\beta}^{\beta}=0$, with (\ref{1}) and
(\ref{3}) becomes
\begin{equation}
(\mu+P_r)\frac{A^{\prime}}{A}+P_r^{\prime}+(P_r-P_z)\frac{B^{\prime}}{B}+(P_
r-P_{\phi})
\frac{C^{\prime}}{C}=0, \label{8aa}
\end{equation}
which can substitute any of the independent field equations (\ref{5}-\ref{8}).
The Whittaker formula \cite{Whittaker} for the active gravitational mass
per unit length $m$ of a static
distribution of perfect fluid with energy density $\mu$ and principal
stresses $P_r$, $P_z$ and
$P_{\phi}$ inside a cylinder of surface $\Sigma$ is
\begin{equation}
m=2\pi \int_0^{r_{\Sigma}}(\mu +P_r +P_z +P_{\phi})\sqrt{-g}dr, \label{8a}
\end{equation}
where $g$ is the determinant of the metric. Now substituting (\ref{3}) and
(\ref{5}-\ref{8}) into (\ref{8a}) we obtain
\begin{equation}
m=\frac{4\pi}{\kappa}\int_0^{r_{\Sigma}}\left(\frac{A^{\prime\prime}}{A^{\prime}}+
\frac{C^{\prime}}{C}\right)A^{\prime}C\;dr, \label{8b}
\end{equation}
which can be recast into the simpler form
\begin{equation}
m=\frac{4\pi}{\kappa}\int_0^{r_\Sigma}(A^{\prime}C)^{\prime}dr. \label{8c}
\end{equation}
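The step from (\ref{8b}) to (\ref{8c}) is just the product rule,
$(A^{\prime}C)^{\prime}=A^{\prime\prime}C+A^{\prime}C^{\prime}$; a one-line sympy
check (sketch):
\begin{verbatim}
import sympy as sp
r = sp.Symbol('r')
A, C = sp.Function('A')(r), sp.Function('C')(r)
lhs = (sp.diff(A, r, 2)/sp.diff(A, r) + sp.diff(C, r)/C)*sp.diff(A, r)*C
print(sp.simplify(lhs - sp.diff(sp.diff(A, r)*C, r)))   # -> 0
\end{verbatim}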
The spacetime (\ref{3}) has the following non-null components of the Weyl
tensor $C_{\alpha\beta\gamma\delta}$
\begin{eqnarray}
C_{1212}=-\left(\frac{B^2}{AC}\right)^2C_{0303}=\frac{B^2}{6}\left[\frac{A^{\prime\prime}}{A}-
2\left(\frac{B^{\prime}}{B}\right)^{\prime}+\frac{C^{\prime\prime}}{C}-
2\frac{A^{\prime}}{A}\frac{C^{\prime}}{C}\right], \label{9} \\
C_{1313}=-\left(\frac{C}{A}\right)^2C_{0202} \nonumber \\
=\frac{C^2}{6}\left[\frac{A^{\prime\prime}}{A}+
\left(\frac{B^{\prime}}{B}\right)^{\prime}-2\frac{C^{\prime\prime}}{C}-
3\left(\frac{A^{\prime}}{A}-\frac{C^{\prime}}{C}\right)\frac{B^{\prime}}{B}+
\frac{A^{\prime}}{A}\frac{C^{\prime}}{C}\right], \label{10} \\
C_{2323}=-\left(\frac{C}{A}\right)^2C_{0101} \nonumber \\
=\frac{C^2}{6}\left[-2\frac{A^{\prime\prime}}{A}+
\left(\frac{B^{\prime}}{B}\right)^{\prime}+\frac{C^{\prime\prime}}{C}+
3\left(\frac{A^{\prime}}{A}-\frac{C^{\prime}}{C}\right)\frac{B^{\prime}}{B}+
\frac{A^{\prime}}{A}\frac{C^{\prime}}{C}\right]. \label{11}
\end{eqnarray}
We obtain from (\ref{9}-\ref{11})
\begin{equation}
\left(\frac{C}{B}\right)^2C_{1212}+C_{1313}+C_{2323}=0, \label{14}
\end{equation}
hence we have only two independent components of the Weyl tensor for (\ref{3}).
\section{Exterior spacetime and junction conditions}
For the exterior spacetime of the cylindrical surface $\Sigma$, since the
system is static, we take the Levi-Civita metric \cite{Levi},
\begin{equation}
ds^2=-a^2\rho^{4\sigma}dt^2+b^2\rho^{4\sigma(2\sigma-1)}(d\rho^2+dz^2)
+c^2\rho^{2(1-2\sigma)}d\phi^2, \label{16}
\end{equation}
where $a$, $b$, $c$ and $\sigma$ are real constants. The coordinates $t$,
$z$ and $\phi$ in (\ref{16}) can be taken the same as in (\ref{3}) and with
the same ranges (\ref{3a}).
The radial coordinates in (\ref{3}) and (\ref{16}), $r$ and $\rho$, are not
necessarily continuous on $\Sigma$ as we see below by applying the junction
conditions. The constants $a$
and $b$ can be removed by scale transformations, while $c$ cannot be
transformed away if we want to preserve the range of $\phi$ in (\ref{16})
\cite{16}. The constant $\sigma$
represents the Newtonian mass per unit length. (For a discussion of the
number of constants in cylindrical spacetimes see
\cite{Silva,MacCallum,Bonnor}.)
In accordance with the Darmois junction conditions \cite{Darmois}, we
suppose that the first fundamental form which $\Sigma$ inherits from the
interior metric (\ref{3}) must be the same as the one it inherits from the
exterior metric (\ref{16}); and similarly, the inherited second fundamental
form must be the same. The conditions are necessary and sufficient for a
smooth matching without a surface layer.
The equation of $\Sigma$, for the interior and exterior spacetimes, can be
written respectively as
\begin{equation}
f(r)=r-r_{\Sigma}=0, \;\; g(\rho)=\rho-\rho_{\Sigma}=0, \label{20}
\end{equation}
where $r_{\Sigma}$ and $\rho_{\Sigma}$ are constants. Imposing the continuity
of the first and second fundamental forms across (\ref{20}), we obtain
\begin{eqnarray}
A_{\Sigma}=a\rho_{\Sigma}^{2\sigma},
\;\;B_{\Sigma}=b\rho_{\Sigma}^{2\sigma(2\sigma-1)},
\;\; C_{\Sigma}=c\rho_{\Sigma}^{1-2\sigma} \label{21} \\
\left(\frac{A^{\prime}}{A}\right)_{\Sigma}=\frac{2\sigma}{\rho_{\Sigma}}, \;\;
\left(\frac{B^{\prime}}{B}\right)_{\Sigma}=\frac{2\sigma(2\sigma-1)}{\rho_{\Sigma}}, \;\;
\left(\frac{C^{\prime}}{C}\right)_{\Sigma}=\frac{1-2\sigma}{\rho_{\Sigma}}.\label{22}
\end{eqnarray}
Considering (\ref{6}) on the surface $\Sigma$ and substituting the
junction conditions (\ref{22}) we obtain
\begin{equation}
P_{r\Sigma}=0, \label{23}
\end{equation}
as expected.
The Whittaker mass per unit length (\ref{8c}) after integration and using
the junction conditions (\ref{21}) and (\ref{22}) becomes
\begin{equation}
m=\frac{4\pi}{\kappa}\left[2ac\sigma-(A^{\prime}C)_0\right], \label{23a}
\end{equation}
where the index 0 means the quantity evaluated at the axis of the mass
distribution.
Next, regularity conditions on the axis of symmetry imply \cite{Philbin}
\begin{equation}
A^{\prime}(0)=B^{\prime}(0)=C^{\prime \prime}(0)=C(0)=0, \;\;\;
B(0)=C^{\prime}(0)=1,
\label{regul}
\end{equation}
hence, taking the gravitational coupling constant $G=1$, so that
$\kappa=8\pi$, (\ref{23a}) reduces to
\begin{equation}
m=ac\sigma. \label{23b}
\end{equation}
\section{Conformally flat interior}
The conformally flat condition imposes the vanishing of all Weyl tensor
components, hence from (\ref{9}-\ref{14}) we have
\begin{eqnarray}
S^{\prime}+S^2-\frac{2h^{\prime}}{h}S+\frac{h^{\prime\prime}}{h}=0,
\label{32} \\
S^{\prime}+S^2+\frac{h^{\prime}}{h}S-\frac{2h^{\prime\prime}}{h}=0,
\label{33}
\end{eqnarray}
where
\begin{equation}
S=\frac{A^{\prime}}{A}-\frac{B^{\prime}}{B}. \label{13}
\end{equation}
Then it follows
\begin{eqnarray}
h^{\prime}S-h^{\prime\prime}=0, \label{35n} \\
S^{\prime}+S^2-\frac{h^{\prime\prime}}{h}=0, \label{36n}
\end{eqnarray}
which produces
\begin{equation}
h^{\prime\prime\prime}-\frac{h^{\prime\prime}h^{\prime}}{h}=0. \label{37n}
\end{equation}
Let us now examine the two possible cases, $h^{\prime}\neq 0$ and $h^{\prime}= 0$.
\subsection{Case $h^{\prime} \neq 0$}
We obtain from (\ref{37n}) after integration
\begin{equation}
h=a_1\exp(a_2r)+a_3\exp(-a_2r), \label{44n}
\end{equation}
where $a_1$, $a_2$ and $a_3$ are integration constants with the condition that
\begin{equation}
h^2\geq 4a_1a_3. \label{45}
\end{equation}
However, regularity conditions on the axis (\ref{regul}) require
\begin{equation}
a_1=-a_3, \label{39n}
\end{equation}
and (\ref{44n}) reduces to
\begin{equation}
h=a_1\sinh(a_2r), \label{40n}
\end{equation}
where $a_1$ was redefined.
Substituting (\ref{40n}) into (\ref{35n}) and integrating we have
\begin{equation}
A=a_3\cosh(a_2r)B, \label{46n}
\end{equation}
where $a_3$ is another integration constant.
Thus, conformal flatness reduces the total number
of unknown functions by two, through (\ref{40n}) and (\ref{46n}). However,
since the total number of
variables is seven, we
still need one condition in order to determine a solution uniquely.
So, let us consider the three different cases of
isotropy.
\newpage
i) $P_z=P_{\phi}$
Then we obtain from (\ref{7}), (\ref{8}) and (\ref{8f})
\begin{equation}
\frac{h^{\prime
\prime}}{h}+\frac{h^{\prime}}{h}\left(\frac{A^{\prime}}{A}+\frac{B^{\prime}}
{B}\right)=0,
\label{n2}
\end{equation}
which together with (\ref{35n}) yields $A^{\prime}=0$; this in turn
implies, because of (\ref{46n}) and assuming without loss of generality $A=1$,
\begin{equation}
B=\frac{1}{\cosh(a_2r)}, \label{n3}
\end{equation}
where we chose $a_3=1$ to satisfy (\ref{regul}).
Feeding back (\ref{40n}), (\ref{46n}) and (\ref{n3}) into (\ref{5}--\ref{8})
we obtain
\begin{equation}
P_r=P_z=P_{\phi}=-\frac{\mu}{3}=-\frac{a_2^2}{\kappa}.
\label{n4}
\end{equation}
Thus the solution represents an incompressible cylinder with isotropic
(negative) stresses.
ii) $P_r=P_z$
From (\ref{6}) and (\ref{7}), we have
\begin{equation}
\frac{A^{\prime
\prime}}{A}+\frac{B^{\prime\prime}}{B}-2\left(\frac{A^{\prime}}{A}+
\frac{C^{\prime}}{C}\right)\frac{B^{\prime}}{B}=0,
\label{n5}
\end{equation}
and with (\ref{40n}) and (\ref{46n}) we obtain the equation for $B$,
\begin{equation}
\frac{B^{\prime\prime}}{B}-2\left(\frac{B^{\prime}}{B}\right)^2+a_2^2=0,
\label{n6a}
\end{equation}
and by choosing the integration constants to satisfy (\ref{regul}) its
solution is
\begin{equation}
B=\frac{1}{\cosh(a_2r)}. \label{n6}
\end{equation}
From (\ref{46n}) and (\ref{n6}) and assuming $A=1$ this case yields the
same solution as the preceding one.
iii) $P_r=P_{\phi}$.
From (\ref{6}) and (\ref{8}), it follows
\begin{equation}
\frac{A^{\prime \prime}}{A}+\frac{B^{\prime
\prime}}{B}-2\left(\frac{B^{\prime}}{B}\right)^2-2\frac{A^{\prime}}{A}\frac{
B^{\prime}}{B}-\frac{h^{\prime}}{h}
\left(\frac{A^{\prime}}{A}+\frac{B^{\prime}}{B}\right)=0,
\label{n7}
\end{equation}
and substituting into it (\ref{40n}) and (\ref{46n}) leads to the equation
for $B$,
\begin{equation}
\frac{B^{\prime\prime}}{B}-2\left(\frac{B^{\prime}}{B}\right)^2-a_2\coth(a_2r)
\frac{B^{\prime}}{B}=0, \label{n8a}
\end{equation}
which has the solution satisfying the regularity conditions (\ref{regul})
\begin{equation}
B=\frac{1}{a_4[\cosh(a_2r)-1]+1}, \label{n8}
\end{equation}
where $a_4$ is an integration constant.
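That (\ref{n8}) solves (\ref{n8a}) and obeys the regularity condition $B(0)=1$ can
be verified quickly in sympy (sketch):
\begin{verbatim}
import sympy as sp
r, a2, a4 = sp.symbols('r a_2 a_4', positive=True)
B = 1/(a4*(sp.cosh(a2*r) - 1) + 1)
ode = (sp.diff(B, r, 2)/B - 2*(sp.diff(B, r)/B)**2
       - a2*sp.coth(a2*r)*sp.diff(B, r)/B)
print(sp.simplify(ode), B.subs(r, 0))   # -> 0 1
\end{verbatim}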
Then substituting (\ref{n8}) into (\ref{46n}) we get
\begin{equation}
A=\frac{a_3\cosh(a_2r)}{a_4[\cosh(a_2r)-1]+1}
\label{n10}
\end{equation}
Using field equations (\ref{5}-\ref{8}) together with (\ref{40n}),
(\ref{n8}) and
(\ref{n10}), we can obtain the expressions for the physical variables, which
are
\begin{eqnarray}
\kappa\mu=2a_2^2a_4\left[(1-a_4)\cosh(a_2r)+a_4-3\right]+3a_2^2, \label{n11} \\
\kappa P_r=\kappa
P_{\phi}=2a_2^2a_4\left[(a_4-1)\tanh(a_2r)\sinh(a_2r)+1\right]-a_2^2,
\label{n12} \\
\kappa P_z=2a_2^2a_4\left[\frac{1-a_4}{\cosh^2(a_2r)}+a_4+1\right]-a_2^2.
\label{n13}
\end{eqnarray}
Observe that in this case the matter distribution is not completely
isotropic in the stresses and the energy density is not homogeneous.
\subsection{Case $h^{\prime}=0$}
Then we have from (\ref{36n})
\begin{equation}
S=\frac{1}{b_1+r} \label{38n}
\end{equation}
where $b_1$ is an integration constant. Using (\ref{38n}) in (\ref{13}), we
obtain after
integration
\begin{equation}
A=Bb_2(b_1+r), \label{40nn}
\end{equation}
where $b_2$ is another integration constant.
However regularity conditions (\ref{regul}) imply from (\ref{40nn}) that
$A=0$, which is obviously unacceptable.
\vspace{1cm}
So far we have only assumed the spacetime to be conformally flat at the
interior, and regularity conditions to be satisfied. However as it can be
easily checked, neither of the
models above satisfy the Darmois conditions (\ref{21}-\ref{22}). As a
matter of fact, and as it will be shown in the next section, there is no
conformally flat interior solutions
satisfying Darmois (and regularity) conditions.
\section{Non existence of conformally flat solution satisfying Darmois
conditions}
As we have seen if the cylinder has a matter content that is conformally
flat and satisfies
regularity conditions on the axis then,
\begin{equation}
h=a_1\sinh(a_2r), \label{40s}
\end{equation}
if $h^{\prime}\neq 0$.
Now considering the junction conditions (\ref{21}) and (\ref{22}) we obtain
from (\ref{40s})
\begin{eqnarray}
a_1=\frac{c\rho_{\Sigma}^{1-4\sigma^2}}{b\sinh(a_2r_{\Sigma})}, \label{41} \\
a_2r_{\Sigma}\coth(a_2r_{\Sigma})=1-4\sigma^2. \label{42n}
\end{eqnarray}
Since $x\coth x>1$ for all $x>0$, while $1-4\sigma^{2}\leq1$, the condition
(\ref{42n}) can never be satisfied.
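A small sympy illustration of this inequality (sketch):
\begin{verbatim}
import sympy as sp
x = sp.Symbol('x', positive=True)
f = x*sp.coth(x)
print(sp.series(f, x, 0, 5))    # 1 + x**2/3 - x**4/45 + O(x**5)
print(sp.limit(f, x, 0), [sp.N(f.subs(x, v), 4) for v in (0.5, 1, 2)])
\end{verbatim}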
But if $h^{\prime}=0$, as we have seen before, regularity conditions are not satisfied.
Hence we can state that {\it no static cylindrical source matched smoothly to
the Levi-Civita spacetime admits a conformally flat solution}.
\section{Conclusions}
We have deployed the equations describing the static cylinder, as well as
the regularity and matching conditions. Then the consequences derived from
the assumption of conformal flatness
were obtained. It was shown that there exist no interior conformally flat
solution which satisfies regularity conditions and matches smoothly to
Levi-Civita spacetime on the
boundary surface. Of course if we relax Darmois conditions and allow for
the existence of shells at the boundary surface, the latter conclusion does
not hold.
It was also shown that the conformally flat, isotropic (in the stresses)
cylinder is necessarily incompressible ($\mu=$ constant). Inversely, since
the solution for the incompressible
isotropic cylinder is unique (there are four equations for four variables),
it is clear that such a solution is also conformally flat.
So, if we look for an incompressible cylinder matching smoothly to
Levi-Civita (hence not conformally flat), we have to relax the condition
of isotropy in the stresses. Thus, for
example one could assume $\mu=$ constant,
$P_z=P_{\phi}\neq P_r$, then we can integrate (\ref{8e}) to obtain
\begin{equation}
ABh^{\prime}=c_1,
\label{44n}
\end{equation}
where $c_1$ is an integration constant. By considering junction conditions
we get
\begin{equation}
c_1=ac(1-4\sigma^2).
\label{45n}
\end{equation}
From (\ref{23b}) and (\ref{45n}) it follows that as $\sigma \rightarrow
1/2$, $m \rightarrow \infty$. This result gives further evidence that the
spacetime at this limit for $\sigma$
has plane symmetry \cite{Bonnor,Philbin,Herrera,Herrera1}. Of course to
fully specify a solution another condition has to be given.
Finally it is worth noting the differences and the similarities between this
case and the spherically symmetric situation. For spherical symmetry
there is only one independent component of the Weyl tensor, while for
cylindrical symmetry there are two
independent components. For spherical symmetry the conditions of
incompressibility
and isotropic pressure lead also to a unique solution, the interior
Schwarzschild solution, which is conformally flat \cite{Raychau}, however
unlike our present
case, that solution can be matched
smoothly on the boundary surface to the exterior solution. If the condition
of isotropic pressure is relaxed in the spherically symmetric case,
conformally flat solutions matching
smoothly to Schwarzschild spacetime exist, but are not incompressible
\cite{H}. The same happens in the cylindrically symmetric case with
$P_r=P_{\phi} \neq P_z$, however in this case
the solution does not satisfy Darmois conditions.
\section{Introduction}
The AdS/CFT correspondence \cite{ADSCFT} is one of the most fascinating theoretical breakthroughs
in recent years. In its original form it postulated the duality between two
completely different theories -- the supersymmetric gauge theory ${\cal N}=4$ Super-Yang-Mills theory
in 4 dimensions and superstring theory on a $AdS_5 \times S^5$ background. Since then it has been
extended in numerous directions. Apart from very important practical applications as a tool
for learning about the nonperturbative dynamics of gauge theory, it is particularly
fascinating theoretically as it proposes the equivalence of a nongravitational theory
(the gauge theory ``on the boundary'') and quantum theory incorporating gravity.
This is a very explicit realization of the holographic principle~\cite{HOLOGRAPHY}.
However the very reason for which the AdS/CFT correspondence is so useful a tool for studying nonperturbative gauge
theory physics makes it difficult to understand its origin microscopically
from the gauge theory point of view. Indeed both sides of the duality become
simple in opposite limits. In particular
we do not know how to deal with string theory in the quantum gravity regime
corresponding to small coupling \emph{and} finite number of colors on the gauge theory side.
From the point of view of understanding holography the optimal setup would be
to have relatively simple and tractable quantum theories on both sides of the duality.
Some particularly intriguing generalizations of holography involved three dimensional free $O(N)$ vector model
which was proposed to be dual to four dimensional Vasiliev gravity \cite{ON, 4DHS}.
A lot of progress was made in the understanding of dual dynamics from the boundary
theory point of view (see e.g. \cite{JEVICKI, LEIGH}), however the gravitational
side is basically understood only at the classical and semi-classical level as
Vasiliev gravity \cite{VASILIEV} has not been quantized so far.
Reducing the number of dimensions, a class of two dimensional CFT's was proposed
to be dual to three dimensional Vasiliev gravity coupled to a scalar field \cite{GABGOP}.
In this case there is an explicit action for the Vasiliev theory which is a difference of
two Chern-Simons theories; however, the total action incorporating interactions with the scalar field is unknown and it is very
difficult to study the duality in the finite $k$, finite $N$ case\footnote{In this case $k$
and $N$ are the parameters of the 2D coset CFT's.}.
Most recently, the Sachdev-Ye-Kitaev (SYK) model \cite{SYK} (see also \cite{MALDACENASTANFORD} and subsequent developments) has been intensively studied as
it is a quantum mechanical system which exhibits properties characteristic of
a dual holographic classical gravity description in terms of black holes.
In another line of investigation, it was realized that entanglement is crucially connected
with holography.
Surprising parallels were uncovered between
the description of ground state wave functions using MERA (Multi-scale Entanglement Renormalization Ansatz) \cite{SWINGLE,TAKAYANAGI}
and the Ryu-Takayanagi holographic prescription for computing entanglement entropy \cite{RYUTAKAYANAGI}.
More recently, various models for holography were proposed incorporating tensor network constructions,
in particular the HaPPY proposal taking into account spatial error correcting features of the holographic dictionary \cite{HAPPY}.
Other recent advances include a path integral optimization framework \cite{OPTIM} and
the random tensor networks \cite{RANDOMTENSOR}.
One generic feature of the approach to understand holography in terms of tensor networks is that
these constructions are in a sense very kinematical. E.g. the HaPPY proposal provides a mapping of a boundary Hilbert space to a bulk Hilbert space
which is quite agnostic about the dynamics (Hamiltonian/action etc.) of the boundary theory.
If this intuition is true, it suggests that a holographic description should be in principle applicable to almost any system\footnote{By
a holographic description we mean throughout this paper a generic higher dimensional dual description which may be
very quantum and far from a description in terms of classical gravity. So we use the term in a much wider
sense than e.g. in \cite{POLCHINSKI}.}.
In this short note we would like to investigate whether one can formulate a holographic dual model for the arguably simplest
possible quantum system -- a free particle in 1 dimension. If successful, this could be a starting point of studying
more complicated setups with more degrees of freedom, interactions etc. in a context which is very much under control.
The plan of the paper is as follows. First we review some very basic requirements for a holographic description
of a given theory and for identifying a gravitational subsector of the holographic bulk theory.
Then we proceed to implement this program for the quantum mechanical free particle.
We close the paper with a summary and conclusions.
\section{The main features of a holographic description}
\label{s.features}
In this section we will summarize what we would expect from a holographic description of some theory.
Suppose that the field theory in question is defined in $d$ spacetime dimensions on some fixed
nondynamical geometry $\Sigma$.
\bigskip
{\bf I.} The dual holographic theory should be defined on a higher dimensional
manifold $M$, having $\Sigma$ as a boundary. At the very least we should be able to match partition
functions for the two theories
\begin{equation}
Z_{boundary} = Z_{bulk}
\end{equation}
{\bf II.} The above requirement is not really enough as we should expect to be able to link all correlation functions in the boundary theory
to the bulk theory through the GKP formula \cite{GKP, WITTEN}. Observables/operators in the boundary theory should be
associated to fields in the bulk theory. Moreover the corresponding sources in the generating function
of correlators in the boundary theory should be linked to the boundary values of the associated bulk fields\footnote{For simplicity we ignore potential $z^\Delta$
factors and assume that they have been incorporated in a redefinition of the bulk fields.}
namely
\begin{equation}
\label{e.genfunc}
\int D\phi\; e^{iS_{bndry}(\phi)+i\int_\Sigma j(x^\mu) O(x^\mu) d^dx} = Z_{bulk}\left(\Phi_O(z,x^\mu) \underset{z\to 0}{\longrightarrow} j(x^\mu)\right)
\end{equation}
Ultimately the boundary degrees of freedom would have been integrated out and the remaining vestiges of the boundary theory
would be just the sources i.e. boundary values of the bulk fields.
{\bf III.} Finally we would like to interpret a part of the bulk theory as a gravitational theory.
In all holographic constructions so far,
the bulk metric is the field associated to the energy momentum tensor of the boundary theory.
In other words its boundary values should be linked in some way\footnote{We are purposefully quite vague about the details here.
In standard AdS/CFT the dictionary is clearest in the Fefferman-Graham coordinates \cite{SKENDERIS}.
We do not want to impose \emph{a-priori} any specific prescription in the general case.} to the \emph{nondynamical} metric of
the boundary theory.
Of course, as in the case of higher spin gravity the whole picture may be more complex
with other massless higher spin fields making the geometric interpretation ambiguous,
but still in this way we may identify a natural gravitational subsector of the bulk theory.
\section{A holographic description of a quantum mechanical free particle}
The goal of this note is to try to satisfy the above requirements for one of the simplest systems possible,
the quantum mechanical free particle in one dimension.
\emph{A-priori} it is not at all clear if such a description exists for such a simple system.
If it does exist, it may well be that the outcome is too trivial and restricted, but we hope that even such failure
may be instructive and interesting as it may indicate a sharpening of the requirements for holography
with respect to the ones outlined in the preceding section. From another perspective it may be
a starting point for constructing holography for more nontrivial quantum mechanical systems.
This system can be understood as a QFT with no spatial
dimension with the action
\begin{equation}
S = \int dt\; \f{1}{2} \dot{q}^2
\end{equation}
Since this system as it stands does not have any coupling
or large $N$ parameter we expect the dual bulk theory to be necessarily quantum. This is in fact one of the key
motivations of this study. We will now build up the bulk theory in steps in order to satisfy the three
requirements described in section~\ref{s.features}.
\subsection*{The partition function}
Let us consider a two-dimensional abelian BF theory defined on the half plane
\begin{equation}
M=\{(t,z): z \geq 0\}
\end{equation}
The action is given by
\begin{equation}
S_{BF}= \int_M B dA = \int B(\partial_t A_z - \partial_z A_t) dt dz
\end{equation}
We would like to impose the following boundary conditions:
\begin{equation}
\label{e.Abc}
B=-A_t\; |_{z=0} \quad\quad\quad\quad A_t = 0\; |_{z \to \infty}
\end{equation}
In order for these boundary conditions to be consistent with the variational principle we have to add to the action a boundary term
\begin{equation}
S_{bulk}^I = S_{BF} +\f{1}{2} \int_{\{z=0\}} B^2 dt
\end{equation}
The variation of the action is now
\begin{equation}
\delta S_{bulk}^I = (EOM's) + \int_{\{z=0\}} B \delta A_t dt + \int_{\{z=0\}} B \delta B dt
\end{equation}
which vanishes due to the boundary condition $\delta A_t+\delta B = 0 |_{z=0}$.
The superscript on $S_{bulk}^I$ indicates that this will not be the full final bulk action
but will still be modified in the following sections.
Let us now evaluate the bulk action $S_{bulk}^I$.
The Lagrange multiplier field $B$ imposes the constraint that $A$ is a flat connection, hence we may set
\begin{equation}
A_z = -\partial_z \Phi \quad\quad\quad\quad A_t = -\partial_t \Phi
\end{equation}
The bulk part of the action $S_{bulk}^I$ on the constraint surfaces vanishes and we are left with just the boundary term
given through the $B$ field, which in turn due to our boundary conditions
can be expressed in terms of the temporal derivatives of the boundary values of the $\Phi(t,z)$ field
\begin{equation}
\dot{q}(t) = \lim_{z \to 0} \partial_t \Phi(t,z)
\end{equation}
We thus reproduce the quantum mechanical free particle action\footnote{Similar computations as in this subsection have been done independently with different
motivations in \cite{BFQM1,BFQM2} in the case of nonabelian BF theories.}.
\begin{equation}
\int dt\; \f{1}{2} \dot{q}^2
\end{equation}
The above simple derivation is a two-dimensional analog of the three-dimensional link between Chern-Simons theory and the 2d WZW model \cite{CS1},
in the variant where the boundary conditions are $A_+ = \bar{A}_-=0$ (see e.g. \cite{CS2}).
\subsection*{Source for $q(t)$}
Let us now generalize the construction by adding a generic time dependent source for the particle position $q(t)$.
We thus have to reproduce an additional term in the boundary action
\begin{equation}
\int dt\; \f{1}{2} \dot{q}^2 + \int dt\; j(t)q(t)
\end{equation}
In terms of the BF theory gauge field, the particle position $q(t)$ can be understood essentially as a Wilson line
extending from the boundary to the interior at $z=\infty$ as we have
\begin{equation}
\int_{z=0}^\infty A_z\, dz = -\int_{z=0}^\infty \partial_z \Phi(t,z)\, dz = \Phi(t,0) - \Phi(t,\infty)
\end{equation}
Now due to the boundary condition at infinity $A_t = 0\; |_{z \to \infty}$, $\Phi(t,\infty)$ is a constant
and hence without loss of generality can be set to zero. Therefore we can make an identification
\begin{equation}
q(t) = \int_L A
\end{equation}
where the line $L$ is attached to the boundary at time $t$ and goes to infinity in the bulk.
Now we would like to rewrite the integral
\begin{equation}
\label{e.source}
\int dt\; j(t)q(t)
\end{equation}
as a two dimensional integral in terms of natural bulk quantities. We will also need a bulk field
associated to the boundary source $j(t)$.
To this end, we will introduce another two-dimensional abelian BF theory which we will denote by
\begin{equation}
\label{e.calpha}
\int C\, d\alpha
\end{equation}
In order to write the coupling (\ref{e.source}) we will introduce yet another ingredient: a globally defined
1-form in the bulk which we will denote by $dt$ (for the moment this can be understood as a gradient of the $t$ coordinate).
\emph{A-priori} the existence of such a 1-form in the context
of nonrelativistic quantum mechanics is quite natural in view of Galilean symmetry.
We will, however, return to this point in the following section. For the moment we will treat the 1-form $dt$ as
fixed and given externally as a gradient of the global bulk $t$ coordinate.
We will now enlarge the bulk action to
\begin{equation}
S_{bulk}^{II} = \int_M B\, dA + C\,d\alpha+ \alpha \wedge A + D \, \alpha \wedge dt + \f{1}{2} \int_{\partial M} B^2 dt
\end{equation}
Integrating over the Lagrange multiplier $D$ restricts the general form of the $\alpha$ 1-form:
\begin{equation}
\alpha = j(t,z) dt
\end{equation}
Subsequently integrating over $C$ ensures that $j(t,z)$ is only a function of $t$:
\begin{equation}
\alpha = j(t) dt
\end{equation}
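Indeed, for $\alpha = j(t,z)\, dt$ the variation of $C$ imposes
\begin{equation}
d\alpha = \partial_z j(t,z)\, dz \wedge dt = 0
\end{equation}
so that $j$ cannot depend on $z$.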
Now we may evaluate the bulk interaction term between the gauge fields of the two BF theories:
\begin{equation}
\int_M \alpha \wedge A = \int_M j(t) dt \wedge (A_t dt+ A_z dz) = \int j(t) \int_0^\infty A_z dz dt = \int j(t) q(t) dt
\end{equation}
obtaining exactly the boundary source term for $q(t)$.
In principle we should now perform the path integral over $A$ leaving an effective bulk action depending on
the scalar fields $B$, $C$, $D$ and gauge field $\alpha$. We will not attempt to do this in this work but rather
we will return to the 1-form $dt$.
\subsection*{Covariantizing $dt$ and the ``gravity'' subsector}
Since the quantum mechanical path integral is essentially just a QFT on a 1-dimensional worldline, one can
introduce a fixed 1-dimensional metric $g_{tt}(t)$ and write the action as
\begin{equation}
\f{1}{2} \int dt\; \sqrt{g}\, g^{tt} (\partial_t q)^2 = \f{1}{2} \int dt\; \f{1}{e} \dot{q}^2
\end{equation}
where we introduced the standard einbein notation, and $e=e(t)$ is a \emph{fixed} given function of time.
We would now like to complete the program sketched in section \ref{s.features} and introduce a bulk
field which would go over to the einbein on the boundary. At the same time we will get rid of the rather
artificial looking external 1-form $dt$ which was necessary to write the boundary source term in terms of
bulk fields. Since $dt$ understood as the gradient of the global bulk time coordinate is necessarily
a closed 1-form, it is extremely suggestive to consider it as a gauge field of a third abelian BF theory
which we will denote by
\begin{equation}
\label{e.eeta}
\int E\, d\eta
\end{equation}
As the boundary condition at the physical boundary $z=0$ we will fix the temporal component of $\eta$
\begin{equation}
\eta = \eta_t dt + \eta_z dz
\end{equation}
to a fixed value which we will identify shortly with the einbein $e(t)$.
More precisely we fix the pullback of $\eta$ to the boundary $\partial M$ to be equal to $e(t)dt$.
Thus in the case of (\ref{e.eeta}) (as well as for (\ref{e.calpha})) we do not need to add any boundary terms to the action
as was the case for the original $\int B\, dA$ theory.
We will also modify the boundary conditions (\ref{e.Abc}) at $z=0$ to
\begin{equation}
\label{e.bcnew}
A_t + \eta_t B = 0 |_{z=0}
\end{equation}
Accordingly we need to modify the additional boundary term
\begin{equation}
\f{1}{2} \int_{\{z=0\}} B^2 dt \longrightarrow \f{1}{2} \int_{\partial M} B^2\, \eta
\end{equation}
The cancellation of the boundary terms in the variational principle goes through because, due to our boundary conditions, $\delta \eta_t =0|_{z=0}$.
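Note that the boundary condition (\ref{e.bcnew}), together with $A_t = -\partial_t \Phi$, fixes the boundary value of $B$ to be
\begin{equation}
B|_{z=0} = -\f{A_t}{\eta_t}\bigg|_{z=0} = \f{\dot{q}(t)}{\eta_t}
\end{equation}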
The resulting boundary action can be seen to be
\begin{equation}
\f{1}{2} \int_{\partial M} B^2\, \eta = \f{1}{2} \int \f{1}{\eta_t} A_t^2\, dt = \f{1}{2} \int \f{1}{\eta_t} \dot{q}^2\, dt
\end{equation}
where we used (\ref{e.bcnew}). It is now clear that we have to identify the boundary value of $\eta_t$ with the einbein $e(t)$
as announced earlier. From the considerations of section \ref{s.features} we are led to identify the $E$, $\eta$ subsector
as a part of the ``gravitational'' sector of the bulk theory. Note that although this is a two dimensional BF theory
it is distinct from Jackiw-Teitelboim 2D gravity which is a nonabelian BF theory \cite{JACKIW}.
Let us now put together all ingredients introduced so far. Our final bulk action takes the form
\begin{equation}
\label{e.sbulkfin}
S_{bulk}^{III} = \int_M B\, dA + C\,d\alpha+ E\, d\eta + \alpha \wedge A + D \, \alpha \wedge \eta + \f{1}{2} \int_{\partial M} B^2 \eta
\end{equation}
with the boundary conditions at $z=0$
\begin{equation}
A_t + \eta_t B = 0 |_{z=0} \quad\quad \alpha_t =j(t) |_{z=0} \quad\quad \eta_t = e(t) |_{z=0}
\end{equation}
Let us make some comments on the above expression. Increasing the number of degrees of freedom
will increase the number of components of all fields except $\eta$ and $E$.
Adding interactions (on the quantum mechanical side) is rather nontrivial. One can either integrate over the source
or introduce separate sources for the monomials $q(t)^n$. Doing that seems to require a significant extension
to the formalism. Ultimately we would also like to integrate out $A$ and possibly $B$.
We leave these issues for future investigation.
\section{Conclusions}
The motivation for the construction presented in this note is the intuition arising from tensor network
interpretations of holography that a holographic description should exist for almost any system.
Hence it should be possible to find a holographic formulation of the simplest possible system
that one could think of -- a one dimensional quantum mechanical free particle.
As we would like to have an explicit dual theory described by some concrete bulk action,
we did not take the approach through tensor network constructions but rather we worked directly
in the continuum with two dimensional topological BF theories having the Chern-Simons/WZW relation as a guiding principle.
The expected features of a holographic dual impose, however, further requirements on the bulk theory going beyond the equality of
partition functions. In particular we should have additional
matter fields in the bulk theory which are associated to the operators of the boundary theory and which reduce to
the corresponding sources at the boundary. In this work we carried out the construction for the source for the particle position~$q(t)$.
We also identified a subsector of the bulk theory which reduces to the einbein on the boundary
and thus behaves like a ``gravitational'' sector of the bulk theory.
A characteristic feature of the simple quantum mechanical model considered here is the absence of
a large $N$ parameter. More precisely, one can consider this model to have $N=1$, with a straightforward
generalization to $N$ components. In the conventional examples of the AdS/CFT correspondence,
finite $N$ corresponds to a quantum bulk model (in these cases quantum gravity+other matter fields),
which was also a motivation for the present construction, where we treat the bulk theory on the quantum level
as we use the full path integral formalism. Indeed the role of a large $N$ limit in a generalized version
of the model (possibly with a singlet constraint) within a similar construction is a very interesting
problem which we plan to address in the future.
One qualitative feature of holography which is not explicitly captured by the present construction is the
interpretation of the holographic direction as an RG flow. In the present paper, on the other hand, the starting point of the construction
was a minimal implementation of the bulk formula for the generating function of correlators (\ref{e.genfunc}),
which does not lead to a direct RG interpretation (which in any case is not evident as the quantum mechanical
system lives on a worldline and thus has no spatial dimension). We suspect that to address this issue
one would have to integrate out the $A$ and $B$ fields and analyze the resulting theory of just the bulk fields
associated with sources of $q(t)$ and the einbein $e(t)$. Possibly for a local geometric interpretation
one would have to combine this procedure with the large $N$ limit discussed above.
This goes beyond the scope of the present paper but is definitely another important problem for future research.
There are also many other possible directions for further investigation, foremost of which is going to
nontrivial quantum mechanical systems. It is not completely clear whether to consider in addition sources
for monomials of $q(t)$ and to what extent the construction of the source sector performed
here is unique or optimal. On a more mundane level it would be interesting to analyze
the bulk theory in more detail and check to what extent our experience with holography
in higher number of dimensions carries through here.
We hope that the setup presented in this paper would be a good framework to address such questions.
\bigskip
\noindent{\bf Acknowledgements.}\\
This work was supported by NCN grant 2012/06/A/ST2/00396.
\section{Introduction}
Semantic segmentation \cite{7913730} involves classifying image pixels into a given category. While deep learning has vastly improved semantic segmentation performance, it requires large amounts of data with pixel-level annotation.
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=0.47\textwidth,keepaspectratio]{figures/ash_intro_v9.eps}}
\caption{Brief summary of our proposed Adversarial Semantic Hallucination approach (ASH). Previous domain adaptation works require target domain data during training. Since target domain data are unavailable in our problem setting, we generate additional data via style transfer from style images to the source domain images in a class-wise manner.}
\label{figure:ASHintro}
\end{figure} Pixel-wise image annotation is both time-consuming and error-prone, making it impractical for real life applications. For training vision systems in autonomous driving vehicles, synthetic data are readily available and easily labeled. However, synthetic data (source domain data) differs visually from real world driving data (target domain data), causing models that are trained solely on synthetic data to perform poorly on real world data.
Domain adaptation methods \cite{conf/cvpr/BousmalisSDEK17,DBLP:series/acvpr/GaninUAGLLML17,pmlr-v80-hoffman18a, Luo_2019_ICCV, Saito_2018_CVPR, Luo_2019_CVPR} seek to minimize the domain gap between the source domain and target domain. This can be achieved if enough unlabeled target domain data are available when conducting domain alignment. Unfortunately, in some scenarios such as Domain Generalization (DG)\cite{Yue_2019_ICCV}, unlabeled target data are not accessible when training the network. With limited access to target data, it becomes quite difficult, if not impossible, to successfully apply previous unsupervised domain adaptation methods for training a general model. To solve this problem, hallucination-based approaches\cite{Kim_2020_CVPR,Yue_2019_ICCV,DBLP:conf/nips/LuoLGY020} have been proposed to address the lack of target domain data. These methods generate new images by varying texture information in the source domain images. This improves the sensitivity of the trained deep convolutional neural networks to shape information, which, compared to texture information, is more likely to be invariant across domains. For example, Adversarial Style Mining \cite{DBLP:conf/nips/LuoLGY020} uses a single target domain image to hallucinate additional training data. The global statistics of a single target domain image are used to initialize style features, which are then used to adaptively stylize the source domain images. The 'difficulty' of the stylized images is progressively increased via adversarial training.
However, most hallucination approaches \cite{Kim_2020_CVPR,Yue_2019_ICCV,DBLP:conf/nips/LuoLGY020} introduce global variations to the data and fail to consider the varying difficulty across classes. This can be problematic because datasets might be imbalanced due to collection and/or annotation difficulties, making some classes harder than others. For example, in the driving datasets\cite{Richter_2016_ECCV,RosCVPR16,Cordts2016Cityscapes}, a larger proportion of pixels correspond to 'road', 'building' or 'sky' classes compared to minority classes such as 'pole' or 'light' for each image. Applying the same degree of stylization to all classes without considering their different characteristics may reduce performance for minority classes.
While previous works have attempted to solve the class imbalance problem with focal loss \cite{Lin_2017_ICCV} or class balanced loss \cite{8953804}, these approaches also have their limitations. Class balancing methods like focal loss assume that source and target domain distributions are similar, which may not always hold true \cite{Jamal_2020_CVPR}. Additionally, hyperparameter selection for these methods is nontrivial and the hyperparameters may not be transferable between datasets.
The characteristics of different classes can be represented by their semantic information, such as high-level semantic features or predicted logits. Recent work demonstrated that semantic information can improve performance for superresolution \cite{wang2018sftgan} and synthetic image generation \cite{park2019SPADE}. Both methods use semantic information to adaptively generate affine parameters for feature transformation. The transformed features are then decoded to generate high resolution images. In both cases \cite{wang2018sftgan, park2019SPADE}, the generated images demonstrated improved realism.
We propose a new method, Adversarial Semantic Hallucination (ASH), for unsupervised domain adaptive semantic segmentation when target data are unavailable during training. Inspired by ASM \cite{DBLP:conf/nips/LuoLGY020}, we integrate adversarial hallucination in our work. However, unlike ASM, our hallucination is fine-grained in a class-wise manner. Additionally, while ASM needs access to target domain data when training, our method does not need any target data, making it easy to adopt in real-life applications. Specifically, we use the semantic information from the segmentation probability map to generate \textit{class-wise} scaling and shifting transformation parameters for the style features. With these transformation parameters and the class probability map, ASH can conduct stylization in a class-wise manner. ASH collaborates with a discriminator in an adversarial manner, adaptively generating new challenging data for training the segmentation network for our target task. We also found empirically that combining noise perturbation together with ASH improves performance.
Our main contributions are summarized as follows: \\
1) We present an adversarial semantic hallucination (ASH) approach to solve domain generalization problems for semantic segmentation. ASH leverages semantic information from the segmentation output, allowing for class-wise stylization of the source domain images. Compared to previous works, our method conducts this process in an adaptive and efficient manner;
2) We evaluate ASH on GTA5 \cite{Richter_2016_ECCV} $\rightarrow$ Cityscapes \cite{Cordts2016Cityscapes} and Synthia \cite{RosCVPR16} $\rightarrow$ Cityscapes \cite{Cordts2016Cityscapes}, making comparison with previous works under the same setting. The experimental results demonstrate the efficacy of our proposed method.
\section{Related Work}
In this section, we briefly survey previous works that are most related to ours, including unsupervised domain adaptation and generative adversarial networks.
\subsection{Unsupervised Domain Adaptation}
Unsupervised domain adaptation (UDA) is a subset of transfer learning. Given labeled source data and unlabeled target data, UDA aims to train a network to achieve satisfactory performance on target domain. Previous works on unsupervised domain adaptation \cite{Saito_2018_CVPR,DBLP:series/acvpr/GaninUAGLLML17} attempt to increase alignment between the source and target domains, minimizing the discrepancy between the two domains. By decreasing the discrepancy between source and target domain, the knowledge learned from source domain can be applied to the target domain. Unsupervised domain alignment methods can be generally divided into three categories, namely pixel-level alignment, feature-level alignment, and output-level alignment. Pixel-level domain adaptation\cite{conf/cvpr/BousmalisSDEK17} transforms the source domain data to visually mimic the target domain images. The transformed source domain images are included during training. Feature-level domain adaptation, such as \cite{DBLP:series/acvpr/GaninUAGLLML17,pmlr-v80-hoffman18a, Luo_2019_ICCV}, focuses on aligning the feature representations across domains, making the feature representations extracted from the source and target domain indistinguishable. This has been studied for image classification \cite{DBLP:series/acvpr/GaninUAGLLML17}, and semantic segmentation \cite{Luo_2019_ICCV}. Output-level domain adaptation \cite{Saito_2018_CVPR, Luo_2019_CVPR} maximises the similarity between domains at the output level. Tsai \etal\cite{Tsai_adaptseg_2018} and Luo \etal \cite{9372870} demonstrate that output-level alignment compared to feature-level alignment gives better performance for semantic segmentation.
The work most related to our approach is ASM \cite{DBLP:conf/nips/LuoLGY020}, which aims to solve unsupervised domain adaptive segmentation when limited unlabeled target data are available. Both ASM \cite{DBLP:conf/nips/LuoLGY020} and our approach propose to utilize a style transfer strategy to generate new data. However, there are significant differences between ASM \cite{DBLP:conf/nips/LuoLGY020} and our approach: (1) ASM requires target data (one single target domain image) for domain alignment, which limits practicality when target data are unavailable. Conversely, our approach does not need any target data for training, making it more applicable for real life scenarios. (2) ASM uses a global stylization approach. The stylized image is globally updated with the target task prediction loss on the stylized data. Consequently, pixels from different classes are uniformly stylized. However, we argue that the class-wise differences should be considered~\cite{Zou2018_eccv2018,Wang2020_cvpr2020}. Since some classes are more easily identifiable than other classes, uniform stylizing across all classes might not be optimal because this ignores the inherent class-wise differences. Therefore, a fine-grained strategy that accounts for these class-wise differences during stylization would be advantageous.
\subsection{Generative Adversarial Networks (GANs)}
GANs have garnered much attention since their introduction \cite{NIPS2014_5ca3e9b1}, and have been studied for a wide range of applications, most notably for generating synthetic images \cite{Karras_2019_CVPR}. A vanilla GAN typically comprises a generator-discriminator pair optimized in a min-max fashion. The objective for the generator is to synthesize realistic images; for the discriminator, it is to distinguish between the synthesized images and the real images.
While GANs have been used for unsupervised domain adaptation \cite{Luo_2019_CVPR,conf/cvpr/BousmalisSDEK17}, the lack of target domain data for the domain generalization problem setting means that some modifications are required.
Since we need to generate additional training data to mitigate the domain gap between the source domain data and the unseen target domain data, we apply the principle behind conditional GANs \cite{8579015}. Conditional GANs give the user additional control over the generated output with prior information to the generator. We were also further inspired by recent work \cite{park2019SPADE,wang2018sftgan} which demonstrated that prior information improves synthesized image quality. Wang \etal \cite{wang2018sftgan} leverages semantic information to improve output image quality during super resolution. The probability map serves as a prior and is used to spatially transform the image features, improving recovery of fine details and image texture. Similarly, Park \etal \cite{park2019SPADE} use semantic information to condition the synthesized GAN output, by transforming the intermediate features in the generator. This enables their approach to generate realistic images, while also allowing the user to determine the content of the generated images.
We extend existing domain adaptation work by incorporating semantic information as a prior for domain generalization.
Our ASH module is lightweight and only consists of a few convolutional layers 1) to map the semantic information to latent space and 2) to compute the transformation coefficients for the style features. This minimizes the computational costs required for our proposed method.
\section{Method}
In this section, we firstly discuss our problem setting and preliminary background, then we provide the technical details of Adversarial Semantic Hallucination (ASH).
\begin{figure*}[t]
\begin{center}
\fbox{\includegraphics[width=0.7\linewidth,keepaspectratio]{figures/ash_workflow_final.eps}}
\end{center}
\caption{Illustrated workflow for generating stylized source domain images with Adversarial Semantic Hallucination (ASH). A pretrained VGG encoder extracts features from the source and style images. ASH conditions the style features with semantic information derived from the segmented source domain image. The semantic information from the segmentation output is used to generate the element-wise scaling and shifting parameters $\gamma_i$ and $\beta_i$ for $i$-th channel respectively. These transformation parameters are used to adjust the style features based on the predicted class of the pixels in the segmentation output. The content features are then re-normalized with the transformed style features via Adaptive Instance Normalization (AdaIN). The merged features are then decoded to output stylized source images. }
\label{figure:workflow_ash}
\end{figure*}
\subsection{Problem Setting}
The problem setting for domain generalization is defined as follows: We have source domain data $X_{s}$ with labels $Y_{s}$ during training, but we cannot access target domain data $X_{target}$. The source domain and target domain have different data distributions. Our goal is to develop a model $G$ that correctly predicts the labels for target domain data after training with source domain data $X_{s}$.
\subsection{Preliminary background}
Our method follows a two pronged approach. Firstly, our approach incorporates the style transfer method \cite{huang2017adain}. We augment the source domain data by stylizing it with images from a paintings dataset $X_{style}$, \textit{i.e.}, Painter by Numbers. The style features from the uncorrelated dataset are conditioned with semantic information obtained from the segmentation output of source domain data. Secondly, we train the different components: an ASH module, a segmentation network and a discriminator.
Similar to \cite{huang2017adain}, we use the pretrained VGG19 network to extract features from the source domain images and the style images. We then use adaptive instance normalization\cite{huang2017adain}:
\begin{equation}
\text{AdaIN}(f_{c}, f_{s}) = \sigma(f_{s}) \left( \frac{f_{c} - \mu(f_{c})}{\sigma(f_{c})} \right) + \mu(f_{s})
\label{eqn:adain}
\end{equation}
which re-normalizes the channelwise mean and variance of the content features $f_{c}$ to match that of the style features $f_{s}$.
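For concreteness, the re-normalization in (\ref{eqn:adain}) can be sketched in a few lines of PyTorch-style code operating on feature tensors of shape $(N, C, H, W)$; variable names here are our own, and this is an illustrative sketch rather than the authors' implementation:
\begin{verbatim}
import torch

def adain(f_c, f_s, eps=1e-5):
    # channelwise statistics over the spatial dimensions
    mu_c = f_c.mean(dim=(2, 3), keepdim=True)
    sd_c = f_c.std(dim=(2, 3), keepdim=True) + eps
    mu_s = f_s.mean(dim=(2, 3), keepdim=True)
    sd_s = f_s.std(dim=(2, 3), keepdim=True)
    # re-normalize content statistics to match the style statistics
    return sd_s * (f_c - mu_c) / sd_c + mu_s
\end{verbatim}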
ASM \cite{DBLP:conf/nips/LuoLGY020} uses the global statistics of a single target domain image to generate the initial style features. These style features are then used to stylize the source images. ASM iteratively updates the style features by applying small perturbations in the same direction as the gradient of the task loss. For ASH, we take a slightly different approach to generate additional styles. Our objective is to ensure that the trained segmentation model is texture invariant. At each iteration, we randomly sample a style image from a dataset to stylize the source image. By stylizing the source image with uncorrelated style information, the model learns to disregard texture information.
We increase diversity of the style features by introducing orthogonal noise \cite{wang2020diversified}. Orthogonal noise allows us to preserve the inner product of the style features, or its 'inherent style information', while simultaneously increasing the diversity of the style features. We regularize the segmentation output with a label smoothing function before conditioning the style features with the Adversarial Semantic Hallucination (ASH) module.
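One simple way to realize such a perturbation is sketched below; the QR-based construction and the choice to act on the flattened spatial dimension are our own illustrative assumptions rather than the exact scheme of \cite{wang2020diversified}. Right-multiplication by an orthogonal matrix leaves the channelwise Gram matrix of the style features, and hence their inner products, unchanged:
\begin{verbatim}
def orthogonal_noise(f_s):
    # f_s: style features of shape (N, C, H, W)
    n, c, h, w = f_s.shape
    q, _ = torch.linalg.qr(torch.randn(h * w, h * w,
                                       device=f_s.device))
    # (f q)(f q)^T = f f^T: inner products between feature
    # channels are preserved while the features are randomized
    return (f_s.view(n, c, h * w) @ q).view(n, c, h, w)
\end{verbatim}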
\subsection{Adversarial Semantic Hallucination}
As shown in Figure \ref{figure:workflow_ash}, our framework comprises of a segmentation network, a discriminator and an Adversarial Semantic Hallucination (ASH) module. At first, we introduce our module, Adversarial Semantic Hallucination (ASH), which conditions the style features with semantic information from the source data segmentation output. The semantic information is used to compute the scaling $\gamma_{i}$ and shifting $\beta_{i}$ transformation parameters (Figure \ref{figure:workflow_ash}), where $i$ refers to the class index. These transformation parameters condition the style features in latent space. Depending on the predicted class for each pixel, ASH is trained to maximize adversarial loss by assigning different scaling and shifting transformation parameters. At each step of training, the pixels corresponding to different semantic classes will be stylized by different degrees, making data augmentation and stylization adaptive to segmentation performance. We use adaptive instance normalization \cite{huang2017adain} to merge the content features with the conditioned style features.
\begin{algorithm*}
\caption{Adversarial approach for domain generalization}
\begin{algorithmic}[1]
\INPUT Source domain data $X_{s}$, Source domain label $Y_{s}$, Style image $X_{style}$, Segmentation network \textit {G}, Discriminator \textit {D}, Encoder \textit {Enc}, Decoder \textit {Dec}, Adversarial Semantic Hallucination ASH, Adaptive Instance Normalization (AdaIN), Number of iterations $Iter_{num}$
\OUTPUT Optimized segmentation network for domain generalization
\FOR{$0$, ..., $Iter_{num}$}
\STATE Generate source features $f_{src}$ with the pretrained encoder $Enc(X_{s})$.
\STATE Generate style features $f_{style}$ with the pretrained encoder $Enc(X_{style})$.
\STATE Obtain classwise scaling $\gamma_{i}$ and shifting $\beta_{i}$ coefficients from the segmentation output.
\STATE Perturb style features $f_{style}$ with classwise scaling $\gamma_{i}$ and shifting $\beta_{i}$ coefficients.
\STATE Merge source features $f_{src}$ and perturbed style features $f_{style}^{'}$ with AdaIN.
\STATE Generate stylized source image $X_{stylized}$ with the merged source features and perturbed style features.
\STATE Train ASH by minimizing the loss function $L_{ASH}(G,D,f_{src},f_{style}^{'})$.
\STATE Train $G$ with source domain data by minimizing segmentation loss $ \mathcal{L}_{seg}(G,X_{s},Y_{s})$.
\STATE Train $G$ with stylized source domain data by minimizing adversarial loss. $\mathcal{L}_{adv}(G,D,X_{stylized},\text{ASH})$.
\STATE Train $D$ by minimizing adversarial loss $\mathcal{L}_{adv}(G, D,X_{stylized},X_{s},\text{ASH})$.
\ENDFOR
\end{algorithmic}
\label{alg:workflow_alg}
\end{algorithm*}
The segmentation network $G$ \cite{Luo_2019_CVPR} is trained to minimize segmentation loss $L_{seg}$ and adversarial loss $L_{adv}$.The discriminator network $D$ is trained to maximize adversarial loss $L_{adv}$. Both loss functions are based on the formulation from \cite{Luo_2019_CVPR}. Segmentation loss $L_{seg}$ is derived from computing the cross entropy loss for the segmentation output.
We generate the classwise scaling $\gamma_{i}$ and shifting $\beta_{i}$ coefficients from the segmentation output $G(X_{s})$, as shown in the following equation:
\begin{equation}
\gamma_{i},\beta_{i}=\text{ASH}(G(X_{s}))
\label{eqn:classwise_scaleshift}
\end{equation}
where $i$ refers to the channel index of the features.
We then perturb the style features $f_{style}$ to generate perturbed style features $f_{style}^{'}$:
\begin{equation}
f_{style}^{'}=\gamma_{i}(f_{style}+1) + \beta_{i}
\label{eqn:perturbed style features}
\end{equation}
We generate the stylized source domain images $X_{stylized}$ with the following equation:
\begin{equation}
X_{stylized} =Dec(\text{AdaIN}(f_{source},\text{ASH}(f_{style},G(X_{s}))))
\label{eqn:stylizedsrc}
\end{equation}
where $Dec$ is the pretrained decoder, AdaIN is the adaptive instance normalization equation given in equation \ref{eqn:adain}, ASH is the adversarial semantic hallucination module, $f_{style}$ is the extracted style features, $f_{source}$ is the extracted source features, $G$ is the segmentation network and $X_{s}$ is the source domain data.
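Putting equations (\ref{eqn:classwise_scaleshift})--(\ref{eqn:stylizedsrc}) together, one hallucination step can be sketched as follows. The helper names are our own; we additionally assume that the probability map is produced with a softmax and that the ASH module internally resizes its output to the spatial size of the style features:
\begin{verbatim}
import torch.nn.functional as F

def hallucinate(f_src, f_style, seg_logits, ash, decoder):
    prob = F.softmax(seg_logits, dim=1)  # semantic info from G(X_s)
    gamma, beta = ash(prob)              # classwise coefficients, Eq. (2)
    f_style_p = gamma * (f_style + 1.0) + beta  # perturbation, Eq. (3)
    merged = adain(f_src, f_style_p)     # AdaIN re-normalization, Eq. (1)
    return decoder(merged)               # stylized source image, Eq. (4)
\end{verbatim}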
Adversarial loss is given by the following equation:
\begin{equation}
\begin{aligned}
L_{adv}(G,D,\text{ASH}) =& -E[\log(D(G(X_{src})))] \\
& -E[\log(1-D(G(X_{stylized}))]
\label{eqn:advloss}
\end{aligned}
\end{equation}
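In code, this objective can be sketched as follows, assuming the discriminator returns probabilities in $(0,1)$ on segmentation outputs; the small constant guards the logarithm:
\begin{verbatim}
def adversarial_loss(D, seg_src, seg_stylized, eps=1e-8):
    real = D(seg_src)        # D(G(X_src))
    fake = D(seg_stylized)   # D(G(X_stylized))
    return (-torch.log(real + eps)
            - torch.log(1.0 - fake + eps)).mean()
\end{verbatim}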
We optimize the ASH module by maximizing adversarial loss because we want the ASH module to create more 'difficult' data for the segmentation network while retaining semantic information in the source features and minimizing style loss from the perturbed style features. We compute the loss for ASH with the following equation:
\begin{equation*}
\begin{aligned}
&L_{ASH}(G,D,f_{src},f_{style}^{'}) = \\ &-L_{adv}(G,D,ASH)\\ &+ L_{c}(f_{src},\text{AdaIN}(f_{src},f_{style}^{'}))\\ &+ L_{s}(f_{style}^{'},\text{AdaIN}(f_{src},f_{style}^{'}))\\ &- L_{s}(f_{src},\text{AdaIN}(f_{src},f_{style}^{'}))\\
\label{eqn:ASHloss}
\end{aligned}
\end{equation*}
We choose the same formula for content and style loss as defined in \cite{huang2017adain}.
\begin{comment}
Content loss is given by the following equation:
\begin{equation}
\begin{aligned}
L_{c}(x_{1},x_{2})=\lVert(x_{1}-x_{2}) \rVert_{2}
\label{eqn:contentloss}
\end{aligned}
\end{equation}
where $x_{1,2}$ refer to the features.
Style loss is given by the following equation:
\begin{equation}
\begin{aligned}
L_{s}(x,y)= \lVert\mu(x_{1})-\mu(x_{2})\rVert_{2} +\lVert\sigma(x_{1})-\sigma(x_{2}) \rVert_{2}
\label{eqn:styleloss}
\end{aligned}
\end{equation}
where $x_{1,2}$ refer to the features, $\mu(x)$ and $\sigma(x)$ refer to the channelwise mean and standard deviation of the features respectively.
\end{comment}
We minimize the amount of style information retained from the content images while simultaneously preserving semantic information in the stylized images.
The training workflow is summarized in Algorithm \ref{alg:workflow_alg}.
The weights for the pretrained encoder and decoder that are used during stylization are not updated during training. We only need the segmentation network for evaluation; neither the ASH module nor the discriminator is required after training.
\section{Experiments}
In this section, we introduce the experimental details. We first illustrate the datasets utilized in this work in Section \ref{subsection:datasets}. Secondly, we provide implementation details in Section \ref{subsection:implementationdetails}. We provide details for all experimental results in Sections \ref{subsection:peerstudies}--\ref{subsection:ablationstudy}. Section \ref{subsection:peerstudies} presents the performance of our approach on the benchmark datasets and compares it with state-of-the-art unsupervised domain adaptation and domain generalization methods. We discuss the ablation study in Section \ref{subsection:ablationstudy}.
\subsection{Datasets}
\label{subsection:datasets}
We use the synthetic datasets GTA5 \cite{Richter_2016_ECCV}, Synthia \cite{RosCVPR16} as source domains and the real world dataset Cityscapes \cite{Cordts2016Cityscapes} as the target domain. The GTA5 dataset \cite{Richter_2016_ECCV} has 24,966 densely annotated images with resolution $1914 \times 1052$ pixels, while the Synthia dataset\cite{RosCVPR16} has 9,400 densely annotated images with $1280 \times 760$ pixels. Models are trained on the labeled source domain images and evaluated on the Cityscapes validation set. Similar to \cite{huang2017adain}, we use a paintings dataset from WikiArt to provide 45,203 style images.
\begin{table*}[t]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{1pt}
\begin{tabular}{p{2cm}| c|abababababababababab|c}
\toprule
\multicolumn{23}{c}{\textbf{GTA5 $\rightarrow$ Cityscapes}} \\
\toprule & \rotatebox{90}{Year} & \rotatebox{90}{Arch.}
& \rotatebox{90}{road} & \rotatebox{90}{side.} & \rotatebox{90}{buil.} & \rotatebox{90}{wall} & \rotatebox{90}{fence} & \rotatebox{90}{pole} & \rotatebox{90}{light} & \rotatebox{90}{sign} & \rotatebox{90}{vege.} & \rotatebox{90}{terr.} & \rotatebox{90}{sky} & \rotatebox{90}{pers.} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{truck}& \rotatebox{90}{bus} & \rotatebox{90}{train} & \rotatebox{90}{motor} & \rotatebox{90}{bike} & \rotatebox{90}{\textbf{mIoU}} \\
\toprule
\midrule
Source only & - &R & 75.8 & 16.8 & 77.2 & 12.5 & 21.0 & 25.5 & 30.1 & 20.1 & 81.3 & 24.6 & 70.3 & 53.8 & 26.4 & 49.9 & 17.2 & 25.9 & 6.5 & 25.3 & 36.0 & 36.6 \\
Fully supervised & -&R &97.9 &81.3 &90.3 &48.8 &47.4 &49.6 &57.9 &67.3 &91.9 &69.4 &94.2 &79.8 &59.8 &93.7 &56.5 &67.5 &57.5 &57.7 &68.8 &70.4 \\
\midrule
\multicolumn{23}{c}{\scriptsize{Domain Generalization}}\\
\midrule
Advent\cite{Vu_2019_CVPR}& 2019 &R &83.0 &1.8 &72.0 &8.2 &3.6 &16.2 &22.9 &9.8 &79.3 &17.1 &75.7 &35.1 &15.8 &70.9 &30.9 &35.3 &0.0 &16.4 &24.9 &32.6\\
MaxSquare \cite{Chen_2019_ICCV}& 2019 &R &76.8 &14.2 &77.0 &18.8 &14.1 &14.5 &30.3 &18.0 &79.3 &11.7 &70.5 &53.0 &24.2 &68.7 &25.3 &14.0 &1.3 &20.6 &25.5 &34.6\\
CLAN \cite{Luo_2019_CVPR}& 2019 &R & 87.2 &20.1 &77.9 &25.6 &19.7 &23.0 &30.4 &22.5 &76.8 &25.2 &76.2 &55.1 &28.1 &82.7 &30.7 &36.9 &0.8 &26.0 &17.1 &40.1 \\
ASM\cite{DBLP:conf/nips/LuoLGY020}& 2020 &R & 56.2 &0.0 &7.0 &0.6 &1.0 &0.3 &0.7 &0.6 &13.8 &0.1 &0.01 &0.08 &0.04 &1.2 &0.5 &0.7 &0.2 &0.0 &0.0 & 4.4 \\
\midrule
Domain Rand.\cite{Yue_2019_ICCV}& 2019 &R & - &- &- &- &- &- &- &- &- &- &- &- &- &- &- &- &- &- &- &\textbf{42.5} \\
ASH (Ours)& 2021 &R & 88.3 & 19.8 & 78.8 & 23.6 & 19.5 &24.4 &30.3 &24.7 &79.1 &27.0 &74.4 &56.4 &27.9 & 83.4 &36.4 &38.4 &0.8 &22.5 &29.8 & 41.3 \\
ASH (Uni.Sem.Info.)& 2021 &R &87.4 &17.0 &77.7 &20.6 &17.8 &22.8 & 30.1 & 24.5 & 78.7 & 24.6 & 72.7 & 55.6 & 26.5 & 81.5 & 32.2 & 37.7 & 1.1 & 21.7 & 20.5& 39.5\\
\bottomrule
\end{tabular}
\end{center}
\caption{Per-class IoU and mean IoU for the segmentation network (Deeplab-v2 with Resnet-101 backbone) trained on GTA5, tested on Cityscapes. 'ASH Uni.Sem.Info' refers to ASH with uniform class-wise probability map (identical values across all classes and pixels).}
\label{tab:gta-cityscapes}
\end{table*}
\begin{table*}[t]
\centering
\scriptsize
\setlength{\tabcolsep}{1.4pt}
\begin{tabular}{p{2cm}|c|ababababababab|cc}
\toprule
\multicolumn{17}{c}{\textbf{Synthia $\rightarrow$ Cityscapes}} \\
\midrule &\rotatebox{90}{Year} & \rotatebox{90}{Arch.}
& \rotatebox{90}{road} & \rotatebox{90}{side.} & \rotatebox{90}{buil.} & \rotatebox{90}{light} & \rotatebox{90}{sign} & \rotatebox{90}{vege.} & \rotatebox{90}{sky} & \rotatebox{90}{pers.} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{bus} & \rotatebox{90}{motor} & \rotatebox{90}{bike} & \rotatebox{90}{\textbf{mIoU}}& \rotatebox{90}{\textbf{mIoU16}} \\
\midrule
\midrule
Source only &-& R & 47.1 & 19.5 & 68.9 & 9.1 & 9.3 & 75.1 & 79.1 & 52.5 & 20.3 & 43.0 & 20.7 & 9.4 & 29.3 & 37.17& 32.38 \\
Fully supervised &-& R &95.1 &72.9 &87.3 &46.7 &57.2 &87.1 &92.1 &74.2 &35.0 &92.1 &49.3 &53.2 &68.8 &70.1& - \\
\midrule
\multicolumn{17}{c}{\scriptsize{Domain Generalization}}\\
\midrule
Advent \cite{Vu_2019_CVPR}& 2019 &R & 72.3 &30.7 &65.2 &4.1 &5.4 &58.2 &77.2 &50.4 &10.1 &70.0 &13.2 &4.0 &27.9 &37.6& 31.8\\
MaxSquare \cite{Chen_2019_ICCV} & 2019 &R & 57.8 &23.19 &73.63 &8.37 &11.66 &73.84 &81.92 &56.68 &20.73 &52.18 &14.71 &8.37 &39.18& 40.17 &34.96 \\
CLAN \cite{Luo_2019_CVPR} & 2019 &R & 63.9 &25.9 &72.1 &14.3 &12.0 &72.5 &78.7 &52.7 &14.5 &62.2 &25.1 &10.4 &26.5 &40.9& 34.9 \\
ASM \cite{DBLP:conf/nips/LuoLGY020} & 2020 &R &75.4 & 18.5 &66.6 &0.1 &0.8 &67.0 &77.8 &15.6 &0.5 &11.4 &1.3 &0.03 &0.2 & 25.8 & 21.6 \\
\midrule
Domain Rand.\cite{Yue_2019_ICCV}& 2019 &R & - &- &- &- &- &- &- &- &- &- &- &- &- &- &37.58 \\
ASH (Ours)& 2021 &R & 70.2 &27.9 &75.4 &16.0 &15.2 &74.2 &80.1 &55.0 &20.4 &71.1 &29.6 &10.9 &38.2 &44.9&\textbf{38.7} \\
\bottomrule
\end{tabular}
\caption{
Per-class IoU and mean IoU for the segmentation network (Deeplab-v2 with Resnet-101 backbone) trained on Synthia, tested on Cityscapes.
}
\vspace{-0.5cm}
\label{tab:synthia-cityscapes}
\end{table*}
\begin{figure*}[t]
\begin{center}
\fbox{\includegraphics[width=0.8\linewidth,keepaspectratio]{figures/qualitative_synthia_comparison.eps}
}
\end{center}
\caption{Qualitative comparison of segmentation output for Synthia \textrightarrow{} Cityscapes. For each target domain image, we show the corresponding results for 'Source only' (non-adapted instance), 'ASM' (Adversarial Style Mining \cite{DBLP:conf/nips/LuoLGY020}), 'ASH' (our proposed method) and the ground truth labels. }
\label{figure:qualitative }
\end{figure*}
\subsection{Implementation details}
\label{subsection:implementationdetails}
We implement our approach with the Pytorch library \cite{NEURIPS2019_9015} on a single 16GB Quadro RTX 5000. The GTA5 images are resized to $1280 \times 720$ pixels and the Synthia images are resized to $1280 \times 760$ pixels. We use the Deeplab-v2 segmentation network \cite{7913730} with ResNet-101 \cite{He2015} backbone pretrained on ImageNet \cite{ILSVRC15}. The discriminator network architecture used is similar to the one used in \cite{Luo_2019_CVPR}.
We use stochastic gradient descent (SGD) to optimize the segmentation network (Deeplab-v2) and ASH module. Adam is used to optimize the discriminator network. All optimizers have a momentum of 0.9.
The initial learning rates for the Deeplab-v2 segmentation network and the discriminator network are \begin{math} 2.5 \times 10^{-4} \end{math} and \begin{math} 1.0 \times 10^{-4} \end{math}, respectively. We train the network for 100,000 iterations.
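For concreteness, the optimizer setup described above could be instantiated as follows. This is a sketch: the learning rate of the ASH module and the second-moment coefficient of Adam are not specified in the text and are assumptions on our part:
\begin{verbatim}
seg_opt = torch.optim.SGD(seg_net.parameters(),
                          lr=2.5e-4, momentum=0.9)
ash_opt = torch.optim.SGD(ash.parameters(),
                          lr=2.5e-4, momentum=0.9)   # assumed lr
disc_opt = torch.optim.Adam(disc.parameters(),
                            lr=1.0e-4, betas=(0.9, 0.999))
\end{verbatim}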
\subsection{Experimental studies}
\label{subsection:peerstudies}
We compare our method with 5 representative methods \cite{Chen_2019_ICCV,Luo_2019_CVPR,Vu_2019_CVPR,DBLP:conf/nips/LuoLGY020,Yue_2019_ICCV}. \cite{Chen_2019_ICCV,Luo_2019_CVPR,Vu_2019_CVPR} are UDA approaches where target domain data are available during training; \cite{DBLP:conf/nips/LuoLGY020} aims to align domains with limited target domain data and \cite{Yue_2019_ICCV} is a domain alignment approach with no target domain data available. Maximum Squares Loss\cite{Chen_2019_ICCV} improves upon semi-supervised learning by preventing easier classes from dominating training, CLAN \cite{Luo_2019_CVPR} seeks to reduce the difference between learned feature representations from the source and target domain, while ADVENT \cite{Vu_2019_CVPR} aims to reduce the prediction uncertainty for target domain data. ASM~\cite{DBLP:conf/nips/LuoLGY020} generates additional training data from a single target domain image under a one-shot unsupervised domain adaptation setting. Domain randomization \cite{Yue_2019_ICCV} stylizes multiple instances of a source domain image with style images obtained from ImageNet \cite{ILSVRC15} for each iteration and performs pyramidal pooling on the output from the final layers to maintain feature consistency between the different stylized instances.
The unsupervised domain adaptation approaches are trained with source domain data, with stylized source domain data as the target domain data. Our method demonstrates superior performance to these methods for both GTA5 and Synthia datasets (Tables \ref{tab:gta-cityscapes} and \ref{tab:synthia-cityscapes}).
While our approach shows comparable performance with Yue \etal \cite{Yue_2019_ICCV} for the GTA5 dataset, Yue \etal \cite{Yue_2019_ICCV} generate 15 additional auxiliary domains (stylized images) for each source domain image. In comparison, our approach only stylizes a single source domain image once per iteration. Furthermore, Yue \etal perform spatial pyramid pooling on the extracted features from all instances, increasing the computational requirements. With much less computational cost, our approach still achieves comparable results on GTA5 (Table \ref{tab:gta-cityscapes}) and better results on Synthia (Table \ref{tab:synthia-cityscapes}).
We conducted an additional experiment to help evaluate the significance of semantic information for stylization. We trained an ASH model that receives uniform semantic information across all classes (ASH Uni.Sem.Info) and reported its performance in Table \ref{tab:gta-cityscapes}. ASH outperformed ASH Uni.Sem.Info on nearly all classes (Table \ref{tab:gta-cityscapes}). This suggests that semantic information is essential to the performance improvements seen with ASH.
\begin{figure*}[h]
\begin{center}
\fbox{\includegraphics[width=0.7\textwidth,keepaspectratio]{figures/scale_shift_plot_redone_v2.pdf}}
\end{center}
\caption{Plots of the L1 norm of shift and scale coefficients versus the number of training iterations. The predicted segmentation output is obtained at different points of training. a) Plot of L1 shift coefficients, b) plot of L1 shift coefficients with class-wise normalization, c) plot of L1 scale coefficients, d) plot of L1 scale coefficients with class-wise normalization. Corresponding semantic label image (inset).}
\label{figure:scale_shift_plt}
\end{figure*}
\begin{comment}
\subsection{Hyperparameter evaluation}
\label{subsection:hyperparametereval}
In Table \ref{tab:booktabs_label smoothing coefs}, we evaluate the effect of different label smoothing coefficients on performance. The results suggest that label smoothing does improve generalization performance, however, too much smoothing may reduce the amount of useful class wise information.
\end{comment}
\begin{table}
\setlength{\tabcolsep}{1.4pt}
\centering
\begin{tabular}{c c c c c c c}
\hline
Baseline & Stylization & Noise & ASH & mIoU \\
& & perturbation & & \\
\hline
\checkmark & & & & 36.6 \\
\checkmark & \checkmark & & & 40.1 \\
\checkmark & \checkmark & \checkmark& & 40.8 \\
\checkmark & \checkmark & \checkmark&\checkmark & 41.3\\
\hline
\end{tabular}
\caption{Ablation study for our approach, with GTA5 as the source domain data. The baseline approach is the CLAN \cite{Luo_2019_CVPR} method trained on source domain data. Stylization refers to the model trained with additional stylized data, Noise perturbation refers to the addition of orthogonal noise to the style features prior to stylization. ASH (Adversarial Semantic Hallucination) is our proposed method.}
\label{tab:booktabs_ablation}
\end{table}
\subsection{Ablation study}
\label{subsection:ablationstudy}
In Table \ref{tab:booktabs_ablation}, we compare our methods with CLAN as the baseline method.
Training the segmentation network with stylized images results in a noticeable improvement in segmentation performance. This is likely because the training approach encourages similar segmentation outputs for the source images and their stylized counterparts. Since stylization varies texture information, the segmentation network learns to become texture invariant and ignore such information.
Adding orthogonal noise to the style features improves segmentation performance, and a possible reason might be the increased diversity of the style features after adding noise. Finally, including ASH improves the segmentation performance.
\section{Discussion}
\label{section:discussion}
\subsection{Scaling and shift coefficients}
We further investigate the scaling and shifting coefficients for each class (Figure \ref{figure:scale_shift_plt}). We achieve this by calculating the L1 norm of all the scaling and shifting coefficients for a source image. We can determine the contributions of each class by comparing the fall in the L1 norm after zeroing the contribution of that class.
As expected, classes that take up a larger proportion of the image pixels contribute more to the scaling and shifting coefficients. In particular, 'road' and 'sky' classes have a larger effect on scaling and shifting coefficients compared to other classes such as 'pole' and 'light' (Figure \ref{figure:scale_shift_plt} a,c). Since larger scaling and shifting coefficients would mean that the corresponding magnitude of the style feature would be larger, this suggests that 'road' and 'sky' undergo a larger degree of stylization compared to 'pole' and 'light' classes.
These observations lead us to suggest that the ASH module stylizes classes that occupy a large proportion of pixels for the given image in the predicted segmentation output (e.g., 'road', 'sky'). Since the ASH module is optimized to generate stylized images that maximize adversarial loss, it appears that the network stylizes pixels belonging to majority classes to a larger extent than minority classes so that adversarial loss can be maximized.
Furthermore, there are several classes that have close to zero scale and shift coefficients. Although these classes (e.g., 'vegetation', 'pole') are present in the segmentation output, regions corresponding to these classes do not undergo significant stylization compared to the majority classes. Classes such as 'vegetation' and 'pole' do not vary considerably in terms of colour information or texture. Consequently, stylizing these classes may not significantly affect the adversarial loss, which might explain the small variation in the scale and shift coefficients throughout training.
Interestingly, we notice that some classes, such as 'building', have consistently negative shift coefficients throughout the entire training process (Figure \ref{figure:scale_shift_plt} a). This indicates that the stylization of regions corresponding to 'building' will be reduced. One possible explanation may be that buildings tend to 'blend in' with other classes, such as 'pole' or 'wall', and stylization makes classification of the building pixels much easier.
\subsection{Normalized scaling and shift coefficients}
We normalize the class-wise change in scale and shift coefficients by the number of pixels predicted for each class. Consequently, 'road' and 'building' classes have much smaller absolute normalized shift and scale coefficients, while 'terrain' and 'vegetation' have much larger absolute normalized L1 coefficients (Figure \ref{figure:scale_shift_plt} b,d). Additionally, classes such as 'terrain', 'vegetation', 'light' and 'sign' with negative scale and shift coefficients become more apparent after normalization.
While it may not be apparent from the plots in Figure \ref{figure:scale_shift_plt} a and c, classes that occupy a smaller area in the image also undergo stylization. As seen in Figure \ref{figure:scale_shift_plt} b,d, pixels corresponding to the minority classes ('sign', 'vegetation' and 'car') experience a larger normalized scale and shift relative to pixels corresponding to the majority classes. Additionally, pixels in these classes also show greater variation in the scale and shift coefficients compared to the majority classes.
\section{Conclusions}
In this paper, we introduce the adversarial semantic hallucination (ASH) approach, which addresses the problem of adapting to an unseen target domain. By using a class-wise adversarial approach to stylize source domain images during training, ASH can adaptively stylize the source domain images with semantic information obtained from the segmentation output. Experimental results also indicate that ASH effectively improves segmentation performance and is comparable with state-of-the-art domain generalization approaches, while requiring far fewer computational resources.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Let $G=(V, E)$ be a graph where $V$ and $E$ are the vertex and edge set, respectively. The \emph{neighbourhood} of a vertex $v\in V$ is the set of all adjacent vertices of $v$ and is denoted as $N(v)$. The \emph{degree} of a vertex, denoted as $d(v)$, is the number of edges incident to $v$. A vertex $v$ is said to \emph{dominate} itself and all its neighbouring vertices. A \emph{dominating set} of $G=(V,E)$ is a subset of vertices $D$ such that every vertex $x\in V\setminus D$ has a neighbour $y\in D$, i.e., $x$ is dominated by some vertex $y$ of $D$. For two disjoint subsets $A$ and $B$, we say $A$ \emph{dominates} $B$ if every vertex of $B$ is adjacent to at least one vertex of $A$. Over the past few decades, researchers have studied graph partitioning problems where the goal is to partition the vertex set (or edge set) into some parts with desired properties, such as independence, having a minimum number of edges across partite sets, etc. In this paper, we have studied a special type of graph partitioning problem. We are interested in partitioning the vertex set into some parts such that the partite sets follow different types of domination properties.
Cockayne and Hedetniemi, in 1977, introduced the notion of \emph{domatic partition} of a graph $G=(V,E)$ where the vertex set is partitioned into $k$ parts, say $\pi =\{V_1,V_2, \ldots, V_k\}$, such that each $V_i$ is a dominating set of $G$ \cite{cockayne1977towards}. The maximum order of such a domatic partition is called the \emph{domatic number} of $G$ and it is denoted by $d(G)$. Another similar type of partitioning problem is \emph{Grundy coloring}. Christen and Selkow introduced Grundy coloring of a graph $G=(V,E)$ in 1979 \cite{CHRISTEN197949}. In the Grundy coloring problem, the vertex set is partitioned into $k$ parts, say $\pi =\{V_1,V_2, \ldots, V_k\}$, such that each $V_i$ is an independent set and for all $1\leq i< j\leq k$, $V_i$ dominates $V_j$. The maximum order of such a partition is called the \emph{Grundy number} of $G$ and it is denoted by $\Gamma(G)$. In 2004, S. M. Hedetniemi et al. introduced another such partitioning problem, namely \emph{upper iterated domination partition}\cite{erdos2003equality}. In an upper iterated domination partition, the vertex set is partitioned into $k$ parts, say $\pi =\{V_1,V_2, \ldots, V_k\}$, such that for each $1\leq i\leq k$, $V_i$ is a minimal dominating set of $G\setminus (\cup_{j=1}^{i-1} V_j)$. The \emph{upper iterated domination number}, denoted by $\Gamma^*(G)$, is equal to the maximum order of such a vertex partition. Recently, in 2018, Haynes et al. generalized the idea of \emph{domatic partition} and introduced the concept of \emph{upper domatic partition} of a graph $G$ where the vertex set is partitioned into $k$ parts, say $\pi =\{V_1,V_2, \ldots, V_k\}$, such that for each $i, j$, $1\leq i<j\leq k$, either $V_i$ dominates $V_j$ or $V_j$ dominates $V_i$ or both \cite{haynes2020upper}. The maximum order of such an upper domatic partition is called the \emph{upper domatic number} of $G$ and it is denoted by $D(G)$. All these problems, namely the domatic number \cite{chang1994domatic,zelinka1980domatically,zelinka1983k}, Grundy number \cite{zaker2005grundy,zaker2006results,furedi2008inequalities,hedetniemi1982linear, effantin2017note}, upper iterated domination number \cite{erdos2003equality}, and upper domatic number \cite{haynes2020upper}, have been extensively studied from both algorithmic and structural points of view.
In this article, we have studied a similar graph partitioning problem, namely \emph{transitive partition}. In 2018, J. T. Hedetniemi and S. T. Hedetniemi \cite{hedetniemi2018transitivity} introduced this notion as a generalization of Grundy coloring. A \emph{transitive $k$-partition} is defined as a partition of the vertex set into $k$ parts, say $\pi =\{V_1,V_2, \ldots, V_k\}$, such that for all $1\leq i< j\leq k$, $V_i$ dominates $V_j$. The maximum order of such a transitive partition is called the \emph{transitivity} of $G$ and is denoted by $Tr(G)$. The \textsc{Maximum Transitivity Problem} and its corresponding decision version are defined as follows:\\
\noindent\textsc{\underline{Maximum Transitivity Problem(MTP)}}
\noindent\emph{Instance:} A graph $G=(V,E)$
\noindent\emph{Solution:} A transitive partition of $G$
\noindent\emph{Measure:} Order of the transitive partition of $G$\\
\noindent\textsc{\underline{Maximum Transitivity Decision Problem(MTDP)}}
\noindent\emph{Instance:} A graph $G=(V,E)$, integer $k$
\noindent\emph{Question:} Does $G$ have a transitive partition of order at least $k$?\\
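As a small worked example, consider the path $P_4$ on vertices $v_1, v_2, v_3, v_4$. The partition $\pi=\{V_1, V_2, V_3\}$ with $V_1=\{v_1,v_4\}$, $V_2=\{v_3\}$ and $V_3=\{v_2\}$ is transitive: $V_1$ dominates $V_2$ (since $v_4$ is adjacent to $v_3$), $V_1$ dominates $V_3$ (since $v_1$ is adjacent to $v_2$), and $V_2$ dominates $V_3$ (since $v_3$ is adjacent to $v_2$). Hence $Tr(P_4)\geq 3$ and, by the upper bound $Tr(G)\leq \Delta+1$ mentioned below, in fact $Tr(P_4)=3$.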
\noindent Note that a Grundy coloring is a transitive partition with the additional restriction that each partite set must be independent. In a domatic partition $\pi =\{V_1,V_2, \ldots, V_k\}$ of $G$, since each partite set is a dominating set of $G$, we have the domination property in both directions, i.e., $V_i$ dominates $V_j$ and $V_j$ dominates $V_i$ for $1\leq i< j\leq k$. Whereas in a transitive partition $\pi =\{V_1,V_2, \ldots, V_k\}$ of $G$, we have the domination property in one direction only, i.e., $V_i$ dominates $V_j$ for $1\leq i< j\leq k$. In an upper domatic partition $\pi =\{V_1,V_2, \ldots, V_k\}$ of $G$, for $1\leq i<j\leq k$, either $V_i$ dominates $V_j$ or $V_j$ dominates $V_i$ or both. The definitions of these vertex partitioning problems ensure the following chain of inequalities: for any graph $G$, $1\leq d(G)\leq \Gamma(G)\leq \Gamma^*(G)\leq Tr(G)\leq D(G)\leq n$. For example, for the complete graph $K_n$, the partition of the vertex set into $n$ singletons is simultaneously a domatic, Grundy, upper iterated domination, transitive, and upper domatic partition, so all of these parameters equal $n$.
In the introductory paper, J. T. Hedetniemi and S. T. Hedetniemi \cite{hedetniemi2018transitivity} showed that the transitivity of a graph $G$ is bounded above by $\Delta+1$, where $\Delta$ is the maximum degree of $G$. They also proved a necessary and sufficient condition for graphs with $Tr(G)=2$ and graphs with $Tr(G)\geq 3$. They further showed that transitivity and Grundy number are the same for trees. Therefore, the linear time algorithm for finding the Grundy number of a tree, presented in \cite{hedetniemi1982linear}, implies that we can find the transitivity of a tree in linear time as well. Moreover, for any graph, transitivity is equal to the upper iterated domination number \cite{hedetniemi2018transitivity}, and the decision version of the upper iterated domination problem is known to be NP-complete \cite{hedetniemi2004iterated}. Therefore, MTDP is NP-complete for chordal graphs. It is also known that every connected graph $G$ with $Tr(G)\geq 4$ has a transitive partition $\pi =\{V_1,V_2, \ldots, V_k\}$ such that $\lvert V_k \rvert = \lvert V_{k-1} \rvert=1$ and $\lvert V_{k-i} \rvert \leq 2^{i-1}$ for $2\leq i\leq k-2$ \cite{haynes2017transitivity}. This implies that MTP is fixed-parameter tractable.
In this paper, we study the computational complexity of the transitivity problem and also prove a few structural results. The main contributions are summarized below:
\begin{enumerate}
\item[1.] We show that MTDP is NP-complete for bipartite graphs. As the resultant graph in the polynomial reduction is a perfect elimination bipartite graph, MTDP remains NP-complete for this important subclass of bipartite graphs.
\item[2.] We prove that finding transitivity of a given bipartite chain graph $G$ is the same as finding the maximum index $t$ such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph. We design a linear time algorithm for MTP in a bipartite chain graph based on this fact.
\item[3.] In \cite{hedetniemi2018transitivity}, the authors posed two open problems of characterizing graphs with $Tr(G)\geq 4$ and graphs with $Tr(G)=3$. We solve these open problems by giving a general characterization of graphs with $Tr(G)\geq t$, for any integer $t$.
\end{enumerate}
The rest of the paper is organized as follows. In Section 2, we present the NP-completeness of MTDP for perfect elimination bipartite graphs. Then in Section 3, we design a linear time algorithm for MTP in bipartite chain graphs. Section 4 deals with the characterization of graphs with $Tr(G)\geq t$ for any integer $t$. Finally, Section 5 concludes the paper.
\section{NP-completeness for bipartite graphs}
In this section, we show that the \textsc{Maximum Transitivity Decision Problem} is NP-complete for bipartite graphs. Clearly, this problem is in NP. We prove the NP-completeness of this problem by showing a polynomial time reduction from the \textsc{Proper $3$-Coloring Decision Problem} in graphs. A proper $k$-coloring of a graph $G=(V,E)$ is a function $g$ from $V$ to $\{1,2,3, \ldots , k\}$ such that $g(u)\not= g(v)$ for any edge $uv \in E$. The \textsc{Proper $3$-Coloring Decision Problem} is defined as follows:
\noindent\textsc{\underline{Proper $3$-Coloring Decision Problem}}
\noindent\emph{Instance:} A graph $G=(V,E)$
\noindent\emph{Question:} Does there exist a proper $3$-coloring of $V$?
The \textsc{Proper $3$-Coloring Decision Problem} is known to be NP-complete \cite{garey1990guide}. Given an instance of \textsc{Proper $3$-Coloring Decision Problem}, say $G=(V,E)$, we construct an instance of MTDP. The construction is as follows:
\noindent\textbf{Construction:}
Let $V=\{v_1, v_2, \ldots, v_n\}$ and $E= \{e_1, e_2, \ldots, e_m\}$.
\begin{itemize}
\item[$1$] For each vertex $v_i\in V$, we consider two paths of length three, $P_{v_i}=\{x_i,w_i,v_i,z_i\}$ and $P_{v_i}^\prime=\{x_i^\prime,w_i^\prime,v_i^\prime,z_i^\prime\}$ in $G'$, where $x_i$ and $z_i$ are the pendant vertices of $P_{v_i}$ and $x'_i$ and $z'_i$ are the pendant vertices of $P'_{v_i}$. Similarly, for each edge $e_j\in E$, we consider two paths of length three, $P_{e_j}=\{x_{e_j},w_{e_j},v_{e_j},z_{e_j}\}$ and $P_{e_j}^\prime=\{x_{e_j}^\prime,w_{e_j}^\prime,v_{e_j}^\prime,z_{e_j}^\prime\}$ in $G'$. Next consider six more paths of length three, $P_a=\{x_{a},w_{a},v_{a},z_{a}\}$, $P'_a=\{x_{a}^\prime,w_{a}^\prime,v_{a}^\prime,z_{a}^\prime\}$, $P_b=\{x_{b},w_{b},v_{b},z_{b}\}$, $P'_b=\{x_{b}^\prime,w_{b}^\prime,v_{b}^\prime,z_{b}^\prime\}$, $P_e=\{x_{e},w_{e},v_{e},z_{e}\}$ and $P'_e=\{x_{e}^\prime,w_{e}^\prime,v_{e}^\prime,z_{e}^\prime\}$ in $G'$.
\item[$2$] For each edge $e_j\in E$, we take two vertices $e_j$ and $e'_j$ in $G'$, along with two extra vertices $e$ and $e'$. Let $A=\{e_1,e_2,\ldots ,e_m,e\}$ and $B=\{e_1^\prime,e_2^\prime,\ldots,e_m^\prime,e^\prime\}$. We make a complete bipartite graph with vertex set $A\cup B$.
\item[$3$] Next we add the following edges: for every edge $e_k=(v_i,v_j)\in E$, we join the edges $(v_i,e_k)$, $(v_j, e_k)$, $(v_{e_k}, e_k)$, $(v'_i,e'_k)$, $(v'_j, e'_k)$ and $(v'_{e_k},e'_k)$. Also we add the edges $(v_a,e)$, $(v_b, e)$, $(v_{e}, e)$, $(v'_a,e')$, $(v'_b, e')$ and $(v'_{e},e')$.
\item[$4$] Finally, we set $k=m+5$.
\end{itemize}
From the above construction, it is clear that the graph $G'=(V', E')$ consists of $10m+8n+26$ vertices and $m^2+14m+6n+25$ edges. The construction is illustrated in Figure \ref{fig:bipartitenp}.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.75]{bipartitenp.pdf}
\caption{Construction of $G^\prime$}
\label{fig:bipartitenp}
\end{figure}
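The construction is entirely mechanical and can be reproduced, for instance, by the following Python sketch (our illustration; the vertex names are strings mirroring the notation above, with the biclique vertices renamed \texttt{E1}, \ldots, \texttt{Em}, \texttt{E} to avoid clashes with the gadget vertices).
\begin{verbatim}
def build_reduction(n, edges):
    """Input: G with vertices 1..n and edge list `edges`.
    Output: (Vp, Ep, k) encoding the instance (G', k) of MTDP."""
    m = len(edges)
    Vp, Ep = set(), set()

    def add_paths(tag):
        # the path P_tag = x-w-v-z and its primed copy P'_tag
        for p in ('', "'"):
            x, w, v, z = (c + tag + p for c in 'xwvz')
            Vp.update({x, w, v, z})
            Ep.update({(x, w), (w, v), (v, z)})

    for i in range(1, n + 1):
        add_paths(str(i))                    # step 1: vertex gadgets
    for j in range(1, m + 1):
        add_paths('e%d' % j)                 # step 1: edge gadgets
    for tag in ('a', 'b', 'e'):
        add_paths(tag)                       # step 1: six extra paths

    A = ['E%d' % j for j in range(1, m + 1)] + ['E']
    B = [a + "'" for a in A]                 # step 2: biclique on A u B
    Vp.update(A + B)
    Ep.update((a, b) for a in A for b in B)

    for t, (i, j) in enumerate(edges, 1):    # step 3: connector edges
        for p in ('', "'"):
            Ep.update({('v%d' % i + p, 'E%d' % t + p),
                       ('v%d' % j + p, 'E%d' % t + p),
                       ('ve%d' % t + p, 'E%d' % t + p)})
    for tag in ('a', 'b', 'e'):
        for p in ('', "'"):
            Ep.add(('v' + tag + p, 'E' + p))

    return Vp, Ep, m + 5                     # step 4: k = m + 5

Vp, Ep, k = build_reduction(3, [(1, 2), (2, 3), (1, 3)])   # G = K_3
print(len(Vp), len(Ep), k)   # 80 94 8, matching 10m+8n+26, m^2+14m+6n+25
\end{verbatim}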
Now we show that $G$ has a proper 3-coloring if and only if $G^\prime$ has a transitive $k$-partition. For the forward direction, we have the following lemma.
\begin{lem}
If $G=(V,E)$ has a proper 3-coloring, then $G'=(V', E')$ has a transitive $k$-partition.
\end{lem}
\begin{proof}
Given a proper 3-coloring $g$ from $V$ to $\{1,2,3\}$, a partition $\pi=\{V_1,V_2, \ldots,V_{m+5}\}$ of the vertices of $G'$ can be obtained as follows:
\begin{itemize}
\item If $g(v_i)=q$ in $G$, then $v_i, v_i^\prime \in V_q$ and we put $v_a, v_a^\prime\in V_1$, and $v_b, v_b^\prime\in V_2$.
\item For an edge $e_k=(v_i,v_j)$ in $G$, we put $v_{e_k}, v_{e_k}^\prime \in V_l$, where $l$ is the unique element of $\{1, 2, 3\} \setminus \{g(v_i),g(v_j)\}$. Also, we put $v_e, v_e^\prime\in V_3$. The other vertices of all $P_4$ and $P_4^\prime$, i.e. $\{x,w,z\}$ and $\{x',w',z'\}$, are put into partitions based on the partition of $v$ or $v'$, respectively, as shown in Figure \ref{fig:vertexedge_coloring}.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.70]{vertexedge_coloring.pdf}
\caption{Partition of all $P_4$}
\label{fig:vertexedge_coloring}
\end{figure}
\item Lastly, we put $e_j, e_j^\prime \in V_{3+j}$, for all $1\leq j\leq m$, and $e\in V_{m+4}$, $e^\prime \in V_{m+5}$.
\end{itemize}
Now we show that the above process produces a transitive $k$-partition $\pi=\{V_1,V_2, \ldots,V_{m+5}\}$ of $G'$. Let $H$ be the complete bipartite graph induced by $A\cup B$. Since $H$ is a complete bipartite graph, $V_i$ dominates $V_j$ for $4\leq i<j\leq m+5$. Also, each vertex from $A\cup B$ is adjacent to a vertex from each of $V_1, V_2$ and $V_3$. Hence $V_i$ dominates $V_t$ for all $i=1,2,3$ and $t>3$. Also, from Figure \ref{fig:vertexedge_coloring}, it is clear that $V_i$ dominates $V_j$ for $1\leq i<j\leq 3$. Therefore, $\pi$ is a transitive $k$-partition of $G'$.
\end{proof}
The following lemma shows that the converse of the statement is also true.
\begin{lem}
If $G^\prime$ has a transitive $k$-partition, then $G$ has a proper 3-coloring.
\end{lem}
\begin{proof}
Let $\pi=\{V_1,V_2,\ldots ,V_k\}$ be a transitive $k$-partition of $G'$. By Proposition 11 of \cite{hedetniemi2018transitivity}, we know that $\pi$ can be transformed into $\pi'=\{V'_1,V'_2,\ldots ,V'_k\}$ such that $\lvert V'_k \rvert=\lvert V'_{k-1}\rvert =1$. So, without loss of generality, let us assume that $G'$ has a transitive $k$-partition $\pi=\{V_1,V_2,\ldots ,V_k\}$ such that $\lvert V_k \rvert =\lvert V_{k-1} \rvert=1$.\\
\begin{cl}\label{cl:PartitionOfV'}
In the transitive $k$-partition $\pi$, the partitions $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$ and the partitions $\{V_4, V_5,\ldots V_k\}$ contain only vertices from $A\cup B$.
\end{cl}
\begin{proof}
We divide the proof into the following four cases based on the partition of $e$ and $e'$:
\begin{ca}\label{cas}
\textbf{$e\in V_{m+5}$ and $e'\in V_{m+4}$}
\textnormal{
Note that the vertices $\{v_a,v_b, v_e\}$ cannot be in $V_p$ for $p\geq 4$, because in that case $e$ would be in $V_j$ for some $j\leq 3$, contradicting $e\in V_{m+5}$. Similarly, the vertices $\{v'_a,v'_b,v'_e\}$ cannot be in $V_p$ for $p\geq 4$. Therefore, the vertices from $\{v_a,v_b,v_e\}$ and $\{v'_a,v'_b,v'_e\}$ belong to $V_p$ for $1\leq p \leq 3$. To dominate $e$ and $e'$, the vertices from $\{e'_1, e'_2, \ldots, e'_m\}$ and $\{e_1, e_2, \ldots, e_m\}$ must belong to $\{V_4, V_5, \ldots, V_{m+3}\}$, respectively. Moreover, for $4\leq i \leq m+3$, each $V_i$ contains exactly one vertex from $\{e'_1, e'_2, \ldots, e'_m\}$ to dominate $e$ and exactly one vertex from $\{e_1, e_2, \ldots, e_m\}$ to dominate $e'$. Hence, the vertices of $A\cup B$ belong to $\{V_4, V_5,\ldots, V_{m+5}\}$. Note that none of the vertices from $\{v_1, v_2, \ldots, v_n, v'_1, v'_2, \ldots, v'_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}, v'_{e_1}, v'_{e_2}, \ldots, v'_{e_m}\}$ can belong to $V_p$ for $p\geq 4$, because otherwise there would exist a vertex in $A\cup B$ belonging to $V_j$ for some $j\leq 3$. Since the degree of every other vertex is at most $2$, such vertices cannot belong to $V_p$ for $p\geq 4$. Hence, $\{V_4, V_5,\ldots, V_k\}$ contain only vertices from $A\cup B$ and $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$.
}
\end{ca}
\begin{ca}
\textbf{$e\in V_{m+5}$ and $e'\notin V_{m+4}$}
\textnormal{
Using similar arguments as in the previous case, we know that the vertices from $\{v_a,v_b,v_e\}$ belong to $V_p$ for $1 \leq p \leq 3$ and that the vertices from $\{e'_1, e'_2, \ldots, e'_m,e'\}$ belong to $\{V_4, V_5, \ldots, V_{m+3},V_{m+4}\}$ in order to dominate $e$. Moreover, every $V_i$ for $4\leq i \leq m+4$ contains exactly one vertex from $\{e'_1, e'_2, \ldots, e'_m,e'\}$. Since the vertices from $\{e'_1, e'_2, \ldots, e'_m,e'\}$ belong to $\{V_4, V_5, \ldots, V_{m+3},V_{m+4}\}$, no vertex from $\{v'_1, v'_2, \ldots, v'_n, v'_{e_1}, v'_{e_2}, \ldots, v'_{e_m}, v'_a, v'_b, v'_e\}$ can be in $V_p$ for $p\geq 4$. As $e'\notin V_{m+4}$, there exists a vertex from $\{e'_1, e'_2, \ldots, e'_m\}$, say $e'_j$, such that $e'_j \in V_{m+4}$. To dominate $e'_j$, the vertices from $\{e_1, e_2, \ldots, e_m\}$ belong to $\{V_4, V_5, \ldots, V_{m+3}\}$. With similar arguments as in the previous case, every vertex of $\{v_1, v_2, \ldots, v_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}\}$ belongs to $V_p$ for $1\leq p\leq 3$. Hence, $\{V_4, V_5,\ldots, V_k\}$ contain only vertices from $A\cup B$ and $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$.
}
\end{ca}
\begin{ca}
\textbf{$e\notin V_{m+5}$ and $e'\in V_{m+4}$}
\textnormal{
Using similar arguments as in Case \ref{cas}, we know that the vertices from $\{v'_a,v'_b,v'_e\}$ belong to $V_p$ for $1 \leq p \leq 3$ and that the vertices from $\{e_1, e_2, \ldots, e_m,e\}$ belong to $\{V_4, V_5, \ldots, V_{m+3}\}$ in order to dominate $e'$. As $e\notin V_{m+5}$, there exists a vertex from $\{e_1, e_2, \ldots, e_m\}$, say $e_j$, such that $e_j \in V_{m+5}$. Therefore, the vertices from $\{e_1, e_2, \ldots, e_m,e\}$ belong to $\{V_4, V_5, \ldots, V_{m+3},V_{m+5}\}$. Moreover, every $V_i$ for $4\leq i \leq m+5$ with $i\not=m+4$ contains exactly one vertex from $\{e_1, e_2, \ldots, e_m,e\}$. Since the vertices $\{e_1, e_2, \ldots, e_m,e\}$ belong to $\{V_4, V_5, \ldots, V_{m+3},V_{m+5}\}$, no vertex from $\{v_1, v_2, \ldots, v_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}, v_a, v_b, v_e\}$ can be in $V_p$ for $p\geq 4$. Since $e_j \in V_{m+5}$, the vertices from $\{e'_1, e'_2, \ldots, e'_m\}$ must belong to $\{V_4, V_5, \ldots, V_{m+3}\}$ to dominate $e_j$. With similar arguments as in Case \ref{cas}, every vertex of $\{v'_1, v'_2, \ldots, v'_n, v'_{e_1}, v'_{e_2}, \ldots, v'_{e_m}\}$ belongs to $V_p$ for $1\leq p\leq 3$. Hence, $\{V_4, V_5,\ldots, V_k\}$ contain only vertices from $A\cup B$ and $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$.
}
\end{ca}
\begin{ca}
\textbf{$e\notin V_{m+5}$ and $e'\notin V_{m+4}$}
\textnormal{
Since the degree of each vertex of $V'\setminus (A\cup B)$ is at most $m+2$, they cannot belong to $V_{m+4}$ or $V_{m+5}$, i.e., only vertices from $A\cup B$ can be in $V_{m+4}$ or $V_{m+5}$. Also, since we can interchange the vertices between $V_{m+4}$ and $V_{m+5}$ in the transitive partition, without loss of generality we can assume that $e_1\in V_{m+5}$ and $e'_s \in V_{m+4}$ in the transitive $k$-partition of $G'$. Also, let $e_1$ be the edge between $v_1$ and $v_2$ in $G$, and in the transitive $k$-partition of $G'$, let $v_1\in V_l$ and $v_2\in V_t$ where $l\geq t$.
First we show that every vertex of the form $v'_r$ belongs to a partition $V_p$ with $p\leq l$. Otherwise, if $v'_r \in V_{l+i}$ for some $i\geq 1$, then each of $V_3, V_4, \ldots, V_{l+i-1}$ contains at least one vertex from $B \setminus \{e'_s\}$.
Also, to dominate $e_1$, each of $\{V_{l+1}, \ldots, V_{m+3}\}$ contains at least one vertex from $B \setminus \{e'_s\}$. This implies that each of the $(m+1)$ partitions, $\{V_3,V_4, \ldots, V_{m+3}\}$, contains one vertex from the set of $m$ vertices, $B \setminus \{e'_s\}$, which is impossible.
Next we show that the vertex $e$ belongs to a partition $V_t$ such that $t\geq l+1$. This is because, to dominate $e'_s$, each of $\{V_{l+1}, \ldots, V_{m+3}\}$ contains at least one vertex from $A\setminus \{e_1\}$. Also, since $v_1\in V_l$, each of $\{V_3, V_4, \ldots, V_{l-1}\}$ contains at least one vertex from $A\setminus \{e_1,e\}$. Therefore, $e$ belongs to a partition $V_t$ such that $t\geq l+1$.
Next we show that $l \leq 3$. Suppose, for contradiction, that $l\geq 4$. Since the degree of $e_1$ is $m+4$ and $e_1\in V_{m+5}$, each of the partitions $\{V_1, V_2, \ldots, V_{m+4}\}$ contains exactly one neighbour of $e_1$. Moreover, since $v_1\in V_l$, no vertex from $B$ can be in $V_l$. Also, no vertex of $\{v_a,v_b,v_e\}$ belongs to $V_l$, as $l\geq 4$. Hence, none of the neighbours of $e$ belongs to $V_l$. But $e$ belongs to a partition $V_t$ where $t\geq l+1$. Therefore, $V_l$ cannot dominate $V_t$, which is a contradiction.
Now, $l\leq 3$ implies that the vertices of $B$ belong to $\{V_4, V_5, \ldots, V_{m+4}\}$ and, moreover, each of these partitions contains exactly one vertex from $B$. Since the vertices of $B$ belong to $\{V_4, V_5, \ldots, V_{m+4}\}$, the vertices of $\{v'_1, v'_2, \ldots, v'_n, v'_{e_1}, v'_{e_2}, \ldots, v'_{e_m}, v'_a, v'_b, v'_e\}$ belong to partitions $V_p$ with $p\leq 3$. As $e'_s\in V_{m+4}$, to dominate $e'_s$, the vertices from $A\setminus \{e_1\}$ belong to $\{V_4, V_5, \ldots, V_{m+3}\}$. With similar arguments as in Case \ref{cas}, every vertex of $\{v_1, v_2, \ldots, v_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}, v_a,v_b,v_e\}$ belongs to $V_p$ for some $p\leq 3$. Hence, $\{V_4, V_5,\ldots, V_k\}$ contain only vertices from $A\cup B$ and $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$.
}
\end{ca}
Therefore, in all the cases, the partitions $\{V_1, V_2, V_3\}$ contain only vertices from $V'\setminus (A\cup B)$ and the partitions $\{V_4, V_5,\ldots, V_k\}$ contain only vertices from $A\cup B$.
\end{proof}
Now let us define the mapping $g$ from $V$ to $\{1,2,3\}$. We set $g(v_i)=p$ if $v_i$ is in the partition $V_p$ in $G'$. By the previous claim, the mapping is well-defined. Let $e_k=(v_i,v_j)$ be an edge in $G$. By Claim \ref{cl:PartitionOfV'}, in $G'$ the vertex $e_k$ belongs to some partition $V_t$ where $t\in \{4,5,\ldots,k\}$. Moreover, since none of the vertices of $A\cup B$ belongs to $V_1, V_2$ or $V_3$, the vertices $v_i, v_j$ and $v_{e_k}$ must belong to three different partitions among $V_1, V_2, V_3$. Hence, $v_i$ and $v_j$ belong to different partitions among $V_1, V_2, V_3$. This implies that $g(v_i)\neq g(v_j)$. Therefore $g$ defines a proper $3$-coloring of $G$.
\end{proof}
Hence, we have the following theorem.
\begin{theo}
The \textsc{Maximum Transitivity Decision Problem} is NP-complete for bipartite graphs.
\end{theo}
An edge $uv$ in a bipartite graph $G$ is called \emph{bisimplicial} if $N(u)\cup N(v)$ induces a biclique in $G$. For an edge ordering $(e_1,e_2, \ldots, e_k)$, let $S_i$ be the set of endpoints of $\{e_1,e_2, \ldots, e_i\}$, with $S_0=\emptyset$. An ordering $(e_1,e_2, \ldots, e_k)$ is a perfect edge elimination ordering for a bipartite graph $G=(V, E)$ if $G[V\setminus S_k]$ has no edges and each edge $e_i$ is a bisimplicial edge in $G[V\setminus S_{i-1}]$. A graph $G$ is perfect elimination bipartite if and only if it admits a perfect edge elimination ordering \cite{golumbic1978perfect}. Note that the construction implies that $G'$ is a perfect elimination bipartite graph: if we consider all the pendant edges of $G'$ in any order, followed by a perfect matching of the complete bipartite subgraph $H$, then these edges form a perfect edge elimination ordering of $G'$. Therefore, we have the following corollary.
\begin{coro}\label{bipartite_coro1}
The \textsc{Maximum Transitivity Decision Problem} remains NP-complete for perfect elimination bipartite graphs.
\end{coro}
\section{Transitivity in bipartite chain graph}
In this section, we design a linear time algorithm to compute the transitivity of a given bipartite chain graph. A bipartite graph $G=(X\cup Y,E)$ is called a \textit{bipartite chain graph} if there exists an ordering of the vertices of $X$ and $Y$, say $\sigma_X= (x_1,x_2, \ldots ,x_m)$ and $\sigma_Y=(y_1,y_2, \ldots ,y_n)$, such that $N(x_m)\subseteq N(x_{m-1})\subseteq \ldots \subseteq N(x_2)\subseteq N(x_1)$ and $N(y_n)\subseteq N(y_{n-1})\subseteq \ldots \subseteq N(y_2)\subseteq N(y_1)$. This ordering of $X$ and $Y$ is called a chain ordering. A chain ordering of a bipartite chain graph can be computed in linear time \cite{heggernes2007linear}. To design the algorithm, we first prove that if $t$ is the maximum integer such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph, then $Tr(G)=t+1$. After that, we design an algorithm for finding the maximum integer $t$ such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph.
\begin{lem}\label{chain_th1}
Let $G=(X\cup Y,E)$ be a chain graph and $t$ be the maximum integer such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph, then $Tr(G)=t+1$.
\end{lem}
\begin{proof}
Suppose $t$ is the maximum integer such that $G=(X\cup Y,E)$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph. In this case, $Tr(G)\geq t+1$ because transitivity of $K_{t,t}$ or $K_{t,t}-\{e\}$ is $t+1$. Next, we show that $Tr(G)$ cannot be greater than $t+1$ by proving the following claim.
\begin{cl}\label{chain_claim1}
If $Tr(G)=m+1$, then $G=(X\cup Y,E)$ contains either $K_{m,m}$ or $K_{m,m}-\{e\}$ as an induced subgraph.
\end{cl}
\emph{Proof of Claim \ref{chain_claim1}}. To prove this claim, we use induction on $m$. For $m=1$, i.e., $Tr(G)=2$, by Proposition 5 of \cite{hedetniemi2018transitivity}, $G$ contains at least one edge. This implies that $G$ contains $K_{1,1}$ as an induced subgraph. Also, for $m=2$, i.e., $Tr(G)=3$, $G$ contains either an induced $C_3$, an induced $P_4$ or an induced $C_4$ by Proposition 7 of \cite{hedetniemi2018transitivity}. Since $G$ is a bipartite chain graph, it cannot contain $C_3$. Therefore, $G$ contains either $P_4$, i.e., $K_{2,2}-\{e\}$, or $C_4$, i.e., $K_{2,2}$, as an induced subgraph.
By the induction hypothesis, let us assume that the claim is true for any bipartite chain graph with transitivity less than $m+1$. Let $G$ be a bipartite chain graph with $Tr(G)=m+1$, and let $\{V_1,V_2, \ldots ,V_{m+1}\}$ be a transitive $(m+1)$-partition of $G$. Let $G'=G[V_2\cup V_3\cup \ldots \cup V_{m+1}]$, so that $Tr(G^\prime)=m$. By the induction hypothesis, $G'$ contains either $K_{m-1,m-1}$ or $K_{m-1,m-1}-\{e\}$ as an induced subgraph. Let $X'=\{x_1, x_2, \ldots , x_{m-1}\}$ and $Y'=\{y_1, y_2, \ldots , y_{m-1}\}$ be two sets such that $G'[X'\cup Y']$ is either $K_{m-1,m-1}$ or $K_{m-1,m-1}-\{e\}$, where $e=x_{i}y_{j}$ for some $i$ and $j$ in $\{1,2, \ldots, m-1\}$. Now, in $G$, since $V_1$ dominates $V_j$ for all $j\geq 2$, $V_1$ contains at least one vertex from $X$ and at least one vertex from $Y$. Let $\{x_{l_1},x_{l_2}, \ldots, x_{l_s}\}$ and $\{y_{k_1},y_{k_2}, \ldots, y_{k_t}\}$ be the vertices in $V_1$ from $X$ and $Y$, respectively. Since $G$ is a bipartite chain graph, there exist $x_p\in\{x_{l_1},x_{l_2}, \ldots, x_{l_s}\}$ and $ y_q\in \{y_{k_1},y_{k_2},\ldots, y_{k_t}\}$ such that $N(x_p)\supseteq N(x)$ for all $x\in\{x_{l_1}, \ldots, x_{l_s}\}$ and $N(y_q)\supseteq N(y)$ for all $y \in\{y_{k_1},y_{k_2}, \ldots, y_{k_t}\}$. Therefore, $x_p$ dominates $\{y_1,y_2, \ldots, y_{m-1}\}$ and $y_q$ dominates $\{x_1, x_2, \ldots, x_{m-1}\}$. Now, if $G'[X'\cup Y']$ is $K_{m-1,m-1}$, then $\{x_1,x_2, \ldots, x_{m-1},x_p\}$ and $\{y_1,y_2, \ldots, y_{m-1},y_q\}$ clearly induce either a $K_{m,m}$ or a $K_{m,m}-\{e\}$ in $G$, depending on whether the edge $x_p y_q$ is in $G$ or not. On the other hand, suppose $G'[X'\cup Y']$ is $K_{m-1,m-1}-\{e\}$, where $e=x_{i}y_{j}$ for some $i$ and $j$ in $\{1,2, \ldots, m-1\}$. In this case also, we can argue in a similar way that there exist $x_p\in V_1$ and $y_q\in V_1$ such that $x_p$ dominates $\{y_1,y_2, \ldots, y_{m-1}\}$ and $y_q$ dominates $\{x_1, x_2, \ldots, x_{m-1}\}$. Since $G$ is a bipartite chain graph, either $N(x_p)\subseteq N(x_i)$ or $N(x_p)\supseteq N(x_i)$. If $N(x_p)\subseteq N(x_i)$, then $ \{y_1,y_2, \ldots,y_j,\ldots, y_{m-1}\} \subseteq N(x_p)\subseteq N(x_i)$, which implies $y_j \in N(x_i)$, i.e., $x_iy_j\in E$, a contradiction. If instead $N(x_p)\supseteq N(x_i)$, then $\{y_1,y_2, \ldots , y_{j-1},y_{j+1},\ldots, y_{m-1},y_q\}\subseteq N(x_i)\subseteq N(x_p)$, which implies $x_py_q\in E$. Therefore, $G[X' \cup Y' \cup \{x_p,y_q\}]$ induces a $K_{m,m}-\{e\}$. Hence, if $Tr(G)=m+1$, then $G=(X\cup Y,E)$ contains either $K_{m,m}$ or $K_{m,m}-\{e\}$ as an induced subgraph.
\qed
From the above claim, it follows that $m>t$ contradicts the maximality of $t$. Hence $Tr(G)=t+1$.
\end{proof}
Next we present an algorithm to find the maximum integer $t$ such that a bipartite chain graph $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph. We show that if $t$ is the maximum such index, then the first $t$ vertices of the chain ordering from each partite set induce a $K_{t,t}$ or a $K_{t,t}-\{e\}$.
\begin{lem}\label{chain_th2}
Let $G=(X\cup Y,E)$ be a bipartite chain graph and let $t$ be the maximum integer such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph. Then $G[X_t\cup Y_t]=K_{t,t}$ or $G[X_t\cup Y_t]=K_{t,t}-\{e\}$, where $X_t=\{x_1,x_2, \ldots ,x_t\}$ and $Y_t=\{y_1,y_2,\ldots ,y_t\}$ (note that necessarily $t\leq \min\{m,n\}$).
\end{lem}
\begin{proof}
First, let us assume that $t$ is the maximum integer such that $G$ contains a $K_{t,t}$, i.e., there exist $X'\subseteq X$ and $Y'\subseteq Y$ such that $G[X'\cup Y']=K_{t,t}$. If $X'=X_t$ and $Y'=Y_t$, then we are done. Otherwise, one of the following cases is true:
\begin{ca}
$X'\not= X_t$ and $Y'\not=Y_t$
\end{ca}
Since $X'\not=X_t$, there exist $p$ and $q$ with $p<q$ such that $x_p\notin X', x_q\in X'$ and $x_p\in X_t, x_q\notin X_t$ and since $Y'\not=Y_t$, there exist $i$ and $j$ with $i<j$ such that $y_i\notin Y', y_j\in Y'$ and $y_i\in Y_t, y_j\notin Y_t$. Since $i<j$, we have $N(y_i)\supseteq N(y_j)$ and since $y_j\in Y'$ and $G[X'\cup Y']$ is a complete bipartite graph, we have $N(y_j) \supseteq X'$. Therefore, $N(y_i)\supseteq X'$. Similarly, we have $N(x_p)\supseteq Y'$. Moreover, since $y_i\in N(x_q)$ and $N(x_q) \subseteq N(x_p)$, we have $x_py_i\in E$. Therefore, $G[X' \cup Y' \cup \{x_p,y_i\}]$ induces a $K_{t+1,t+1}$ which contradicts the maximality of $t$.
\begin{ca}
$X'=X_t$ and $Y'\not=Y_t$
\end{ca}
Since $Y'\not=Y_t$, with similar arguments as above, we have $N(y_i)\supseteq X'$. This implies that $G[X_t \cup Y''] = K_{t,t}$, where $Y''=(Y'-\{y_j\})\cup \{y_i\}$. Note that $Y''$ has one more vertex common in $Y_t$ than $Y'$. So repeating this process, we finally have $G[X_t\cup Y_t]=K_{t,t}$.
\begin{ca}
$X'\not=X_t$ and $Y'=Y_t$
\end{ca}
Since $X'\not=X_t$, with similar arguments, we have $N(x_p)\supseteq Y'$. This implies that $G[X'' \cup Y_t] = K_{t,t}$, where $X''=(X'-\{x_q\})\cup \{x_p\}$. Note that $X''$ has one more vertex common in $X_t$ than $X'$. So repeating this process, we finally have $G[X_t\cup Y_t]=K_{t,t}$.
Hence, if $t$ is the maximum integer such that $G$ contains a $K_{t,t}$, then $G[X_t\cup Y_t]=K_{t,t}$.
Next, we assume that $t$ is the maximum integer such that $G$ contains a $K_{t,t}- \{e\}$ but not a $K_{t,t}$. This implies that $t-1$ is the maximum integer such that $G$ contains $K_{t-1,t-1}$. So, by the previous result, $G[X_{t-1}\cup Y_{t-1}]=K_{t-1,t-1}$. We show that $G[X_t\cup Y_t]= K_{t,t}-\{e\}$. Note that $N(x_t)$ contains all the vertices of the set $\{y_1,y_2, \ldots, y_{t-1}\}$. Indeed, if there exists $y_i \in \{y_1,y_2, \ldots, y_{t-1}\}$ such that $y_i\notin N(x_t)$, then $y_i\notin N(x_j)$ for all $j\geq t$. This implies that $N(y_i) \cap \{x_t,x_{t+1}, \ldots, x_m\} = \emptyset$, and therefore $N(y_j) \cap \{x_t,x_{t+1}, \ldots, x_m\} = \emptyset$ for all $j>i$. Hence, $G$ cannot contain $K_{t,t}-\{e\}$, a contradiction. Similarly, $N(y_t)$ contains all the vertices of the set $\{x_1,x_2, \ldots, x_{t-1}\}$. Also, note that $x_ty_t\notin E$, because otherwise $G[X_t \cup Y_t]= K_{t,t}$, which is a contradiction. Hence, in this case, $G[X_t \cup Y_t]= K_{t,t}-\{e\}$.
Hence, if $t$ is the maximum integer such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph, then $G[X_t\cup Y_t]=K_{t,t}$ or $G[X_t\cup Y_t]=K_{t,t}-\{e\}$.
\end{proof}
The following algorithm finds the maximum integer $t$ such that $G$ contains either $K_{t,t}$ or $K_{t,t}-\{e\}$ as an induced subgraph.
\begin{algorithm}[h]
\caption{MaxIndex($G$)}
\begin{algorithmic}[1]
\State \textbf{Input:} A bipartite chain graph $G=(X\cup Y,E)$ with $X=\{x_1,x_2,\ldots ,x_{m}\}$ and $Y=\{y_1,y_2, \ldots ,y_{n}\}$ such that $N(x_m)\subseteq N(x_{m-1})\subseteq \ldots \subseteq N(x_2)\subseteq N(x_1)$ and $N(y_n)\subseteq N(y_{n-1})\subseteq \ldots \subseteq N(y_2)\subseteq N(y_1)$.
\State \textbf{Output:} Maximum $t$ such that $G$ contains either a $K_{t,t}$ or a $K_{t,t}-\{e\}$ as an induced subgraph.
\State $ i \gets 1$, $t \gets 0$
\While { $i \leq \min\{m,n\}$ and $x_iy_i\in E$}
\State $i=i+1$
\EndWhile
\State $j \gets i$
\If {$x_jy_{j-1}\in E$ \& $x_{j-1}y_j\in E$ }
\State $t \gets j$~~~~~~~~~~~~~~~~~~~~~ [$G$ contains $K_{t,t}-\{e\}$]
\Else
\State $t\gets j-1$~~~~~~~~~~~~~~~~~[$G$ contains $K_{t,t}$]
\EndIf
\State \Return($t$)
\end{algorithmic}
\end{algorithm}
Since the chain ordering can be computed in linear time and the condition in line 8 can be checked in constant time, the above algorithm runs in linear time. Hence we have the following theorem.
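For illustration, a direct Python transcription of Algorithm MaxIndex, combined with Lemma \ref{chain_th1} so that the returned value is $Tr(G)=t+1$, may look as follows (a sketch of ours; the chain ordering is assumed to be given, and out-of-range index pairs are treated as non-adjacent).
\begin{verbatim}
def transitivity_of_chain_graph(X, Y, edge_set):
    """X, Y: the two sides listed in chain order (largest
    neighbourhoods first).  edge_set: set of frozensets {x, y}.
    Returns Tr(G) = t + 1, with t as in Algorithm MaxIndex."""
    def has_edge(i, j):            # 1-based; out of range => no edge
        if not (1 <= i <= len(X) and 1 <= j <= len(Y)):
            return False
        return frozenset((X[i - 1], Y[j - 1])) in edge_set

    i = 1
    while has_edge(i, i):          # scan the diagonal pairs x_i y_i
        i += 1
    j = i
    if has_edge(j, j - 1) and has_edge(j - 1, j):
        t = j                      # G contains K_{t,t} - {e}
    else:
        t = j - 1                  # G contains K_{t,t}
    return t + 1

# Example: K_{2,2} minus an edge; its transitivity is 3.
X, Y = ['x1', 'x2'], ['y1', 'y2']
E = {frozenset(e) for e in [('x1','y1'), ('x1','y2'), ('x2','y1')]}
print(transitivity_of_chain_graph(X, Y, E))   # 3
\end{verbatim}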
\begin{theo}
The transitivity of a bipartite chain graph can be computed in linear time.
\end{theo}
\section{Characterization of graphs with $Tr(G)\geq t$}
In \cite{hedetniemi2018transitivity}, the authors showed that the transitivity of a graph $G$ is greater than or equal to $3$ if and only if $G$ contains either $K_3$, an induced $P_4$, or an induced $C_4$. They also posed the following open questions:
\textbf{Question 1.}
What is a necessary and sufficient condition for a graph $G$ to have $Tr(G)\geq 4$?
\textbf{Question 2.}
What is a necessary and sufficient condition for a graph $G$ to have $Tr(G)= 3$?
\noindent In this section, we present a necessary and sufficient condition for a graph $G$ to have $Tr(G)\geq t$, for any integer $t$. As a consequence of this result, we get a necessary and sufficient condition for a graph $G$ to have $Tr(G)= 3$.
Our characterization of graphs with $Tr(G)\geq t$ is based on a result of M. Zaker, which characterizes the graphs with Grundy number greater than or equal to $t$. Zaker introduced the notion of a \emph{$t$-atom} and proved that $\Gamma(G)\geq t$ if and only if $G$ contains (with respect to the \emph{canonical partition}) a $t$-atom \cite{zaker2006results}. For transitivity, we show that the characterization can be done with the subgraph containment relation. For the sake of completeness, we state the definition of a \emph{$t$-atom} in this article.
\begin{defi}[\cite{zaker2006results} ]
A $t$-atom is defined in a recursive way as follows:
\begin{itemize}
\item The only $1$-atom is $K_1$.
\item The only $2$-atom is $K_2$.
\item Let $H=(V,E)$ be any $(t-1)$-atom with $n$ vertices. Consider an independent set $\overline{K_r}$ on $r$ vertices, for any $r\in \{1,2,\ldots, n\}$. For that fixed $r$, consider an $r$-vertex subset $W$ of $V$ and draw a perfect matching between the vertices of $\overline{K_r}$ and $W$. Then join each vertex of $V\setminus W$ to one (and only one) arbitrary vertex of $\overline{K_r}$. The resulting graph $G$ is a $t$-atom.
\end{itemize}
\end{defi}
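One step of this recursion is easy to express in code. The following Python sketch (our illustration) builds a single $t$-atom from a given $(t-1)$-atom, for one particular choice of $r$, $W$ and attachment map.
\begin{verbatim}
def atom_step(V, E, r, W, attach):
    """(V, E): a (t-1)-atom; r: size of the new independent set;
    W: list of r vertices of V matched to the new vertices;
    attach: maps each vertex of V \ W to the index (0..r-1) of the
    unique new vertex it is joined to.  Returns a t-atom."""
    new = ['u%d' % i for i in range(r)]        # independent set K_r
    E2 = set(E)
    E2.update(frozenset((new[i], W[i])) for i in range(r))  # matching
    for v in set(V) - set(W):
        E2.add(frozenset((v, new[attach[v]])))  # one edge per vertex
    return list(V) + new, E2

# K_2 is the only 2-atom.  r = 1 yields the 3-atom K_3, while r = 2
# (a perfect matching, nothing left to attach) yields the 3-atom P_4.
V, E = ['a', 'b'], {frozenset(('a', 'b'))}
print(atom_step(V, E, r=1, W=['a'], attach={'b': 0}))   # K_3
print(atom_step(V, E, r=2, W=['a', 'b'], attach={}))    # P_4
\end{verbatim}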
Let $\mathcal{A}_t$ denote the class of $t$-atoms. The following figure illustrates the construction of all $3$-atoms.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.85]{atoms.pdf}
\caption{Construction of all $3$-atoms from $2$-atom}
\label{Fig:atoms}
\end{figure}
Next, we prove the main theorem of this section, which characterizes the graphs with transitivity greater than or equal to $t$, for any integer $t$.
\begin{theo}\label{theo:GEQ4}
\label{theo}
For an integer $t$, the transitivity of a graph $G$ is greater than or equal to $t$ if and only if $G$ contains a $t$-atom as a subgraph.
\end{theo}
\begin{proof}
Suppose $G$ contains a $t$-atom, say $H$, as a subgraph. We show, by induction, that the transitivity of $H$ is at least $t$. Clearly, the transitivities of $K_1$ and $K_2$ are $1$ and $2$, respectively. Let the $t$-atom be constructed from a $(t-1)$-atom, say $H'$, and an independent set $\overline{K}_r$. By the induction hypothesis, $Tr(H')\geq t-1$. Let us assume that $\{U_1, U_2, \ldots, U_{t-1}\}$ is a transitive $(t-1)$-partition of $H'$. By the definition of a $t$-atom, $\overline{K}_r$ dominates every vertex of $H'$. Therefore, $\{\overline{K}_r, U_1, U_2, \ldots, U_{t-1}\}$ forms a transitive $t$-partition of $H$. Hence, the transitivity of $H$ is at least $t$. Since $H$ is a subgraph of $G$, $Tr(G)\geq Tr(H)\geq t$.
Conversely, let us assume that the transitivity of $G=(V,E)$ is greater than or equal to $t$. Therefore, $G$ has a transitive $t$-partition \cite{hedetniemi2018transitivity}. Let $\{V_1,V_2,\ldots,V_t\}$ be a transitive $t$-partition of $G$. Once again, by induction, we show that $G$ contains a $t$-atom. For the base case $t=1$, the statement is trivially true. Note that $Tr(G')\geq t-1$, where $G'=G\setminus V_1$. By the induction hypothesis, $G'$ contains a $(t-1)$-atom, say $H=(V_H, E_H)$. Now, $V_1$ is a dominating set of $G$; therefore $V_1$ dominates every vertex of $V_H$. Let us consider a subset of vertices $B\subseteq V_1$ such that $B$ dominates every vertex of $V_H$ and $\lvert B \rvert \leq \lvert V_H \rvert$. By Hall's theorem \cite{diestel2005graph}, there exists a matching, say $M$, between $B$ and $V_H$ of size $|B|$. Let $W$ be the set of endpoints of $M$ that are in $V_H$. Since $B$ dominates every vertex of $V_H$, every vertex of $V_H\setminus W$ has a neighbour in $B$. Let $X$ be a set of edges between $B$ and $V_H\setminus W$ such that every vertex of $V_H\setminus W$ is incident to exactly one edge of $X$. Now, a $t$-atom can be obtained from $G$ by removing the vertices $V\setminus (B \cup V_H)$ and removing all the edges $E \setminus (E_H \cup M\cup X)$.
\end{proof}
\begin{remk}\label{rem:UsingSubgraph}
\textnormal{Note that the class of graphs that contain $K_3$ or $P_4$ as a subgraph is equivalent to the class of graphs that contain $K_3$ or $P_4$ or $C_4$ as an induced subgraph. Therefore, the characterization of graphs with transitivity greater than or equal to $3$, proved in \cite{hedetniemi2018transitivity}, is a special case of Theorem \ref{theo:GEQ4}.}
\end{remk}
Next we list all non-isomorphic members of $\mathcal{A}_4$. Figures \ref{fig:K_3subgraphs} and \ref{fig:P_4subgraphs} illustrate the $4$-atoms that can be obtained from the two $3$-atoms, namely $K_3$ and $P_4$, respectively.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.65]{4atom_K_3.pdf}
\caption{Non-isomorphic $4$-atoms obtained from $K_3$}
\label{fig:K_3subgraphs}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.65]{4atom_P_4.pdf}
\caption{Non-isomorphic $4$-atoms obtained from $P_4$}
\label{fig:P_4subgraphs}
\end{figure}
Note that the graph $\alpha_2$ is a subgraph of $\beta_2$. This implies that the class of graphs containing a member of $\mathcal{A}_4$ is the same as the class of graphs containing a member from $\mathcal{A}$, where $\mathcal{A}= \mathcal{A}_4 \setminus \{\beta_2\}$. Hence, we have the following corollary, which answers \emph{Question 1}.
\begin{coro}\label{coro:geq4}
The transitivity of a graph $G$ is greater than or equal to $4$ if and only if $G$ contains one of the graphs from $\mathcal{A}$ as a subgraph.
\end{coro}
Since, by Remark \ref{rem:UsingSubgraph}, the transitivity of a graph $G$ is greater than or equal to $3$ if and only if $G$ contains either $K_3$ or $P_4$ as a subgraph, we have the following corollary, which answers \emph{Question 2}.
\begin{coro}\label{coro:eq3}
The transitivity of a graph $G$ is equal to $3$ if and only if $G$ contains either $K_3$ or $P_4$ as a subgraph but does not contain any graph from $\mathcal{A}$ as a subgraph.
\end{coro}
\begin{remk}
Since, for a fixed graph $H$, one can check in polynomial time whether $H$ is a subgraph of a given graph $G$, the conditions in Corollaries \ref{coro:geq4} and \ref{coro:eq3} can also be checked in polynomial time.
\end{remk}
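For instance, for a fixed pattern $H$ (such as $K_3$, $P_4$, or one of the graphs in $\mathcal{A}$), a brute-force search over injective maps already gives an $O(n^{|V(H)|})$-time test; a naive Python sketch of ours:
\begin{verbatim}
from itertools import permutations

def contains_subgraph(adj, H_edges, H_vertices):
    """True iff G (dict: vertex -> set of neighbours) contains H as a
    (not necessarily induced) subgraph; brute force, fine for fixed H."""
    for image in permutations(adj, len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all(phi[v] in adj[phi[u]] for (u, v) in H_edges):
            return True
    return False

# By the remark above, Tr(G) >= 3 iff G contains K_3 or P_4 as a subgraph.
K3 = ([(0, 1), (1, 2), (0, 2)], [0, 1, 2])
P4 = ([(0, 1), (1, 2), (2, 3)], [0, 1, 2, 3])
G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}     # G is itself a P_4
print(any(contains_subgraph(G, *H) for H in (K3, P4)))   # True
\end{verbatim}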
\section{Conclusion}
In this paper, we have shown that the \textsc{Maximum Transitivity Decision Problem} is NP-complete for bipartite graphs. On the positive side, we have demonstrated that the transitivity of a given bipartite chain graph can be computed in linear time. We have also provided a necessary and sufficient condition for a graph to have transitivity greater than or equal to $t$, for any integer $t$. It would be interesting to investigate the complexity status of this problem in other subclasses of bipartite graphs. Designing an approximation algorithm for this problem would be another challenging open problem.
\bibliographystyle{alpha}
\addcontentsline{toc}{section}{Bibliography}
\label{sec1}
Electrical power grids are expected to be reliable at all times. The rise of intermittent renewable generation is making this expectation challenging to live up to. Power imbalances caused by generation intermittency may cause grid stability constraints to be violated:~80\% of the bottlenecks in the European high-voltage grid were already caused by renewables in 2015~\cite{Yang2015b}. A well-controlled power grid matches supply and demand, ensuring that line constraints are not violated. System operators achieve this by making periodic control actions that adapt the operating point of the grid in response to changing conditions~\cite{Ela2011}.
Due to the impact of renewables, a planning that accounts for worst-case behavior may lead to overly conservative solutions. A more realistic paradigm is to deem a planning admissible when the probability that line power flows exceed a threshold is sufficiently small. This has motivated several recent works that attempt to evaluate line failure probabilities using rare event simulation techniques~\cite{Wadman2012,Wadman2013,Shortle2013}, as well as large deviations techniques~\cite{Nesti2016}. Simulation techniques can lead to accurate estimates, but may be too time-consuming to use as a subroutine within an optimization package that has to determine a planning that is operational during the next 5 to 15 minutes, such as optimal power flow (OPF). Recent papers studying chance-constrained versions of OPF problems include~\cite{Bienstock2014,Summers2014}. Large deviations techniques are appealing, but rely on a scaling procedure, essentially assuming that the noise during the next planning period is small.
This article makes a new contribution to this emerging area by deriving approximations for line failure probabilities that are \textit{guaranteed to be conservative}. That is, keeping the approximated failure probability below the target ensures that the reliability constraints are actually met. In addition, these new approximations are explicit enough to be used for optimization purposes on short time scales. In particular, we develop two such approximations in Section~\ref{sec3}. Both bounds lead to an approximation of the capacity region that is conservative, convex and polyhedral, making our results compatible with existing planning methods like OPF~\cite{Bienstock2014,Summers2014}.
This paper is organized as follows. In Section~\ref{sec2} we provide a detailed problem formulation. We model stochastic power injections into the network by means of Gaussian random variables, describe line power flows through the well-known DC approximation, and define the failure probabilities of interest. Our main results are two different upper bounds that we present in Section~\ref{sec3}. The first upper bound is explicit, while the second one is sharper and explicit up to a finite-step minimization procedure. These bounds are compared numerically with the exact safe capacity regions in Section~\ref{sec4}. Section~\ref{sec6} provides the proofs of the results in Section~\ref{sec3}. Concluding remarks are provided in Section~\ref{sec5}.
\section{Problem formulation}
\label{sec2}
\subsection{Network description and DC approximation}
\label{sub:network}
We model the power grid network as a connected graph $G=G(V,E)$, where $V$ denotes the set of \textit{buses} and $E$ the set of directed edges modeling the \textit{transmission lines}. $n=|V|$ is the number of buses and $m=|E|$ is the number of lines. $(i,j) \in E$ denotes the transmission line between buses $i$ and $j$ with \textit{susceptance} $\beta_{i,j}=\beta_{j,i}$. If there is no transmission line between $i$ and $j$ we set $\beta_{i,j}=\beta_{j,i}=0$.
As in~\cite{Cetinay2016,Zocca2016}, the network structure and susceptances are encoded in the \textit{weighted Laplacian matrix}
\[
L_{i,j} :=\begin{cases}
-\beta_{i,j} &\text{if } i \neq j,\\
\sum_{k\neq j} \beta_{i,k} &\text{if } i=j.
\end{cases}
\]
Let $p\in\mathbb{R}^n$ denote the vector of (real) power injections, $\theta\in\mathbb{R}^n$ the vector of phase angles, and $\widetilde{f}\in\mathbb{R}^m$ the vector of (real) power flow over the lines. We will use the convention that $p_i\ge 0$ ($p_i<0$) means that power is generated (consumed, respectively) at bus $i$.
We make use of the \textit{DC approximation}, which is commonly used in transmission system analysis~\cite{Purchala2005,Stott2009,Powell2004,Wood2012}. Thus, the real flow $\widetilde{f}_{i,j}$ over line $(i,j)$ is related to the phase angle difference between buses $i$ and $j$ via the linear relation
\begin{equation}\label{eq:DC_scalar}
\widetilde{f}_{i,j}=\beta_{i,j}(\theta_i-\theta_j).
\end{equation}
We assume a balanced DC power flow, which means that the total net power injected in the network is zero, i.e.
\begin{equation}
\label{eq:zerosum}
\mathbf{1}^Tp=0,
\end{equation}
where $\mathbf{1} \in \mathbb{R}^n$ is the vector with all entries equal to $1$. We enforce this constraint through the concept of \textit{slack bus}.
Following the approach in~\cite{Cetinay2016}, and invoking assumption~\eqref{eq:zerosum}, the relation between $\theta$ and $p$ can be written in matrix form as
\begin{equation}
\label{eq:dcapprox}
\theta =L^+p,
\end{equation}
where $L^+$ is the \textit{Moore-Penrose} pseudo-inverse of the matrix $L$ and an average value of zero has been chosen as a reference for the node voltage phase angles. Choosing an arbitrary but fixed orientation of the transmission lines, the network structure is described by
the \textit{weighted edge-vertex incidence matrix} $B\in\mathbb{R}^{m\times n}$ whose components are
\[
B_{\ell, i}=\begin{cases}
\beta_{i,j} &\text{if } \ell=(i,j),\\
-\beta_{i,j} &\text{if } \ell=(j,i),\\
0 &\text{otherwise}.
\end{cases}
\]
Using such a matrix, we can rewrite identity~\eqref{eq:DC_scalar} as $\widetilde{f} = B \theta$. Combining the latter equation and~\eqref{eq:dcapprox}, the line power flow $\widetilde{f}$ can be written as a linear transformation of the power injections $p$, i.e.
\begin{equation}
\label{eq:tfBLp}
\widetilde{f}=BL^+ p.
\end{equation}
Transmission lines can fail due to overload. We say that a \textit{line overload} occurs in transmission line $\ell$ if $|\widetilde{f}_\ell| > M_\ell$, where $M_\ell$ is the \textit{line capacity}. If this happens, the line may trip, causing a global redistribution of the line power flows which could trigger cascading failures and blackouts.
It is convenient to look at the \textit{normalized line power flow} vector $f \in \mathbb{R}^m$, defined component-wise as $f_\ell:= \widetilde{f}_\ell / M_\ell$ for every $\ell=1,\dots,m$. The relation between line power flows and normalized power flows can be rewritten as $f = D \widetilde{f}$, where $D \in \mathbb{R}^{m \times m}$ is the diagonal matrix $D:=\mathrm{diag}(M_1^{-1}, \dots, M_m^{-1})$. In view of~\eqref{eq:tfBLp}, we have
\begin{equation}
\label{eq:fCp}
f= C p,
\end{equation}
where $C:=DBL^+\in\mathbb{R} ^{m\times n}$. Henceforth, we refer to the normalized power flows simply as power flows, unless specified otherwise.
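In code, the map~\eqref{eq:fCp} amounts to a few matrix operations. The following numpy sketch (an illustration; the $3$-bus data are hypothetical) builds $C$ and evaluates the normalized flows for one balanced injection vector.
\begin{verbatim}
import numpy as np

# Illustrative 3-bus cycle: unit susceptances, line capacities M.
edges = [(0, 1), (1, 2), (2, 0)]
n, m = 3, 3
M = np.array([5.0, 5.0, 5.0])

L = np.zeros((n, n))                     # weighted Laplacian
B = np.zeros((m, n))                     # weighted incidence matrix
for l, (i, j) in enumerate(edges):
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
    B[l, i], B[l, j] = 1.0, -1.0

C = np.diag(1.0 / M) @ B @ np.linalg.pinv(L)   # f = C p

p = np.array([1.0, 0.5, -1.5])           # balanced: sum(p) = 0
f = C @ p                                # normalized line power flows
print(f, f * M)                          # and the real flows B L^+ p
\end{verbatim}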
\subsection{Stochastic power injections and line power flows}
\label{sub:stochasticpower}
In this section we describe our model for the bus power injections. As our focus is on network reliability under uncertainty, we assume that each bus houses a \textit{stochastic power injection} or \textit{load}. This choice allows us to model, for example, intermittent power generation by renewable sources or highly variable load.
In order to guarantee that network balance condition~\eqref{eq:zerosum} is satisfied even with stochastic inputs, we assume that bus $n$ is a \textit{slack bus}, which means that its power injection is chosen in such a way that the vector of actual power injections is a zero-sum vector as required in~\eqref{eq:zerosum}.
More specifically, we assume that the vector of the first $n-1$ power injections $(p_1,\ldots,p_{n-1})$ follows a multivariate Gaussian distribution, with expected value $\mu \in \mathbb{R}^{n-1}$ and covariance matrix $\Sigma \in \mathbb{R} ^{(n-1) \times (n-1)}$. Since the covariance matrix $\Sigma$ is positive semi-definite, the matrix $\sqrt{\Sigma} \in \mathbb{R}^{ (n-1) \times (n-1)}$ is well defined via the Cholesky decomposition of $\Sigma$. We are now able to formally define the vector $p$ of power injections as the $n$-dimensional random vector
\begin{equation}
\label{eq:powerinjections}
p = S (\sqrt{\Sigma} X + \mu),
\end{equation}
where $X \sim \mathcal{N}_{n-1}(\bm{0}, I_{n-1})$ is a $(n-1)$-dimensional standard multivariate Gaussian random variable and $S$ is the matrix
\[
S :=
\left(
\begin{array}{c}
I_{n-1}\\
- \mathbf{1}
\end{array}
\right) \in \mathbb{R}^{n \times (n-1)}.
\]
By construction we have $p=(p_1,\ldots,p_{n-1},- \sum_{i=1}^{n-1}p_i)$, so that~\eqref{eq:zerosum} is satisfied. Note that this formulation allows us to model \textit{deterministic power injections} as well, by means of choosing the corresponding variances and covariances equal to zero (or, from a practical standpoint, equal to very small positive numbers, so that the rank of~$\Sigma$ is not affected).
It is well known that an affine transformation of a multivariate Gaussian random variable is again a multivariate Gaussian random variable. Thus, identity~\eqref{eq:powerinjections} tells us that the power injections $p$ are indeed Gaussian, and hence, in view of~\eqref{eq:fCp}, so are the line power flows $f$. As it is convenient to look at the line power flows $f$ as an affine transformation of \textit{standard independent} Gaussian random variables, combining~\eqref{eq:fCp} and~\eqref{eq:powerinjections}, we can write
\begin{equation}
\label{eq:fXmu}
f=V X + W \mu,
\end{equation}
where $V:= C S \sqrt{\Sigma} \in \mathbb{R}^{m \times (n-1)}$ and $W:= C S \in \mathbb{R}^{m \times (n-1)}$ (recall that $C=DBL^+$ already contains the normalization $D$). We denote by $\nu:=W \mu$ the vector of expected line power flows.
To summarize, we assume that the line power flows $f$ follow a multivariate Gaussian distribution $f\sim\mathcal{N}_{m}(\nu,VV^T)$, where the network topology and the correlation of the power injections are both encoded in the matrix $V$. Note in particular that $f_i \sim \mathcal{N}(\nu_i, \sigma_i^2)$, where the variance can be calculated as
\begin{equation}
\label{eq:defsigmai}
\sigma_i^2:= \sum_{j=1}^{n-1} V_{i,j}^2.
\end{equation}
The main assumption behind our stochastic model is that the power injections are Gaussian. In~\cite[Section 1.5]{Bienstock2014} it is argued how this assumption, although simplifying, is reasonable in order to model buses that house wind farms. Note that, compared to the power injections model in~\cite{Bienstock2014}, our formulation allows for general correlations between stochastic injections, as we do not impose any restrictions on the covariance matrix $\Sigma$. Section~\ref{sec5} contains a discussion of the extent to which our assumptions may be relaxed.
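The model is straightforward to simulate. Continuing the $3$-bus numpy sketch above (reusing $n$ and $C$; the values of $\mu$ and $\Sigma$ below are hypothetical), the following lines draw Monte Carlo samples of the line flows $f$.
\begin{verbatim}
rng = np.random.default_rng(0)
mu = np.array([0.3, -0.2])                  # hypothetical averages
Sigma = np.array([[0.5, 0.1], [0.1, 0.5]])  # hypothetical covariances

S = np.vstack([np.eye(n - 1), -np.ones(n - 1)])  # slack-bus matrix
W = C @ S
V = W @ np.linalg.cholesky(Sigma)           # f = V X + W mu
nu = W @ mu                                 # expected line flows
sigma = np.linalg.norm(V, axis=1)           # st. dev. of each f_i

X = rng.standard_normal((10**5, n - 1))     # i.i.d. standard Gaussians
f = X @ V.T + nu                            # samples of the flows
print(np.mean(np.abs(f).max(axis=1)))       # crude estimate of E max|f_i|
\end{verbatim}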
\subsection{Line failure probabilities}
\label{sub:problemstatement}
The main goal of the present paper is to understand how the probability of an overload violation depends on the parameters of the system, and to characterize which average power injection vectors $\mu$ make such a probability smaller than a desired target.
In view of the definition of line overload given in Subsection~\ref{sub:network}, we define the \textit{line failure event} $\mathcal{L}$ as
$
\mathcal{L} := \left \{ \exists \, \ell=1,\dots,m ~:~ |\widetilde{f}_\ell| \geq M_\ell \right \}.
$
Leveraging the normalized line power flows that we introduced earlier, we can equivalently rewrite $\mathcal{L}$ as
\[
\mathcal{L}= \left \{\max_i|f_i| \geq 1\right \}.
\]
Given a power injection covariance matrix $\Sigma$, define the \textit{risk level} $r(\mu)$ associated with a power injection profile $\mu$ as
\[
r(\mu):= \mathbb{E} \max_i |f_i|.
\]
Given a covariance matrix $\Sigma$, the risk level is a well-defined function $r: \mathbb{R}^{n-1} \to \mathbb{R}$ of the average injection vector $\mu$, since in view of~\eqref{eq:fXmu} we can rewrite $ r(\mu) = \mathbb{E} \max_i |V_i X + W_i \mu|$, where $V_i$ and $W_i$ denote the $i$-th row of the matrices $V$ and $W$, respectively.
We aim to characterize for a \textit{given} covariance matrix $\Sigma$ the average power injection vectors $\mu$ that make line failures a \textit{rare event}, say $\P(\mathcal{L}) \leq q$ for some very small threshold $q \in (0,1)$ to be set by the network operator (think of $q=10^{-5}$ or $q=10^{-6}$). In other words, given $q \in (0,1)$, we aim to determine the region $\cR^{\mathrm{true}}_q \subset \mathbb{R}^{n-1}$ defined by
\[
\cR^{\mathrm{true}}_q:=\{\mu\in \mathbb{R}^{n-1} : \P(\mathcal{L}) \leq q\}.
\]
For every given $\mu \in \mathbb{R}^{n-1}$, calculating the probability $\mathbb P_{\mu}(\mathcal{L})$ exactly means solving a high-dimensional integral, a task that is also unavoidably error-prone, since the integrand quickly becomes extremely small (containing a multivariate Gaussian density). Hence, finding the region $\cR^{\mathrm{true}}_q$ exactly is a very computationally expensive and error-prone task.
This is the main motivation of the present work, in which we develop analytic tools which are explicit enough to be useful for planning and control of power grids in the short-term. More specifically, in the next section we propose \textit{capacity regions} that can be calculated much faster and that can be used to approximate $\cR^{\mathrm{true}}_q$.
\section{Main results}
\label{sec3}
This section is entirely devoted to the three new capacity regions $\cR^{\mathrm{up}}_q, \cR^{\star}_q$, and $\cR^{\mathrm{c.i.}}_q$ that we introduce to approximate $\cR^{\mathrm{true}}_q$. We first introduce the probabilistic upper bounds on which our method is based in Subsection~\ref{sub31}, then formally define the regions $\cR^{\mathrm{up}}_q, \cR^{\star}_q$, and $\cR^{\mathrm{c.i.}}_q$ in Subsection~\ref{sub32}, and lastly discuss the trade-offs between these different regions in Subsection~\ref{sub33}.
\subsection{Concentration inequalities}
\label{sub31}
Our methodology relies on a well-known \textit{concentration bound} for a function of Gaussian random variables.
Concentration bounds describe the likelihood that a function of many random variables deviates from its expected value. In our context, we are interested in understanding how likely the random variable $\max_i |f_i|$ is to deviate from its expected value $r(\mu) =\mathbb{E} \max_i |f_i|$.
Many concentration bounds have been proved, see~\cite[Chapter 2]{Wainwright2015} for an overview. In our setting, we require Proposition~\ref{prop:borell}, which is presented and proved later in Section~\ref{sec6}. The next theorem presents an explicit upper bound for the line failure probability in terms of $r(\mu)=\mathbb{E} \max_i|f_i|$ and the variances $\sigma^2_1, \dots, \sigma^2_m$ of the line power flows that can be derived using the aforementioned concentration bound.
\begin{thm}[Upper bound for line failure probability]\label{thm:concentration}$ $\\
If $r(\mu)< 1$, then
\begin{equation}
\label{eq:bound_C}
\P(\mathcal{L}) \leq \exp\Bigl(-\frac{(1-r(\mu))^2}{2 \max_i \sigma_i^2}\Bigr).
\end{equation}
\end{thm}
Note that $\mathbb{E} \max_i |f_i|=r(\mu)>1$ is definitely not a desirable operational regime for the power grid, since line failures are not rare events anymore.
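Numerically, evaluating the bound is immediate once an estimate of $r(\mu)$ (for instance, the Monte Carlo estimate from the sketch in Section~\ref{sec2}) and the standard deviations $\sigma_i$ are available; a small sketch with hypothetical inputs:
\begin{verbatim}
import numpy as np

def failure_bound(r_mu, sigma):
    """Concentration upper bound on P(L); valid only when r_mu < 1."""
    assert r_mu < 1.0
    return np.exp(-(1.0 - r_mu)**2 / (2.0 * np.max(sigma)**2))

print(failure_bound(0.6, np.array([0.08, 0.10, 0.05])))   # ~ 3.4e-04
\end{verbatim}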
\subsection{Capacity regions}
\label{sub32}
Given $q \in (0,1)$, the region $\cR^{\mathrm{c.i.}}_q$ is defined as the set of all average power injection vectors $\mu$ such that the upper bound for $\P(\mathcal{L})$ given by the concentration inequality~\eqref{eq:bound_C} is smaller than or equal to $q$, i.e.
\[
\cR^{\mathrm{c.i.}}_q := \left \{\mu\in \mathbb{R}^{n-1} ~:~ \exp\Bigl(-\frac{(1-r(\mu))^2}{2 \max_i \sigma_i^2}\Bigr) \leq q \right \},
\]
which can be rewritten as
\[
\cR^{\mathrm{c.i.}}_q = \left \{\mu\in \mathbb{R}^{n-1} ~:~ r(\mu) \leq 1-\max_i \sigma_i \sqrt{2\log q^{-1}} \right \}.
\]
Unfortunately, the exact calculation of $r(\mu)$ is computationally expensive, for the same reasons outlined at the end of Section~\ref{sec2}. Furthermore, we want to have a better analytic understanding of the dependency of $r(\mu)$ on the power injection averages $\mu$, on the network topology and on the variances $\sigma_i$, something that is hard to obtain from a purely numerical procedure. Aiming to overcome these issues, we propose an explicit upper bound for $r(\mu)$, namely
\begin{equation}
\label{eq:rup}
r(\mu) \leq r^{\mathrm{up}}(\mu):= \max_i |\nu_i| + \max_i \sigma_i \sqrt{2\log(2m)},
\end{equation}
where we recall that $\nu=W\mu$ is the vector of average line power flows. The bound in~\eqref{eq:rup} is proven in Lemma~\ref{lem:upperbound_rmu} and can be used to obtain the following sub-region of $\cR^{\mathrm{c.i.}}_q$
\[
\cR^{\mathrm{up}}_q:=\left \{\mu\in \mathbb{R}^{n-1} ~:~ r^{\mathrm{up}}(\mu) \leq 1-\max_i \sigma_i \sqrt{2\log q^{-1} } \right \},
\]
which can be rewritten explicitly as
\begin{align*}
\cR^{\mathrm{up}}_q=\Bigl\{\mu & \in \mathbb{R}^{n-1} ~:~ \max_i |\nu_i| \leq \\
& \leq 1-\max_i \sigma_i \bigl(\sqrt{2\log q^{-1}}+\sqrt{2\log(2m)}\bigr)\Bigr\}.
\end{align*}
In terms of $\mu$ we see that $\cR^{\mathrm{up}}_q$ is an intersection of half-spaces, and so $\cR^{\mathrm{up}}_q$ is convex and polyhedral.
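Checking whether $\mu\in\cR^{\mathrm{up}}_q$ thus amounts to verifying $m$ linear inequalities; a sketch (with $W$ and the vector of standard deviations $\sigma$ computed as in the sampling sketch of Section~\ref{sec2}):
\begin{verbatim}
import numpy as np

def in_R_up(mu, W, sigma, q, m):
    """True iff mu lies in the conservative polyhedral region R^up_q."""
    nu = W @ mu                               # expected line flows
    slack = np.sqrt(2 * np.log(1 / q)) + np.sqrt(2 * np.log(2 * m))
    return np.max(np.abs(nu)) <= 1 - np.max(sigma) * slack

# e.g. in_R_up(mu, W, sigma, q=1e-4, m=3) with the objects above
\end{verbatim}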
A refinement of our analysis (see Lemma~\ref{lem:upperbound_rmu}) shows that it is possible to obtain a sharper upper bound $r^\star(\mu)$ for $r(\mu)$,
\[
r(\mu) \leq r^\star(\mu) \leq r^{\mathrm{up}}(\mu),
\]
which results in the following region
\[
\cR^{\star}_q:=\left \{\mu\in \mathbb{R}^{n-1} ~:~ r^\star(\mu) \leq 1-\max_i \sigma_i\sqrt{2\log q^{-1}} \right \}.
\]
Unfortunately, there is no analytic expression for $r^\star(\mu)$, but in Section~\ref{sec6} we show that calculating $r^\star(\mu)$ requires only the evaluation of a function at a finite number of points, making it a numerically viable approach; moreover, the resulting capacity region remains convex and polyhedral. Summarizing, we have
\begin{thm}[Inclusions among capacity regions]\label{thm:regions}
Given $q \in (0,1)$, if $r(\mu)<1$, then the following inclusions hold:
\begin{equation}\label{eq:inclusion}
\cR^{\mathrm{up}}_q \subseteq \cR^{\star}_q \subseteq \cR^{\mathrm{c.i.}}_q \subseteq \cR^{\mathrm{true}}_q.
\end{equation}
\end{thm}
\subsection{Discussion}
\label{sub33}
We can guarantee that a line overload is a sufficiently rare event by enforcing that the risk level $r(\mu)$ is at most $1-\max_{i}\sigma_i\sqrt{2\log(1/q)}$. This approach has the merit of providing a capacity region $\cR^{\mathrm{c.i.}}_q$ that can be expressed as a simple linear condition on the risk level $r(\mu)$, but has the drawback that it requires the computation of $r(\mu)$, a non-trivial task.
The smaller region $\cR^{\mathrm{up}}_q$, although more conservative, is expressed in closed form and, moreover, its dependency on the parameters $\nu,\sigma$ and $m$ is made explicit. In particular, the maximum standard deviation of the power flows, i.e.~$\max_i \sigma_i$, plays a major role in defining the capacity regions: indeed, larger values of $\max_i \sigma_i$ correspond to smaller regions, which is intuitive, since a bigger variance results in a higher probability of overload. In between the two regions $\cR^{\mathrm{up}}_q$ and $\cR^{\mathrm{c.i.}}_q$ lies the intermediate region $\cR^{\star}_q$, which is less conservative than $\cR^{\mathrm{up}}_q$ and can be computed very efficiently, even if it cannot be expressed in closed form (see Section~\ref{sec6} for more details).
Both regions $\cR^{\mathrm{up}}_q$ and $\cR^{\star}_q$ seem sufficiently explicit to be used as probabilistic constraints in chance-constrained versions of OPF problems, as studied in~\cite{Bienstock2014,Summers2014}.
\section{Numerical case studies}
\label{sec4}
To illustrate how the three new regions compare to $\cR^{\mathrm{true}}_q$, we consider first a very simple network with a circuit topology, consisting of $3$ buses, all connected with each other by $3$ identical lines of unit susceptance and capacity $M=5$. We take the power injections in the non-slack nodes to be independent, zero-mean Gaussian random variables with variance $\epsilon=0.5$, which corresponds to taking $\mu=(0,0)$ and $\Sigma= \epsilon I_2$. The corresponding four safe capacity regions with $q=10^{-3}$ are plotted in Figure~\ref{fig:3bus}.
We then plot in Figure~\ref{fig:14bus} the two-dimensional capacity regions $\cR^{\mathrm{up}}_q$ and $\cR^{\star}_q$ for the IEEE 14-bus test network (representing a portion of the American Electric Power System~\cite{Christie2006}) corresponding to bus $6$ and $9$. We replace the deterministic power injections with Gaussian random variables with average $\mu$ equal to the original deterministic values and variance $\epsilon=2\cdot 10^{-2}$. The line capacities have been chosen to be equal to $1.5$ times the average line power flow $\nu=W\mu$, and we used $q=10^{-4}$. The data for $\mu$, line susceptances and network topology have been extracted from the MATPOWER package~\cite{Zimmerman2011}. The regions $\cR^{\mathrm{c.i.}}_q$ and $\cR^{\mathrm{true}}_q$ have been omitted since the calculations were intrinsically computationally unstable, as argued at the end of Section~\ref{sec2}.
Note that our capacity regions are indeed convex and polyhedral.
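For completeness, the following self-contained script (our sketch) sets up the $3$-bus example of Figure~\ref{fig:3bus}; note that the naive Monte Carlo estimate of $\P(\mathcal{L})$ at $\mu=(0,0)$ prints $0.0$ here, which illustrates why direct simulation of $\cR^{\mathrm{true}}_q$ is delicate for small $q$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, eps, Mcap = 3, 3, 0.5, 5.0            # data of the 3-bus example
edges = [(0, 1), (1, 2), (2, 0)]
L = np.zeros((n, n)); B = np.zeros((m, n))
for l, (i, j) in enumerate(edges):
    L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1
    B[l, i], B[l, j] = 1, -1
C = (B @ np.linalg.pinv(L)) / Mcap          # here D = I / Mcap
S = np.vstack([np.eye(n - 1), -np.ones(n - 1)])
V = C @ S @ np.linalg.cholesky(eps * np.eye(n - 1))

X = rng.standard_normal((10**6, n - 1))
f = X @ V.T                                 # mu = (0, 0), so W mu = 0
print(np.mean(np.abs(f).max(axis=1) >= 1))  # Monte Carlo P(L): 0.0
\end{verbatim}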
\vspace{-0.1cm}
\begin{figure}[!h]
\centering
\subfloat[3-bus cycle network]{\includegraphics[scale=0.202]{4regions_3bus.pdf} \label{fig:3bus}}
\hspace{0.0cm}
\subfloat[IEEE 14-bus network]{\includegraphics[scale=0.202]{2regions_14bus.pdf} \label{fig:14bus}}
\caption{Capacity regions comparison}
\label{fig:regions}
\end{figure}
\section{Mathematical tools}
\label{sec6}
\begin{prop}[Unilateral concentration inequality for the maximum of multivariate Gaussian random variables]\label{prop:borell}$ $\\
Let $X=(X_1,\ldots,X_k) \sim \mathcal{N}_{k}(\mu,\Sigma)$ be a multivariate Gaussian random variable, and let $\delta_i :=\sqrt{\Sigma_{i,i}}$ be the standard deviation of $X_i$, $i=1,\ldots,k$. The following concentration inequality holds for every $s\geq 0$:
\[
\mathbb P \left ( \max_i |X_i| -\mathbb{E} \max_i |X_i| \geq s \right ) \leq \exp \left (-\frac{s^2}{2 \max_i \delta_i^2}\right).
\]
\end{prop}
\begin{proof}
The multivariate Gaussian vector $X$ can be seen as an affine transformation $X=\sqrt{\Sigma}Z+\mu$ of a standard Gaussian vector $Z\sim \mathcal{N}_{k}(0,I_k)$. Then we apply~\cite[Theorem 2.4]{Wainwright2015} to the random vector $Z$, choosing the function $h:\mathbb{R}^k\to\mathbb{R}$ that maps $Z$ into $h(Z) :=\max_{i=1,\ldots,k} |(\sqrt{\Sigma})_i Z+\mu_i|$. A straightforward computation shows that $h$ is a Lipschitz function with Lipschitz constant equal to $\max_{i=1,\ldots,k} \delta_i$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:concentration}]
Write
\[
\P(\mathcal{L}) =\P \Big (\max_i|f_i|-\mathbb{E} \max_i|f_i|\geq 1-\mathbb{E} \max_i|f_i|\Big ).
\]
Set $s:=1-\mathbb{E} \max_i|f_i| >0$ and apply Proposition~\ref{prop:borell} to $f$. Inequality~\eqref{eq:bound_C} follows as the standard deviation of $f_i$ is equal to $\sigma_i$, in view of definition~\eqref{eq:defsigmai}.
\end{proof}
\begin{lem}[Upper bounds for the risk level]\label{lem:upperbound_rmu}
Let $r(\mu):=\mathbb{E} \max_i|f_i|$, and define
\[
r^\star(\mu):=\inf_{s\in(0,+\infty)} \left\{ \frac{\log(2m)}{s}+ \max_{i=1,\dots,m} \left ( \frac{\sigma_i^2}{2} s + |\nu_i| \right ) \right \}.
\]
Then
\begin{equation}
\label{eq:rmubounds}
r(\mu) \leq r^\star(\mu)\leq \max_i |\nu_i| + \max_i \sigma_i \sqrt{2 \log(2m)}.
\end{equation}
\end{lem}
\begin{proof}
Take $2m$ random variables $Y_1,\dots,Y_{2m}$ defined as
\[
Y_j:=
\begin{cases}
f_j & \text{ if } j=1,\dots,m,\\
-f_{j-m} &\text{ if } j=m+1,\dots,2m.
\end{cases}
\]
From the definition of these random variables it immediately follows that $\max_{i=1,\dots,m} | f_i | = \max_{j=1,\dots,2m} Y_j$ and therefore $ \mathbb{E} \max_{i} | f_i | = \mathbb{E} \max_{j} Y_j$. Note that
\begin{equation}
\lambda_j := \mathbb{E} Y_j=
\begin{cases}
\nu_j & \text{ if } j=1,\dots,m,\\
-\nu_{j-m} &\text{ if } j=m+1,\dots,2m,
\end{cases}
\end{equation}
and $ \mathrm{Var} Y_j= \mathrm{Var} Y_{j+m} = \sigma_j^2$ for every $j =1, \dots,m$. For every $j=1,\dots,2m$ let $m_j(s):=\mathbb{E} \left ( \mathrm{e}^{s Y_j} \right ) = \mathrm{e}^{\sigma_j^2 s^2 /2 + \lambda_j s}$ be the moment generating function of the random variable $Y_j$. Following~\cite{Dasarathy2011}, for any $s \geq 0$ we have
\[
\mathrm{e}^{s \, \mathbb{E} \max_j Y_j} \hspace{-0.075cm} \leq \mathbb{E} ( \mathrm{e}^{s \max_j Y_j} ) = \hspace{-0.075cm} \sum_{j=1}^{2m} m_j(s) \leq 2m \max_j \mathbb{E} ( \mathrm{e}^{s Y_j} ).
\]
Taking the $\log$ on both sides and rearranging we obtain
\begin{align*}
&\mathbb{E} \max_j Y_j
\leq \inf_{s \in (0, \infty)} \frac{1}{s} \log \left ( 2m \cdot \max_{j=1,\dots,2m} \mathbb{E} \Big ( \mathrm{e}^{s Y_j} \Big ) \right )\\
& =\hspace{-0.075cm} \inf_{s \in (0, \infty)} \left \{ \frac{\log (2m) }{s} + \hspace{-0.075cm} \frac{1}{s} \log \left [ \max_{j=1,\dots,2m} \left (\mathrm{e}^{\sigma_j^2 s^2/2 + \lambda_j s} \right ) \right ] \right \},
\end{align*}
yielding the first bound, since the RHS is equal to $r^\star(\mu)$. If we now denote $\hat\nu := \max_{j=1,\dots,2m} \lambda_j = \max_i |\nu_i|$ and $\hat\sigma^2 := \max_i \sigma_i^2$, we have $m_j(s) \leq M(s) := \mathrm{e}^{\hat\sigma^2 s^2/2 + \hat\nu s}$ for all $s \geq 0$ and for every $j=1,\dots,2m$. Thus
\[
\mathbb{E} \max_j Y_j \leq \frac{\log (2m)}{s} + \frac{\hat \sigma^2}{2} s+ \hat{\nu}.
\]
Optimizing over $s \in (0,+\infty)$, where the optimum is attained at $s=\hat \sigma^{-1} \sqrt{2 \log(2m)}$, we get
$
\mathbb{E} \max_j Y_j \leq \hat{\nu} + \hat \sigma \sqrt{2 \log(2m)},
$
$
proving the other inequality in~\eqref{eq:rmubounds}.
\end{proof}
Lastly, we remark on how to calculate $r^\star(\mu)$, which is
the infimum over $(0,\infty)$ of
\[
g(s):=\frac{\log(2m)}{s}+ \max_{i=1,\dots,m} \left ( \frac{\sigma_i^2}{2} s + |\nu_i| \right ).
\]
This can be seen as the point-wise maximum of $m$ functions $g_k(s):=\frac{\log(2m)}{s}+ \frac{\sigma_k^2}{2} s + |\nu_k|$, $k=1,\dots, m$. Note that $r^\star(\mu)$ can be computed by evaluating the function $g$ at at most $m + m(m-1)/2$ points and then taking the minimum value: the candidate points are the $m$ local minima of the functions $g_1(s),\dots,g_m(s)$ (which are $s^\star_i := \sqrt{2 \log(2m)} /\sigma_i$, $i=1,\dots,m$), and the intersection points $s_{i,j} := 2(|\nu_i |- |\nu_j|)/(\sigma_j^2 - \sigma_i^2)$, $i,j=1,\dots,m$, $i \neq j$,
of the lines $\frac{\sigma_i^2}{2} s + |\nu_i|$ and $\frac{\sigma_j^2}{2} s + |\nu_j|$ (whenever they exist and are positive), of which there are at most $m(m-1)/2$.
This analysis implies that the resulting capacity region is convex and polyhedral.
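A minimal Python sketch of this procedure (variable names and test values are ours) evaluates $g$ at the candidate points described above and returns the minimum:
\begin{verbatim}
import numpy as np

def r_star(nu, sigma):
    """Compute r*(mu) by scanning the finitely many candidate points."""
    m = len(nu)
    c = np.log(2 * m)
    g = lambda s: c / s + np.max(sigma**2 / 2 * s + np.abs(nu))
    cand = list(np.sqrt(2 * c) / sigma)   # local minima of the g_k
    for i in range(m):                    # pairwise line intersections
        for j in range(m):
            if i != j and sigma[i] != sigma[j]:
                s = 2 * (abs(nu[i]) - abs(nu[j])) / (sigma[j]**2 - sigma[i]**2)
                if s > 0:
                    cand.append(s)
    return min(g(s) for s in cand)

nu = np.array([0.3, -0.5, 0.1])
sigma = np.array([0.2, 0.4, 0.3])
print(r_star(nu, sigma))
\end{verbatim}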
\section{Concluding remarks}
\label{sec5}
Probabilistic techniques, in particular powerful upper bounds for Gaussian random vectors, can be applied to generate explicit upper bounds for failure probabilities and corresponding safe capacity regions. The resulting regions are polyhedral, and can be characterized in such a way that they can be incorporated in optimization routines, such as OPF.
In an extended version of this paper we will show that our upper bounds give the correct asymptotic estimate of the failure probability in the small-noise large deviations regime as studied in \cite{Nesti2016}, i.e.\ our bounds are asymptotically sharp.
We will also extend the scope of our method as it is not limited to the assumptions in Section~\ref{sec2}: (i) the static analysis we consider can be extended to the dynamic situation as considered in~\cite{Nesti2016,Wadman2016}; (ii) the Gaussian assumption may be relaxed by the ideas in~\cite{Boucheron2013}; (iii) other performance measures, like the probability that several lines fail, can be analyzed.
\bibliographystyle{IEEEtran}
\section{Introduction}
The phenomenon of Anderson localization \cite{Anderson} has been studied both experimentally and theoretically for already half a century \cite{thouless74,Ping_Sheng,bart99}. It takes place for waves in strongly disordered media when interference effects become plethoric in the multiple scattering process. Theoretical description of Anderson localization reached a decisive stage in the eighties with the self-consistent (SC) theory of Vollhardt and W\"{o}lfle \cite{Vollhardt}. However, in its original form, this theory did not fully account for finite-size effects. Later, Van Tiggelen {\em et al.} proposed a natural generalization of SC theory to media of finite size by introducing a \emph{position-dependent} diffusion coefficient $D$ \cite{Lagendijk}. This generalized SC theory has been recently used to study the dynamics of Anderson localization in quasi-one-dimensional \cite{SB1} and three-dimensional \cite{SB2} systems. Meanwhile, the generalized SC equations of Refs.\ \onlinecite{Lagendijk,SB1,SB2} have never been derived microscopically. Such a derivation is highly desirable for at least two reasons. First, our recent results indicate that the position dependence of $D$ is crucial for the internal consistency of the theory itself and that some of the important features of Anderson localization (like the $1/L^2$ scaling of the transmission coefficient with the size $L$ of a disordered sample at the mobility edge) cannot be reproduced without fully taking it into account \cite{Nicolas}. Second, the very fact that $D$ should be position-dependent can be questioned in favor of momentum \cite{berk87} or time \cite{berk90} dependencies studied in the past, unless the position dependence of $D$ is given a microscopic justification. This calls for a rigorous derivation of SC equations in a medium of finite size, showing the emergence of position-dependent $D$ from microscopic equations of wave propagation and clarifying the physics behind it.
In this paper we present a derivation of SC equations of localization in a finite medium
of size $L$ much exceeding the two main ``microscopic'' length scales of the problem: the wavelength
$\lambda$ and the mean free path $\ell$ due to disorder. Our derivation is
based on the ``Hikami box'' formalism \cite{Gorkov,Hikami}. We work in the framework of classical wave scattering, but our results can be extended to quantum particles (e.g., an electron or an atom at low temperatures) described by Schr\"{o}dinger equation with a disordered potential.
Whereas electronic properties of disordered systems have been a subject of intense studies over several decades \cite{Anderson,Vollhardt,berk87,berk90,Gorkov,Hikami,Akkermans_Montambaux,shapiro82,mackinnon94,abrahams79}, the behavior of coherent atomic ensembles (Bose-Einstein condensates) in disordered optical lattices has come into focus only recently \cite{lye05,clement05,shapiro07,skipetrov08}.
Mathematically, the finiteness of the medium comes into play when we evaluate interference corrections to the sum of ladder diagrams. These interference corrections are due to infinite series of maximally-crossed diagrams that we insert inside the ladders. In the presence of time-reversal invariance, the final result depends on the probability for the wave (or quantum particle) to return back to a given point $\mathbf{r}$. Whereas, due to the translational and rotational invariance, this return probability is a position-independent quantity in the infinite medium, it becomes position-dependent in a medium of finite size. In an open medium, the return probability decreases when the boundary of the medium is approached because of the increased probability for the wave to leave the medium through the boundary. This leads to a position-dependent renormalized diffusion coefficient $D$, the renormalization being less important near the boundaries of the disordered medium. The dependence of $D$ on $\mathbf{r}$ is not known in advance, but has to be determined self-consistently by solving a diffusion equation containing the same $D$.
Anderson localization is often defined as an asymptotic property of eigenstates of disordered wave
(Schr\"{o}dinger, Helmholtz, etc.) equations. The states are said (exponentially) localized if their
intensity decays exponentially at large distances. Another widespread definition of Anderson localization is
vanishing of the diffusion coefficient \cite{bart99}. Strictly speaking, none of these definitions can be directly applied in open media of finite size, which were the subject of extensive work initiated by Thouless \cite{thouless74} that culminated in the scaling theory of localization \cite{abrahams79}. We do not intend to give a review of this work here; the interested reader can find more details in Refs.\ \cite{thouless74,Ping_Sheng,bart99}. For our purposes it will be sufficient to think of ``Anderson localization
in a medium of finite size'' as of an interference, wave phenomenon that would give rise to ``truly localized'' states if the medium were extended to infinity.
The paper is organized as follows. In Sec.\ \ref{I} we review SC theory of localization in infinite and finite media. The main ``building block'' of our derivation --- an ``interference loop'' that we insert inside ladder diagrams to account for interference effects in the intensity Green's function --- is calculated in Sec.\ \ref{II}. In Sec.\ \ref{III} we sum an infinite series of diagrams for the intensity Green's function and obtain the SC equations of localization. Section \ref{IV} is devoted to boundary conditions and a discussion of energy conservation. Finally, we summarize our main results and discuss their implications in Sec.\ \ref{concl}. Technical details of calculations are collected in 4 appendices.
\section{Theoretical framework}
\label{I}
We consider propagation of a scalar, monochromatic wave of circular frequency $\omega$ in a disordered three-dimensional medium of finite size. The amplitude Green's function $G(\textbf{r},\textbf{r}^{\prime},\omega)$ obeys the Helmholtz equation:
\begin{equation}
[\Delta _{\textbf{r}}+k^{2}(1+\mu(\textbf{r}))]G(\textbf{r},\textbf{r}^{\prime},\omega)=\delta(\textbf{r}-\textbf{r}^{\prime}).\label{Helmholtz}
\end{equation}
Here $\mu(\textbf{r})=\delta\epsilon(\textbf{r})/\bar{\epsilon}$ is the relative fluctuation of the dielectric constant $\epsilon(\textbf{r})=\bar{\epsilon}+\delta\epsilon(\textbf{r})$, $\bar{\epsilon}$ is the average dielectric constant, $k=\sqrt{\bar{\epsilon}}\omega/c$ is the wave number, and $c$ is the speed of wave in a homogeneous medium with $\epsilon = 1$ (vacuum).
We assume that $\mu(\textbf{r})$ obeys the white-noise Gaussian statistics:
\begin{equation}
k^4\langle\mu(\textbf{r})\mu(\textbf{r}^{\prime})\rangle=\dfrac{4\pi}{\ell}\delta(\textbf{r}-\textbf{r}^{\prime}),
\label{W-NGS}
\end{equation}
where angular brackets denote averaging over realizations of disorder and $\ell$ is the scattering mean free path.
The average amplitude Green's function can be calculated assuming weak disorder ($k \ell \gg 1)$ \cite{Ping_Sheng}:
\begin{equation}
\langle G(\textbf{r},\textbf{r}^{\prime},\omega)\rangle=-\dfrac{1}{4\pi\left| \textbf{r}-\textbf{r}^{\prime}\right|}\exp\left(ik\left|\textbf{r}-\textbf{r}^{\prime}\right|-\dfrac{\left| \textbf{r}-\textbf{r}^{\prime}\right|}{2\ell}\right).
\end{equation}
Although this result has been obtained for the infinite medium, it holds in a medium of finite size as well, provided that the points $\textbf{r}$ and $\textbf{r}^{\prime}$ are at least one mean free path from the boundaries.
In this paper we will be interested in the average intensity Green's function:
\begin{equation}
C(\textbf{r},\textbf{r}^{\prime},\Omega)=\dfrac{4\pi}{c}\langle G(\textbf{r},\textbf{r}^{\prime},\omega_1)G^{*}(\textbf{r},\textbf{r}^{\prime},\omega_2)\rangle,
\label{intensity_GF}
\end{equation}
where $\omega_1=\omega_0+\Omega/2$, $\omega_2=\omega_0-\Omega/2$, and we omit the dependence of $C$ on the carrier frequency $\omega_0$. We assume the latter to be fixed in the remainder of the paper. Physically, the Fourier transform $C(\textbf{r},\textbf{r}^{\prime},t-t^{\prime})$ of Eq.\ (\ref{intensity_GF}) describes the density of wave energy at $\textbf{r}$ at time $t$ due to a short pulse emitted at time $t^{\prime}$ by a point source at $\textbf{r}^{\prime}$. For a quantum particle, $C$ can be interpreted as a probability density of finding the particle in the vicinity of point $\textbf{r}$ at time $t$, provided that the particle was at $\textbf{r}^{\prime}$ at time $t^{\prime}$ (``probability of quantum diffusion'') \cite{Akkermans_Montambaux}.
The analysis of the intensity Green's function is generally complicated and relatively simple results can be obtained only for weak disorder ($k \ell \gg 1$) at large spatial scales ($|\textbf{r} - \textbf{r}^{\prime}| \gg \ell$) and for slow dynamics ($\Omega \ll \omega_0$, $c/\ell$). Under these assumptions, one derives the diffusion equation for the intensity Green's function \cite{Ping_Sheng,Akkermans_Montambaux}:
\begin{equation}
\left( -i\Omega - D_B \Delta_{\textbf{r}}\right)
C(\textbf{r},\textbf{r}^{\prime},\Omega)=\delta(\textbf{r}-\textbf{r}^{\prime}),
\label{eq_for_C_infinite}
\end{equation}
where $D_B=c\ell/3$ is the Boltzmann diffusion coefficient. This equation holds in the infinite as well as in finite media, provided that it is supplemented with appropriate boundary conditions \cite{Ping_Sheng,Akkermans_Montambaux,Zhu} in the latter case. Obviously, Eq.\ (\ref{eq_for_C_infinite}) ignores interference effect and treats the wave as a classical particle that propagates through a disordered medium by diffusion. Vollhardt and W\"{o}lfle \cite{Vollhardt} have shown that interference effects lead to a renormalization of $D_B$ in Eq.\ (\ref{eq_for_C_infinite}). The renormalized diffusion coefficient $D(\Omega)$ obeys \footnote{To lighten the notation, we use $d\textbf{x}$ instead of $d^3\textbf{x}$ to denote three-dimensional integration over a vector $\textbf{x}$.}:
\begin{eqnarray}
\dfrac{1}{D(\Omega)} &=& \dfrac{1}{D_B}+\dfrac{6\pi}{k^2\ell}
\int \frac{d\textbf{Q}}{(2\pi)^3} \dfrac{1}{-i\Omega +D(\Omega) \bf{Q}^2}.
\label{eq_for_D_infinite}
\end{eqnarray}
In three dimensions, the integral over $\textbf{Q}$ exhibits an ultraviolet divergence arising from the failure of the diffusion equation (\ref{eq_for_C_infinite}) at small length scales $| \mathbf{r} - \mathbf{r}^{\prime} | < \ell$. This unphysical divergence can be regularized by introducing an upper cutoff of integration $Q_{max} \sim 1/\ell$.
Although, strictly speaking, Eq.\ (\ref{eq_for_C_infinite}) with $D_B$ replaced by $D(\Omega)$ can only be justified for $k \ell \gg 1$, the great success of self-consistent equations (\ref{eq_for_C_infinite}) and (\ref{eq_for_D_infinite}) is due to the fact that they correctly describe many aspects of wave propagation in disordered media all the way down to $k \ell \simeq 1$ (mobility edge) and even at $k \ell < 1$ (Anderson localized regime). In a disordered metal, for example, where the quantity of interest is the dynamic conductivity $\sigma(\Omega) \propto D(\Omega)$, these equations yield the weak localization effect $\sigma(0) \propto 1 - \mathrm{const}/(k\ell)^{2}$, the low-frequency behavior of conductivity at the mobility edge $\sigma(\Omega) \propto (-i \Omega)^{1/3}$ and in the localized (i.e. insulating) phase $\sigma(\Omega) \propto -i \Omega \xi^2$ \cite{shapiro82}. However, not all the results obtained in the framework of SC theory are correct. As an example, we mention the critical exponent $\nu$ describing the divergence of localization length $\xi$ with $k \ell - 1$: $\xi \propto |k\ell - 1|^{-\nu}$. SC theory yields $\nu = 1$, whereas numerical simulations suggest $\nu \approx 1.5$ \cite{mackinnon94}. Another shortcoming of SC theory is its inapplicability to systems with broken time-reversal symmetry.
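To make the self-consistency concrete, the following Python sketch solves Eq.~(\ref{eq_for_D_infinite}) by fixed-point iteration with the cutoff $Q_{max}=1/\ell$; the parameter values and the choice of units with $c=1$ are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

ell, k = 1.0, 4.0                  # mean free path and wave number (k*ell = 4)
D_B = ell / 3.0                    # Boltzmann diffusion coefficient (c = 1)
Q_max = 1.0 / ell                  # ultraviolet cutoff
Omega = 1e-3                       # small modulation frequency

def return_integral(D):
    """int d^3Q/(2 pi)^3 of 1/(-i Omega + D Q^2), cut off at Q_max."""
    f = lambda Q: Q**2 / (-1j * Omega + D * Q**2)
    re = quad(lambda Q: np.real(f(Q)), 0, Q_max)[0]
    im = quad(lambda Q: np.imag(f(Q)), 0, Q_max)[0]
    return (re + 1j * im) / (2 * np.pi**2)

D = D_B + 0j
for _ in range(100):               # fixed-point iteration of the SC equation
    D = 1.0 / (1.0 / D_B + 6 * np.pi / (k**2 * ell) * return_integral(D))
print("renormalized D / D_B =", D / D_B)
\end{verbatim}
For the weakly disordered value $k\ell=4$ chosen here the iteration converges rapidly and yields a downward renormalization of a few percent, i.e.\ the weak localization correction.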
The derivation of Eq.\ (\ref{eq_for_D_infinite}) heavily relies on the translational invariance and cannot be straightforwardly generalized to media of finite size, even when the size $L$ of the medium is much larger than $\lambda$ and $\ell$. To some extent, Eq.\ (\ref{eq_for_C_infinite}) with $D_B$ replaced by $D(\Omega)$ can still be used to study media of finite size by using a lower cutoff $\sim 1/L$ in the integral over $\mathbf{Q}$ in Eq.\ (\ref{eq_for_D_infinite}) \cite{Vollhardt}. Such an approach can be more or less successful in making qualitative predictions in the spirit of the scaling theory of localization \cite{abrahams79}, but it becomes insufficient when one is interested in fine details of multiple wave scattering close to the mobility edge and in the localized regime: coherent backscattering cone \cite{Lagendijk}, dynamics of short pulses \cite{SB1,SB2}, or precise scaling of the transmission coefficient with the size of disordered sample \cite{Nicolas}. A plausible generalization of SC theory to media of finite size can be obtained by noticing that, by virtue of Eq.\ (\ref{eq_for_C_infinite}), the $\mathbf{Q}$-integral of Eq.\ (\ref{eq_for_D_infinite}) is formally equal to the ``return probability'' $C(\mathbf{r}, \mathbf{r}, \Omega)$. We can therefore rewrite Eq.\ (\ref{eq_for_D_infinite}) as $1/D(\Omega) = 1/D_B + 6\pi/(k^2\ell) C(\textbf{r},\textbf{r},\Omega)$. Van Tiggelen \emph{et al.} conjectured \cite{Lagendijk} that in this new form the self-consistent equation for $D$ might hold in a medium of finite size as well. In a medium of finite size, the position dependence of $C(\mathbf{r}, \mathbf{r}, \Omega)$ naturally gives rise to a position dependence of $D$:
\begin{eqnarray}
\dfrac{1}{D(\mathbf{r},\Omega)} &=& \dfrac{1}{D_B}+\dfrac{6\pi}{k^2\ell}C(\textbf{r},\textbf{r},\Omega).
\label{eq_for_D_finite}
\end{eqnarray}
If we then enforce diffusive behavior of the intensity Green's function and insist on the energy conservation, the equation for $C$ becomes \cite{Lagendijk}
\begin{equation}
\left( -i\Omega -\boldsymbol{\nabla}_{\textbf{r}} \cdot D(\textbf{r},\Omega) \boldsymbol{\nabla}_{\textbf{r}}\right)
C(\textbf{r},\textbf{r}^{\prime},\Omega)=\delta(\textbf{r}-\textbf{r}^{\prime}).
\label{eq_for_C_finite}
\end{equation}
Although SC equations (\ref{eq_for_D_finite}) and (\ref{eq_for_C_finite}) appear to be a powerful tool to study Anderson localization in realistic situations \cite{Lagendijk,SB1,SB2,Nicolas}, they still remain a conjecture and lack microscopic justification. Derivation of these equations from the first principles is the main purpose of the present paper.
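Although a microscopic derivation of these equations is only given below, it is instructive to see how they can be iterated numerically. The following Python sketch is a deliberately simplified one-dimensional toy model of our own (a realistic slab calculation requires a cutoff on the transverse momentum, see Sec.~\ref{III}); it alternates the solution of the diffusion equation with the update of $D$ and exhibits the suppression of $D$ in the bulk relative to the boundaries.
\begin{verbatim}
import numpy as np

L, N = 10.0, 200                   # slab thickness (units of ell), grid size
dz = L / N
D_B, alpha = 1.0, 0.1              # Boltzmann D and a toy coupling constant
z0 = 2.0 / 3.0                     # extrapolation length (ell = 1), Sec. IV

def operator(D):
    """Discretize -d/dz D(z) d/dz with C - z0 (D/D_B) dC/dz = 0 at z=0, L."""
    Dh = 0.5 * (D[:-1] + D[1:])    # D at the cell interfaces
    A = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            A[i, i - 1] -= Dh[i - 1] / dz**2
            A[i, i] += Dh[i - 1] / dz**2
        if i < N - 1:
            A[i, i + 1] -= Dh[i] / dz**2
            A[i, i] += Dh[i] / dz**2
    A[0, 0] += D[0] / (dz * (z0 * D[0] / D_B + dz / 2))      # left surface
    A[-1, -1] += D[-1] / (dz * (z0 * D[-1] / D_B + dz / 2))  # right surface
    return A

D = np.full(N, D_B)
for _ in range(50):                # self-consistent loop at Omega = 0
    C_return = np.diag(np.linalg.inv(operator(D))) / dz     # C(z, z)
    D = 1.0 / (1.0 / D_B + alpha * C_return)
print("D(center) =", D[N // 2], " D(boundary) =", D[0])
\end{verbatim}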
\section{Interference effects in finite media}
\label{II}
Formally, the intensity Green's function is given by \cite{Lagendijk_WL}
\begin{eqnarray}
C(\textbf{r},\textbf{r}^{\prime},\Omega) &=& \dfrac{4\pi}{c}\langle G(\textbf{r},\textbf{r}^{\prime},\omega_1)\rangle \langle G^*(\textbf{r},\textbf{r}^{\prime},\omega_2)\rangle \nonumber \\
&+& \dfrac{4\pi}{c}\int d\textbf{r}_1d\textbf{r}_2d\textbf{r}_3d\textbf{r}_4\langle G(\textbf{r},\textbf{r}_1,\omega_1)\rangle \langle G^*(\textbf{r},\textbf{r}_3,\omega_2)\rangle
\nonumber \\
&\times& \Gamma(\textbf{r}_1,\textbf{r}_2,\textbf{r}_3,\textbf{r}_4,\Omega)\langle G(\textbf{r}_2,\textbf{r}^{\prime},\omega_1)\rangle \langle G^*(\textbf{r}_4,\textbf{r}^{\prime},\omega_2)\rangle, \label{Rigorous_intensity}
\end{eqnarray}
where $\Gamma (\textbf{r}_1,\textbf{r}_2,\textbf{r}_3,\textbf{r}_4,\Omega)$ is the complete vertex function given by a sum of all diagrams connecting scattering paths corresponding to $G$ and $G^*$. The first term $\langle G\rangle\langle G^*\rangle$ on the right-hand side (r.h.s.) of Eq.\ (\ref{Rigorous_intensity}) will be neglected in the following. Indeed, it is exponentially small at large distances $\vert\textbf{r}-\textbf{r}^{\prime}\vert \gg \ell$ that are of main interest for us here.
In the regime of weak disorder, defined by $k\ell\gg 1$, $\Gamma (\textbf{r}_1,\textbf{r}_2,\textbf{r}_3,\textbf{r}_4,\Omega)=\delta(\textbf{r}_1-\textbf{r}_3)\delta(\textbf{r}_2-\textbf{r}_4)\Gamma_D (\textbf{r}_1,\textbf{r}_2,\Omega)$ with $\Gamma_D$ a sum of ladder diagrams \cite{Ping_Sheng,Akkermans_Montambaux,Vollhardt} shown in Fig. \ref{Ladder_diagrams}(a). We denote $C$ given by Eq.\ (\ref{Rigorous_intensity}) with $\Gamma_D$ substituted for $\Gamma$ by $C_D$. At large distances $| \mathbf{r} - \mathbf{r}^{\prime}| \gg \ell$ and in the limit of small $\Omega$, $C_D$ obeys the diffusion equation (\ref{eq_for_C_infinite}).
We also introduce a sum of maximally-crossed diagrams $\Gamma_C(\textbf{r}_1,\textbf{r}_2,\Omega)$ shown in Fig.\ \ref{Ladder_diagrams}(b). If we do not consider the first term on the r.h.s. of Fig.\ \ref{Ladder_diagrams}(a), we can formally obtain $\Gamma_C$ from $\Gamma_D$ by rotating the bottom propagation line of the diagram of Fig.\ \ref{Ladder_diagrams}(a) by 180$^{\circ}$ in the plane perpendicular to the plane of the figure. The time-reversal invariance, that we assume to hold throughout this paper, implies $\Gamma_C(\textbf{r}_1,\textbf{r}_2,\Omega) = \Gamma_D (\textbf{r}_1,\textbf{r}_2,\Omega)$ if $|\mathbf{r}_1 - \mathbf{r}_2|$ exceeds the correlation length of disorder (i.e. if $\mathbf{r}_1 \ne \mathbf{r}_2$ for the white-noise disorder that we consider here) because the first term of Fig.\ \ref{Ladder_diagrams}(a) can be neglected in this case.
\begin{figure}[t]
\includegraphics[width=12.0cm]{LadderANDMC_diagrams.eps}
\caption{\label{Ladder_diagrams} (a) Sum of ladder diagrams $\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega)$ and (b) sum of maximally-crossed diagrams $\Gamma_C(\textbf{r}_1,\textbf{r}_2,\Omega)$. Solid and dashed lines denote $\langle G\rangle$ and $\langle G^*\rangle$, respectively. The dotted line symbolizes the correlation function of disorder
$k^4 \langle \mu(\mathbf{r}) \mu(\mathbf{r}^{\prime}) \rangle$ given by Eq. (\ref{W-NGS}). Crosses denote scattering events. Integrations over positions of all internal scattering events are assumed. In all diagrams of this paper, $\langle G\rangle$ and $\langle G^*\rangle$ should be evaluated at frequencies $\omega_1 = \omega_0 + \Omega/2$ and $\omega_2 = \omega_0 - \Omega/2$, respectively. We show this explicitly in the panel (a) of this figure only.}
\end{figure}
\begin{figure}[t]
\includegraphics[width=6.0cm]{Gamma_loop.eps}
\caption{\label{Hikami_boxes} The diagram $X(\textbf{r},\textbf{r}^{\prime},\Omega)$ that we use to introduce interference effects in the calculation of intensity Green's function. This diagram is made of a four-point Hikami box $H(\textbf{r},\textbf{r}_1,\textbf{r}^{\prime},\textbf{r}_2)$ --- detailed in Appendix \ref{A} --- and of the sum of maximally-crossed diagrams $\Gamma_C(\textbf{r}_1,\textbf{r}_2,\Omega)$ shown by wavy lines connecting $\textbf{r}_1$ and $\textbf{r}_2$.}
\end{figure}
To account for interference effects during propagation, we consider a loop-shaped diagram $X(\textbf{r},\textbf{r}^{\prime},\Omega)$ shown in Fig.\ \ref{Hikami_boxes}. This diagram is made of a square diagram known as a four-point Hikami box $H(\textbf{r},\textbf{r}_1,\textbf{r}^{\prime},\textbf{r}_2)$ \cite{Gorkov, Hikami} and of a sum of maximally-crossed diagrams $\Gamma_C(\textbf{r}_1,\textbf{r}_2,\Omega)$ that we replace by $\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega)$, making use of time-reversal invariance:
\begin{equation}
X(\textbf{r},\textbf{r}^{\prime},\Omega) = \int d\textbf{r}_1d\textbf{r}_2H(\textbf{r},\textbf{r}_1,\textbf{r}^{\prime},\textbf{r}_2)\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega).
\label{calculation_loop}
\end{equation}
Because $H$ is a local object having non-zero value only when all the 4 points $\textbf{r}$, $\textbf{r}_1$, $\textbf{r}^{\prime}$ and $\textbf{r}_2$ are within a distance of order $\ell$ from each other, we can expand $\Gamma_D$ in series around $\mathbf{r}$, assuming that its spatial variations are small at the scale of $\ell$:
\begin{eqnarray}
\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega) &\simeq & \Big\{1+(\textbf{r}_1-\textbf{r})\cdot\boldsymbol{\nabla}_{\textbf{r}_1}
+(\textbf{r}_2-\textbf{r})\cdot\boldsymbol{\nabla}_{\textbf{r}_2}
\nonumber\\
&+& \dfrac{1}{2}{\left[(\textbf{r}_1-\textbf{r})\cdot\boldsymbol{\nabla}_{\textbf{r}_1} \right]}^2
+\dfrac{1}{2}{\left[(\textbf{r}_2-\textbf{r})\cdot\boldsymbol{\nabla}_{\textbf{r}_2} \right]}^2 + \dots \Big\} \Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega)\mid _{\textbf{r}_1=\textbf{r}_2=\textbf{r}}.
\label{expansion_gamma}
\end{eqnarray}
We will truncate this expansion to the first order and use the reciprocity principle $\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega)=\Gamma_D(\textbf{r}_2,\textbf{r}_1,\Omega)$ that allows us to rewrite Eq. (\ref{expansion_gamma}) as
\begin{equation}
\Gamma_D(\textbf{r}_1,\textbf{r}_2,\Omega)\simeq \left[ 1+\dfrac{1}{2}(\textbf{r}_1+\textbf{r}_2-2\textbf{r})\cdot\boldsymbol{\nabla}_{\textbf{r}} \right]
\Gamma_D(\textbf{r},\textbf{r},\Omega).
\label{expansion_simplified}
\end{equation}
Substituting this into Eq. (\ref{calculation_loop}) we obtain
\begin{equation}
X(\textbf{r},\textbf{r}^{\prime},\Omega)=
\left[ H(\textbf{r},\textbf{r}^{\prime})
+\dfrac{1}{2} \textbf{H}_f(\textbf{r},\textbf{r}^{\prime})\cdot\boldsymbol{\nabla}_{\textbf{r}} \right] \Gamma_D(\textbf{r},\textbf{r},\Omega)
\label{Cloop},
\end{equation}
with
\begin{equation}
H(\textbf{r},\textbf{r}^{\prime})=\int d\textbf{r}_1d\textbf{r}_2H(\textbf{r},\textbf{r}_1,\textbf{r}^{\prime},\textbf{r}_2)
\label{H_definion}
\end{equation}
and
\begin{equation}
\textbf{H}_f(\textbf{r},\textbf{r}^{\prime})=\int d\textbf{r}_1d\textbf{r}_2(\textbf{r}_1+\textbf{r}_2-2\textbf{r})H(\textbf{r},\textbf{r}_1,\textbf{r}^{\prime},\textbf{r}_2).
\label{H_f_definition}
\end{equation}
The first term on the r.h.s. of Eq.\ (\ref{Cloop}) is the ``usual'' term arising in the infinite medium as well \cite{Akkermans_Montambaux}. The second term on the r.h.s. is non-zero only in a finite medium because $\Gamma_D(\textbf{r},\textbf{r},\Omega)$ is independent of $\mathbf{r}$ in the infinite medium. It will be seen from the following that this term is of fundamental importance for the derivation of self-consistent equations of localization in a finite medium.
A calculation detailed in Appendix \ref{A} gives
\begin{equation}
\textbf{H}_f(\textbf{r},\textbf{r}^{\prime})=-(\textbf{r}-\textbf{r}^{\prime})H(\textbf{r},\textbf{r}^{\prime}).
\label{H_f_result}
\end{equation}
Substituting Eq. (\ref{H_f_result}) into Eq. (\ref{Cloop}) we obtain
\begin{equation}
X(\textbf{r},\textbf{r}^{\prime},\Omega)=H(\textbf{r},\textbf{r}^{\prime})\left[1 - \dfrac{1}{2}(\textbf{r}-\textbf{r}^{\prime}) \cdot \boldsymbol{\nabla}_{\textbf{r}} \right] \Gamma_D(\textbf{r},\textbf{r},\Omega).
\label{Cloop_r_r'}
\end{equation}
For convenience of calculations, we introduce the difference variable $\Delta\textbf{r}=\textbf{r}-\textbf{r}^{\prime}$, such that a given function $f$ of $\textbf{r}$ and $\textbf{r}^{\prime}$ becomes a function $\tilde{f}$ of $\textbf{r}$ and $\Delta\textbf{r}$. In particular, $H(\textbf{r},\textbf{r}^{\prime})$ becomes $\tilde{H}(\Delta \mathbf{r})$ and does not depend on $\textbf{r}$ \cite{Akkermans_Montambaux}. Using the new set of variables $\textbf{r}$ and $\Delta\textbf{r}$, we have $\Gamma_D(\textbf{r},\textbf{r},\Omega)=\tilde{\Gamma}_D(\textbf{r},\Delta\textbf{r}=\textbf{0},\Omega)$. Equation (\ref{Cloop_r_r'}) becomes
\begin{equation}
\tilde{X}(\textbf{r},\Delta\textbf{r},\Omega) = \tilde{H}(\Delta\textbf{r})\left[1-\dfrac{1}{2}\Delta\textbf{r} \cdot \boldsymbol{\nabla}_{\textbf{r}}\right]
\tilde{\Gamma}_D(\textbf{r},\textbf{0},\Omega).
\label{Cloop_R_Dr}
\end{equation}
We now take the Fourier transform of Eq. (\ref{Cloop_R_Dr}) with respect to $\Delta\textbf{r}$ and consider the limit $\textbf{q} \rightarrow \textbf{0}$. Because the Fourier transform $\tilde{H}(\textbf{q})$ of $\tilde{H}(\Delta\textbf{r})$ is equal to $D_B\ell^4q^2/(8\pi c k^2)$ in this limit \cite{Akkermans_Montambaux, Feng}, we obtain
\begin{equation}
\tilde{X}(\textbf{r},\textbf{q},\Omega)=\dfrac{-\ell^4D_B}{8 \pi c k^2}\left[(i\textbf{q})^2+(i\textbf{q}) \cdot \boldsymbol{\nabla}_{\textbf{r}}\right]
\tilde{\Gamma}_D(\textbf{r},\textbf{0},\Omega).
\label{Cloop_r_Dr_Fourier}
\end{equation}
An approximate expression for $\tilde{X}(\textbf{r},\Delta\textbf{r},\Omega)$ can then be obtained by the inverse Fourier transform of Eq. (\ref{Cloop_r_Dr_Fourier}) with respect to $\textbf{q}$ (see Appendix \ref{B}):
\begin{equation}
\tilde{X}(\textbf{r},\Delta\textbf{r},\Omega)=\dfrac{-\ell^4D_B}{8 \pi c k^2}
\left\{
\left[ \Delta_{\Delta\mathbf{r}} \delta(\Delta \mathbf{r}) \right] +
\left[ \boldsymbol{\nabla}_{\Delta\mathbf{r}} \delta(\Delta \mathbf{r}) \right]
\cdot \boldsymbol{\nabla}_{\mathbf{r}}
\right\}
\tilde{\Gamma}_D(\textbf{r},\textbf{0},\Omega).
\label{Cloop_r_Dr_explicit}
\end{equation}
Because $\boldsymbol{\nabla}_{\Delta\mathbf{r}} \delta(\Delta \mathbf{r}) =
\boldsymbol{\nabla}_{\mathbf{r}} \delta(\mathbf{r} - \mathbf{r}^{\prime})$ and
$\Delta_{\Delta\mathbf{r}} \delta(\Delta \mathbf{r}) =
\Delta_{\mathbf{r}} \delta(\mathbf{r} - \mathbf{r}^{\prime})$,
Eq.\ (\ref{Cloop_r_Dr_explicit}) can be rewritten in terms of the original variables $\textbf{r}$ and $\textbf{r}^{\prime}$ as
\begin{equation}
X(\textbf{r},\textbf{r}^{\prime},\Omega)=\dfrac{-\ell^4D_B}{8 \pi c k^2}\boldsymbol{\nabla}_{\textbf{r}} \cdot \left[\Gamma_D(\textbf{r},\textbf{r},\Omega)\boldsymbol{\nabla}_{\textbf{r}}\right]\delta(\textbf{r}-\textbf{r}^{\prime}).
\label{Cloop_r_r'_explicit}
\end{equation}
\section{Derivation of self-consistent equations}
\label{III}
We will now use the diagram $X$ of Fig.\ \ref{Hikami_boxes} analyzed in the previous section to include interference effects in the calculation of intensity Green's function $C(\textbf{r},\textbf{r}^{\prime},\Omega)$. To this end, we insert the ``interference loop'' $X$ in the sum of ladder diagrams for $C_D$ and account for the possibility of having multiple consecutive interference loops. This leads to an infinite series of diagrams shown in Fig.\ \ref{Series_diagrams}. This series can be written analytically as
\begin{eqnarray}
C(\textbf{r},\textbf{r}^{\prime},\Omega) &=& C_D(\textbf{r},\textbf{r}^{\prime},\Omega) +
\frac{4 \pi c}{\ell^2} \int C_D(\textbf{r},\textbf{r}_1,\Omega)X(\textbf{r}_1,\textbf{r}_2,\Omega)C_D(\textbf{r}_2,\textbf{r}^{\prime},\Omega)d\textbf{r}_1 d\textbf{r}_2\nonumber\\
&+& \left( \frac{4 \pi c}{\ell^2} \right)^2 \int C_D(\textbf{r},\textbf{r}_1,\Omega)X(\textbf{r}_1,\textbf{r}_2,\Omega)C_D(\textbf{r}_2,\textbf{r}_3,\Omega)
\nonumber \\
&\times& X(\textbf{r}_3,\textbf{r}_4,\Omega)C_D(\textbf{r}_4,\textbf{r}^{\prime},\Omega)d\textbf{r}_1 d\textbf{r}_2d\textbf{r}_3d\textbf{r}_4
+ \dots
\label{Resummation}
\end{eqnarray}
\begin{figure}[t]
\includegraphics[width=16cm]{Series_of_diagrams.eps}
\caption{\label{Series_diagrams}
Diagrammatic representation of an infinite series of diagrams contributing to the intensity Green's function. The first term is the sum of ladder diagrams. The second term is the sum of ladder diagrams with a single interference loop denoted by wavy lines and equal to an infinite sum of maximally-crossed diagrams. Next terms contain 2, 3, etc. consecutive interference loops. The ladder and the maximally-crossed diagrams are joined together by a Hikami box detailed in Appendix \ref{A}. The analytic representation of this diagrammatic series is given by Eq.\ (\ref{Resummation}).}
\end{figure}
We now apply the operator $-i\Omega-D_B \Delta_{\textbf{r}}$ to Eq. (\ref{Resummation}) and use Eq.\ (\ref{eq_for_C_infinite}) for $C_D$ and Eq. (\ref{Cloop_r_r'_explicit}) for $X(\textbf{r},\textbf{r}^{\prime})$. This yields (see the detailed calculation in Appendix \ref{C}):
\begin{equation}
\left[-i\Omega-D_B \Delta_{\textbf{r}} \right]C(\textbf{r},\textbf{r}^{\prime},\Omega)=\delta(\textbf{r}-\textbf{r}^{\prime})-\dfrac{\ell^2D_B}{2k^2}\boldsymbol{\nabla}_{\textbf{r}}\cdot\left[\Gamma_D(\textbf{r},\textbf{r},\Omega)\boldsymbol{\nabla}_{\textbf{r}}C(\textbf{r},\textbf{r}^{\prime},\Omega) \right],
\label{applied_operator_init}
\end{equation}
or
\begin{equation}
\left[-i\Omega-\boldsymbol{\nabla}_{\textbf{r}}\cdot\left(D_B-\dfrac{\ell^2D_B}{2k^2}\Gamma_D(\textbf{r},\textbf{r},\Omega)\right)\boldsymbol{\nabla}_{\textbf{r}}\right]C(\textbf{r},\textbf{r}^{\prime},\Omega)=\delta(\textbf{r}-\textbf{r}^{\prime}).
\label{applied_operator}
\end{equation}
As we demonstrate in Appendix \ref{D}, $\Gamma_D$ is proportional to $C_D$:
$\Gamma_D(\textbf{r},\textbf{r},\Omega)=(4\pi c/\ell^2)\, C_D(\textbf{r},\textbf{r},\Omega)$.
This allows us to define a renormalized, position-dependent diffusion coefficient
\begin{equation}
D(\textbf{r},\Omega)=D_B-\dfrac{2\pi c}{k^2}D_B C_D(\textbf{r},\textbf{r},\Omega)
\label{diffusion_coeff_WL}
\end{equation}
and rewrite Eq.\ (\ref{applied_operator}) as
\begin{equation}
\left[-i\Omega-\boldsymbol{\nabla}_{\textbf{r}} \cdot D(\textbf{r},\Omega)\boldsymbol{\nabla}_{\textbf{r}}\right]C(\textbf{r},\textbf{r}^{\prime},\Omega)=\delta(\textbf{r}-\textbf{r}^{\prime}).
\label{equation_for_C}
\end{equation}
The last step consists in applying the self-consistency principle \cite{Vollhardt}. This can be done by using $D(\textbf{r},\Omega)$ instead of $D_B$ when calculating the second term on the r.h.s. of Eq. (\ref{diffusion_coeff_WL}). Diagrammatically, this procedure is equivalent to inserting ``secondary loops'' in the loops shown by wavy lines in Fig.\ \ref{Series_diagrams} and then inserting the same loops in these secondary loops, etc., thus obtaining a sum of diagrams with an infinite sequence of loops inserted one inside the other. Physically, this simply means that the same, self-consistent diffusion coefficient $D(\mathbf{r}, \Omega)$ should be used when we calculate the intensity Green's function $C$ and the sum of maximally-crossed diagrams $\Gamma_C$. More specifically, we have to perform the following replacements:
\begin{enumerate}
\item We replace $D_B$ by $D$ in $H(\textbf{r},\textbf{r}^{\prime})$ in Eq. (\ref{Cloop_r_r'}), or equivalently in $H(\textbf{q})$, such that $D_B$ is replaced by $D$ in the second term on the r.h.s. of Eq. (\ref{diffusion_coeff_WL}).
\item We replace $D_B$ by $D$ in $\Gamma_D$ in Eq. (\ref{Cloop_r_r'}), which amounts to replacing $C_D$ by $C$ in the second term on the r.h.s. of Eq. (\ref{diffusion_coeff_WL}).
\end{enumerate}
Equation (\ref{diffusion_coeff_WL}) then becomes
$D(\textbf{r},\Omega)=D_B-(2\pi c/k^2) D(\mathbf{r},\Omega) C(\textbf{r},\textbf{r},\Omega)$ or
\begin{equation}
\dfrac{1}{D(\textbf{r},\Omega)}=\dfrac{1}{D_B}+\dfrac{6\pi}{k^2\ell}C(\textbf{r},\textbf{r},\Omega).
\label{diffusion_coeff}
\end{equation}
This completes the derivation of self-consistent equations of localization --- Eqs.\ (\ref{equation_for_C}) and (\ref{diffusion_coeff}) --- in a medium of finite size.
The solution of the diffusion equation (\ref{equation_for_C}) in three dimensions diverges when $\mathbf{r}^{\prime} \rightarrow \mathbf{r}$: $C(\textbf{r},\textbf{r}^{\prime},\Omega) \propto 1/|\mathbf{r} - \mathbf{r}^{\prime}|$. This unphysical divergence poses potential problems in Eq.\ (\ref{diffusion_coeff}) that contains $C(\textbf{r},\textbf{r},\Omega)$. One possibility to regularize this divergence is to represent $C(\textbf{r},\textbf{r}^{\prime},\Omega)$ as a Fourier transform of $C(\textbf{r},\textbf{q},\Omega)$, where $\mathbf{q}$ is a variable conjugated to $\Delta \mathbf{r} = \mathbf{r} - \mathbf{r}^{\prime}$, and then cut off the integration over $\mathbf{q}$ at some $q_{max} \sim 1/\ell$. The exact proportionality constant between $q_{max}$ and $1/\ell$ will determine the exact position of the mobility edge $k \ell \sim 1$. It is also possible to cut off only the integration over $\mathbf{q}_{\perp} = (q_x, q_y)$, leaving the integration over $q_z$ unrestricted. Such a two-dimensional cutoff is easier to implement for the particular geometry of a disordered slab perpendicular to the $z$ axis \cite{SB2,Nicolas}. As could be expected, the main qualitative features of final results are largely insensitive to the details of the large-$q$ cutoff, although quantitative details can vary slightly.
\section{Energy conservation and boundary conditions}
\label{IV}
It is important to note that although we have obtained Eq.\ (\ref{equation_for_C}) by summing only the diagrams of certain type and neglecting many other diagrams, this equation satisfies the conservation of energy \emph{exactly}. Indeed, let us take its inverse Fourier transform with respect to $\Omega$:
\begin{equation}
\dfrac{\partial C(\textbf{r},\textbf{r}^{\prime},t)}{\partial t}-\int \frac{d\Omega}{2 \pi}\boldsymbol{\nabla}_{\textbf{r}} \cdot D(\textbf{r},\Omega)\boldsymbol{\nabla}_{\textbf{r}}C(\textbf{r},\textbf{r}^{\prime},\Omega) e^{-i\Omega t}=\delta(\textbf{r}-\textbf{r}^{\prime})\delta(t).
\label{continuity_equation}
\end{equation}
The flux of energy is given by Fick's law: $\textbf{J}(\textbf{r},\textbf{r}^{\prime},t) = -\int d\Omega/(2\pi) D(\textbf{r},\Omega)\boldsymbol{\nabla}_{\textbf{r}}C(\textbf{r},\textbf{r}^{\prime},\Omega)e^{-i\Omega t}$. By integrating Eq.\ (\ref{continuity_equation}) over a control volume $V$ contained inside the disordered medium and enclosed by a surface $S$, we obtain
\begin{equation}
\int_V \dfrac{\partial C(\textbf{r},\textbf{r}^{\prime},t)}{\partial t}d\mathbf{r}=-\int_V\boldsymbol{\nabla}_{\textbf{r}}
\cdot \textbf{J}(\textbf{r},\textbf{r}^{\prime},t)d\mathbf{r}+\delta(t) \int_V \delta(\mathbf{r}-\mathbf{r}^{\prime})d\mathbf{r}.
\label{Ostrogradsky}
\end{equation}
We now apply the Gauss-Ostrogradsky theorem to the first term on the r.h.s. of Eq. (\ref{Ostrogradsky}) and assume that the source point $\mathbf{r}^{\prime}$ is contained inside $V$:
\begin{equation}
\dfrac{d}{dt}\int_V C(\textbf{r},\textbf{r}^{\prime},t)d\mathbf{r}=-\oint_S\textbf{J}(\textbf{r},\textbf{r}^{\prime},t) \cdot d\textbf{S}+\delta(t).
\label{energy_conservation}
\end{equation}
Here $d\mathbf{S}$ is a vector normal to the surface element $dS$ and directed outward from the volume $V$.
Equation (\ref{energy_conservation}) is a conservation equation. It states that the variation of wave energy in the volume $V$ is given by a balance of energy emitted by the source (the second term on the r.h.s.) and energy leaving the volume through its surface $S$ (the first term on the r.h.s.).
Although inside a disordered medium the energy flux $\mathbf{J}(\mathbf{r}, \mathbf{r}^{\prime},t)$ can have arbitrary magnitude and direction consistent with the diffusion equation (\ref{equation_for_C}) and Fick's law, additional factors come into play at the surface of the medium. More specifically, for an open disordered medium of convex shape surrounded by the free space, no energy flux enters the medium from outside, provided that all sources are located inside the medium. This simple principle allows a derivation of boundary conditions for the intensity Green's function at the surface of disordered medium.
Following Zhu \emph{et al.} \cite{Zhu}, we consider a disordered medium occupying the half-space $z > 0$.
At a given point $\textbf{r}$ inside the medium, the Fourier component of intensity $I(\textbf{u},\textbf{r},\textbf{r}^{\prime},\Omega)$ propagating in the direction of a unit vector $\textbf{u}$, can be represented as \cite{Akkermans_Montambaux,Zhu}
\begin{eqnarray}
I(\textbf{u},\textbf{r},\textbf{r}^{\prime},\Omega) &=& C(\textbf{r},\textbf{r}^{\prime},\Omega)+\dfrac{3}{c}\textbf{J}(\textbf{r},\textbf{r}^{\prime},\Omega)\cdot\textbf{u}
\nonumber \\
&=& C(\textbf{r},\textbf{r}^{\prime},\Omega)-\dfrac{3}{c}D(\textbf{r},\Omega)\boldsymbol{\nabla}_\textbf{r}C(\textbf{r},\textbf{r}^{\prime},\Omega)\cdot\textbf{u},
\label{I(u,r)}
\end{eqnarray}
where Fick's law was used to obtain the second line.
The total flux of wave energy crossing some plane $z=\textrm{const}$ at point $\mathbf{r}$ in the positive direction of axis $z$ is
\begin{equation}
J_+(\textbf{r},\textbf{r}^{\prime},\Omega)=\dfrac{c}{4\pi}\int_0^{2\pi}d\phi \int_0^{\pi/2}d\theta \sin\theta\; u_z I(\textbf{u},\textbf{r},\textbf{r}^{\prime},\Omega),
\label{J(r)}
\end{equation}
where $u_z=\cos\theta$ is the $z$ component of $\textbf{u}$. We then substitute Eq. (\ref{I(u,r)}) into Eq. (\ref{J(r)}) and perform integrations over $\theta$ and $\phi$. This yields
\begin{equation}
J_+(\textbf{r},\textbf{r}^{\prime},\Omega)=\dfrac{C(\textbf{r},\textbf{r}^{\prime},\Omega) c}{4}-\dfrac{D(\textbf{r},\Omega)}{2} \dfrac{\partial C(\textbf{r},\textbf{r}^{\prime},\Omega)}{\partial z}.
\end{equation}
By requiring $J_+(\textbf{r},\textbf{r}^{\prime},\Omega)=0$ at the surface $z=0$ of the medium, we obtain the following boundary condition:
\begin{equation}
\left. C(\textbf{r},\textbf{r}^{\prime},\Omega)\right|_{z=0}-\dfrac{2}{c}\left.D(\textbf{r},\Omega)\right|_{z=0}\left. \dfrac{\partial C(\textbf{r},\textbf{r}^{\prime},\Omega)}{\partial z}\right|_{z=0}=0.
\label{boundary_condition}
\end{equation}
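The angular integrations above are elementary; as a check, the short SymPy computation below (keeping only the $z$-component of $\textbf{J}$, since the transverse components integrate to zero over $\phi$) reproduces the coefficients $c/4$ and $1/2$.
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
C, Jz, c = sp.symbols('C J_z c', positive=True)

# specific intensity along u, with only the z-component of J retained
spec = C + (3 / c) * Jz * sp.cos(theta)
J_plus = c / (4 * sp.pi) * sp.integrate(
    sp.integrate(sp.sin(theta) * sp.cos(theta) * spec,
                 (theta, 0, sp.pi / 2)),
    (phi, 0, 2 * sp.pi))
print(sp.simplify(J_plus))         # -> C*c/4 + J_z/2
\end{verbatim}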
For a medium of more complex but still convex shape, the above derivation can be repeated locally in the vicinity of each point of the medium surface $S$, assumed to be locally flat. This yields
\begin{equation}
C(\textbf{r},\textbf{r}^{\prime},\Omega)-\dfrac{2}{3} \ell \frac{D(\textbf{r},\Omega)}{D_B} \left(\mathbf{n}(\mathbf{r}) \cdot \boldsymbol{\nabla} \right) C(\textbf{r},\textbf{r}^{\prime},\Omega)=0,
\label{boundary_condition2}
\end{equation}
where $\mathbf{n}(\mathbf{r})$ is a unit inward normal to the surface $S$ at the point $\mathbf{r} \in S$. This equation is the boundary condition for the intensity Green's function at an open boundary. It can be generalized to include internal reflections of waves at the boundary by replacing $2\ell/3$ by a larger ``extrapolation length'' $z_0$ in front of the second term on its l.h.s., in complete analogy with Ref.\ \onlinecite{Zhu}.
\section{Conclusion}
\label{concl}
In this paper we derived the self-consistent (SC) equations of Anderson localization --- Eqs.\ (\ref{equation_for_C}) and (\ref{diffusion_coeff}) --- starting from the first principles. Mathematically, this was achieved by dressing the ladder propagator with ``interference loops'' made of maximally-crossed diagrams. Each loop was inserted into the ladder with the help of a Hikami-box diagram. The SC equations were then obtained by applying the self-consistency principle.
The essential difference of our derivation compared to the derivation of SC equations in the infinite medium is the position dependence of the sum of ladder diagrams $\Gamma_D(\mathbf{r}, \mathbf{r}^{\prime}, \Omega)$ with coinciding end points $\mathbf{r} = \mathbf{r}^{\prime}$. This position dependence leads to the appearance of an additional term, proportional to $\boldsymbol{\nabla}_{\mathbf{r}} \Gamma_D(\mathbf{r}, \mathbf{r}, \Omega)$, in a series expansion of $\Gamma_D(\mathbf{r}_1, \mathbf{r}_2, \Omega)$ around an arbitrary point $\mathbf{r}$. As a consequence, we have to keep an additional term in the expression of Hikami box employed to connect ladder and maximally-crossed diagrams in our approach. It is this term that finally allows us to derive SC equations of localization in a medium of finite size.
Although the condition $k \ell \gg 1$ was explicitly used to derive Eqs.\ (\ref{equation_for_C}) and (\ref{diffusion_coeff}), one can still hope that, similarly to SC equations in the infinite medium, they could yield reasonable results in the vicinity of the mobility edge and in the localized regime. According to Refs.\ \onlinecite{Lagendijk,SB1,SB2,Nicolas}, this indeed seems to be the case. However, one should understand that even though the general form of these equations might be largely universal in both diffuse and localized regimes, the numerical prefactor $6\pi/k^2 \ell$ in front of the second term in the SC equation for $D(\mathbf{r}, \Omega)$, Eq.\ (\ref{diffusion_coeff}), should not be taken too seriously because it originates from the calculation of complicated diagrams that was carried out in the limit $k \ell \gg 1$ only (see Appendix \ref{A}). When the result is extrapolated to $k \ell \lesssim 1$, this prefactor could vary and, in general, its dependence on $k \ell$ is likely to be more complex than just $1/(k \ell)^2$. In addition, the SC theory neglects interference processes insensitive to the breakdown of time-reversal invariance by, e.g., a strong magnetic field. The inclusion of such processes in the theoretical description would at least change the prefactor in Eq.\ (\ref{diffusion_coeff}). In Refs.\ \onlinecite{SB1,SB2,Nicolas}, for example, a larger prefactor was used in Eq.\ (\ref{diffusion_coeff}) to study the vicinity of the localization transition. This was justified by a comparison of some of the final results with those of the supersymmetric $\sigma$-model \cite{mirlin00}. Such a comparison indicates that the prefactor $6\pi/k^2 \ell$ in Eq.\ (\ref{diffusion_coeff}) has to be multiplied by 2 to obtain an exact correspondence between the two theoretical approaches \cite{SB1}.
Finally, SC theory of localization is a very convenient tool for description of realistic experimental situations, like the recent experiments on Anderson localization of light \cite{Maret}, microwaves \cite{zhang07}, ultrasound \cite{Page}, and matter waves \cite{lye05,clement05}. It can be adapted to almost any detail of a particular experiment (short pulses or focused beams, internal reflections on the sample surface, complex shapes or inhomogeneous scatterer density profiles of disordered samples, etc.). This gives SC theory a serious advantage as compared to other theories of Anderson localization.
{\it Note.} After this paper was submitted for publication, we became aware of the work of C. Tian \cite{tian08} who justifies the concept of the position-dependent diffusion coefficient using methods of supersymmetric field theory.
\acknowledgments
We thank Bart van Tiggelen for many fruitful discussions.
SES acknowledges financial support from the French ANR (project 06-BLAN-0096 CAROL) and the French Ministry of Education and Research.
\section{Introduction}
Decision Trees (DTs) and Random Forests (RFs) are arguably the most popular non-linear machine learning models of today.
In Kaggle's 2019 report on the \emph{State of Data Science and Machine Learning} \cite{Kaggle2019}, DTs and RFs appear as second most widely used techniques, right after linear and logistic regressions.
Moreover, decision trees are often considered interpretable \cite{Freitas2014} and hence have enjoyed a surge in popularity with the increasing interest in explainable artificial intelligence.
Nonetheless, efforts towards uncertainty-aware and reliable tree-based models are still comparatively scarce.
In this paper, we demonstrate that some of these shortcomings are addressed by Generative Forests (GeF{}s) \cite{Correia2020}, a class of deep probabilistic models that subsumes Random Forests.
In particular, we show in a number of classification tasks that GeF{}s enable new principled methods to i) estimate the uncertainty of \emph{each} of the model's predictions and ii) monitor the input distribution to detect out-of-domain samples or distribution shifts.
\section{Generative Forests}
Before discussing the main ideas of the paper, we introduce Generative Forests and the required notation.
As we focus on classification tasks, we denote the set of explanatory variables as $\mathbf{X} = \{X_1, X_2, \ldots, X_m\}$ and the target variable as $Y$.
As usual, we write realisations of random variables (or collections thereof) in lowercase; for example, $\mathbf{X}=\mathbf{x}$ or $Y=y.$
We assume the pair $(\mathbf{X},Y)$ is drawn from a fixed joint distribution $\mathbb{P}^*(\mathbf{X}, Y)$ with density $p^*(\mathbf{X}, Y)$ and that, while the true distribution $\mathbb{P}^*$ is unknown, we have a dataset $\mathcal{D}_n = \{(\mathbf{x}_1,y_1), \ldots, (\mathbf{x}_n,y_n)\}$ of $n$ i.i.d.~samples from $\mathbb{P}^*$.
Generative Forests are in fact a class of \emph{Probabilistic Circuits} (PCs) \cite{VanDenBroeck2019} satisfying smoothness and decomposability~\cite{Peharz2015}. PCs are a family of deep density representations facilitating many exact and efficient inference routines \cite{Darwiche2003, VanDenBroeck2019}.
In short, they are computational graphs with three types of nodes:
i) \emph{distribution nodes}, ii) \emph{sum nodes} and iii) \emph{product nodes}.
Distribution nodes compute a probability density (by an adequate choice of the underlying measure, this also subsumes probability mass functions) over some subset $\mathbf{X}' \subseteq \mathbf{X}$, that is, a normalised function mapping the state space of $\mathbf{X}'$ to the non-negative real numbers.
Sum nodes compute convex combinations over their children: if $v$ is a sum node and $\ch(v)$ its children, then $v$ computes $v(\mathbf{x}) = \sum_{u \in \ch(v)} w_{v,u} u(\mathbf{x})$, where $w_{v,u} \geq 0$ and $\sum_{u \in \ch(v)} w_{v,u} = 1.$
Product nodes compute the product over their children; if $v$ is a product node, then $v(\mathbf{x}) = \prod_{u \in \ch(v)} u(\mathbf{x}_u)$, with the collection $\{\mathbf{x}_u\}_{u\in\ch(v)}$ a partition (non-overlapping projections) of $\mathbf{x}$.
Finally, a (smooth and decomposable) PC represents the density over all variables (here $p(\mathbf{X}, Y)$) computed by its root node.
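For concreteness, a minimal Python sketch of the three node types is given below; the classes are toy constructions of ours, not the implementation of any particular PC library.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

class Leaf:
    def __init__(self, scope, dist):        # dist: value -> density
        self.scope, self.dist = scope, dist
    def __call__(self, x):
        return self.dist(x[self.scope])

class Sum:                                   # convex combination of children
    def __init__(self, children, weights):
        assert min(weights) >= 0 and np.isclose(sum(weights), 1.0)
        self.children, self.weights = children, weights
    def __call__(self, x):
        return sum(w * u(x) for w, u in zip(self.weights, self.children))

class Product:                               # children with disjoint scopes
    def __init__(self, children):
        self.children = children
    def __call__(self, x):
        return np.prod([u(x) for u in self.children])

# p(x0,x1) = 0.3 N(x0;0,1)N(x1;0,1) + 0.7 N(x0;2,1)N(x1;2,1)
g = lambda m: (lambda v: norm.pdf(v, loc=m))
root = Sum([Product([Leaf(0, g(0)), Leaf(1, g(0))]),
            Product([Leaf(0, g(2)), Leaf(1, g(2))])], [0.3, 0.7])
print(root(np.array([1.0, 1.0])))
\end{verbatim}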
Generative Forests are best understood by relating individual decision trees to Probabilistic Circuits.
For any given DT, we can construct a corresponding PC---a \emph{Generative Decision Tree} (GeDT{})---representing a full joint density $p(\mathbf{X},Y)$.
In a nutshell, each decision node is converted into a sum node and each leaf into a density with support restricted to the leaf's cell.
The training samples can be figured to be routed from the root node to the leaves, following the decisions at each decision/sum node.
The sum weights are given by the fraction of samples which are routed from the sum node to each of its children.
The leaf densities are learned on the data which arrives at the respective leaves.
\begin{figure}[t!]
\input{tree1.tex}
\vspace{-.25cm}
\caption{Illustration of a DT and its corresponding PC.}
\label{fig:dt-spn}
\end{figure}
Note that GeDT{}s are proper PCs over $(\mathbf{X},Y)$, albeit rather simple ones: they are tree-shaped and contain only sum nodes.
Nonetheless, GeDT{}s are in fact a class of models, as we are free to fit arbitrarily complex functions at the leaves; say graphical models, again PCs, or even advanced density estimators such as VAEs \cite{Kingma2014} or Flows \cite{Rezende2015}.
In this work, however, we focus on arguably the simplest density estimator, and model the density at the leaves as $p(\mathbf{X}, Y)=p(X_1)\ldots p(X_m)p(Y)$, with continuous and categorical variables represented by univariate normal and multinomial distributions, respectively.
We show these \emph{fully-factorised} leaves are already sufficient to equip standard RFs with effective and principled ways to detect outliers and estimate the robustness of each prediction.
The main semantic difference between DTs and GeDT{}s is that a DT represents a classifier, that is, a conditional distribution $f(\mathbf{x})$, while the corresponding GeDT{} encodes a full joint distribution $p(\mathbf{X}, Y)$---the latter naturally lends itself to classification via the conditional distribution $p(Y \,|\, \mathbf{x}) \propto p(\mathbf{x}, Y)$.
Note that, in theory, $p(Y \,|\, \mathbf{x})$ might differ substantially from $f(\mathbf{x})$, as every feature might influence classification in a GeDT{}, even if it never appears in any decision node of the DT.
Still, it is easy to see that if the distribution at the leaves satisfy $p(\mathbf{X}, Y)=p(\mathbf{X})p(Y)$, then a GeDT{} defines the same prediction function as the original DT.
Generative Forests are ensembles of GeDT{}s and can also be made equivalent to the original RF by an appropriate choice of density model at the leaves. However, instead of ensuring ``backwards compatibility'', in this paper we are interested in exploiting the generative properties of GeF{}s.
To that end, we extend GeF{}s to model a single joint by considering a uniform mixture of GeDT{}s (using a sum node over the trees). That is, instead of averaging over the conditional distributions of each of the \ensuremath{n_t}{} trees, we define a single model that represents the joint $p(\mathbf{X},Y)=\ensuremath{n_t}^{-1}\sum_{j=1}^{\ensuremath{n_t}} p_j(\mathbf{X},Y)$, where each $p_j$ comes from a GeDT. Since this model is essentially a mixture of the different trees, we call it GeF$^+${}.
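In code, classification under the GeF$^+${} mixture amounts to averaging the per-tree joints before normalising; the sketch below assumes, as an illustrative convention of ours, that each tree exposes $p_j(\mathbf{x}, y)$ as a vector over the class labels.
\begin{verbatim}
import numpy as np

def gef_plus_posterior(tree_joints, x):
    """p(Y | x) for a GeF+: tree_joints[j](x) returns the vector of
    joint densities p_j(x, y) over all class labels y."""
    joint = np.mean([t(x) for t in tree_joints], axis=0)  # (1/n_t) sum_j p_j
    return joint / joint.sum()                            # normalize over y
\end{verbatim}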
\section{Outlier Detection}
Most of machine learning theory relies on the assumption that training and test data are sampled from the same distribution. This is a reasonable assumption---there would be no hope for learning otherwise---but is often violated in practice, as real-world data is constantly evolving.
Reliable machine learning models should then be able to identify such violations to either suspend judgement and fail gracefully or signal the need for further data gathering and retraining.
\begin{figure}[h!]
\begin{center}
\scalebox{0.9}{\input{Figures/outlier_udl.pgf}}
\end{center}
\vspace{-.5cm}
\caption{Normalised histograms of $\log p(\mathbf{x})$ (KDE and GeF$^+$) and $\max_y p(y|\mathbf{x})$ (RF) of samples from red and white wine data.}
\label{fig:fx_wine}
\end{figure}
Generative models offer a natural and principled way to detect outliers or distribution shifts. As they innately fit the joint distribution of the training data, they are capable of estimating the likelihood of every new sample, flagging unlikely ones as potential anomalies.
In a GeF$^+${} this is done by monitoring the marginal $p(\mathbf{X})$, which comes at no extra cost; classification is performed over the joint $p(Y, \mathbf{X})$ and computing $p(\mathbf{x})$ only requires summing over all possible classes, $p(\mathbf{x}) = \sum_y p(y, \mathbf{x})$.
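The resulting outlier score can be sketched as follows; the helper names are hypothetical, and any model exposing the per-class joints $p(y, \mathbf{x})$ would do.
\begin{verbatim}
import numpy as np

def log_marginal(joint_per_class, x):
    """log p(x) = log sum_y p(y, x), computed stably in log space."""
    log_joints = np.array([np.log(p(x)) for p in joint_per_class])
    m = log_joints.max()
    return m + np.log(np.exp(log_joints - m).sum())

def flag_outliers(joint_per_class, X, threshold):
    """Flag samples whose marginal log-density falls below a threshold,
    e.g. a low percentile of log p(x) on held-out training data."""
    scores = np.array([log_marginal(joint_per_class, x) for x in X])
    return scores < threshold, scores
\end{verbatim}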
We illustrate outlier detection in GeF$^+${}s using the wine quality dataset \cite{Cortez2009} (where the class is a scale of quality of wine) with a variant of transfer testing \cite{Bradshaw2017}. We learn two different GeF$^+${}s, each with only one type of wine (red or white), and compute the log-density of unseen data (70/30 train-test split) for the two wine types.
As we see in the histograms of Figure~\ref{fig:fx_wine}, the marginal distribution over the joints does provide a strong signal to identify out-of-domain instances.
We compare GeF$^+${}s to a Gaussian Kernel Density Estimator (KDE) and to a common baseline for deep models \cite{hendrycks2016}, whereby the probability of the predicted class, $\max_y p(y|\mathbf{x})$, is used as a signal to detect outliers.
We see from the histograms and the ROC (receiver operating characteristic) scores that our models largely outperform the baseline while being comparable to KDEs, even though the structure of a GeF$^+${} is learned in a discriminative manner.
We note that previous works have already proposed using Random Forests for outlier detection \cite{Liu2008}.
However, these models are typically directly trained to identify anomalies and have that as their sole purpose, while GeF$^+${}s are unique in that, while being primarily classifiers (or regressors) they also effectively detect out-of-domain samples.
\section{Robust Classification}
Outlier detection is related to the concept of \emph{vagueness} or \emph{epistemic uncertainty}, that is, the lack of sufficient statistical support for issuing a prediction.
However, machine learning models are often confronted with cases where the data supports the thesis that a given instance is associated with more than a single class with high probability. That is commonly referred to as \emph{aleatory uncertainty}.
Effectively quantifying both types of uncertainty is indispensable in critical applications, where overconfident predictions may lead to catastrophic failures.
One common approach to estimate the model's confidence in classification tasks is to manipulate the reported probability $p(Y|\mathbf{x})$ \cite{Guo2017,Liang2017}.
Still, this is not only overly simplistic but also fails to distinguish the types of uncertainty.
\begin{figure}[h!]
\begin{center}
\scalebox{0.8}{\input{Figures/robust.pgf}}
\end{center}
\vspace{-.75cm}
\caption{Accuracy of predictions with $\epsilon$-robustness (a) below and (b) above different thresholds for 12 OpenML datasets. Some curves end abruptly because we only computed the accuracy when 30 or more data points were available for a given threshold.}
\label{fig:acc_rob}
\end{figure}
GeF$^+${}s offer an arguably more principled approach rooted in the notion of \emph{robustness} \cite{Dietterich2017} as obtained with credal sum-product networks~\cite{Maua2017,Maua2018}.
In a nutshell, we evaluate how much we can perturb all parameters of the model without changing its prediction on a given instance.
Formally, we quantify this perturbation with the concept of $\epsilon$-contamination for each of the sum nodes in a PC.
If $w$ is the vector of weights of a given sum node, then its $\epsilon$-contamination is given by the set
$$
\mathcal C_{w, \epsilon} = \{(1-\epsilon)w + \epsilon v: v_j \geq 0, \sum_j v_j = 1\}.
$$
This definition naturally leads to the idea of $\epsilon$-robustness: the largest $\epsilon$ for which all parameter configurations in $\mathcal C_{w, \epsilon}$ yield the same classification. We run such analysis for all of the nodes at once: let $\mathcal{C}_{\epsilon}$ represent the collection $\{\mathcal C_{w, \epsilon}\}$ for all sum nodes in the PC and let $\mathbf{w}$ be one possible choice of a $w$ in each of the nodes.\footnote{Multinomial leaf nodes can be contaminated in the very same manner as sum nodes, while normal leaf nodes are contaminated in their means while keeping variance fixed~\cite{dewit19a}.} We compute whether there is a label $y$ of the class such that
\begin{equation*}
\forall y'\neq y: \max_{\mathbf{w}\in\mathcal{C}_{\epsilon}} \mathbb{E}_{\mathbf{w}}[\indf{Y=y'} - \indf{Y=y}|\mathbf{x}] < 0\, ,
\label{eq:rob}
\end{equation*}
\noindent and if so, we declare $y$ robust for threshold $\epsilon$.
The maximum $\epsilon$ such that this is true (which we find by binary search) is what we call the $\epsilon$-robustness of a given prediction~\cite{Maua2018}.
Note that, since GeF$^+${}s have a tree structure and sum nodes with out-degree bounded by a constant, the time for computing $\epsilon$-robustness in GeF$^+${}s is linear in the input size \cite{Correia2019,Maua2018}.
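To illustrate the search itself, the sketch below assumes a hypothetical oracle \texttt{is\_robust(eps)} that checks the inequality above for a fixed instance (e.g.\ via the linear-time computation cited above) and that robustness is monotone in $\epsilon$; bisection then locates the largest robust $\epsilon$.
\begin{verbatim}
def epsilon_robustness(is_robust, tol=1e-4):
    # largest eps in [0, 1] for which is_robust(eps) holds, assuming
    # the prediction is robust at eps = 0 and robustness is monotone
    lo, hi = 0.0, 1.0
    if is_robust(hi):
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_robust(mid):
            lo = mid
        else:
            hi = mid
    return lo
\end{verbatim}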
We experiment with $\epsilon$-robustness in a selection of 12 datasets from the OpenML-CC18 benchmark\footnote{\url{https://www.openml.org/s/99/data}} \cite{OpenML2013}.
Once more, we use GeF$^+${}s composed of 30 trees with fully-factorised leaves and a 70/30 train-test split.
In Figure~\ref{fig:acc_rob}, we defined a number of robustness thresholds and, for each of them, we computed the accuracy of the models over instances for which $\epsilon$ was above and below the threshold. We clearly see there is a positive correlation; the higher the $\epsilon$-robustness of a prediction, the more likely it is to be correct. Obviously, the computation of robustness does not require knowing the true labels.
This concept of robustness has a clear interpretation. Given that for $\epsilon=0$ we have $\mathcal C_{w, \epsilon}=\{w\}$ and for $\epsilon=1$ we have the whole simplex, we can interpret the value of $\epsilon$ as the ``percentage'' of variation that we allow in the parameters for \emph{each prediction}.
That is in contrast to typical uncertainty measures where individual uncertainty values are hard to interpret in isolation \cite{Mentch2016, Shaker2020}.
\section{Discussion and Further Experiments}
We also trained GeF$^+${}s on the Mnist \cite{Lecun2010} and Fashion-Mnist \cite{xiao2017} datasets to visually evaluate the samples with different $\epsilon$-robustness values.
In Figure~\ref{fig:mnist}, we report test instances with lowest and highest $\epsilon$-robustness for each dataset. We see that samples for which the prediction is less robust are not only less likely to be correctly classified but also often contain irregular shapes and patterns, justifying the model's uncertainty.
\begin{figure}[t!]
\centering
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/mnist_min.png}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/mnist_max.png}
\end{subfigure}
\hfill
\vskip -10.pt
\hfill
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/fashion_min.png}
\end{subfigure}
%
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/fashion_max.png}
\end{subfigure}
\hfill
\vspace{-0.35cm}
\caption{Samples from (Fashion-)Mnist datasets with lowest (left) and highest (right) $\epsilon$-robustness in the test set. Correctly and incorrectly classified examples are shown in green and red, respectively.}%
\label{fig:mnist}%
\end{figure}
We emphasise that outlier detection and robustness estimation are related but different notions, and that GeF$^+${}s effectively distinguish them.
Figure~\ref{fig:log_mnist} shows a few of the most likely and unlikely (Fashion-)Mnist samples under the training data distribution. While samples are ordered by their marginal density $p(\mathbf{x})$, the background light is proportional to their $\epsilon$-robustness, with darker colours for larger $\epsilon$. We can clearly see how these measures differ as, for example, although the model deems $1$s highly likely, $\epsilon$-robustness seems to vary with the shape/orientation of the trace.
Moreover, these two measures complement each other and allow us to better understand the underlying cause of the model's uncertainty. Notably, for a consistent model---one that fits the true data generating distribution if given sufficient data---and a sample $\mathbf{x}$ with high $p(\mathbf{x})$ and low $\epsilon$-robustness, one may infer there is high aleatory uncertainty.
A number 9 with an incomplete circle at the top is a good example of a pattern in handwritten digits that, albeit likely, is still hard to tell apart from a number 4.
Conversely, an instance might be misshaped and hence unlikely, but still be associated to high robustness values. In that case, epistemic uncertainty is dominant, that is, the model has not been trained on similar examples and its high confidence estimate should not be trusted.
Distinguishing the two types of uncertainty is not only fundamental to better understand the task at hand but also to establish the correct course of action; namely, suspend judgement when faced with aleatory uncertainty or collect more data and possibly retrain the model in cases of epistemic uncertainty.
\begin{figure}[t]
\centering
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/mnist_log_min.png}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/mnist_log_max.png}
\end{subfigure}
\hfill
\vskip -10.pt
\hfill
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/fashion_log_min.png}
\end{subfigure}
%
\begin{subfigure}
\centering
\includegraphics[width=.225\textwidth]{Figures/fashion_log_max.png}
\end{subfigure}
\hfill
\vspace{-0.35cm}
\caption{Samples from (Fashion-)Mnist datasets with lowest (left) and highest (right) $p(\mathbf{x})$ in the test set. The background light is proportional to the $\epsilon$-robustness.}%
\label{fig:log_mnist}%
\end{figure}
In all experiments\footnote{The source code will be available at the authors' web-page.}, the trees are made ``deep'', that is, we keep splitting the feature space until each leaf cell contains either only samples of one class or a single sample. That means the average depth of our models is $\Theta(\log n)$, with $n$ the number of samples in the training data \cite{Louppe2014}.
Such deep trees make GeF$^+${}s highly expressive, while the overall ensemble, by and large, avoids overfitting.
It is also worth noticing that our models are learned as regular Random Forests, with bootstrapping and the Gini impurity criterion, and afterwards converted to GeF{}s.
Moreover, we use fully-factorised leaves, $p(\mathbf{X}, Y)=p(X_1)\ldots p(X_m)p(Y)$, which are trivial to learn and compute but still achieve results similar to the original RF (identical predictions in each tree of the RF~\cite{Correia2020}).
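As a rough sketch of this conversion (with synthetic data, scikit-learn standing in for the authors' implementation, and Gaussian leaf marginals as one concrete choice), one can route the training samples to leaves and fit a fully-factorised model per leaf:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=30).fit(X, y)
leaf_ids = rf.apply(X)  # (n_samples, n_trees) leaf indices

# per tree and leaf: feature-wise Gaussian parameters and class
# frequencies, i.e. a fully factorised p(X, Y) = p(X_1)...p(X_m)p(Y)
leaf_models = []
for t in range(leaf_ids.shape[1]):
    per_leaf = {}
    for leaf in np.unique(leaf_ids[:, t]):
        idx = leaf_ids[:, t] == leaf
        per_leaf[leaf] = (X[idx].mean(0), X[idx].std(0) + 1e-6,
                          np.bincount(y[idx], minlength=2) / idx.sum())
    leaf_models.append(per_leaf)
\end{verbatim}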
\section{Conclusion}
While more experimentation is still needed, initial results indicate Generative Forests are a promising extension of Random Forests that effectively leverages the properties of Probabilistic Circuits to detect out-of-domain samples and estimate the robustness of its own predictions.
We believe this is not only an important step towards more reliable machine learning but also a promising avenue for future research on deep hybrid discriminative-generative models.
\section{Introduction}
The stock market crash of 2008 is one of the largest stock market crashes in the history of capitalist economies. During the period from 2004 through 2013, the Dow Jones Industrial Average hit a high of 14,164.43 on October 9, 2007. For the eight trading days between October 1 and October 10, 2008, the DJIA fell continuously from 10,831.07 to 8,451.19, a 22.11 percent decline. The DJIA hit a bottom of 6,594.44 on March 5, 2009. In less than 18 months, the index had declined more than 50 percent. The crisis was not limited to the US market. Markets worldwide were simultaneously in free fall. Figure 1 shows the mean of the logarithmic share price for 7,796 worldwide companies in the period 2004 through 2013. This value peaked at 1.71 in 2007 and bottomed at 1.26 in 2008. The mean share price declined by 36 percent in one year.
The liquidity crunch in U.S. and European short-term money markets began with an incident in August 2007 involving the “complete evaporation of liquidity” of three hedge funds invested in U.S. asset-backed securities (ABS) affiliated with BNP Paribas, one of the largest banks in France. U.S. and European financial institutions that provided liquidity support for the redemption of asset backed commercial paper (ABCP) were obliged to raise funds. Consequently, liquidity pressures in funding markets rose, spurring a liquidity crisis in short-term money markets. Amid this abrupt tightening of global financial markets, which triggered Lehman Brothers, a large investment bank, to file for bankruptcy in September 2008, investors in large numbers rapidly withdrew their funds from the stock markets, causing severe global disruptions.
In an efficient market (Fama 1970), stock price volatility is linked to changes in company fundamentals. To explore this notion, numerous attempts have been made by scholars to determine whether stock price volatility systematically exceeds levels which could be justified by changes in fundamentals. (See Shiller 1981, LeRoy and Porter 1981, Mehra and Prescott 1985, De Bondt and Thaler 1985, Fama and French 1992, Jegadeesh and Titman 1993, etc.) The issue remains controversial.
This paper examines the question of whether the stock market crash of 2008 was an efficient response to financial shocks that was in line with fundamentals or was caused by investor panic. In order to produce estimates of the fundamentals required for our study, we construct a panel regression model using three financial indicators—dividends per share, cash flow per share, and book value per share—as explanatory variables for share price. These financial indicators are the representative variables commonly used to evaluate a firm's business performance. We perform the panel analysis using a large database gleaned from the balance sheets of 7,796 of the world's largest listed companies over a 10-year period (2004-2013). The two-way fixed effects model was selected as the best panel regression model for our work, based on standard tests for panel regression models.
The two-way fixed effects model has two fixed effects: the individual fixed effects that account for an individual company's heterogeneity, including such factors as the company's diversity of corporate governance and the quality of its employees; and the time fixed effects that indicate variables that fluctuate over time but are fixed across companies. The time fixed effects reflect various shocks, including financial shocks.
We define fundamentals as the theoretical value that omits the time fixed effects from our estimated regression model. One advantage of our model is that it can capture unobservable factors explaining company fundamentals. We investigated the distributions of the divergence rate, which is defined as the logarithmic difference between the share price and the fundamentals, and found that share prices deviated substantially from company fundamentals in the period 2006 to 2008. The distributions of the divergence rate deviated in the positive direction in the boom period from 2006 through 2007, but shifted significantly from the positive side to the negative side in 2008. It is clear that share prices (on average) were overvalued against the fundamentals during the boom period from 2006 to 2007, while in 2008 they were significantly below the fundamentals. In addition, the distributions of the divergence rate were negatively skewed and leptokurtic as compared to the distributions in other periods. It is notable that the negative skewness and leptokurtosis of the distributions of the divergence rate are indicative of the danger of a bubble. We conclude that the bubble of 2006 and 2007, and the subsequent crash of 2008, cannot be linked to changes in company fundamentals, but rather were likely caused by factors such as the psychological panic of investors.
This paper is organized as follows: Section 2 describes the data used in this study; Section 3 discusses the panel data regression model for company fundamentals; Section 4 examines the divergence rate, that is, the deviation of share prices from the fundamentals; Section 5 gives concluding remarks.
\section{Data}
The data for this paper were collected from the OSIRIS database provided by Bureau van Dijk containing company financial statements and reports for nearly 80,000 companies listed around the world.
One difficulty with international comparisons of financial statements collected from multiple sources is that the formats of the statements tend to be different by source and country. As a result, inconsistencies across sources can cause problems of comparability of the data. Mismeasurements can lead to biased results. The merit of OSIRIS is that it provides information on company financials in a standardized format expressed in local currencies and US dollars. Thus, one can use OSIRIS financial data for worldwide companies without encountering comparability problems.
In this paper, we perform a statistical investigation of stock prices and financial indicators per share for 7,796 companies over the 10-year period from 2004 through 2013. As is evident, the database contains data for time periods before and after the global financial crisis.
\section{Panel data analysis}
In this section, we examine the panel data described in the previous section. The aim of the panel data analysis is to build a model estimating company fundamentals, which establish the fair value of a share estimated from a company's balance sheet.
To calculate company fundamentals, we use three financial indicators as the explanatory variables for share price—{\itshape dividends per share, cash flow per share, and book value per share}. These three indicators are commonly used by financial professionals and investors in fundamental analysis as a tool for identifying the divergence of share price in the market from the intrinsic value of a company. These same three financial indicators were used as the explanatory variables for share price in our previous study (Kaizoji and Miyano 2016). In that study, a simple cross-sectional analysis of share prices per year was conducted to explain the power law for share price. A weakness of such a cross-sectional analysis is that it is unable to consider company-specific characteristics. Given the presence of unobserved heterogeneity among companies—qualitative factors such as corporate governance and the quality of company employees—we develop the panel regression model in this study in order to express this unobserved heterogeneity.
Based on our analysis of the panel data, a two-way fixed effects model of share prices was selected as the best share price model. The two-way fixed effects model controls for (i) individual fixed effects which control for unobserved factors that differ between companies but are constant over the 10-year period for each company, and (ii) period fixed effects which control for unobserved factors that are shared by all companies at a specific point in the year and are not accounted for by the three financial indicators.
\subsection{The explanatory variables of share price}
Dividends per share, cash flow per share, and book value per share are the financial indicators commonly used to evaluate fundamentals. Most elementary stock valuation methods are based on company profits and shareholder equity. We introduce briefly these financial indicators.
In our OSIRIS database, cash flow per share is defined as net income plus depreciation divided by the number of outstanding shares of company stock. Earnings per share (EPS) is often used in fundamental analysis as an alternative to cash flow per share. However, we prefer cash flow per share to earnings per share since, as many analysts point out, ``earnings can be manipulated more easily than cash flow.'' Dividends per share is calculated as the total dividends received by shareholders for each outstanding share of the company. Book value per share is the amount of money that a shareholder would receive if a company were to liquidate. Book value per share is often used as a measure to judge whether a share is overvalued or undervalued. If the share price exceeds book value per share, then the share price may be overvalued in the stock market, and vice versa.
\subsection{Panel data regression models}
We performed our panel data analysis using the financial indicators introduced in the previous section. All of the distributions of share price, dividends per share, cash flow per share, and book value per share are highly skewed. Therefore, we used a logarithmic transformation of the variables. The log transformation can be useful in satisfying the regression assumptions for such panel data. The panel data regression model is written as
\begin{equation}
lnY_{it}=a+b_{1}lnX_{1,it}+b_{2}lnX_{2,it}+b_{3}lnX_{3,it}+u_{it} \quad i=1,\dots, N; \quad t=1,\dots T
\end{equation}
where $Y_{it}$ denotes the dependent variable (the share price) for company $i$ in year $t$;
$a$ denotes a constant; $X_{1,it}$ is the dividends per share of company $i$ in year $t$; $X_{2,it}$ is the cash flow per share of company $i$ in year $t$; $X_{3,it}$ is the book value per share of company $i$ in year $t$; $u_{it}$ denotes the error term.
We estimate the model in equation (1) using the Panel Least Squares method. In the panel regression mode, the error term, $u_{it}$ can be assumed to be divided into a pure disturbance term and an error term due to other factors. Assuming a two-way error component model with respect to error, the factors other than disturbance are (i) factors due to unobservable individual effects, and (ii) factors due to unobservable time effects. That is, the error term can be written as
\begin{equation}
u_{it}=\mu_{i}+\gamma_{t}+\epsilon_{it}
\end{equation}
where $\mu_{i}$ denotes unobservable individual effects, $\gamma_{t}$ denotes unobservable time effects, and $\epsilon_{it}$ denotes pure disturbance.
If both $\mu_{i}$ and $\gamma_{t}$ are equal to zero, equation (1) is estimated using the pooled OLS method. If either $\mu_{i}$ or $\gamma_{t}$ is equal to zero, equation (2) is a one-way error component model. If both $\mu_{i}$ and $\gamma_{t}$ are not equal to zero, equation (2) is a two-way error component model. There are two methods for estimating the error term in equation (2): fixed effects estimation and random effects estimation. Therefore, the available estimation models are a pooled OLS model, an individual fixed effects model, a time fixed effects model, a two-way fixed effects model, an individual random effects model, a time random effects model, and a two-way random effects model.\footnote{The two-way random effects model is unavailable since we use unbalanced panel data.}
We estimated the models described above and, after appropriate model selection tests, selected the two-way fixed effects model as the best model. The tests used in this study include the likelihood ratio test and the F-test for choosing between the pooled OLS model and the fixed effects model, and the Hausman test for choosing between the random effects model and the fixed effects model. The selection between the pooled OLS model and the random effects model is based on the simple test proposed by Wooldridge (2010).\footnote{Wooldridge (2010, p.~299) proposed a method that uses residuals from the pooled OLS to check for the existence of serial correlation.}
The two-way fixed effects model is written as
\begin{gather}
lnY_{it}=a+b_{1}lnX_{1,it}+b_{2}lnX_{2,it}+b_{3}lnX_{3,it}+\epsilon_{it} \notag \\
a=a_{0}+\mu_{i}+\gamma_{t}
\end{gather}
where $a_{0}$ is a constant term common to all companies, $\mu_{i}$ denotes the individual fixed effects, and $\gamma_{t}$ denotes the time fixed effects; $\mu_{i}$ is constant over time and $\gamma_{t}$ is constant across the cross section. $\epsilon_{it}$ is the pure disturbance. The individual fixed effects, $\mu_{i}$, account for an individual company's heterogeneity and include such factors as the company's diversity of corporate governance and the quality of its employees. The time fixed effects, $\gamma_{t}$, indicate variables that fluctuate over time but are fixed across companies. The time fixed effects reflect various shocks, including financial shocks.
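A minimal sketch of estimating equation (3), assuming the Python package \texttt{linearmodels} (whose formula interface accepts \texttt{EntityEffects} and \texttt{TimeEffects} terms) and synthetic data in place of the OSIRIS panel, is as follows; clustering the standard errors by company is one common way to approximate the White period correction applied below.
\begin{verbatim}
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
firms, years = 100, 10
idx = pd.MultiIndex.from_product(
    [range(firms), range(2004, 2004 + years)], names=["company", "year"])
df = pd.DataFrame(rng.lognormal(size=(firms * years, 4)), index=idx,
                  columns=["P", "DPS", "CFPS", "BVPS"])
for c in ["P", "DPS", "CFPS", "BVPS"]:
    df["ln" + c] = np.log(df[c])

# two-way fixed effects: lnP ~ a0 + b1 lnDPS + b2 lnCFPS + b3 lnBVPS
#                              + mu_i (entity) + gamma_t (time)
mod = PanelOLS.from_formula(
    "lnP ~ 1 + lnDPS + lnCFPS + lnBVPS + EntityEffects + TimeEffects",
    data=df)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
\end{verbatim}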
Table 1 presents the estimates produced for the two-way fixed effects model. The first line shows the estimated intercept and estimates of the coefficients of the explanatory variables—dividends per share, cash flow per share, and book value per share. The second line shows the standard error of the estimates modified using the White period method, since we detected heteroscedasticity for the residuals and serial correlation of the residuals in the two-way fixed effects model.
\begin{table}
\begin{center}
\caption{ Results of estimates for the two-way fixed effects model .}
\begin{tabular}{lcccc} \hline
& $a_{0}$ & $b_{1}$ & $b_{2}$ & $b_{3}$ \\ \hline
coefficient &1.485 & 0.137 & 0.208 &0.378 \\
Std. error &0.032 & 0.007 & 0.007 & 0.019 \\
t-Statistic & 46.07 & 19.45 & 28.46 & 19.55 \\
$p$-value & 0.000 & 0.000 & 0.000 & 0.000 \\ \hline
R-squared & 0.969 & & & \\
$p$-value (F-statistic) & 0.000 & & & \\ \hline
\end{tabular}
\end{center}
\end{table}
The coefficients of the three financial indicators have a positive sign and are statistically significant. The $p$-values for all explanatory variables are near zero, indicating that the null hypothesis—that the coefficient is equal to zero—can be rejected in each case. The $p$-value for the overall F-test is also very close to zero, and the R-squared value (0.97) is high, indicating that the regression model explains the variation in stock prices very well. More concretely, the theoretical value explains 97 percent of the total variation in the stock prices about their average. The positive signs of the estimates are consistent with fundamental analysis. For example, the coefficient of dividends per share suggests that a 1 percent increase in dividends per share is associated with an approximately 0.14 percent rise in the share price.
Estimates of the two-way fixed effects model for share price, $ln\hat{Y}_{it}$ are written as
\begin{equation}
ln\hat{Y}_{it}=\hat{a}+\hat{b}_{1}lnX_{1,it}+\hat{b}_{2}lnX_{2,it}+\hat{b}_{3}lnX_{3,it}
\end{equation}
We call $\hat{Y}$ the theoretical value of the share price. Figure 2 is a scatter diagram of the theoretical value of the logarithmic share price plotted against the actual logarithmic share price. Figure 2 suggests that the relationship between the theoretical value and the actual share price is strongly positive.
Figure 3 shows the relative frequency distribution of the individual fixed effects (which are constant over time) for 6,209 companies.\footnote{Since the 7,796 companies used in this study form an unbalanced panel, individual fixed effects were obtained for only 6,209 companies.} The mean of the individual fixed effects is -0.054; the standard error is 0.01. The distribution indicates a wide heterogeneity in the unobservable capability of the studied companies.
Figure 4 shows the time effects reported separately for each year. The movement of these time fixed effects is considered to be the result of temporal shocks to the stock market. We discuss the time fixed effects further in Section 4.
\subsection{Company fundamentals}
To estimate the fundamentals of individual companies, we eliminate the time fixed effects from the two-way fixed effects model, while retaining the individual fixed effects. The reason for eliminating the time fixed effects term is that these effects are considered to be the effects of temporal financial and economic shocks on share price. We retained the individual fixed effects because these effects represent the individual company's unobserved heterogeneity as reflected in its share price. Therefore, we define the logarithmic form of a company's fundamentals as
\begin{equation}
ln\tilde{Y}_{it}=\hat{a_{0}}+\hat{\mu_{i}}+\hat{b}_{1}lnX_{1,it}+\hat{b}_{2}lnX_{2,it}+\hat{b}_{3}lnX_{3,it}
\end{equation}
where $\tilde{Y}_{it}$ denotes the fundamentals of company $i$ in year $t$.
This model of company fundamentals serves our purpose of investigating the deviation of a company's share price from its fundamentals. The model differs from other approaches to fundamental analysis and offers substantial practical value. The estimates of the coefficients in equation (5), $\hat{a}_{0}$, $\hat{\mu}_{i}$, $\hat{b}_{1}$, $\hat{b}_{2}$, and $\hat{b}_{3}$, are constant over time. Therefore, if we can obtain values for dividends per share, cash flow per share, and book value per share for a company, we can easily estimate the company's fundamentals.
\subsection{Divergence rate of share price from company fundamentals}
We use the company fundamentals ($\tilde{Y}$) model for the 10-year period from 2004 through 2013 to pursue our primary goal: to investigate the deviation of share price from company fundamentals. The divergence rate between share price and company fundamentals is defined as
\begin{equation}
D_{it}=lnY_{it}-ln\tilde{Y}_{it}
\end{equation}
where $Y_{it}$ denotes the share price of company $i$ in year $t$, and $\tilde{Y}_{it}$ denotes the fundamentals of company $i$ in year $t$.
The divergence rate, $D_{it}$, for company $i$'s share price is the logarithmic difference between company $i$'s share price and company $i$'s fundamentals in year $t$. We calculate the divergence rate $D_{it}$ for each company for each year. Table 2 shows basic statistics for the divergence rates for each year over the 10-year period from 2004 through 2013. Figure 5 shows the mean of the divergence rates and indicates that there is substantial variation in the divergence rate over time.
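The per-year summaries can be reproduced along the following lines; this is a hypothetical helper assuming aligned arrays of log share prices, log fundamentals from equation (5), and year labels (note that \texttt{scipy} reports excess kurtosis, so the convention may differ from Table 2).
\begin{verbatim}
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis

def divergence_table(lnP, ln_fund, year):
    # D_it = ln Y_it - ln Ytilde_it (equation (6)), summarised by year
    D = pd.Series(np.asarray(lnP) - np.asarray(ln_fund))
    g = D.groupby(np.asarray(year))
    return pd.DataFrame({"Mean": g.mean(), "Std. Dev.": g.std(),
                         "Kurtosis": g.apply(kurtosis),  # excess kurtosis
                         "Skewness": g.apply(skew),
                         "Observations": g.size()})
\end{verbatim}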
\begin{table}
\begin{center}
\caption{Basic statistics of the divergence rate for each year.}
\begin{tabular}{lccccc} \hline
Year & Mean & Std. Dev. & Kurtosis & Skewness & Observations \\ \hline
2004 & 0.063 &0.368 & 4.3 & 0.0 &4807 \\
2005 & 0.148 &0.390 & 3.7 &-0.3 &4884 \\
2006 & 0.161 &0.339 &60.7 &-3.9 &4822 \\
2007 & 0.109 &0.386 &61.8 &-4.0 &4914 \\
2008 &-0.342 &0.356 & 3.6 &-0.3 &4383 \\
2009 &-0.048 &0.275 & 4.0 & 0.6 &4364 \\
2010 &-0.007 &0.276 & 1.9 & 0.6 &4675 \\
2011 &-0.103 &0.265 & 2.7 & 0.0&4719 \\
2012 &-0.055 &0.302 &16.3 &-0.8 &4770 \\
2013 & 0.045 & 0.330& 2.1 & 0.3& 4823 \\ \hline
\end{tabular}
\end{center}
\end{table}
As can be seen here, the mean divergence rate is more than 0.1 for the years 2005 to 2007, during which time the world economy and financial markets enjoyed a boom period. The mean then fell sharply, from 0.1 in 2007 to minus 0.34 in 2008, amid the global financial crisis. The implication is that stocks were, on average, bought excessively from 2005 through 2007, and, on average, overly sold in 2008 relative to the fundamentals. It is clear that the reason for this precipitous fall in the average divergence rate in 2008 was the global financial crisis of that year.
We also investigate the distribution of the divergence rate. Figure 6 shows the distribution for the period from 2006 to 2008, which includes the period before and during the global financial crisis. The figure shows clearly that the distribution of the divergence rate shifted drastically towards the minus side from 2007 to 2008. On the other hand, Figure 7 shows the distribution of the divergence rate for the years 2009 through 2013, which is not a period of bubble and panic but rather is relatively normal. During the stock market boom (2006-2007), the distributions of the divergence rate are pulled in the positive direction, then suddenly move in the opposite (negative) direction in 2008. In other words, the distribution of the divergence rate is unimodal in normal times, shifting to bimodal in times of bubble and crash. Some authors draw an analogy between non-equilibrium phase transitions and the collapse of a speculative bubble. (See Chowdhury and Stauffer 1999, Kaizoji 2000, Boland 2009). On the assumption that trading in the stock market can be described by an Ising spin model, a phase transition from a bull market to a bear market can be characterized as a stock market crash. Our empirical finding is in agreement with this theoretical hypothesis. (See Kaizoji 2000)
\section{Conclusion}
In this paper, we examine the deviation of share price from company fundamentals. Using company balance sheet data, we propose a panel regression model of share prices for 7,796 companies listed worldwide over the 10-year period from 2004 to 2013. We find that a two-way fixed effects model of share price that uses three financial indicators—dividends per share, cash flow per share, and book value per share—as the explanatory variables fits very well to the panel share price data. We estimate company fundamentals by removing the time fixed effects from the two-way fixed effects model, recognizing that the time fixed effects represent the effect of temporary shocks on share price. One advantage of our model of fundamentals is that we are able to quantitatively estimate unobservable factors in company fundamentals using the individual fixed effects. More concretely, by using our model (i) the parameters of the model of fundamentals can be estimated with a panel data regression, and (ii) unobserved heterogeneity among companies can be quantified by using the individual fixed effects. Our model of fundamentals is of value both to researchers and to investors.
Having established an effective model for determining company fundamentals, we then investigate the divergence rate—measuring the deviation of a company's share price from the company's fundamentals. The mean divergence rates are positive in the years from 2005 to 2007 but declined drastically to a large negative value in 2008. These results suggest that share prices were overvalued, on average, during the period of the financial boom, but were significantly undervalued, on average, due to the stock market crash caused by the global financial crisis in 2008. The results of our empirical study provide evidence of excessive volatility.
The financial crisis of 2008 plunged the world economy into a deep and prolonged recession and raises the question of how financial diseases are disseminated. Examining the mechanisms for such propagation is of interest and will provide an added focus for our future research.
\section{Acknowledgment}
This research was supported by JSPS KAKENHI Grant Numbers 2538404 and 2628089.
\section{Introduction}
The probability density function (PDF) on $z_l \in [0,1]$ $(l=1,\dots,n)$
\begin{equation}\label{1.1}
{1 \over S_n(\alpha_1,\alpha_2,\tau)} \prod_{i=1}^n z_i^{\alpha_1 - 1} (1 - z_i)^{\alpha_2 - 1}
\prod_{1 \le j < k \le n} |z_j - z_k|^{2 \tau},
\end{equation}
where
\begin{eqnarray}\label{1.1a}
S_n(\alpha_1,\alpha_2,\tau) & := &
\int_{[0,1]^n} \prod_{i=1}^n z_i^{\alpha_1 - 1} (1 - z_i)^{\alpha_2 - 1}
\prod_{1 \le j < k \le n} |z_j - z_k|^{2 \tau} \, dz_1 \cdots dz_n \nonumber \\
& = &
\prod_{j=0}^{n-1} {\Gamma (\alpha_1 + j\tau)
\Gamma (\alpha_2 + j\tau)\Gamma(1+(j+1)\tau) \over
\Gamma (\alpha_1 + \alpha_2 + (n + j-1)\tau) \Gamma (1 + \tau )},
\label{3.2}
\end{eqnarray}
is the Selberg integral, plays a fundamental role in the theories of random matrices and
Calogero-Sutherland quantum many body systems (see e.g.~\cite{Fo02}). In random matrix theory,
(\ref{1.1}) with $(\alpha_1,\alpha_2,\tau) = (a+1,b+1,\beta/2)$ defines the Jacobi
$\beta$-ensemble. For $\beta = 1,2$ and 4, and certain $a,b$, this can be realized as the
eigenvalue PDF occurring in the analysis of correlation coefficients associated with
Gaussian data sets, and also as the singular values of sub-matrices formed from various
unitary matrices chosen according to the Haar measure. For general $\alpha_1,\alpha_2, \tau > 0$
there are constructions of (\ref{1.1}) relating to similarity reductions of unitary matrices to
Hessenberg form \cite{KN04}, to block diagonal form \cite{ES06a}, and to the generalized eigenvalue problem
for certain tridiagonal matrices \cite{FR02b,Zh99}.
Upon the change of variables $z_i = \sin^2 \phi_i$, $0 < \phi_i < \pi/2$ $(i=1,\dots,n)$,
the PDF~(\ref{1.1}) becomes proportional to
$$
\prod_{i=1}^n (\sin^2 \phi_i)^{\alpha_1'} (\cos^2 \phi_i)^{\alpha_2'}
\prod_{1 \le j < k \le n} | \sin^2 \phi_j - \sin^2 \phi_k|^{2 \tau}
$$
with $\alpha_1' = \alpha_1 - 1/2$, $\alpha_2' = \alpha_2 - 1/2$.
As such it is the absolute value squared of the ground state wave function for the
$BC$-type Calogero-Sutherland Schr\"odinger operator \cite[Eq.(11.55)]{Fo02}
\begin{eqnarray*}
&& - \sum_{j=1}^n {\partial^2 \over \partial \phi_j^2} +
\sum_{j=1}^n \Big ( {\alpha_1'\tau(\alpha_1'\tau - 1) \over \sin^2 \phi_j} +
{\alpha_2'\tau(\alpha_2'\tau - 1) \over \cos^2 \phi_j} \Big ) \\
&& \qquad \qquad + 2 \tau (\tau - 1)
\sum_{1 \le j < k \le n}
\Big ( {1 \over \sin^2(\phi_j - \phi_k)} + {1 \over \sin^2(\phi_j + \phi_k)} \Big ).
\end{eqnarray*}
It is well known that (\ref{1.1}) exhibits many remarkable integrability properties. One is the
gamma function form of the normalization (\ref{1.1a}). Another is that the family of averages
associated with (\ref{1.1})
\begin{equation}\label{2.1}
\int_{[0,x]^p} \int_{[0,1]^{n-p}} \prod_{i=1}^n z_i^{\alpha - 1}
(1 - z_i)^{\beta - 1} |x - z_i|^{\mu } \prod_{1 \le j < k \le n}
|z_j - z_k|^{2 \tau} \, dz_1 \cdots dz_n
\end{equation}
(we have set $\alpha_1 = \alpha$, $\alpha_2 = \beta$)
can be characterized in terms of a certain differential-difference equation \cite{Fo93},
equivalent to an $(n+1) \times (n+1)$ matrix Fuchsian differential equation \cite{FW07p,Mi07}.
With $p=0$ and $\mu = 2 \tau $, (\ref{2.1}) is simply related to the one-point
density implied by (\ref{1.1}), and the differential-difference equation was used in
\cite{Fo93} to compute the polynomial in $x$ specified by (\ref{2.1}) in the case
$\tau \in \mathbb Z_+$ (the polynomial is of degree $2 \tau n$ so for practical
purposes $\tau n$ cannot be too large).
In the case $\tau = 1$ (\ref{2.1}) can be calculated in terms of the solution of the
Painlev\'e VI non-linear differential equation in $\sigma$ form \cite{FW04}. It is also
revealed in \cite{FW04} that the $\sigma$-function associated with (\ref{2.1})
satisfies a third order non-linear difference equation for integer shifts
in the variable $\mu$, while (\ref{2.1}) itself can be computed by a recurrence scheme
based on the discrete Painlev\'e V equation. This can be understood from the
viewpoint of a more general theory relating to isomonodromic deformations of
linear difference systems \cite{Bo04}.
It is the objective of this work to provide an $(n+1) \times (n+1)$ matrix
linear difference system for integer shifts of
the variables $\alpha$, $\beta$ or $\mu$ (the latter restricted to cases that
$(x - z_i) |x - z_i|^{\mu } = \pm |x - z_i|^{\mu + 1}$ for some sign $\pm$, or
alternatively to twice integer shifts) in the integrals
(\ref{2.1}) with $\tau > 0$. This will be used to provide an alternative method to compute the
polynomial in $x$ specified by (\ref{2.1}) in the case $p=0$, $\mu - 1$ even.
A precise formulation of the family of Selberg correlation integrals to be
studied is given in Section 2, along with a statement of our
result for the explicit form of the difference system. In Section 3 we introduce a certain family of
interpolation polynomials, and we state three-term relations satisfied by the corresponding
generalized Selberg integrals. The three-term relations are shown to imply the difference system. We
give their proof in Section 4. In Section 5 it is shown how to use the difference
system to compute (\ref{2.1}) in the polynomial case. Furthermore, we specify applied studies in
random matrix theory to which our computations have relevance, and we make note too of the
wider problem of characterizing correlation functions in statistical mechanical problems in terms
of differential or difference equations.
\section{Definitions and main result}
We begin with some definitions.
We introduce the notation $\Phi(z)$, $z := (z_1,\dots,z_n)$, to denote the generalization of the
integrand (\ref{2.1}),
\begin{equation}\label{3a}
\Phi(z) := \prod_{i=1}^n | x_1 - z_i|^{\alpha_1 - 1} | x_2 - z_i|^{\alpha_2 - 1}
| x_3 - z_i|^{\alpha_3 - 1} \prod_{1 \le j < k \le n} |z_j - z_k|^{2 \tau}.
\end{equation}
The parameters $\alpha_1, \alpha_2, \alpha_3$ are assumed restricted to domains for which it is
possible to specify a region $\Delta \subset \mathbb R^n$ with the property $\Phi(z)$ vanishes on
the boundary $\partial \Delta$ of $\Delta$. For example, if $0 < x < 1$ and
${\rm Re} (\alpha_1)$, ${\rm Re} (\alpha_2)$, ${\rm Re} (\alpha_3) > 0$, we can specify
\begin{equation}\label{4.1}
\Delta = \Delta_p = [0,x]^p \times [0,1]^{n-p} \qquad (p=0,1,\dots,n).
\end{equation}
For rational functions $\phi(z)$ bounded on $\Delta$ we define
\begin{equation}\label{4.0}
\langle \phi \rangle := \int_{\Delta} \phi(z) \Phi(z) \, dz_1 \cdots dz_n,
\end{equation}
and we use this notation in turn to specify $T_{\alpha_j} $ according to
\begin{equation}\label{T1}
T_{\alpha_j} \langle \phi \rangle = \Big \langle \prod_{i=1}^n (z_i - x_j) \phi \Big \rangle.
\end{equation}
Note that in the cases that $(z_i - x_j) | z_i - x_j|^{\alpha_j - 1} = \pm |z_i - x_j|^{\alpha_j}$
for some sign $\pm$, $T_{\alpha_j}$ corresponds to incrementing $\alpha_j$ by 1, and
independent of this requirement, $T_{\alpha_j}^2$ corresponds to incrementing $\alpha_j$ by 2.
Our goal is to identify polynomials $\{\varphi_i(z)\}_{i=0,1,\dots,n+1}$ such that
$\{T_{\alpha_1} \langle \varphi_i \rangle \}_{i=0,1,\dots,n+1}$ is linearly related to
$\{\langle \varphi_i \rangle \}_{i=0,1,\dots,n+1}$.
For this purpose we take inspiration from the work of Aomoto \cite{Ao75,Ao87}. With
$\Phi^*(z)$ denoting $\Phi(z)$ specialized to $x_1 = 0$, $x_2 = 1$, $\alpha_3 = 1$
so that
\begin{equation}\label{5a}
\Phi^*(z) := \prod_{i=1}^n z_i^{\alpha_1 - 1} (1 - z_i)^{\alpha_2 - 1}
\prod_{1 \le j < k \le n} |z_j - z_k|^{2 \tau}
\end{equation}
and with
\begin{equation}\label{5b}
\langle \phi \rangle^* := \int_{[0,1]^n} \phi(z) \Phi^*(z) \, dz_1 \cdots dz_n
\end{equation}
it was proved in \cite{Ao87} that
$$
\Big \langle \prod_{l=1}^{i+1} z_l \Big \rangle^* =
{\alpha_1 + (n-i-1) \tau \over \alpha_1 + \alpha_2 + (2n - i - 2) \tau}
\Big \langle \prod_{l=1}^{i} z_l \Big \rangle^*.
$$
Since
$
S_n(\alpha_1+1,\alpha_2,\tau)=\int z_1\cdots z_{n}\,\Phi^*(z) \, dz_1\cdots dz_n,
$
by iterating this
we immediately have the difference equation for the Selberg integral of (\ref{1.1a}),
\begin{equation}\label{Sa}
S_n(\alpha_1+1,\alpha_2,\tau)=S_n(\alpha_1,\alpha_2,\tau)\prod_{i=1}^n\frac{\alpha_1+(n-i)\tau}{
\alpha_1+\alpha_2+(2n-i-1)\tau},
\end{equation}
which in turn can be used to deduce the Gamma function evaluation given in (\ref{1.1a}).
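As a quick numerical sanity check (a sketch in Python using \texttt{mpmath}, with arbitrarily chosen parameter values), one can confirm that the gamma-product evaluation (\ref{1.1a}) indeed satisfies the difference equation (\ref{Sa}):
\begin{verbatim}
from mpmath import mp, gamma

mp.dps = 30

def selberg(n, a1, a2, tau):
    # gamma-product form of S_n(alpha_1, alpha_2, tau)
    s = mp.mpf(1)
    for j in range(n):
        s *= (gamma(a1 + j*tau) * gamma(a2 + j*tau) * gamma(1 + (j+1)*tau)
              / (gamma(a1 + a2 + (n + j - 1)*tau) * gamma(1 + tau)))
    return s

n, a1, a2, tau = 3, mp.mpf("0.7"), mp.mpf("1.3"), mp.mpf("0.5")
lhs = selberg(n, a1 + 1, a2, tau) / selberg(n, a1, a2, tau)
rhs = mp.mpf(1)
for i in range(1, n + 1):
    rhs *= (a1 + (n - i)*tau) / (a1 + a2 + (2*n - i - 1)*tau)
assert abs(lhs - rhs) < mp.mpf(10) ** (-20)
\end{verbatim}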
Thus we learn that in the case of the Selberg integral $T_{\alpha_1} \langle 1 \rangle^* $ is
linearly related to $ \langle 1 \rangle^*$, and moreover we note that to derive the linear relation
use was made of the auxiliary polynomials $\{\prod_{l=1}^i z_l \}$ --- referred to as interpolation
polynomials for the role they play in the calculation.
We will follow the same general strategy in relation to the integrals (\ref{4.0}).
Thus a family of interpolation polynomials, $\{\varphi_{i,j}(z)\}$, will be introduced so
as to determine the polynomials $\{\varphi_i(z)\}_{i=0,1,\dots,n+1}$ forming
a polynomial basis of the difference system associated with the shift
$\alpha_1 \mapsto \alpha_1 + 1$ in the integrals (\ref{4.0}).
We know from \cite{Ao75,Ao87} that the main tool in determining these polynomials
is the vanishing of a certain class of averages (\ref{4.0}).
\begin{lem}
\label{lem:nabla}
For $k=1,2,\ldots, n$ let
\begin{eqnarray}
(\nabla_{\!k}\,\phi)(z)&:=&\frac{\partial\phi}{\partial z_k}(z)+\frac{\phi(z)}{\Phi(z)}\frac{\partial\Phi}{\partial z_k}(z) \nonumber \\
&=&\frac{\partial\phi}{\partial z_k}(z)
+\Big(
-\frac{\alpha_1-1}{x_1-z_k}-\frac{\alpha_2-1}{x_2-z_k}-\frac{\alpha_3-1}{x_3-z_k}
+\sum_{1\le l\le n\atop l\ne k}\frac{2\tau}{z_k-z_l}\Big)\phi(z).
\label{eq:nabla}
\end{eqnarray}
We have $\langle\nabla_{\!k}\,\phi\rangle=0$.
\end{lem}
{\bf Proof.}
By definition,
$$
\langle\nabla_{\!k}\,\phi\rangle=\int_{\Delta} \Phi(z)\nabla_{\!k}\,\phi(z)dz_1\cdots dz_n
=\int_{\Delta}\frac{\partial}{\partial z_k}\Big(\phi(z)\Phi(z)\Big)dz_1\cdots dz_n=0
$$
if $\phi(z) \Phi(z)$ vanishes on the boundary $\partial \Delta$ of $\Delta$, which we have
assumed. \hfill $\Box$\\
However,
for purposes of presentation, rather than starting with the interpolation polynomials,
it is convenient to immediately
present our findings for the explicit form of the polynomials
$\{\varphi_i(z)\}_{i=0,1,\dots,n+1}$ and the corresponding difference system.
\begin{thm}\label{thm2.2}
Write
\begin{equation}\label{5}
\varphi_i(z):=\underbrace{ (x_2-z_1)\cdots(x_2-z_{n-i})}_{n-i}
\times\underbrace{ (x_3-z_{n-i+1})\cdots(x_3-z_n)}_{i}
\qquad i=0,1,\ldots, n.
\end{equation}
We have
\begin{equation}\label{9}
T_{\alpha_1}(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)
=(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)A
\end{equation}
where $A=LDU$ with
\[L=
\left(\!\!
\begin{array}{cccc}
l_{00} & & & \\
l_{10} & l_{11} & & \\
\cdots & \cdots & \cdots & \\
l_{n0} & l_{n1} & \cdots & l_{nn}
\end{array}
\!\!\right),
\quad
D=
\left(\!\!
\begin{array}{cccc}
d_{0} & & & \\
& d_{1}& & \\
& & \cdots &\\
& & & d_{n}
\end{array}
\!\!\right),
\quad
U=
\left(\!\!
\begin{array}{cccc}
u_{00} & u_{01} & \cdots & u_{0n}\\
& u_{11}& \cdots & u_{1n}\\
& & \cdots & \cdots\\
& & & u_{nn}
\end{array}
\!\!\right)
.
\]
All entries in $L,D,U$ not explicitly shown are zero, while for the non-zero entries we have
\begin{eqnarray}\label{9a}
l_{ij}&=&(-1)^{i-j}{n-j\choose n-i}
\frac{(\alpha_2+j\tau;\tau)_{i-j}}{\big(\alpha_1+\alpha_2+2j\tau;\tau\big)_{i-j}}
\bigg(\frac{x_2-x_1}{x_3-x_1}\bigg)^{\!\! i-j},
\nonumber \\[7pt]
d_{j}&=&\frac{(\alpha_1;\tau)_j\big(\alpha_1+\alpha_2+2j\tau;\tau\big)_{n-j}(x_2-x_1)^j(x_3-x_1)^{n-j}}
{(\alpha_1+\alpha_2+(j-1)\tau;\tau\big)_j\big(\alpha_1+\alpha_2+\alpha_3+(n+j-1)\tau;\tau\big)_{n-j}},
\nonumber \\[7pt]
u_{ij}&=&
(-1)^{j-i}{j\choose i}
\frac{
\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}}{\big(\alpha_1+\alpha_2+2i\tau;\tau\big)_{j-i}},
\end{eqnarray}
where $(x;\tau)_0=1$ and
$(x;\tau)_i:=x(x+\tau)(x+2\tau)\cdots(x+(i-1)\tau)$ for $i=1,2,\ldots$.
\end{thm}
The proof of this result will be given in Section 3.
In Theorem \ref{thm2.2} the matrix $A$ is given in terms of its Gauss $LU$ decomposition.
The symmetry of (\ref{3a}) under the interchange $(x_2,\alpha_2) \mapsto (x_3,\alpha_3)$ allows
for $A$ also to be written in terms of its $UL$ Gauss decomposition. To see this, let
$\bar{L}$, $\bar{D}$ and $\bar{U}$ be the matrices $L,D$ and $U$ after this interchange.
Since
$$
(\langle \varphi_n\rangle,\langle \varphi_{n-1}\rangle,\ldots,\langle \varphi_0\rangle)=(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)J,
$$
where
$$
J=
\left(
\begin{array}{cccc}
& & &1 \\
& &1 & \\
& \cdots & & \\
1 & & &
\end{array}
\right),
$$
(\ref{3a}) can be rewritten
$$
T_{\alpha_1}(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)
=(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)U'D'L',
$$
where
$$
U'=J\bar{L}J,\quad D'=J\bar{D}J,\quad L'=J\bar{U}J.
$$
We see that $U'$, $D'$ and $L'$ are upper triangular, diagonal and lower triangular matrices,
respectively with non-zero entries
\begin{eqnarray*}
u'_{ij}&=&(-1)^{j-i}{j\choose i}
\frac{(\alpha_3+(n-j)\tau;\tau)_{j-i}}{\big(\alpha_1+\alpha_3+2(n-j)\tau;\tau\big)_{j-i}}
\bigg(\frac{x_3-x_1}{x_2-x_1}\bigg)^{\!\! j-i},
\\[7pt]
d'_{j}&=&\frac{(\alpha_1;\tau)_{n-j}\big(\alpha_1+\alpha_3+2(n-j)\tau;\tau\big)_{j}(x_2-x_1)^{j}(x_3-x_1)^{n-j}}
{(\alpha_1+\alpha_3+(n-j-1)\tau;\tau\big)_{n-j}\big(\alpha_1+\alpha_2+\alpha_3+(2n-j-1)\tau;\tau\big)_{j}},
\\[7pt]
l'_{ij}&=&
(-1)^{i-j}{n-j\choose n-i}
\frac{
\big(\alpha_2+j\tau;\tau\big)_{i-j}}{\big(\alpha_1+\alpha_3+2(n-i)\tau;\tau\big)_{i-j}}.
\end{eqnarray*}
\section{Interpolation polynomials}
In this section we present some lemmas, and corollaries of the lemmas, which together imply
Theorem \ref{thm2.2}. We will defer the proof of one of these --- certain key three-term
relations --- until the next section.
We begin with a lemma which enables the difference
system of Theorem \ref{thm2.2} to be written in a more convenient form.
\begin{lem}\label{lk}
Let $U$ be as in Theorem \ref{thm2.2}. We have that $U^{-1} = (u_{ij}^*)_{0\le i,j\le n}$ is
the upper triangular matrix with non-zero entries
$$
u_{ij}^*={j\choose i}
\frac{\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}}{\big(\alpha_1+\alpha_2+(j+i-1)\tau;\tau\big)_{j-i}}.
$$
\end{lem}
\noindent
{\bf Proof.} \quad It suffices to check that for $i < j$, $\sum_{k=i}^j u_{ik}
u_{kj}^* = 0$. Now
\begin{eqnarray*}
\lefteqn{\sum_{k=i}^j u_{ik}u_{kj}^*}\\&=&\sum_{k=i}^j
(-1)^{k-i}{k\choose i}
\frac{
\big(\alpha_3+(n-k)\tau;\tau\big)_{k-i}}{\big(\alpha_1+\alpha_2+2i\tau;\tau\big)_{k-i}}
{j\choose k}
\frac{\big(\alpha_3+(n-j)\tau;\tau\big)_{j-k}}{\big(\alpha_1+\alpha_2+(j+k-1)\tau;\tau\big)_{j-k}}\\
&=&
\frac{\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}}{\big(\alpha_1+\alpha_2+2i\tau;\tau\big)_{2j-2i-1}}
{j\choose i}\sum_{k=i}^j (-1)^{k-i}{j-i\choose k-i}\big(\alpha_1+\alpha_2+2i\tau+(k-i)\tau;\tau\big)_{j-i-1}
\\
&=&
\frac{\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}}{\big(\alpha_1+\alpha_2+2i\tau;\tau\big)_{2j-2i-1}}
{j\choose i}\sum_{k=0}^{j-i} (-1)^{k} {j-i\choose k}\big(\alpha_1+\alpha_2+2i\tau+k\tau;\tau\big)_{j-i-1},
\end{eqnarray*}
and the last summation vanishes as an example of the summation formula for
${}_2 F_1(a,b;c;1)$. \hfill $\square$
A more convenient form of the difference system can now be established.
\begin{lem}
\label{lem:our result2}
$$
T_{\alpha_1}(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)
\left(\!\!
\begin{array}{cccc}
\tilde{u}_{00} & \tilde{u}_{01} & \cdots & \tilde{u}_{0n}\\
& \tilde{u}_{11}& \cdots & \tilde{u}_{1n}\\
& & \cdots & \cdots\\
& & & \tilde{u}_{nn}
\end{array}
\!\!\right)
=(\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle)
\left(\!\!
\begin{array}{cccc}
\tilde{l}_{00} & & & \\
\tilde{l}_{10} & \tilde{l}_{11} & & \\
\cdots & \cdots & \cdots & \\
\tilde{l}_{n0} & \tilde{l}_{n1} & \cdots & \tilde{l}_{nn}
\end{array}
\!\!\right)
$$
where
\begin{eqnarray*}
\tilde{u}_{ij}&=&{j\choose i}\big(\alpha_1+\alpha_2+\alpha_3+(n+j-1)\tau;\tau\big)_{n-j}\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}
\big(\alpha_1+\alpha_2+(j-1)\tau;\tau\big)_i,
\\[10pt]
\tilde{l}_{ij}&=&(-1)^{i-j}{n-j\choose n-i}(\alpha_1;\tau)_j(\alpha_2+j\tau;\tau)_{i-j}\big(\alpha_1+\alpha_2+(i+j)\tau;\tau\big)_{n-i}(x_3-x_1)^{n-i}(x_2-x_1)^i.
\end{eqnarray*}
\end{lem}
This is obtained by acting on the left of both sides of the difference system of
Theorem \ref{thm2.2} by $U^{-1}$, making use of the explicit form of the latter known from
Lemma \ref{lk} on the LHS, and clearing denominators.
\smallskip
Before considering the proof of this rewrite of the difference system, we make note that in a
special case it implies the recurrence (\ref{Sa}) for the Selberg integral. Thus we note from
(\ref{5}) that with $x_2 = x_3$ we have $\varphi_i(z) = \prod_{i=1}^n (x_2 - z_i)$
independent of $i$. We note too that it follows from the definitions of $\tilde{u}_{ij}$ and
$\tilde{l}_{ij}$ in Lemma \ref{lem:our result2} that
\begin{eqnarray*}
&&\sum_{i=0}^j \tilde{u}_{ij} = \tilde{u}_{00} = (\alpha_1 + \alpha_2 + \alpha_3 + (n-1) \tau;\tau)_n \nonumber \\
&& \sum_{i=j}^n \tilde{l}_{ij} \Big |_{x_2 = x_3} = \tilde{l}_{nn} \Big |_{x_2 = x_3} =
(\alpha_1;\tau)_n (x_3 - x_1)^n,
\end{eqnarray*}
valid for $j=0,1,\dots,n$, where use has been made of the summation formula for
${}_2 F_1(a,b;c;1)$. It follows that if we set $x_1 = 0$, $x_2 = x_3 = 1$ and replace
$\alpha_2 + \alpha_3$ with $\alpha_2$ in the difference system of Lemma
\ref{lem:our result2}, then it degenerates to a single equation, which is precisely
(\ref{Sa}).
To derive the difference system of Lemma \ref{lem:our result2}, and thus that of
Theorem \ref{thm2.2}, we introduce the interpolation polynomials $\varphi_{i,j}(z)$ according to
\begin{equation}
\varphi_{i,j}(z):=\underbrace{
(z_1-x_1)(z_2-x_1)\cdots(z_j-x_1)
}_{j}
\varphi_i(z)\quad\mbox{for}\quad i,j=0,1,\ldots,n.
\label{eq:eij}
\end{equation}
Note that setting $j=0$ gives $\varphi_{i,0}(x) = \varphi_i(z)$, while setting $j=n$ gives
$$
\varphi_{i,n}(z) = (z_1 - x_1)(z_2 - x_1) \cdots (z_n - x_1) \varphi_i(z), \qquad i=0,1,\dots,n
$$
and so
\begin{equation}\label{7}
T_{\alpha_1} \langle \varphi_i \rangle = \langle \varphi_{i,n} \rangle, \qquad \langle \varphi_i \rangle =
\langle \varphi_{i,0} \rangle.
\end{equation}
Most importantly, the integrals (\ref{4.0}) with $\phi = \varphi_{i+1,j}$, $\varphi_{i,j+1}$,
$\varphi_{i+1,j+1}$, or with $\phi = \varphi_{i,j+1}$, $\varphi_{i,j}$, $\varphi_{i+1,j}$ satisfy certain three-term relations.
\begin{lem}[Three-term relations]
\label{lem:3term}
For $i,j=0,1,\ldots, n-1$ we have
\begin{eqnarray}
\lefteqn{
\big(\alpha_1+(n-j-1)\tau\big)(x_2-x_1)\langle \varphi_{i+1,j}\rangle
}\nonumber\\
&=&
\big(\alpha_3+(n-i-1)\tau\big)\langle \varphi_{i,j+1}\rangle
+\big(\alpha_1+\alpha_2+(n+i-j-1)\tau\big)\langle \varphi_{i+1,j+1}\rangle.
\label{eq:up1}
\\
[10pt]
\lefteqn{
\big(\alpha_1+\alpha_2+\alpha_3+(2n-j-2)\tau\big)\langle \varphi_{i,j+1}\rangle
}\nonumber\\
&=&\big(\alpha_1+\alpha_2+(n+i-j-1)\tau\big)(x_3-x_1)\langle \varphi_{i,j}\rangle
-(\alpha_2+ i\tau)(x_2-x_1)\langle \varphi_{i+1,j}\rangle.
\label{eq:down1}
\end{eqnarray}
\end{lem}
\noindent
{\bf Proof.} \quad See Appendix A. \hfill $\Box$
\smallskip
These three-term relations in fact imply the difference system of recurrences in
Lemma \ref{lem:our result2}. To see this we first use an induction on $j$ to deduce from
the three-term relations
two particular difference systems.
\begin{cor}
\label{cor:UpDown2}
For $0\le j\le k\le n$ we have
\begin{eqnarray}
\lefteqn{\big(\alpha_1+(k-j)\tau;\tau\big)_j(x_2-x_1)^j\langle \varphi_{k,n-k}\rangle}
\label{eq:up1.5}\\
&=&\sum_{i=0}^j{j\choose i}\big(\alpha_3+(n-k)\tau;\tau\big)_{j-i}\big(\alpha_1+\alpha_2+(2k-j-1)\tau;\tau\big)_i\langle \varphi_{i+k-j,n-k+j}\rangle,
\nonumber\\[10pt]
\lefteqn{\big(\alpha_1+\alpha_2+\alpha_3+(n+j-1)\tau;\tau\big)_{n-k}\langle \varphi_{j,n-j}\rangle}
\label{eq:down1.5}\\
&=&\sum_{i=k}^n(-1)^{i-k}{n-k\choose n-i}\big(\alpha_1+\alpha_2+(i+2j-k)\tau;\tau\big)_{n-i}(\alpha_2+j\tau;\tau)_{i-k}\nonumber\\
&&\quad\times (x_3-x_1)^{n-i}(x_2-x_1)^{i-k}\langle \varphi_{i-k+j,k-j}\rangle.
\nonumber
\end{eqnarray}
In particular, by setting $k=j$ in the above,
for $j=0,1,\ldots, n$ we have
\begin{eqnarray}
\lefteqn{(\alpha_1;\tau)_j(x_2-x_1)^j\langle \varphi_{j,n-j}\rangle}
\label{eq:up2}\\
&=&\sum_{i=0}^j{j\choose i}\big(\alpha_3+(n-j)\tau;\tau\big)_{j-i}\big(\alpha_1+\alpha_2+(j-1)\tau;\tau\big)_i\langle \varphi_{i,n}\rangle,
\nonumber\\[10pt]
\lefteqn{\big(\alpha_1+\alpha_2+\alpha_3+(n+j-1)\tau;\tau\big)_{n-j}\langle \varphi_{j,n-j}\rangle}
\label{eq:down2}\\
&=&\sum_{i=j}^n(-1)^{i-j}{n-j\choose n-i}\big(\alpha_1+\alpha_2+(i+j)\tau;\tau\big)_{n-i}(\alpha_2+j\tau;\tau)_{i-j}\nonumber\\
&&\quad\times (x_3-x_1)^{n-i}(x_2-x_1)^{i-j}\langle \varphi_{i,0}\rangle.
\nonumber
\end{eqnarray}
\end{cor}
\noindent
{\bf Proof.} \quad See Appendix B. \hfill $\Box$
\smallskip
The difference system of Lemma \ref{lem:our result2} can now be derived.
\smallskip
\noindent
{\bf Proof of Lemma \ref{lem:our result2}.} \quad Multiplying (\ref{eq:up2}) and (\ref{eq:down2})
by appropriate factors so as to make their LHS's equal, then equating RHS's gives
$$
\sum_{i=0}^j \tilde{u}_{ij} \langle \varphi_{i,n} \rangle =
\sum_{i=j}^n \tilde{l}_{ij} \langle \varphi_{i,0} \rangle, \qquad j=0,1,\dots,n,
$$
where $\tilde{u}_{ij}$ and $\tilde{l}_{ij}$ are as in Lemma \ref{lem:our result2}.
Making use of (\ref{7}) this reads
$$
\sum_{i=0}^j \tilde{u}_{ij} T_{\alpha_1} \langle \varphi_i \rangle =
\sum_{i=j}^n \tilde{l}_{ij} \langle \varphi_i \rangle
$$
which is precisely the sought difference system.\hfill $\Box$
\section{Implementing the recurrences}
Consider $\Phi$ specialized to the function $\Phi^*(z)$ of (\ref{5a}) but with $\alpha_1 = \alpha$,
$\alpha_2 = \beta$. Let this be a consequence of setting
\begin{equation}\label{xz0}
\alpha_1 = 1, \quad \alpha_2 = \alpha, \quad \alpha_3 = \beta,
\quad x_1 = x, \quad x_2 = 0, \quad x_3 =1
\end{equation}
in (\ref{3a}). With this choice of $\Phi^*$ define $\langle \phi \rangle^{\#} = \langle \phi \rangle^*
/ \langle 1 \rangle^*$, where $\langle \phi \rangle^*$ is specified by (\ref{5b}).
Here our aim is to use the difference system of Theorem \ref{thm2.2} to explicitly compute the
polynomial in $x$ specified by
\begin{equation}\label{xz}
\Big \langle \prod_{j=1}^n (x - z_j)^\mu \Big \rangle^{\#}, \qquad \mu \in \mathbb Z_+.
\end{equation}
Let $(T_{\alpha_1} \langle \varphi_0 \rangle)^{\#}$ refer to
(\ref{T1}) with the substitutions (\ref{xz0}) made afterwards, and normalized
by dividing by $\langle 1 \rangle^*$. Then according to (\ref{T1}), (\ref{5})
\begin{equation}
\label{xzr}
(T_{\alpha_1} \langle \varphi_0 \rangle)^{\#} = {S_n(\alpha+1,\beta,\tau)
\over S_n(\alpha,\beta,\tau) } \,
\Big ( \Big \langle \prod_{j=1}^n (x - z_j) \Big\rangle^{\#} \Big |_{\alpha \mapsto
\alpha + 1} \Big ).
\end{equation}
Thus our task is to compute $(T_{\alpha_1}^\mu \langle \varphi_0 \rangle)^{\#}$, as we have
\begin{equation}
\label{xzr1}
(T_{\alpha_1}^\mu \langle \varphi_0 \rangle)^{\#} = (-1)^{n(\mu - 1)} {S_n(\alpha+1,\beta,\tau)
\over S_n(\alpha,\beta,\tau) } \,
\Big ( \Big \langle \prod_{j=1}^n (x - z_j)^\mu \Big \rangle^{\#} \Big |_{\alpha \mapsto
\alpha + 1} \Big ).
\end{equation}
For $\mu = 1$ the closed form evaluation of (\ref{xz}) is known from the work of
Aomoto \cite{Ao87} as being
proportional to the Jacobi polynomial $P_N^{(\gamma_1,\gamma_2)}(1 - 2x)$ with
$\gamma_1 = \alpha/\tau - 1$, $\gamma_2 = \beta/\tau - 1$. This in turn can be written in terms of a
Gauss hypergeometric function, giving
\begin{equation}\label{xz1}
\Big \langle \prod_{j=1}^n (x - z_j) \Big \rangle^{\#} =
\tilde{c} \, {}_2 F_1(-n,(\alpha + \beta)/\tau + n - 1, \alpha/\tau;x),
\end{equation}
where
\begin{equation}
\tilde{c} = {(-1)^n ( \alpha;\tau)_n \over (\alpha + \beta + (n - 1)\tau ;\tau)_n }
\end{equation}
(the factor of $\tilde{c}$ is required to make the coefficient of $x^n$ on the RHS unity).
We can use knowledge of this to calculate $\langle \varphi_k \rangle^{\#}$ ($k=0,\dots,n$). Once these
have been determined we can use (\ref{9}) with $A =: A_{\alpha_1}$
to recursively compute $(T_{\alpha_1}^\mu \langle \varphi_0 \rangle)^{\#}$
according to
\begin{equation}\label{xzp}
\Big (T_{\alpha_1}^\mu (\langle \varphi_0\rangle,\langle \varphi_1\rangle,\ldots,\langle \varphi_n\rangle) \Big )^{\#}
=(\langle \varphi_0\rangle^{\#},\langle \varphi_1\rangle^{\#},\ldots,\langle \varphi_n\rangle^{\#})A_1 A_2 \cdots A_\mu.
\end{equation}
Upon making use of (\ref{xzr1}) this determines (\ref{xz}). Explicitly, with $(\vec{v})_k$ denoting the
$k$-th component of the row vector $\vec{v}$, we have
\begin{equation}\label{xzp1}
(-1)^{n(\mu-1)} {S_n(\alpha+1,\beta,\tau)
\over S_n(\alpha,\beta,\tau) } \,
\Big ( \Big \langle \prod_{j=1}^n (x - z_j)^\mu \Big \rangle^{\#} \Big |_{\alpha \mapsto
\alpha + 1} \Big ) = \Big ( (\langle \varphi_0\rangle^{\#},\langle \varphi_1\rangle^{\#},\ldots,\langle \varphi_n \rangle^{\#}) A_1 A_2 \cdots A_\mu \Big )_1.
\end{equation}
\begin{lem}\label{LE}
Let $\langle \phi \rangle^{\#}$ be specified as below (\ref{xz0}). We have
\begin{equation}\label{eek}
\langle \varphi_k \rangle^{\#} = (-1)^n {(\alpha;\tau)_n \over (\alpha + \beta + (n-1) \tau;\tau)_n }
{(-\beta - (n-1) \tau ; \tau)_k \over (\alpha;\tau)_k}.
\end{equation}
\end{lem}
\noindent
{\bf Proof.} \quad According to (\ref{T1}), (\ref{5}) and (\ref{xz1})
\begin{equation}\label{xz2}
(T_{\alpha_1} \langle \varphi_0 \rangle)^{\#} = {S_n(\alpha+1,\beta,\tau)
\over S_n(\alpha,\beta,\tau) }
\tilde{c} \, {}_2 F_1\Big (-n,1-\beta/\tau - n, \alpha/\tau;
- {x \over 1 - x} \Big ) \Big |_{\alpha \mapsto \alpha + 1},
\end{equation}
where use has been made of a Kummer relation for ${}_2 F_1$. Note from (\ref{Sa}) that
$$
{S_n(\alpha+1,\beta,\tau)
\over S_n(\alpha,\beta,\tau) } = {(\alpha;\tau)_n \over (\alpha + \beta + (n-1) \tau;\tau)_n}.
$$
On the other hand,
it follows from (\ref{9}) that
\begin{equation}\label{xz3}
(T_{\alpha_1} \langle \varphi_0 \rangle)^{\#} = d_0 \sum_{k=0}^n \langle \varphi_k \rangle^{\#} l_{k0}
\end{equation}
where, after substituting (\ref{xz0}) in (\ref{9a}),
\begin{equation}
d_0 l_{k0} = {(1 + \alpha; \tau)_n (\alpha; \tau)_k \over (1 + \alpha + \beta + (n-1)\tau; \tau)_n
(1 + \alpha;\tau)_k } \Big ( {n \atop n - k} \Big )
\Big ( {x \over 1 - x} \Big )^k.
\end{equation}
Equating (\ref{xz2}) and (\ref{xz3}) we see that the factor of $(1-x)^n$ cancels, and we can
equate coefficients of $(-x/(1-x))^k$ to deduce (\ref{eek}). \hfill $\square$
\medskip
It is also possible to derive (\ref{eek}) independently of knowledge of (\ref{xz1}), using instead
an integration formula in Jack polynomials theory, due to Warnaar \cite{Wa05} (see also \cite{FS09}).
With $\lambda = (\lambda_1,\dots,\lambda_n)$ a partition of non-negative integers, and
$P_\lambda^{(\alpha)}(t) = P_\lambda^{(\alpha)}(t_1,\dots,t_n)$ denoting the symmetric Jack polynomial,
the integration formula reads
\begin{eqnarray}\label{W}
&&\int_{[0,\infty)^n} P_\lambda^{(1/\tau)}(t) \prod_{i=1}^n t_i^{x-1} (1 + t_i)^{-x-y-2(n-1)\tau}
\prod_{1 \le j < k \le n} | t_k - t_j|^{2 \tau} \, dt_1 \cdots dt_n \nonumber \\
&& \qquad = P_\lambda^{(1/\tau)}(v) \Big |_{v_1 = \cdots = v_n = -1}
{[x + (n-1) \tau]_\lambda^{(1/\tau)} \over [-y+1]_\lambda^{(1/\tau)} }
S_n(x,y,\tau),
\end{eqnarray}
where
\begin{equation}\label{Wu}
[u]_\kappa^{(\alpha)} = \prod_{j=1}^n {\Gamma(u - (j-1)/\alpha + \kappa_j) \over
\Gamma(u - (j-1)/\alpha) }.
\end{equation}
Our interest is in a transformed version of (\ref{W}).
\begin{cor}
Let $\langle \phi \rangle^{\#}$ be as specified below (\ref{xz0}) and let
$ P_\lambda^{(1/\tau)} ( {1 - z \over z} ) =
P_\lambda^{(1/\tau)} ( {1 - z_1 \over z_1},\dots,{1 - z_n \over z_n} )$. We have
\begin{equation}\label{W1}
\Big \langle P_\lambda^{(1/\tau)}\Big ( {1 - z \over z} \Big ) \Big \rangle^{\#} =
P_\lambda^{(1/\tau)}(v) \Big |_{v_1 = \cdots = v_n = -1}
{[\beta + (n-1) \tau]_\lambda^{(1/\tau)} \over [-\alpha+1]_\lambda^{(1/\tau)} }.
\end{equation}
\end{cor}
\noindent
{\bf Proof.} \quad This follows by making the change of variables $t_i = (1 - u_i)/u_i$ in
(\ref{W}), then writing $x=\beta$, $y=\alpha$. \hfill $\square$
\medskip
For $\lambda = 1^k$ (i.e.~1 repeated $k$ times), $k \le n$, we have that
$P_\lambda^{(1/\tau)}(z) = e_k(z)$ where $e_k(z) = e_k(z_1,\dots,z_n)$ denotes
the $k$-th elementary symmetric function. Noting from (\ref{Wu}) that with $\kappa = 1^k$,
$$
[u]_\kappa^{(1/\tau)} = (-1)^k (-u;\tau)_k
$$
we can thus specialize (\ref{W1}) and so reclaim (\ref{eek}).
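Explicitly, since $e_k(v) \big |_{v_1 = \cdots = v_n = -1} = (-1)^k \Big ( {n \atop k} \Big )$, the
specialisation of (\ref{W1}) reads
$$
\Big \langle e_k \Big ( {1 - z \over z} \Big ) \Big \rangle^{\#} =
(-1)^k \Big ( {n \atop k} \Big ) \,
{(-1)^k (-\beta - (n-1) \tau;\tau)_k \over (-1)^k (\alpha - 1;\tau)_k},
$$
and cancelling the common signs in the ratio gives the evaluation recorded in the following corollary.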
\begin{cor}
We have
\begin{equation}\label{W2}
\Big \langle e_k \Big ( {1 - z \over z} \Big ) \Big \rangle^{\#} =
\Big ( {n \atop k} \Big ) (-1)^k {(-\beta - (n-1) \tau;\tau)_k \over (\alpha - 1;\tau)_k}.
\end{equation}
This is equivalent to (\ref{eek}).
\end{cor}
\noindent
{\bf Proof.} \quad It remains to explain the final assertion. This is in fact a consequence of the
very definition of $\varphi_k$ as given in (\ref{5}), which gives
\begin{eqnarray}\label{W3}
\langle \varphi_k \rangle^{\#} & = & (-1)^{n-k} \Big \langle \prod_{j=1}^{n-k} z_j \prod_{l=n-k+1}^n (1 - z_l)
\Big \rangle^{\#} \nonumber \\
& = & (-1)^{n-k} \Big ( {n \atop k} \Big )^{-1} {S_{n}(\alpha+1,\beta,\tau) \over
S_n(\alpha,\beta,\tau) }
\Big ( \Big \langle e_k \Big ( {1 - z \over z} \Big ) \Big \rangle^{\#} \Big |_{\alpha \mapsto \alpha + 1}
\Big ).
\end{eqnarray}
Substituting (\ref{W2}) in (\ref{W3}) and recalling (\ref{Sa}) reclaims (\ref{eek}).
\hfill $\square$
\medskip
The fact that (\ref{eek}) has been derived independently of (\ref{xz1}) means, from the argument of
the proof of Lemma \ref{LE}, that the difference system (\ref{9}) can be used to prove (\ref{xz1}).
With (\ref{eek}) substituted in (\ref{xzp1}), and the entries of $A$, (\ref{9a}),
specialized according to (\ref{xz0}), we know all terms in (\ref{xzp1}) except the
average---a polynomial in $x$ of degree $n \mu$---which can therefore by computed by matrix
multiplication. For example, in the case $\alpha = \beta = 2$, $n=5$, $\tau = 5$, this give for
(\ref{xz})
\begin{eqnarray}\label{ff}
&& {23 \over 5437500} - {23 x \over 65250} + {3197 x^2 \over 261000} - {8993 x^3 \over 56550} +
{2117449 x^4 \over 2035800} - {793093 x^5 \over 203580} \nonumber \\
&& \qquad + {601937 x^6 \over 67860} -
{4384 x^7 \over 351} + {7457 x^8 \over 702} - 5 x^9 + x^{10}.
\end{eqnarray}
In general, in the case $\alpha = \beta$, (\ref{xz})
must be unchanged (up to a possible sign) under the mapping $x \mapsto 1-x$.
One can check that (\ref{ff}) has this invariance.
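As a quick consistency check, this symmetry can be confirmed symbolically; the following Python
snippet (an illustrative sketch using \texttt{sympy}) verifies that the polynomial (\ref{ff}) is
exactly invariant, here with sign $+1$, under $x \mapsto 1-x$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
# the polynomial (ff): alpha = beta = 2, n = 5, tau = 5, so degree n*mu = 10
f = (sp.Rational(23, 5437500) - sp.Rational(23, 65250)*x
     + sp.Rational(3197, 261000)*x**2 - sp.Rational(8993, 56550)*x**3
     + sp.Rational(2117449, 2035800)*x**4 - sp.Rational(793093, 203580)*x**5
     + sp.Rational(601937, 67860)*x**6 - sp.Rational(4384, 351)*x**7
     + sp.Rational(7457, 702)*x**8 - 5*x**9 + x**10)

assert sp.expand(f.subs(x, 1 - x) - f) == 0   # invariance under x -> 1-x
\end{verbatim}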
We have presented the difference system (\ref{9}) both for its theoretical interest and for its
utility in computing the random matrix average (\ref{xz}) in the case $\mu \in \mathbb Z_+$.
Regarding the latter,
the case $\tau = 1$ of (\ref{xz}), after the
change of variables $z_j = \cos^2 \theta_j/2$, $x=\cos^2 \phi/2$, and for certain values
of $\alpha,\beta$, corresponds to the $\mu$-th moment of the characteristic polynomial for the classical
groups ${\rm Sp}(2n)$, $O^\pm(2n)$ and $O^\pm(2n+1)$. As such it has appeared in various applied
studies in random matrix theory \cite{FK04,KM04,KO08}.
It was remarked in the Introduction that the differential-recurrence scheme of \cite{Fo93}
can also be used to compute the random matrix average (\ref{xz}) in the case $\mu \in \mathbb Z_+$.
This differential-recurrence is equivalent to
a $(n+1) \times (n+1)$ matrix Fuchsian differential equation \cite{FW07p,Mi07}. The fact that there
is both a differential and difference system for the Selberg correlation integrals is closely related
to there being dynamical difference equations associated with solutions of the
KZ equation \cite{MV02}; indeed, the Selberg integral
recurrence (\ref{Sa}) was reclaimed as an example of the latter theory.
In recent years higher order
scalar differential equations have been shown to characterize certain correlation functions
in the two-dimensional Ising model \cite{ZBHM04,ZBHM05,BBGHJMZ09} and certain generating
functions in enumerative combinatorics \cite{GJ06a,JR08}. Finding characterizations of
similar problems, outside the class of averages (\ref{xz}), in terms of higher order
scalar difference equations or matrix difference equations remains an open problem.
Another consequence of our results is in providing a fast method of computation of a certain
class of generalized hypergeometric functions based on Jack polynomials $P_\kappa^{(\alpha)}(z)$.
With $C_\kappa^{(\alpha)}(z) := (\alpha^{|\kappa|} |\kappa|! /d_\kappa') P_\kappa^{(\alpha)}(z)$
denoting a renormalized Jack polynomial (for the definition of the
quantity $d_\kappa'$ see \cite[Eq.~(12.37)]{Fo02}), and $[u]_\kappa^{(\alpha)}$
defined by (\ref{Wu}), the generalized hypergeometric functions of interest are defined by
the infinite series
\begin{equation}\label{2F1}
{}_2 F_1^{(\alpha)}(a_1,a_2;b_1;z) :=
\sum_{\kappa} {1 \over |\kappa|!}
{[a_1]_\kappa^{(\alpha)} [a_2]_\kappa^{(\alpha)} \over [b_1]_\kappa^{(\alpha)}}
C_\kappa^{(\alpha)}(z).
\end{equation}
In the case $n=1$ this reduces to the usual Gauss hypergeometric function. In general, the
computation of this function from the series is an inherently difficult task due to the need
to sum over all partitions $\kappa$ \cite{KE04}. In the special case that $a_1$ is a negative integer
the series terminates and it is equal to a multivariable polynomial. If furthermore
$z_1=\cdots=z_n = x$, (\ref{2F1}) reduces to a polynomial of degree $n |a_1|$ and
it relates to the average (\ref{xz}) according to \cite[Eq.(13.12)]{Fo02}
\begin{equation}
\Big \langle \prod_{j=1}^n (x - z_j)^\mu \Big \rangle^{\#}
= x^{n\mu} \,
{}_2 F_1^{(\alpha)}(-\mu,\alpha+ (n-1)\tau;\alpha+\beta+2(n-1)\tau;z_1,\dots,z_n)
\Big |_{z_1=\cdots=z_n = 1/x}.
\end{equation}
Thus the matrix formula (\ref{xzp1}) can be used to compute this class of ${}_2 F_1^{(\alpha)}$
using O$(n^3)$ operations. In contrast, computation from (\ref{2F1}) requires at least
O($e^{\pi \sqrt{2n/3}}$) operations, due to the sum over partitions.
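To orient the reader, a minimal Python sketch of this evaluation scheme follows. It is an
illustration only: the routine \texttt{entries\_A} is a placeholder for the entries (\ref{9a}) of
$A$ under the substitutions (\ref{xz0}) (given earlier in the text and not reproduced here), the
argument \texttt{step} stands for whatever parameter shifts distinguish $A_1,\dots,A_\mu$, and the
convention $(a;\tau)_k = \prod_{j=0}^{k-1}(a + j\tau)$ is assumed for the shifted Pochhammer symbol.
\begin{verbatim}
def poch(a, tau, k):
    """Shifted Pochhammer symbol (a; tau)_k = a (a+tau) ... (a+(k-1)tau),
    the convention assumed in this sketch."""
    out = 1.0
    for j in range(k):
        out *= a + j*tau
    return out

def phi_hash(alpha, beta, tau, n):
    """The row vector (<phi_0>^#, ..., <phi_n>^#) of Lemma (eek)."""
    pre = (-1)**n * poch(alpha, tau, n) / poch(alpha + beta + (n-1)*tau, tau, n)
    return [pre * poch(-beta - (n-1)*tau, tau, k) / poch(alpha, tau, k)
            for k in range(n + 1)]

def entries_A(alpha, beta, tau, n, x, step):
    """Placeholder for the (n+1) x (n+1) matrix of (9a) specialised by (xz0);
    'step' labels the parameter shift appropriate to A_{step+1}."""
    raise NotImplementedError

def rhs_of_xzp1(mu, alpha, beta, tau, n, x):
    """Right-hand side of (xzp1): apply A_1, ..., A_mu to the initial vector
    and return the first component, (vec)_1."""
    vec = phi_hash(alpha, beta, tau, n)
    for step in range(mu):
        A = entries_A(alpha, beta, tau, n, x, step)
        vec = [sum(vec[i] * A[i][k] for i in range(n + 1))
               for k in range(n + 1)]
    return vec[0]   # dividing by the prefactor of (xzp1) recovers (xz)
\end{verbatim}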
\section*{Acknowledgements}
PJF thanks Eric Rains for suggesting this problem. This work was supported by the Australian
Research Council and JSPS Grant-in-Aid for Scientific Research (C) 21540225.
It is well known that at sufficiently low temperature, an electron with spin up is
paired with its partner with spin down across the Fermi surface to
form a Cooper pair with total momentum zero and becomes superconductor and exhibits superfluid property.
This phenomenon is well described by Bardeen--Cooper--Schrieffer (BCS) theory.
The most favorable condition for pairing is when spin up and spin down electrons have the same density.
Now imagine that one applies a magnetic field to split the spin up and spin down electrons by
the Zeeman effect, and looks at the response of a superconductor to the
Zeeman splitting.
For $ s $-wave superconductor, if the Zeeman splitting
$ \delta \mu = \mu_{\uparrow}-\mu_{\downarrow} $ is very small compared to the gap,
then the superconducting state is stable, if it is much larger than the
gap, the superconducting state will turn into a normal state.
When $ \delta \mu $ is comparable to the energy gap $ \Delta_{0} $ at zero magnetic
field, the physics may become non-trivial. It was argued by Fulde and Ferrell \cite{ff},
Larkin and Ovchinnikov \cite{lo} about 40 years ago that an in-homogeneous
superconductor with pairing order parameter oscillating in space
may be the ground state at a narrow window of Zeeman splitting
$ \delta \mu_{1} \sim \Delta_{0}/\sqrt{2} < \delta \mu < \delta \mu_{2} \sim 0.754
\Delta_{0} $ \cite{ms,loff} ( Fig.1 ). This in-homogeneous state is called LOFF state where
the Cooper pairs carry a finite momentum. In FF state, $ \Delta(x) =
\Delta_{0} e^{i \vec{q} \cdot \vec{x}} $ where $ q \sim k_{F \uparrow} -k_{ F\downarrow} $, the
Cooper pairs carry finite superfluid momentum,
while in the LO state, $ \Delta(x) = \Delta_{0} \cos \vec{q} \cdot \vec{x}
$, the Cooper pairs carry two opposite momenta.
The LOFF state breaks both $ U(1) $ gauge symmetry and
translational symmetry. Unfortunately, so far, the
LOFF state has never been observed in conventional
superconductors, because in these systems, the Zeeman effect is overwhelmed by orbital effects.
However, this LOFF state has attracted renewed interests in the context of
organic, heavy fermion and high $ T_{c} $ cuprates \cite{exp,heavy}, because
these new classes of superconductors may provide favorable conditions
to realize the LOFF state.
Recently, experiments \cite{martin} on penetration depth measurement on $\mathrm{CeCoIn}_{5}$ show
that at temperatures below 250 mK, for magnetic field applied parallel to the $ab$ plane, two phase
transitions were detected, one of which may be identified as a phase transition from the LOFF state to the normal
state. Also the measurement of thermal conductivity \cite{capan} on $\mathrm{CeCoIn}_{5}$
shows anisotropy in real space, which could be interpreted as domain wall formation, namely, a stripe phase,
possibly with higher harmonics. LOFF states also played important roles in high density quark matter,
astrophysics \cite{loff} and superconductor-ferromagnet heterostructures \cite{jun}.
With the development of trapped cold atoms system, it was proposed that due to absence of
orbital effects, ultracold neutral fermion gases with unequal populations may realize the LOFF state
in a tiny window on the BCS side of Feshbach resonance \cite{fesh}.
Recently, it was argued in \cite{yip} that the LO state, in fact,
may be stable in an appreciable regime on the BCS side of the
Feshbach resonance.
\begin{figure}
\includegraphics[width=2.5in]{loff.eps}
\caption{The phase diagram of LOFF state. $ \delta \mu $ is the
Zeeman splitting, $ T $ is the temperature, $ \Delta_{0} $ is the
energy gap at the balanced case $ \delta \mu=0 $. }
\end{figure}
Before we discuss the phase diagram Fig.1, we review the basic facts of the classical Lifshitz point,
which is closely related to the normal state to LOFF state phase transition.
This connection is not new, but it has not been stressed in the literature.
The free energy near a classical $ ( d, d_{\perp} ) $ Lifshitz
point is \cite{tom}:
\begin{eqnarray}
H&=& \frac{1}{2} \int d^{d} x [ t m^{2} + K_{\parallel} ( \nabla_{\parallel} m )^{2} +
K_{\perp} ( \nabla_{\perp} m )^{2} \nonumber \\
&+& L ( \nabla^{2}_{\perp} m )^{2} ]+ u \int d^{d} x m^{4} + \cdots
\label{first}
\end{eqnarray}
where $ K_{\parallel} > 0 $ and $ m( x ) $ is an $ n \geq 2 $ component order parameter;
the dimension $ d $ is divided into $ d_{\perp} $ perpendicular dimensions and
$ d_{\parallel} $ parallel dimensions. Its phase diagram \cite{tom} is shown in Fig.2.
\begin{figure}
\includegraphics[width=3.5in]{clp.eps}
\caption{\footnotesize (a) Phase diagram of Classical Lifshitz point
(CLP). P is the Paramagnetic phase, F is the ferromagnetic phase, M
is the modulated phase. The LP point is at $ (t, K_{\perp} )=(0,0)
$. The dashed line is the P-M transition we are studying. (b)
Momentum shell of width $ \Lambda $ around 2d roton surface. }
\end{figure}
Let us review the phase transition from the $ P $ phase to the $ M $ phase along the dashed line shown in
Fig.2. In the P phase, along a path close to the P-M transition boundary,
$ t >0, K_{\parallel}>0, K_{\perp} < 0 $; for simplicity, we can set $
k_{\parallel}=0 $, so
the propagator $ D(k_{\parallel}=0, k_{\perp}) $ can be
written as $ D(k_{\perp})= t+K_{\perp} k^{2}_{\perp} + L k^{4}_{\perp} = \Delta
+ L ( k^{2}_{\perp}- k^{2}_{r} )^{2} $
where $ \Delta= t- \frac{ K^{2}_{\perp} }{4 L }, k^{2}_{r}=
\frac{|K_{\perp}|}{2L} $. It is easy to see that
the minimum is located on the ``roton'' surface $ k^{2}_{\perp} = k^{2}_{r} $ ( Fig. 2b), in
sharp contrast to the $ K_{\perp} > 0 $
case, where the minimum is at $ k_{\perp}=0 $. This class of problems with minima located at $ k_{r} > 0 $
was first investigated in \cite{bs} and has wide applications in
the context of liquid crystals \cite{tom}. When $ \Delta
> 0 $, the system is in the paramagnetic ( P ) phase with $ < m > =0 $, while when $
\Delta < 0 $, it is in a modulated ( M ) phase with the mean
field structure $ < m( x )> = \sum^{P}_{i=1} \Delta_{i}
e^{i \vec{q}_{i} \cdot \vec{x} }, \ |\vec{q}_{i}| = k_{r} $. The $ P-M $
transition happens at $ \Delta=0 $, namely, $ t= \frac{ K^{2}_{\perp} }{4 L
} $ as shown in Fig. 2. The $ M $ phase breaks
both the internal $ O(n) $ rotational symmetry and
the translational symmetry, therefore supports two kinds of Goldstone modes:
phase mode due to the $ O(n) $ symmetry breaking and the lattice phonon mode
due to the translational symmetry breaking. At the mean field level, the P-M
transition is 2nd order. Under fluctuations,
for $ d_{\perp} =1 $, where the roton surface in Fig.2b in fact turns into two isolated
points, the transition, which describes the nematic to smectic-A transition in liquid crystals,
remains 2nd order. However, for $ d_{\perp} \geq 2 $, the transition becomes a fluctuation-driven
1st order transition, as shown by the renormalization group analysis in \cite{qgl}.
Indeed, to some extent, the LOFF
phase diagram Fig.1 looks similar to Fig.2 if we identify the Zeeman splitting
$\delta \mu $ with the pressure $ -K_{\perp} $, the normal phase with the
paramagnetic phase, the superconducting phase with the
ferromagnetic phase and the LOFF state with the modulated phase.
Of course, the original pairing problem of fermions with unequal
populations is a fermionic problem. However, just as for the usual
normal state to BCS superconductor transition, one can integrate
out the fermions at any finite temperature, leading to
the following Ginzburg--Landau free energy describing the normal state to the LOFF state
transition \cite{Hou1,Hou2,loff,kun}:
\begin{eqnarray}
& f &\propto|(-\nabla^2-q^2_{0} )\psi|^2+a|\psi|^2+b|\psi|^4+c|\psi|^2|\nabla\psi|^2 \nonumber \\
&+&d[(\psi^*)^2(\nabla\psi)^2+\psi^2(\nabla\psi^*)^2]+e|\psi|^6,
\label{f}
\end{eqnarray}
where $ q_{0} \sim k_{F \uparrow}-k_{F \downarrow} $.
Indeed, this action is very similar to the Lifshitz action
Eqn.\ref{first} with $ K_{\perp} < 0 $, so similar procedures
following Eqn.\ref{first} can be used.
Substituting $ \psi=\sum_{G}\psi_{G}e^{i G \cdot x} $, where the $ G $ are
the shortest reciprocal lattice vectors, into the above
equation and combining terms leads to the GL free energy
in momentum space:
\begin{eqnarray}
f &= & \sum_{G}\frac{1}{2}r_{G}|\psi_{G}|^{2}+u\sum_{G}\psi_{G_{1}}\psi_{G_{2}}\psi_{G_{3}}\psi_{G_{4}}\delta_{G_{1}+G_{2}+G_{3}+G_{4}}\nonumber \\
&+&v\sum_{G}\psi_{G_{1}}\psi_{G_{2}}\psi_{G_{3}}\psi_{G_{4}}\psi_{G_{5}}\psi_{G_{6}}\delta_{G_{1}+
G_{2}+G_{3}+G_{4}+G_{5}+G_{6}}
\label{mom}
\end{eqnarray}
where $ r= T-T_{c} $ and $ u,v $ are functions of the coefficients $ b,c,d,e $ in Eqn.\ref{f} and $\vec{G} $.
If $ r > 0 $, the system is in the normal state with $ < \psi(\vec{G}) > =0 $,
while when $ r < 0 $, it is in a modulated ( M ) phase with the mean
field structure $ < \psi ( x )> = \sum^{P}_{i=1} \Delta_{i}
e^{i \vec{q}_{i} \cdot \vec{x} }, \ q_{i}= q_{0} $. This $ M $
phase is the LOFF state.
The LOFF state breaks both the $ U(1) $
symmetry and the translational symmetry, and therefore it supports two kinds of Goldstone
modes: (1) the Goldstone mode due to the $ U(1) $ symmetry
breaking, which is ``eaten'' by the gauge field through the Higgs
mechanism in the electron pairing case in condensed matter systems, but which survives in
the neutral atom pairing case in ultracold atomic experiments; (2) the lattice phonon modes due to the translational symmetry
breaking, which survive the gauge field fluctuations. In this
paper, we approach the LOFF state from the normal state and try
to determine the lowest energy lattice structure of the LOFF
state. $ P=1 $ corresponds to the FF state, $ P=2 $ corresponds
to the LO state. It is known that the FF state, carrying
finite superfluid momentum, is always unstable. The LO state has
nodes where the excess fermions reside. However, it is still not known whether the LO state is the most
favorable lattice structure. In this paper, we will study what is
the lowest energy lattice structure by
considering the seven most common lattice structures, namely the stripe, square, triangular,
Simple Cubic (SC), Face Centered Cubic (FCC),
Body Centered Cubic (BCC) and Quasi-crystal (QC) lattices listed in Table I. The stripe case
corresponds to the original LO state.
The rest of the paper is organized as follows.
In section II, we compute the coefficients of the free energy of the LOFF states with different lattice structures.
In section III, by comparing the free energy and the transition
temperature of all the seven lattice structures of LOFF state,
we find the lowest energy lattice structure remains the LO state.
In appendix A, we discuss in detail how to obtain the geometrical factors in the
fourth and sixth order terms which are used in evaluating the
free energy of the seven lattices. As a byproduct, we correct some over-counting mistakes
in the description of the liquid to solid transition in the textbook \cite{tom}. In appendix B, we revisit the
liquid to solid transition by considering both the cubic and quartic
terms and show that the BCC lattice remains the favorable
lattice in the presence of the cubic term in a certain region.
\section{ Effective free energies of the LOFF state with different lattice structures }
We only look at the subset $ L_{G} $ consisting of all the shortest reciprocal lattice vectors, $ |G|=q_{0}
$. In the ground state, $ \psi_{G} $ has to be real up to a global
phase. From the point group symmetry of the lattices, $ \psi_{G} $
is a constant when $ G $ belongs to $ L_{G} $. Following
\cite{tom}, we have scaled $ \psi_{G} \rightarrow \psi_{G} m^{-1/2} $
so that the quadratic term is the same for all the lattices. Then Eqn.\ref{mom}
is simplified to the effective free energy in different lattices:
\begin{equation}
f=\frac{1}{2}r\psi_{G}^{2}+u_{\alpha}\psi_{G}^{4}+v_{\alpha}\psi_{G}^{6}
\label{eff}
\end{equation}
where $ \alpha $ stands for different lattices. In the
following, we will calculate the fourth order term $ u_{\alpha}$ and the sixth order term $ v_{\alpha} $ for
different lattices respectively.
{\sl 1. The fourth order term $ u_{\alpha} $. } For the stripe phase,
square lattice, triangular lattice, SC and FCC, as shown in the
appendix A, there are only contributions from paired vectors to the
quartic term, $u^{p}_{\alpha}=3(1-\frac{1}{m})u$, where $ m $ is the number
of vectors in the set $L_{G}$. Therefore
$u_{\Vert}=\frac{3}{2}u, u_{\Box}=\frac{9}{4}u,
u_{\triangle}=\frac{5}{2}u, u_{sc}=\frac{5}{2}u,
u_{fcc}=\frac{21}{8}u $. The sets $ L_{G} $ for the
different lattices are shown in Fig.3 for the one and two dimensional
lattices and in Fig.4 for the three dimensional lattices.
\begin{figure}
\includegraphics[width=3.5in]{2dim.eps}
\caption{\footnotesize The set of shortest reciprocal lattice vectors $ L_{G} $ for one and two dimensional
lattices (a)
Stripe lattice (b) Square lattice (c) Triangular lattice}
\end{figure}
\begin{figure}
\includegraphics[width=3.5in]{3dim.eps}
\caption{\footnotesize The set of shortest reciprocal lattice vectors $ L_{G} $ for three dimensional
lattices (a) Simple Cubic (b) BCC lattice (c) FCC lattice (d) Quasicrystal }
\end{figure}
For the BCC lattice, there is an additional
vertex contribution $ u_{v}= u $ coming from the 4 vectors from any of the six vertices.
So in all, $u_{bcc}=u^{p}_{\alpha}+u_{v}= \frac{15}{4}u $.
For the quasi-crystal, we have an additional contribution from the non-planar diamonds
\cite{tom}, $ u_{npd}=\frac{4}{5}u $, so in all,
$ u_{qc}= u^{p}_{\alpha}+ u_{npd}= \frac{37}{10}u $.
{\sl 2. The sixth order term $ v_{\alpha} $. }
For the stripe
phase, square lattice, SC and FCC, there are only contributions from paired
vectors, $v^{p}_{\alpha}=5(3m^{2}-9m+8)v/m^{2}$. So we get
$v_{\Vert}=\frac{5}{2}v, v_{\Box}=\frac{25}{4}v,
v_{sc}=\frac{155}{18}v, v_{fcc}=10v $.
For the triangular lattice, there is an additional contribution
$ v_{tri}=\frac{5}{6}v$ coming from the
closed triangle diagram ( Fig.5c ). So we get $
v_{\triangle}=v^{p}_{\alpha}+v_{tri} = \frac{85}{9} v$.
For the BCC lattice, in addition to the paired vector contribution
$ v^{p}_{\alpha}= \frac{415}{36}v$, there are also contributions coming from the
three configurations listed in Fig.5, which total $ \frac{155}{12} v $.
In all, $ v_{bcc}=\frac{220}{9} v $.
For the quasicrystal, in addition to the paired vector contribution
$ v^{p}_{\alpha}=\frac{1219}{90}v $, there are also contributions coming from the
four configurations listed in Fig.6, which total $ \frac{211}{15}v$.
In all, $ v_{qc}=\frac{497}{18}v $.
\begin{figure} \includegraphics[width=3.5in]{com.eps}
\caption{\footnotesize non-paired contributions to sixth order term in BCC lattice
(a) a pair of opposite vectors plus four vectors coming out of one vertex, $10v$;
(b) a non-planar triangle diagram with the common edge chosen twice, $\frac{5}{2}v$;
(c) a triangle diagram, each vector in the triangle chosen twice, $\frac{5}{12}v $;
for the triangular lattice in Fig.3c, this term is $ \frac{5}{6}v $ }
\end{figure}
\begin{figure}
\includegraphics[width=3.5in]{quasi.eps}
\caption{\footnotesize non-paired contributions to sixth order term in Quasicrystal lattice
(a) a pair of opposite vectors plus a non-planar diamond structure, $\frac{52}{5}v$.
(b) a non-planar triangle diagram with the common edge chosen twice, $\frac{2}{5}v$.
(c) a triangle diagram, each vector in the triangle was chosen twice, $\frac{1}{15}v$.
(d) two triangles with no common edges, $\frac{16}{5}v$ }
\end{figure}
The $ u_{\alpha} $ and $ v_{\alpha} $ for the seven lattices are
listed in the following table. \\
\begin{table}[htbp]
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
lattices & stripe &square&triangular&SC&BCC&FCC&QC\\ \hline
$u_{\alpha}$ & $\frac{3}{2}u$ &$\frac{9}{4}u$ &$\frac{5}{2}u$& $\frac{5}{2}u$ &$\frac{15}{4}u $&$\frac{21}{8}u$&$\frac{37}{10}u$ \\ \hline
$v_{\alpha}$ &$\frac{5}{2}v$ &$\frac{25}{4}v$ &$\frac{85}{9}v$& $\frac{155}{18}v$ &$\frac{220}{9}v$&$10v$&$\frac{497}{18}v$\\ \hline
\end{tabular}
\caption{ $ u $ and $ v $ for the seven lattices}
\end{table}
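The entries of Table I can be checked independently by brute-force enumeration: with the scaling
$\psi_{G} \to \psi_{G}\, m^{-1/2}$ described above, $u_{\alpha}/u$ equals the number of ordered
quadruples of vectors in $L_{G}$ summing to zero divided by $m^{2}$, and $v_{\alpha}/v$ the
analogous count of sextuples divided by $m^{3}$, with every paired and non-paired configuration
counted automatically. The following Python sketch (illustrative only) does this for the four
lattices whose $L_{G}$ is small enough for a direct sum; the triangular vectors are written in
integer coordinates with respect to the oblique basis of Fig.3c, for which only the zero-sum
condition matters.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# shortest reciprocal lattice vectors L_G, in integer coordinates
lattices = {
    "stripe":     [(1,), (-1,)],
    "square":     [(1, 0), (-1, 0), (0, 1), (0, -1)],
    "triangular": [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)],
    "SC":         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0),
                   (0, 0, 1), (0, 0, -1)],
}

def coefficient(vectors, order):
    """Count ordered 'order'-tuples of vectors summing to zero,
    normalised by m**(order/2) as dictated by psi_G -> psi_G m^{-1/2}."""
    m = len(vectors)
    zero = (0,) * len(vectors[0])
    count = sum(1 for combo in product(vectors, repeat=order)
                if tuple(map(sum, zip(*combo))) == zero)
    return Fraction(count, m ** (order // 2))

for name, vecs in lattices.items():
    print(name, "u =", coefficient(vecs, 4), "v =", coefficient(vecs, 6))
\end{verbatim}
Running this reproduces the stripe, square, triangular and SC columns of Table I; the remaining
lattices need the larger three-dimensional vector sets of Fig.4 (and, for the quasicrystal with
$m=30$, a far longer sum).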
\section{ Optimal lattice structure of the LOFF state }
In the original GL action Eqn.\ref{mom}, $ u $ can be either negative or
positive. In case $ v $ is also negative, an eighth order term would be
needed. In this paper, we assume $ v $ is always positive to keep
the system stable. In the following, we discuss the $ u < 0 $ and $ u >
0 $ cases respectively.\\
{\sl 1. $ u $ is positive. }
It is easy to see that $u_{\Vert}<u_{\Box}<u_{sc}=u_{\triangle}<u_{fcc}<u_{bcc}$ and $v_{\Vert}<v_{\Box}<v_{sc}<v_{\triangle}<v_{fcc}<v_{bcc}$ so for any given $ \psi $: $
f_{\Vert}(\psi)<f_{\Box}(\psi)<f_{sc}(\psi)<f_{\triangle}(\psi)<f_{fcc}(\psi)<f_{bcc}(\psi)
$. Then $
f_{\Vert}(\psi_{\Vert})<f_{\Box}(\psi_{\Box})<f_{sc}(\psi_{sc})<f_{\triangle}(\psi_{\triangle})<
f_{fcc}(\psi_{fcc})<f_{bcc}(\psi_{bcc}) $.
However, more work is
needed to compare the quasicrystal with BCC. Minimization of
Eqn.\ref{eff} leads to the order parameter and the free energy:
\begin{eqnarray}
\psi_{\alpha}^{2} & = &
\frac{-2u_{\alpha}+\sqrt{4u^{2}_{\alpha}-6v_{\alpha}r}}{6v_{\alpha}}
\nonumber \\
f & = &
\frac{6rv_{\alpha}-4u^{2}_{\alpha}}{18v_{\alpha}}\psi_{\alpha}^{2}-\frac{u_{\alpha}r}{18v_{\alpha}}
\label{free}
\end{eqnarray}
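The first line is simply the stationarity condition of Eqn.\ref{eff} viewed as a cubic in
$\psi^{2}$,
$$
\frac{\partial f}{\partial (\psi^{2})} = \frac{1}{2} r + 2 u_{\alpha} \psi^{2} + 3 v_{\alpha} \psi^{4} = 0,
$$
solved with the upper sign of the square root, which selects the local minimum; substituting this
root back into Eqn.\ref{eff} gives the second line.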
Defining $ r=x \frac{u^{2}}{v}$, where $ x $ is dimensionless, and plugging it into Eqn.\ref{free},
we get $ f_{\alpha}=\frac{u^{3}}{v^{2}} g_{\alpha}(x) $, where the $ g_{\alpha} $ are dimensionless
functions and $\alpha$ stands for the quasicrystal or BCC. Comparing these two functions,
we find that the ordering of the two lattices changes with $x$, as shown in
Fig.7.
\begin{figure}
\includegraphics[width=3in]{4.eps}
\caption{ $ u $ is positive. Difference between $ g_{qc} $ and $g_{bcc}$. }
\end{figure}
When $-0.274\frac{u^{2}}{v}<r<0$, $g_{qc}<g_{bcc}$ and thus $f_{qc}<f_{bcc}$;
when $ r<-0.274\frac{u^{2}}{v} $, $ g_{qc}>g_{bcc}$ and thus $ f_{qc} > f_{bcc}$.
In either case, the stripe phase is the lowest free energy lattice.\\
{\sl 2. $ u $ is negative. } Eqn.\ref{free} still holds for $ u<0 $,
and we can use the same method as in the $u>0$ case. Defining $
r=x \frac{u^{2}}{v}$ and plugging it into Eqn.\ref{free}, we again
have $
f_{\alpha}=\frac{u^{3}}{v^{2}}g_{\alpha}(x)$. For the seven different
lattices, we get the same prefactor $\frac{u^{3}}{v^{2}}$, but
different functions $g_{\alpha}$ of $ x $.
\begin{figure}
\includegraphics[width=3.5in]{3.eps}
\caption{ $ u $ is negative. (a) $g_{\alpha}(x)$ of seven different lattices, it is hard to see the
difference between FCC and triangular in this scale.
(b) The difference between triangular and FCC in the expanded scale. }
\end{figure}
Comparing $ g_{\Box},g_{\Vert},g_{\triangle},g_{bcc},g_{fcc},g_{sc},g_{qc}$, shown in Fig.8a,
we find that there is a crossing in the ordering of the triangular lattice and the FCC
lattice, shown in Fig.8(b). The transition temperature of FCC is
$ T_{fcc}= \frac{1}{2}\frac{u_{fcc}^{2}}{v_{fcc}}=\frac{441}{1280}\frac{u^{2}}{v}$ and that of the triangular lattice is
$T_{\triangle}= \frac{1}{2}\frac{u_{\triangle}^{2}}{v_{\triangle}}=\frac{45}{136}\frac{u^{2}}{v}$. This shows that as the
temperature is decreased, the first solid phase between these two is
FCC; but when the temperature is decreased further, below the
transition temperature of the triangular lattice, and when
$r < -0.617\frac{u^{2}}{v}$, the triangular lattice has a lower
energy than FCC, which means that FCC is a metastable state beyond
that point. In general, we have the following relations:
when $ -0.617\frac{u^{2}}{v}< r < T_{fcc}$,
$ g_{\Vert}<g_{\Box}<g_{sc}<g_{fcc}<g_{\triangle}<g_{bcc}<g_{qc} $
thus $f_{\Vert}<f_{\Box}<f_{sc}<f_{fcc}<f_{\triangle}<f_{bcc}<f_{qc}$.
When $ r < -0.617\frac{u^{2}}{v} $,
$g_{\Vert}<g_{\Box}<g_{sc}<g_{\triangle}<g_{fcc}<g_{bcc}<g_{qc}$
thus $f_{\Vert}<f_{\Box}<f_{sc}<f_{\triangle}<f_{fcc}<f_{bcc}<f_{qc}$.
In any case, the stripe phase is always the lowest energy
state of all the seven lattices.
In fact, we can reach the same
conclusion from the critical transition temperatures of the different lattices. It is known that the transition
temperature in the above model is
$ r_{c}=\frac{1}{2}\frac{u_{\alpha}^{2}}{v_{\alpha}}$. Plugging in
$u_{\alpha}$ and $ v_{\alpha}$ for the different lattices, we find
that the stripe lattice has the highest transition temperature, as
expected, which means that when we decrease the temperature, the first
solid phase will be the stripe phase.
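The comparisons above are easily automated. The following Python sketch (illustrative only)
minimises Eqn.\ref{eff} with the Table I coefficients, prints the transition temperatures
$r_{c} = \frac{1}{2} u_{\alpha}^{2}/v_{\alpha}$, and locates the triangular/FCC level crossing
for $u<0$ by bisection, to be compared with the value quoted above:
\begin{verbatim}
import math

# (u_alpha/u, v_alpha/v) from Table I
table = {
    "stripe": (3/2, 5/2),       "square": (9/4, 25/4),
    "triangular": (5/2, 85/9),  "SC": (5/2, 155/18),
    "BCC": (15/4, 220/9),       "FCC": (21/8, 10.0),
    "QC": (37/10, 497/18),
}

u, v = -1.0, 1.0    # the u < 0 case; set u = +1.0 for the other branch

def f_min(r, cu, cv):
    """Minimum over psi^2 >= 0 of f = (r/2) psi^2 + u_a psi^4 + v_a psi^6."""
    ua, va = cu * u, cv * v
    disc = 4 * ua**2 - 6 * va * r
    best = 0.0
    if disc >= 0:
        psi2 = (-2 * ua + math.sqrt(disc)) / (6 * va)  # candidate minimum
        if psi2 > 0:
            best = min(best, 0.5 * r * psi2 + ua * psi2**2 + va * psi2**3)
    return best

rc = {k: 0.5 * (cu * u)**2 / (cv * v) for k, (cu, cv) in table.items()}
print("highest r_c:", max(rc, key=rc.get), rc)   # the stripe phase

# bisect for the triangular/FCC crossing, r in units of u^2/v
def diff(r):
    return f_min(r, *table["triangular"]) - f_min(r, *table["FCC"])

lo, hi = -1.0, -0.4   # diff(lo) < 0 (triangular lower), diff(hi) > 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if diff(mid) < 0 else (lo, mid)
print("triangular/FCC crossing near r =", 0.5 * (lo + hi), "u^2/v")
\end{verbatim}
The same routine with $u=+1$ locates the QC/BCC crossing discussed above.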
\section{Conclusions}
In this paper, we study the transition from the normal state to the LOFF state using the GL free energy
in mean field theory. We consider the seven most common lattices.
By comparing the free energies and the transition temperatures of the
seven lattice structures, we find that the lowest energy lattice structure of the LOFF state is
the stripe phase, which is the LO state originally proposed by Larkin and Ovchinnikov \cite{lo}.
Our result shows that in heavy fermion systems or cold atom systems, at sufficiently low temperature,
if a LOFF state can be realized, then its lattice structure will likely be a (stripe) LO
phase, which will lead to anisotropy in many physically measurable
quantities. Although so far there is no direct probe of the structure of the order parameter
in these heavy fermion materials, in the experiment of \cite{capan} the thermal conductivity
measurement was used to probe the anisotropy of the order parameter,
especially the structure of the nodes in momentum space. The
experiment indeed shows anisotropy of the thermal conductivity of
$\mathrm{CeCoIn}_{5}$ in the possible LOFF state regime of Fig.1. Our results
suggest that the LOFF state observed in the experiment is the
original LO state. Of course, the order parameter may contain higher
harmonics. Recently, it was argued in \cite{yip} that the LO
state may be stable in an appreciable regime in the imbalance
versus detuning phase diagram on the BCS side of the Feshbach
resonance. It is not known whether the GL action can still be used to
describe the normal to LO transition at $ T=0 $, where $ r=
p-p_{c} $ with $ p_{c} $ the critical polarization difference,
because at $ T=0 $ the residual fermions cannot be integrated out,
especially near the transition point. However, we expect the normal
to LOFF state transition to still be a Lifshitz-type first
order transition. Well inside the LOFF state, the mean field analysis in
this paper still holds, so the results still apply.
We thank Kun Yang for helpful discussions and Yong Tang for technical support. The Research at KITP
was supported in part by the NSF under grant No. PHY-05-51164.
\label{sec:introduction}
The recent interest in non-lorentzian theories and their associated
geometries is, among other things, due to the following developments:
\begin{enumerate}[label=(\roman*)]
\item \emph{non-relativistic holography} \cite{sachdev2011quantum}
\cite{zaanen2015holographic}, which has applications in condensed
matter physics. In particular it allows one to describe non-relativistic
strongly coupled field theories in terms of dual non-relativistic
gravity theories;
\item \emph{flat space holography}, see for example
\cite{barnich2010symmetries} \cite{bagchi2012bms} (general),
\cite{bondi1962gravitational} \cite{sachs1962gravitational} (BMS
symmetries) and \cite{levy1965nouvelle} \cite{sen1966analogue}
(Carroll symmetries), which allows us to understand soft theorems
\cite{weinberg1965infrared} \cite{strominger2018lectures} and the
symmetries of black-hole horizons \cite{Donnay:2019jiz};
\item \emph{non-relativistic string theories} \cite{Gomis:2000bd}
\cite{danielsson2000iia}, and carrollian string theories
\cite{Cardona:2016ytk} as corners of the moduli space of solvable
string theories (see the review \cite{Oling:2022fft});
\item \emph{post-Newtonian corrections} \cite{dautcourt1990newtonian,
Dautcourt:1996pm, VandenBleeken:2017rij, Hansen:2019svu,
Hansen:2020wqw} in the experimental and theoretical investigations
of gravitational waves \cite{Planck:2018vyg}; and
\item \emph{fractons} \cite{Nandkishore:2018sel} \cite{Pretko:2020cko}
which are condensed matter configurations with restricted mobility
which display infrared/ultraviolet mixing with subsystem symmetries
(see the review \cite{Grosvenor:2021hkn}).
\end{enumerate}
In this review we introduce some of the basic concepts and tools to
study these theories. We first introduce kinematical Lie algebras
their associated homogeneous spacetimes. Some of these Lie algebras
arise as contractions of the isometry algebras of (anti) de~Sitter
spacetimes, following the pioneering work of Bacry and Lévy-Leblond
\cite{Bacry:1968zf}, but by far not all of them are obtained in this
way. We restrict ourselves to kinematical Lie algebras which preserve
space isotropy and hence the kinematical spacetimes we consider are
also spatially isotropic. They are adequate to describe particle
dynamics. In particular this means that we are not considering
so-called $p$-brane kinematical Lie algebras and their associated
spacetimes in which to describe non-lorentzian $p$-brane actions. We
refer the interested reader to \cite{Brugues:2004an, Gomis:2005pg,
Brugues:2006yd, Barducci:2018wuj}.
We present a classification of (spatially isotropic) kinematical
and aristotelian\footnote{These are kinematical Lie algebras without
boosts.} Lie algebras in generic dimension
\cite{Figueroa-OFarrill:2017ycu,Figueroa-OFarrill:2017tcy}. Generic
means that they exist in all dimensions. There are additional
kinematical Lie algebras in two, three and four spacetime dimensions,
to which we refer the reader to the classic work of Bacry and Nuyts
\cite{MR857383} (reviewed in \cite{Figueroa-OFarrill:2017ycu}) for
dimension $3+1$, \cite{Andrzejewski:2018gmz} for dimension $2+1$ and
the classic Bianchi classification of three-dimensional Lie algebras
\cite{Bianchi,MR1900159} for $1+1$. After a brief review of
homogeneous geometry and the infinitesimal description of homogeneous
spaces in terms of Klein pairs, we present the classification of
spatially isotropic homogeneous spacetimes of kinematical Lie groups.
Again we list those which exist in generic dimension, which here
means that we are omitting some $1+1$ and $2+1$ dimensional spacetimes,
which can be found in
\cite{Figueroa-OFarrill:2018ilb,Figueroa-OFarrill:2019sex}.
In the study of particle dynamics on homogeneous kinematical
spacetimes, one meets homogeneous spaces of the kinematical groups
other that the actual spacetimes: namely, coadjoint orbits and their
associated evolution spaces. We review the rôle played by these
homogeneous spaces in the construction of lagrangians describing
particle dynamics on the homogeneous spacetimes.
We review the method of nonlinear realisations and coadjoint orbits in
the construction of particle lagrangians and apply it in several
examples, among them the well-known relativistic massive and
massless particles. In the non-relativistic case we construct the
harmonic oscillator as a nonlinear realisation of a centrally extended
Newton--Hooke group. We also consider the massless galilean particle
introduced by Souriau \cite{MR1066693,Souriau}.
In the case of Carroll, due to its causal structure, we consider
massive timelike and tachyonic particles. Using the conformal algebra
in one dimension we derive the action of conformal mechanics of
de Alfaro, Fubini and Furlan \cite{deAlfaro:1976vlx} and the Schwarzian
action \cite{Kitaev:2017awl,Maldacena:2016hyu,Stanford:2017thb}.
The analogues of contractions for Lie algebras in dynamical systems
are limits of actions, such as non-relativistic, carrollian and flat
limits. The actions constructed using the nonlinear realisation
method are also obtained as nonrelativistic limits of the relativistic
actions. In general these limits produce terms that are divergent;
these unwanted terms can be eliminated by a suitable coupling of the
relativistic dynamical system to a gauge field in the case of a
particle, or to a $B$-field in the case of a string \cite{Gomis:2000bd}
\cite{Gomis:2005pg}. In some of these cases the divergent terms are
total derivatives. One can also eliminate divergences by a
redefinition of the parameters appearing in the first term of the
expansion, see \cite{Batlle:2016iel}\cite{Bergshoeff:2017btm}.
As for the case of non-lorentzian particles, we continue discussing
different aspects of non-lorentzian gravity theories. We first review
how general relativity can be described by a gauging procedure applied
to the Poincare algebra. Next, we extend the discussion by applying
the same gauging procedure to the following non-lorentzian algebras:
Galilei, Bargmann and Carroll. These gaugings lead to Galilei gravity,
Newton-Cartan gravity and Carroll gravity, respectively. We show how
these same gravity theories can be obtained by taking particular
(Galilean, Bargmann and Carroll) limits of general relativity.
For recent work on electric and magnetic theories of gravity see
\cite{Henneaux:2021yzg,Hansen:2021fxi,Perez:2021abf,Perez:2022jpr}.
Besides taking non-relativistic limits there are two more ways to
obtain non-relativistic theories that we do not explore any further in
this review. First, instead of taking the Inönü--Wigner contraction
of a Lie algebra, one may also consider a Lie algebra expansion
\cite{Hatsuda:2001pp,Boulanger:2002bt,deAzcarraga:2002xi,Izaurieta:2006zz}
where the number of generators corresponding to the nonrelativistic
symmetries is increased. Second, one may obtain a non-relativistic
theory by the null reduction of a relativistic theory in one spatial
dimension higher, see, e.g.,
\cite{Gauntlett:1990xq,Julia:1994bs}. This null reduction is based
upon the fact that the Bargmann algebra allows a null-embedding into a
Poincare algebra in one spatial dimension higher.
Having discussed non-lorentzian gravity, we continue introducing
matter and discussing non-lorentzian field theories. We will
do this for a for a complex and real massive spin-0 particle, a
massive spin-1/2 particle and a massless spin-1 particle. In
particular, for spin-0, we will discuss the Galilei, Bargmann and
Carroll limits while for spin-1/2 and spin-1 we will only discuss the
Bargmann limit.
\section{Motivation}
\label{sec:motivation}
Let us motivate our discussion of kinematical symmetries and their
spacetimes by contrasting two classical models of the universe: the
Galilei spacetime of newtonian mechanics and Minkowski spacetime of
special relativity. As we will see, both spacetimes are described by
a four-dimensional affine space, homogeneous under the action of a
kinematical Lie group; that is, a transformation group consisting of
rotations, boosts and translations in both space and time. We will
contrast the invariant structures of the two spacetimes: a clock and a
ruler in the Galilei spacetime and a proper distance in Minkowski
spacetime. The latter defines a lorentzian metric and the former, as
we will see, a (weak) Newton--Cartan structure. We will also contrast
their Lie algebras of symmetries: the finite-dimensional Lie algebra
of isometries in Minkowski spacetime and the infinite-dimensional
Coriolis algebra in the Galilei spacetime.
\subsection{Affine space}
\label{sec:affine-space}
Let $\mathbb{A}^4$ denote the four-dimensional affine space. It is modelled
on the vector space $\mathbb{R}^4$ in the sense that given any two points
$a,b \in \mathbb{A}^4$ there exists a unique translation $v \in
\mathbb{R}^4$ such that $b = a + v$. We often refer to $v$ as $b-a$ and
identify translations with differences of points. We will use an
explicit model for $\mathbb{A}^4$ as the affine hyperplane in $\mathbb{R}^5$
consisting of points $(x^1,x^2,x^3,x^4,x^5=1) \in \mathbb{R}^5$, but we
should emphasise that the fifth dimension is an auxiliary construct
and has no physical meaning. One cannot add points in $\mathbb{A}^4$ (their
last entry would not equal $1$), but one can add differences, since
those lie in the hyperplane $x^5=0$. In this model, the group
$\operatorname{Aff}(4,\mathbb{R})$ of affine transformations of $\mathbb{A}^4$ is the subgroup of
$\operatorname{GL}(5,\mathbb{R})$ which preserves the hyperplane $x^5=1$. It consists of
matrices of the form
\begin{equation}\label{eq:affine-trans}
\begin{pmatrix}
L & v \\ 0 & 1
\end{pmatrix}
\end{equation}
where $v \in \mathbb{R}^4$ and $L \in \operatorname{GL}(4,\mathbb{R})$. We will see that the
relativity groups of the Galilei and Minkowski spacetimes are
subgroups of the affine group containing all the translations $v \in
\mathbb{R}^4$ but with a restricted subgroup of linear transformations
consisting of rotations and boosts.
It follows from matrix multiplication that the affine group is the
semidirect product $\operatorname{GL}(4,\mathbb{R}) \ltimes \mathbb{R}^4$, with $\operatorname{GL}(4,\mathbb{R})$
acting on $\mathbb{R}^4$ by matrix multiplication. Multiplying $(x,1) \in
\mathbb{R}^5$ by the matrix in equation~\eqref{eq:affine-trans}
gives $(Lx + v, 1)$, which is the effect of an affine transformation.
Both the Galilei and Minkowski spacetimes are described by $\mathbb{A}^4$,
only that their invariant structures differ. Points in $\mathbb{A}^4$ are
called \textbf{(spacetime) events}.
\subsection{Galilei spacetime}
\label{sec:galilei-spacetime}
The following description of Galilean spacetime is essentially due to
Weyl \cite{MR988402}.
Galilei spacetime is defined by $\mathbb{A}^4$ together with two invariant
notions:
\begin{itemize}
\item a \textbf{clock} $\tau : \mathbb{R}^4 \to \mathbb{R}$, sending $b-a \mapsto
\tau(b-a)$ and measuring the time interval between two events $a,b
\in \mathbb{A}^4$. If $a = (x, 1)$ and $b = (y, 1)$, then $\tau(b-a) =
y^4 - x^4$. Two events $a,b \in \mathbb{A}^4$ are said to be
\textbf{simultaneous} if $\tau(b-a) = 0$. In other words,
simultaneous events are related by translations in the
kernel of $\tau$. If we fix an event $a$, the set of events
simultaneous to $a$ defines a three-dimensional affine subspace
\begin{equation}
a + \ker \tau = \left\{a + v \middle | \tau(v) = 0\right\}
\end{equation}
of $\mathbb{A}^4$. As the notation suggests, it is a coset of the subgroup
$\ker \tau$ of the translation group $\mathbb{R}^4$. The quotient $\mathbb{A}^4/\ker
\tau$ is an affine line $\mathbb{A}^1$, so that the clock gives a fibration $\pi: \mathbb{A}^4 \to \mathbb{A}^1$ whose fibre
at $\pi(a)$ consists of all those events simultaneous to $a$, which
constitute an affine hypersurface $\mathbb{A}^3_a$ of $\mathbb{A}^4$. This is
illustrated in Figure~\ref{fig:clock-fibration}.
\item a \textbf{ruler} $\lambda : \ker \tau \to \mathbb{R}$, sending
$b-a \mapsto \lambda(b-a)$ and measuring the euclidean distance
between simultaneous events. Explicitly, if $a = (\boldsymbol{x}, x^4, 1)$ and $b = (\boldsymbol{y}, y^4,1)$ with
$\boldsymbol{x},\boldsymbol{y} \in \mathbb{R}^3$ and $x^4=y^4$ are simultaneous events, then
$\lambda(b-a) = \|\boldsymbol{y} - \boldsymbol{x}\| = \sqrt{(\boldsymbol{y} - \boldsymbol{x})\cdot (\boldsymbol{y} -
\boldsymbol{x})}$, which is the euclidean distance between $\boldsymbol{x}$ and $\boldsymbol{y}$.
\end{itemize}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[x=1.0cm,y=1.0cm,scale=0.6]
\coordinate [label=above right:{$\mathbb{A}^4$}] (a4) at (4,1);
\coordinate [label=right:{$\mathbb{A}^1$}] (a1) at (4,-1);
\coordinate [label=below:{$\pi(a)$}] (pa) at (-3,-1);
\coordinate [label=below:{$\pi(b)$}] (pb) at (2,-1);
\coordinate [label=left:{$a$}] (a) at (-3,3);
\coordinate [label=right:{$b$}] (b) at (2,6);
\coordinate [label=right:{$\mathbb{A}^3_a$}] (a3) at (-3,7);
%
\draw [black!50!white, thin] (-4,1) -- (4,1) -- (4,9) -- (-4,9) -- cycle;
\draw [black!50!white, thin] (-4,-1) -- (4,-1);
%
\draw [red, thick] (-3,1) -- (-3,9);
\draw [red, thick] (2,1) -- (2,9);
%
\draw[->,shorten >=2mm,shorten <=2mm,black,thick] (a)--(b) node[midway,sloped,above]{$b-a$};
\draw[->,shorten >=1mm,shorten <=1mm,black,thick] (pa)--(pb) node[midway,above]{$\tau(b-a)$};
%
\begin{scope}[transform canvas={xshift=0.7em}]
\draw [->, shorten >=3mm,thin, black] (a4) -- node[above right] {$\pi$} (a1);
\end{scope}
%
\foreach \point in {pa,pb}
\fill [red] (\point) circle (3pt);
\foreach \point in {a,b}
\fill [blue] (\point) circle (3pt);
\end{tikzpicture}
\caption{The clock fibration $\pi : \mathbb{A}^4 \to \mathbb{A}^1$}
\label{fig:clock-fibration}
\end{figure}
The kinematical group of Galilei spacetime is called the
\textbf{Galilei group} and it consists of those affine
transformations of $\mathbb{A}^4$ which preserve the clock and the ruler. It
embeds in $\operatorname{GL}(5,\mathbb{R})$ as those matrices of the form
\begin{equation}
\label{eq:gal-in-gl5}
\begin{pmatrix}
R & \boldsymbol{v} & \boldsymbol{p} \\
0 & 1 & s \\
0 & 0 & 1
\end{pmatrix},
\end{equation}
where $R \in \operatorname{O}(3)$, $\boldsymbol{p},\boldsymbol{v} \in \mathbb{R}^3$ and $s \in
\mathbb{R}$. This matrix is of the form~\eqref{eq:affine-trans}, but
where the general linear transformation $L$ is of the form
$\begin{pmatrix} R & \boldsymbol{v} \\ 0 & 1\end{pmatrix}$.
The action of the matrix in equation~\eqref{eq:gal-in-gl5} on an event
$(\boldsymbol{x}, t, 1)$ gives the event $(R\boldsymbol{x} + t \boldsymbol{v} + \boldsymbol{p}, t +
s,1)$ which we interpret as the composition of an orthogonal
transformation $\boldsymbol{x} \mapsto R \boldsymbol{x}$, a \textbf{Galilei boost} $\boldsymbol{x}
\mapsto \boldsymbol{x} + t \boldsymbol{v}$, a spatial translation $\boldsymbol{x} \mapsto \boldsymbol{x} +
\boldsymbol{p}$ and a temporal translation $t \mapsto t + s$:
\begin{equation}
\label{eq:gal-decomposition}
\begin{pmatrix}
R & \boldsymbol{v} & \boldsymbol{p} \\
0 & 1 & s \\
0 & 0 & 1
\end{pmatrix} =
\begin{pmatrix}
I & 0 & 0 \\
0 & 1 & s\\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
I & 0 & \boldsymbol{p} \\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
I & \boldsymbol{v} & 0 \\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
R & 0 & 0 \\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}.
\end{equation}
Its Lie algebra is the \textbf{Galilei algebra}, which is isomorphic
to the subalgebra of $\mathfrak{gl}(5,\mathbb{R})$ consisting of matrices of the form
\begin{equation}
\label{eq:gal-algebra-gl5}
\begin{pmatrix}
A & \boldsymbol{v} & \boldsymbol{p}\\
0 & 0 & s\\
0 & 0 & 0
\end{pmatrix},
\end{equation}
where $A \in \mathfrak{so}(3)$, $\boldsymbol{v},\boldsymbol{p} \in \mathbb{R}^3$ and $s \in \mathbb{R}$.
We may introduce a basis $L_{ab} = - L_{ba}, B_a, P_a, H$ by
\begin{equation}
\label{eq:gal-basis}
\begin{pmatrix}
A & \boldsymbol{v} & \boldsymbol{p}\\
0 & 0 & s\\
0 & 0 & 0
\end{pmatrix} = \tfrac12 A^{ab} L_{ab} + v^a B_a + p^a P_a + s H.
\end{equation}
We can easily work out the Lie brackets of the Galilei algebra in
this basis. The nonzero brackets are given by
\begin{equation}
\label{eq:gal-algebra-brackets}
\begin{split}
[L_{ab},L_{cd}] &= \delta_{bc} L_{ad} - \delta_{ac} L_{bd} - \delta_{bd} L_{ac} + \delta_{ad} L_{bc} \\
[L_{ab}, B_c] &= \delta_{bc} B_a - \delta_{ac} B_b\\
[L_{ab}, P_c] &= \delta_{bc} P_a - \delta_{ac} P_b\\
[B_a, H] &= P_a.
\end{split}
\end{equation}
This shows that $L_{ab}$ span an $\mathfrak{so}(3)$ subalgebra, relative to
which $B_a,P_a$ transform according to the three-dimensional vector
representation (which is also the adjoint representation in this
dimension) and $H$ transforms as the one-dimensional scalar
representation. We shall see that all kinematical Lie algebras (with
spatial isotropy) share these properties, which are strong enough to
allow for their classification.
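These brackets are conveniently verified in the defining representation; the following small
\texttt{numpy} sketch (for illustration) builds the matrices of
equation~\eqref{eq:gal-algebra-gl5} and checks, for instance, that $[B_a, H] = P_a$ while boosts
and spatial translations commute among themselves:
\begin{verbatim}
import numpy as np

def galilei(A, v, p, s):
    """Embed (A, v, p, s) into gl(5,R) as in eq. (gal-algebra-gl5)."""
    X = np.zeros((5, 5))
    X[:3, :3] = A
    X[:3, 3] = v
    X[:3, 4] = p
    X[3, 4] = s
    return X

def comm(X, Y):
    return X @ Y - Y @ X

e, zero = np.eye(3), np.zeros(3)
B = [galilei(np.zeros((3, 3)), e[a], zero, 0) for a in range(3)]
P = [galilei(np.zeros((3, 3)), zero, e[a], 0) for a in range(3)]
H = galilei(np.zeros((3, 3)), zero, zero, 1)

assert all(np.allclose(comm(B[a], H), P[a]) for a in range(3))  # [B_a, H] = P_a
assert np.allclose(comm(B[0], B[1]), 0)                         # boosts commute
assert np.allclose(comm(B[0], P[1]), 0)                         # [B_a, P_b] = 0

A12 = np.zeros((3, 3)); A12[0, 1], A12[1, 0] = 1.0, -1.0        # L_{12}
L12 = galilei(A12, zero, zero, 0)
assert np.allclose(comm(L12, B[1]), B[0])                       # [L_{12}, B_2] = B_1
\end{verbatim}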
\subsection{Minkowski spacetime}
\label{sec:minkowski-spacetime}
Minkowski spacetime is also described by $\mathbb{A}^4$, but the invariant
notion is now that of a \textbf{proper distance} $\Delta : \mathbb{R}^4 \to
\mathbb{R}$, sending $b-a \mapsto \Delta(b-a)$, where if $a = (x,1)$ and $b =
(y,1)$,
\begin{equation}
\label{eq:proper-distance}
\Delta(b-a) = (y-x)^T \eta (y-x),
\end{equation}
where
\begin{equation}
\label{eq:minkowski-IP}
\eta =
\begin{pmatrix}
-1 & 0 & 0 & 0 \\
\phantom{-}0 & 1 & 0 & 0\\
\phantom{-}0 & 0 & 1 & 0\\
\phantom{-}0 & 0 & 0 & 1
\end{pmatrix}.
\end{equation}
We no longer have a separate clock and ruler, or as Minkowski himself
put it \cite{zbMATH02638586}:
\begin{quotation}
Von Stund' an sollen Raum für sich und Zeit für sich völlig zu
Schatten herabsinken und nur noch eine Art Union der beiden soll
Selbständigkeit bewahren.\footnote{Henceforth space by itself, and
time by itself, are doomed to fade away into mere shadows, and
only a kind of union of the two will preserve an independent
reality.}
\end{quotation}
In particular, there is no longer an invariant notion of simultaneity
between events, so instead of affine subspaces of simultaneity, we
have lightcones at every spacetime event $a$: the \textbf{lightcone}
$\mathbb{L}_a$ of $a$ being defined as those events which are a zero proper distance
away from $a$:
\begin{equation}
\mathbb{L}_a = \left\{ b \in \mathbb{A}^4 \middle | \Delta(b-a) = 0\right\}.
\end{equation}
The kinematical group of Minkowski spacetime is the \textbf{Poincaré
group} and consists of those affine transformations which preserve
the proper distance between events. It embeds in $\operatorname{GL}(5,\mathbb{R})$ as
those matrices
\begin{equation}
\label{eq:poin-in-gl5}
\begin{pmatrix}
L & v \\
0 & 1
\end{pmatrix}
\end{equation}
where $L^T \eta L = \eta$ and $v \in \mathbb{R}^4$. Matrix multiplication
shows that the Poincaré group is isomorphic to the semidirect product
$\operatorname{O}(3,1) \ltimes \mathbb{R}^4$, where $\operatorname{O}(3,1)$ is the \textbf{Lorentz
group}. Acting on an event $(x,1)$ with the matrix in
equation~\eqref{eq:poin-in-gl5}, we obtain the event $(Lx + v, 1)$,
which is the effect of a Lorentz transformation $(x \mapsto L x)$ and a
(spatiotemporal) translation $x \mapsto x + v$; that is,
\begin{equation}
\begin{pmatrix}
L & v \\
0 & 1
\end{pmatrix}=
\begin{pmatrix}
I & v \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
L & 0 \\
0 & 1
\end{pmatrix}.
\end{equation}
The Lie algebra of the Poincaré group embeds in $\mathfrak{gl}(5,\mathbb{R})$ as those
matrices of the form
\begin{equation}
\begin{pmatrix}
X & v \\
0 & 0
\end{pmatrix}
\end{equation}
where $X^T \eta + \eta X = 0$ and $v \in \mathbb{R}^4$. Introducing a basis
$L_{AB} = - L_{BA}, P_A$, where now $A,B = 0,1,2,3$, by
\begin{equation}
\begin{pmatrix}
X & v \\
0 & 0
\end{pmatrix} = \tfrac12 X^{AB} L_{AB} + v^A P_A,
\end{equation}
it is easy to calculate the nonzero Lie brackets:
\begin{equation}\label{eq:poincare-brackets}
\begin{split}
[L_{AB},L_{CD}] &= \eta_{BC} L_{AD} - \eta_{AC} L_{BD} - \eta_{BD} L_{AC} + \eta_{AD} L_{BC}\\
[L_{AB},P_C] &= \eta_{BC} P_A - \eta_{AC} P_B.
\end{split}
\end{equation}
To ease comparison with the Galilei algebra
\eqref{eq:gal-algebra-brackets}, we will let $P_A = (H = P_0, P_a)$ and
$L_{AB} = (B_a = L_{0a}, L_{ab})$, relative to which the brackets
become
\begin{equation}
\label{eq:poincare-kla-brackets}
\begin{split}
[L_{ab},L_{cd}] &= \delta_{bc} L_{ad} - \delta_{ac} L_{bd} - \delta_{bd} L_{ac} + \delta_{ad} L_{bc} \\
[L_{ab}, B_c] &= \delta_{bc} B_a - \delta_{ac} B_b\\
[L_{ab}, P_c] &= \delta_{bc} P_a - \delta_{ac} P_b\\
[B_a, B_b] &= L_{ab}\\
[B_a, P_b] &= \delta_{ab} H\\
[B_a, H] &= P_a.
\end{split}
\end{equation}
We see that again $L_{ab}$ span an $\mathfrak{so}(3)$ subalgebra relative to
which $B_a,P_a$ transform according to the three-dimensional vector
representation and $H$ transforms according to the one-dimensional
scalar representation. What sets the Poincaré and Galilei algebras
apart are the Lie brackets which do not involve the $L_{ab}$: the last
bracket in equation~\eqref{eq:gal-algebra-brackets} and the last three
brackets in equation~\eqref{eq:poincare-kla-brackets}.
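The contrast is easy to see concretely; a short \texttt{numpy} sketch (for illustration) realises
the Lorentz generators in the vector representation,
$(L_{AB})^{\mu}{}_{\nu} = \delta^{\mu}_{A}\,\eta_{B\nu} - \delta^{\mu}_{B}\,\eta_{A\nu}$, which
reproduces the brackets of equation~\eqref{eq:poincare-brackets}, and verifies the bracket
$[B_1,B_2] = L_{12}$ which has no Galilei counterpart:
\begin{verbatim}
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def L(A, B):
    """(L_{AB})^mu_nu = delta^mu_A eta_{B nu} - delta^mu_B eta_{A nu}."""
    M = np.zeros((4, 4))
    M[A, :] += eta[B, :]
    M[B, :] -= eta[A, :]
    return M

def comm(X, Y):
    return X @ Y - Y @ X

# boosts B_a = L_{0a}: two boosts close onto a rotation,
assert np.allclose(comm(L(0, 1), L(0, 2)), L(1, 2))
# and a rotation spot-check of the structure constants:
assert np.allclose(comm(L(1, 2), L(2, 3)), L(1, 3))
\end{verbatim}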
\subsection{Lie algebra of symmetries}
\label{sec:lie-algebra-symm}
Minkowski spacetime is a lorentzian manifold, diffeomorphic to $\mathbb{R}^4$
with lorentzian metric
\begin{equation}
g = - dt^2 + dx^2 + dy^2 + dz^2 = \eta_{\mu\nu} dx^\mu dx^\nu
\end{equation}
relative to cartesian coordinates $x^\mu = (t,x,y,z)$. The Poincaré
Lie algebra is isomorphic to the Lie algebra of Killing vector fields
of the metric $g$. Let $\xi = \xi^\mu \d_\mu$ denote a vector field
of Minkowski spacetime. It is a Killing vector field if $\mathscr{L}_\xi g =
0$, which translates into
\begin{equation}
\eta_{\rho\nu} \d_\mu\xi^\rho + \eta_{\mu\rho} \d_\nu\xi^\rho= 0,
\end{equation}
or, defining $\xi_\mu = \eta_{\mu\rho}\xi^\rho$, into Killing's
equation\footnote{Solutions of equation~\eqref{eq:mink-killing-eqn}
are the Noether charges for point symmetries of the geodesic
equation. Indeed, if we consider the variational problem with
lagrangian $\mathscr{L} = \tfrac12 \eta_{\mu\nu} \dot x^\mu \dot x^\nu$ and
ask which point transformations $\delta x^\mu = \xi^\mu(x)$ leave
$\mathscr{L}$ invariant, we find that $\xi^\mu$ must satisfy
equation~(\ref{eq:mink-killing-eqn}).}:
\begin{equation}\label{eq:mink-killing-eqn}
\d_\mu \xi_\nu + \d_\nu \xi_\mu = 0.
\end{equation}
Notice that $\d_\mu\d_\nu\xi_\rho$ is clearly
symmetric in $\mu\leftrightarrow\nu$ and, from Killing's equation, also
skewsymmetric in $\nu \leftrightarrow\rho$. Therefore
$\d_\mu\d_\nu\xi_\rho = 0$ and hence $\xi_\mu = \Lambda_{\mu\nu} x^\nu +
a_\mu$. Re-inserting this into Killing's equation, we find that
$\Lambda_{\mu\nu} = - \Lambda_{\nu\mu}$ and we may write the general
solution of Killing's equation as
\begin{equation}
\xi = \tfrac12 \Lambda^{\mu\nu} \xi_{L_{\mu\nu}} + a^\mu \xi_{P_\mu},
\end{equation}
where
\begin{equation}
\xi_{L_{\mu\nu}}= x_\nu \d_\mu - x_\mu \d_\nu \qquad\text{and}\qquad
\xi_{P_\mu} = \d_\mu.
\end{equation}
One can check that these vector fields obey the opposite (i.e.,
negative) brackets of those of the Poincaré Lie algebra:
\begin{equation}
\label{eq:opposite-poincare}
\begin{split}
[\xi_{L_{\mu\nu}}, \xi_{L_{\rho\sigma}}] &= - \eta_{\nu\rho}
\xi_{L_{\mu\sigma}}+ \eta_{\mu\rho} \xi_{L_{\nu\sigma}}+ \eta_{\nu\sigma}
\xi_{L_{\mu\rho}} - \eta_{\mu\sigma} \xi_{L_{\nu\rho}}\\
[\xi_{L_{\mu\nu}}, \xi_{P_\rho}] &= - \eta_{\nu\rho} \xi_{P_\mu} + \eta_{\mu\rho} \xi_{P_\nu}.
\end{split}
\end{equation}
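This sign flip can be checked directly; the following \texttt{sympy} sketch (for illustration)
computes the Lie bracket of the Killing vector fields and confirms, e.g., that
$[\xi_{L_{01}}, \xi_{L_{02}}] = -\xi_{L_{12}}$, opposite to $[L_{01}, L_{02}] = L_{12}$:
\begin{verbatim}
import sympy as sp

t, x, y, z = coords = sp.symbols('t x y z')
eta = sp.diag(-1, 1, 1, 1)
lower = [sum(eta[m, n] * coords[n] for n in range(4)) for m in range(4)]

def xi_L(A, B):
    """Components of xi_{L_{AB}} = x_B d_A - x_A d_B."""
    comp = [sp.Integer(0)] * 4
    comp[A] += lower[B]
    comp[B] -= lower[A]
    return comp

def bracket(X, Y):
    """Lie bracket of vector fields, [X,Y]^m = X^n d_n Y^m - Y^n d_n X^m."""
    return [sp.expand(sum(X[n] * sp.diff(Y[m], coords[n])
                          - Y[n] * sp.diff(X[m], coords[n])
                          for n in range(4))) for m in range(4)]

lhs = bracket(xi_L(0, 1), xi_L(0, 2))
rhs = [-c for c in xi_L(1, 2)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
\end{verbatim}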
The fact that we have an antihomomorphism of Lie algebras might seem
counter-intuitive, but we will see that it is natural in the
context of homogeneous spaces, where the group action is induced from
left multiplication in the group. The infinitesimal generators of
left multiplication are the right-invariant vector fields whose Lie
brackets are opposite to those of the left-invariant vector fields
defining the Lie algebra.
In contrast, Galilei spacetime is a non-lorentzian geometry:
there is no invariant metric, but rather an invariant Newton--Cartan
structure.\footnote{Some authors (e.g., \cite{Duval:2014uoa}) refer to
this structure as a ``weak'' Newton--Cartan structure, reserving the
unqualified name for the structure which results by an additional
choice of an adapted connection; that is a connection relative to
which the clock one-form and the spatial cometric are parallel.}
Relative to cartesian coordinates $(x,y,z,t)$, the clock defines a
one-form $\tau = dt$. Indeed, as shown in
Figure~\ref{fig:clock-fibration}, the clock is the linear projection
$\mathbb{R}^4 \to \mathbb{R}$ taking $b-a$ to $\tau(b-a)$. This is nothing but the
derivative of the projection $\pi : \mathbb{A}^4 \to \mathbb{A}^1$, which in this
model of the affine space is given by $\pi(x,y,z,t) = t$; in other
words, $dt$. We will see later that in a general Newton--Cartan
manifold, the clock one-form need not be exact or even closed. The
ruler defines an invariant symmetric $(2,0)$-tensor field
$\lambda = \d_x \otimes \d_x + \d_y \otimes \d_y + \d_z \otimes \d_z$.
Interpreting $\lambda$ as a symmetric bilinear form on one-forms, we
notice that $\lambda$ is degenerate along $dt$. It is often called
the ``spatial cometric''. In analogy with a lorentzian spacetime, let
us say that a vector field $\xi$ is ``Killing'', if it preserves the
clock one-form $\tau$ and the spatial cometric $\lambda$; that is,
\begin{equation}\label{eq:gal-killing}
\mathscr{L}_\xi \tau = 0 \qquad\text{and}\qquad \mathscr{L}_\xi \lambda = 0.
\end{equation}
For Galilei spacetime, and introducing coordinates
$x^a = (x,y,z)$, the general solution of
equations~\eqref{eq:gal-killing} is given by
\begin{equation}
\xi = \alpha \d_t + v^a(t) \d_a + \tfrac12 T^{ab}(t) (x_b \d_a - x_a\d_b),
\end{equation}
where $\alpha \in \mathbb{R}$ and where $v^a$ and $T^{ab} = - T^{ba}$ are
smooth functions of $t$. In contrast to the Lie algebra
of isometries of a lorentzian manifold, the Lie algebra of symmetries
of the (weak) Newton--Cartan structure of Galilei spacetime is
infinite-dimensional, and is known as the \textbf{Coriolis algebra}
\cite{Duval:1993pe}. It contains (the opposite of) the Galilei
algebra as a subalgebra, spanned by
\begin{equation}
\xi_H = \d_t, \qquad \xi_{P_a} = \d_a, \qquad \xi_{B_a}= t \d_a
\qquad\text{and}\qquad \xi_{L_{ab}} = - x_a \d_b + x_b \d_a.
\end{equation}
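As a quick sanity check, the boost vector field $\xi_{B_a} = t\,\d_a$
does indeed satisfy equations~\eqref{eq:gal-killing}: by Cartan's magic
formula, $\mathscr{L}_{t\d_a}\tau = d(\iota_{t\d_a} dt) = 0$, since
$dt(t\,\d_a) = 0$, whereas $\mathscr{L}_{t\d_a}\lambda = 0$ because
$[t\,\d_a, \d_b] = 0$ for all spatial $\d_b$.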
Had we considered (strict) Newton--Cartan structures, including the
adapted connection as part of the data, then the Lie algebra of
symmetries would be finite-dimensional.\footnote{The reason is that
Newton--Cartan structures are Cartan geometries and the infinitesimal
automorphisms of a Cartan geometry form a finite-dimensional Lie
algebra.}
\section{Symmetry}
\label{sec:symmetries}
In Section~\ref{sec:motivation} we discussed two models of the
universe: the Galilei and Minkowski spacetimes. Both are
four-dimensional affine spaces, homogeneous under the action of a
kinematical Lie group: the Galilei and Poincaré groups, respectively.
In this section we will define the notion of a kinematical Lie group
more formally and discuss the classification of kinematical
Lie algebras.
\subsection{Kinematical Lie algebras}
\label{sec:kinem-lie-algebr}
In a landmark paper \cite{Bacry:1968zf} written more than half a
century ago, Bacry and Lévy-Leblond asked themselves the question of
which were the possible kinematics, rephrasing the question
mathematically as the classification of kinematical Lie algebras.
A careful comparison of the Poincaré and Galilei algebras we
met in Section~\ref{sec:motivation} suggests the following
definition for four-dimensional spacetimes.\footnote{Strictly
speaking, the definition is for spatially isotropic spacetimes.
There are generalisations where the rotational subalgebra $\r$ in
the definition is replaced by a Lorentz subalgebra. Such homogeneous
spaces do occur in nature. Indeed, as shown in
\cite{Gibbons:2019zfs}, the blow-up of spatial infinity of Minkowski
spacetime is a homogeneous space of the Poincaré group with lorentzian
isotropy. There are other homogeneous spaces of the Poincaré group
occurring at the asymptotic infinities of Minkowski spacetime, as
discussed in \cite{Figueroa-OFarrill:2021sxz}.}
\begin{definition}\label{def:KLA}
A \textbf{kinematical Lie algebra} is a ten-dimensional (real) Lie
algebra $\k$ with generators $L_{ab}=-L_{ba}, B_a, P_a, H$ with $a,b=1,2,3$
satisfying the following conditions:
\begin{itemize}
\item the generators $L_{ab}$ span an $\mathfrak{so}(3)$-subalgebra $\r$ of $\k$:
\begin{equation}\label{eq:gen-kla-1}
[L_{ab},L_{cd}] = \delta_{bc} L_{ad} - \delta_{ac} L_{bd} - \delta_{bd} L_{ac} + \delta_{ad} L_{bc},
\end{equation}
\item the generators $B_a,P_a$ transform as vectors under $\r$:
\begin{equation}\label{eq:gen-kla-2}
\begin{split}
[L_{ab}, B_c] &= \delta_{bc} B_a - \delta_{ac} B_b\\
[L_{ab}, P_c] &= \delta_{bc} P_a - \delta_{ac} P_b\\
\end{split}
\end{equation}
\item and the generator $H$ transforms as a scalar:
\begin{equation}\label{eq:gen-kla-3}
[L_{ab},H] = 0.
\end{equation}
\end{itemize}
\end{definition}
In addition, Bacry and Lévy-Leblond initially also imposed that the
Lie brackets should be invariant under parity $P_a \mapsto -P_a$ and
time-reversal $H \mapsto -H$; although they did point out that those
restrictions were ``by no means compelling'' and indeed twenty years
later, Bacry and Nuyts \cite{MR857383} lifted those conditions
arriving at a classification of four-dimensional kinematical Lie
algebras. This classification was recovered using deformation theory
in \cite{Figueroa-OFarrill:2017ycu} and extended to arbitrary
dimension in \cite{Figueroa-OFarrill:2017tcy,Andrzejewski:2018gmz}.
The definition of kinematical algebra in $d+1$ dimensions is formally
as the one above, except that $a,b=1,\dots, d$ and the subalgebra $\r$
spanned by $L_{ab}$ is now isomorphic to $\mathfrak{so}(d)$. The case of $d=1$
corresponds to the Bianchi classification of three-dimensional real
Lie algebras \cite{Bianchi,MR1900159}, here re-interpreted as
kinematical Lie algebras for two-dimensional spacetimes. The cases of
$d=2$ and $d=3$ are the most complicated due to the existence of
$\epsilon_{ab}$ and $\epsilon_{abc}$ which are $\r$-invariant and can
thus appear in the Lie brackets and, indeed, there are kinematical Lie
algebras in dimension $2+1$ and $3+1$ which have no higher-dimensional
analogues. We will refer the interested reader to the papers cited
above and will concentrate here on those kinematical Lie algebras
which exist in generic dimensions.
Before we state the classification, let us make an important remark.
Although the notation for the generators of a kinematical Lie algebra
suggests a physical interpretation: namely, $L_{ab}$ generate
rotations, $B_a$ boosts, $P_a$ spatial translations and $H$ temporal
translations, it would be imprudent to take this too seriously. The
physical interpretation of the generators can only be determined once
we realise them geometrically as vector fields in a spacetime. In the
two examples we have seen in Section~\ref{sec:motivation}, it is
indeed the case that the generators can be interpreted as above, but
this is certainly not true in most cases.
One way to approach the classification is to write down the most
general $\r$-invariant Lie brackets for the generators $B_a,P_a,H$ and
impose the Jacobi identity. The Jacobi identity cuts out an algebraic
variety $\mathscr{J}$ in the vector space of possible brackets: i.e., linear
maps $\wedge^2 W \to \k$, where $W\subset \k$ is the vector subspace
spanned by $B_a, P_a, H$. Two points in $\mathscr{J}$ define isomorphic
kinematical Lie algebras if and only if they are related by a change
of basis in $W$. We take care of this ambiguity by quotienting $\mathscr{J}$
by the action of the subgroup of $\operatorname{GL}(W)$ which commutes with the
action of $\r$. In practice, one selects a unique representative for
each isomorphism class of kinematical Lie algebras.
Table~\ref{tab:KLAs} lists the kinematical Lie algebras in generic
dimension $d+1$. For $d \leq 2$, there are some degeneracies (e.g., if
$d=2$, the Galilei algebra $\mathfrak{g}$ is isomorphic to the Carroll algebra
$\mathfrak{c}$), but for general $d$ the table below lists non-isomorphic
kinematical Lie algebras and for $d>3$ the table is complete. The
table lists the nonzero Lie brackets except for the common ones in
every kinematical Lie algebra. It also uses a shorthand notation
omitting indices. The only $\r$-invariant tensor which can appear is
$\delta_{ab}$ and hence there is an unambiguous way to add indices.
For example, $[H,\boldsymbol{B}] = \boldsymbol{B} + \P$ unpacks as $[H,B_a] = B_a + P_a$,
whereas $[\boldsymbol{B},\P] = H + \L$ stands for $[B_a, P_b]= \delta_{ab} H +
L_{ab}$, et cetera. There is no standard notation for all the kinematical
Lie algebras, so we have made some choices.
\begin{table}[h!]
\centering
\caption{Kinematical Lie algebras in generic dimension}
\label{tab:KLAs}
\rowcolors{2}{red!10}{yellow!10}
\begin{tabular}{>{$}l<{$}|*{5}{>{$}l<{$}}|l} \toprule
\multicolumn{1}{c|}{Name} & \multicolumn{5}{c|}{Nonzero Lie brackets in addition to \eqref{eq:gen-kla-1}--\eqref{eq:gen-kla-3}} & \multicolumn{1}{c}{Comments} \\\midrule
\mathfrak{s} & & & & & & \\
\mathfrak{g} & [H,\boldsymbol{B}] = - \P & & & & & \\
\mathfrak{n}^0 & [H,\boldsymbol{B}] = \boldsymbol{B} + \P & [H, \P] = \P & & & & \\
\mathfrak{n}^+_\gamma & [H,\boldsymbol{B}] = \gamma \boldsymbol{B} & [H,\P] = \P & & & & $\gamma \in [-1,1]$ \\
\mathfrak{n}^-_\chi & [H,\boldsymbol{B}] = \chi \boldsymbol{B} + \P & [H,\P] = \chi \P - \boldsymbol{B} & & & & $\chi \geq 0$ \\
\mathfrak{c} & & & & [\boldsymbol{B},\P] = H & & \\
\choice{\mathfrak{iso}(d,1)}{\mathfrak{iso}(d+1)} & [H,\boldsymbol{B}] = -\varepsilon \P & & [\boldsymbol{B},\boldsymbol{B}]= \varepsilon \L & [\boldsymbol{B},\P] = H & & $\varepsilon = \pm 1$ \\
\mathfrak{so}(d+1,1) & [H,\boldsymbol{B}] = \boldsymbol{B} & [H,\P] = -\P & & [\boldsymbol{B},\P] = H + \L & & \\
\choice{\mathfrak{so}(d,2)}{\mathfrak{so}(d+2)} & [H,\boldsymbol{B}] = -\varepsilon \P & [H,\P] = \varepsilon \boldsymbol{B} & [\boldsymbol{B},\boldsymbol{B}]= \varepsilon \L & [\boldsymbol{B},\P] = H & [\P,\P] = \varepsilon \L & $\varepsilon = \pm 1$ \\ \bottomrule
\end{tabular}
\end{table}
We now describe each of the algebras in turn:
\begin{itemize}
\item The Lie algebra $\mathfrak{s}$ is the \textbf{static} kinematical Lie
algebra: all additional brackets are zero. Therefore every
kinematical Lie algebra is a deformation of $\mathfrak{s}$.
\item The Galilei algebra is denoted $\mathfrak{g}$ and we have denoted by
$\mathfrak{n}^0$ a closely related algebra. In $\mathfrak{g}$ and $\mathfrak{n}^0$, the adjoint
action of $H$ is not diagonalisable over the complex numbers, but
has a nontrivial Jordan block:
\begin{equation}
\operatorname{ad}_H^{\mathfrak{g}}
\begin{pmatrix} \boldsymbol{B} \\ \P \end{pmatrix} = \begin{pmatrix}
0 & -1 \\ 0 & 0
\end{pmatrix} \begin{pmatrix} \boldsymbol{B} \\ \P
\end{pmatrix}\qquad\text{and}\qquad
\operatorname{ad}_H^{\mathfrak{n}^0}
\begin{pmatrix} \boldsymbol{B} \\ \P \end{pmatrix} = \begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix} \begin{pmatrix} \boldsymbol{B} \\ \P
\end{pmatrix}.
\end{equation}
\item There are two one-parameter families of algebras: $\mathfrak{n}^+_\gamma$,
with $\gamma \in [-1,1]$, which for $\gamma = -1$ is one of the two
\textbf{Newton--Hooke} algebras; and $\mathfrak{n}^-_\chi$, with
$\chi \geq 0$, which for $\chi = 0$ is the other Newton--Hooke
algebra. These two families correspond to the cases where the
adjoint action of $H$ is diagonalisable over the complex numbers: in
$\mathfrak{n}^+_\gamma$, the eigenvalues are real, whereas in $\mathfrak{n}^-_\chi$ they
are complex: \begin{equation}
\operatorname{ad}_H^{\mathfrak{n}^+}
\begin{pmatrix} \boldsymbol{B} \\ \P \end{pmatrix} = \begin{pmatrix}
\gamma & 0 \\ 0 & 1
\end{pmatrix} \begin{pmatrix} \boldsymbol{B} \\ \P
\end{pmatrix}\qquad\text{and}\qquad
\operatorname{ad}_H^{\mathfrak{n}^-}
\begin{pmatrix} \boldsymbol{B} \\ \P \end{pmatrix} = \begin{pmatrix}
\chi & 1 \\ -1 & \chi
\end{pmatrix} \begin{pmatrix} \boldsymbol{B} \\ \P
\end{pmatrix}.
\end{equation}
\item The Carroll algebra is denoted $\mathfrak{c}$.
\item The Poincaré algebra is $\mathfrak{iso}(d,1)$ and the euclidean algebra is
$\mathfrak{iso}(d+1)$.
\item The remaining algebras are semisimple (for $d\geq2$) and consist
of $\mathfrak{so}(d+2)$, $\mathfrak{so}(d+1,1)$ and $\mathfrak{so}(d,2)$. Finite-dimensional
semisimple Lie algebras are rigid, so they cannot be deformed
further. However they can be contracted (see the example after this
list). Not all the kinematical
Lie algebras in the table can be obtained as contractions of the
simple ones: those which can are the Poincaré, euclidean, (both)
Newton--Hooke, Galilei, Carroll and static algebras. These are
precisely the algebras which admit parity and time-reversal
automorphisms; that is, the ones originally classified in
\cite{Bacry:1968zf}.
\end{itemize}
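To illustrate the last point, consider for example the contraction of
the Poincaré algebra $\mathfrak{iso}(d,1)$ to the Galilei algebra $\mathfrak{g}$. Rescale the
generators by $\boldsymbol{B}(c) = \tfrac1c \boldsymbol{B}$ and $\P(c) = \tfrac1c \P$, keeping
$\L$ and $H$ fixed. The nonzero brackets in addition to
\eqref{eq:gen-kla-1}--\eqref{eq:gen-kla-3} become
\begin{equation}
[H,\boldsymbol{B}(c)] = -\P(c), \qquad [\boldsymbol{B}(c),\boldsymbol{B}(c)] = \tfrac1{c^2}\L
\qquad\text{and}\qquad [\boldsymbol{B}(c),\P(c)] = \tfrac1{c^2} H,
\end{equation}
so that in the limit $c \to \infty$, with $c$ playing the rôle of the
speed of light, only $[H,\boldsymbol{B}] = -\P$ survives and we recover the Galilei
algebra.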
\subsection{Aristotelian Lie algebras}
\label{sec:arist-lie-algebr}
A closely related family of Lie algebras are the \textbf{aristotelian
algebras}, defined just like in Definition~\ref{def:KLA}, but
dropping the boosts.
\begin{definition}\label{def:ALA}
An \textbf{aristotelian Lie algebra} is a real Lie
algebra $\a$ with generators $L_{ab}=-L_{ba}, P_a, H$, with
$a,b=1,\dots, d$, satisfying the following conditions:
\begin{itemize}
\item the generators $L_{ab}$ span an $\mathfrak{so}(d)$-subalgebra $\r$ of $\a$:
\begin{equation}\label{eq:aristo-gen-1}
[L_{ab},L_{cd}] = \delta_{bc} L_{ad} - \delta_{ac} L_{bd} - \delta_{bd} L_{ac} + \delta_{ad} L_{bc},
\end{equation}
\item the generators $P_a$ transform as vectors under $\r$:
\begin{equation}\label{eq:aristo-gen-2}
[L_{ab}, P_c] = \delta_{bc} P_a - \delta_{ac} P_b
\end{equation}
\item and the generator $H$ transforms as a scalar:
\begin{equation}\label{eq:aristo-gen-3}
[L_{ab},H] = 0.
\end{equation}
\end{itemize}
\end{definition}
Aristotelian Lie algebras are easy to classify in any dimension and
the result is contained in
\cite[Appendix~B]{Figueroa-OFarrill:2018ilb} and summarised in
Table~\ref{tab:ALAs}, which lists the nonzero Lie brackets in addition
to those fixed by the definition. We omit aristotelian Lie algebras
which do not exist in general dimension.
\begin{table}[h!]
\centering
\caption{Aristotelian Lie algebras}
\label{tab:ALAs}
\rowcolors{2}{red!10}{yellow!10}
\begin{tabular}{>{$}l<{$}|*{2}{>{$}l<{$}}|l}\toprule
\multicolumn{1}{l|}{Name~~~~~~~} & \multicolumn{2}{c|}{Nonzero Lie brackets}& \multicolumn{1}{c}{Comments}\\\midrule
\mathfrak{iso}(d) \oplus \mathbb{R} & & & \\
\mathfrak{sim}(d) & [H,P_a] = P_a & &\\
\choice{\mathfrak{so}(d,1)\oplus \mathbb{R}}{\mathfrak{so}(d+1)\oplus \mathbb{R}} & & [P_a,P_b] = \varepsilon L_{ab} & $\varepsilon = \pm 1$\\
\bottomrule
\end{tabular}
\end{table}
Let us describe each of the aristotelian Lie algebras in turn:
\begin{itemize}
\item The aristotelian Lie algebra with no additional nonzero Lie
brackets, which we could term the ``static'' aristotelian Lie
algebra, is isomorphic to $\mathfrak{iso}(d) \oplus \mathbb{R}$, with $\mathfrak{iso}(d)$
spanned by $L_{ab}, P_a$ and the one-dimensional Lie subalgebra
spanned by $H$, which is central.
\item If instead of being central, we think of $H$ as dilatations, we
obtain a Lie algebra isomorphic to the similitude algebra of
$d$-dimensional euclidean space: $\mathfrak{sim}(d)$. This is also denoted
$\mathfrak{co}(d) \ltimes \mathbb{R}^d$, where $\mathfrak{co}(d) = \mathfrak{so}(d) \oplus \mathbb{R}$ is the
extension of the rotation algebra by dilatations and $\mathbb{R}^d$
transforms as a vector under rotations but with nonzero conformal
weight.
\item If $H$ remains central, but now the translations do not commute,
we obtain trivial central extensions of $\mathfrak{so}(d,1)$ or $\mathfrak{so}(d+1)$.
\end{itemize}
\subsection{Central extensions}
\label{sec:central-extensions}
Central extensions of Lie algebras arise naturally in Physics. In
quantum Physics they arise due to the fact that the state space of a
quantum system is a projective space (the space of rays of a Hilbert
space) so that the action of a group $\mathscr{G}$ on the projective space
may only lift to a projective representation on the Hilbert space, and
hence an honest representation of a one-dimensional central extension
of $\mathscr{G}$. In classical Physics they arise due to the fact that
homogeneous symplectic manifolds of a Lie group $\mathscr{G}$ are (up to
covering) coadjoint orbits of $\mathscr{G}$ or perhaps a one-dimensional
central extension of $\mathscr{G}$, as we will discuss in
Section~\ref{sec:coadjoint-orbits}.
Mathematically, a central extension of a Lie algebra $\k$ is a special
case of a Lie algebra extension. A Lie algebra $\widetilde\k$ is said
to be an \textbf{extension} of a Lie algebra $\k$ by a Lie algebra
$\a$ if they fit in an exact sequence of Lie algebras
\begin{equation}
\label{eq:extension}
\begin{tikzcd}
0 \arrow[r] & \a \arrow[r] & \widetilde\k \arrow[r] & \k \arrow[r]
& 0.
\end{tikzcd}
\end{equation}
This is equivalent to the following conditions: $\widetilde\k = \k
\oplus \a$ as a vector space, $\a$ is an ideal of
$\widetilde\k$ (i.e., $[\widetilde\k,\a] \subset \a$) and the quotient
Lie algebra $\widetilde\k/\a$ is isomorphic to $\k$. If $\a$ is
central, so that $[\a,\widetilde\k]=0$, then we have a \textbf{central
extension}. Notice that $\k$ is not necessarily a Lie subalgebra of
$\widetilde\k$. If it is, the sequence is said to be split
and then $\widetilde\k$ is a semidirect product of $\k$ with
$\a$. A special case of semidirect products are the trivial
extensions, when $\widetilde\k = \k \oplus \a$ as a Lie algebra; that
is, $\k$ and $\a$ are subalgebras (actually ideals) and $[\k,\a] =
0$.
Whereas every Lie algebra admits trivial extensions, the only
kinematical Lie algebras in Table~\ref{tab:KLAs} admitting nontrivial
central extensions (in dimension $d>2$) are the static ($\mathfrak{s}$),
Newton--Hooke ($\mathfrak{n}^\pm$) and Galilei ($\mathfrak{g}$) algebras. To describe
them, we introduce a new generator $Z$ with $[Z,-] = 0$ and modify the
Lie brackets of the kinematical Lie algebra by
$[B_a,P_b] = \delta_{ab} Z$. The central extension of the Galilei
algebra is called the \textbf{Bargmann algebra}.
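As a quick check, the only Jacobi identity not trivially satisfied in
the Bargmann algebra is the one involving $H$, $B_a$ and $P_b$:
\begin{equation}
[H,[B_a,P_b]] + [P_b,[H,B_a]] + [B_a,[P_b,H]] = \delta_{ab}[H,Z] -
[P_b,P_a] + 0 = 0,
\end{equation}
using that $Z$ is central and that $[H,B_a] = -P_a$ and $[H,P_a] = 0$
in $\mathfrak{g}$.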
A one-dimensional extension (not necessarily central) of a
kinematical Lie algebra is called a \textbf{generalised Bargmann
algebra}. Apart from the central extensions listed above and the
trivial extensions, there is a small list (some with parameters).
Those for which $[B_a,P_b] = \delta_{ab} Z$ are deformations of the
central extension of the static kinematical Lie algebra and have been
classified in \cite{Figueroa-OFarrill:2017ycu} (for $d=3$) and in
\cite{Figueroa-OFarrill:2017tcy} (for $d>3$). Those for which
$[B_a,P_b]=0$ are listed here for the first time.
Table~\ref{tab:gen-bargmann} lists the (nontrivial) generalised
Bargmann algebras in dimension $d>2$. In the table $\k$ stands for
the kinematical Lie algebra being extended and the brackets listed are
the ones which involve the additional generator $Z$, so they are
either new or modifications of the brackets in $\k$. The (nonzero)
parameter $\alpha$ in the last three rows is effective: different
values of $\alpha$ give non-isomorphic Lie algebras.
\begin{table}[h!]
\setlength{\tabcolsep}{3pt}
\centering
\caption{Generalised Bargmann algebras in $d>2$}
\label{tab:gen-bargmann}
\setlength{\extrarowheight}{2pt}
\rowcolors{2}{red!10}{yellow!10}
\begin{tabular}{>{$}l<{$}|*{2}{>{$}l<{$}}|l}
\k & \multicolumn{2}{c|}{Brackets involving $Z$} & \multicolumn{1}{c}{Comments} \\
\toprule
\mathfrak{s} & [\boldsymbol{B},\P] = Z & & \\
\mathfrak{n}^+ & [\boldsymbol{B},\P] = Z & & \\
\mathfrak{n}^- & [\boldsymbol{B},\P] = Z & & \\
\mathfrak{g} & [\boldsymbol{B},\P] = Z & & \\
\midrule
\mathfrak{n}^+_\gamma & [\boldsymbol{B},\P] = Z & [H,Z] = (\gamma+1) Z & $\gamma\in(-1,1]$\\
\mathfrak{n}^0 & [\boldsymbol{B},\P] = Z & [H,Z] = 2 Z & \\
\mathfrak{n}^-_\chi & [\boldsymbol{B},\P] = Z & [H,Z] = 2 \chi Z & $\chi > 0$\\
\midrule
\mathfrak{s} & & [H,Z] = Z & \\
\mathfrak{g} & & [H,Z] = Z & \\
\mathfrak{n}^+_\gamma & & [H,Z] = \alpha Z & $\gamma \in [-1,1]$ and $\alpha \neq 0$\\
\mathfrak{n}^0 & & [H,Z] = \alpha Z & $\alpha \neq 0$\\
\mathfrak{n}^-_\chi & & [H,Z] = \alpha Z & $\chi \geq 0$ and $\alpha \neq 0$\\
\bottomrule
\end{tabular}
\end{table}
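The brackets $[H,Z]$ in the middle section of the table are forced by
the Jacobi identity. For example, in the extension of $\mathfrak{n}^+_\gamma$,
\begin{equation}
\delta_{ab}[H,Z] = [H,[B_a,P_b]] = [[H,B_a],P_b] + [B_a,[H,P_b]] =
\gamma [B_a,P_b] + [B_a,P_b] = (\gamma+1)\,\delta_{ab} Z,
\end{equation}
in agreement with the entry $[H,Z] = (\gamma+1)Z$.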
Aristotelian Lie algebras (if $d>2$) admit no nontrivial central
extensions: there are no $\r$-invariant cochains, let alone cocycles.
They do, however, admit nontrivial non-central extensions, listed in
Table~\ref{tab:ext-aristo} together with the aristotelian Lie
algebra being extended and the brackets involving the additional
generator $Z$. Again the (nonzero) parameter $\alpha$ is effective.
\begin{table}[h!]
\setlength{\tabcolsep}{3pt}
\centering
\caption{One-dimensional extensions of aristotelian Lie algebras in $d>2$}
\label{tab:ext-aristo}
\setlength{\extrarowheight}{2pt}
\rowcolors{2}{red!10}{yellow!10}
\begin{tabular}{>{$}l<{$}|>{$}l<{$}|l}
\multicolumn{1}{c|}{$\a$} & \multicolumn{1}{c|}{Brackets involving $Z$} & \multicolumn{1}{c}{Comments} \\
\toprule
\mathfrak{iso}(d) \oplus \mathbb{R} & [H,Z] = Z & \\
\mathfrak{sim}(d) & [H,Z] = \alpha Z & $\alpha \neq 0$ \\
\choice{\mathfrak{so}(d,1)\oplus \mathbb{R}}{\mathfrak{so}(d+1)\oplus \mathbb{R}} & [H,Z] = Z & \\
\bottomrule
\end{tabular}
\end{table}
Changing notation: $(H,Z) \mapsto (D,H)$, the Lie algebras in
Table~\ref{tab:ext-aristo} are examples of Lifshitz Lie algebras (see,
e.g., \cite{Figueroa-OFarrill:2022kcd}). The extension of $\mathfrak{sim}(d)$
is the original Lifshitz algebra, where the parameter $\alpha$ is
typically denoted $z$:
\begin{equation}
[D,\P] = \P \qquad\text{and} \qquad[D,H] = z H,
\end{equation}
in addition to the brackets
\eqref{eq:aristo-gen-1}--\eqref{eq:aristo-gen-3} common to all
aristotelian Lie algebras.
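The parameter $z$ is the dynamical exponent of an anisotropic scaling
of time relative to space. For instance, one natural realisation on
$\mathbb{R}^{d+1}$ with coordinates $(t,x^a)$ is via the vector fields
\begin{equation}
\xi_H = \d_t, \qquad \xi_{P_a} = \d_a \qquad\text{and}\qquad
\xi_D = z\, t\, \d_t + x^a \d_a,
\end{equation}
which obey the opposite brackets $[\xi_D,\xi_H] = -z\,\xi_H$ and
$[\xi_D,\xi_{P_a}] = -\xi_{P_a}$, and which generate the dilatations
$t \mapsto \lambda^z t$ and $x^a \mapsto \lambda x^a$.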
\section{Geometry}
\label{sec:geometries}
In this section we discuss non-lorentzian geometries. In the spirit
of Klein's Erlangen Programme, we start by discussing the homogeneous
spacetimes associated to the kinematical Lie algebras discussed in
Section~\ref{sec:symmetries}. We will see that these spaces fall
into different families depending on the structure which the
kinematical group preserves: metric (riemannian or lorentzian),
Newton--Cartan, carrollian or aristotelian. Each of these geometries
is a Cartan geometry modelled on a homogeneous spacetime and we will
discuss them in turn. In a sense, all we are doing is extending to
the non-lorentzian context the standard sequence of ideas:
\begin{equation}
\text{Poincaré symmetry} \longrightarrow \text{Minkowski spacetime}
\longrightarrow \text{lorentzian geometry}.
\end{equation}
Of course, Minkowski spacetime is not the only homogeneous space of
the Poincaré group, so the passage from the Poincaré group to
Minkowski spacetime requires a choice, whereas the passage from
Minkowski spacetime to lorentzian geometry is more or less forced.
\subsection{Homogeneous spaces}
\label{sec:homogeneous-spaces}
In this section we review the basic notions of homogeneous geometry.
\subsubsection{Group actions on manifolds}
\label{sec:group-acti-manif}
Let $\mathscr{G}$ be a Lie group. A (linear) representation of $\mathscr{G}$ on a
vector space $V$ is a Lie group homomorphism $\rho: \mathscr{G} \to \operatorname{GL}(V)$;
that is, $\rho$ is a smooth map and a group homomorphism
$\rho(ab) = \rho(a)\rho(b)$ for all $a,b \in \mathscr{G}$. We are also
interested in nonlinear realisations\footnote{In Physics it is
customary to reserve the name ``nonlinear realisation'' only to
transitive actions (see later), when $M$ is diffeomorphic to a coset
space $\mathscr{G}/\mathscr{H}$. A manifold admitting a transitive action of a
Lie group is the nonlinear analogue of an \emph{irreducible}
representation. In the same way that it is useful to consider
representations which are not necessarily irreducible, we shall
consider nonlinear realisations where the action is not necessarily
transitive.} of $\mathscr{G}$ on a manifold $M$. It would be tempting by
analogy with the case of a linear representation to define a nonlinear
realisation as a Lie group homomorphism $\rho : \mathscr{G} \to \operatorname{Diff}(M)$,
except for the fact that the diffeomorphism group $\operatorname{Diff}(M)$ of a
manifold is not typically a Lie group. Instead we define nonlinear
realisations as \emph{actions}. One has to distinguish between left
and right actions; although it is easy to go between them. By a
\textbf{(left) action} of $\mathscr{G}$ on a manifold $M$ we mean a smooth
map $\alpha : \mathscr{G} \times M \to M$, written simply as $\alpha(g,p) = g
\cdot p$, satisfying two properties:
\begin{itemize}
\item for all $g_1,g_2 \in \mathscr{G}$ and $p \in M$, $(g_1 g_2)\cdot p = g_1
\cdot (g_2 \cdot p)$; and
\item for all $p \in M$, $e \cdot p = p$ where $e \in \mathscr{G}$ is the
identity element.
\end{itemize}
If we fix $g \in \mathscr{G}$, $\alpha(g,-): M \to M$ is a diffeomorphism which
we typically denote $\alpha_g$. On the other hand, if we fix $p \in
M$, we get a map $\alpha(-,p): \mathscr{G} \to M$ known as the \textbf{orbit
map}, as its image is the orbit of $p$ under $\mathscr{G}$.
Let $\mathfrak{g}$ denote the Lie algebra of $\mathscr{G}$. An action of $\mathscr{G}$ on $M$ gives
rise to a Lie algebra antihomomorphism $\xi: \mathfrak{g} \to \mathscr{X}(M)$, assigning to
every $X \in \mathfrak{g}$ a vector field $\xi_X$ and such that for all $X,Y \in
\mathfrak{g}$, $[\xi_X,\xi_Y] = - \xi_{[X,Y]}$, where the bracket on the LHS is
the Lie bracket of vector fields and that on the RHS is the bracket on
$\mathfrak{g}$. The vector fields in the image of $\xi$ are called the
\textbf{fundamental vector fields} of the group action.
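Explicitly, the fundamental vector field associated to $X \in \mathfrak{g}$ is
defined using the orbit maps:
\begin{equation}
\xi_X\big|_p = \frac{d}{dt}\bigg|_{t=0} \exp(tX) \cdot p,
\end{equation}
and it is the fact that the group element acts from the left which is
ultimately responsible for the minus sign in $[\xi_X,\xi_Y] = -\xi_{[X,Y]}$.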
Although one can redefine the fundamental vector fields in such a way
that the new map $\mathfrak{g} \to \mathscr{X}(M)$ is a Lie algebra homomorphism, it
turns out not to be natural, as we will see shortly.
A group action $\alpha : \mathscr{G} \times M \to M$ is said to be
\textbf{effective} if the only element $g \in \mathscr{G}$ which acts trivially
(i.e., which obeys $g \cdot p = p$ for all $p \in M$) is the identity
element. A weaker condition is for the action to be \textbf{locally
effective}, which says that the elements of $\mathscr{G}$ which act trivially
form a discrete subgroup of $\mathscr{G}$. This is equivalent to the map $\xi:
\mathfrak{g} \to \mathscr{X}(M)$ being injective, so that no nonzero element in $\mathfrak{g}$ is
sent to the zero vector field.
A group action $\alpha : \mathscr{G} \times M \to M$ is said to be
\textbf{transitive} if given any two points $p,q \in M$, there is some
$g \in \mathscr{G}$ with $q = g \cdot p$. Equivalently, if the $\mathscr{G}$-orbit of any
point is the whole manifold. This is the analogue for nonlinear
realisations of irreducibility for linear representations. A linear
representation of $\mathscr{G}$ on $V$ is irreducible if there are no proper
subspaces of $V$ which are stable under $\mathscr{G}$. Similarly, an action of
$\mathscr{G}$ on $M$ is transitive if there are no proper submanifolds of $M$
stable under the action of $\mathscr{G}$.
A manifold $M$ is said to be a \textbf{homogeneous space} of a Lie
group $\mathscr{G}$ if $\mathscr{G}$ acts transitively on $M$. The
\textbf{stabiliser subgroup} of a point $p \in M$ is the subgroup
$\mathscr{H} \subset \mathscr{G}$ which fixes $p$:
$\mathscr{H} = \left\{g \in \mathscr{G} \middle | g \cdot p = p \right\}$. It is a
closed subgroup of $\mathscr{G}$. Its Lie algebra $\mathfrak{h}$ consists of those
$X \in \mathfrak{g}$ whose fundamental vector field $\xi_X$ vanishes at $p$. If $M$ is a
homogeneous space of $\mathscr{G}$, then the stabiliser subgroups of all of
its points are conjugate in $\mathscr{G}$. Indeed, let $\mathscr{H}_p$ denote the
stabiliser subgroup of $p \in M$ and $\mathscr{H}_q$ that of $q \in M$.
Since $\mathscr{G}$ acts transitively, there is some $g \in \mathscr{G}$ such that
$q = g \cdot p$ and hence $h \in \mathscr{H}_q$ if and only if
$h = g h' g^{-1}$ for some $h' \in \mathscr{H}_p$. Often one picks an
``origin'' $o \in M$ and lets $\mathscr{H}$ denote the stabiliser subgroup of
$o$. Then $M$ is diffeomorphic to the space of left cosets
$\mathscr{G}/\mathscr{H}$. This is why homogeneous spaces are often referred to as
\emph{coset spaces} or \emph{coset manifolds}. Of course the choice
of origin is immaterial, since from the point of view of $\mathscr{G}$ all points in
a homogeneous space ``look the same''.
It is not just that $M$ and $\mathscr{G}/\mathscr{H}$ are diffeomorphic, but that they are
$\mathscr{G}$-equivariantly so: the diffeomorphism $M \to \mathscr{G}/\mathscr{H}$ intertwines
between the left action of $\mathscr{G}$ on $M$ and the left action of $\mathscr{G}$ on
$\mathscr{G}/\mathscr{H}$ which is induced from left multiplication in $\mathscr{G}$:
if $g'\mathscr{H} \in \mathscr{G}/\mathscr{H}$ and $g \in \mathscr{G}$, we have that $g \cdot g'\mathscr{H} =
(gg')\mathscr{H}$.
Now recall that the vector fields which generate left multiplication
on $\mathscr{G}$ are the right-invariant vector fields and they satisfy the
opposite Lie algebra. This explains why it is natural for the map
$\xi : \mathfrak{g} \to \mathscr{X}(M)$ to be an antihomomorphism.
\subsubsection{Linear isotropy representation and invariant tensors}
\label{sec:line-isotr-repr}
Let $M$ be a homogeneous space of $\mathscr{G}$ and $\mathscr{H} \subset \mathscr{G}$ the stabiliser
of the origin $o \in M$. Since every $h \in \mathscr{H}$ preserves $o$, the
derivative at $o$ of the diffeomorphism $\alpha_h : M \to M$ defines a
linear transformation $\lambda(h)$ of the tangent space $T_oM$. Since
$\alpha$ is an action, in particular, $\alpha_{h_1} \circ \alpha_{h_2}
= \alpha_{h_1h_2}$ for all $h_1,h_2 \in \mathscr{H}$ and, by the chain rule,
$\lambda : \mathscr{H} \to \operatorname{GL}(T_oM)$ is a representation, known as the
\textbf{linear isotropy representation}.
The linear isotropy representation plays a very important rôle in
determining the $\mathscr{G}$-invariant tensor fields on a homogeneous space
$M$. An important result, which is a special case of the
\emph{fundamental principle of holonomy} (see, e.g.,
\cite[Para.~10.19]{Besse} in the riemannian case, but holds more
generally for any connection), states that there is a one-to-one
correspondence between $\mathscr{H}$-invariant tensors on $T_oM$ and
$\mathscr{G}$-invariant tensor fields on $M$. Briefly, it goes as follows.
If $\Phi$ is a $\mathscr{G}$-invariant tensor field on $M$, its value at the
origin is a tensor $\Phi_o$ on $T_oM$ which is invariant under the
linear isotropy representation of $\mathscr{H}$. Conversely, given an
$\mathscr{H}$-invariant tensor $\Phi_o$ on $T_oM$ we may extend it to a
tensor field on $M$ via the $\mathscr{G}$ action. Its value $\Phi_p$ at
$p \in M$ is defined by picking $g \in \mathscr{G}$ with $g \cdot o = p$ and
acting on $\Phi_o$ with $g$: $\Phi_p = g \cdot \Phi_o$. The problem
is that there is typically not a unique $g \in \mathscr{G}$ connecting $o$ to
$p$, so which one do we choose? It turns out that the choice is
immaterial: if $g' \in \mathscr{G}$ is any other such element, then
$g' = g h$ for some $h \in \mathscr{H}$ and precisely because $\Phi_o$ is
$\mathscr{H}$-invariant, $g'\cdot \Phi_o = g \cdot \Phi_o$ and it does not
matter whether we use $g$ or $g'$ to calculate $\Phi_p$.
If in addition, $\mathscr{H}$ is a connected subgroup with Lie algebra $\mathfrak{h}
\subset \mathfrak{g}$, then $\mathscr{G}$-invariant tensor fields on $M$ are in one-to-one
correspondence with $\mathfrak{h}$-invariant tensors on $T_oM$. Determining the
$\mathfrak{h}$-invariant tensors is a reasonably simple linear algebra problem
in most cases.
\subsubsection{Klein pairs}
\label{sec:klein-pairs}
Let $M$ be a homogeneous space of $\mathscr{G}$ with typical stabiliser $\mathscr{H}$.
Let $\mathfrak{g}$ and $\mathfrak{h}$ denote the Lie algebras of $\mathscr{G}$ and $\mathscr{H}$,
respectively. Then we may associate to $M$ the \textbf{Klein pair}
$(\mathfrak{g},\mathfrak{h})$. Not every pair $(\mathfrak{g},\mathfrak{h})$ consisting of a Lie algebra $\mathfrak{g}$
and a Lie subalgebra $\mathfrak{h}$ is a Klein pair. It has to be
\emph{geometrically realisable}, which says that there exists some Lie
group $\mathscr{G}$ with Lie algebra $\mathfrak{g}$ such that the connected subgroup $\mathscr{H}$
generated by $\mathfrak{h}$ is closed. As explained, for example, in
\cite[Appendix~B]{Figueroa-OFarrill:2018ilb}, there is a one-to-one
correspondence between (effective, geometrically realisable) Klein
pairs and simply-connected homogeneous spaces of $\mathscr{G}$. (See
\cite[Appendix~B.3]{Figueroa-OFarrill:2018ilb} for a simple example of
a Klein pair which is not geometrically realisable.) Paraphrasing
slightly, (effective, geometrically realisable) Klein pairs classify
homogeneous spaces up to covering, in the same way that Lie algebras
classify Lie groups up to covering.
There is a notion of isomorphism between Klein pairs which is crucial
in classifications. We say that two Klein pairs $(\mathfrak{g}_1,\mathfrak{h}_1)$ and
$(\mathfrak{g}_2, \mathfrak{h}_2)$ are \emph{isomorphic}, if there is a Lie algebra
isomorphism $\varphi: \mathfrak{g}_1 \to \mathfrak{g}_2$ with $\varphi(\mathfrak{h}_1)= \mathfrak{h}_2$.
Isomorphic Klein pairs, if geometrically realisable, give rise to
locally isomorphic homogeneous spaces.
Let $M$ be a homogeneous $\mathscr{G}$-space with Klein pair $(\mathfrak{g},\mathfrak{h})$. We say
that the Klein pair is \textbf{reductive} if there exists a
complementary subspace $\mathfrak{m}$ with $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ which is stable
under the restriction to $\mathscr{H}$ of the adjoint action of $\mathscr{G}$ on $\mathfrak{g}$. If
$\mathscr{H}$ is connected, reductivity says that $[\mathfrak{h},\mathfrak{m}] \subset \mathfrak{m}$. In
the reductive case, the vector space isomorphism $T_oM \cong \mathfrak{m}$
intertwines between the linear isotropy representation on $T_oM$ and
the restriction to $\mathscr{H}$ of the $\mathscr{G}$-adjoint representation on
$\mathfrak{m}$. In the non-reductive case, there is a vector space isomorphism
$T_oM \cong \mathfrak{g}/\mathfrak{h}$, where the quotient vector space $\mathfrak{g}/\mathfrak{h}$ is
naturally a representation of $\mathscr{H}$. In practice we work with
$\mathfrak{g}/\mathfrak{h}$ by working with $\mathfrak{g}$ and just dropping any terms belonging to
$\mathfrak{h}$ at the end.
A reductive Klein pair $(\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m},\mathfrak{h})$ is said to be
\textbf{symmetric} if $[\mathfrak{m},\mathfrak{m}] \subset \mathfrak{h}$. Symmetric Klein pairs are
the infinitesimal description (up to coverings) of symmetric spaces.
It may be convenient to write the reductive and symmetry conditions in
a basis. Let $X_i$ denote a basis for $\mathfrak{h}$. The Klein pair $(\mathfrak{g},\mathfrak{h})$
is reductive if we can complete to a basis $X_i, Y_I$ for $\mathfrak{g}$ such
that $[X_i, Y_I] = c_{iI}{}^J Y_J$; that is, no $X_i$ appear in the
RHS. For a reductive Klein pair, $[Y_I,Y_J] = c_{IJ}{}^i X_i+
c_{IJ}{}^K Y_K$ in general, but if it is symmetric then $c_{IJ}{}^K =
0$ and hence $[Y_I,Y_J] = c_{IJ}{}^i X_i$.
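For example, as we will see in Section~\ref{sec:mink-spac}, Minkowski
spacetime corresponds to the Klein pair $(\mathfrak{iso}(d,1),\mathfrak{so}(d,1))$ with $\mathfrak{m}$
spanned by the translations $P_A$: since $[\mathfrak{h},\mathfrak{m}] \subset \mathfrak{m}$ and
$[P_A,P_B] = 0$, this Klein pair is not just reductive, but symmetric.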
\subsubsection{Exponential coordinates}
\label{sec:expon-coord}
Let us now discuss coordinates on homogeneous spaces, but first we
review exponential coordinates on a Lie group.
In the neighbourhood of any point $g$ in a Lie group $\mathscr{G}$, we have
exponential coordinates associated to every choice of basis for the
Lie algebra $\mathfrak{g}$. Recall that the exponential map $\exp: \mathfrak{g} \to \mathscr{G}$,
which for a matrix Lie group is just the matrix exponential, is a
diffeomorphism between a neighbourhood of $0 \in \mathfrak{g}$ and a
neighbourhood of the identity $e \in \mathscr{G}$. Let $X_1,\dots,X_n$ be a
basis for $\mathfrak{g}$ and consider $\exp(x^1 X_1 + \cdots + x^n X_n) \in \mathscr{G}$.
The $(x^1,\dots,x^n)$ are local coordinates for $\mathscr{G}$ centred at the
identity, which has coordinates $(0,\dots,0)$. We may now use left
(or right) multiplication to give coordinates in a neighbourhood of
any other $g \in \mathscr{G}$; for example, $g \exp(y^1 X_1 + \cdots + y^n X_n)$
give local coordinates for $\mathscr{G}$ near $g$. On overlaps, the change of
coordinates between these exponential coordinates is real analytic,
which shows that Lie groups are not just smooth but actually real
analytic manifolds.
Now let us consider a homogeneous space $M \cong \mathscr{G}/\mathscr{H}$ with Klein pair
$(\mathfrak{g},\mathfrak{h})$. Recall that the identification of $M$ with $\mathscr{G}/\mathscr{H}$ implies a
choice of origin $o \in M$ (corresponding to the identity coset) with
stabiliser $\mathscr{H}$. Let us choose a vector space complement to $\mathfrak{h}$ in
$\mathfrak{g}$ and write $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$. In the reductive case, we can
(and will) choose $\mathfrak{m}$ so that $[\mathfrak{h},\mathfrak{m}] \subset \mathfrak{m}$, but in general
this may not be possible. Choosing a basis $Y_1,\dots,Y_m$ for $\mathfrak{m}$
we obtain local coordinates near the origin on $M$ by
$\exp(x^1 Y_1 + \cdots + x^m Y_m) \cdot o$ or, having identified $M$
with $\mathscr{G}/\mathscr{H}$, by $\exp(x^1 Y_1 + \cdots + x^m Y_m)\mathscr{H}$. In general we can
only hope for local coordinates. Indeed, a coset representative is a
choice of section of the principal $\mathscr{H}$-bundle $\mathscr{G} \to \mathscr{G}/\mathscr{H}$ and
principal bundles admit sections if and only if they are trivial.
In effect, what we are doing is choosing a \textbf{coset representative}
(here $\exp(x^1 Y_1 + \cdots + x^m Y_m) \in \mathscr{G}$) for each coset in
$\mathscr{G}/\mathscr{H}$ near the identity coset. This is a locally defined smooth map
$M \to \mathscr{G}$, which is only defined in a neighbourhood of the origin, and
we may use it to pull back differential forms on $\mathscr{G}$ to $M$. Every
Lie group $\mathscr{G}$ has a distinguished $\mathfrak{g}$-valued one-form $\vartheta \in
\Omega^1(\mathscr{G},\mathfrak{g})$: the left-invariant Maurer--Cartan one-form. If we
identify $\mathfrak{g} = T_e\mathscr{G}$ with the tangent space at the identity, then
$\vartheta_g : T_g \mathscr{G} \to T_e \mathscr{G}$ is simply the differential of left
multiplication by $g^{-1}$. We may use a (local) coset representative
$L : M \to \mathscr{G}$ to pull back $\vartheta$ to a local one-form
$L^*\vartheta$ on $M$ which, for $\mathscr{G}$ a matrix group, has the simpler
expression
\begin{equation}
L^*\vartheta = L^{-1}dL.
\end{equation}
Although this is strictly speaking only valid for $\mathscr{G}$ a matrix group,
one does not go wrong by assuming we are in a matrix group for
calculations provided that in the end we express the final result in a
way that makes sense for a general Lie group. For example, it follows
from the above expression that $L^*\vartheta$ is left-invariant, since
if we multiply $L(x)$ by a constant group element $g$ on the left, it
remains invariant:
\begin{equation}
(gL)^{-1} d(gL) = L^{-1} g^{-1} g dL = L^{-1} dL.
\end{equation}
Similarly, differentiating again and using that $dL^{-1} = -L^{-1}\,dL\,L^{-1}$
(which follows by differentiating $L^{-1}L = e$), we find that
\begin{equation}
d(L^{-1}dL) = dL^{-1} \wedge dL = -L^{-1}dL \wedge L^{-1} dL =
-\tfrac12 [L^{-1}dL, L^{-1}dL],
\end{equation}
which is the Maurer--Cartan structure equation. Notice that in the
equation above we wrote the term $L^{-1}dL \wedge L^{-1}dL$ which
involves matrix multiplication as a commutator $\tfrac12 [L^{-1}dL,
L^{-1}dL]$, which makes sense (as the Lie bracket in the Lie algebra
$\mathfrak{g}$) for $\mathscr{G}$ any group, not necessarily a matrix
group.\footnote{Whereas Ado's theorem says that any finite-dimensional
real Lie algebra is a matrix Lie algebra, the similar result for Lie
groups is false. The simplest counterexample to the putative Lie
group version of Ado's theorem is the universal cover of
$\operatorname{SL}(2,\mathbb{R})$. See, for example, Graeme Segal's lectures in
\cite{MR1356712}.}
Suppose that $\mathscr{G}/\mathscr{H}$ is reductive, so that $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$
with $[\mathfrak{h},\mathfrak{m}]\subset \mathfrak{m}$. Then we can split the pull-back of the
Maurer--Cartan form into its components along $\mathfrak{h}$ and $\mathfrak{m}$:
\begin{equation}
L^{-1} dL = (L^{-1}dL)_\mathfrak{h} + (L^{-1}dL)_\mathfrak{m} = \omega + \theta,
\end{equation}
where $\omega$, the component along $\mathfrak{h}$, is a connection one-form and
$\theta$, the component along $\mathfrak{m}$, is a soldering form. Indeed,
under right multiplication by a local $\mathscr{H}$ transformation $L \mapsto
L h^{-1}$,
\begin{equation}
L^{-1}dL \mapsto (Lh^{-1})^{-1} d(Lh^{-1}) = h (L^{-1}dL) h^{-1} + h dh^{-1} =
h\omega h^{-1} - dh h^{-1} + h \theta h^{-1},
\end{equation}
so that comparing the $\mathfrak{h}$ and $\mathfrak{m}$ components, we arrive at
\begin{equation}
\omega \mapsto h\omega h^{-1} - dh h^{-1} \qquad\text{and}\qquad
\theta \mapsto h \theta h^{-1}.
\end{equation}
If $\mathscr{G}/\mathscr{H}$ is not reductive, there is no natural split of the
Maurer--Cartan one-form. We can still project to $\mathfrak{g}/\mathfrak{h}$ to obtain a
soldering form, but there is no uniquely defined component along $\mathfrak{h}$
and, moreover, no such component can be chosen in such a way that
results in a connection.
Let us consider in this light the examples in
Section~\ref{sec:motivation}: the Galilei and Minkowski spacetimes.
Both spacetimes are homogeneous spaces of the translation subgroup: in
fact, they are principally homogeneous spaces since every point has
trivial stabiliser. However we wish to view them as homogeneous
spaces of their kinematical Lie groups: the Galilei and Poincaré
groups, respectively. Let us start with Minkowski spacetime, which
should be more familiar. We work now in general dimension $d+1$.
\subsubsection{Minkowski spacetime as a homogeneous space}
\label{sec:mink-spac}
The Poincaré algebra is given in
equation~\eqref{eq:poincare-brackets}. Let $\mathfrak{h}$ be the span of
$L_{AB}$ and $\mathfrak{m}$ the span of $P_A$, where $A,B =0,\dots,d$. Then it
follows that the Klein pair $(\mathfrak{g},\mathfrak{h})$ is reductive. Here we do have a
global coset representative $L(x) = \exp(x^A P_A)$, which gives global coordinates
on Minkowski spacetime. We can pull back the Maurer--Cartan form and
we get
\begin{equation}
L^{-1} dL = dx^A P_A,
\end{equation}
using that the translations $P_A$ commute among themselves.
We see that here $L^{-1}dL$ takes values in $\mathfrak{m}$. This is very
special. In general for a reductive Klein pair $(\mathfrak{g},\mathfrak{h})$, the
pull-back of the Maurer--Cartan one-form takes values in $\mathfrak{g}$: so it
has an $\mathfrak{h}$-component and an $\mathfrak{m}$-component. The $\mathfrak{h}$-component is a
connection one-form whereas the $\mathfrak{m}$-component is a soldering form
(i.e., a coframe or an inverse vielbein). Here we see that the
connection one-form is absent. We can explain this as follows. The
linear isotropy representation of $\mathfrak{h}$ on $\mathfrak{m}$ admits an invariant
symmetric inner product $\eta \in \odot^2\mathfrak{m}^*$, where $\odot$
denotes the symmetric tensor product, with entries $\eta(P_A,P_B) =
\eta_{AB}$ of lorentzian signature. We can apply this to $L^{-1}dL$
to obtain a Poincaré-invariant metric on Minkowski spacetime:
\begin{equation}
\eta(L^{-1}dL, L^{-1}dL) = \eta_{AB}dx^A dx^B,
\end{equation}
which is nothing else but the standard Minkowski metric in flat
coordinates. Of course, relative to flat coordinates, the connection
one-form (relative to the coordinate frame) vanishes, which explains
why there is no $\mathfrak{h}$-component in $L^{-1}dL$.
The action of the Poincaré group on Minkowski spacetime relative to
these coordinates is easy to work out, since it is induced by left
multiplication in the Poincaré group. Translations just shift the
coordinates:\footnote{This is not usually so simple, particularly in
exponential coordinates the way we have defined them. In some
examples, calculations are simpler in modified exponential
coordinates where we take a product of exponentials instead of a
single exponential.}
\begin{equation}
\exp(a^A P_A) \exp(x^A P_A) = \exp((x^A+a^A) P_A),
\end{equation}
so that
$\exp(a^A P_A) \cdot (x^0,\dots,x^d) = (x^0 + a^0, \dots, x^d + a^d)$.
Lorentz transformations act linearly on the coordinates. If
$h \in \mathscr{H}$, then
\begin{equation}
h \exp(x^A P_A) = h \exp(x^A P_A) h^{-1} h = \exp(x^A \operatorname{Ad}_h P_A ) h,
\end{equation}
where we have introduced the notation $\operatorname{Ad}$ for the restriction to
$\mathscr{H}$ of the adjoint representation of $\mathscr{G}$. Since $\mathfrak{m}$ is stable
under the action of $\mathscr{H}$, $\operatorname{Ad}_h P_A \in \mathfrak{m}$, so we can write it
as $\operatorname{Ad}_h P_A = P_B h^B{}_A$ and hence
\begin{equation}
h \exp(x^A P_A) = \exp(h^B{}_A x^A P_B) h.
\end{equation}
Acting on the ``origin'' of Minkowski spacetime or on the identity
coset $e\mathscr{H} = \mathscr{H}$, we have that
\begin{equation}
h \exp(x^A P_A) \mathscr{H} = \exp(h^B{}_A x^A h P_B) \mathscr{H},
\end{equation}
using that $h\mathscr{H} = \mathscr{H}$, since $\mathscr{H}$ is a subgroup. Therefore
Lorentz transformations in Minkowski spacetime are linear relative to
the exponential coordinates. This is a general fact about reductive
homogeneous spaces $\mathscr{G}/\mathscr{H}$: in exponential coordinates, $\mathscr{H}$
acts linearly. It is important to realise that this is a
coordinate-dependent statement and, moreover, only applies to the
reductive situation. It is the linear isotropy representation (on the
tangent space at the origin) which, as the name suggests, is always
linear, regardless of reductivity.
\subsubsection{Galilei spacetime as a homogeneous space}
\label{sec:galil-spac-as}
Let us now consider Galilei spacetime, which is described by a
Klein pair $(\mathfrak{g},\mathfrak{h})$ where $\mathfrak{g}$ is the Lie algebra spanned by
$L_{ab}, B_a, P_a, H$, for $a,b = 1,\dots,d$, and whose brackets are
given by equation~\eqref{eq:gal-algebra-brackets} and $\mathfrak{h}$ is the
subalgebra spanned by $L_{ab},B_a$. We choose the reductive
complement $\mathfrak{m}$ to be the span of $P_a, H$. We choose exponential
coordinates $(t, x^a)$ via the coset representative
\begin{equation}
L(t,x) = \exp(t H + x^a P_a).
\end{equation}
Here again the pull-back of the Maurer--Cartan one-form has no
$\mathfrak{h}$-component:
\begin{equation}
L^{-1} dL = dt H + dx^a P_a.
\end{equation}
The action of the Galilei group is again easy to work out using left
multiplication: translations again shift the exponential coordinates
$\exp(s H + v^a P_a) \cdot (t,x^a) = (t + s, x^a + v^a)$, whereas
rotations and boosts act as follows. Let $R \in \mathscr{G}$ be a rotation;
that is, an element of the $\operatorname{SO}(d)$ subgroup generated by the
$L_{ab}$. Then
\begin{equation}
R \exp( t H + x^a P_a) = R \exp( t H + x^a P_a) R^{-1} R = \exp (t
H + x^a \operatorname{Ad}_R P_a) R,
\end{equation}
where we have used that $H$ is a scalar and hence commutes with the
rotations. Again $\operatorname{Ad}_R P_a = P_b R^b{}_a$ and hence acting on the
identity coset we read off the action of rotations on the exponential
coordinates: $R \cdot (t, x^a) = (t, R^b{}_a x^a)$. Now let us
consider the boosts. Let $h := \exp(v^a B_a)$. Then, as before,
\begin{equation}
h \exp(t H + x^a P_a) = \exp( \operatorname{Ad}_h (t H + x^a P_a)) h.
\end{equation}
We work out the term inside the exponential:
\begin{equation}
\operatorname{Ad}_h (t H + x^a P_a) = \operatorname{Ad}_{\exp(v^b B_b)} (t H + x^a P_a) = \exp(
v^b \operatorname{ad}_{B_b})(t H + x^a P_a),
\end{equation}
where $\operatorname{ad}_{B_b} H := [B_b, H] = P_b$ and $\operatorname{ad}_{B_b} P_a = [B_b, P_a]
= 0$. Substituting, we find
\begin{equation}
h (t H + x^a P_a) h^{-1} = (t H + (x^a + t v^a) P_a).
\end{equation}
Acting on the identity coset again we see that boosts act on the
exponential coordinates by $h \cdot (t, x^a) = (t, x^a + t v^a)$,
which are precisely the Galilei boosts we saw in
Section~\ref{sec:galilei-spacetime}.
To determine the Galilei-invariant tensors in Galilei spacetime $\mathscr{G}/\mathscr{H}$,
we need to determine the $\mathscr{H}$-invariant tensors of the linear isotropy
representation. Canonically dual to the basis $H,P_a$ for $\mathfrak{m}$ we
have a basis $\eta,\pi^a$ for $\mathfrak{m}^*$. We need to work out the linear
isotropy representation of $\mathscr{H}$ on both $\mathfrak{m}$ and $\mathfrak{m}^*$ and hence on
tensors. The linear isotropy representation is such that the $L_{ab}$
generate rotations and $B_a$ generate boosts. It is easy to
determine the tensors invariant under rotations. First of all $H \in
\mathfrak{m}$ is invariant, but also its dual $\eta$. A classical theorem of
Weyl's \cite[Theorem~2.11.A]{MR1488158} says that every $\operatorname{SO}(d)$
invariant tensor of the $d$-dimensional vector representation can be
constructed out of $\delta_{ab}$, its inverse and $\epsilon_{a_1\dots
a_d}$. Concerning the boosts, we have that
\begin{equation}
\begin{aligned}\relax
B_a \cdot H &= P_a\\
B_a \cdot P_b & =0
\end{aligned}
\qquad\text{and}\qquad
\begin{aligned}\relax
B_a \cdot \eta &= 0\\
B_a \cdot \pi^b & = -\delta^b_a \eta,
\end{aligned}
\end{equation}
where we have used that the action of $B_a$ on $\mathfrak{m}$ is induced by the
restriction of adjoint representation $B_a \cdot X = \operatorname{ad}_{B_a} X =
[B_a, X]$ for $X \in \mathfrak{m}$, whereas that on $\mathfrak{m}^*$ is induced by the
restriction of the coadjoint representation $B_a \cdot \alpha =
\operatorname{ad}^*_{B_a} \alpha = - \alpha \circ \operatorname{ad}_{B_a}$ for $\alpha \in \mathfrak{m}^*$.
We see that $\eta \in \mathfrak{m}^*$ is $\mathscr{H}$-invariant and so is $\delta^{ab}
P_a \otimes P_b$. Applying $\eta$ to the pull-back of the
Maurer--Cartan one-form we obtain a Galilei-invariant one-form on
Galilei spacetime, namely, the clock one-form
\begin{equation}
\tau := \eta(L^{-1}dL) = \eta (dt H + dx^a P_a) = dt.
\end{equation}
The vielbein dual to the soldering form $dt H + dx^a P_a$ is given by
$\frac{\d}{\d t}$ and $\frac{\d}{\d x^a}$. The Galilei-invariant
tensor field corresponding to $\delta^{ab}
P_a \otimes P_b$ is then $\delta^{ab}\frac{\d}{\d x^a} \otimes
\frac{\d}{\d x^b}$, which is the spatial cometric on Galilei
spacetime we saw in Section~\ref{sec:galilei-spacetime}.
\subsubsection{Summary}
\label{sec:summary}
We may summarise the above discussion as follows:
\begin{itemize}
\item Homogeneous spaces of a group $\mathscr{G}$ are described infinitesimally
by a Klein pair $(\mathfrak{g},\mathfrak{h})$ where $\mathfrak{g}$ is the Lie algebra of $\mathscr{G}$ and
$\mathfrak{h}$ a Lie subalgebra generating a closed subgroup $\mathscr{H}$ of $\mathscr{G}$. The
homogeneous space can be identified with the coset space $\mathscr{G}/\mathscr{H}$
consisting of left $\mathscr{H}$-cosets $g\mathscr{H}$ in $\mathscr{G}$. The action of $\mathscr{G}$ on
$\mathscr{G}/\mathscr{H}$ is induced from left multiplication on $\mathscr{G}$.
\item As a vector space, $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$, where if possible we
choose $\mathfrak{m}$ in such a way that $[\mathfrak{h},\mathfrak{m}] \subset \mathfrak{m}$. If this is
possible, we say that $(\mathfrak{g},\mathfrak{h})$ is reductive.
\item Every choice of basis $X_1,\dots,X_m$ for $\mathfrak{m}$ gives rise to
exponential coordinates near the identity coset of $\mathscr{G}/\mathscr{H}$ corresponding to a
(locally defined) coset representative $L : \mathscr{G}/\mathscr{H} \to \mathscr{G}$, where $L(x)
= \exp(x^1 X_1 + \cdots + x^m X_m)$. The action of $\mathscr{G}$ on
the exponential coordinates can be calculated in principle simply by
left multiplying in $\mathscr{G}$: $g L(x) = L(g\cdot x) h(g,x)$ for some
$h(g,x) \in \mathscr{H}$. In the reductive case, the action of $h \in
\mathscr{H}$ on the exponential coordinates is linear.
\item We can use the coset representative to pull back the
Maurer--Cartan one-form to $\mathscr{G}/\mathscr{H}$. This results in a locally defined
one-form with values in $\mathfrak{g}$. In the reductive case, it decomposes
into an $\mathfrak{h}$-connection and a soldering form. In the non-reductive
case, the $\mathfrak{h}$-component is \emph{not} a connection, but the
projection to $\mathfrak{g}/\mathfrak{h}$ is still a soldering form.
\item In the reductive case, the representation of $\mathscr{H}$ on $\mathfrak{m}$ is
called the linear isotropy representation. In the
non-reductive case, the linear isotropy representation is carried
by the quotient vector space $\mathfrak{g}/\mathfrak{h}$. In practice we work with
$\mathfrak{g}/\mathfrak{h}$ by calculating brackets in $\mathfrak{g}$ and then dropping from the
RHS anything belonging to $\mathfrak{h}$.
\item $\mathscr{G}$-invariant tensor fields on $\mathscr{G}/\mathscr{H}$ are in one-to-one
correspondence with $\mathscr{H}$-invariant tensors on $\mathfrak{g}/\mathfrak{h}$ (or on $\mathfrak{m}$ in
the reductive situation). If $\mathscr{H}$ is connected, this is the same as
$\mathfrak{h}$-invariant tensors, which are typically simple to determine, at
least if of small rank.
\end{itemize}
\subsection{Homogeneous kinematical spacetimes}
\label{sec:homog-kinem-spac}
A homogeneous kinematical spacetime is a homogeneous space of a
kinematical group of the right dimension. Recall from
Definition~\ref{def:KLA} (but now for arbitrary dimension) that a
kinematical Lie algebra for $(d+1)$-dimensional spacetimes consists of
a subalgebra $\r \cong \mathfrak{so}(d)$ with generators $L_{ab}$, two copies of
the vector representation with generators $B_a, P_a$ and an additional
scalar generator $H$. Suppose that $\mathfrak{g}$ is such a kinematical Lie
algebra. A kinematical Klein pair for a $(d+1)$-dimensional spacetime
takes the form $(\mathfrak{g},\mathfrak{h})$, where $\mathfrak{h} \subset \mathfrak{g}$ is a Lie subalgebra
spanned by $L_{ab}$ and $V_a = \alpha B_a + \beta P_a$ for some
$\alpha,\beta \in \mathbb{R}$.
The determination of such Klein pairs was done in
\cite{Figueroa-OFarrill:2018ilb}, whose results we summarise in
Table~\ref{tab:spacetimes}, where we have excluded some spacetimes
which only exist for $d=1,2$. We have chosen a basis for $\mathfrak{g}$ in such
a way that $\mathfrak{h}$ is always spanned by $L_{ab}$ and (the new) $B_a$.
This facilitates comparison of the different homogeneous spacetimes,
but also obscures the isomorphisms between some of the Lie algebras. For
example, the kinematical Lie algebras for Minkowski ($\mathsf{M}$) and
anti~de~Sitter--Carroll ($\mathsf{AdSC}$) spacetimes are isomorphic to the Poincaré
algebra. The explicit isomorphism is the identity on the rotation and
time-translation generators, but exchanges boosts and spatial momenta:
\begin{equation}\label{eq:Mink-AdSC-iso}
\L^{\mathsf{M}} = \L^{\mathsf{AdSC}}, \quad H^{\mathsf{M}} =
H^{\mathsf{AdSC}}, \quad \boldsymbol{B}^{\mathsf{M}} = \P^{\mathsf{AdSC}}
\quad\text{and}\quad \P^{\mathsf{M}} = - \boldsymbol{B}^{\mathsf{AdSC}}.
\end{equation}
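One can check this against the brackets in Table~\ref{tab:spacetimes}:
for instance, $[H^{\mathsf{M}}, \boldsymbol{B}^{\mathsf{M}}] = [H^{\mathsf{AdSC}},
\P^{\mathsf{AdSC}}] = \boldsymbol{B}^{\mathsf{AdSC}} = -\P^{\mathsf{M}}$ and
$[B^{\mathsf{M}}_a, P^{\mathsf{M}}_b] = -[P^{\mathsf{AdSC}}_a,
B^{\mathsf{AdSC}}_b] = [B^{\mathsf{AdSC}}_b, P^{\mathsf{AdSC}}_a] =
\delta_{ab} H^{\mathsf{AdSC}} = \delta_{ab} H^{\mathsf{M}}$, as required.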
This illustrates the need to specify a geometric realisation before
assigning a physical/geometric meaning to the generators of a
kinematical Lie algebra, since what is a translation in Minkowski
spacetime is a carrollian boost in anti~de~Sitter--Carroll.
Similarly, the kinematical Lie algebras for hyperbolic space ($\mathsf{H}$), de
Sitter spacetime $(\mathsf{dS})$ and the lightcone ($\mathsf{LC}$) are
isomorphic (to the Lorentz algebra in one dimension higher). The
explicit isomorphisms are again the identity on the rotation and
time-translation generators:
\begin{equation}
\L^{\mathsf{dS}} = \L^{\mathsf{H}} = \L^{\mathsf{LC}}
\quad\text{and}\quad H^{\mathsf{dS}} = H^{\mathsf{H}} =
H^{\mathsf{LC}},
\end{equation}
but now
\begin{equation}
\P^{\mathsf{dS}} = \boldsymbol{B}^{\mathsf{H}} \quad\text{and}\quad \boldsymbol{B}^{\mathsf{dS}} = - \P^{\mathsf{H}},
\end{equation}
and
\begin{equation}
\boldsymbol{B}^{\mathsf{LC}} = \tfrac1{\sqrt2} (\boldsymbol{B}^{\mathsf{dS}} -
\P^{\mathsf{dS}}) = -\tfrac1{\sqrt2} (\boldsymbol{B}^{\mathsf{H}} +
\P^{\mathsf{H}}) \quad\text{and}\quad \P^{\mathsf{LC}} =
\tfrac1{\sqrt2} (\boldsymbol{B}^{\mathsf{dS}} + \P^{\mathsf{dS}}) = \tfrac1{\sqrt2} (\boldsymbol{B}^{\mathsf{H}} -
\P^{\mathsf{H}}).
\end{equation}
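As a consistency check of the last of these, we can use the de~Sitter
brackets in Table~\ref{tab:spacetimes} to compute
\begin{equation}
[B^{\mathsf{LC}}_a, P^{\mathsf{LC}}_b] = \tfrac12 [B^{\mathsf{dS}}_a -
P^{\mathsf{dS}}_a, B^{\mathsf{dS}}_b + P^{\mathsf{dS}}_b] = \tfrac12
\left( L_{ab} + \delta_{ab} H + \delta_{ab} H + L_{ab} \right) =
\delta_{ab} H + L_{ab},
\end{equation}
which is indeed the lightcone bracket $[\boldsymbol{B},\P] = H + \L$.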
\begin{table}[h!]
\centering
\caption{Homogeneous ($d+1$)-dimensional (spatially isotropic) kinematical spacetimes}
\label{tab:spacetimes}
\rowcolors{2}{red!10}{yellow!10}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|>{$}l<{$}|*{5}{>{$}l<{$}}}\toprule
\multicolumn{1}{c|}{Name} & \multicolumn{1}{c|}{Klein pair} & \multicolumn{5}{c}{Nonzero Lie brackets in addition to $[\L,\L] = \L$, $[\L, \boldsymbol{B}] = \boldsymbol{B}$, $[\L,\P] = \P$} \\\midrule
Minkowski & (\mathfrak{iso}(d,1),\mathfrak{so}(d,1)) & [H,\boldsymbol{B}] = -\P & & [\boldsymbol{B},\boldsymbol{B}] = \L & [\boldsymbol{B},\P] = H &\\
de~Sitter & (\mathfrak{so}(d+1,1),\mathfrak{so}(d,1)) & [H,\boldsymbol{B}] = -\P & [H,\P] = -\boldsymbol{B} & [\boldsymbol{B},\boldsymbol{B}]= \L & [\boldsymbol{B},\P] = H & [\P,\P]= - \L \\
anti~de~Sitter & (\mathfrak{so}(d,2),\mathfrak{so}(d,1)) & [H,\boldsymbol{B}] = -\P & [H,\P] = \boldsymbol{B} & [\boldsymbol{B},\boldsymbol{B}]= \L & [\boldsymbol{B},\P] = H & [\P,\P] = \L \\\midrule
euclidean & (\mathfrak{iso}(d+1),\mathfrak{so}(d+1)) &[H,\boldsymbol{B}] = \P & & [\boldsymbol{B},\boldsymbol{B}] = -\L & [\boldsymbol{B},\P] = H & \\
sphere & (\mathfrak{so}(d+2),\mathfrak{so}(d+1)) & [H,\boldsymbol{B}] = \P & [H,\P] = -\boldsymbol{B} & [\boldsymbol{B},\boldsymbol{B}]= -\L & [\boldsymbol{B},\P] = H & [\P,\P]= - \L \\
hyperbolic & (\mathfrak{so}(d+1,1),\mathfrak{so}(d+1)) & [H,\boldsymbol{B}] = \P & [H,\P] = \boldsymbol{B} & [\boldsymbol{B},\boldsymbol{B}]= -\L & [\boldsymbol{B},\P] = H & [\P,\P] = \L \\\midrule
Galilei & (\mathfrak{g},\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & & & & \\
de~Sitter--Galilei & (\mathfrak{n}^+_{\gamma =-1},\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & [H,\P] = -\boldsymbol{B} & & & \\
torsional de~Sitter--Galilei & (\mathfrak{n}^+_{\gamma\in(-1,1)},\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & [H,\P] = \gamma\boldsymbol{B} + (1+\gamma)\P & & & \\
torsional de~Sitter--Galilei & (\mathfrak{n}^0,\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & [H,\P] = \boldsymbol{B} + 2\P & & & \\
anti~de~Sitter--Galilei & (\mathfrak{n}^-_{\chi=0},\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & [H,\P] = \boldsymbol{B} & & & \\
torsional anti~de~Sitter--Galilei & (\mathfrak{n}^-_{\chi>0},\mathfrak{iso}(d)) & [H,\boldsymbol{B}] = -\P & [H,\P] = (1+\chi^2) \boldsymbol{B} + 2\chi \P & & & \\\midrule
Carroll & (\mathfrak{c},\mathfrak{iso}(d)) & & & & [\boldsymbol{B},\P] = H & \\
de~Sitter--Carroll & (\mathfrak{iso}(d+1), \mathfrak{iso}(d)) & & [H,\P] = -\boldsymbol{B} & & [\boldsymbol{B},\P] = H & [\P,\P] = -\L \\
anti~de~Sitter--Carroll & (\mathfrak{iso}(d,1), \mathfrak{iso}(d)) & & [H,\P] = \boldsymbol{B} & & [\boldsymbol{B},\P] = H & [\P,\P] = \L \\
lightcone & (\mathfrak{so}(d+1,1), \mathfrak{iso}(d)) & [H,\boldsymbol{B}] = \boldsymbol{B} & [H,\P] = -\P & & [\boldsymbol{B},\P] = H + \L & \\\bottomrule
\end{tabular}
}
\end{table}
Table~\ref{tab:spacetimes} is divided into sections corresponding to
the class of geometry the spacetime describes: lorentzian, riemannian,
galilean and carrollian. They can be distinguished by the type of
invariant tensor fields or, as explained in
Section~\ref{sec:line-isotr-repr}, by the $\mathfrak{h}$-invariant tensors of
the linear isotropy representation on $\mathfrak{g}/\mathfrak{h}$. We shall now go
through the table in some detail. Further details can be found in
\cite{Figueroa-OFarrill:2018ilb,Figueroa-OFarrill:2019sex}.
\subsubsection{Homogeneous lorentzian spacetimes}
\label{sec:homog-lor-spac}
The first section of the table consists of those homogeneous
kinematical spacetimes admitting an invariant lorentzian metric. They
admit an $\mathfrak{h}$-invariant lorentzian inner product on $\mathfrak{g}/\mathfrak{h}$. The
dimension of the kinematical Lie group for a ($d+1$)-dimensional
homogeneous spacetime is $\tfrac12 (d+1)(d+2)$, hence if the
kinematical Lie group is acting via isometries, the geometry is
maximally symmetric. In lorentzian signature they are Minkowski,
de~Sitter and anti~de~Sitter spacetimes. Geometrically, there is a
one-parameter (the scalar curvature) family of both de~Sitter and
anti~de~Sitter spacetimes, but as homogeneous spacetimes they are
isomorphic. The parameter is simply the scale of the invariant
metric. These spacetimes are symmetric spaces and the stabiliser
subalgebra $\mathfrak{h} \cong \mathfrak{so}(d,1)$ in all cases.
\subsubsection{Homogeneous riemannian ``spacetimes''}
\label{sec:homog-riem-spac}
The second section of the table consists of homogeneous spaces of
kinematical Lie groups which admit an invariant riemannian metric.
They can hardly be considered as spacetimes, so we will not mention
them again. The same dimension arguments as for the lorentzian
spacetimes imply that these riemannian homogeneous spaces are
maximally symmetric, so they are the euclidean and hyperbolic spaces
and the round sphere. Again the curvature of the sphere and
hyperbolic space is a choice of additional structure on the
homogeneous spaces. All round spheres are described by the same Klein
pair $(\mathfrak{so}(d+2),\mathfrak{so}(d+1))$, for example. These riemannian spaces are
symmetric spaces and the stabiliser subalgebra $\mathfrak{h} \cong \mathfrak{so}(d+1)$ in
all cases.
\subsubsection{Homogeneous galilean spacetimes}
\label{sec:homog-galil-spac}
The third section of the table consists of homogeneous spaces of
kinematical Lie groups admitting an invariant galilean structure: a
clock one-form and a spatial cometric. The clock one-form comes from
an $\mathfrak{h}$-invariant covector in $(\mathfrak{g}/\mathfrak{h})^*$, the dual of the linear
isotropy representation. The spatial cometric comes from an
$\mathfrak{h}$-invariant symmetric bivector in $\odot^2(\mathfrak{g}/\mathfrak{h})$, the symmetric
square of the linear isotropy representation. Apart from Galilei
spacetime, discussed in Section~\ref{sec:galil-spac-as}, which is the
non-relativistic limit of Minkowski spacetime, there are two
one-parameter families of spacetimes. One family is the
de~Sitter--Galilei family with parameter $\gamma \in [-1,1]$. For
$\gamma = -1$, it is the non-relativistic limit of de~Sitter
spacetime and hence a symmetric space associated to one of the
Newton--Hooke algebras. For any $\gamma \in (-1,1)$, the spacetime is
reductive but not symmetric and associated to the kinematical Lie
algebra $\mathfrak{n}^+_\gamma$. The notation notwithstanding, the spacetime
with $\gamma = 1$ is not associated to $\mathfrak{n}^+_{\gamma=1}$ but instead
to the Lie algebra $\mathfrak{n}^0$, which is obtained as a (singular) limit
$\lim_{\gamma\to 1}\mathfrak{n}^+_\gamma$. This limit is analogous to a
contraction, but it is not one, in that the Lie algebras
$\mathfrak{n}^+_\gamma$ are not isomorphic for different values of
$\gamma \in [-1,1]$. The canonical invariant connection (see, e.g.,
\cite{MR0059050}) has torsion proportional to $1+\gamma$ and hence the
spacetimes for $\gamma \neq -1$ may be thought of as torsional
de~Sitter--Galilei spacetimes. The other family is the
anti~de~Sitter--Galilei family with parameter $\chi \geq 0$. For
$\chi = 0$, it is the non-relativistic limit of anti~de~Sitter
spacetime and hence a symmetric space, associated to the other
Newton--Hooke algebra. For any $\chi > 0$, it is a reductive,
non-symmetric homogeneous spacetime. Again the canonical invariant
connection has torsion (proportional to $\chi$) and these spacetimes
are therefore called torsional anti~de~Sitter--Galilei spacetimes.
The limit $\chi \to \infty$ of the torsional anti~de~Sitter--Galilei
spacetimes coincides with the limit $\gamma \to 1$ of the torsional
de~Sitter--Galilei spacetime. In all cases, the stabiliser subalgebra
$\mathfrak{h} \cong \mathfrak{iso}(d)$.
A final remark is that the torsional de~Sitter--Galilei spacetime with
$\gamma=0$ is isomorphic to Galilei spacetime as a (weak)
Newton--Cartan geometry: both arise as null reductions of Minkowski
spacetime, but they are homogeneous spaces of non-isomorphic
kinematical Lie groups and hence not isomorphic as homogeneous
spacetimes.
\subsubsection{Homogeneous carrollian spacetimes}
\label{sec:homog-carr-spac}
The fourth section of the table consists of four homogeneous
kinematical spacetimes admitting an invariant carrollian structure: a
nowhere-vanishing vector field and a ``spatial metric''. The vector
field comes from an invariant vector in the linear isotropy
representation $\mathfrak{g}/\mathfrak{h}$ and the spatial metric comes from an invariant
symmetric bilinear form in $\odot^2(\mathfrak{g}/\mathfrak{h})^*$. Three of these
carrollian spacetimes are the ultra-relativistic limits of the
lorentzian spacetimes in the Table: Carroll spacetime (of Minkowski)
and de~Sitter--Carroll and anti~de~Sitter--Carroll spacetimes (of
de~Sitter and anti~de~Sitter, respectively). They are symmetric
spaces. The fourth carrollian spacetime is the lightcone in Minkowski
spacetime one dimension higher. It is the only non-reductive
homogeneous spacetime in the table. In all cases, the stabiliser
subalgebra $\mathfrak{h} \cong \mathfrak{iso}(d)$, but despite being abstractly isomorphic
to the stabiliser subalgebra of the galilean spacetimes, their images
under the linear isotropy representation are not conjugate subalgebras
of $\mathfrak{gl}(\mathfrak{g}/\mathfrak{h})$, which explains why they have different invariants and
hence why the geometries are different: carrollian instead of
galilean.
\subsubsection{Homogeneous aristotelian spacetimes}
\label{sec:homog-arist-spac}
Although not in the Table, there are also aristotelian spacetimes
which are homogeneous spaces of Lie groups of the aristotelian Lie
algebras in Table~\ref{tab:ALAs}. The stabiliser subalgebra is always
the rotational subalgebra $\r \cong \mathfrak{so}(d)$ and hence there is a
unique Klein pair for each aristotelian Lie algebra. In the order
given in Table~\ref{tab:ALAs}, they are the static aristotelian
spacetime, the torsional static aristotelian spacetime and the product
of the round $d$-dimensional sphere or $d$-dimensional hyperbolic
space with the real line. All are reductive and all but the torsional
static spacetime, whose canonical connection has torsion, are
symmetric.
\subsection{Non-lorentzian geometries}
\label{sec:non-lorentz-geom}
As we saw in Section~\ref{sec:homog-kinem-spac}, the homogeneous
spatially isotropic kinematical spacetimes come in several families
depending on their invariant tensors. We shall ignore the riemannian
case in what follows, since they do not admit an interpretation as
spacetimes (e.g., the boosts are actually rotations). We shall
now describe the (Cartan) geometries modelled on the homogeneous
spacetimes. It turns out that all of the geometries we consider:
lorentzian, galilean, carrollian and aristotelian are examples of
$G$-structures; that is, they are defined by distinguished vielbeins
transforming under a subgroup of the general linear group. The prime
example is lorentzian geometry, where the distinguished vielbeins
transform under local Lorentz transformations on overlaps and can
subsequently be interpreted as the (pseudo) orthonormal frames
relative to a lorentzian metric.
\subsubsection{Basic notions about $G$-structures}
\label{sec:basic-notions-about}
Now consider an $n$-dimensional manifold $M$. Let $p \in M$.
A \textbf{frame at $p$} is an isomorphism $u: \mathbb{R}^n \to T_p M$ of
vector spaces. The images under $u$ of the standard basis
$(\boldsymbol{e}_1,\dots,\boldsymbol{e}_n)$ of $\mathbb{R}^n$ give a basis $(u(\boldsymbol{e}_1),\dots,u(\boldsymbol{e}_n))$
for the tangent space at $p$. If $u,u'$ are two frames at $p$, then
$h:= u^{-1}\circ u' \in \operatorname{GL}(n,\mathbb{R})$, which we may rewrite as
$u' = u \circ h$. This defines a right action of $\operatorname{GL}(n,\mathbb{R})$ on the
set $F_p$ of frames at $p$, which is free (if $u \circ h = u$, then
$h$ is the identity) and transitive (any two frames are
related by some $h \in \operatorname{GL}(n,\mathbb{R})$). The disjoint union
$F(M) = \bigsqcup_{p\in M} F_p$ is the total space of the
\textbf{frame bundle} of $M$: a smooth right principal
$\operatorname{GL}(n,\mathbb{R})$-bundle, whose (local) sections are called \textbf{moving
frames} or \textbf{vielbeins}.
Let $\mathscr{G} \subset \operatorname{GL}(n,\mathbb{R})$ be a Lie subgroup. A \textbf{$\mathscr{G}$-structure} on
$M$ is a principal $\mathscr{G}$-subbundle $P \subset F(M)$ of the frame bundle:
this amounts to restricting to a collection of frames such that for any
two frames $u,u'$ at $p$ in this collection, $u^{-1} \circ u' \in \mathscr{G}$. The
existence of a $\mathscr{G}$-structure is not guaranteed: there are topological
obstructions. For example, if $(M,g)$ is a Lorentzian manifold, we can
always pick pseudo-orthonormal frames and it follows that if $u,u'$
are pseudo-orthonormal frames at $p$, $u^{-1}\circ u' \in \operatorname{O}(d,1)
\subset \operatorname{GL}(d+1,\mathbb{R})$. The Lorentz group $\operatorname{O}(d,1)$ is not connected: it
has four connected components, depending on whether or not the
temporal and spatial orientations are preserved. This then leads to
topological obstructions (temporal and/or spatial orientability) to
further reduce the structure group from $\operatorname{O}(d,1)$ to its connected
component $\operatorname{SO}(d,1)_0$ (i.e., the proper orthochronous Lorentz group)
or some group in between.
Associated to every $\mathscr{G}$-structure $\pi: P \to M$ there is a
\textbf{soldering form} $\theta \in \Omega^1(P;\mathbb{R}^n)$, which is an
$\mathbb{R}^n$-valued one-form on the total space $P$. Let $u$ be a frame at $p$
and suppose that $X_u \in T_u P$ is a tangent vector to $P$ at $u$.
Then $\theta_u(X_u) = u^{-1}(\pi_* X_u)$. In other words,
$\theta_u(X_u)$ is the coordinate vector of $\pi_* X_u \in T_p M$
relative to the frame $u$. Given a vielbein
(i.e., a local section $s : U \to P$ on some open subset $U \subset
M$) we can use it to pull back $\theta$ to $U$. Since $\theta$ is
$\mathbb{R}^n$-valued, so is its pull-back and we may write it as a linear
combination of the standard basis of $\mathbb{R}^n$: $s^*\theta = \vartheta^i
\boldsymbol{e}_i$ for some one-forms $\vartheta^i$ defined only on $U$. Then
$(\vartheta^1,\dots,\vartheta^n)$ is often called the \emph{inverse
vielbein}, but of course it is simply the canonically dual coframe
to the vielbein.
The soldering form is the fundamental object which allows one to relate
the representation theory of $\mathscr{G}$ to the geometry of any manifold with
a $\mathscr{G}$-structure. For more details about this in the context of
non-lorentzian geometry, please see
\cite[Section~2]{Figueroa-OFarrill:2020gpr}.
\subsubsection{Lorentzian geometry}
\label{sec:lorentzian-geometry}
Let us see how these ideas play out in the familiar case of lorentzian
geometry.
Let $(\mathfrak{g}= \mathfrak{h} \oplus \mathfrak{m},\mathfrak{h})$ be a reductive Klein pair for any one of
the homogeneous lorentzian manifolds in Table~\ref{tab:spacetimes}.
In all cases, $\mathfrak{h} \cong \mathfrak{so}(d,1)$ and the infinitesimal linear
isotropy representation $\lambda : \mathfrak{h} \to \mathfrak{gl}(\mathfrak{m})$ preserves a
lorentzian inner product $\eta$, say, on $\mathfrak{m}$; that is, for all
$X \in \mathfrak{h}$ and $Y_1,Y_2\in \mathfrak{m}$, we have that
\begin{equation}
\eta(\lambda_X Y_1, Y_2) + \eta(Y_1, \lambda_X Y_2) = 0.
\end{equation}
Every choice of basis for $\mathfrak{m}$ defines an isomorphism $\mathfrak{m} \to
\mathbb{R}^{d+1}$ which may be used to transport the lorentzian inner product
to $\mathbb{R}^{d+1}$. Choosing a pseudo-orthonormal basis for $\mathfrak{m}$ brings
the inner product on $\mathbb{R}^{d+1}$ to be the standard one with diagonal
matrix with entries $(-1,1,\dots,1)$ and embeds $\mathfrak{h} \subset
\mathfrak{gl}(d+1,\mathbb{R})$ as the standard Lorentz algebra.
Let $M$ be a ($d+1$)-dimensional manifold with a
$\mathscr{G}=\operatorname{O}(d,1)$-structure. Then $M$ is covered by open subsets
$\{U_\alpha\}$ and on each $U_\alpha$ we have an inverse vielbein
$\vartheta_\alpha$ taking values in $\mathbb{R}^{d+1}$ and such that on
nonempty overlaps $U_{\alpha\beta}:= U_\alpha \cap U_\beta$, the
inverse vielbeins are related by local $\operatorname{O}(d,1)$-transformations
$h_{\alpha\beta}: U_{\alpha\beta} \to \operatorname{O}(d,1)$. Using the
$\operatorname{O}(d,1)$-invariant lorentzian inner product $\eta$ on $\mathbb{R}^{d+1}$,
we can define on each $U_\alpha$ a local lorentzian metric
\begin{equation}
g_\alpha := \eta(\vartheta_\alpha,\vartheta_\alpha).
\end{equation}
But because $\eta$ is $\operatorname{O}(d,1)$-invariant, these local metrics agree on
overlaps and hence they glue to a lorentzian metric $g$ on $M$. This
shows that a lorentzian metric on a ($d+1$)-dimensional manifold $M$
is equivalent to a $\mathscr{G}$-structure on $M$ with $\mathscr{G}=\operatorname{O}(d,1)$.
This generalises in the sense that if an $n$-dimensional manifold $M$
admits a $\mathscr{G}$-structure with $\mathscr{G} \subset \operatorname{GL}(n,\mathbb{R})$, then every (nonzero)
$\mathscr{G}$-invariant tensor of $\mathbb{R}^n$ gives rise to a (nowhere-vanishing)
global tensor field on $M$: it is defined locally using the (inverse)
vielbeins, but the $\mathscr{G}$-invariance guarantees that these local tensor
fields glue on overlaps.
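To see how this works in practice, here is a small symbolic sketch (in Python with SymPy; the two-dimensional Rindler-type coframe $\vartheta^0 = x\,dt$, $\vartheta^1 = dx$ is just an illustrative choice). It verifies both that the inverse vielbein reconstructs the metric and that a local (here $x$-dependent) Lorentz transformation of the coframe leaves the reconstructed metric unchanged, which is the gluing statement above:
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x', real=True)
eta = sp.diag(-1, 1)

# components of the coframe (theta^0, theta^1) in the coordinate basis (dt, dx)
E = sp.Matrix([[x, 0],
               [0, 1]])

g = E.T * eta * E    # reconstructed metric: diag(-x**2, 1)

# an x-dependent local Lorentz boost acting on the coframe
f = sp.Function('f')(x)
Lam = sp.Matrix([[sp.cosh(f), sp.sinh(f)],
                 [sp.sinh(f), sp.cosh(f)]])

g_boosted = (Lam * E).T * eta * (Lam * E)
assert (g_boosted - g).applyfunc(sp.simplify) == sp.zeros(2, 2)
print(g)    # Matrix([[-x**2, 0], [0, 1]])
\end{verbatim}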
\subsubsection{Newton--Cartan geometry}
\label{sec:newt-cart-geom}
Let $(\mathfrak{g}= \mathfrak{h} \oplus \mathfrak{m},\mathfrak{h})$ be a reductive Klein pair for any one of
the homogeneous galilean manifolds in Table~\ref{tab:spacetimes}. In
all cases, $\mathfrak{h} \cong \mathfrak{iso}(d)$ and the infinitesimal linear isotropy representation
$\lambda : \mathfrak{h} \to \mathfrak{gl}(\mathfrak{m})$ preserves a covector in $\mathfrak{m}^*$ and a
symmetric bivector in $\odot^2\mathfrak{m}$. Choose a basis $(P_0,P_1,\dots,P_d)$
for $\mathfrak{m}$ and the canonical dual basis $(\pi^0,\pi^1,\dots,\pi^d)$ for
$\mathfrak{m}^*$, relative to which the invariant covector is $\pi^0$ and the
invariant symmetric bivector is $P_1^2 + \cdots + P_d^2 = \delta^{ab}
P_a P_b$. The subgroup $\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ which preserves these
tensors consists of matrices of the form
\begin{equation}\label{eq:gal-structure-group}
\begin{pmatrix}
1 & \boldsymbol{0}^T \\ \boldsymbol{v} & A
\end{pmatrix} \qquad\text{with}\qquad \boldsymbol{v} \in
\mathbb{R}^d, A \in \operatorname{O}(d).
\end{equation}
A \textbf{(weak) Newton--Cartan structure} on a ($d+1$)-manifold $M$
is a $\mathscr{G}$-structure with $\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ the subgroup given by the
matrices in equation~\eqref{eq:gal-structure-group}. The
$\mathscr{G}$-invariant tensors give rise to global (nowhere-vanishing) tensor
fields on $M$: the clock one-form $\tau \in \Omega^1(M)$ defined
locally on $U_\alpha$ by $\tau_\alpha := \pi^0(\vartheta_\alpha)$
relative to the inverse vielbein $\vartheta_\alpha$. Similarly the
spatial cometric is given locally on $U_\alpha$ by $\lambda_\alpha :=
\delta^{ab} (E_\alpha)_a (E_\alpha)_b$, where $E_\alpha$ is the
vielbein dual to $\vartheta_\alpha$. These symmetric bivectors glue
to give a symmetric $(2,0)$-tensor field $\lambda$ on $M$.
Equivalently, one could define a (weak) Newton--Cartan structure on
$M$ by specifying a nowhere vanishing one-form $\tau \in \Omega^1(M)$
and a corank-1 positive-semidefinite symmetric $(2,0)$-tensor field
$\lambda \in \Gamma(\odot^2TM)$ with the property that $\lambda(\tau,
-) = 0$.
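For instance, the flat (weak) Newton--Cartan structure on
$\mathbb{R}^{d+1}$, with coordinates $(t,x^1,\dots,x^d)$, underlying Galilei
spacetime is given by
\begin{equation}
\tau = dt \qquad\text{and}\qquad \lambda = \delta^{ab}\, \partial_a \otimes \partial_b,
\end{equation}
for which $\lambda(\tau,-) = 0$ is manifest, since no $\partial_t$
appears in $\lambda$.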
A \textbf{Newton--Cartan structure} is obtained by enhancing a weak
Newton--Cartan structure with an \textbf{adapted connection}: an
affine connection relative to which $\tau$ and $\lambda$ are
parallel. Such connections were studied initially in
\cite{MR175340,MR334831}. As explained, e.g., in
\cite[Section~2]{Figueroa-OFarrill:2020gpr}, every $\mathscr{G}$-structure has
an \emph{intrinsic torsion} which is the part of the torsion tensor of
an adapted connection which is independent of the connection.
This is not something one is familiar with from lorentzian geometry,
since the Fundamental Theorem of lorentzian (or, more generally,
pseudo-riemannian) geometry states that there exists a unique
torsion-free adapted (here, metric) connection, so the intrinsic
torsion of a lorentzian geometry is always zero.
However for a Newton--Cartan structure this is not the case. As first
shown in \cite{MR334831}, the intrinsic torsion of a Newton--Cartan
connection can be identified with $d\tau \in \Omega^2(M)$, for $\tau$
the clock-one form. Hence the intrinsic torsion need not \emph{a
priori} be zero. Also shown in \cite{MR175340,MR334831} is that
specifying the torsion does not uniquely determine the adapted
connection: there is contorsion, which is measured by an arbitrary
two-form.
A study of how the bundle of two-forms decomposes under the action of
the structure group reveals that there are three\footnote{This is in
generic dimension $d+1$: if $d=1$ then there are only two classes and
if $d=4$ and assuming that $M$ is orientable, there are five
classes. See \cite[Appendix~B]{Figueroa-OFarrill:2020gpr}.}
classes of Newton--Cartan structures
\cite[Theorem~6]{Figueroa-OFarrill:2020gpr}:
\begin{itemize}
\item \textbf{torsionless} (NC): $d\tau = 0$;
\item \textbf{twistless torsional} (TTNC): $d\tau \wedge \tau = 0$;
and
\item \textbf{torsional} (TNC): $d\tau \wedge \tau \neq 0$.
\end{itemize}
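Simple examples of the three classes, on $\mathbb{R}^3$ with coordinates
$(t,x,y)$, are
\begin{equation}
\tau = dt, \qquad \tau = e^x\, dt \qquad\text{and}\qquad \tau = dt + x\, dy,
\end{equation}
respectively: the first is closed; the second has
$d\tau = e^x\, dx \wedge dt \neq 0$ but $d\tau \wedge \tau = 0$; and the
third has $d\tau \wedge \tau = dx \wedge dy \wedge dt \neq 0$.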
These classes first appeared in \cite{Christensen:2013lma} (see
Table~I in that paper) in the context of Lifshitz holography.
The homogeneous examples in Table~\ref{tab:spacetimes} are all
such that $d\tau = 0$, but there are homogeneous examples of all three
kinds \cite{Grosvenor:2017dfs}.
A rich source of (weak) Newton--Cartan structures is provided by null
reductions of lorentzian manifolds
\cite{PhysRevD.31.1841,Julia:1994bs}. Let $(N,g)$ be a lorentzian
manifold with a null nowhere-vanishing Killing vector $\xi$ and
suppose that $\xi$ is complete, so that it integrates to a
one-parameter subgroup $\Gamma$ of isometries of $N$. Let us assume
that the action of $\Gamma$ on $N$ is such that the quotient
$M := N/\Gamma$ is smooth, making the projection $\pi : N \to M$ into
a smooth submersion. Then $M$ inherits from $N$ a (weak)
Newton--Cartan structure as follows. The Killing one-form
$\xi^\flat$ dual to $\xi$ is the pull-back via $\pi$ of a clock
one-form $\tau \in \Omega^1(M)$
\begin{equation}
\xi^\flat = \pi^*\tau,
\end{equation}
which is nowhere vanishing since $\xi$ is. We define the spatial
cometric (the ``ruler'') $\lambda$ as follows. It is enough to know
what $\lambda(\alpha,\beta)$ is for any two one-forms
$\alpha,\beta \in \Omega^1(M)$. Given two such one-forms $\alpha,\beta$,
let $X_\alpha,X_\beta \in \mathscr{X}(N)$ be vector fields on $N$ which are
metrically dual to the pull-backs
$\pi^*\alpha,\pi^*\beta \in \Omega^1(N)$. Then
$\lambda(\alpha,\beta)$ is the function on $M$ whose pull-back to $N$
agrees with the inner product $g(X_\alpha,X_\beta)$. It follows that
$\lambda(\tau,-) = 0$ and hence that $(M,\tau,\lambda)$ is a (weak)
Newton--Cartan structure.
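The simplest example is Minkowski spacetime itself. Writing the metric
in lightcone coordinates as $g = 2\, du\, dv + \delta_{ab}\, dx^a dx^b$,
the vector field $\xi = \partial_v$ is null, nowhere vanishing and
Killing, with $\xi^\flat = du$. The quotient by its flow has
coordinates $(u,x^a)$ and inherits $\tau = du$ and
$\lambda = \delta^{ab}\, \partial_a \otimes \partial_b$: the flat
Newton--Cartan structure of Galilei spacetime, with $u$ playing the
role of time.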
\subsubsection{Carrollian geometry}
\label{sec:carrollian-geometry}
Not all carrollian spacetimes in Table~\ref{tab:spacetimes} are
reductive: the lightcone is not. So in order to treat all cases
together we will be working with a Klein pair $(\mathfrak{g},\mathfrak{h})$ and simply
define the infinitesimal linear isotropy representation $\lambda :\mathfrak{h}
\to \mathfrak{gl}(\mathfrak{g}/\mathfrak{h})$. In all cases, $\mathfrak{h} \cong \mathfrak{iso}(d)$, but this is a
different (i.e., non-conjugate) Lie subalgebra of $\mathfrak{gl}(\mathfrak{g}/\mathfrak{h})$ than
the one in the galilean examples. This means that $\mathfrak{g}/\mathfrak{h}$ has
different $\mathfrak{h}$-invariant tensors in this case. The $\mathfrak{h}$-invariant
tensors are now a vector in $\mathfrak{g}/\mathfrak{h}$ and a symmetric bilinear form in
$\odot^2(\mathfrak{g}/\mathfrak{h})^*$. We can choose a basis $(\overline P_0,\overline
P_1,\dots,\overline P_d)$ for $\mathfrak{g}/\mathfrak{h}$, where $\overline P_A = P_A \mod
\mathfrak{h}$, and the canonical dual basis $(\pi^0,\pi^1,\dots,\pi^d)$ for
$(\mathfrak{g}/\mathfrak{h})^*$, relative to which the invariant tensors are $\overline P_0$ and
$\delta_{ab}\pi^a\pi^b = (\pi^1)^2 + \dots + (\pi^d)^2$. The subgroup
$\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ which preserves these two tensors consists of
matrices of the form
\begin{equation}
\label{eq:car-structure-group}
\begin{pmatrix}
1 & \boldsymbol{v}^T\\ \boldsymbol{0} & A
\end{pmatrix} \qquad\text{with}\qquad \boldsymbol{v} \in
\mathbb{R}^d, A \in \operatorname{O}(d).
\end{equation}
This group is abstractly isomorphic to the one with matrices
\eqref{eq:gal-structure-group}, but of course they are not conjugate
in $\operatorname{GL}(d+1,\mathbb{R})$ since they have different invariants.
The connected component of the group $\mathscr{G}$ (where $A \in \operatorname{SO}(d)$) also
leaves invariant $\pi^0 \wedge \pi^1 \wedge \cdots \wedge \pi^d \in
\wedge^{d+1}(\mathfrak{g}/\mathfrak{h})^*$.
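A quick numerical illustration of the different invariants (a sketch in Python with NumPy; the random rotation and vector are arbitrary) checks that the matrices~\eqref{eq:gal-structure-group} fix the covector $\pi^0$ but not the vector $P_0$, while the matrices~\eqref{eq:car-structure-group} do the opposite:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3

def gal(v, A):    # a matrix of the form (gal-structure-group)
    g = np.eye(d + 1)
    g[1:, 0] = v
    g[1:, 1:] = A
    return g

def car(v, A):    # a matrix of the form (car-structure-group)
    g = np.eye(d + 1)
    g[0, 1:] = v
    g[1:, 1:] = A
    return g

A, _ = np.linalg.qr(rng.normal(size=(d, d)))   # a random element of O(d)
v = rng.normal(size=d)                         # a generic (nonzero) vector

tau = np.eye(d + 1)[0]     # the covector pi^0, acting by row multiplication
xi = np.eye(d + 1)[:, 0]   # the vector P_0, acted on by column multiplication

assert np.allclose(tau @ gal(v, A), tau)       # galilean: pi^0 is invariant
assert not np.allclose(gal(v, A) @ xi, xi)     # ... but P_0 is not
assert np.allclose(car(v, A) @ xi, xi)         # carrollian: P_0 is invariant
assert not np.allclose(tau @ car(v, A), tau)   # ... but pi^0 is not
\end{verbatim}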
A \textbf{(weak) carrollian structure} on a ($d+1$)-dimensional
manifold $M$ is a $\mathscr{G}$-structure with $\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ the
subgroup consisting of the matrices in
equation~\eqref{eq:car-structure-group}. The $\mathscr{G}$-invariant tensors
give rise to a (nowhere-vanishing) vector field $\xi \in \mathscr{X}(M)$ and a
positive-semidefinite corank-$1$ symmetric $(0,2)$-tensor field $h \in
\Gamma(\odot^2T^*M)$ with the property that $h(\xi,-)=0$. If $M$ is
simply connected, then the structure group can be further reduced to
the connected component $\mathscr{G}_0$ and hence there is also a ``volume''
form $\mu \in \Omega^{d+1}(M)$. Even if the structure group does not
reduce, we still have a locally defined volume form $\mu_\alpha$ on
each $U_\alpha$, and these can be chosen so that they agree up to a
sign on overlaps.
A \textbf{carrollian structure} is a weak carrollian structure
enhanced by an adapted connection. As in the case of a Newton--Cartan
structure, the torsion of the adapted connection does not characterise
the connection uniquely: the contorsion here is measured by a section
of the subbundle $\odot^2 \operatorname{Ann} \xi \subset \odot^2 T^*M$, where
$\operatorname{Ann}\xi \subset T^*M$ is the bundle of one-forms which annihilate the
vector field $\xi$. The intrinsic torsion is now given by $\mathscr{L}_\xi
h$, the Lie derivative of $h$ along $\xi$ (see
\cite[Proposition~8]{Figueroa-OFarrill:2020gpr}) and studying the
decomposition of the bundle $\odot^2\operatorname{Ann} \xi$ under the action of the
structure group results in four\footnote{This is for $d>1$: if $d=1$
there are only two classes, as for the $d=1$ Newton--Cartan
geometries, consistent with the fact that in $1+1$ dimensions,
there is no real distinction between carrollian and Newton--Cartan
structures.} classes of carrollian structures
\cite[Theorem~10]{Figueroa-OFarrill:2020gpr}:
\begin{itemize}
\item \textbf{totally geodesic}: $\mathscr{L}_\xi h = 0$;
\item \textbf{minimal}: $\mathscr{L}_\xi \mu = 0$, where $\mu$ is the (possibly only locally defined) volume form;
\item \textbf{totally umbilical}: $\mathscr{L}_\xi h = f h$ for some $f \in C^\infty(M)$; and
\item \textbf{generic}, if none of the above are satisfied.
\end{itemize}
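As simple illustrations (here $d = 2$), take $M = \mathbb{R}^3$ with
coordinates $(s,x,y)$ and $\xi = \partial_s$: then $h = dx^2 + dy^2$ is
totally geodesic; $h = e^{2s}(dx^2 + dy^2)$ is totally umbilical, with
$\mathscr{L}_\xi h = 2h$; and $h = e^{2s}\, dx^2 + e^{-2s}\, dy^2$ is minimal
but not umbilical, since
$\mathscr{L}_\xi h = 2\left(e^{2s}\, dx^2 - e^{-2s}\, dy^2\right)$ while the
local volume form $ds \wedge dx \wedge dy$ is $\xi$-invariant.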
The names have been chosen in analogy with the theory of hypersurfaces
in riemannian geometry. This is more than an analogy in that, as
shown in \cite{Duval:2014uoa,Hartong:2015xda}, a natural source of
carrollian manifolds is null hypersurfaces in lorentzian manifolds.
Indeed, if $N \subset M$ is a null hypersurface in a lorentzian
manifold $(M,g)$, then $h$ is the pull-back of $g$ to $N$ and $\xi$ is
the null vector field (tangent to $N$) whose integral curves are the
null geodesic generators of $N$. Then $\mathscr{L}_\xi h$ is the null second
fundamental form of the hypersurface and the names above coincide with
the classification of hypersurfaces based on their second fundamental
form. The minimality condition is equivalently but more commonly
rephrased as the vanishing of the trace of the Weingarten map. There
is also a null Weingarten map for null hypersurfaces and it is
traceless if and only if the carrollian structure is minimal. Classic
references on null hypersurfaces are \cite{MR886772,MR1777311} and in
the present context \cite{Hartong:2015xda,Figueroa-OFarrill:2020gpr}.
The homogeneous carrollian spacetimes in Table~\ref{tab:spacetimes}
can be realised as null hypersurfaces in the maximally symmetric
lorentzian manifolds in Table~\ref{tab:spacetimes}, but in one
dimension higher: Carroll spacetime and the lightcone are null
hypersurfaces in Minkowski spacetime, whereas de~Sitter--Carroll and
anti~de~Sitter--Carroll spacetimes are null hypersurfaces in de~Sitter
and anti~de~Sitter spacetimes, respectively.
The symmetric carrollian spacetimes in Table~\ref{tab:spacetimes}
(i.e., all but the lightcone) are totally geodesic, whereas the
lightcone is totally umbilical. In fact, being homogeneous, the
function $f \in C^\infty(M)$ in the definition of totally umbilical is
a constant. We are not aware of homogeneous examples of minimal
and/or generic carrollian structures, but they should exist.
\subsubsection{Aristotelian geometry}
\label{sec:arist-geom}
Aristotelian geometries are also describable in terms of
$\mathscr{G}$-structures, where $\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ is the
intersection of any two of the groups defining galilean, carrollian
or lorentzian structures. Comparing the matrices in
equations~\eqref{eq:gal-structure-group} and
\eqref{eq:car-structure-group}, we see that $\mathscr{G} \cong \operatorname{O}(d)$
consists of matrices of the form
\begin{equation}\label{eq:ari-structure-group}
\begin{pmatrix}
1 & \boldsymbol{0}^T \\ \boldsymbol{0} & A
\end{pmatrix} \qquad\text{with}\qquad A \in \operatorname{O}(d).
\end{equation}
Choosing a basis $(P_0,P_1,\dots,P_d)$ for $\mathbb{R}^{d+1}$ and the canonical
dual basis $(\pi^0,\pi^1,\dots,\pi^d)$, we see that $P_0$ and $\pi^0$ are
invariant and so are $\delta^{ab} P_a P_b = P_1^2 + \cdots + P_d^2$
and $\delta_{ab} \pi^a \pi^b = (\pi^1)^2 + \cdots + (\pi^d)^2$.
A \textbf{(weak) aristotelian structure} on a ($d+1$)-dimensional
manifold $M$ is a $\mathscr{G}$-structure with $\mathscr{G} \subset \operatorname{GL}(d+1,\mathbb{R})$ the
subgroup of matrices of the form given in
equation~\eqref{eq:ari-structure-group}. The $\mathscr{G}$-invariant tensors
described above give rise to the following: a vector field $\xi$, a
one-form $\tau$, a symmetric $(0,2)$-tensor field $h$ and a symmetric
$(2,0)$-tensor field $\lambda$ in such a way that $(\tau,\lambda)$ and
$(\xi,h)$ are simultaneously a (weak) Newton--Cartan and (weak)
carrollian structure. The details about the classification of
aristotelian $\mathscr{G}$-structures via their intrinsic torsion can be found
in \cite[Section~5]{Figueroa-OFarrill:2020gpr} and a recent discussion
of aristotelian geometry in the context of fractons can be found in
\cite[Section~5]{Bidussi:2021nmp}.
\subsection{Coadjoint orbits}
\label{sec:coadjoint-orbits}
In this section we describe the method of coadjoint orbits in order to
write down particle actions.
\subsubsection{Adjoint and coadjoint actions}
\label{sec:adjo-coadj-acti}
Let $\mathscr{G}$ be a Lie group and let $\mathfrak{g}$ be its Lie algebra, whose dual
vector space is denoted $\mathfrak{g}^*$. We identify $\mathfrak{g}$ with the tangent
space $T_e\mathscr{G}$ to $\mathscr{G}$ at the identity. The identity is fixed
under conjugation by any $g \in \mathscr{G}$ and therefore the differential
of conjugation by $g$ defines a group homomorphism
$\operatorname{Ad} : \mathscr{G} \to \operatorname{GL}(\mathfrak{g})$ known as the \textbf{adjoint representation}.
For a matrix group $\mathscr{G}$, if $g \in \mathscr{G}$ and $X \in \mathfrak{g}$, the adjoint
action is simply matrix conjugation $\operatorname{Ad}_g X = g X g^{-1}$. The
adjoint representation on $\mathfrak{g}$ induces the \textbf{coadjoint
representation} $\operatorname{Ad}^*: \mathscr{G} \to \operatorname{GL}(\mathfrak{g}^*)$: if $g \in \mathscr{G}$ and
$\alpha \in \mathfrak{g}^*$, we have that
$\operatorname{Ad}^*_g \alpha = \alpha \circ \operatorname{Ad}_{g^{-1}}$. In other words, for all
$X \in \mathfrak{g}$, and using dual pairing notation:
\begin{equation}
\left<\operatorname{Ad}_g^* \alpha, X\right> = \left<\alpha, \operatorname{Ad}_{g^{-1}} X\right>.
\end{equation}
Infinitesimally, we have the adjoint $\operatorname{ad} : \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ and
coadjoint $\operatorname{ad}^* : \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}^*)$ representations of the Lie
algebra, defined by $\operatorname{ad}_X Y = [X,Y]$ and $\operatorname{ad}^*_X \alpha = -\alpha
\circ \operatorname{ad}_X$ for all $X,Y \in \mathfrak{g}$ and $\alpha \in \mathfrak{g}^*$. This last
condition can be expressed in dual pairing notation as
\begin{equation}\label{eq:inf-coadjoint}
\left<\operatorname{ad}^*_X \alpha, Y\right> = - \left<\alpha, [X,Y]\right>.
\end{equation}
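As a simple illustration (a numerical sketch in Python with NumPy), take $\mathfrak{g} = \mathfrak{so}(3) \cong \mathbb{R}^3$, with bracket the cross product and $\left<\alpha,X\right> = \alpha \cdot X$. Equation~\eqref{eq:inf-coadjoint} then gives $\operatorname{ad}^*_X \alpha = X \times \alpha$, so that $|\alpha|^2$ is constant along the orbit and the coadjoint orbits are the spheres centred at the origin (together with the origin itself):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X, Y, alpha = rng.normal(size=(3, 3))

# so(3) ~ R^3: [X,Y] = X x Y, <alpha,X> = alpha . X, so ad*_X alpha = X x alpha
coad = np.cross(X, alpha)

# the defining property <ad*_X alpha, Y> = -<alpha,[X,Y]>
assert np.isclose(coad @ Y, -alpha @ np.cross(X, Y))

# |alpha|^2 is constant along the orbit: the orbit lies on a sphere
assert np.isclose(alpha @ coad, 0.0)
\end{verbatim}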
\subsubsection{The symplectic structure on a coadjoint orbit}
\label{sec:sympl-struct-coadj}
Let $\mathscr{O}_\alpha$ be the \textbf{coadjoint orbit} of $\alpha \in \mathfrak{g}^*$;
that is,
\begin{equation}
\mathscr{O}_\alpha = \left\{ \operatorname{Ad}^*_g \alpha ~ \middle | ~ g \in \mathscr{G}\right\}.
\end{equation}
A fundamental property of coadjoint orbits is that they admit a
$\mathscr{G}$-invariant symplectic structure, given by the
Kirillov--Kostant--Souriau symplectic form $\omega_{\text{KKS}}$.
There are several ways to describe this symplectic form. Perhaps the
simplest description is in terms of the corresponding Poisson
brackets. Every $X \in \mathfrak{g}$ defines a linear function $\ell_X$ on
$\mathfrak{g}^*$ by $\ell_X(\alpha) = \left<\alpha,X\right>$ for all $\alpha \in
\mathfrak{g}^*$. We may restrict the $\ell_X$ to smooth functions on the
coadjoint orbit $\mathscr{O}_\alpha$. Their differentials $d\ell_X$ span the
cotangent space to the orbit at any point in the orbit. Therefore the
Poisson bivector $\Pi_{\text{KKS}}$ dual to the symplectic form (and
hence the symplectic form itself) is uniquely determined by its value
on the $d\ell_X$. These are given by the Lie algebra itself:
\begin{equation}
\Pi_{\text{KKS}}(d\ell_X, d\ell_Y) = \left\{\ell_X,\ell_Y \right\}_{\text{KKS}} = \ell_{[X,Y]},
\end{equation}
and hence the Jacobi identity follows from that of the Lie algebra. The
Jacobi identity for the Poisson brackets is equivalent to the closure
of the $2$-form inverse of the Poisson bivector. The functions
$\ell_X$ are hamiltonians for the $\mathscr{G}$-action on $\mathscr{O}_\alpha$: the
hamiltonian vector fields $\left\{ \ell_X,- \right\}$ generate the
infinitesimal action of $\mathscr{G}$ on $\mathscr{O}_\alpha$. To show this, let
$\zeta_X \in \mathscr{X}(\mathscr{O}_\alpha)$ be the vector fields which generate the
$\mathscr{G}$ action: $\zeta_X(\alpha) = \operatorname{ad}^*_X \alpha$. It follows that
\begin{equation}
\zeta_X (\ell_Y)(\alpha) = \left<\alpha, [X,Y] \right> = \ell_{[X,Y]}(\alpha).
\end{equation}
A different description, in the spirit of
Section~\ref{sec:line-isotr-repr}, is via the holonomy principle for
homogeneous spaces. A $\mathscr{G}$-invariant $2$-form
$\omega \in \Omega^2(\mathscr{O}_\alpha)$ determines and is determined by a
$\mathscr{G}_\alpha$-invariant
$\omega_\alpha \in \wedge^2 T^*_\alpha \mathscr{O}_\alpha$, where
$\mathscr{G}_\alpha \subset \mathscr{G}$ is the stabiliser of $\alpha$. Every $X
\in \mathfrak{g}$ defines a vector field on $\mathscr{O}_\alpha$ and, evaluating it at
$\alpha$, gives a tangent vector there. This defines a linear map $\mathfrak{g}
\to T_\alpha\mathscr{O}_\alpha$, sending $X \in \mathfrak{g}$ to $\operatorname{ad}^*_X\alpha$, which is
surjective since $\mathscr{O}_\alpha$ is an orbit. The kernel of this map is
the Lie algebra $\mathfrak{g}_\alpha$ of the stabiliser group $\mathscr{G}_\alpha$ of
$\alpha$. This shows that $T_\alpha \mathscr{O}_\alpha$ is isomorphic to
$\mathfrak{g}/\mathfrak{g}_\alpha$. Denoting the quotient map $\mathfrak{g} \to \mathfrak{g}/\mathfrak{g}_\alpha$ by
$X \mapsto \overline X$, we have that
\begin{equation}
\omega_\alpha(\overline X, \overline Y) := \left<\alpha, [X,Y]\right>.
\end{equation}
It is not hard to check that the RHS only depends on $X,Y$ modulo
$\mathfrak{g}_\alpha$ and that $\omega_\alpha$ is non-degenerate. The resulting
$\mathscr{G}$-invariant $2$-form, denoted $\omega_{\text{KKS}}$, is closed:
indeed, its pull-back $\pi^*\omega_{\text{KKS}} \in \Omega^2(\mathscr{G})$
to $\mathscr{G}$ under the orbit map $\pi: \mathscr{G} \to \mathscr{O}_\alpha$ sending $g
\mapsto \operatorname{Ad}^*_g \alpha$ is not just closed but in fact exact:
\begin{equation}\label{eq:KKS-exact-pullback}
\pi^*\omega_{\text{KKS}} = -d \left<\alpha, \vartheta\right>,
\end{equation}
where $\vartheta$ is the left-invariant Maurer--Cartan one-form on
$\mathscr{G}$. This shows that $\omega_{\text{KKS}}$ is a $\mathscr{G}$-invariant
symplectic form on $\mathscr{O}_\alpha$.
Finally, perhaps the most conceptual reason why coadjoint orbits
are symplectic manifolds is that they arise by symplectic reduction
from the canonical symplectic structure on the cotangent bundle
$T^*\mathscr{G}$, which is the phase space of the Lie group $\mathscr{G}$ thought of
as a configuration space. Any diffeomorphism of $\mathscr{G}$ induces a
diffeomorphism of $T^*\mathscr{G}$ which preserves the symplectic form. In
particular, the symplectic form on $T^*\mathscr{G}$ is invariant under the
diffeomorphisms induced from left- and right-multiplications in
$\mathscr{G}$. The existence of left- and right-invariant vector fields on
Lie groups says that $\mathscr{G}$ is parallelisable and hence that $T^*\mathscr{G}$
is trivial: that is, $T^*\mathscr{G} \cong \mathscr{G} \times \mathfrak{g}^*$. There are two
natural trivialisations: one using left-multiplication and the other
using right-multiplication. Let us use left-multiplication to
trivialise $T^*\mathscr{G}$ and hence identify it with $\mathscr{G} \times \mathfrak{g}^*$.
The cartesian projection $\mathscr{G} \times \mathfrak{g}^* \to \mathfrak{g}^*$ defines a
function $\mu : T^*\mathscr{G} \to \mathfrak{g}^*$ which is $\mathscr{G}$-equivariant: it
intertwines between the action of $\mathscr{G}$ on $T^*\mathscr{G}$ induced by
left-multiplication and the coadjoint action of $\mathscr{G}$ on $\mathfrak{g}^*$.
Pick $\alpha \in \mathfrak{g}^*$ and consider $\mu^{-1}(\alpha)$. These are all
the points in $\mathscr{G} \times \mathfrak{g}^*$ of the form $(g,\alpha)$ for any $g \in
\mathscr{G}$ and hence it is a copy of $\mathscr{G}$. This copy of $\mathscr{G}$ in
$T^*\mathscr{G}$ is preserved by the stabiliser $\mathscr{G}_\alpha$ of $\alpha$.
Quotienting gives the symplectic quotient
$\mu^{-1}(\alpha)/\mathscr{G}_\alpha$ which is a symplectic manifold
diffeomorphic to $\mathscr{G}/\mathscr{G}_\alpha$ or, equivalently, to the coadjoint
orbit $\mathscr{O}_\alpha$. The resulting symplectic form on $\mathscr{O}_\alpha$ is
uniquely characterised by the fact that its pull-back to
$\mu^{-1}(\alpha)$ agrees with the restriction to $\mu^{-1}(\alpha)$
of the canonical symplectic form on $T^*\mathscr{G}$ and a calculation shows
that this is again $\omega_{\text{KKS}}$.
In summary, coadjoint orbits of a group $\mathscr{G}$ are homogeneous
symplectic manifolds of $\mathscr{G}$. There is a partial converse to this
result, which roughly speaking says that all homogeneous symplectic
manifolds are coadjoint orbits. More precisely, one has the following
``folkloric'' theorem, proved recently in \cite{Beckett:2022wvo}.
\begin{mainthm}
Let $\mathscr{G}$ be a connected Lie group and $(M,\omega)$ a
simply-connected homogeneous symplectic manifold of $\mathscr{G}$. Then
there exists a covering $\pi: (M,\omega) \to (\mathscr{O},\omega_{\text{KKS}})$,
with $\mathscr{O}$ a coadjoint orbit of a one-dimensional central extension
of $\mathscr{G}$, such that $\pi^*\omega_{\text{KKS}} = \omega$.
\end{mainthm}
\subsubsection{Elementary classical systems}
\label{sec:elem-class-syst}
Homogeneous symplectic manifolds of a Lie group $\mathscr{G}$ are the
\textbf{elementary classical systems} with symmetry $\mathscr{G}$, or perhaps
more colloquially, the \textbf{elementary particles} with symmetry
$\mathscr{G}$ in the nomenclature of Souriau \cite{Souriau}. The above
theorem implies that they are locally symplectomorphic to coadjoint
orbits of $\mathscr{G}$ or possibly a one-dimensional central\footnote{The
extension has to be central since the coadjoint orbit of a
non-central extension of a Lie group $\mathscr{G}$ does not admit a natural
action of $\mathscr{G}$.} extension of $\mathscr{G}$. As shown by
Souriau~\cite{Souriau}, whether or not we need to centrally extend the
group comes down to the symplectic cohomology of $\mathscr{G}$, which we now
describe briefly.
A smooth function $\theta: \mathscr{G} \to \mathfrak{g}^*$ allows us to define an
affinisation of the coadjoint representation
\begin{equation}\label{eq:affine-action}
g \cdot \alpha := \operatorname{Ad}_g^*\alpha + \theta(g).
\end{equation}
This defines an affine action $g_1 \cdot (g_2 \cdot \alpha) = (g_1g_2)
\cdot \alpha$ precisely when $\theta$ obeys the cocycle condition
\begin{equation}
\theta(g_1g_2) = \operatorname{Ad}_{g_1}^* \theta(g_2) + \theta(g_1).
\end{equation}
Differentiating $\theta$ at the identity gives a linear map $d_e\theta
: \mathfrak{g} \to \mathfrak{g}^*$ and hence a bilinear form $c$ on the Lie algebra
defined by
\begin{equation}
c(X,Y) = \left<(d_e\theta)(X),Y\right>.
\end{equation}
We say that $\theta$ is a \emph{symplectic cocycle} if it satisfies
the cocycle condition and $c(X,Y) = - c(Y,X)$. In that case,
$c \in \wedge^2\mathfrak{g}^*$ is a Chevalley--Eilenberg cocycle and hence
defines a central extension of $\mathfrak{g}$, which is trivial if and only if
there exists $\beta \in \mathfrak{g}^*$ such that
$c(X,Y) = - \left<\beta, [X,Y]\right>$. A symplectic cocycle defines
a class in the \emph{symplectic cohomology}
$H_{\text{symp}}^1(\mathscr{G},\mathfrak{g}^*)$, which is trivial if and only if there
exists $\beta \in \mathfrak{g}^*$ such that $\theta(g) = \operatorname{Ad}_g^* \beta - \beta$.
In that case, the affine coadjoint action~\eqref{eq:affine-action}
becomes equivalent to the linear coadjoint representation, being
simply the conjugation of the linear coadjoint representation by the
constant translation $\alpha \mapsto \alpha - \beta$.
As shown by Souriau, every time a Lie group $\mathscr{G}$ acts symplectically
on a symplectic manifold $(M,\omega)$ and assuming that the
fundamental vector fields are hamiltonian, we get a class in the
symplectic cohomology of $\mathscr{G}$. Indeed, consider the linear map
$\varphi: \mathfrak{g} \to C^\infty(M)$, sending $X \mapsto \varphi_X$, where the
hamiltonian vector field $\left\{-, \varphi_X \right\}$ generates the
infinitesimal action of $\mathscr{G}$. Dual to $\varphi$ we have
the\footnote{\emph{moment} in the original French and also in part of
the symplectic geometry literature} \emph{momentum map}
$\mu : M \to \mathfrak{g}^*$, defined by $\left<\mu,X\right> = \varphi_X$,
originally introduced by Souriau. The symplectic cocycle
$\theta: \mathscr{G} \to \mathfrak{g}^*$ measures the failure of the momentum map to be
equivariant relative to the coadjoint representation:
\begin{equation}\label{eq:symplectic-cocycle}
\theta(g) := \operatorname{Ad}_g^* \mu(p) - \mu(g\cdot p)
\end{equation}
for any $p \in M$. Surprisingly, perhaps, assuming that $M$ is
connected, this does not depend on $p$. If the symplectic cohomology
class of $\theta$ is trivial, so that $\theta(g) = \operatorname{Ad}_g^*\beta -
\beta$ for some fixed $\beta \in \mathfrak{g}^*$, we can translate $\mu \mapsto
\mu - \beta$ in such a way that the translated $\mu$ is equivariant:
\begin{equation}
\mu(g\cdot p) - \beta = \operatorname{Ad}_g^* (\mu(p) - \beta).
\end{equation}
This modifies the functions $\varphi_X$ by constants $\varphi_X
\mapsto \varphi_X - \left<\beta,X\right>$, which do not change the
hamiltonian vector fields.
Suppose that $\mathscr{G}$ acts transitively on $M$, so that $M$ is one orbit
of $\mathscr{G}$. If the class of $\theta$ (given in
\eqref{eq:symplectic-cocycle}) in symplectic cohomology vanishes, then
the image of the moment map $\mu : M \to \mathfrak{g}^*$ is a coadjoint orbit of
$\mathscr{G}$. One can show that $\mu$ is a covering map and hence $M$
covers a coadjoint orbit of $\mathscr{G}$. If $\mathscr{G}$ has vanishing
symplectic cohomology, all elementary systems with symmetry $\mathscr{G}$ are
(up to covering) coadjoint orbits of $\mathscr{G}$. In contrast, if the
symplectic cohomology of $\mathscr{G}$ is not zero, then some elementary
systems with symmetry $\mathscr{G}$ do not cover coadjoint orbits of $\mathscr{G}$
but of a one-dimensional central extension of $\mathscr{G}$.
For example, the symplectic cohomology of the Poincaré group vanishes,
so that Poincaré-invariant classical elementary systems (i.e.,
particles) are classified by the coadjoint orbits of the Poincaré
group. In contrast, the Galilei group does have nontrivial symplectic
cohomology and hence Galilei-invariant particles are classified by
coadjoint orbits of the Bargmann group, the one-dimensional central
extension of the Galilei group already discussed (at the Lie algebraic
level) in Section~\ref{sec:central-extensions}.
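At the Lie-algebraic level this is easy to see: the relevant
Chevalley--Eilenberg cocycle can be taken to be $c(B_a,P_b) =
\delta_{ab}$, with all other pairings zero. Since $[B_a,P_b] = 0$ in
the Galilei algebra, there is no $\beta \in \mathfrak{g}^*$ with
$c(B_a,P_b) = -\left<\beta,[B_a,P_b]\right>$, so the cocycle is
nontrivial, and the resulting central extension is the Bargmann
algebra, with central mass generator $M$ and bracket
$[B_a,P_b] = \delta_{ab} M$.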
\subsubsection{Coadjoint orbits from geodesic motion}
\label{sec:coadj-orbits-from}
In the context of lorentzian geometry we can understand the emergence
of the coadjoint orbits as follows. Suppose that $(M,g)$ is a
lorentzian manifold and let $\gamma$ be an affinely parametrised
geodesic for the Levi-Civita connection; i.e., a solution of
$\frac{D\dot\gamma}{dt} = 0$. If $\xi \in \mathscr{X}(M)$ is a Killing
vector field, then the inner product $g(\xi,\dot\gamma)$ is constant
along the geodesic. Let $\mathscr{G}$ be the isometry group and let $\mathfrak{g}$ be
its Lie algebra. Then to every $X \in \mathfrak{g}$ we associate a Killing
vector field $\xi_X$ and therefore every geodesic defines a
momentum $\mu$ in $\mathfrak{g}^*$; namely,
the linear map $\mu: \mathfrak{g} \to \mathbb{R}$ defined by $\left<\mu,X\right> =
g(\xi_X,\dot\gamma)$. For every $a \in \mathscr{G}$, let $\phi_a : M \to M$ be
the corresponding isometry and suppose that $\gamma$ is a geodesic.
Then $\phi_a \circ \gamma$ is also a geodesic and its momentum is
given by $\operatorname{Ad}^*_a \mu$, where $\mu$ is the momentum of $\gamma$. The
collection of momenta corresponding to all geodesics which are related
to $\gamma$ by an isometry define the coadjoint orbit of the momentum
of $\gamma$.
As an example, consider affinely parametrised geodesics in Minkowski
spacetime $(\mathsf{M},\eta)$. In flat coordinates $x^\mu$, they are given by
straight lines $x^\mu(\lambda) = a^\mu + \lambda k^\mu$. Therefore
$\dot\gamma = \dot x^\mu \d_\mu = k^\mu\d_\mu$ and the momentum $\mu$
is given by the linear function sending
\begin{equation}
P_\mu \mapsto \eta(\d_\mu, \dot\gamma) = \eta(\d_\mu, k^\nu \d_\nu) = k^\nu
\eta_{\mu\nu} = k_\mu
\end{equation}
and
\begin{equation}
L_{\mu\nu} \mapsto \eta(x_\mu\d_\nu - x_\nu\d_\mu, \dot\gamma) =
\eta(x_\mu\d_\nu - x_\nu\d_\mu, k^\rho\d_\rho) = a_\mu k_\nu - a_\nu k_\mu,
\end{equation}
which are the linear and relativistic angular momenta, respectively,
of the particle. The quadratic function
$P^2 = \eta^{\mu\nu}P_\mu P_\nu$ on $\mathfrak{g}^*$ which takes the value $k^2$
on the momentum $\mu$ is constant on the coadjoint orbit and
corresponds to $-m^2$, where $m$ is the particle mass. Acting with the
translations in the Poincaré group on the geodesic, we can set
$a^\mu = 0$ and acting with the Lorentz transformations which preserve
the origin, we can bring $k^\mu$ to any desired point on the
mass-shell $k^2 = -m^2$. For $m\neq 0$, we can take
$k^\mu = (m,0,0,0)$, and for $k^2 = 0$ we can take
$k^\mu = (1,0,0,1)$, for instance.
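These momenta describe a spinless particle, as can be checked numerically: for momenta of this form, the standard Pauli--Lubanski vector $W^\mu = \tfrac12 \epsilon^{\mu\nu\rho\sigma} J_{\nu\rho} k_\sigma$ built from $J_{\mu\nu} = a_\mu k_\nu - a_\nu k_\mu$ vanishes identically. A small sketch (in Python with NumPy; the random $a$ and spatial momentum are arbitrary):
\begin{verbatim}
import numpy as np
from itertools import permutations

def sign(p):
    s, p = 1, list(p)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

E = np.zeros((4, 4, 4, 4))    # the epsilon symbol, eps(0,1,2,3) = 1
for p in permutations(range(4)):
    E[p] = sign(p)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(1)

m = 2.0
p3 = rng.normal(size=3)
k = np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))   # k^2 = -m^2
a = rng.normal(size=4)

k_lo, a_lo = eta @ k, eta @ a
J = np.outer(a_lo, k_lo) - np.outer(k_lo, a_lo)       # J_{mu nu}

assert np.isclose(k_lo @ k, -m**2)                    # mass-shell invariant
W = 0.5 * np.einsum('mnrs,nr,s->m', E, J, k_lo)       # Pauli-Lubanski vector
assert np.allclose(W, 0.0)                            # spinless orbit
\end{verbatim}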
As an example, consider the geodesic traced by a massive particle with
mass $m$ in the rest frame, whose momentum is $\mu = m \pi^0$ relative
to the basis $\pi^\mu,\lambda^{\mu\nu}$ for $\mathfrak{g}^*$ canonically dual to
the basis $P_\mu, L_{\mu\nu}$ for $\mathfrak{g}$. Relative to this basis, the
infinitesimal coadjoint action is given by
\begin{equation}
\begin{split}
\operatorname{ad}^*_{L_{\mu\nu}} \pi^\rho &= \delta^\rho_\nu \pi_\mu - \delta^\rho_\mu \pi_\nu\\
\operatorname{ad}^*_{P_\mu} \pi^\rho &= \lambda^\rho{}_\mu\\
\operatorname{ad}^*_{L_{\mu\nu}} \lambda^{\alpha\beta} &= \delta^\alpha_\nu \lambda_\mu{}^\beta - \delta^\alpha_\mu \lambda_\nu{}^\beta - \delta^\beta_\nu \lambda_\mu{}^\alpha + \delta^\beta_\mu \lambda_\nu{}^\alpha\\
\operatorname{ad}^*_{P_\mu} \lambda^{\alpha\beta} &= 0,
\end{split}
\end{equation}
where we have raised and lowered indices with $\eta_{\mu\nu}$. This
allows us to determine the stabiliser subalgebra of $\mu = m \pi^0$,
which is seen to be spanned by $L_{12},L_{13}, L_{23},P_0$: that is,
by the infinitesimal generators of rotations and time translations.
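Explicitly, $\operatorname{ad}^*_{L_{ab}} \pi^0 = \delta^0_b \pi_a - \delta^0_a \pi_b
= 0$ and $\operatorname{ad}^*_{P_0} \pi^0 = \lambda^0{}_0 = 0$ (by the antisymmetry
of $\lambda^{\mu\nu}$), whereas $\operatorname{ad}^*_{L_{0a}} \pi^0 = -\pi_a \neq 0$
and $\operatorname{ad}^*_{P_a} \pi^0 = \lambda^0{}_a \neq 0$, so that neither the
boosts nor the spatial translations stabilise $\mu$.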
These generate a subgroup of the Poincaré group isomorphic to
$\operatorname{SO}(3) \times \mathbb{R}$. The coadjoint orbit $\mathscr{O}$ is a homogeneous space
of the Poincaré group with Klein pair $(\mathfrak{iso}(3,1),\mathfrak{so}(3)\oplus \mathbb{R})$
and can be identified with the cotangent bundle of (one sheet of) the
mass-shell hyperboloid. The homogeneous space with Klein pair
$(\mathfrak{iso}(3,1), \mathfrak{so}(3))$ is, in the language of Souriau~\cite{Souriau},
the \emph{evolution space} $\mathscr{E}$ of the free massive particle. It is a
principal bundle over the coadjoint orbit $\mathscr{O}$ with structure group
the one-dimensional group generated by time translations (in Minkowski
spacetime). If we let $\varpi : \mathscr{E} \to \mathscr{O}$ denote the bundle
projection, the pullback $\sigma := \varpi^*\omega_{\text{KKS}}$ of
the symplectic structure defines a pre-symplectic structure on
$\mathscr{E}$. It is a closed degenerate two-form on $\mathscr{E}$ whose kernel $\ker
\sigma$ defines a rank-one integrable distribution whose leaves are
the trajectories of the massive particle. In Souriau's language, but
going back to Lagrange, the coadjoint orbit is the \emph{space of
motions} of the massive particle.
\subsubsection{Particle actions from coadjoint orbits}
\label{sec:part-acti-from}
The trajectory of a free particle defines a point in the space of
motions but a curve in the evolution space. Therefore if we wish to
define a variational problem whose extrema are the trajectories of a
free particle, the lagrangian should be defined on the evolution
space. We shall assume that just like the space of motions, the
evolution space is also a homogeneous space of the symmetry group
$\mathscr{G}$ under discussion. In the example above, $\mathscr{G}$ is the Poincaré
group and for the massive spinless particle, it is indeed the case
that both the space of motions and the evolution space are homogeneous
spaces of $\mathscr{G}$.
In this more general discussion, we shall assume that the space of
motions is a coadjoint orbit $\mathscr{O}_\alpha$ of $\mathscr{G}$ and we shall let
$\mathscr{G}_\alpha \subset \mathscr{G}$ denote the stabiliser subgroup of $\alpha$.
We shall let $\varpi : \mathscr{E} \to \mathscr{O}_\alpha$ be the projection sending a
point $p \in \mathscr{E}$ to the unique trajectory passing through $p$, which
is a point in the space of motions. This projection is
$\mathscr{G}$-equivariant, so that $\varpi(g \cdot p) = \operatorname{Ad}_g^* \varpi(p)$.
Choosing a point $o \in \mathscr{E}$ with $\varpi(o) = \alpha$, we have a
commuting triangle
\begin{equation}
\begin{tikzcd}
\mathscr{G} \arrow[r,"\widehat\pi"] \arrow[dr,"\pi"] & \mathscr{E}
\arrow[d,"\varpi"]\\
& \mathscr{O}_\alpha,
\end{tikzcd}
\end{equation}
where $\widehat\pi :\mathscr{G} \to \mathscr{E}$ and $\pi : \mathscr{G} \to \mathscr{O}_\alpha$ are
the orbit maps: $\widehat\pi(g) = g \cdot o$ and $\pi(g) = \operatorname{Ad}^*_g
\alpha$, respectively. Commutativity of the triangle says that $\pi
= \varpi \circ \widehat\pi$. We will let $\mathscr{G}_o \subset \mathscr{G}$ be the
stabiliser subgroup of $o \in \mathscr{E}$ and we observe that $\mathscr{G}_o \subset
\mathscr{G}_\alpha$. Indeed, if $g \in \mathscr{G}_o$, then
\begin{equation}
\alpha = \varpi(o) = \varpi(g \cdot o) = \operatorname{Ad}_g^* \varpi(o) = \operatorname{Ad}_g^* \alpha,
\end{equation}
where we have used the equivariance of $\varpi$. Let
$\sigma = \varpi^*\omega_{\text{KKS}} \in \Omega^2(\mathscr{E})$ and $\omega =
\pi^*\omega_{\text{KKS}} \in \Omega^2(\mathscr{G})$. Then the commutativity of
the above triangle implies that
\begin{equation}
\widehat\pi^*\sigma = \widehat\pi^* \varpi^* \omega_{\text{KKS}} =
(\varpi \circ \widehat\pi)^*\omega_{\text{KKS}} =
\pi^*\omega_{\text{KKS}} = \omega.
\end{equation}
Now let $I \subset \mathbb{R}$ be an interval with parameter $\lambda$ and let
$\gamma : I \to \mathscr{E}$ be a curve in the evolution space passing through
$o$. It is a physical trajectory if and only if
$\dot\gamma \in \ker \sigma$, so that $\varpi(\gamma(\lambda))$ is
constant and equal to $\varpi(o) = \alpha$. We now set up a
variational problem whose extremals are precisely such curves.
Any curve $\gamma : I \to \mathscr{E}$ may be lifted to a curve
$\widehat\gamma :I \to \mathscr{G}$ in the group so that
$\widehat\gamma(\lambda) \cdot o = \gamma(\lambda)$. This lift is not
unique, since we may multiply on the right with any $h : I \to
\mathscr{G}_o$. Indeed,
\begin{equation}
(\widehat\gamma h)(\lambda) \cdot o = \left(\widehat\gamma(\lambda)
h(\lambda)\right)\cdot o = \widehat\gamma(\lambda) \cdot h(\lambda)
\cdot o = \widehat\gamma(\lambda) \cdot o = \gamma(\lambda).
\end{equation}
Recall that $\widehat\pi^*\sigma = \omega = \pi^*\omega_{\text{KKS}}$
and hence by equation~\eqref{eq:KKS-exact-pullback}, it is exact:
\begin{equation}
\widehat\pi^*\sigma = -d \left<\alpha, \vartheta\right>.
\end{equation}
We may define an action functional for $\gamma: I
\to \mathscr{E}$ by lifting the curve to the group $\widehat\gamma : I \to
\mathscr{G}$ and defining
\begin{equation}\label{eq:particle-functional}
S[\widehat\gamma] := \int_I \left<\alpha, \widehat\gamma^*\vartheta\right>.
\end{equation}
At first sight it seems that this depends on the lift
$\widehat\gamma$, but notice that under a gauge transformation
$\widehat\gamma \mapsto \widehat\gamma h$, the above action transforms
as
\begin{equation}
S[\widehat\gamma h] = S[\widehat\gamma] + S[h],
\end{equation}
where $S[h]$ does not depend on $\widehat\gamma$. We thus conclude that
the variational problem for the action
functional~\eqref{eq:particle-functional} is independent of the lift
and, therefore, defines an action functional for curves
$\gamma : I \to \mathscr{E}$.
Varying the action functional we find, using the Maurer--Cartan
structure equation $d\vartheta = - \tfrac12 [\vartheta,\vartheta]$, that
\begin{equation}
\delta S[\widehat\gamma] = - \int_I \left<\alpha,
\left[\widehat\gamma^{-1}\dot{\widehat\gamma},
\widehat\gamma^{-1}\delta\widehat\gamma\right]\right> d\lambda = \int_I
\left<\operatorname{ad}^*_{\widehat\gamma^{-1}\dot{\widehat\gamma}} \alpha,
\widehat\gamma^{-1}\delta\widehat\gamma\right> d\lambda,
\end{equation}
where we have used equation~\eqref{eq:inf-coadjoint}. This vanishes
for all variations if and only if
$\operatorname{ad}^*_{\widehat\gamma^{-1}\dot{\widehat\gamma}} \alpha = 0$, so that
$\widehat\gamma^{-1}\dot{\widehat\gamma} =
\vartheta(\dot{\widehat\gamma}) \in \mathfrak{g}_\alpha$. We claim that this is
equivalent to $\dot{\widehat\gamma} \in \ker \omega$. Indeed,
\begin{equation}
\begin{split}
\imath_{\dot{\widehat\gamma}}\omega &= -
\imath_{\dot{\widehat\gamma}} d \left<\alpha, \vartheta\right>\\
&= \tfrac12 \imath_{\dot{\widehat\gamma}} \left<\alpha,
[\vartheta,\vartheta]\right>\\
&= \left<\alpha, [\vartheta(\dot{\widehat\gamma}),
\vartheta]\right>\\
&= - \left<\operatorname{ad}^*_{\vartheta(\dot{\widehat\gamma})} \alpha, \vartheta\right>,
\end{split}
\end{equation}
which vanishes if and only if $\vartheta(\dot{\widehat\gamma}) \in
\mathfrak{g}_\alpha$. Finally, we observe that $\dot{\widehat\gamma} \in \ker
\omega$ if and only if $\dot\gamma \in \ker \sigma$.
In summary, free particle motion with momentum in the coadjoint orbit
$\mathscr{O}_\alpha$ defines a curve in the evolution space which is an
extremal of the action functional \eqref{eq:particle-functional}. In
the following section we will see several examples of this
construction, but before doing that let us make an important remark.
As observed in Section~\ref{sec:homog-kinem-spac}, the same
kinematical Lie group might have inequivalent homogeneous spacetimes.
For example, the Poincaré group has both Minkowski and
anti~de~Sitter--Carroll as homogeneous spaces. Since coadjoint orbits
are a property of the group, their interpretation as the space of
motions of a particle in a homogeneous space requires additional
information. In this example, the Poincaré coadjoint orbit $\mathscr{O}$ of
$\alpha = m \pi^0$ can be interpreted as the space of motions of a
spinless particle of mass $m$ in Minkowski spacetime. What is its
interpretation in terms of anti~de~Sitter--Carroll spacetime? The
evolution space $\varpi: \mathscr{E} \to \mathscr{O}$ is also common to both Minkowski
and anti~de~Sitter--Carroll spacetimes, but it admits projections to
both spacetimes. In terms of their Klein pairs, with $\mathfrak{g}$ standing
for the Poincaré algebra and $\mathfrak{h}_{\mathscr{O}} = \left<L_{ab},H\right>$,
$\mathfrak{h}_{\mathscr{E}} = \left<L_{ab}\right>$, $\mathfrak{h}_{\mathsf{M}} = \left<L_{ab},
B_a\right>$ and $\mathfrak{h}_{\mathsf{AdSC}} = \left<L_{ab}, P_a\right>$, we
have the following maps:
\begin{equation}
\begin{tikzcd}
& \mathscr{E} \arrow[ld] \arrow[rd] \arrow[d] & \\
\mathsf{M} & \mathscr{O} & \mathsf{AdSC}
\end{tikzcd}
\qquad\qquad
\begin{tikzcd}
& (\mathfrak{g},\mathfrak{h}_{\mathscr{E}}) \arrow[ld] \arrow[rd] \arrow[d] & \\
(\mathfrak{g},\mathfrak{h}_{\mathsf{M}}) & (\mathfrak{g},\mathfrak{h}_{\mathscr{O}}) & (\mathfrak{g}, \mathfrak{h}_{\mathsf{AdSC}}).
\end{tikzcd}
\end{equation}
A point in the coadjoint orbit $\mathscr{O}$ lifts to a curve in the evolution
space $\mathscr{E}$ and this projects to a curve in Minkowski spacetime
$\mathsf{M}$ or to a curve in anti~de~Sitter--Carroll spacetime
$\mathsf{AdSC}$. The curve in Minkowski spacetime corresponds to the
trajectory of a massive spinless particle, since that is how we
arrived at this space of motions. We may similarly interpret the
curve in anti~de~Sitter--Carroll spacetime as the trajectory of a
carrollian particle. We will see this example in detail below.
\section{Dynamics}
\label{sec:dynamics}
In this section we will construct dynamical systems living in
some of the homogeneous kinematical spacetimes introduced in
Section~\ref{sec:homog-kinem-spac}. We will focus on the construction
of particle actions in various dimensions for some of these
spacetimes. We will use the techniques of nonlinear
realisations \cite{Coleman:1969sm,Callan:1969sn} and the coadjoint
orbit method \cite{MR1066693,Souriau} described in
Section~\ref{sec:coadjoint-orbits}. Although we are in the context of
particle dynamics, the language of nonlinear realisations borrows from
its original use in quantum field theory.
In the current context and at its most basic, a nonlinear realisation
of a Lie group $\mathscr{G}$ is a smooth transitive action of $\mathscr{G}$ on a
manifold $M$. As we discussed in
Section~\ref{sec:homogeneous-spaces}, once we choose an origin $o \in
M$, we get a diffeomorphism $M \cong \mathscr{G}/\mathscr{H}$, with $\mathscr{H}$ the
stabiliser of the origin. This diffeomorphism is $\mathscr{G}$-equivariant,
intertwining between the $\mathscr{G}$ action on $M$ and the $\mathscr{G}$ action on
$\mathscr{G}/\mathscr{H}$ given simply by left multiplication. Another choice of
origin would select a different subgroup $\mathscr{H}$, but any two such
subgroups are conjugate in $\mathscr{G}$ and hence the choice is immaterial.
Let us choose an origin once and for all and hence a description of
$M$ as the coset space $\mathscr{G}/\mathscr{H}$. As discussed in
Section~\ref{sec:klein-pairs}, such a nonlinear realisation of $\mathscr{G}$
is described (up to coverings) infinitesimally by a Klein pair
$(\mathfrak{g},\mathfrak{h})$, where $\mathfrak{g}$ is the Lie algebra of $\mathscr{G}$ and $\mathfrak{h} \subset \mathfrak{g}$
the Lie subalgebra corresponding to the subgroup $\mathscr{H}$. In the
language of nonlinear realisations, the subalgebra $\mathfrak{h}$ generates some
of the \emph{unbroken} symmetries.
One of the fundamental assumptions in the pioneering papers
\cite{Coleman:1969sm,Callan:1969sn} on nonlinear realisations is that
the group $\mathscr{G}$ is compact, connected and semisimple. Since $\mathscr{H}$
is a closed subgroup, it is also compact and hence any
finite-dimensional representation of $\mathscr{H}$ is completely reducible
into simple (i.e., irreducible) representations. In particular, we
can consider the restriction to $\mathscr{H}$ of the adjoint representation
of $\mathscr{G}$ on $\mathfrak{g}$. The subalgebra $\mathfrak{h}$ is a subrepresentation and
since $\mathscr{H}$ is compact, it has a complementary subrepresentation $\mathfrak{g}
= \mathfrak{h} \oplus \mathfrak{m}$, where $\mathfrak{m}$ is isomorphic to the tangent space of the
homogeneous space $M$ at the origin, not just as a vector space but as
a representation space of $\mathscr{H}$. (Recall that $\mathscr{H}$ acts on $T_oM$
via the linear isotropy representation.) In other words, the Klein pair
$(\mathfrak{g},\mathfrak{h})$ is reductive. The subspace $\mathfrak{m}$ is said to generate the
\emph{broken} symmetries and its elements are often referred to as
\emph{Goldstone bosons}.
Of course, as we have already seen, kinematical groups are certainly
not compact and seldom semisimple, so their Klein pairs $(\mathfrak{g},\mathfrak{h})$ need
not be reductive. However, as we saw in
Section~\ref{sec:homog-kinem-spac}, with one notable exception (the
lightcone), the Klein pairs of the kinematical spacetimes are
reductive and hence we can talk unambiguously about broken and
unbroken generators. In general, the broken generators are
equivalence classes in $\mathfrak{g}/\mathfrak{h}$.
Spacetimes are not the only nonlinear realisations of kinematical
groups that we will be interested in. In a sense, these describe the
vacua. When discussing particle dynamics, the mere existence of the
particle in the spacetime breaks the symmetry further. The resulting
nonlinear realisation can often be interpreted as the evolution space
(in the sense of Souriau \cite{Souriau}) of the particle dynamical
system. In the case of elementary particles (again in the sense of
Souriau), the evolution space fibres over a coadjoint orbit of the
kinematical group. And indeed, one of the approaches to elementary
particles in a given homogeneous spacetime $\mathscr{G}/\mathscr{H}$ is to classify
coadjoint orbits of $\mathscr{G}$, pass to their evolution spaces and project
onto the spacetime. This method was illustrated in
Section~\ref{sec:part-acti-from}, resulting in an explicit expression
for the particle action functional \eqref{eq:particle-functional}.
In practical terms, the method we will follow in this section is the
following.
\begin{enumerate}
\item We consider nonlinear realisations of a kinematical (or
closely related) group $\mathscr{G}$ on the evolution space of the
dynamical system corresponding to a particle propagating on a
homogeneous spacetime. Let the evolution space be the coset
manifold $\mathscr{G}/\mathscr{H}$, where $\mathscr{H}$ is a subgroup of unbroken
symmetries.
\item We then choose a basis of the Lie algebra $\mathfrak{h}$ and extend it to
a basis for $\mathfrak{g}$. If the Klein pair $(\mathfrak{g},\mathfrak{h})$ is reductive, we will
choose the basis in such a way that the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ is
preserved by the adjoint action of $\mathscr{H}$. Even if $(\mathfrak{g},\mathfrak{h})$ is not
reductive, we will choose a vector space complement $\mathfrak{m}$.
\item This choice of basis for $\mathfrak{m}$ gives local coordinates for
$\mathscr{G}/\mathscr{H}$ near the origin via (modified) exponential coordinates,
as discussed in Section~\ref{sec:expon-coord}. In effect, what these
local coordinates do is define a local coset representative $g:
\mathscr{G}/\mathscr{H} \to \mathscr{G}$, which in principle is only defined in a
neighbourhood of the origin.
\item We pull back the left-invariant Maurer--Cartan one form on
$\mathscr{G}$ via the local coset representative and obtain a $\mathfrak{g}$-valued
one-form $\Omega = g^{-1} dg$. Being $\mathfrak{g}$-valued it may be
decomposed into a component along $\mathfrak{h}$ and a component along the
chosen complement $\mathfrak{m}$: $\Omega = \Omega_{\mathscr{H}} +
\Omega_{\mathscr{G}/\mathscr{H}}$.
\item In order to obtain a particle lagrangian with the lowest number of
derivatives, we take a linear combination of the $\mathscr{H}$-invariant
components of $\Omega$ and pull them back to the interval
parametrising the worldline of the particle. In some cases, for
example that of the one-dimensional Schwarzian particle discussed in
Section~\ref{sec:schwarzian}, we may consider instead
$\mathscr{H}$-invariant quadratic expressions in the components of the
Maurer--Cartan form.
\end{enumerate}
We shall have ample opportunity to see how this process works in
practice in a number of examples, to which we now turn.
\subsection{Relativistic particle lagrangians}
\label{sec:rel-particles}
Here, we give a short exposition of the lagrangians of free spinless
particles built on the Poincaré group for spacetime dimensions $>3$
using nonlinear realisations, see e.g.~\cite{Gomis:2006xw}. Free
particles can be timelike, lightlike and tachyonic due to the causal
structure of Minkowski spacetime. The Poincaré algebra in $d+1$
spacetime dimensions, denoted $\mathfrak{iso}(d,1)$, is given in
\eqref{eq:poincare-brackets}, where now $A,B = 0,\dots, d$ and $\eta_{AB}$
is the mostly-plus Minkowski metric. We separate the time and space
indices according to $A = (0,a)$, with $a =1,\dots, d$. We may also
consider lightcone coordinates, where $A = (+,-,i)$ with
$i =1,\dots,d-1$.
\subsubsection{Massive particle}
\label{sec:massive-rel}
We begin with the construction from nonlinear realisations. For a
massive particle in the rest frame, the momentum eigenvalues take the
form $p_A=(m,0,0,\dots,0)$ with $SO(d)$ stabiliser in the Lorentz
group generated by rotations\footnote{The stabiliser in the Poincaré
group also contains the time translations generated by $P_0$.}. The Klein pair is
$(\mathfrak{iso}(d,1),\mathfrak{so}(d))$, which describes the
evolution space of a spinless massive particle in Minkowski spacetime,
as described in Section~\ref{sec:coadj-orbits-from} in the special
case of $d=3$.
The local subgroup $H$ of the nonlinear realisation is thus $SO(d)$
and we write the coset representative for the evolution space as
\begin{align}\label{eq:gP-massive}
g = \underbrace{e^{x^A P_A}}_{g_0} b,
\end{align}
where $g_0$ is a coset representative of Minkowski spacetime thought
of as the coset space $\text{Poincaré}/\text{Lorentz}$ and $b=e^{v^a
B_a}$ ($a=1,\dots, d$) is a general boost generated by those
boost generators $B_a := L_{0a}$ of the Lorentz group which are broken
due to the presence of a massive particle in Minkowski spacetime.
The pull-back of the Maurer--Cartan form of the Poincaré group is
\begin{equation}\label{eq:MC-one-form}
\Omega = g^{-1} dg = b^{-1} \Omega_0 b + b^{-1} db = \Omega_{(P)}^A
P_A + \tfrac12 \Omega^{AB}_{(M)} L_{AB},
\end{equation}
where $\Omega_0 = dx^A P_A = g_0^{-1}dg_0$ is the Maurer--Cartan form
of Minkowski spacetime.
For a relativistic massive particle in another background, for example
$\mathsf{AdS}$, the form of $\Omega_0$ will differ: it would be the pull-back
of the Maurer--Cartan form via a local coset representative for
$\mathsf{AdS}$.
The explicit form of the Maurer--Cartan forms are obtained by
computing the adjoint representation of the boost $b$ on the
generators of space-time translation $P_A$:
\begin{equation}\label{eq:adjrep}
b^{-1} P_A b = \Phi_A{}^B(v^{a})P_B,
\end{equation}
and
\begin{equation}\label{eq:adjrep1}
b^{-1} db= \tfrac12 \eta^{AB}\Phi_A{}^C(v^a) \,d \Phi_B{}^D(v^a) L_{CD},
\end{equation}
where $\Phi_A{}^B(v^a)$ is the fundamental representation of the
Lorentz group, which here depends on the $d$ boost parameters $v^a$.
Explicitly,
\begin{equation}
\begin{split}
b^{-1} P_0 b &= \cosh\|v\| P_0 - \frac{\sinh\|v\|}{\|v\|} v^a P_a\\
b^{-1} P_a b &= P_a - \frac{1 - \cosh\|v\|}{\|v\|^2} v_a v^b P_b -
\frac{\sinh\|v\|}{\|v\|} v_a P_0,
\end{split}
\end{equation}
where $\|v\|^2 = \delta_{ab}v^a v^b$.
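These closed forms are easily checked by machine.  The following short
Python sketch (an aside, not part of the derivation) verifies them
numerically for $d=2$, under the assumption that the
vector-representation boost generators take the standard mostly-plus
form $(K_a)^0{}_b = (K_a)^b{}_0 = \delta_{ab}$ and that
$\Phi(v^a) = \exp(-v^a K_a)$ in our conventions:
\begin{verbatim}
# Numerical check (d = 2) of the closed-form boost action on translations.
# Assumptions: mostly-plus vector-representation generators with
# (K_a)^0_b = (K_a)^b_0 = delta_ab, and Phi(v) = exp(-v^a K_a).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
v = rng.normal(size=2)              # boost parameters v^a
s = np.linalg.norm(v)

K = np.zeros((3, 3))                # v^a K_a in the vector representation
K[0, 1:] = v
K[1:, 0] = v

Phi = expm(-K)

Phi_closed = np.eye(3)              # closed forms quoted in the text
Phi_closed[0, 0] = np.cosh(s)
Phi_closed[0, 1:] = -np.sinh(s) / s * v
Phi_closed[1:, 0] = -np.sinh(s) / s * v
Phi_closed[1:, 1:] += (np.cosh(s) - 1) / s**2 * np.outer(v, v)

assert np.allclose(Phi, Phi_closed)
\end{verbatim}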
We want to construct a lagrangian in terms of the pull-back of the
Maurer--Cartan forms subject to two conditions: it should have the
lowest possible number of derivatives and it should be invariant under
the unbroken $SO(d)$ subgroup of the Lorentz group. One can therefore
choose any component of the Maurer--Cartan form which is invariant
under rotations. In this example, we can take the component
$\Omega_{(P)}^0$ along $P_0$. Therefore we take the lagrangian\footnote{The cases of
two- and three-dimensional spacetimes are special. In three
dimensions we could also add a Wess--Zumino term associated to the
$\mathfrak{so}(2)$ rotational component of the Maurer--Cartan form
\cite{Schonfeld:1980kb, Mezincescu:2010gb, Gomis:2012ki}; whereas in
two dimensions, the absence of spatial rotations implies that we can
use any component of the Maurer--Cartan form in the lagrangian.} as
the pull-back of $\Omega_{(P)}^0$ to the world-line of the particle: a
curve $\gamma(\tau)$ parametrised by $\tau$.
The action of a free massive particle is given by
\begin{equation}\label{eq:lag-massive-spinless-mink}
I[t,x^a,v^a]=-m\int \gamma^* \Omega_{(P)}^0 = -m \int d\tau \,\dot
x^A \Phi_A{}^0(v^a) = - m \int \left(\cosh\|v\| \dot t -
\frac{\sinh\|v\|}{\|v\|} v_a \dot x^a \right) d\tau.
\end{equation}
The vector $\Phi_A{}^0(v^a)$ is a timelike unit Lorentz vector and
therefore $\eta^{AB}\Phi_A{}^0 \Phi_B{}^0=-1$. The lagrangian depends
on the $d+1$ spacetime coordinates $t, x^a$ and the $d$ boost
parameters $v^a$. The action constructed by nonlinear realisations
could be interpreted as a canonical action \cite{Gomis:2012ki}. In
fact the momentum is
\begin{equation}
p_A =\frac{\partial L}{\partial\dot x^A}= -m \Phi_A{}^0(v^a),
\end{equation}
so that
\begin{equation}
p_0 = -m \cosh\|v\| \qquad\text{and}\qquad p_a = m \frac{\sinh\|v\|}{\|v\|}v_a,
\end{equation}
and the action becomes
\begin{equation}\label{eq:nlrcoadj}
I[x^A,v^a]= \int d\tau \,p_A(v^a)\dot x^A.
\end{equation}
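Since $\Phi_A{}^0$ is a unit timelike vector, these momenta obey the
mass-shell constraint $\eta^{AB}p_A p_B = -m^2$ identically in the
boost parameters.  A one-line symbolic check, in which the symbol
\texttt{s} stands for $\|v\|$ so that $\sum_a p_a p_a = m^2\sinh^2\|v\|$:
\begin{verbatim}
import sympy as sp

m, s = sp.symbols('m s', positive=True)   # s plays the role of ||v||
p0 = -m * sp.cosh(s)
p_space_sq = m**2 * sp.sinh(s)**2         # sum_a p_a p_a
assert sp.simplify(-p0**2 + p_space_sq) == -m**2
\end{verbatim}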
This action is invariant under reparametrisations and the reduced
physical space (e.g., by choosing $x^0 = \tau$) has a symplectic
structure. For other orbits the structure of the action is the same;
the only difference is the form of the constraint on $\Phi_A{}^0(v^a)$.
This form of the action is recovered in the coadjoint orbit approach,
see for example \cite{Gomis:2021irw} and below.
Now if we regard $p_A$ as $d+1$ independent degrees of freedom,
we can rewrite the action as
\begin{equation}\label{eq:particlepaction}
I[x^A,p_A]= \int d\tau\left(p_A \dot x^A - \tfrac{\gamma}{2}\left(\eta^{AB}p_A p_B+m^2\right)\right),
\end{equation}
which is the canonical action of a massive spinless relativistic free
particle. Using the equations of motion of $\Phi_A{}^0$ and $\gamma$, we obtain
\begin{equation}\label{eq:inversehiggs}
\Phi_A{}^0=-\frac{\dot x_A}{\sqrt{-\dot x^2}}.
\end{equation}
Note that the previous relation can also be obtained from the
vanishing of the component $\Omega_{(P)}^a$ of the Maurer--Cartan form
associated to the broken translations
\begin{equation}\label{eq:IH00}
\Omega_{(P)}^a = dx^A \Phi_A{}^a=0,
\end{equation}
which is known as the inverse Higgs mechanism \cite{Ivanov:1975zq}.
Substituting back \eqref{eq:inversehiggs} in the canonical action and
using the equations of motion of $p_A$ and $\gamma$, we obtain the
geometrical action
\begin{equation}
I[x^A]=-m \int d\tau{\sqrt{-\dot x^2}}.
\end{equation}
The quantisation of the mass-shell constraint for a massive
particle gives the wave equation
\begin{equation}
\left(\Box - m^2\right)\Phi(t,\vec x)=0.
\end{equation}
\subsubsection{Relativistic massless particle}
\label{sec:massless-rel}
Let us now consider a spinless massless particle. In the standard
frame the momentum takes the form $p_A=(1,0,0,\dots, 1)$, whose
stabiliser in the Lorentz group is isomorphic to the euclidean group
$ISO(d-1)$, with null rotations playing the rôle of euclidean
translations. The Klein pair for the evolution space is therefore
$(\mathfrak{iso}(d,1),\mathfrak{iso}(d-1))$. It is useful to work
in a lightcone frame, associated to the lightcone coordinates:
$x^+,x^-,x^i$ with $x^\pm=\tfrac1{\sqrt2}(x^d \pm x^0)$ and transverse
coordinates $x^i$ with $i=1,\dots,d-1$. The Poincaré algebra in
a lightcone frame has the following nonzero brackets:
\begin{equation}\label{eq:PoincareLC}
\begin{aligned}
\left[L_{ij}, L_{kl}\right] &= \delta_{jk} L_{il} - \delta_{ik} L_{jl} - \delta_{jl} L_{ik} + \delta_{il}L_{jk}\\
\left[L_{ij}, L_{\pm k}\right] &= \delta_{jk} L_{\pm i} - \delta_{ik} L_{\pm j}\\
\left[L_{+-}, L_{\pm i}\right] &= \pm L_{\pm i}\\
\left[ L_{+i}, L_{-j} \right] &= -\delta_{ij}L_{+-} - L_{ij}
\end{aligned}
\qquad\qquad
\begin{aligned}
\left[L_{+-}, P_\pm\right] &= \pm P_\pm\\
\left[L_{ij}, P_k\right] &= \delta_{jk} P_i - \delta_{ik} P_j\\
\left[ L_{\pm i}, P_\mp \right] &= - P_i\\
\left[ L_{\pm i}, P_j \right] &= \delta_{ij} P_\pm,
\end{aligned}
\end{equation}
where we have used that $\eta_{ij}=\delta_{ij}$ and $\eta_{+-} = 1$. In
this case, the Klein pair is $(\mathfrak{iso}(d,1),\mathfrak{iso}(d-1))$ with
$\mathfrak{iso}(d-1)$ spanned by $L_{ij}, L_{+i}$ and hence it is not reductive:
indeed, $[L_{+i}, L_{-j}]$ has a rotational component whenever $i \neq j$.
The coset space describing the evolution space is now
$ISO(d,1)/ISO(d-1)$ and we can choose a local coset representative
\begin{equation}
\label{eq:gP-massless}
g = e^{x^A P_A} b = g_0 b,
\end{equation}
where now $b=e^{v^i L_{-i}} e^{u L_{+-}}$, with $L_{-i},L_{+-}$ the
broken boost generators.
In order to compute the Maurer--Cartan form we need the analogue of \eqref{eq:adjrep} for this new
form of $b$. Computing the adjoint representation in this case, the
result is a Lorentz matrix $\Phi_A{}^B(v^i,u)$ where now $A,B$ are
lightcone indices. The translational component of the
Maurer--Cartan form invariant under $ISO(d-1)$ is $\Omega^+_{(P)}$
and, therefore, the invariant action is
\begin{equation}
I[x^A,v^i,u]= \int d\tau \,\dot x^A \Phi_A{}^+(v^i,u),
\end{equation}
where now $\Phi_A{}^+(v^i,u)$ is a Lorentz vector with vanishing norm.
The momenta are
\begin{equation}
p_A=\frac{\partial L}{\partial\dot x^A}= \Phi_A{}^+(v^i,u)
\end{equation}
and the action becomes
\begin{equation}\label{eq:nlrcoadj1}
I[x^A,v^i,u]= \int d\tau \,p_A(v^i,u)\dot x^A.
\end{equation}
In this case, $p_A$ are the components of a null vector, but if we
consider $p_A$ as $d+1$ independent variables, we need to introduce a
Lagrange multiplier $\gamma$ to implement the constraint $p^2 = 0$ so
that the action becomes
\begin{equation}\label{particlepaction}
I[x^A,p_A]= \int d\tau\left(p_A\dot x^A - \tfrac{1}{2}\gamma \eta^{AB}{p_A}{p_B}\right),
\end{equation}
which is the canonical action of a massless spinless relativistic particle.
\subsubsection{The coadjoint orbit method for relativistic particles}
\label{sec:coadjoint-rel}
We note that the component $\Omega_{(P)}^0$ of the Maurer--Cartan form
that we used to construct the action for the massive spinless particle
is nothing but the pairing of the Maurer--Cartan form with the
momentum vector. Indeed, as was done in
Section~\ref{sec:coadj-orbits-from} in four dimensions, canonically
dual to the basis $L_{AB}, P_A$ of the Poincaré algebra $\mathfrak{g}$, we have
the basis $\lambda^{AB},\pi^A$ for the dual $\mathfrak{g}^*$. Then the momentum
for a spinless massive particle in the restframe is $p_A \pi^A = m
\pi^0$, where $p_A = (m,0,\dots,0)$, and hence
\begin{equation}
\Omega_{(P)}^0 = \left< \pi^0, \Omega_{(P)}\right>,
\end{equation}
which, using equation~\eqref{eq:MC-one-form}, can be rewritten as
\begin{equation}
\Omega_{(P)}^0 = \left< \pi^0, b^{-1}\left( dx^A P_A\right) b \right>
= \left< \pi^0, \operatorname{Ad}_{b^{-1}} \left( dx^A P_A \right)\right> =
\left<\operatorname{Ad}^*_b \pi^0 , dx^A P_A\right>,
\end{equation}
where we have used that the action of the Lorentz group element
$b^{-1}$ on the vector representation is the adjoint action of
$b^{-1}$ as an element of the Poincaré group, which is a semi-direct
product of the Lorentz group and the vector representation. Similarly,
$\operatorname{Ad}^*_b$ is the coadjoint action on the dual space.
Writing $x^A P_A = \mathbb{X}$, we therefore have the equivalent form of the
lagrangian~\eqref{eq:nlrcoadj}, see for example
\cite{Gomis:2021irw},
\begin{equation}
\label{eq:Lpair}
L = \langle \pi , \dot{\mathbb{X}} \rangle\,,
\end{equation}
where $\pi$ is an arbitrary element of the orbit of $m \pi^0$. This
orbit can be parametrised by the boost parameters $v^a$, which then
appear algebraically in the lagrangian. The momenta can be thought of
as elements of the dual of the Lie algebra. The derivative in $\dot{\mathbb{X}}$
denotes the derivative with respect to the parameter of the world-line
and so we have explicitly carried out the pull-back.
For other orbits we proceed analogously: for the massless case we take
as element of the dual Lie algebra $\pi^0 + \pi^d$, and for the
tachyonic case the element $\pi^d$. The form~\eqref{eq:Lpair} of the
lagrangian, with the Lorentz group acting on the space of momenta, is
universal in all cases.
\subsubsection{Comparing Minkowski and $\mathsf{AdSC}$ particles}
\label{sec:comp-mink-mathsf}
As discussed at the end of Section~\ref{sec:part-acti-from}, coadjoint
orbits are intrinsic to the group $\mathscr{G}$ and the same kinematical
group might give rise to different kinematical spacetimes: e.g.,
Minkowski and anti~de~Sitter--Carroll ($\mathsf{AdSC}$) are both
homogeneous spacetimes of the Poincaré group.
In Section~\ref{sec:massive-rel} we derived the lagrangian for a
massive spinless particle in Minkowski spacetime. Consider a curve
\begin{equation}
\gamma(\tau) = e^{tH} e^{x^aP_a} e^{v^a B_a}
\end{equation}
in the evolution space, where $t,x^a, v^a$ are functions of $\tau$.
Then the lagrangian corresponding to the coadjoint orbit of $m \pi^0
\in \mathfrak{g}^*$ is given in equation~\eqref{eq:lag-massive-spinless-mink} by
\begin{equation}
L = -m \left( \cosh\|v\| \dot t - \frac{\sinh\|v\|}{\|v\|} v_a \dot
x^a \right).
\end{equation}
We would like to interpret this as a particle action in
$\mathsf{AdSC}$. As shown in equation~\eqref{eq:Mink-AdSC-iso}, what
is a translation in Minkowski is a carrollian boost in
anti~de~Sitter--Carroll. This suggests considering the same curve but
written in different coordinates adapted to $\mathsf{AdSC}$:
\begin{equation}
\gamma(\tau) = e^{u H} e^{y^a B_a} e^{w^a P_a}
\end{equation}
for some functions $u,y^a,w^a$ of $\tau$. We now interpret $u,y^a$ as
local coordinates on $\mathsf{AdSC}$ and $w^a$ as parametrising the
carrollian boosts which are broken due to the presence of a particle
in $\mathsf{AdSC}$. The explicit change of coordinates from
$(t,x^a,v^a)$ to $(u,y^a,w^a)$ is given by
\begin{equation}
\begin{aligned}
u &= t - \frac{\tanh\|v\|}{\|v\|} x^a v_a\\
y^a &= v^a\\
w^a &= x^a + \left( \frac{1-\cosh\|v\|}{\cosh\|v\|} \right)
\frac{x_b v^b}{\|v\|^2} v^a
\end{aligned}
\qquad\text{with inverse}\qquad
\begin{aligned}
t &= u + \frac{\sinh\|y\|}{\|y\|} w_a y^a\\
x^a &= w^a - \frac{1 -\cosh\|y\|}{\|y\|^2} w_b y^b y^a\\
v^a &= y^a.
\end{aligned}
\end{equation}
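As a consistency check, the two coordinate changes can be verified to
be mutually inverse.  The following sympy sketch does so in the
simplified setting of one spatial dimension, where $\|v\| = v$ (taking
$v > 0$) and the longitudinal projectors trivialise:
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x', real=True)
v = sp.symbols('v', positive=True)          # one spatial dimension: v = ||v||

u = t - sp.tanh(v) * x                      # (t, x, v) -> (u, y, w)
y = v
w = x + (1 - sp.cosh(v)) / sp.cosh(v) * x   # equals x / cosh(v)

t_back = u + sp.sinh(y) * w                 # claimed inverse map
x_back = w - (1 - sp.cosh(y)) * w           # equals w cosh(y)

assert sp.simplify(t_back - t) == 0
assert sp.simplify(x_back - x) == 0
\end{verbatim}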
We can then perform the change of variables in the lagrangian to
arrive at the following lagrangian for a particle in $\mathsf{AdSC}$:
\begin{equation}
L = -m \left( \cosh\|y\| \dot u + \frac{\sinh\|y\|}{\|y\|} w_a \dot
y^a + \left( 1 - \frac{\sinh\|y\|}{\|y\|} \right) \frac{w_b y^b y_a \dot y^a}{\|y\|^2}\right).
\end{equation}
The canonical momenta are
\begin{equation}
\begin{split}
p_0 &= \frac{\partial L}{\partial \dot u} = - m \cosh\|y\|\\
p_a &= \frac{\partial L}{\partial \dot y^a} = \underbrace{- m \frac{w_b y^b}{\|y\|^2} y_a}_{p_a^\parallel} \underbrace{{}- m \frac{\sinh\|y\|}{\|y\|}
\left( \delta_{ab} - \frac{y_ay_b}{\|y\|^2} \right) w^b}_{p_a^\perp}.
\end{split}
\end{equation}
We see that the ``spatial'' momentum $p_a$ breaks up into a
longitudinal component $p_a^\parallel$ along $y^a$ and a transverse
component $p_a^\perp$. The Euler--Lagrange equation for $u$ says that
$\|y\|$ is constant, whereas the Euler--Lagrange equation for $w^a$
says that if $\|y\| \neq 0$, then $y^a$ is constant. Hence a massive
particle in $\mathsf{AdSC}$ does not move.
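This conclusion also follows mechanically from the Euler--Lagrange
equations.  For $d=1$ the transverse momentum is absent and the
lagrangian collapses to $L = -m(\cosh y \, \dot u + w \, \dot y)$; a
minimal sympy sketch:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

tau, m = sp.symbols('tau m', positive=True)
u, y, w = [sp.Function(n) for n in 'uyw']

# d = 1: the transverse momentum is absent and the lagrangian collapses
L = -m * (sp.cosh(y(tau)) * u(tau).diff(tau) + w(tau) * y(tau).diff(tau))
for eq in euler_equations(L, [u(tau), y(tau), w(tau)], [tau]):
    print(eq)   # w-equation: dy/dtau = 0; u-equation: d(cosh y)/dtau = 0
\end{verbatim}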
\subsection{Non-relativistic particle lagrangians}
\label{sec:nonrel-particles}
In this section we will consider non-relativistic particle
lagrangians. We will start by considering the centrally extended
Newton--Hooke algebra $\widetilde\mathfrak{n}^-$. The Newton--Hooke algebra
$\mathfrak{n}^-$ was defined in Section~\ref{sec:kinem-lie-algebr} and
corresponds to $\chi = 0$ in the family of kinematical Lie algebras
$\mathfrak{n}^-_\chi$ in Table~\ref{tab:KLAs}. Its central extension is listed
in Table~\ref{tab:gen-bargmann}. We will introduce an additional
parameter in the Lie brackets to allow us to take a limit to the
Bargmann Lie algebra which is the universal central extension of the
Galilei algebra $\mathfrak{g}$.
The centrally extended Newton--Hooke Lie algebra $\widetilde\mathfrak{n}^-$ is
spanned by $L_{ab}, B_a, P_a, H, Z$ where $L_{ab}$ span the $\mathfrak{so}(d)$
rotational subalgebra. The Lie brackets are the generic kinematical
Lie brackets of equations~\eqref{eq:gen-kla-1}, \eqref{eq:gen-kla-2}
and \eqref{eq:gen-kla-3} together with the following nonzero brackets:
\begin{equation}
\label{eq:nh-bargmann}
[H,B_a] = P_a, \qquad [H,P_a] = \tfrac1{R^2} B_a
\qquad\text{and}\qquad [B_a, P_b] = \delta_{ab} Z,
\end{equation}
with $Z$ central. Notice that for any nonzero real number $R$, these
Lie algebras are all isomorphic, but if we take the limit $R \to
\infty$ we obtain the Bargmann algebra in
Section~\ref{sec:kinem-lie-algebr} after $H \mapsto -H$.
\subsubsection{Massive particle}
In this section we will see that a massive spinless particle with
symmetry algebra $\widetilde\mathfrak{n}^-$ is equivalent to a $d$-dimensional
harmonic oscillator. In order to see that, we will consider the
coset space with Klein pair $(\widetilde\mathfrak{n}^-, \mathfrak{so}(d))$.
We choose a local coset representative of the form
\begin{equation}\label{eq:coset-rep}
g= \underbrace{e^{x^0 H} e^{x^a P_a} e^{u Z}}_{g_0} \underbrace{e^{v^a B_a}}_b.
\end{equation}
Here $g_0$ is the coset representative of the generalised
non-relativistic spacetime and $b$ is a Galilei boost parametrised by
$v^a$.
The rôle of the coordinate $u$ associated to the central charge $Z$ in
constructing a Wess--Zumino term in non-relativistic theories was first
discussed in \cite{Gauntlett:1990nk}. The coordinates $x^0,x^a,u$
admit an interpretation as relativistic coordinates in a space of one
dimension higher, here $d+2$.
We now calculate the pull-back of the Maurer--Cartan form along the
coset representative:
\begin{equation}
\Omega = g^{-1} dg = b^{-1} \Omega_0 b + b^{-1} db,
\end{equation}
where $\Omega_0 = g_0^{-1} dg_0$. These are easy to calculate and one
finds
\begin{equation}
b^{-1}db = dv^a B_a
\end{equation}
and
\begin{equation}
\Omega_0 = dx^0 H + dx^a P_a + \left( du - \frac{x^2}{2R^2}
dx^0\right) Z - \frac{x^a}{R} dx^0 B_a.
\end{equation}
We then calculate $b^{-1} \Omega_0 b$ to arrive at the final
expression
\begin{equation}
\Omega = \left( du - \frac{x^2}{2R^2} dx^0 - \tfrac12 v^2
dx^0 - v_a dx^a \right) Z + dx^0 H + (dv^a - \tfrac1R x^a dx^0)
B_a + (dx^a + v^a dx^0) P_a.
\end{equation}
The $\mathfrak{so}(d)$-invariants in the adjoint representation are $H$ and $Z$
and hence the $\mathfrak{so}(d)$-invariant lagrangian is built out of the $H$
and $Z$ components of $\Omega$. The $H$-component is an exact form,
hence it does not contribute to the Euler--Lagrange equations. We
will therefore concentrate on the $Z$-component. Pulling it back to
the interval parametrising the worldline of the particle, we arrive at
the following lagrangian
\begin{equation}
L = \dot u - \tfrac12 \left( \frac{x^2}{R^2} + v^2 \right) \dot x^0
- v_a \dot x^a.
\end{equation}
The first term is again a total derivative, so it does not contribute
to the Euler--Lagrange equations. Its role is to make the lagrangian
invariant, since without it the lagrangian is only quasi-invariant.
In other words, it is a Wess--Zumino term.
Solving for $v^a$ via its equation of motion
$\frac{\partial L}{\partial v^a} = 0$, we find
\begin{equation}
v^a = - \frac{\dot x^a}{\dot x^0}
\end{equation}
and re-introducing this into the lagrangian, we obtain
\begin{equation}\label{eq:BNHLag}
L = \frac{\dot x^2}{2\dot x^0} - \frac{x^2}{2R^2}
\dot x^0.
\end{equation}
If we choose the gauge $\dot x^0 = 1$, so we use $x^0$ as the
parameter along the worldline, we see that $L$ is indeed the
lagrangian for a $d$-dimensional harmonic oscillator with
characteristic frequency $\frac1R$. Taking the limit $R \to \infty$
in the lagrangian, we arrive at the lagrangian for a non-relativistic
spinless particle of unit mass:
\begin{equation}
L = \frac{\dot x^2}{2\dot x^0}.
\end{equation}
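As a check on this claim, sympy's Euler--Lagrange utility recovers the
oscillator equation of motion in this gauge (restricting to a single
spatial coordinate for simplicity):
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

tau = sp.symbols('tau')
R = sp.symbols('R', positive=True)
x = sp.Function('x')

# eq:BNHLag in the gauge x^0 = tau, for a single spatial coordinate
L = x(tau).diff(tau)**2 / 2 - x(tau)**2 / (2 * R**2)
eq, = euler_equations(L, [x(tau)], [tau])
print(eq)   # Eq(-x/R**2 - x'', 0): harmonic oscillator of frequency 1/R
\end{verbatim}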
Had we considered instead the central extension of the other
Newton--Hooke algebra $\mathfrak{n}^+ = \mathfrak{n}^+_{\gamma=-1}$ in
Table~\ref{tab:KLAs}, we would have obtained the inverted harmonic
oscillator \cite{Gao:2001sr}.
\subsubsection{Massless Galilei particle}
Is there a massless particle associated to the (unextended) Galilei
algebra? The answer is yes\footnote{We acknowledge
discussions with Axel Kleinschmidt on this point.}. The model was
first introduced by Souriau \cite{Souriau}. A massless relativistic
particle follows a direction on the lightcone. In the
non-relativistic case, since the speed of light is infinite, the
particle follows a spatial longitudinal direction, say $x^d$.
In this case, the unbroken group is generated by $L_{ij}, B_i, P_d$;
that is, the infinitesimal rotations $L_{ij}$ on the hyperplane
spanned by $x^1,x^2,\dots,x^{d-1}$, the infinitesimal galilean boosts
$B_i$ in directions perpendicular to $x^d$ and the infinitesimal
longitudinal translations along $x^d$. The evolution space has Klein
pair $(\mathfrak{g}, \mathfrak{iso}(d-1))$, where $\mathfrak{g}$ is the Galilei algebra, as in
Table~\ref{tab:KLAs}, and the $\mathfrak{iso}(d-1)$ subalgebra is spanned by
$L_{ij}, B_i$, for $i,j =1,\dots,d-1$.
A local coset representative is
\begin{equation}\label{eq:gP-massless-gal}
g = \underbrace{e^{t H + x^i P_i + x^d P_d}}_{g_0} \underbrace{e^{\theta^i R_i} e^{v B_d}}_b,
\end{equation}
with $i=1,\dots,d-1$, where $R_i:= L_{id}$ are the broken rotations
and $B_d$ is the broken longitudinal boost.
The pull-back of the Maurer--Cartan form is given as usual by
\begin{equation}
\Omega = \operatorname{Ad}_{b^{-1}} \Omega_0 + b^{-1} db,
\end{equation}
where
\begin{equation}
\Omega_0 = g_0^{-1} dg_0 = dt H + dx^i P_i + dx^d P_d.
\end{equation}
The $ISO(d-1)$-invariant subspace of $\mathfrak{g}$ is spanned by $P_d$ and
$B_d$, hence we need to extract those components of $\Omega$ in order
to write down the lagrangian. We notice that
\begin{equation}
b^{-1}db = e^{-v B_d} e^{-\theta^i R_i} d (e^{\theta^i R_i} e^{v
B_d}) = e^{-v B_d} \left( e^{-\theta^i R_i} d e^{\theta^i R_i}
\right) e^{v B_d} + dv B_d.
\end{equation}
Since $[R_i,R_j] = - L_{ij}$, the expression in parenthesis lives in
the span of $L_{ij}, R_i$ and hence the first of the above terms lives
in the span of $L_{ij}, R_i, B_i$. Therefore the only term in
$b^{-1}db$ which contributes to the lagrangian is $dv B_d$.
We calculate $\operatorname{Ad}_{b^{-1}} \Omega_0$ paying particular attention to
the $P_d$ component:
\begin{equation}
\begin{split}
\operatorname{Ad}_{b^{-1}} \Omega_0 &= dt H - v dt P_d + \exp(\operatorname{ad}_{-\theta\cdot
R}) dx^i P_i + \exp(\operatorname{ad}_{-\theta\cdot R}) dx^d P_d\\
&= dt H + \left(dx^i + \frac{\cos\|\theta\| -1}{\|\theta\|^2}
\theta^i\theta_j dx^j - \frac{\sin\|\theta\|}{\|\theta\|}
\theta^i dx^d\right) P_i\\
& \qquad {} + \left( \cos\|\theta\| dx^d +
\frac{\sin\|\theta\|}{\|\theta\|} \theta_i dx^i - v dt \right) P_d,
\end{split}
\end{equation}
where $\|\theta\|^2 =\delta_{ij} \theta^i \theta^j$. In summary,
\begin{equation}
\Omega = dv B_d + \left( \cos\|\theta\| dx^d +
\frac{\sin\|\theta\|}{\|\theta\|} \theta_i dx^i - v dt \right)
P_d + \cdots
\end{equation}
omitting terms which are not $ISO(d-1)$-invariant. The $B_d$
component is exact, so that it does not contribute to the
Euler--Lagrange equations. Therefore we concentrate on the $P_d$
component and introducing the ``colour'' $k$ \cite{Souriau}, we
may write the lagrangian as the pull-back to the interval of the
$P_d$-component:
\begin{equation}
L = k \left(\cos\|\theta\| \dot x^d +
\frac{\sin\|\theta\|}{\|\theta\|} \theta_i \dot x^i - v \dot t \right).
\end{equation}
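The trigonometric structure of this lagrangian is simply the last row
of the rotation generated by $\theta^i R_i$ acting on the spatial
one-forms $(dx^1,\dots,dx^d)$, which is easy to confirm numerically;
in the following sketch the overall sign convention for the generator
is an assumption:
\begin{verbatim}
# Numerical sketch: the coefficients of (dot x^1, ..., dot x^d) in L are
# the last row of the rotation generated by theta^i R_i.  The overall
# sign convention for the generator below is an assumption.
import numpy as np
from scipy.linalg import expm

d = 4
rng = np.random.default_rng(1)
theta = rng.normal(size=d - 1)
s = np.linalg.norm(theta)

A = np.zeros((d, d))                # theta^i R_i acting on (x^1,...,x^d)
A[:-1, -1] = theta
A[-1, :-1] = -theta

row = expm(-A)[-1]                  # coefficients of (dx^1,...,dx^d)
expected = np.append(np.sin(s) / s * theta, np.cos(s))
assert np.allclose(row, expected)
\end{verbatim}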
We calculate the spatial canonical momentum $\vec p =
(p_1,p_2,\dots,p_d)$ to be
\begin{equation}
p_d = \frac{\partial L}{\partial \dot x^d} = k \cos\|\theta\|
\qquad\text{and}\qquad p_i = \frac{\partial L}{\partial \dot x^i} =
k \frac{\sin\|\theta\|}{\|\theta\|} \theta_i,
\end{equation}
from where we see that $\vec p \cdot \vec p = k^2$. Introducing the
associated unconstrained momentum, we implement the above constraint
via a Lagrange multiplier $e$ to arrive at the lagrangian
\begin{equation}
L = p_d \dot x^d + p_i \dot x^i - v \dot t + \tfrac12 e \left( \vec
p \cdot \vec p - k^2 \right).
\end{equation}
Notice that the $v$ equation of motion is $\frac{\partial L}{\partial v} = - k
\dot t = 0$, so that propagation is instantaneous.
The quantisation of the mass-shell constraint gives the Helmholtz equation
\begin{equation}\label{eq:masslessg}
(\nabla^2 + k^2) \Phi(t,\vec x)=0,
\end{equation}
which agrees with the field equation of the galilean magnetic
Klein--Gordon field (see equation~\eqref{case1bflat}). The approach
to quantum field theory in terms of particle variables is known as the
\emph{worldline approach} to field theory and it was first considered by
Feynman \cite{Feynman:1950ir,Feynman:1951gn}, see also for example
\cite{Schubert:2001he,Casalbuoni:1974pj}.
\subsection{Carroll particle lagrangians}
In this section we will construct the action of a massive (timelike) particle in
Carroll spacetime. The case of a massless Carroll particle can be
obtained from the massive one by taking the mass to be zero.
We will also construct the lagrangian of a tachyonic particle. The
presence of these three kinds of particles (timelike, lightlike and
tachyonic) is due to the causal structure of the Carroll geometry.
This is analogous to what happens in lorentzian geometry, but in
contrast with the galilean case, where the notion of mass is not
related to the causal structure.
The Carroll algebra is denoted $\mathfrak{c}$ and given in
Table~\ref{tab:KLAs}. Besides the Lie brackets in
equations~\eqref{eq:gen-kla-1}, \eqref{eq:gen-kla-2} and
\eqref{eq:gen-kla-3}, which are shared by all kinematical Lie
algebras, the only nonzero bracket in $\mathfrak{c}$ is
\begin{equation}\label{eq:Calgebra-again}
[B_a, P_b] = \delta_{ab} H.
\end{equation}
In contrast to the Galilei algebra, the Carroll algebra (for $d\geq3$)
does not allow nontrivial central extensions, although $H$ is a
central element.
\subsubsection{Massive Carroll particle}
We construct the timelike massive Carroll particle lagrangian
\cite{Bergshoeff:2014jla,Duval:2014uoa} using the method of nonlinear
realisations. The
Klein pair for the evolution space is $(\mathfrak{c}, \mathfrak{so}(d))$, where $\mathfrak{so}(d)$
is the span of the rotations $L_{ab}$. A coset representative for the
corresponding coset space is given by
\begin{equation}
g = \underbrace{e^{t H + x^a P_a}}_{g_0} \underbrace{e^{v^a B_a}}_{b},
\end{equation}
where $t, x^a$ are the Goldstone bosons associated to spacetime
translations and $v^a$ are the Goldstone bosons associated to the
broken boosts.
The Maurer--Cartan form $\Omega$ is given by
\begin{equation}
\Omega = g^{-1} dg = \operatorname{Ad}_{b^{-1}} \Omega_0 + b^{-1}db,
\end{equation}
where
\begin{equation}
\Omega_0 = g_0^{-1} dg_0 = dt H + dx^a P_a \qquad\text{and}\qquad
b^{-1}db = dv^a B_a.
\end{equation}
It follows that
\begin{equation}
\operatorname{Ad}_{b^{-1}} \Omega_0 = dt H + \exp(-\operatorname{ad}_{v^aB_a}) dx^b P_b = dt H +
dx^b (P_b - v_b H) = (dt - v_b dx^b) H + dx^b P_b.
\end{equation}
The lagrangian is the pull-back to the interval of the rotationally
invariant component of $\Omega$, which is the component along $H$:
\begin{equation}
L = M (-\dot t + v_a \dot x^a)
\end{equation}
where we have introduced a mass $M$. Notice that the massive Carroll
particle does not move: its momentum is $p_a = M v_a$, and there is no
relation between the momentum and the velocity of the particle. The
canonical lagrangian is obtained by introducing a
Lagrange multiplier $e$:
\begin{equation}
L_{\text{can}} = -E \dot t + p_a \dot x^a - \tfrac12 e \left( E^2 -
M^2 \right).
\end{equation}
Note that in this form we have also introduced negative energies,
which are allowed in the Carroll case. Since $H$ is a Casimir, its
eigenvalues can take any real value: positive, negative or zero.
Physically, a timelike or lightlike Carroll particle does not move.
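That the particle is frozen follows directly from the Euler--Lagrange
equations of $L_{\text{can}}$; a minimal sympy sketch in one spatial
dimension:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

tau, M = sp.symbols('tau M', positive=True)
t, x, p, E, e = [sp.Function(n) for n in ('t', 'x', 'p', 'E', 'e')]

L = (-E(tau) * t(tau).diff(tau) + p(tau) * x(tau).diff(tau)
     - sp.Rational(1, 2) * e(tau) * (E(tau)**2 - M**2))
for eq in euler_equations(L, [t(tau), x(tau), p(tau), E(tau), e(tau)], [tau]):
    print(eq)   # the p-equation forces dx/dtau = 0: the particle is frozen
\end{verbatim}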
The quantisation of the mass-shell constraint for a Carroll massive
particle gives the wave equation \cite{Bergshoeff:2014jla}
\begin{equation}\label{eq:carroll-wave}
\left(\frac{\partial^2}{\partial t^2}+M^2\right)\Phi(t,\vec x)=0;
\end{equation}
that is, the equation of motion of the carrollian electric
Klein--Gordon field theory, see \eqref{eq:case3c-flat}.
\subsubsection{Tachyonic Carroll particle}
Here we construct the tachyon Carroll particle lagrangian \cite{deBoer:2021jej}.
A relativistic tachyon has a spacelike momentum, but in the
ultra-relativistic limit, the lightcone collapses to the timeline and
hence any momentum having a nonzero component along a spacelike
direction is tachyonic. For example, we may take the momentum purely
along the $d$-direction: $\alpha = M \pi^d \in \mathfrak{c}^*$ in the dual of
the Carroll algebra. The resulting coadjoint orbit has Klein pair
$(\mathfrak{c},\mathfrak{h}_\alpha)$, where $\mathfrak{h}_\alpha$ is spanned by $L_{ij}, B_i, B_d,
P_d, P_0$, with $P_0 := H$ and now $i,j = 1,\dots,d-1$. The subalgebra $\mathfrak{h}_\alpha$
is isomorphic to the direct sum of the $\mathfrak{iso}(d-1)$ algebra generated
by $L_{ij},B_i$ and the Heisenberg algebra generated by
$B_d,P_d,P_0$. The evolution space is obtained by breaking the
translation symmetry in the $d$-direction. Therefore the Klein pair
for the evolution space is $(\mathfrak{c},\mathfrak{h})$, where $\mathfrak{h}$ is spanned by
$L_{ij}, B_i, B_d, P_0$ and isomorphic now to $\mathfrak{iso}(d-1) \oplus
\mathbb{R}^2$. This Klein pair is not reductive, but we may choose a
complement $\mathfrak{m}$ spanned by $R_i := L_{id}, P_i, P_d$. It is not
reductive because $[B_d, P_d] = P_0 \not\in \mathfrak{m}$. Nevertheless the
image $\bar P_d$ of $P_d$ in $\mathfrak{c}/\mathfrak{h}$ is an invariant of the linear
isotropy representation of $\mathfrak{h}$ on $\mathfrak{c}/\mathfrak{h}$.
Let us choose a coset representative
\begin{equation}\label{eq:gP-tachyon-carroll}
g = \underbrace{e^{x^d P_d} e^{x^iP_i}}_{g_0} \underbrace{e^{\theta^i R_i}}_{b}
\end{equation}
and pull back the left-invariant Maurer--Cartan form. This will take
values in the Carroll algebra, but we project to $\mathfrak{c}/\mathfrak{h}$ and keep the
invariant components, of which here there is only the component along $P_d$.
Equivalently, we calculate the dual pairing $\left<\alpha,
g^{-1}dg\right>$. A calculation shows that
\begin{equation}
g^{-1}dg = \left( \cos\|\theta\| dx^d +
\frac{\sin\|\theta\|}{\|\theta\|}\theta_i dx^i \right) P_d + \cdots
\end{equation}
where $\|\theta\|^2 = \delta_{ij}\theta^i\theta^j$, hence
\begin{equation}
\left<\alpha, g^{-1}dg\right> = M \left<\pi^d, g^{-1}dg\right> =M
\left( \cos\|\theta\| dx^d +
\frac{\sin\|\theta\|}{\|\theta\|}\theta_i dx^i \right).
\end{equation}
The lagrangian is now obtained by pulling back this component to the
interval parametrising the worldline of the particle:
\begin{equation}
L = M \left( \cos\|\theta\| \dot x^d +
\frac{\sin\|\theta\|}{\|\theta\|}\theta_i \dot x^i \right).
\end{equation}
The canonical momenta are given by
\begin{equation}
p_d := \frac{\partial L}{\partial \dot x^d} = M \cos\|\theta\|
\qquad\text{and}\qquad p_i := \frac{\partial L}{\partial \dot x^i} =
M \frac{\sin\|\theta\|}{\|\theta\|}\theta_i,
\end{equation}
from where it follows that they are constrained:
\begin{equation}
\vec p \cdot \vec p := p_d^2 + \sum_i p_i^2 = M^2.
\end{equation}
We may implement this constraint via a Lagrange multiplier to arrive
at the lagrangian
\begin{equation}
L = p_d \dot x^d + p_i \dot x^i + \tfrac12 \lambda (\vec p \cdot
\vec p - M^2).
\end{equation}
Since $\dot x^0$ does not appear in the action, its conjugate momentum
$p_0$ vanishes. We may implement that constraint with a second Lagrange
multiplier and write the canonical lagrangian as
\begin{equation}
L = p_A \dot x^A + \tfrac12 \lambda (\vec p \cdot \vec p - M^2) + \mu p_0.
\end{equation}
Notice that the mass-shell constraint $\vec p \cdot \vec p - M^2=0$
coincides with that of the massless galilean particle and also that
the energy ($p_0$) of the tachyonic particle is zero. The associated
wave equation reduces to a Helmholtz equation:
\begin{equation}\label{magcarroll}
(\vec{\nabla}^2+M^2)\Phi(t,\vec x)=0 \qquad\text{and}\qquad
\frac{\partial}{\partial t} \Phi(t,\vec x)=0,
\end{equation}
which is related to the equations of motion of magnetic Carroll
field theory as in equation~\eqref{case3bflat}. The relation between
galilean and Carroll particles has been studied in
\cite{Gomis:2022spp}, based on the duality between the Galilei and Carroll
algebras \cite{Barducci:2018wuj}, at the level of the associated wave
equations~\eqref{eq:masslessg} and \eqref{eq:carroll-wave}.
\subsection{One- and two-dimensional particle dynamics with
$\operatorname{SL}(2,\mathbb{R})$ symmetry}
\label{sec:sl2R-invariant-dynamics}
In this section we will give several examples of one- and
two-dimensional particle dynamics with $\operatorname{SL}(2,\mathbb{R})$ symmetry. There
are three two-dimensional homogeneous spaces of $\operatorname{SL}(2,\mathbb{R})$:
the hyperbolic plane, the lightcone and (anti)~de~Sitter spacetime.
In addition, $\operatorname{SL}(2,\mathbb{R})$ acts on the real
projective line $\mathbb{R} P^1$ (also known as one-dimensional conformal
space) via projective transformations. Among the particle dynamics
discussed here, we will recover the conformal mechanics of
\cite{deAlfaro:1976vlx}, see, e.g.,
\cite{Ivanov:1988vw,deAzcarraga:1998ni} and the Schwarzian particle
action of \cite{Kitaev:2017awl,Maldacena:2016hyu,Stanford:2017thb}.
As already discussed in Section~\ref{sec:homog-kinem-spac}, there are
three spatially isotropic homogeneous spaces associated to the Lorentz
group $\operatorname{SO}(d,1)$: namely, hyperbolic space $\mathsf{H}_d$, de Sitter
spacetime $\mathsf{dS}_d$ and the future lightcone $\mathbb{L}_d$. The picture is
the familiar foliation of Minkowski spacetime into orbits of the
Lorentz group. Whereas de Sitter spacetime is a maximally symmetric
lorentzian manifold and hyperbolic space is a maximally symmetric
riemannian manifold, the future lightcone is what we could term a
maximally symmetric carrollian manifold. Each of these three spaces
is described infinitesimally by a Klein pair $(\mathfrak{g},\mathfrak{h})$ where
$\mathfrak{g} = \mathfrak{so}(d,1)$ and
\begin{equation}
\mathfrak{h} \cong
\begin{cases}
\mathfrak{so}(d) & (\mathsf{H}_d)\\
\mathfrak{so}(d-1,1) & (\mathsf{dS}_d)\\
\mathfrak{iso}(d-1) & (\mathbb{L}_d)
\end{cases}.
\end{equation}
In this section we will concentrate on the case $d=2$ and we will use
the isomorphism $\mathfrak{so}(2,1) \cong \sl(2,\mathbb{R})$. Each of the above Klein
pairs can thus be realised geometrically as coset spaces
$\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$ for some one-dimensional connected closed Lie subgroup
$\mathscr{H} \subset \operatorname{SL}(2,\mathbb{R})$. Up to conjugation in $\operatorname{SL}(2,\mathbb{R})$, there are three connected closed
one-dimensional Lie subgroups $\mathscr{H} \subset \operatorname{SL}(2,\mathbb{R})$:
\begin{eqnarray}
\text{(elliptic)} & \qquad \mathscr{H} &= \left\{ \begin{pmatrix} \cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} ~\middle |~ \theta \in \mathbb{R}/2\pi\mathbb{Z} \right\}\\
\text{(hyperbolic)} & \qquad \mathscr{H} &= \left\{ \begin{pmatrix} \cosh\tau & \sinh\tau \\ \sinh\tau & \cosh\tau \end{pmatrix} ~\middle |~ \tau\in \mathbb{R} \right\}\\
\text{(parabolic)} & \qquad \mathscr{H} &= \left\{ \begin{pmatrix} 1 & \zeta \\ 0 & 1 \end{pmatrix} ~\middle |~ \zeta\in \mathbb{R} \right\}.
\end{eqnarray}
They can be distinguished by the trace of the non-identity elements:
$<2$ in the elliptic case, $>2$ in the hyperbolic case and $=2$ in the
parabolic case. They can also be distinguished by the causal nature
of the vectors they leave invariant in the three-dimensional vector
representation of $\mathfrak{so}(2,1)$: timelike in the elliptic case, spacelike
in the hyperbolic case and lightlike in the parabolic case.
Since the vector representation of $\operatorname{SL}(2,\mathbb{R})$ is isomorphic to the
coadjoint representation, these homogeneous spaces can also be
interpreted as coadjoint orbits and hence, according to Souriau, as
the space of motions of elementary systems. The evolution spaces can
in all cases be interpreted as the Lie group $\operatorname{SL}(2,\mathbb{R})$ itself. It
is then a matter of interpretation how to project the trajectories on
the evolution space into particle trajectories in the spacetime.
The Lie algebras of these Lie subgroups are given by
\begin{eqnarray}
\text{(elliptic)} & \qquad \mathfrak{h} &= \left\{ \begin{pmatrix} 0 & - z \\ z & 0 \end{pmatrix} ~\middle |~ z\in \mathbb{R} \right\}\\
\text{(hyperbolic)} & \qquad \mathfrak{h} &= \left\{ \begin{pmatrix} 0 & z \\ z & 0 \end{pmatrix} ~\middle |~ z\in \mathbb{R} \right\}\\
\text{(parabolic)} & \qquad \mathfrak{h} &= \left\{ \begin{pmatrix} 0 & z \\ 0 & 0 \end{pmatrix} ~\middle |~ z\in \mathbb{R} \right\}.
\end{eqnarray}
We will write $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ in each case with
\begin{eqnarray}
\text{(elliptic)} & \qquad \mathfrak{m} &= \left\{ \begin{pmatrix} x & y \\ y & -x \end{pmatrix} ~\middle |~ x,y\in \mathbb{R} \right\}\\
\text{(hyperbolic)} & \qquad \mathfrak{m} &= \left\{ \begin{pmatrix} x & -y \\ y & x \end{pmatrix} ~\middle |~ x,y\in \mathbb{R} \right\}\\
\text{(parabolic)} & \qquad \mathfrak{m} &= \left\{ \begin{pmatrix} x & 0 \\ y & -x \end{pmatrix} ~\middle |~ x,y\in \mathbb{R} \right\}.
\end{eqnarray}
In the elliptic and hyperbolic cases, the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ is
reductive, so that $[\mathfrak{h},\mathfrak{m}] \subset \mathfrak{m}$ in the obvious notation,
whereas in the parabolic case no reductive split exists. Let us write
$\mathfrak{m} = \left\{x P_1 + y P_2 ~\middle |~ x,y \in \mathbb{R}\right\}$ and $\mathfrak{h} =
\left\{z B ~\middle |~ z \in \mathbb{R}\right\}$ in all cases, which defines
the matrices $B,P_1,P_2$:
\begin{eqnarray}
\text{(elliptic)} & \qquad B &= \begin{pmatrix}0 & -1 \\ 1 &
0 \end{pmatrix} \qquad P_1 = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \qquad P_2 = \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix} \\
\text{(hyperbolic)} & \qquad B &= \begin{pmatrix}0 & 1 \\ 1 &
0 \end{pmatrix} \qquad P_1 = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \qquad P_2 = \begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix} \\
\text{(parabolic)} & \qquad B &= \begin{pmatrix}0 & 1 \\ 0 &
0 \end{pmatrix} \qquad P_1 = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \qquad P_2 = \begin{pmatrix}0 & 0 \\ 1 & 0 \end{pmatrix}.
\end{eqnarray}
In the elliptic case, there is a positive-definite inner product on
$\mathfrak{m}$ which is $\mathscr{H}$-invariant: $\left<P_a,P_b\right> = \delta_{ab}$,
whereas in the hyperbolic case there is an $\mathscr{H}$-invariant lorentzian
inner product on $\mathfrak{m}$ given by $\left<P_a,P_b\right>= \eta_{ab}$, with
$\eta$ diagonal with $\eta_{11}= - \eta_{22} = 1$. In the parabolic
case, $P_1$ is $\mathscr{H}$-invariant (in $\mathfrak{g}/\mathfrak{h}$) and so is the degenerate
bilinear form $b$ whose only nonzero entry is $b(P_2,P_2)$.
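The reductivity claims above are mechanical to verify: decompose
$[B,P_a]$ in the basis $\{B,P_1,P_2\}$ and check whether a component
along $B$ appears.  A sympy sketch:
\begin{verbatim}
import sympy as sp

def comm(a, b):
    return a * b - b * a

cases = {
    'elliptic':   ([[0, -1], [1, 0]], [[1, 0], [0, -1]], [[0, 1], [1, 0]]),
    'hyperbolic': ([[0, 1], [1, 0]],  [[1, 0], [0, -1]], [[0, -1], [1, 0]]),
    'parabolic':  ([[0, 1], [0, 0]],  [[1, 0], [0, -1]], [[0, 0], [1, 0]]),
}

c0, c1, c2 = sp.symbols('c0 c1 c2')
for name, mats in cases.items():
    B, P1, P2 = (sp.Matrix(m) for m in mats)
    # decompose [B, P_a] in the basis {B, P1, P2}: reductive iff c0 = 0
    bad = []
    for P in (P1, P2):
        sol = sp.solve(list(c0 * B + c1 * P1 + c2 * P2 - comm(B, P)),
                       [c0, c1, c2], dict=True)[0]
        bad.append(sol.get(c0, 0))
    print(name, 'reductive' if all(b == 0 for b in bad) else 'not reductive')
\end{verbatim}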
We shall describe $\operatorname{SL}(2,\mathbb{R})$-invariant particle dynamics on each of
the coset manifolds $\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$, where $\mathscr{H}$ is either an elliptic,
hyperbolic or parabolic subgroup. To do so we will parametrise a
neighbourhood of the identity of $\operatorname{SL}(2,\mathbb{R})$ via $g : \mathbb{R}^3 \to
\operatorname{SL}(2,\mathbb{R})$ where
\begin{equation}
g(x,y,z) = \underbrace{e^{y P_2} e^{x P_1}}_{g_0} \underbrace{e^{z B}}_b,
\end{equation}
where $g_0$ is a coset representative for the spacetime and $b$
corresponds to the extra generator that is broken by the presence of
the particle. Notice that $B,P_1,P_2$ are defined differently in each of
the three cases, as shown above, consistent with this interpretation.
The left-invariant Maurer--Cartan one-form on $\operatorname{SL}(2,\mathbb{R})$ pulls back
to $g^{-1}dg \in \Omega^1(\mathbb{R}^3,\mathfrak{g})$. Choosing $\alpha \in \mathfrak{g}^*$, we
have that
$L_\alpha := \left<\alpha,g^{-1}dg\right> \in \Omega^1(\mathbb{R}^3)$, where
we have used $\left<-,-\right>$ to denote the dual pairing between
$\mathfrak{g}$ and $\mathfrak{g}^*$. Let $I := [a,b] \subset \mathbb{R}$ and let
$\gamma : I \to \mathbb{R}^3$ be a regular curve. We may pull back
$L_\alpha$ via $\gamma$ to produce a one-form
$\gamma^* L_\alpha \in \Omega^1(I)$ which we may integrate to arrive
at the following action functional:
\begin{equation}
S_\alpha = \int_I \gamma^* L_\alpha.
\end{equation}
We will see that after partially solving the Euler--Lagrange
equations, $S_\alpha$ will induce an action for particle dynamics in
$\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$.
\subsubsection{Particle dynamics on the hyperbolic plane}
\label{sec:part-dynam-H}
Despite the name, the hyperbolic plane $\mathsf{H}_2$ is the quotient of
$\operatorname{SL}(2,\mathbb{R})$ by an elliptic subgroup. Let us write
$ g^{-1}dg = \theta^1 P_1 + \theta^2 P_2 + \theta^3 B$, where
\begin{equation}
\begin{split}
\theta^1 &= \cosh(2x) \cos(2z) dy - \sin(2z) dx\\
\theta^2 &= \cos(2z) dx + \cosh(2x) \sin(2z) dy\\
\theta^3 &= dz + \sinh(2x) dy.
\end{split}
\end{equation}
The invariant metric on $\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$ is given (up to homothety)
by
\begin{equation}
ds^2 = (\theta^1)^2 + (\theta^2)^2 = dx^2 + \cosh(2x)^2 dy^2.
\end{equation}
The action is given by
\begin{equation}
S_\alpha = \int_a^b \left( \alpha_1 (\cosh(2x) \cos(2z) \dot y -
\sin(2z) \dot x) + \alpha_2 (\cos(2z) \dot x + \cosh(2x)\sin(2z)
\dot y)+ \alpha_3(\dot z + \sinh(2x) \dot y) \right) dt.
\end{equation}
The Euler--Lagrange equation for $z$ is simply $\frac{\d L}{\d z} =
0$, which is equivalent to
\begin{equation}
\alpha_1 (\cosh(2x) \sin(2z) \dot y + \cos(2z) \dot x) = \alpha_2
(\cosh(2x)\cos(2z)\dot y - \sin(2z)\dot x),
\end{equation}
from where we may solve (implicitly) for $z$ as follows:
\begin{equation}
\tan (2z) = \frac{\alpha_2 \cosh(2x) \dot y - \alpha_1 \dot
x}{\alpha_1 \cosh(2x)\dot y + \alpha_2 \dot x}.
\end{equation}
Reinserting into the action (and dropping total derivatives), we
arrive at
\begin{equation}
S'_\alpha = \int_a^b \left(\sqrt{\alpha_1^2 + \alpha_2^2} \sqrt{\dot
x^2 + \cosh(2x)^2 \dot y^2} + \alpha_3 \sinh(2x) \dot y \right) dt.
\end{equation}
We recognise the first term as the line element in $\mathsf{H}_2$ with
hyperbolic metric
\begin{equation}\label{eq:metric-H}
ds^2 = (\alpha_1^2 + \alpha_2^2) (dx^2 + \cosh(2x)^2 dy^2),
\end{equation}
whereas the second term is the coupling to a Maxwell field
\begin{equation}
A = \alpha_3 \sinh(2x) dy,
\end{equation}
whose field strength is
\begin{equation}
F = dA = 2 \alpha_3 \cosh(2x) dx \wedge dy =
\frac{2\alpha_3}{\alpha_1^2 + \alpha_2^2} d\text{vol},
\end{equation}
where $d\text{vol}$ is the hyperbolic area form of the metric in
equation~\eqref{eq:metric-H}.
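As an aside, the passage from $S_\alpha$ to $S'_\alpha$ can be checked
symbolically: on the branch where $\sin(2z) = N/\sqrt{N^2+D^2}$ and
$\cos(2z) = D/\sqrt{N^2+D^2}$, with $N = \alpha_2\cosh(2x)\dot y -
\alpha_1\dot x$ and $D = \alpha_1\cosh(2x)\dot y + \alpha_2\dot x$, the
$z$-dependent part of the lagrangian becomes $\sqrt{N^2+D^2}$, and the
first term of $S'_\alpha$ follows from the factorisation $N^2+D^2 =
(\alpha_1^2+\alpha_2^2)(\dot x^2 + \cosh(2x)^2\dot y^2)$, verified by
the following sympy sketch:
\begin{verbatim}
import sympy as sp

a1, a2, x, xd, yd = sp.symbols('alpha1 alpha2 x xdot ydot', real=True)

N = a2 * sp.cosh(2 * x) * yd - a1 * xd   # tan(2z) = N/D at the critical point
D = a1 * sp.cosh(2 * x) * yd + a2 * xd

target = (a1**2 + a2**2) * (xd**2 + sp.cosh(2 * x)**2 * yd**2)
assert sp.expand(N**2 + D**2 - target) == 0
\end{verbatim}
The de~Sitter case of the next section works the same way, with
hyperbolic functions of $2z$ and $N^2+D^2$ replaced by $D^2-N^2$.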
\subsubsection{Particle dynamics on de~Sitter spacetime}
\label{sec:part-dynam-dS}
This case is very similar \emph{mutatis mutandis} to the previous
case, although now we quotient by a hyperbolic subgroup. Again we
write $g^{-1}dg = \theta^1 P_1 + \theta^2 P_2 + \theta^3 B$, where
\begin{equation}
\begin{split}
\theta^1 &= \cosh(2x) \cosh(2z) dy + \sinh(2z) dx\\
\theta^2 &= \cosh(2z) dx + \cosh(2x) \sinh(2z) dy\\
\theta^3 &= dz - \sinh(2x) dy.
\end{split}
\end{equation}
The invariant metric on $\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$ is now given (up to homothety)
by
\begin{equation}
ds^2 = (\theta^1)^2 - (\theta^2)^2 = - dx^2 + \cosh(2x)^2 dy^2.
\end{equation}
As we see from the metric, $x$ is a time coordinate and $y$ is a space
coordinate. This metric could also be re-interpreted as
$\mathsf{AdS}_2$, by reinterpreting $x$ as space and $y$ as time.
The action is now given by
\begin{equation}
S_\alpha = \int_a^b \left( \alpha_1 (\cosh(2x) \cosh(2z) \dot y +
\sinh(2z) \dot x) + \alpha_2 (\cosh(2z) \dot x + \cosh(2x)\sinh(2z)
\dot y)+ \alpha_3(\dot z - \sinh(2x) \dot y) \right) dt.
\end{equation}
The Euler--Lagrange equation for $z$ is again simply $\frac{\d L}{\d z} =
0$, which translates into
\begin{equation}
\alpha_1 (\cosh(2x) \sinh(2z) \dot y + \cosh(2z) \dot x) + \alpha_2
(\cosh(2x)\cosh(2z)\dot y + \sinh(2z)\dot x) = 0,
\end{equation}
and which allows us to solve for $z$ implicitly:
\begin{equation}
\tanh (2z) = \frac{-(\alpha_2 \cosh(2x) \dot y + \alpha_1 \dot x)}{\alpha_1 \cosh(2x)\dot y + \alpha_2 \dot x}.
\end{equation}
Reinserting into the action (and dropping total derivatives), we
arrive at (see also \cite[eq.~(5.10)]{Anabalon:2006ii})
\begin{equation}
S'_\alpha = \int_a^b \left(\sqrt{\alpha_1^2 - \alpha_2^2} \sqrt{-\dot
x^2 + \cosh(2x)^2 \dot y^2} - \alpha_3 \sinh(2x) \dot y \right) dt.
\end{equation}
We recognise the first term as the line element in $\mathsf{dS}_2$ with
metric
\begin{equation}\label{eq:metric-dS}
ds^2 = (\alpha_1^2 - \alpha_2^2) (-dx^2 + \cosh(2x)^2 dy^2),
\end{equation}
whereas the second term is the coupling to a Maxwell field
\begin{equation}
A = -\alpha_3 \sinh(2x) dy,
\end{equation}
whose field strength is
\begin{equation}
F = dA = - 2 \alpha_3 \cosh(2x) dx \wedge dy =
\frac{2\alpha_3}{\alpha_2^2 - \alpha_1^2} d\text{vol},
\end{equation}
where $d\text{vol}$ is now the area form of the de Sitter metric in
equation~\eqref{eq:metric-dS}.
\subsubsection{Particle dynamics on the lightcone}
\label{sec:part-dynam-L}
Finally, we discuss the parabolic case. Again we write $g^{-1}dg =
\theta^1 P_1 + \theta^2 P_2 + \theta^3 B$, where now
\begin{equation}
\begin{split}
\theta^1 &= e^{-2x}dy \\
\theta^2 &= dx - z e^{-2x} dy\\
\theta^3 &= dz - 2 z dx + z^2 e^{-2x} dy.
\end{split}
\end{equation}
There is no $\operatorname{SL}(2,\mathbb{R})$-invariant metric here, but only a carrollian
structure $(\kappa,\eta)$, where the carrollian vector field is $\kappa =
z \d_x + e^{2x} \d_y$ and the carrollian degenerate metric is given by
$\eta = (dx - z e^{-2x}dy)^2$.
The action is now given by
\begin{equation}
S_\alpha = \int_a^b \left( \alpha_1 e^{-2x} \dot y + \alpha_2 (\dot
x - z e^{-2x}\dot y) + \alpha_3 (\dot z - 2z \dot x + z^2 e^{-2x} \dot y) \right) dt.
\end{equation}
The Euler--Lagrange equation for $z$ is again simply $\frac{\d L}{\d z} =
0$, which is easily solved for $z$:
\begin{equation}
z = \frac{\alpha_2}{2\alpha_3} + e^{2x} \frac{\dot x}{\dot y}.
\end{equation}
Reinserting into the action (and dropping total derivatives), we
arrive at
\begin{equation}
S'_\alpha = \int_a^b \left(\left( \alpha_1 -
\frac{\alpha_2^2}{4\alpha_3}\right) e^{-2x} \dot y^2 - \alpha_3 e^{2x}
\dot x^2 \right)\frac{dt}{\dot y}.
\end{equation}
Choosing the ``static gauge'' where $\dot y = 1$ and changing
variables to $u = e^x$, we arrive at the following action
\begin{equation}
S''_\alpha = \int_a^b \left(\left( \alpha_1 -
\frac{\alpha_2^2}{4\alpha_3}\right) u^{-2} - \alpha_3 \dot u^2 \right) dt,
\end{equation}
which we recognise as a version of the one-dimensional conformal
mechanics of \cite{deAlfaro:1976vlx}.
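Its equation of motion is the familiar inverse-cube force; a quick
sympy check:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
a1, a2, a3 = sp.symbols('alpha1 alpha2 alpha3', positive=True)
u = sp.Function('u')

L = (a1 - a2**2 / (4 * a3)) / u(t)**2 - a3 * u(t).diff(t)**2
eq, = euler_equations(L, [u(t)], [t])
print(sp.solve(eq, u(t).diff(t, 2))[0])
# (alpha1 - alpha2**2/(4*alpha3))/(alpha3*u(t)**3): an inverse-cube force
\end{verbatim}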
\subsubsection{One-dimensional Schwarzian particle}
\label{sec:schwarzian}
Here we will rederive the $\operatorname{SL}(2,\mathbb{R})$-invariant Schwarzian action of
\cite{Kitaev:2017awl,Maldacena:2016hyu,Stanford:2017thb} using the
method of nonlinear realisations and the inverse Higgs mechanism
applied to $\operatorname{SL}(2,\mathbb{R})$. An alternative derivation using nonlinear
realisations for $\operatorname{SL}(2,\mathbb{R})\times \mathbb{R}^+$ can be found in
\cite{Galajinsky:2019lak}.
Let $\mathbb{R} P^1$ denote the real projective line: the space of straight
lines through the origin in the plane $\mathbb{R}^2$. The real projective
line is diffeomorphic to the circle. Given a diffeomorphism $\varphi : \mathbb{R}
P^1 \to \mathbb{R} P^1$, we define its \emph{Schwarzian derivative} (or
simply its Schwarzian) by the formula
\begin{equation}\label{eq:schwarzian-derivative}
\operatorname{Sch}(\varphi) := \frac{\varphi'''}{\varphi'} - \frac32
\left(\frac{\varphi''}{\varphi'} \right)^2,
\end{equation}
where the primes represent derivatives with respect to the local
coordinate on $\mathbb{R} P^1$. The Schwarzian defines a quadratic
differential on $\mathbb{R} P^1$ or, in physical terms, a quasiprimary field
with weight $2$ under diffeomorphisms of the circle and it plays an
important rôle in projective geometry (see, e.g., \cite{MR2177471}).
One of its most important properties is its invariance under
$\operatorname{PSL}(2,\mathbb{R})$ Möbius transformations:
\begin{equation}
\varphi \mapsto \frac{a \varphi + b}{c \varphi + d}
\qquad\text{with}\qquad ad - bc = 1.
\end{equation}
We will re-use the basis for $\sl(2,\mathbb{R})$ in
Section~\ref{sec:part-dynam-L}, but with a change of notation to
reflect that $\operatorname{SL}(2,\mathbb{R})$ is the one-dimensional group of
conformal transformations. Therefore we introduce the basis $H,K,D$
for $\sl(2,\mathbb{R})$ where
\begin{equation}
K = \begin{pmatrix}0 & 1 \\ 0 &
0 \end{pmatrix}, \qquad D = \begin{pmatrix}1 & 0 \\ 0 &
-1 \end{pmatrix} \qquad\text{and}\qquad H = \begin{pmatrix}0 & 0 \\
1 & 0 \end{pmatrix}.
\end{equation}
Here $H$ generates translations, $K$ generates special conformal
transformations and $D$ generates dilatations, which are all the
one-dimensional conformal transformations. The ad-invariant inner
product on $\sl(2,\mathbb{R})$, which is a multiple of the Killing form, can
be normalised to $\left<D,D\right>=2$ and $\left<H,K\right>=1$ in this
basis.
We will choose a local chart $(\rho,y,u)$ for $\operatorname{SL}(2,\mathbb{R})$ different
from the one we introduced in Section~\ref{sec:part-dynam-L} to derive
the lagrangian for one-dimensional conformal mechanics. We shall
parametrise group elements near the identity by
\begin{equation}
g = \underbrace{e^{\rho H}}_{g_0} e^{yK} e^{uD},
\end{equation}
where $g_0$ is a local coset representative for the one-dimensional
conformal space thought of as the coset space $\operatorname{SL}(2,\mathbb{R})/\mathscr{H}$, with
$\mathscr{H}$ the two-dimensional non-abelian Lie group generated by $K$ and
$D$. Explicitly, the above parametrisation is
\begin{equation}
g =
\begin{pmatrix}
e^u & e^{-u} y\\ e^u \rho & e^{-u} (1 + \rho y)
\end{pmatrix}.
\end{equation}
The Maurer--Cartan form is given by
\begin{equation}
\begin{split}
\Omega = g^{-1}dg &= \Omega_H H + \Omega_D D + \Omega_K K \\
&= e^{2u} d\rho H + (du - y d\rho) D + e^{-2u} (dy - y^2 d\rho) K.
\end{split}
\end{equation}
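Since $H$ and $K$ are nilpotent, the group element and the
Maurer--Cartan form above are easy to check with \texttt{sympy}
matrices (a sketch; the derivative along a curve parametrised by $t$
stands in for the exterior derivative):
\begin{verbatim}
# verify the Maurer-Cartan form of SL(2,R) in the chart (rho, y, u)
from sympy import symbols, Function, Matrix, eye, exp, simplify

t = symbols('t')
rho, y, u = Function('rho')(t), Function('y')(t), Function('u')(t)

H = Matrix([[0, 0], [1, 0]])
K = Matrix([[0, 1], [0, 0]])
D = Matrix([[1, 0], [0, -1]])

# H^2 = K^2 = 0, so the exponentials truncate
g = (eye(2) + rho*H)*(eye(2) + y*K)*Matrix([[exp(u), 0],
                                            [0, exp(-u)]])
Omega = simplify(g.inv()*g.diff(t))

target = exp(2*u)*rho.diff(t)*H + (u.diff(t) - y*rho.diff(t))*D \
    + exp(-2*u)*(y.diff(t) - y**2*rho.diff(t))*K
print(simplify(Omega - target))             # prints the zero matrix
\end{verbatim}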
In contrast to what we did in Section~\ref{sec:part-dynam-L}, the
lagrangian here will not be linear in the components of the
Maurer--Cartan form, but rather quadratic, resulting from applying the
inverse Higgs mechanism to the lagrangian for geodesic motion on
$\operatorname{SL}(2,\mathbb{R})$ relative to the bi-invariant metric
\begin{equation}
\left<\Omega,\Omega\right> = 2 \Omega_D^2 + 2 \Omega_H \Omega_K = 2 \left( du^2 - 2 y du d\rho + d\rho dy \right).
\end{equation}
The geodesic lagrangian is obtained by pulling back the metric to
the interval parametrising the world-line of the particle:
\begin{equation}\label{eq:geodesics-sl2R}
L = \tfrac12 \left<g^{-1}g', g^{-1} g'\right> = u'^2 + (y' - 2 y u') \rho',
\end{equation}
where we are using primes to denote differentiation with respect to
the parameter of the world-line of the particle. This lagrangian is
invariant under both left and right multiplication by
$\operatorname{SL}(2,\mathbb{R})$. For example, under infinitesimal left multiplication
with parameter $\alpha H + \beta D + \gamma K$, we have
\begin{equation}\label{eq:leftglobalsymmSch}
\delta u = \beta + \gamma \rho, \qquad \delta \rho = \alpha - 2
\beta \rho + \gamma \rho^2 \qquad\text{and}\qquad \delta y = \gamma
+ 2 y \beta + 2 \rho y \gamma.
\end{equation}
We recognise in \eqref{eq:leftglobalsymmSch} the transformation of
the Goldstone field $\rho$ under an infinitesimal Möbius
transformation.
We can reduce the number of Goldstone fields in the action by imposing
some conditions on the Maurer--Cartan form, a procedure also known as
the inverse Higgs mechanism \cite{Ivanov:1975zq}. In the present
context, the conditions are familiar from Drinfel'd--Sokolov reduction
\cite{Drinfeld:1984qv,Polyakov:1989dm} and are given by
\begin{equation}\label{eq:constraintSL2R}
\Omega_H = 1 \qquad\text{and}\qquad \Omega_D = 0.
\end{equation}
This breaks the symmetry of the lagrangian under right multiplication,
leaving only the global symmetry described infinitesimally in
equation~\eqref{eq:leftglobalsymmSch}.
We can solve the constraints~\eqref{eq:constraintSL2R} explicitly for
$u,y$ in terms of $\rho$:
\begin{equation}\label{eq:constraintSL2R-solved}
y = \frac{u'}{\rho'} \qquad\text{and}\qquad u = \tfrac12 \log\left( \frac1{\rho'} \right).
\end{equation}
It follows that
\begin{equation}
u'= -\tfrac12 \frac{\rho''}{\rho'} \qquad\text{and}\qquad y' =
\frac{\rho''^2}{\rho'^3} - \frac12 \frac{\rho'''}{\rho'^2}.
\end{equation}
Substituting this in the lagrangian \eqref{eq:geodesics-sl2R}, we
obtain
\begin{equation}
L = -\frac12 \left( \frac{\rho'''}{\rho'} - \frac32
\left(\frac{\rho''}{\rho'} \right)^2 \right) = -\tfrac12 \operatorname{Sch}(\rho).
\end{equation}
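The substitution leading to this result can again be checked
symbolically; the following \texttt{sympy} sketch implements the
constraints \eqref{eq:constraintSL2R-solved} directly:
\begin{verbatim}
# verify that the reduced geodesic lagrangian is -Sch(rho)/2
from sympy import symbols, Function, Rational, log, simplify

t = symbols('t')
rho = Function('rho')(t)
u = Rational(1, 2)*log(1/rho.diff(t))
y = u.diff(t)/rho.diff(t)

L = u.diff(t)**2 + (y.diff(t) - 2*y*u.diff(t))*rho.diff(t)
Sch = rho.diff(t, 3)/rho.diff(t) \
    - Rational(3, 2)*(rho.diff(t, 2)/rho.diff(t))**2
print(simplify(L + Rational(1, 2)*Sch))     # prints 0
\end{verbatim}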
In summary, it is possible to obtain the Schwarzian action using the
inverse Higgs mechanism. It is also possible to obtain the Schwarzian
action by integrating out the gauge transformations of the particle
model with variables $x^\mu, \lambda$ and lagrangian $L=\frac 12 \dot
x^2-\frac 12 \lambda x^2$, as in \cite{Siegel:1988ru,Gomis:1993pp}.
\subsection{Non-relativistic limit of relativistic particle actions}
\label{sec:nrlimits-particles}
In this section we obtain some of the non-lorentzian particle
dynamics studied in the previous sections as non-relativistic limits
of relativistic particles.
\subsubsection{Non-relativistic limit of the $\mathsf{AdS}_{d+1}$ particle action}
We consider the action of a massive particle propagating in
$\mathsf{AdS}_{d+1}$, with metric
\begin{equation}\label{eq:AdS-metric}
ds^2 = - \cosh^2{\frac{r}{R}}(dX^0)^2+\left(
\frac{\sinh{\frac{r}{R}}}{{\frac{r}{R}}}\right)^2 (dX^a)^2-
\left(\left( \frac{\sinh{\frac{r}{R}}}{{\frac{r}{R}}}\right)^2-1\right)
(dr)^2,
\end{equation}
where $r=\sqrt{X_a X^a}$. The particle lagrangian is that describing
geodesic motion on this geometry:
\begin{equation}\label{eq:AdS-lag}
L = -m\sqrt{\cosh^2{\frac{r}{R}}(\dot X^0)^2-\left(
\frac{\sinh{\frac{r}{R}}}{{\frac{r}{R}}}\right)^2 (\dot X^a)^2+
\left(\left(\frac{\sinh{\frac{r}{R}}}{{\frac{r}{R}}}\right)^2-1\right)
(\dot r)^2}.
\end{equation}
In order to take the non-relativistic limit, we introduce an
invertible change of variables with a dimensionless parameter $\omega$:
\begin{equation}
X^0=\omega x^0, \qquad m=\omega M \qquad\text{and}\qquad R= \omega \tilde R.
\end{equation}
After this change of variable, the lagrangian becomes
\begin{equation}
L = -M\omega^2 \dot x^0 + \frac{M(\dot x^a)^2}{2\dot x^0}- \dot
x^0\frac{Mr^2}{2\tilde R^2} + O(\omega^{-2}).
\end{equation}
The omitted terms $O(\omega^{-2})$ will not contribute in the limit
$\omega \to \infty$, but this limit is problematic due to the presence
of a quadratically divergent term. This term may be cancelled if we
introduce at the relativistic level a coupling to a constant
electromagnetic field \cite{Gomis:2000bd,Gomis:2005pg} $A$ (with
$F=dA=0$) in such a way that we preserve the same physical degrees of
freedom:
\begin{equation}
L_{\text{em}}= A_\mu \dot X^\mu, \quad A_\mu=(M\omega, \vec 0),
\end{equation}
where the constant components $A_\mu$ refer to the coordinate basis of
the metric~\eqref{eq:AdS-metric}, so that indeed $F=dA=0$.
Doing so and taking the limit $\omega \to \infty$, the lagrangian
becomes
\begin{equation}
L=\frac{M(\dot x^a)^2}{2\dot x^0}-\dot x^0\frac{M r^2}{2\tilde R^2},
\end{equation}
which takes the expected form of the reparametrization invariant
$\mathfrak{n}^-$-particle lagrangian in equation~\eqref{eq:BNHLag}.
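The cancellation of the divergence and the finite result can be
checked with a series expansion. In the following \texttt{sympy}
sketch we write $\omega = 1/\epsilon$, collect the worldline
velocities into single symbols (\texttt{x0d} for $\dot x^0$,
\texttt{xa2} for $(\dot x^a)^2$, \texttt{rd} for $\dot r$; our own
names) and pull the overall factor $1/\epsilon^2$ out of the square
root by hand so that the expansion is regular:
\begin{verbatim}
# expand the AdS particle lagrangian plus counterterm in 1/omega
from sympy import symbols, sqrt, cosh, sinh, simplify

M, Rt, r, x0d, xa2, rd, eps = symbols(
    'M Rt r x0d xa2 rd eps', positive=True)

s = sinh(r*eps/Rt)/(r*eps/Rt)          # r/R = r*eps/Rt since R = Rt/eps
inside = cosh(r*eps/Rt)**2*x0d**2 \
    - eps**2*(s**2*xa2 - (s**2 - 1)*rd**2)
L_rel = -(M/eps**2)*sqrt(inside)       # -m sqrt(...), with m = M/eps
L_em = M*x0d/eps**2                    # A_mu Xdot^mu = M omega^2 x0dot

expansion = (L_rel + L_em).series(eps, 0, 1).removeO()
target = M*xa2/(2*x0d) - x0d*M*r**2/(2*Rt**2)
print(simplify(expansion - target))    # prints 0
\end{verbatim}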
\subsubsection{Massless galilean particle and the non-relativistic limit of a tachyon}
Now we will consider the non-relativistic limit of a tachyon. We
start with the relativistic canonical action of a tachyon of mass $m$:
\begin{equation}
S= \int d\tau \left( p_A\dot x^A- \frac{e}{2} \left( {p}^{\, 2} -m^2c^2\right)\right).
\end{equation}
The non-relativistic limit is defined by taking $c \to \infty$ in
\begin{equation}
x^0=ct \qquad\text{and}\qquad p_0=-\frac{E}{c},
\end{equation}
while keeping the \emph{colour} $k=mc$ finite. The action
becomes~\cite{Batlle:2017cfa}
\begin{equation}\label{eq:massG}
S= \int d\tau \left( -E \dot{t} + \vec{p}\cdot \dot{\vec{x}}- \frac{e}{2} \left( \vec{p}^{\, 2} -k^2 \right)\right).
\end{equation}
If we eliminate the momenta we have
\begin{equation}\label{eq:massG1}
S= \int d\tau \left( -E \dot{t} + k \sqrt{\dot{\vec{{x}}}^2}\right).
\end{equation}
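The elimination is straightforward: the equation of motion for
$\vec p$ gives $\vec p = \dot{\vec x}/e$, after which the equation of
motion for the Einbein yields $e = \sqrt{\dot{\vec{x}}^{2}}/k$, so that
\begin{equation}
\vec{p}\cdot \dot{\vec{x}}- \frac{e}{2} \left( \vec{p}^{\, 2} -k^2
\right) = \frac{\dot{\vec{x}}^{2}}{2e} + \frac{e}{2}\, k^2
\longrightarrow k \sqrt{\dot{\vec{x}}^{2}}\,.
\end{equation}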
In this form the action can be interpreted as describing a relativistic tachyonic
particle with an instantaneous interaction \cite{Barducci:2018wuj}. The
field theory associated to this particle model is the galilean
magnetic Klein--Gordon field theory as in
equation~\eqref{case1bflat}.
The non-relativistic limit of the one-dimensional conformal mechanics
and of the Schwarzian particle has been studied in
\cite{Grumiller:2020elf,Gomis:2020wxp}.
\subsection{Carrollian limits of particle actions}
In this section we obtain some of the non-lorentzian particle
dynamics studied in the previous sections as carrollian limits of
relativistic particles.
\subsubsection{Carrollian limit of a massive particle in $\mathsf{AdS}_{d+1}$ background}
We consider the canonical action of a massive particle in
an $\mathsf{AdS}_{d+1}$ background
\begin{equation}
S= \int d\tau \left( p_\mu\dot x^\mu- \frac{e}{2} \left(
g^{\mu\nu}p_\mu p_\nu -m^2\right)\right),
\end{equation}
where $g^{\mu\nu}$ is the inverse metric of \eqref{eq:AdS-metric}.
The carrollian limit is defined by taking $\omega \to \infty$ in
\begin{equation}\label{eq:carrolllimit}
x^0=\frac{t}{\omega}, \qquad p_0=-\omega E \qquad\text{and}\qquad m=M\omega,
\end{equation}
and keeping $R$ fixed. It is understood that, before taking the
limit, we rescale the Einbein variable as
\begin{equation}
e\to\frac{-e}{\omega^2}.
\end{equation}
The carrollian action is given by
\begin{equation}
S_{C} = \int d\tau\, \left( - E \dot{t} + \dot{\vec{x}}\cdot\vec{p}
-\frac{e}{2} \left(\frac{E^2}{\cosh^2\frac{r}{R}} - M^2\right) \right).
\end{equation}
A particle in AdS Carroll does not move: since $\vec p$ enters the
action only through the symplectic term $\dot{\vec x}\cdot\vec p$, its
equation of motion forces $\dot{\vec x}=0$. The Carroll particle in
flat spacetime is obtained by sending $R\to\infty$ and it can be written as
\begin{equation}
S=\int d\tau(-M\sqrt{\dot t^2}+M\vec p\cdot\dot{\vec x}),
\end{equation}
which can be interpreted\footnote{We acknowledge discussions with
Roberto Casalbuoni on this point.} as a timelike relativistic
particle which is at rest in a given point in space: $\dot{\vec x}=0$
\cite{Barducci:2018wuj}. The field theory associated to this particle
model is the Carroll electric Klein--Gordon field theory as in
equation~\eqref{eq:case3c-flat} \cite{Bergshoeff:2014jla}.
The massless Carroll particle is obtained by putting $M=0$.
\subsubsection{Carrollian limit of relativistic tachyon}
We consider the action of a relativistic tachyon, which in
configuration space is given by
\begin{equation}
S=-mc\int d\tau \sqrt{({\dot {\vec x}})^2-({\dot {x}^0})^2}.
\end{equation}
In order to take the carrollian limit, it is useful to introduce the
carrollian time $s$ and mass $M$ given by
\begin{equation}
s=C x^0=Cct \qquad\text{and}\qquad mc=MC.
\end{equation}
Substituting in the action, we obtain
\begin{equation}
S = -M C \int d\tau \sqrt{ \dot{\vec{x}}^{2} - \frac{\dot{s}^2}{C^2}}.
\end{equation}
The Carroll limit in these variables is given by taking
\begin{equation}
C\to\infty \qquad\text{and}\qquad MC\to \tilde M.
\end{equation}
The Carroll lagrangian of a tachyon is thus given by \cite{deBoer:2021jej}
\begin{equation}
L=-\tilde M\sqrt{({\dot {\vec x}})^2}.
\end{equation}
The canonical action is given by
\begin{equation}\label{eq:Carrolltach}
S= \int d\tau \left( -E \dot{t} + \vec{p}\cdot \dot{\vec{x}}-
\frac{e}{2} \left( \vec{p}^{\, 2} -\tilde M^2 \right)-\mu
E\right).
\end{equation}
The quantisation of the two constraints gives the Helmholtz equation together with a time-independence condition:
\begin{equation}
(\nabla^2+\tilde M^2)\Phi(t,\vec x)=0 \qquad\text{and}\qquad
\frac{\partial}{\partial t}\Phi(t,\vec x)=0.
\end{equation}
Note that the Helmholtz equation also appears in the quantisation of
massless galilean particles. The relation between Galilei and Carroll
particles, including their $v/c$ corrections, is analysed in \cite{Gomis:2022spp}.
This ends our discussion of the dynamics of non-lorentzian particles.
\section{Gravity}
\label{sec:gravity}
This section contains three subsections. In the first two subsections
we will describe gravity from a kinematical point of view by a gauging
procedure that uses the Lie algebra of symmetries that underlies the
theory as a starting point. We call it a gauging procedure because
there are additional steps involved as compared to gauging a Lie
algebra of internal symmetries leading to Yang-Mills theory, which makes the
relation between the final result and the original Lie algebra less
direct (see, e.g., \cite{Chamseddine:1976bf}). In the first subsection
we explain this gauging procedure for the relativistic case while in
the second subsection we will focus on three non-Lorentzian algebras:
the Bargmann algebra underlying Newton-Cartan gravity, the Galilei
algebra and the Carroll algebra. In the third subsection we will
describe Newton-Cartan gravity from a dynamical point of view by
defining a suitable non-relativistic limit of the Einstein equations
of motion. Next, we will discuss the non-Lorentzian gravity theories
underlying the Galilei and Carroll algebras, called Galilei gravity
and Carroll gravity, respectively.
\subsection{Gauging the Poincar\'e algebra}
\label{subsec:gauging}
We first consider the relativistic case. Our starting point is the Poincar\'e algebra
\begin{multicols}{2}
\begin{subequations}\label{eq:PoincareAlgebraCommutators}
\setlength{\abovedisplayskip}{-15pt}
\allowdisplaybreaks
\begin{align}
[P_{{A}},P_{{B}}]& = 0\, ,\\
[M_{{A}{B}},P_{{C}}]
& = 2\eta_{{C}[{B}}P_{{A}]}\, , \\
[M_{{A}{B}},M_{{C}{D}}]
&= 4\eta_{[{A}[{C}}M_{{D}]{B}]}\, ,
\end{align}
\end{subequations}
\end{multicols}
\noindent
where $P_{{A}}$ and $M_{{A}{B}}$ are the generators of
spacetime translations and Lorentz transformations, respectively. The
capital indices run over ${A}=0,\dots,d$ and we have chosen the
Minkowski metric to have mostly plus signature. In this subsection we will apply a gauging procedure to this Poincar\'e algebra keeping the relativistic symmetries intact.
As a first step in the gauging procedure, we associate to the translation and Lorentz rotation generators the independent gauge fields $E_\mu{}^{ A}$ and $\Omega_\mu{}^{A B}$ which we call the Vierbein and Lorentz spin-connection, respectively:
\begin{equation}
A_\mu^I T_I = E_\mu{}^{ A} P_{ A} + \frac{1}{2} \Omega_\mu{}^{ A B}M_{ A B}\,.
\end{equation}
These gauge fields transform as covariant vectors under a general coordinate transformation with parameter $\xi^\mu$ while their P-transformations corresponding to the translation generators $P_{ A}$, with parameters $\eta^{ A}$, and their Lorentz transformation rules corresponding to the Lorentz generators $M_{ A B}$, with parameters $\Lambda^{ A B}$, follow from the structure constants $f^I{}_{JK}$ of the Poincar\'e algebra:
\begin{equation}\label{transfgf}
\delta A_\mu^I = \xi^{\lambda}\partial_{\lambda} A_\mu^I +\partial_{\mu}\xi^{\lambda} A_\lambda^I + \partial_\mu\Lambda^I - f^I{}_{JK}\Lambda^J A_\mu^K
\end{equation}
or
\begin{subequations}
\begin{eqnarray}\label{gctL}
\delta E_{\mu}{}^{{A}}
& = &
\xi^{\lambda}\partial_{\lambda} E_{\mu}{}^{{A}}
+\partial_{\mu}\xi^{\lambda} E_{\lambda} {}^{{A}} + \partial_{\mu} \eta^{{A}} -\Omega_{\mu}{}^{{A}}{}_{{B}}\eta^{{B}}
+\Lambda^{{A}}{}_{{B}}E_{\mu}{}^{{B}}\, ,
\\[4pt]
\delta \Omega_{\mu}{}^{{A}{B}}
& = &
\xi^{\lambda}\partial_{\lambda} \Omega_{\mu}{}^{{A}{B}}
+\partial_{\mu}\xi^{\lambda} \Omega_{\lambda}{}^{{A}{B}}
+\partial_{\mu} \Lambda^{{A}{B}}
+2\Lambda^{{C} [{A}}\Omega_{\mu}{}^{{B}]}{}_{{C}}\,.\label{second}
\end{eqnarray}
\end{subequations}
We now wish to argue that in the context of general relativity, the general coordinate transformations, Lorentz rotations and P-transformations do not define three independent symmetries of the Einstein-Hilbert action. To write down such an Einstein-Hilbert action we first define the curvature tensors associated to each gauge field as follows:
\begin{equation}\label{curvatures}
R_{\mu\nu}{}^I(T) = 2\partial_{[\mu} A_{\nu]}^I + \frac{1}{2}f^I{}_{JK}A_\mu^J A_\nu^K
\end{equation}
or
\begin{eqnarray}
R_{\mu\nu}{}^{ A}(P) &=& 2\partial_{[\mu}E_{\nu]}{}^{ A} - 2 \Omega_{[\mu}{}^{ A}{}_{ B}E_{\nu]}{}^{ B}\,\\[.1truecm]
R_{\mu\nu}{}^{ A B}(M) &=& 2\partial_{[\mu}\Omega_{\nu]}{}^{ A B} +2 \Omega_{[\mu}{}^{ A C}\Omega_{\nu]}{}^{ B}{}_{ C}\,.
\end{eqnarray}
The Ricci tensor and Ricci scalar are defined by
\begin{eqnarray}
&&R_\mu{}^{ A}(M) = - E^\nu{}_{ C} R_{\mu\nu}{}^{ C A}(M)\,,\hskip 1.5truecm R(M) = E^\mu{}_{ A} R_\mu{}^{ A}(M),
\end{eqnarray}
where we have used the inverse Vierbein field $E^\mu{}_{ A}$ defined by
\begin{equation}
E^\mu{}_{ A} E_\mu{}^{ B} = \delta_{ A}{}^{ B}\,,\hskip 2truecm E^\mu{}_{ A} E_\nu{}^{ A} = \delta_\nu{}^\mu\,.
\end{equation}
We now consider the Einstein-Hilbert action (without cosmological constant)
\begin{equation}\label{standard}
S_{\rm EH} = \frac{1}{16 \pi {\rm G}_N} \int d^{d+1} x\, E R(M)\,,
\end{equation}
where $E$ is the determinant of the Vierbein field $E_\mu{}^{ A}$ and ${\rm G}_N$ is Newton's constant. By construction this action is invariant under general coordinate transformations and Lorentz rotations. However, except for $d=2$, it is not manifestly invariant under the P-transformations given in equation~\eqref{gctL} of the Poincar\'e algebra. This can for instance be seen by writing the Einstein-Hilbert action \eqref{standard} in the equivalent form\,\footnote{The equivalence between the expressions \eqref{standard} and \eqref{equivalent} can be seen by writing out the definition of the determinant $E$ in equation~\eqref{standard}:
\begin{equation}
E = \frac{1}{(d+1)!} \epsilon^{\mu_0\cdots \mu_{d}}\epsilon_{{ A}_0\cdots { A}_{d}}E_{\mu_0}{}^{{ A}_0}\cdots E_{\mu_{d}}{}^{{ A}_{d}}\,.
\end{equation}}
\begin{equation}\label{equivalent}
S_{\rm EH} = \frac{1}{16 \pi {\rm G}_N} \int d^{d+1} x\,
\epsilon^{\mu_0\cdots \mu_{d}}\epsilon_{{ A}_0\cdots { A}_{d}}E_{\mu_0}{}^{{ A}_0}\cdots E_{\mu_{d-2}}{}^{{ A}_{d-2}}
R_{\mu_{d-1}\mu_{d}}{}^{{ A}_{d-1}{ A}_{d}}(M)\,.
\end{equation}
The special thing about $d=2$ is that the Einstein-Hilbert action as given in \eqref{equivalent} reduces to the Chern-Simons form
\begin{equation}\label{CSform}
S_{\rm EH} = \frac{1}{16 \pi {\rm G}_N} \int d^{3} x\,
\epsilon^{\mu\nu\rho}\epsilon_{ A B C}E_{\mu}{}^{{ A}}
R_{\nu\rho}{}^{ B C}(M)\,,
\end{equation}
which is manifestly invariant under all the gauge symmetries of the three-dimensional Poincar\'e algebra. In $d=3$, one could consider, besides the term
\begin{equation}
\epsilon^{\mu\nu\rho\sigma}\epsilon_{ A B C D}E_\mu{}^{ A}E_\nu{}^{ B} R_{\rho\sigma}{}^{ C D}(M)
\end{equation}
given in \eqref{equivalent}, the so-called Holst term \cite{Holst:1995pc}
\begin{equation}
\alpha \epsilon^{\mu\nu\rho\sigma}E_\mu{}^{ A}E_\nu{}^{ B} R_{\rho\sigma}{}^{ A B}(M)\,,
\end{equation}
where $\alpha$ is a real parameter. The two terms together give rise to the usual Einstein equations. This can be seen by first noting that varying the action with respect to the spin-connection gives the same equation of motion as without the Holst term. This follows from the following identity:
\begin{equation}
X^{ A B} +\alpha \epsilon^{ A B C D}X_{ C D} =0\hskip .5truecm \rightarrow\hskip .5truecm X^{ AB}=0\,,
\end{equation}
where $X^{ AB}$ is a three-form given by
\begin{equation}
X_{\mu\nu\rho}^{ A B} = R_{[\mu\nu}{}^{[ A}(P)E_{\rho]}{}^{ B]}\,.
\end{equation}
The field equation $X^{ A B}=0$ implies $R_{\mu\nu}{}^{ A}(P)=0$ which is a curvature constraint that can be used to solve for the spin-connection as will be explained below, see the solution given in equation~\eqref{solution}. Next, varying the action with respect to the Vierbein, the Holst term does not contribute to the equations of motion for a dependent spin-connection due to the Riemann tensor identity $R_{[ A B C] D}(M)=0$.
Although, for $d>2$, the Einstein-Hilbert action is not invariant under P-transformations, it does transform into terms that vanish upon using the
equation of motion of the spin connection field which is given by
\begin{equation}\label{eomO}
R_{\mu\nu}{}^{{A}}(P)=0\,.
\end{equation}
Such a variation can always be
cancelled by adding terms to the $P$-transformation rule of the spin
connection. After a long calculation one finds the result given in equation~\eqref{Ptransf}. Note that, except for the first term, all terms in the transformation rule \eqref{Ptransf} are proportional to the Ricci tensor and Ricci scalar and therefore vanish upon using the equations of motion corresponding to the inverse Vierbein field $E^\mu{}_{ A}$, i.e.~the Einstein equations:
\begin{equation}
R_\mu{}^{ A}(M) - \frac{1}{2}E_\mu{}^{ A}R(M) =0\,.
\end{equation}
One thus ends up with a set of P-transformations that do not straightforwardly follow from the Poincar\'e algebra. Instead of doing the long calculation mentioned above to obtain the transformation rule \eqref{Ptransf}, there is an easier way to derive the P-transformations of the spin-connection field, by making use of the fact that these P-transformations are not new but, instead, related to the general coordinate transformations and the Lorentz transformations of the Poincar\'e algebra. To show this relation we need to make use of a special symmetry which in the literature is called a `trivial' or `equation of motion' symmetry (see, e.g., \cite{Freedman:2012zz}). These symmetries, which are easier to derive than the P-transformation of the spin-connection field, are called `trivial' because they have the distinguishing
feature that all terms in the transformation rules vanish upon using the equations of motion. They therefore correspond to vanishing Noether charges.
A simple example of a trivial symmetry is provided by the following action describing two real Klein-Gordon scalars $A$ and $B$:
\begin{equation}
S = \int d^{d+1} x\, \frac{1}{2}\big( A\Box A + B\Box B\big)\,.
\end{equation}
This action is invariant under the trivial symmetries with parameter $\lambda$
\begin{equation}
\delta A = \lambda \Box B\,,\hskip 2truecm \delta B = -\lambda \Box A\,.
\end{equation}
We can see this by writing
\begin{equation}\label{seen1}
\delta S = \lambda \frac{\delta S}{\delta\phi^i}\Omega^{ij}\frac{\delta S}{\delta\phi^j} =0
\end{equation}
for $\phi^i = (A,B)$ and using the fact that $\Omega^{ij} = \epsilon^{ij}$ is anti-symmetric.
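In one dimension, where $\Box = d^2/dx^2$, this statement is easily
verified with \texttt{sympy}: the variation of the integrand is a
total derivative, so that the variation of the action vanishes for
suitable boundary conditions. A sketch:
\begin{verbatim}
# check that the trivial-symmetry variation is a total derivative
from sympy import symbols, Function, simplify

x, lam = symbols('x lambda')
A, B = Function('A')(x), Function('B')(x)

dA = lam*B.diff(x, 2)        # delta A = lambda Box B
dB = -lam*A.diff(x, 2)       # delta B = -lambda Box A

# first variation of the integrand of S
var = (dA*A.diff(x, 2) + A*dA.diff(x, 2)
       + dB*B.diff(x, 2) + B*dB.diff(x, 2))/2

# candidate boundary term whose x-derivative reproduces var
bdry = lam*(A*B.diff(x, 3) - A.diff(x)*B.diff(x, 2)
            + A.diff(x, 2)*B.diff(x) - A.diff(x, 3)*B)/2
print(simplify(var - bdry.diff(x)))     # prints 0
\end{verbatim}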
Similarly, the Einstein-Hilbert action is invariant under the following trivial symmetries with parameters $\sigma^{ A}$:
\begin{subequations}
\begin{eqnarray}\label{eom}
\delta E_{\mu}{}^{{A}}
& = &
R_{\mu\nu}{}^{{A}}(P) \sigma^{\nu}\,,
\\[4pt]
\delta \Omega_{\mu}{}^{{A}{B}}
& = &
-R_{\mu}{}^{[{A}}(M) \sigma^{{B}]}
-\frac{1}{2} E_{\mu}{}^{[{A}}R_{{C}}{}^{{B}]}(M) \sigma^{{C}}
+\frac{3}{4} E_{\mu}{}^{[{A}} R(M) \sigma^{{B}]}\,,
\end{eqnarray}
\end{subequations}
with $\sigma^{\nu} \equiv \sigma^{{B}} E^{\nu}{}_{{B}}$. Like in the example of the two scalar fields, the Vierbein field transforms to the equation of motion of the spin-connection field while the spin-connection field transforms to the equation of motion of the Vierbein field, leading to a zero variation of the action as follows:
\begin{equation}
\delta S \sim
\begin{pmatrix}
\frac{\delta S}{\delta E_\mu{}^{ A}} &\frac{\delta S}{\delta \Omega_\rho{}^{ C D}}
\end{pmatrix}
\begin{pmatrix}
0&E^\mu{}_{ C}E^\rho{}_{ A}\\
-E^\mu{}_{ C}E^\rho{}_{ A} &0
\end{pmatrix}
\begin{pmatrix}
\frac{\delta S}{\delta E_\mu{}^{ A}}\\
\frac{\delta S}{\delta \Omega_\rho{}^{ C D}}
\end{pmatrix}
\sigma_{ D} =0\,.
\end{equation}
Using these trivial symmetries, we can write the P-transformation given in
equation~\eqref{gctL} of the Vierbein field as the sum of a special general
coordinate transformation, Lorentz transformation and trivial symmetry
transformation with parameters given by
\begin{equation}
\xi^{\mu}
= \eta^{\mu}\,,
\hskip 1truecm
\Lambda^{{A}{B}}=\eta^{\lambda}\Omega_{\lambda}{}^{{A}{B}}\,,
\hskip 1truecm
\sigma^{{A}} = \eta^{{A}}\,,
\end{equation}
with $\eta^{\mu} \equiv \eta^{{B}} E^{\mu}{}_{{B}}$.
Since the same decomposition rule must apply to the spin-connection field, it follows that the P-transformation of this spin-connection field is given by
\begin{eqnarray}\label{Ptransf}
\delta_\eta \Omega_{\mu}{}^{{A}{B}}
& = &
\eta^{\lambda} R_{\lambda\mu}{}^{{A}{B}}(M)
+R_{\mu}{}^{[{A}}(M) \eta^{{B}]}
+E_{\mu}{}^{[{A}}R_{{C}}{}^{{B}]}(M) \eta^{{C}}
+E_{\mu}{}^{[{A}} R(M) \eta^{{B}]}\,,
\end{eqnarray}
which is the same expression that one obtains by requiring that the Einstein-Hilbert term is invariant under P-transformations.
Summarizing, the P-transformations given in eqs.~\eqref{gctL}, \eqref{second}, \eqref{Ptransf} and the general coordinate transformations given in the same equations~\eqref{gctL} and \eqref{second} do not define two independent symmetries of the first-order Einstein-Hilbert action \eqref{standard}. In fact, if they were independent symmetries, the theory would
have no propagating degrees of freedom left. Both symmetries have their
advantages. On the one hand, the general
coordinate transformations have a nice geometrical interpretation, but, on the other hand, the
$P$-transformations are more directly related to the underlying Poincar\'e algebra.
When taking the non-relativistic limit of general relativity in subsection \ref{subsec:limi}, we prefer to work with the second-order formulation of general relativity. The reason for this is that for matter-coupled gravity theories, such as supergravity, it is more convenient to work in such a second-order formulation.
In that case, it is understood that the equation of motion \eqref{eomO} has been used to solve for the spin-connection field $\Omega$ in terms of the Vierbeine $E_\mu{}^{ A}$ and their inverses $E^\mu{}_{ A}$. To solve this constraint it is convenient to introduce the notation
\begin{equation}
E_{\mu\nu}{}^{ A} \equiv \partial_{[\mu}E_{\nu]}{}^{ A}
\end{equation}
and to write the constraint \eqref{eomO} in terms of flat indices as
\begin{equation}\label{flatcurvature}
2E_{ A B C} -\Omega_{ A C B} + \Omega_{ B C A}=0\,.
\end{equation}
Following the derivation of the Christoffel symbols in general relativity, we write this equation three times with the flat indices cyclically interchanged and multiply one of the three equations by a minus sign. Adding up the three resulting equations leads to the solution
\begin{equation}\label{solution}
\Omega_{ A B C} = 2E_{ A[ B C]} +E_{ B C A}\hskip .3truecm \textrm{or}\hskip .3truecm
\Omega_\mu{}^{ A B} = -2 E_\mu{}^{[ A B]} + E^{ A B}{}_\mu\,.
\end{equation}
The independent fields are then given by the Vierbein fields $E_\mu{}^{ A}$ only. They transform under general coordinate transformations and local Lorentz rotations as follows:
\begin{eqnarray}
\delta E_\mu{}^{ A} &=& \xi^\lambda\partial_\lambda E_\mu{}^{ A} + \partial_\mu \xi^\lambda E_\lambda{}^{ A}
+\Lambda^{ A}{}_{ B} E_\mu{}^{ B}\,.
\end{eqnarray}
The general coordinate transformations are not affected by the NR limit we consider in subsection \ref{subsec:limi}: they are the same before and after taking the limit.
\subsection{Gauging non-Lorentzian algebras}
\label{subsec:gauging2}
We next consider the non-Lorentzian case. There are several non-Lorentzian algebras we could consider. As specific examples we will consider the Galilei algebra, its central extension called the Bargmann algebra and the so-called Carroll algebra.
\vskip .3truecm
\noindent {\bf The Galilei algebra.}\ \ Before discussing the Bargmann algebra that underlies the symmetries of NC gravity, we will first, as a warm-up exercise, briefly discuss
the special case of the Bargmann algebra with {\it zero} central extension, i.e.~the Galilei algebra. In the next Chapter, we will show how the symmetries corresponding to the Galilei algebra arise if one takes the so-called Galilei limit of a real Klein-Gordon scalar field. Here, we will show how the Galilei algebra can be obtained as a particular contraction of the Poincar\'e algebra and how the structure constants of this Galilei algebra fix the transformation rules of the gauge fields under the Galilei symmetries.
To show how the Galilei algebra is obtained by a contraction of the Poincar\'e algebra, we first decompose the relativistic flat Lorentz index ${ A}$ into ${ A}=\{0,a\} $ with $a=(1,\dots,d)$, and redefine, using a contraction parameter $\omega$, the Poincar\'e generators according to
\begin{eqnarray}
P_0 &=& \omega^{-1} H \,, \label{eq: GalP0 redef} \\
M_{a0} &=& \omega G_a \,, \label{eq: GalJ0a redef}
\end{eqnarray}
where $H$ and $G_a$ are the generators of time translations and boosts, respectively. The generators $P_{a}$ of space translations and $J_{ab}$ of spatial rotations are not redefined. Next, taking the limit $\omega \rightarrow \infty$ we obtain the following Galilei algebra:
\begin{eqnarray}\label{Galgebra}
&&[J_{ab}, P_c] = 2\delta_{c[a}P_{b]}\,,\hskip 1.5truecm [J_{ab}, G_c] = 2\delta_{c[a}G_{b]}\,,\nonumber\\[.2truecm]
&&[J_{ab}, J_{cd}] = 4
\delta_{[a[d}\,J_{c]b]}\,,\hskip 1.3truecm [H,G_{a}] = P_{a}\,.
\end{eqnarray}
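Note that only some of the brackets actually require the limit: one
finds exactly $[H,G_{a}] = [\omega P_0\,, \omega^{-1}M_{a0}] = P_{a}$,
whereas
\begin{equation}
[G_{a},G_{b}] = \omega^{-2} J_{ab} \qquad\text{and}\qquad
[P_{a},G_{b}] = \omega^{-2}\delta_{ab}\, H
\end{equation}
only vanish for $\omega \rightarrow \infty$.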
Following \cite{Andringa:2010it}, we next associate to each generator of the Galilei algebra a gauge field as follows:
\begin{equation}
A_\mu^I T_I = \tau_\mu H + e_\mu{}^{a} P_{a} + \frac{1}{2} \omega_\mu{}^{ ab}J_{ab}+ \omega_\mu{}^{a}G_{a}\,.
\end{equation}
Using the general formula \eqref{transfgf}, the gauge fields transform as 1-forms under general coordinate transformations while under spatial rotations with parameters $\lambda^{a}{}_{b}$ and Galilean boosts with parameters $\lambda^{a}$ they transform as follows:
\begin{eqnarray} \label{Gtransf1}
\delta \tau_\mu &=& 0\,, \\[.1truecm]
\delta e_\mu{}^{a} &=& \lambda^{a}\tau_\mu + \lambda^{a}{}_{b}\, e_\mu{}^{b} \,, \\[.1truecm]
\delta \omega_\mu{}^{ab} &=& (D_\mu \lambda)^{a b}, \label{eq: Gal trans omegaab}\\[.1truecm]
\delta \omega_\mu{}^{a} &=& (D_\mu \lambda)^{a} + \lambda^{a}{}_{b}\omega_\mu{}^{b} \,.\label{Gtransf4}
\end{eqnarray}
Here $D_\mu$ is the covariant derivative with respect to spatial rotations, e.g., $(D_\mu\lambda)^{a}= \partial_\mu\lambda^{a} -\omega_\mu{}^{ab}\lambda^{b}$.
\vskip .3truecm
\noindent {\bf The Bargmann algebra.}\ \ Our starting point is now the centrally extended Galilei algebra which is called the Bargmann algebra. The reason that we need to add one more generator to the Galilei algebra, which has the same number of generators as the Poincar\'e algebra, is that in the relativistic case energy is equivalent to mass but in the non-relativistic case energy and mass are two separately conserved quantities. The corresponding Noether symmetries lead to two generators in the Bargmann algebra: the time translation generator corresponding to the conservation of energy and the central charge or mass generator corresponding to the conservation of mass.
The Bargmann algebra can be obtained by performing a special Wigner-In\"on\"u contraction of the direct product of the Poincar\'e algebra given in equation~\eqref{eq:PoincareAlgebraCommutators} with a U(1) algebra with generator $\mathcal{Z}$.
As a first step we make the following invertible redefinition of the relativistic generators
\begin{align}\begin{split}\label{contraction}
P_0 &= \frac{1}{2\omega}\,H +\omega\,Z \,,
\hskip1.5cm M_{ab} = J_{ab} \,,
\hskip1.5cm M_{a0}=\omega\, G_{a} \,,\\[.1truecm]
P_{a} &= P_{a}\,, \hskip3.5cm \mathcal{Z}= \frac{1}{2\omega}\, H -\omega\, Z\,,
\end{split}\end{align}
where $\omega$ is a (dimensionless) contraction parameter and where we have decomposed the flat space-time index $ A$ into a time-like $0$-index and
spatial $a$-indices as $A = (0,a)$. Note the off-diagonal nature of the redefinitions. Had we restricted ourselves to rescaling each generator separately, we would only have been able to obtain the Galilei algebra times U(1). Using the redefinition \eqref{contraction}, we find that the redefined generators, after taking the limit that $\omega$ goes to infinity, generate the following Bargmann algebra:
\begin{align}\begin{split}\label{Bargmannalg}
\big[P_{a},J_{bc}\big] &= 2\,\delta_{a[b}\,P_{c]} \,, \hskip1cm \big[J_{ab},J_{cd}\big]=4\,\delta_{[a[c}\,J_{d]b]} \,,\\[.1truecm]
\big[G_{a},J_{bc}\big] &= 2\,\delta_{a[b}\,G_{c]} \,, \hskip1.8cm \big[H,G_{a}\big] = P_{a} \,,
\hskip1.5cm \big[P_{a},G_{b}\big] = \delta_{ab}\,Z \,,
\end{split}\end{align}
where the generator $Z$ has taken the role of the central charge generator.
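Explicitly, the central charge arises from the bracket
\begin{equation}
[P_{a}, G_{b}] = \omega^{-1}[P_{a}, M_{b0}] =
\omega^{-1}\delta_{ab}\, P_0 = \delta_{ab}\left( Z +
\frac{1}{2\omega^{2}}\, H\right) \longrightarrow \delta_{ab}\, Z
\end{equation}
for $\omega\rightarrow\infty$.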
We next associate to each generator of the Bargmann algebra a gauge field as follows:
\begin{equation}
A_\mu^I T_I = \tau_\mu H + e_\mu{}^{a} P_{a} + \frac{1}{2} \omega_\mu{}^{ ab}J_{ab}+ \omega_\mu{}^{a}G_{a}+ m_\mu Z\,.
\end{equation}
Using the general formula \eqref{transfgf}, the gauge fields transform as 1-forms under general coordinate transformations while under spatial rotations with parameters $\lambda^{a}{}_{b}$, Galilean boosts with parameters $\lambda^{a}$ and central charge transformations with parameter $\sigma$ they transform as follows:
\begin{align}\begin{split} \label{NCtraforules}
\delta \tau_\mu &=0 \,, \\[.1truecm]
\delta e_\mu{}^{a} &=\lambda^{a}{}_{b} \, e_\mu{}^{b} +\lambda^{a}\tau_\mu \,, \\[.1truecm]
\delta \omega_\mu{}^{ab} &= \partial_\mu\lambda^{ab} +2\,\lambda^{[a}{}_{c}\,\omega_\mu{}^{cb]} \,, \\[.1truecm]
\delta \omega_\mu{}^{a} &= \partial_\mu \lambda^{a} +\lambda^{a}{}_{b}\,\omega_\mu{}^{b}
-\omega_\mu{}^{a}{}_{c}\,\lambda^{c}\,,\\[.1truecm]
\delta m_\mu &=\partial_\mu\sigma +\lambda_{a} \,e_\mu{}^{a} \,.
\end{split}\end{align}
Using the general formula \eqref{curvatures} the curvatures corresponding to these gauge fields that transform covariantly under these symmetries are given by
\begin{eqnarray}
R_{\mu\nu}(H) &=& 2\partial_{[\mu}\tau_{\nu]}\,,\\[.1truecm]
R_{\mu\nu}{}^{a}(P) &=& 2\partial_{[\mu}e_{\nu]}{}^{a} -2 \omega_{[\mu}{}^{ab}e_{\nu]b} -2\omega_{[\mu}{}^{a}\tau_{\nu]}\,,\\[.1truecm]
R_{\mu\nu}{}^{ a b}(J) &=& 2\partial_{[\mu}\omega_{\nu]}{}^{ a b} +2 \omega_{[\mu}{}^{ a c}\omega_{\nu]}{}^{ b}{}_{c}\,,\\[.1truecm]
R_{\mu\nu}{}^{a}(G) &=& 2\partial_{[\mu}\omega_{\nu]}{}^{ a } -2 \omega_{[\mu}{}^{a}{}_{b} \omega_{\nu]}{}^{b}\,,\\[.1truecm]
R_{\mu\nu}(Z) &=& 2\partial_{[\mu}m_{\nu]} -2\omega_{[\mu}{}^{a}e_{\nu]a}\,.
\end{eqnarray}
Note that the curvature $R_{\mu\nu}(H)$ corresponding to $\tau_\mu$ does not contain any of the other gauge fields. It therefore can describe an {\it intrinsic torsion} \cite{Figueroa-OFarrill:2020gpr}. Imposing a constraint on this curvature leads to a purely geometric constraint.\,\footnote{The following discussion on the intrinsic torsion also applies when we gauge the Galilei algebra.} This is quite different from the conventional curvature constraints, to be discussed below, that will be used to solve some gauge fields in terms of the others. Instead of $R_{\mu\nu}(H)$ we will sometimes use a notation in terms of the torsion tensor
\begin{equation}
T_{\mu\nu} = \partial_{[\mu}\tau_{\nu]}\,.
\end{equation}
One may distinguish between the following three different cases:\footnote{One cannot impose $T_{0a}=0$ since such a constraint is not invariant under Galilean boost transformations.}
\begin{eqnarray}
T_{\mu\nu} =0&:&\ \ \textrm{zero torsion}\,,\label{zerotorsion}\\[.1truecm]
T_{ab}=0&:&\ \ \textrm{twistless torsional}\,,\\[.1truecm]
T_{\mu\nu} \ne 0&:&\ \ \textrm{general torsion}\,.
\end{eqnarray}
We have used here the projective inverse NR Vierbeine $\tau^\mu$ and $e^\mu{}_{a}$ defined by
\begin{equation}\label{inverseVb}
\tau_\mu\tau^\mu =1\,,\hskip .5truecm \tau_\mu e^\mu{}_{a} = \tau^\mu e_{\mu}{}^{a} = 0\,,\hskip .5truecm e_\mu{}^{a} e^\nu{}_{a} + \tau_\mu\tau^\nu = \delta_\mu{}^\nu\,,
\end{equation}
to convert curved indices into flat indices. For instance,
\begin{equation}
T_{ab} = e^\mu{}_{a} e^\nu{}_{b}T_{\mu\nu}\,.
\end{equation}
The zero torsion case defines a Newtonian spacetime with a co-dimension 1 foliation or, equivalently, a preferred time direction $t$ given by $\tau_\mu = \partial_\mu t$. Any observer traveling along a curve $\mathcal{C}$ from a time slice $\Sigma_{t_A}$ at $t=t_A$ to a time slice $\Sigma_{t_B}$ at $t=t_B$ will measure a time difference $\Delta T$ given by
\begin{equation}
\Delta T = \int_{t_A}^{t_B} dx^\mu \tau_\mu = t_B - t_A
\end{equation}
independent of the curve $\mathcal{C}$. The twistless torsional case leads to a spacetime with a hypersurface orthogonality condition on the clock function $\tau_\mu$. Such spacetimes are encountered in Lifshitz holography \cite{Christensen:2013lma}.
Using the projective inverses of the timelike and spatial Vierbein fields $\tau_\mu$ and $e_\mu{}^{a}$, the
so-called `conventional constraint' equations\,\footnote{For the use of conventional constraints in gravity and supergravity, see, e.g., \cite{Freedman:2012zz,Ortin:2015hya}.}
\begin{equation}\label{conventional}
R_{\mu\nu}{}^{a}(P) = R_{\mu\nu}(Z) =0
\end{equation}
provide precisely sufficient equations to solve the spin-connection fields for spatial rotations and Galilean boosts in terms of the other independent gauge fields. For the zero torsion case these gauge fields are solved, by doing a calculation similar to the one in the relativistic case (see after
equation~\eqref{flatcurvature}), as follows:
\begin{subequations}\label{standardomega}
\begin{align}
&{\omega}_{\mu}{}^{ab} (\tau,e,m)= -2 e_{\mu}{}^{ab} + e_{\mu c}e^{abc} - \tau_\mu m^{ab}\,,\\[.1truecm]
&{\omega}_{\mu }{}^{a}(\tau,e,m) = e_{\mu 0}{}^{a} - e_{\mu c}e_0{}^{ac} + m_\mu{}^{a} - \tau_\mu m^{a0}\,.
\end{align}
\end{subequations}
Here we have defined
\begin{equation}
e_{\mu\nu}{}^{a} = \partial_{[\mu}e_{\nu]}{}^{a}\,,\hskip 1truecm m_{\mu\nu} = \partial_{[\mu}m_{\nu]}\,.
\end{equation}
Furthermore, we have again used the inverse NR Vierbeine
to convert curved indices into flat indices. For instance
\begin{equation}
e_{\mu 0}{}^{a} = \tau^\nu e_{\mu\nu}{}^{a}\,.
\end{equation}
We note that the transformation of the dependent spin-connection fields is identical to the transformations of the independent spin-connection fields as given in equation~\eqref{NCtraforules}, i.e.
\begin{equation}
\delta {\omega}_{\mu}{}^{ab} (\tau,e,m) = \delta{\omega}_{\mu}{}^{ab}\,,\hskip 2truecm \delta {\omega}_{\mu }{}^{a}(\tau,e,m) = \delta{\omega}_{\mu }{}^{a}\,.
\end{equation}
This is due to the fact that the curvatures in the conventional constraint equations \eqref{conventional} do not transform to any of the other curvatures under spatial rotations, Galilean boosts and central charge transformations. From now on we will assume that the spin-connections are dependent fields but we will not indicate their dependence anymore. Finally, we note that, by solving the conventional constraints \eqref{conventional}, we work by definition in a second-order formulation.
So far, we have not yet discussed the $P_{ A} = (P_0,P_{a})$-transformations with parameters $(\eta, \eta^{a})$ of the gauge fields. According to the Bargmann algebra they are given by
\begin{eqnarray}\label{P2}
\delta \tau_\mu &=& \partial_\mu \eta\,,\\[.1truecm]
\delta e_\mu{}^{a} &=& \partial_\mu \eta^{a} -\omega_\mu{}^{ab}\eta_{b}\,,\\[.1truecm]
\delta m_\mu &=& - \omega_{\mu a}\eta^{a} \,.
\end{eqnarray}
To show how these P-transformations are related to general coordinate transformations, we consider the following general identity valid for any Lie algebra
with structure constants $f^I{}_{JK}$:
\begin{equation}
0 = \delta_{gct}(\xi^{\lambda})A_{\mu}{}^I +
\xi^\lambda R_{\mu\lambda}{}^I(T)
- \sum_{\substack{\{J\}}} \delta(\xi^{\lambda}A_{\lambda}{}^{J})A_{\mu}{}^I\,,
\label{veryimportantequation}
\end{equation}
where the index $I$ labels the gauge fields $A_\mu{}^I$ and corresponding curvatures $R_{\mu\nu}{}^I(T)$ of the gauge algebra. The sum in the last term is over all gauge fields. To see how this identity works, let us set, for instance,
$I=a$ for the $P_{a}$-transformations and consider the parameters
\begin{equation}
\xi^\lambda=\tau^\lambda \eta + e_{a}{}^\lambda\eta^{a}\hskip .5truecm \textrm{or}\hskip .5truecm \eta = \xi^\lambda\tau_\lambda\,,\ \eta^{a} = \xi^\lambda e_\lambda{}^{a}\,.
\end{equation}
We can then bring the contribution of $e_\mu{}^{a}$ to the sum in the last term of (\ref{veryimportantequation}) to the left-hand
side of the equation to obtain the following relation between a $P_{a}$-transformation with parameter $\eta^{a}$ and a general coordinate transformation with parameter $\xi^\lambda = e_{a}{}^\lambda\eta^{a}$:
\begin{equation}
\delta_P(\eta^{b}) e_{\mu}^{a}
= \delta_{gct}(\xi^{\lambda})e_{\mu}^{a} + \xi^{\lambda}R_{\mu\lambda}{}^{a}(P)
- \delta_M(\xi^{\lambda}\omega_{\lambda}^{ab})e_{\mu}^{a}\,. \label{Poincarepexchange}
\end{equation}
The same kind of identity holds for each gauge field that transforms under a P-transformation, i.e., in our case $\tau_\mu\,, e_\mu{}^{a}$ and $m_\mu$, see equation~\eqref{P2}: one can relate the $P$-transformation of these gauge fields to a general coordinate transformation plus other symmetries of the Bargmann algebra by setting the curvature of these gauge fields to zero. Following this rule we precisely obtain the zero torsion constraint \eqref{zerotorsion} and the two conventional constraints \eqref{conventional}. Remarkably, these constraints allow us to solve for the remaining gauge fields, i.e.~the two spin-connection fields, and hence, as dependent gauge fields, they automatically have a P-transformation that is related to a general coordinate transformation since this was already proven for all the independent gauge fields.
We note that for non-zero torsion, the conventional constraints \eqref{conventional} do not transform to each other anymore under all the symmetries of the theory. To achieve this, one needs to add to these conventional constraints additional (independent) torsion tensors with the correct transformation properties. This leads to a notion of {\it torsional} NC geometry that is discussed in \cite{wip}.
\vskip .3truecm
\noindent {\bf The Carroll algebra.}\ \ Carroll symmetries emerge if one considers an {\it ultra-relativistic} limit of general relativity which is the opposite of taking a NR limit. At first sight this seems a strange thing to do. However, Carroll symmetries have shown up in several recent investigations in different connections such as strong coupling limits of gravity \cite{Henneaux:1979vn,Henneaux:1981su}, flat space holography \cite{Duval:2014uva}, black hole horizons \cite{Donnay:2019jiz}, de Sitter cosmology and dark matter \cite{deBoer:2021jej} and even fractons \cite{Casalbuoni:2021fel,Bidussi:2021nmp}. Here, for completeness, we briefly discuss the gauging of the Carroll algebra and point out some differences with the Galilei algebra.
To define the contraction of the Poincar\'e algebra that gives rise to the Carroll algebra, we decompose the $A$-index into ${ A}=\{0,a\} $ with $a=(1,\dots,d)$, and redefine the Poincar\'e generators according to
\begin{eqnarray}
P_0 &=& \omega H \,, \label{eq: P0 redef} \\
M_{a0} &=& \omega G_{a} \,, \label{eq: J0a redef}
\end{eqnarray}
where $H$ and $G_{a}$ are the generators of time translations and boosts, respectively. The generators $P_{a}$ of space translations and $J_{ab}$ of spatial rotations are not redefined. Next, taking the limit $\omega \rightarrow \infty$ we obtain the following Carroll algebra:
\begin{eqnarray}\label{eq:Calgebra}
&&[J_{ab}, P_{c}] = 2\delta_{c[a}P_{b]}\,,\hskip 1.5truecm [J_{ab}, G_{c}] = 2\delta_{c[a}G_{b]}\,,\nonumber\\[.2truecm]
&&[J_{ab}, J_{cd}] = 4
\delta_{[a[d}\,J_{c]b]}\,,\hskip 1.3truecm [P_{a},G_{b}] = \delta_{ab}H\,.
\end{eqnarray}
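As a quick check (a \texttt{sympy} sketch in the standard
$(d+2)$-dimensional matrix representation of the Poincar\'e algebra,
here for $d=3$), the bracket $[P_{a},G_{b}] = \delta_{ab}H$ in fact
holds exactly, i.e.~already before taking the limit:
\begin{verbatim}
# verify [P_a, G_b] = delta_ab H for the Carroll redefinitions
from sympy import Matrix, zeros, eye, symbols, simplify

d = 3
eta = eye(d + 1); eta[0, 0] = -1     # mostly plus signature

def P(A):                            # translation generators
    m = zeros(d + 2); m[A, d + 1] = 1
    return m

def M(A, B):                         # Lorentz generators
    m = zeros(d + 2)
    for C in range(d + 1):
        for D in range(d + 1):
            m[C, D] = int(C == A)*eta[B, D] - int(C == B)*eta[A, D]
    return m

w = symbols('omega', positive=True)
H = P(0)/w                           # P_0 = omega H
G = [M(a, 0)/w for a in range(1, d + 1)]   # M_{a0} = omega G_a

for a in range(1, d + 1):
    for b in range(1, d + 1):
        lhs = P(a)*G[b - 1] - G[b - 1]*P(a)
        assert simplify(lhs - int(a == b)*H) == zeros(d + 2)
print('[P_a, G_b] = delta_ab H holds for all omega')
\end{verbatim}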
We next associate to each generator of the Carroll algebra a gauge field as follows:
\begin{equation}
A_\mu^I T_I = \tau_\mu H + e_\mu{}^{a} P_{a} + \frac{1}{2} \omega_\mu{}^{ ab}J_{ab}+ \omega_\mu{}^{a}G_{a}\,.
\end{equation}
Using the general formula \eqref{transfgf}, the gauge fields transform as 1-forms under general coordinate transformations while under spatial rotations with parameters $\lambda^{a}{}_{b}$ and Carroll boosts with parameters $\lambda^{a}$ they transform as follows:
\begin{eqnarray} \label{bossymm1}
\delta \tau_\mu &=& e_\mu{}^{a}\lambda_{a}\,, \nonumber \\ [.1truecm]
\delta e_\mu{}^{a} &=& \lambda^{a}{}_{b} e_\mu{}^{b}\,, \nonumber \\[.1truecm]
\delta \omega_\mu{}^{ab} &=& (D_\mu\lambda)^{ab}\,, \\[.1truecm]
\delta \omega_\mu{}^{a} &=& (D_\mu\lambda)^{a} + \lambda^{a}{}_{b} \omega_{\mu }{}^{b}
\nonumber \,,
\end{eqnarray}
where $D_\mu$ is the covariant derivative with respect to spatial rotations.
Note that, in contrast to the Galilei algebra, $\tau_\mu$ transforms under a boost transformation while $e_\mu{}^{a}$ is invariant. Another important difference with the Galilei algebra is that the Carroll algebra does not allow for a central extension.
Unlike the Galilei or Bargmann algebra, all Carroll curvatures contain a spin-connection field. A priori such curvatures are part of conventional constraints, needed to solve for the spin-connection fields, and therefore cannot describe an intrinsic torsion like the tensor $T_{\mu\nu}$ in the Galilei and Bargmann case. However, given the $R_{\mu\nu}{}^{a}(P)$ curvature
\begin{equation}
R_{\mu\nu}{}^{a}(P) = e_{\mu\nu}{}^{a} - \omega_{[\mu}{}^{ab}e_{\nu]b}\,,\hskip 1.5truecm e_{\mu\nu}{}^{a} = \partial_\mu e_\nu{}^{a} - \partial_\nu e_\mu{}^{a}\,,
\end{equation}
it turns out that the following boost-invariant projection $K^{ab} = K^{ba}$ does not contain any spin-connection field:
\begin{equation}
K^{ab} = \tau^\mu e^{\nu(a}R_{\mu\nu}{}^{b)}(P) = \tau^\mu e^{\nu(a}e_{\mu\nu}{}^{b)}\,.
\end{equation}
Using that $\tau^\mu e_\mu{}^{a}=0$ one can show that $K^{ab}$ is nothing other than the spatial components of the extrinsic curvature:
\begin{equation} \label{defK}
K_{ab} = e_{a}{}^\mu e_{b}{}^\nu K_{\mu\nu}\,,\hskip 1truecm K_{\mu\nu} = \tau^\lambda \partial_\lambda h_{\mu\nu} + \partial_\mu \tau^\lambda h_{\lambda \nu} + \partial_\nu \tau^\lambda h_{\lambda \mu}
\end{equation}
with $h_{\mu\nu} = e_\mu{}^{a} e_\nu{}^{b}\delta_{ab}$.
\subsection{Taking limits}
\label{subsec:limi}
The aim of this section is to define the limits of general relativity that correspond to the non-Lorentzian algebras we defined in the previous section as Wigner-In\"on\"u contractions of the Poincar\'e algebra. Our main target is the Bargmann algebra but, for completeness, we will also shortly discuss the limits corresponding to the Galilei and Carroll algebra leading to Galilei and Carroll gravity, respectively.
Generically, to define a limit in all three cases, we will perform the following two steps:
\begin{itemize}
\item We make an invertible field redefinition, writing all relativistic fields in terms of the would-be fields of the limiting theory and a dimensionless contraction parameter $\omega$. The invertibility implies that the number of fields before and after taking the limit remains the same. The would-be limiting fields only become the true limiting fields after taking the NR limit in the second step. Before this step we are just rewriting the general relativity theory.
\item Either in the action or in the equations of motion we take the limit $\omega \to \infty$. We do not allow divergent terms in the action. A noteworthy feature of several of the limits that we will be taking is that they are based upon a cancellation of the leading divergence by different contributions. The limiting action is given by all terms of order $\omega^0$. When taking the limit of the equations of motion, the resulting equations are given by the terms of leading order in $\omega$. Independent of this we will also take the limit of the transformation rules.
\end{itemize}
\vskip .05truecm
One should distinguish taking limits from making expansions. In an expansion each field is expanded as an infinite sum of terms of increasing powers of $\omega^{-1}$. The leading terms in such an expansion do not necessarily correspond to a redefinition defining a limit. For instance, in a post-Newtonian expansion of general relativity one does not introduce the additional field $M_\mu$. Instead, $m_\mu$ occurs as the sub-leading term in an expansion of $E_\mu{}^0$. Some results about limits can, however, be read off from making an expansion. For instance, the first leading term in the expansion in $\omega$ of a relativistic Lagrangian is always invariant under the corresponding non-Lorentzian symmetry \cite{Batlle:2016iel,Bergshoeff:2017btm}.
\vskip .2truecm
\noindent {\bf Galilei gravity.}\ \ We first consider the case of Galilei gravity.
Using a first-order formulation, an invariant action for Galilei gravity can be obtained by taking a specific NR limit of the Einstein-Hilbert action. To define this limit, we redefine the gauge fields and symmetry parameters with a dimensionless parameter $\omega$ as follows \cite{Bergshoeff:2017btm}:
\begin{eqnarray}
E_\mu^0 &=& \omega \tau_\mu \,, \;\;\quad \Omega^{0a}_\mu \;=\; \omega^{-1} \omega^{a}_\mu \,, \label{eq: Galrescal1}\\
E_\mu^{a} &=& e^{a}_\mu \,, \quad \;\;\;\, \; \Omega^{ab}_\mu \;=\; \omega^{ab}_\mu \,, \label{eq: Galrescal2}\\
\Lambda^{0a} &=& \omega^{-1} \lambda^{a} \,, \;\; \Lambda^{ab} \;=\; \lambda^{ab} \,.\label{eq: Galrescal4}
\end{eqnarray}
Substituting the above field redefinitions into the Einstein-Hilbert action,
redefining Newton's constant $G_N = \omega G_G$ and taking the $\omega\rightarrow\infty$ limit we end up with the following Galilei action
\begin{equation}
S_{\text{Gal}}= - \frac{1}{2\kappa} \int e R_{\mu\nu}{}^{ab}(J) e^\mu_{a} e^\nu_{b}\,,\label{eq: ActionGal}
\end{equation}
where $\kappa = 8\pi G_G$ and $e={\rm det}\, (\tau_\mu,e_\mu{}^{a})$ is the non-relativistic determinant. The projective inverses $\tau^\mu$ and $e^\mu{}_{a}$ transform under the Galilei boosts and spatial rotations as follows:
\begin{eqnarray}
\delta \tau^\mu &=& - \lambda^{a} e_{a}^\mu \,, \hskip 2truecm
\delta e^\mu_{a} = \lambda^{ab}e_{b}^\mu \,.
\end{eqnarray}
One may verify that the Galilei action \eqref{eq: ActionGal} is not only Galilei invariant but it also has an emergent local scaling symmetry given by
\begin{eqnarray}
\tau_\mu & \rightarrow & \lambda(x)^{-(d-2)} \tau_\mu \,,\hskip 1truecm
e^{a}_\mu \rightarrow \lambda(x) e^{a}_\mu \,,\label{eq: rescale}
\end{eqnarray}
where $\lambda(x)$ is an arbitrary function. This emergent local scaling symmetry implies that there is a so-called `missing' equation of motion that
does not follow from the variation of the Galilei action \eqref{eq: ActionGal}. This missing equation of motion can be obtained by taking the limit of the relativistic equations of motion. We will encounter a similar situation when discussing NC gravity below.
For any $d >2$ the equations of motion that follow from the variation of the Galilei action \eqref{eq: ActionGal} lead
to the following constraint on the geometry
\begin{equation}
T_{ab} = e^\mu_{a} e^\nu_{b} T_{\mu\nu} = 0\,. \label{eq: RHab is zero}
\end{equation}
This constraint means that this geometry has twistless torsion.
For $d>2$ the equations of motion can be used to solve for the spatial rotation spin connection $\omega_\mu{}^{ab}$ as
\begin{equation}
\omega_\mu^{ab} = \tau_{\mu}A^{a b} + e_{\mu c} \left(e^{\rho [a} e^{b]\nu}\partial_{\rho}{e^{c}_{\nu}} + e^{\rho[a} e^{c]\nu}\partial_{\rho}{e^{b}_{\nu}} - e^{\rho[b} e^{c]\nu}\partial_{\rho}{e^{a}_{\nu}}\right)
+ \frac{4}{d-2} e^{[a}_{\mu} T^{b]0}\,, \label{eq: Gal solomegaab}
\end{equation}
except for $A^{ab}$ which is an undetermined anti-symmetric tensor component of $\omega_\mu{}^{ab}$.
In the second order formulation the constraint \eqref{eq: RHab is zero} arises from the variation with respect to $A^{ab}$. Hence, we can interpret $A^{ab}$ as a Lagrange multiplier. Indeed, in the case $d>2$, plugging expression \eqref{eq: Gal solomegaab} into the action \eqref{eq: ActionGal} to obtain it in a second order formulation leads to
\begin{equation}
S_{\text{Gal}}= - \frac{1}{2\kappa} \int e \left( \left. R_{\mu\nu}{}^{ab}(J) e^\mu_{a} e^\nu_{b} \right|_{A^{ab}=0} + A^{ab}T_{ab} \right) \,.\label{eq: ActionGal sec order}
\end{equation}
This makes manifest the fact that the variation with respect to $A^{ab}$ of the second order action in equation \eqref{eq: ActionGal sec order} reproduces the constraint \eqref{eq: RHab is zero}.
The case $d=2$ is special. In that case we may write $\omega_\mu{}^{ab} =\epsilon^{ab}\omega_\mu$ and it turns out that this $\omega_\mu$ cannot be determined from the field equations, i.e.~there is no second-order formulation. Also, in contrast to the $d>2$ case, the equations of motion imply a stronger geometrical constraint, namely the zero torsion constraint
\begin{equation}
T_{\mu\nu} = 0 \,. \label{eq: RH0a is zero}
\end{equation}
Using the identity $e\epsilon^{ab}e_a^\mu e_b^\nu = 2\epsilon^{\mu\nu\rho}\tau_\rho$, which is valid for $d=2$, the Galilean action \eqref{eq: ActionGal} can be rewritten as
\begin{equation}
S_{\text{Gal 3D}}= - \frac{1}{2\kappa} \int \epsilon^{\mu\nu\rho} \tau_\mu\partial_\nu\omega_\rho\,.\label{eq: ActionGal 3D}
\end{equation}
This form of the action makes manifest that its variation with respect to $\omega_\mu$ precisely reproduces the zero torsion constraint
\eqref{eq: RH0a is zero}. We note that the Galilei algebra in $d=2$ only allows for a degenerate invariant bilinear form. The above action corresponds to the Chern-Simons action for the Galilei algebra with this degenerate bilinear form. The degeneracy of the form explains why not all fields occur in the action.
\vskip .3truecm
\noindent {\bf NC gravity.}\ \ We will now derive the equations of motion describing pure Newton-Cartan (NC) gravity in $d+1$ dimensions by taking a specific non-relativistic (NR) limit of general relativity.
In the second-order formulation that we are using here, we need to express the relativistic Vierbein field $E_\mu{}^{ A}$ into the would-be non-relativistic fields of NC gravity, that we described in the previous subsection,
in an invertible way using a contraction parameter. Inspired by the standard Wigner-In\"on\"u contraction of the Poincar\'e algebra we first write
\begin{equation}\label{limitG}
E_\mu{}^{ 0} = \omega\tau_\mu\,,\hskip 2truecm E_\mu{}^{{ a}} = e_\mu{}^{a}\,,
\end{equation}
where we have decomposed $A = (0,a)$, $\omega$ is a dimensionless parameter, $\tau_\mu$ is the clock function and $e_\mu{}^{a}$ are the rulers.
It is clear that this limit cannot give rise to NC gravity
because in the NR case energy is not the same as mass and hence we need two gauge fields, one for energy and one for mass, that in the previous subsection we called $\tau_\mu$ and $m_\mu$, respectively. Indeed, the NR limit defined by equation~\eqref{limitG} gives rise to the Galilei gravity we discussed above. The additional mass operator gives rise to a central extension of the Galilei algebra called the Bargmann algebra. We saw in the previous subsection that, in order to obtain this Bargmann algebra from the Wigner-In\"on\"u contraction of a relativistic algebra, we must extend the Poincar\'e algebra with an additional U(1) generator. In terms of gauge fields this implies that we should extend general relativity with an additional gauge field $M_\mu$ before taking the limit.\,\footnote{ We note that in a Post-Newtonian approximation of general relativity there is no need to add this extra gauge field $M_\mu$ since the lowest order terms in such an approximation do not need to constitute an invertible field redefinition.} In order not to extend general relativity with extra degrees of freedom we impose by hand that the field equation of $M_\mu$ is given by the following zero flux condition\,\footnote{Here and in the following we will indicate the equation of motion of a field with square brackets.}
\begin{equation}\label{additional}
[M]_{\mu\nu} = \partial_\mu M_\nu - \partial_\nu M_\mu =0\,.
\end{equation}
Note that, without extending general relativity any further, this field equation does not follow from a relativistic action and therefore the specific limit we are considering can only be taken at the level of the equations of motion, i.e.~the Einstein equations.
Given the extended general relativity theory, we consider the following redefinitions \cite{Bergshoeff:2015sic}:
\begin{equation}
E_\mu{}^{ 0} = \omega\tau_\mu + \frac{\alpha+1}{\omega} m_\mu\,,\hskip 1truecm E_\mu{}^{{ a}} = e_\mu{}^{a}\,,\hskip 1truecm M_\mu = \omega\tau_\mu + \frac{\alpha}{\omega} m_\mu \,,
\end{equation}
where $\alpha$ is a real parameter related to the following field redefinition:
\begin{equation}
\tau_\mu \rightarrow \tau_\mu +\frac{\alpha}{\omega^2}m_\mu\,.
\end{equation}
From now on, we will take $\alpha=0$:
\begin{equation}\label{a=0}
E_\mu{}^{ 0} = \omega\tau_\mu + \frac{1}{\omega} m_\mu\,,\hskip 1truecm E_\mu{}^{{ a}} = e_\mu{}^{a}\,,\hskip 1truecm M_\mu = \omega\tau_\mu\,.
\end{equation}
Note that the relativistic inverse Vielbeine are redefined as follows
\begin{equation}\label{inversea=0}
E^\mu_{0} = \frac{1}{\omega}\tau^\mu + \cdots \,,\hskip 1.5truecm E^\mu{}_{a} = e^\mu{}_{a} + \cdots \,,
\end{equation}
where the NR inverse Vielbeine $\tau^\mu$ and $e^\mu{}_{a}$ were defined in equation~\eqref{inverseVb}.
We have only given here the leading order redefinitions. The lower order dotted terms in \eqref{inversea=0} do not contribute to the final answer when taking the NR limit.
As a simple example of how the limit works we consider the following Lagrangian describing a relativistic particle of mass $M$:
\begin{equation}
S = -M\int d\tau\,\Big(\sqrt{-E_\mu{}^A{\dot X}^\mu E_\nu{}^B {\dot X}^\nu \eta_{AB}} - M_\mu{\dot X}^\mu\Big)\,.
\end{equation}
The last term represents a coupling of the gauge field $M_\mu$ to the particle via a Wess-Zumino term. Substituting the field redefinitions \eqref{a=0} into this action and redefining the mass $M$ with $M=\omega m$, we obtain, after taking the limit $\omega \to \infty$ and expanding the square root, the following Lagrangian describing the coupling of a non-relativistic particle of mass $m$ with embedding coordinates $X^\mu(\tau)$ to a NC background \cite{DePietri:1994je}:
\begin{equation}
S = \frac{m}{2} \int d\tau\,\bigg\{ \frac{e_\mu{}^{a}{\dot X}^\mu e_\nu{}^{b} {\dot X}^\nu\delta_{ab}}{\tau_\rho {\dot X}^\rho} - 2 m_\mu{\dot X}^\mu \bigg\}\,.
\end{equation}
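For the reader's convenience, we display the intermediate step of this computation (a sketch, assuming $\tau_\rho\dot X^\rho>0$ so that the square root may be expanded for large $\omega$):
\begin{equation*}
\sqrt{\Big(\omega\,\tau_\mu\dot X^\mu + \frac{1}{\omega}\, m_\mu\dot X^\mu\Big)^2 - e_\mu{}^{a}\dot X^\mu e_\nu{}^{b}\dot X^\nu\delta_{ab}}
= \omega\,\tau_\mu\dot X^\mu + \frac{1}{\omega}\bigg(m_\mu\dot X^\mu - \frac{e_\mu{}^{a}\dot X^\mu e_\nu{}^{b}\dot X^\nu\delta_{ab}}{2\,\tau_\rho\dot X^\rho}\bigg) + \mathcal{O}(\omega^{-3})\,.
\end{equation*}
Multiplying by $-M=-\omega m$, the divergent term $-\omega^2 m\,\tau_\mu\dot X^\mu$ is cancelled by the Wess--Zumino contribution $+M M_\mu\dot X^\mu = +\omega^2 m\,\tau_\mu\dot X^\mu$, and the finite remainder is precisely the Lagrangian just displayed.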
One can show that this action, due to the second term, is invariant under Galilean boost transformations. For a flat spacetime with $\tau_\mu = \delta_{\mu,0}\,, e_\mu{}^{a} = \delta_\mu{}^{a}=\delta_\mu{}^i$ and $m_\mu = \partial_\mu c$ the action reads
\begin{equation}
S_{\rm flat\ spacetime} = \frac{m}{2} \int d\tau\,\bigg\{ \frac{{\dot X}^i {\dot X}^j \delta_{ij}}{{\dot t}} - 2 {\dot c} \bigg\}\,,
\end{equation}
which describes a free massive Galilei particle, the total derivative $\dot c$ being precisely the term needed for invariance under Galilean boosts. This is in accordance with
equation~\eqref{eq:BNHLag} for $R\to \infty$.
We now consider the relativistic Einstein equations. The variation of the Einstein-Hilbert action reads
\begin{equation}
\delta S = - \frac{1}{8 \pi {\rm G}_N} \int d^{d+1} x\, E \,\delta E_\mu{}^{ A} E^{\mu B} [G]_{ A B}\,,
\end{equation}
with
\begin{equation}\label{Einsteineqs}
[G]_{ A B} = R_{ A B}(\Omega) - \frac{1}{2}\eta_{ A B} R(\Omega) =0\,.
\end{equation}
This field equation is symmetric in the indices $A$ and $B$ since the Ricci tensor is symmetric\,\footnote{This follows from inserting into the Bianchi identity for the curvature corresponding to the spacetime translation generator the (conventional) constraint that this curvature is zero.}
\begin{equation}\label{symm}
R_{ A B}(\Omega) = R_{ B A}(\Omega)\,.
\end{equation}
Performing the field redefinitions \eqref{a=0} and \eqref{inversea=0} we find
\begin{subequations}
\label{eq:screscale}
\begin{align}
\Omega_{\mu}{}^{ab}&=\omega^{2}\, \accentset{(2)}{\omega}_{\mu}{}^{ab}+
\accentset{(0)}{\omega}_{\mu}{}^{ab} + \cdots\,,\\[.1truecm]
\Omega_{\mu}{}^{0a}&=\omega\ \accentset{(1)}{\omega}_{\mu}{}^{a}+\omega^{-1}\,\accentset{(-1)}{\omega}_{\mu}{}^{a} + \cdots\,,
\end{align}
\end{subequations}
where the $\omega$'s denote expansion coefficients of the relativistic spin-connection fields $\Omega$. The special expansion coefficients $\accentset{(0)}{\omega}_{\mu}{}^{ab}$ and $\accentset{(-1)}{\omega}_{\mu}{}^{a}$ will serve as spin-connection fields in the non-relativistic case and will be denoted by
\begin{equation}
\accentset{(0)}{\omega}_{\mu}{}^{ab} = \omega_\mu{}^{ab}\,,\hskip 2truecm \accentset{(-1)}{\omega}_{\mu}{}^{a} = \omega_\mu{}^{a}\,.
\end{equation}
We find that the different expansion coefficients are given by
\begin{subequations}\label{eq:leading}
\begin{align}
&\accentset{(2)}{\omega}_\mu{}^{ab}= -\tau_\mu T^{ab}\,,\\[.1truecm]
&{\omega}_{\mu}{}^{ab} = -2 e_{\mu}{}^{ab} + e_{\mu c}e^{abc} - \tau_\mu m^{ab}\,,\\[.1truecm]
&\accentset{(1)}{\omega}_\mu{}^{a}= e_{\mu b}T^{ba}-2\,\tau_{\mu }T^{a0}\,, \\[.1truecm]
&{\omega}_{\mu }{}^{a} = e_{\mu 0}{}^{a} - e_{\mu c}e_0{}^{ac} + m_\mu{}^{a} - \tau_\mu m^{a0}\,.
\end{align}
\end{subequations}
Here we have defined
\begin{equation}
e_{\mu\nu}{}^{a} = \partial_{[\mu}e_{\nu]}{}^{a}\,,\hskip 1truecm m_{\mu\nu} = \partial_{[\mu}m_{\nu]}\,.
\end{equation}
Like before, we have used the inverse NR Vielbeine
to convert curved indices into flat indices.
We now substitute the expansions \eqref{eq:screscale} of the relativistic spin-connection fields into the Einstein equations \eqref{Einsteineqs} and the expansion \eqref{a=0} of the relativistic vector field $M_\mu$ into the additional equation of motion \eqref{additional}.
The leading terms in the expressions for the relativistic spin-connection fields are all proportional to the torsion tensor $T_{\mu\nu}$. Inserting these terms into the Einstein equations leads to leading order and sub-leading order terms that are also proportional to $T_{\mu\nu}$. On the other hand, looking at the leading order term of the additional equation of motion \eqref{additional} we already conclude that the torsion is zero:
$T_{\mu\nu} = 0$\,. Substituting this zero torsion constraint into the expanded Einstein equations, we find that the leading order terms of these equations are no longer given by the terms proportional to the vanishing torsion but instead by terms that involve the NR fields $\omega_\mu{}^{ab}$ and $\omega_\mu{}^{a}$. We thus find that the different components of the relativistic Einstein tensor $[G]_{ A B}$ give rise to the following NC equations of motion:
\begin{eqnarray}
&&[G]_{00}\,:\hskip .2 truecm R_{0a}{}^{a}(G)=0\,,\label{NC1}\\[.1truecm]
&&[G]_{0a}\,:\hskip .22 truecm R_{0ca}{}^{c}(J) = 0\,,\label{NC2}\\[.1truecm]
&&[G]_{ab}\,:\hskip .23truecm R_{acb}{}^{c}(J) = 0\label{NC3}\,,
\end{eqnarray}
where the curvatures for the Galilean boosts and spatial rotations have been defined in the previous subsection.
To derive these equations of motion, we have made use of the identity
\begin{equation}
R_{ab}{}^{b}(G) = R_{0b}{}^{b}{}_{a}(J)\,,
\end{equation}
which follows from taking the NR limit of the relativistic identity \eqref{symm}.
In a flat Newtonian spacetime we have
\begin{equation}\label{restrictions}
\tau_\mu = \delta_\mu{}^0\,,\hskip 1truecm e_\mu{}^{a} = \delta_\mu{}^{a}\,,\hskip 1truecm m_\mu = \tau_\mu\, \Phi\,,
\end{equation}
where $\Phi$ is the (time-independent) Newton potential. The only non-trivial spin-connection field for this special case is given by $\omega_0{}^{a} = \partial^{a}\Phi$ and the only non-trivial equation of motion reads
\begin{equation}\label{Poisson}
R_{0a}{}^{a}(G) = \partial_{a}\omega_0{}^{a} = \partial_{a}\partial^{a}\Phi =0\,,
\end{equation}
thus recovering the well-known sourceless Laplace's equation for the Newton potential. The restrictions \eqref{restrictions} can be seen as gauge-fixing conditions for the diffeomorphisms restricting to frames with constant acceleration only. The NC equations \eqref{NC1}-\eqref{NC3} can then be viewed as the extension of the Laplace equation \eqref{Poisson} to arbitrary frames.
\vskip .3truecm
\noindent {\bf Carroll gravity}\ \ Finally, we consider the case of Carroll gravity.\,\footnote{For other recent work on Carroll gravity, see
\cite{Henneaux:2021yzg,Hansen:2021fxi,Perez:2021abf,Perez:2022jpr}.}
We will derive an invariant action for Carroll gravity by taking the ultra-relativistic limit of the Einstein-Hilbert action. To define this limit, we redefine the gauge fields and symmetry parameters with a dimensionless parameter $\omega$ as follows \cite{Bergshoeff:2017btm}:\,\footnote{For a different approach to Carroll gravity, see \cite{Hartong:2015xda}.}
\begin{eqnarray}
E_\mu^0 &=& \omega^{-1} \tau_\mu \,, \quad \Omega^{0a}_\mu \;=\; \omega^{-1} \omega^{a}_\mu \,, \label{eq: rescal1}\\
E_\mu^{a} &=& e^{a}_\mu \,, \quad \quad \;\;\, \Omega^{ab}_\mu \;=\; \omega^{ab}_\mu \,, \label{eq: rescal2}\\
\Lambda^{0a} &=& \omega^{-1} \lambda^{a} \,, \quad \;\;\, \Lambda^{ab} \;=\; \lambda^{ab} \,.\label{eq: rescal4}
\end{eqnarray}
Substituting the field redefinitions (\ref{eq: rescal1}) and (\ref{eq: rescal2}) into the Einstein-Hilbert action, redefining Newton's constant as $G_N =\omega^{-1} G_C$ and taking the $\omega\rightarrow\infty$ limit, we end up with the following Carroll action\footnote{This limit shows similarities with the strong coupling limit considered in \cite{Henneaux:1979vn}, \cite{Henneaux:1981su}. Note that both limits lead to a theory with a Carroll-invariant vacuum solution. This suggests that, although looking different at first sight, the result of the two limits might be the same up to field redefinitions.}
\begin{equation}
S_{\text{Car}}= - \frac{1}{16\pi G_C} \int e \left(2\tau^\mu e^\nu_{a} R(G)_{\mu\nu}{}^{a}+e^\mu_{a} e^\nu_{b} R(J)_{\mu\nu}{}^{ab}\right)\,. \label{eq: ActionCar}
\end{equation}
Here $e={\rm det}\, (\tau_\mu,e_\mu{}^{a})$. The projective inverses $\tau^\mu$ and $e^\mu{}_{a}$ transform under boosts and spatial rotations as follows:
\begin{eqnarray}
\delta \tau^\mu =0\,,\hskip 1.5truecm
\delta e^\mu_{a} = - \lambda^{a}\tau^\mu + \lambda^{ab}e_{b}^\mu \,. \label{eq: varinvviel Car}
\end{eqnarray}
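It is instructive to sketch the $\omega$-counting behind this limit (schematically; signs and numerical factors are fixed by the conventions above). The redefinitions (\ref{eq: rescal1}) and (\ref{eq: rescal2}) imply
\begin{equation*}
E = \omega^{-1}\, e\,,\qquad E^\mu{}_{0} = \omega\,\tau^\mu\,,\qquad
R_{\mu\nu}{}^{0a}(\Omega) = \omega^{-1}\, R_{\mu\nu}{}^{a}(G)\,,\qquad
R_{\mu\nu}{}^{ab}(\Omega) = R_{\mu\nu}{}^{ab}(J) + \mathcal{O}(\omega^{-2})\,,
\end{equation*}
so that, with $1/G_N = \omega/G_C$, the boost term of the Einstein-Hilbert Lagrangian scales as $\omega\cdot\omega^{-1}\cdot\omega\cdot\omega^{-1}=\omega^{0}$ and the spatial rotation term as $\omega\cdot\omega^{-1}=\omega^{0}$, while all remaining terms are suppressed by at least two powers of $\omega$ and drop out in the limit.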
The field equations corresponding to the first-order Carroll action \eqref{eq: ActionCar}
can be used to solve for the spin connections
\begin{eqnarray}
\omega_{\mu}{}^{a} &=& \tau_\mu \tau^\nu e^{\rho a} \partial_{[\nu}\tau_{\rho]} + e^{\nu a}\partial_{[\mu}\tau_{\nu]} + S^{ab} e^{b}_\mu \,, \label{eq: solomegaa}\\[.1truecm]
\omega_\mu{}^{ab} &=& - 2 e^{\rho [a}\partial_{[\mu} e_{\rho]}^{b]} + e_{\mu c} e^{\rho a} e^{\nu b}\partial_{[\rho} e_{\nu]}^{c} \,,\label{eq: solomegaab}
\end{eqnarray}
except for a symmetric component $S^{ab}=S^{(ab)} = e^{\mu(a} \omega_\mu^{b)}$ of the boost spin connection $\omega_\mu{}^{a}$ which remains undetermined.
Plugging the dependent expressions for the spin connections \eqref{eq:
solomegaa} and \eqref{eq: solomegaab} into the Carroll action
\eqref{eq: ActionCar} we obtain
\begin{equation}\label{eq: Car act S lag multi}
S_{\text{Car}}= - \frac{1}{16\pi G_C} \int e \left(
\left.2\tau^\mu e^\nu_{a}
R(G)^{a}{}_{\mu\nu}\right|_{S^{ab}=0}+e^\mu_{a} e^\nu_{b}
R(J)^{ab}{}_{\mu\nu} +
2K_{ab}S^{ab}-2\delta^{ab}\delta_{cd}K_{ab}S^{cd}\right)\,.
\end{equation}
From this expression of the action it follows that the equation of
motion for $S^{ab}$ implies that $K_{ab}=0$. In other words, we
conclude that $S^{ab}$ is actually a Lagrange multiplier that
enforces the intrinsic torsion constraint $K_{ab}=0$ with $K_{ab}$
defined in equation~\eqref{defK}. This corresponds to the totally
geodesic Carroll structure mentioned in section 4.3.4.
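Explicitly, varying \eqref{eq: Car act S lag multi} with respect to $S^{ab}$ gives
\begin{equation*}
K_{ab} - \delta_{ab}\,\delta^{cd}K_{cd} = 0\,.
\end{equation*}
Taking the trace of this equation yields $(1-d)\,\delta^{cd}K_{cd}=0$, so that for $d\neq 1$ the trace of $K_{ab}$ vanishes and, substituting this back, one indeed finds $K_{ab}=0$.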
Finally, we note that in $d=2$ the Carroll algebra can be equipped
with a non-degenerate, invariant bilinear form and as a consequence it
is possible to write down a Chern-Simons action for the Carroll
algebra. This Chern-Simons action is precisely the same as the action
given above for $d=2$.
\section{Field Theories}
In this section we will discuss the non-Lorentzian (NL) field theories
for a complex and real massive spin-0 particle, a massive spin-1/2
particle and a massless spin-1 particle. There are two approaches
here: either one takes the NL limit of the relativistic field theory
in a flat Minkowski background and couples the resulting theory to NL
gravity afterwards, or one first couples the model under consideration
to general relativity and then takes the NL limit of the matter
coupled to gravity system, using the NL limits we derived in the
previous section. We will opt for this second option. In particular,
for spin-0, we will discuss the Galilei, Bargmann and Carroll limits
while for spin-1/2 and spin-1 we will only discuss the Bargmann
limit.
\subsection{Real Massive Spin-0}
We first discuss the Galilei and Carroll limits of a real massive scalar field. This leads to the following four cases:
\vskip .3truecm
\noindent{\bf spin-0 Galilei.}\ \ We consider the following Lagrangian for a real scalar field:
\begin{align}\label{case1}
E^{-1} \,\mathcal{L}_{\rm rel} = +\frac12\, E^{\mu 0} E^{\nu 0} \partial_\mu \Phi \partial_\nu \Phi - \frac12\, E^{\mu a} E^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi -\frac12 \epsilon M^2 \Phi^2\,,
\end{align}
where $\epsilon = +1$ and $\epsilon = -1$ corresponds to a massive particle and a tachyon, respectively. Performing the Galilei redefinitions \eqref{eq: Galrescal1} and \eqref{eq: Galrescal2}, we obtain
\begin{align}
e^{-1} \,\mathcal{L}_{\rm rel} = +\frac{1}{2\omega^2} \tau^\mu\tau^\nu \partial_\mu \Phi \partial_\nu \Phi -\frac12\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi -\frac12 \epsilon M^2 \Phi^2\,,
\end{align}
where $e = \det (\tau_\mu,e_\mu{}^{a})$ and where we have ignored an overall power of $\omega$.\,\footnote{Such an overall power can be cancelled by a further redefinition of the fields.}
There are now two ways to proceed. First, by choosing $\epsilon=-1$ and taking the limit $\omega \to \infty$, we obtain the following `magnetic' Galilei Lagrangian:
\begin{align}
e^{-1} \,\mathcal{L}_{\rm magnetic\ Galilei} = - \frac12\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi +\frac12 M^2 \Phi^2\,.
\end{align}
The flat spacetime Lagrangian and the corresponding transformation rules are given by
\begin{align}\label{case1bflat}
\mathcal{L}_{\rm magnetic\ Galilei}(flat\ spacetime) = - \frac12\, (\partial_{i}\Phi)^2 +\frac12 M^2 \Phi^2\,
\end{align}
and
\begin{equation}
\delta \Phi = \Big(\zeta\,\partial_t +\xi^{i}\partial_{i} -\lambda^{i}\,t\,\partial_{i} -x^{j}\lambda^{i}{}_{j}\,\partial_{i}
\Big)\Phi \,.
\end{equation}
This limit was considered in the context of taking the limit of a tachyonic particle Lagrangian \cite{Batlle:2017cfa} where it leads to the massless Galilei particle of Souriau with `colour' $M$ \cite{Souriau}.
A second option is to first redefine $\Phi = \omega\phi, M = \omega^{-1}m$ and obtain the following Lagrangian:
\begin{align}\label{case1c}
e^{-1} \,\mathcal{L}_{\rm rel} = +\frac12\, \tau^\mu\tau^\nu \partial_\mu \phi \partial_\nu \phi -\frac12\, \omega^2\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \phi \partial_\nu \phi -\frac12\epsilon m^2 \phi^2\,.
\end{align}
To deal with the quadratic divergence in the second term, we use a result of \cite{Gomis:2005pg} and rewrite the Lagrangian, introducing auxiliary fields $ \chi^{a}$, as follows:\,\footnote{The general expression is that for each $X$, the quadratic divergence $\omega^2 X^2$ can be rewritten, introducing an auxiliary field $\chi$, as $-\frac{1}{\omega^2} \chi^2 -2 \chi X$.}
\begin{align}\label{case1d}
e^{-1} \,\mathcal{L}_{\rm rel} = +\frac12\, \tau^\mu\tau^\nu
\partial_\mu \phi \partial_\nu \phi + \frac1{2\omega^2}\chi^{a}\chi_{a} + \chi^{a}e^\mu{}_{a}\partial_\mu\phi -\frac12\epsilon m^2 \phi^2\,.
\end{align}
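As a quick consistency check, the field equation of $\chi^{a}$ that follows from \eqref{case1d} is
\begin{equation*}
\frac{1}{\omega^2}\,\chi_{a} + e^\mu{}_{a}\partial_\mu\phi = 0\,,
\end{equation*}
so that eliminating $\chi_{a} = -\omega^2\, e^\mu{}_{a}\partial_\mu\phi$ reproduces the divergent term $-\frac12\,\omega^2\, e^{\mu a}e^{\nu b}\delta_{ab}\,\partial_\mu\phi\,\partial_\nu\phi$ of \eqref{case1c}.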
Next, choosing $\epsilon = +1$ and taking the limit $\omega\to \infty$, the auxiliary fields $\chi^{a}$ become Lagrange multipliers and we obtain the following Lagrangian:
\begin{align}
e^{-1} \,\mathcal{L}_{\rm electric\ Galilei} = +\frac12\, \tau^\mu\tau^\nu \partial_\mu \phi \partial_\nu \phi + \chi^{a}e^\mu{}_{a}\partial_\mu\phi -\frac12 m^2 \phi^2\,.
\end{align}
The flat spacetime Lagrangian and the corresponding transformation rules are given by
\begin{align}
\mathcal{L}_{\rm electric\ Galilei}(flat\ spacetime) = +\frac12\, (\partial_t \phi)^2 + \chi^{i}\partial_{i}\phi -\frac12 m^2 \phi^2
\end{align}
and
\begin{equation}
\delta \phi = \Big(\zeta\,\partial_t +\xi^{i}\partial_{i} -\lambda^{i}\,t\,\partial_{i} -x^{j}\lambda^{i}{}_{j}\,\partial_{i}
\Big)\phi \,,\hskip 1truecm \delta \chi^i = -\lambda^j\, t\,\partial_j\chi^i
+\lambda^i(\partial_t\phi)\,.
\end{equation}
\vskip .5truecm
\noindent{\bf spin-0 Carroll.}\ \ This case was recently considered in \cite{deBoer:2021jej} in connection with dark matter and inflation and in
\cite{Henneaux:2021yzg} using field theory in a Hamiltonian formulation. The two types of Carroll limits considered here have also been considered in the context of $p$-brane sigma models using a Lagrangian formulation \cite{Bergshoeff:2020xhv}. The fact that there are two types of Carroll limits also follows from the duality between the Galilei and Carroll symmetries considered in \cite{Barducci:2018wuj}.
We consider the same Lagrangian for a real scalar field as in the Galilei case:
\begin{align}
E^{-1} \,\mathcal{L}_{\rm rel} = +\frac12\, E^{\mu 0} E^{\nu 0} \partial_\mu \Phi \partial_\nu \Phi - \frac12\, E^{\mu a} E^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi -\frac12 \epsilon M^2 \Phi^2\,,
\end{align}
but now perform the Carroll redefinitions \eqref{eq: rescal1} and \eqref{eq: rescal2}. In this way we obtain
\begin{align}
e^{-1} \,\mathcal{L}_{\rm rel} = +\frac12\, \omega^2 \tau^\mu\tau^\nu \partial_\mu \Phi \partial_\nu \Phi -\frac12\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi -\frac12\epsilon M^2 \Phi^2\,,
\end{align}
where $e = \det (\tau_\mu,e_\mu{}^{a})$ and where we have ignored an overall power of $\omega$.
Like in the Galilei case, there are now two options to proceed. First,
to deal with the quadratic divergence in the first term, we rewrite the Lagrangian introducing an auxiliary field $\chi$ as follows:
\begin{align}\label{case3a}
e^{-1} \,\mathcal{L}_{\rm rel} = -\frac12\, \frac1{\omega^2}\chi^2 - \chi \tau^\mu\partial_\mu\Phi - \frac12\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi -\frac12\epsilon M^2 \Phi^2\,.
\end{align}
Next, choosing $\epsilon=-1$ and taking the limit $\omega \to \infty$, we see that $\chi$ has become a Lagrange multiplier and we obtain the following magnetic Carroll Lagrangian:
\begin{align}\label{case3b}
e^{-1} \,\mathcal{L}_{\rm magnetic \ Carroll} = - \chi \tau^\mu\partial_\mu\Phi - \frac12\, e^{\mu a} e^{\nu b}\delta_{ab} \partial_\mu \Phi \partial_\nu \Phi +\frac12 M^2 \Phi^2\,.
\end{align}
The flat spacetime Lagrangian and the corresponding transformation rules are given by
\begin{align}\label{case3bflat}
\mathcal{L}_{\rm magnetic \ Carroll}(flat\ spacetime) = - \chi (\partial_t\Phi) - \frac12\, (\partial_i \Phi)^2 +\frac12 M^2 \Phi^2\,
\end{align}
and
\begin{equation}
\delta \Phi = \Big(\zeta\,\partial_t +\xi^{i}\partial_{i} - \lambda^i x_i\partial_t -x^{j}\lambda^{i}{}_{j}\,\partial_{i}
\Big)\Phi \,,\hskip 1truecm \delta \chi = - \lambda^i x_i\partial_t \chi + \lambda^i(\partial_i\Phi)\,.
\end{equation}
A second option is to first redefine $\Phi =\frac{1}{\omega}\phi, M=\omega m$. We then choose $\epsilon=+1$ and take the limit $\omega \to \infty$ after which we obtain the following Lagrangian \cite{Bergshoeff:2014jla}:
\begin{align}\label{eq:case3c}
e^{-1} \,\mathcal{L}_{\rm electric\ Carroll } = +\frac12\, \tau^\mu\tau^\nu \partial_\mu \phi \partial_\nu \phi -\frac12 m^2 \phi^2\,.
\end{align}
The flat spacetime Lagrangian and the corresponding transformation rules are given by \cite{Bergshoeff:2014jla}
\begin{align}\label{eq:case3c-flat}
\mathcal{L}_{\rm electric\ Carroll }(flat\ spacetime) = +\frac12\, (\partial_t \phi)^2 -\frac12 m^2 \phi^2\,
\end{align}
and
\begin{equation}
\delta \phi = \Big(\zeta\,\partial_t +\xi^{i}\partial_{i} -\lambda^i x_i\partial_t -x^{j}\lambda^{i}{}_{j}\,\partial_{i}
\Big)\phi\,.
\end{equation}
This concludes our discussion of the four limits of a real massive spin-0 particle.
\subsection{Complex Massive Spin-0}
Following \cite{Bergshoeff:2015sic}, we now discuss the standard Bargmann limit of a complex Klein-Gordon scalar field in a curved background.
In contrast to the real scalar case discussed above, the introduction of an extra vector gauge field has the effect that the quadratic divergences cancel and there is only one way to take the limit.
Our starting point is a Lagrangian for a relativistic massive complex scalar $\Phi$, with mass $M$, minimally coupled to an arbitrary gravitational background and the extra zero-flux U(1) gauge field $M_\mu$ that we introduced in the previous section:
\begin{align}\label{relscalar}
E^{-1} \,\mathcal{L}_{\rm rel} = -\frac12\,g^{\mu\nu}\,D_\mu\Phi^*D_\nu\Phi -\frac{M^2}{2}\,|\Phi|^2 \,.
\end{align}
Here the covariant derivative is given by
\begin{align}
D_\mu\Phi = \partial_\mu\Phi - {\rm i}\,M\,M_\mu\,\Phi \,.
\end{align}
Apart from invariance under diffeomorphisms, the above Lagrangian is also invariant under a local U(1) symmetry given by the transformation rule
\begin{equation}
\label{eq:PhiU1}
\delta \Phi = {\rm{i}}\, M\, \Lambda\, \Phi \,.
\end{equation}
The conserved current associated to this local U(1) symmetry, which is given by
\begin{equation}
j^\mu_{\rm rel} = \frac{M}{2{\rm{i}}} \, \Big( \Phi^* D^\mu \Phi - \Phi D^\mu \Phi^*\Big) \,,
\end{equation}
expresses conservation of the number of particles minus the number of antiparticles.
Using the redefinitions of the previous section and redefining the mass parameter $M$ as
\begin{equation}
\label{eq:massresc}
M = \omega m \,,
\end{equation}
one finds that the $O(\omega^2)$ contribution to the Lagrangian cancels with one contribution coming from the mass term and another one from the term that is quadratic in the U(1) gauge field. Therefore, the $\omega\rightarrow \infty$ limit is well-defined and leads to the following Lagrangian for a Schr\"odinger field coupled to an arbitrary Newton-Cartan background:\,\footnote{We have ignored an overall factor of $\omega$ coming from the redefinition of $E = \omega \, e + \mathcal{O}(\omega^{-1})$. This factor is irrelevant as it amounts to an overall rescaling of the Lagrangian that could be compensated by a redefinition of the scalar field. } \footnote{We have turned curved indices into flat indices using the inverse Newton-Cartan Vierbeine. Thus $\tilde{D}_0$, $\tilde{D}_{a}$ are shorthand for $\tau^\mu \tilde{D}_\mu$, $e^\mu{}_{a} \tilde{D}_\mu$. Spatial flat indices are raised and lowered with a Kronecker delta.}
\begin{align}\begin{split}\label{nrscalar}
e^{-1} \mathcal{L}_{\rm non-rel} = m\,\Big[\,\frac{{\rm{i}}}2\,\Big(\Phi^*\tilde D_0\Phi -\Phi\tilde D_0\Phi^*\Big)
-\frac1{2m}\,\big|\tilde D_{a}\Phi\big|^2 \,\Big] \,,
\end{split}\end{align}
where we have defined
\begin{align}
\tilde D_\mu\Phi = \partial_\mu\Phi +{\rm{i}}\,m\,m_\mu\,\Phi \,.
\end{align}
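To make the advertised cancellation of the $\mathcal{O}(\omega^2)$ terms explicit, we sketch the relevant part of the computation (displaying only the leading terms; we use $M_\mu=\omega\tau_\mu$, hence $\tau^\mu M_\mu=\omega$, and $E^\mu{}_{0} = \frac{1}{\omega}\tau^\mu + \cdots$):
\begin{equation*}
\frac12\,\big|E^\mu{}_{0}D_\mu\Phi\big|^2 = \frac{1}{2\omega^2}\,\Big|\tau^\mu\partial_\mu\Phi - {\rm i}\,\omega^2 m\,\Phi\Big|^2 + \cdots
= \frac{\omega^2 m^2}{2}\,|\Phi|^2 + \frac{{\rm i}\, m}{2}\,\Big(\Phi^*\tau^\mu\partial_\mu\Phi - \Phi\,\tau^\mu\partial_\mu\Phi^*\Big) + \cdots\,.
\end{equation*}
The divergent first term is cancelled by the mass term $-\frac{M^2}{2}|\Phi|^2 = -\frac{\omega^2 m^2}{2}|\Phi|^2$, while the second term reproduces the leading part of the kinetic term of \eqref{nrscalar}; the subleading terms assemble the $m_\mu$-dependent pieces into the derivatives $\tilde D_\mu$.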
The Lagrangian \eqref{nrscalar} is invariant under diffeomorphisms (with parameter $\xi^\mu$) and the local U(1) central charge transformation of the Bargmann algebra (with parameter $\sigma$), under which $\Phi$ transforms as
\begin{equation} \label{eq:nonrelU1Phi}
\delta \Phi = \xi^\mu \partial_\mu \Phi -{\rm{i}}\, m \, \sigma \, \Phi \,.
\end{equation}
One can then define the current associated to the central charge transformation by
\begin{equation}
\label{eq:partconscurrent}
j^\mu_{\rm non-rel} = \tau^\mu \,|\Phi|^2 + e^\mu{}_{a}\, \frac{1}{2 m {\rm{i}}} \left( \Phi^*\tilde{D}^{a} \Phi - \Phi\tilde{D}^{a} \Phi^* \right)\,.
\end{equation}
When choosing a flat background
\begin{align}
\label{eq:flatbackground}
\tau_\mu = \delta_\mu^t \,, \qquad\quad e_t{}^{a} = 0,\quad e_i{}^{a} = \delta_i^{a} \,, \qquad\quad m_\mu = 0 \,,
\end{align}
this current corresponds to the usual current of particle number or mass conservation. We thus explicitly see that, as expected for a non-relativistic limit, our NR limit procedure has suppressed antiparticles.
It is instructive to look at the action on $\Phi$ of the symmetries that are left when the flat background (\ref{eq:flatbackground}) is chosen.
The transformation rules (\ref{eq:nonrelU1Phi}) then reduce to those that leave these flat background fields invariant. They are determined by the following NR Killing equations
\begin{align}\begin{aligned}
\label{eq:killeqsnr}
\partial_\mu \xi^t &= 0 \,, \qquad &\partial_t \xi^i + \lambda^i &= 0 \,, \\
\partial_i \xi^j + \lambda^j{}_i &= 0 \,, \qquad& \partial_t \sigma &= 0 \,, \qquad& \partial_i \sigma + \lambda_i &= 0 \,.
\end{aligned}\end{align}
The solution to these equations is given by
\begin{align}
\label{eq:killvecsnr}
\xi^t (x^\mu) = \zeta\,, \qquad \xi^i(x^\mu) = \xi^i - \lambda^i\, t - \lambda^i{}_j\, x^j\,, \qquad \sigma(x^\mu) = \sigma - \lambda^i \, x^i\,,
\end{align}
where the parameters $\zeta$, $\xi^i$, $\lambda^i$, $\lambda^{ij}$, $\sigma$ are now constants. These correspond to the usual time translation, spatial translations, Galilean boosts, spatial rotations and central charge transformation of the rigid Bargmann algebra. One thus finds that
$\Phi$ transforms as follows:
\begin{align} \label{eq:flatsymmscalnr}
\delta\Phi = \Big(\zeta\,\partial_t +\xi^i\partial_i -\lambda^i\,t\,\partial_i -x^j\lambda^i{}_j\,\partial_i
-{\rm{i}}\,m\,\sigma + {\rm{i}}\,m\,\lambda^i x^i\Big)\Phi \,.
\end{align}
The last term in this transformation rule corresponds to the phase
factor acquired by a Schr\"odinger field under rigid Galilean boosts,
that is necessary to show Galilei invariance of the flat space
Schr\"odinger Lagrangian. We note that this same Schr\"odinger
Lagrangian is also invariant under an extra dilatation and special
conformal transformation, that extend the symmetries of the Bargmann
algebra denoted in (\ref{eq:flatsymmscalnr}) to the ones of the
Schr\"odinger algebra \cite{Niederer:1972zz}. So, even though we
started from a relativistic theory with no conformal invariance, we
end up with a NR theory that is invariant under non-relativistic
conformal Schr\"odinger symmetries.
\subsection{Massive Spin-1/2}
Following \cite{Bergshoeff:2015sic}, our starting point is the Dirac Lagrangian for a $d=3$ massive spin 1/2 particle described by a 4-component spinor $\Psi$ coupled to an arbitrary gravitational background and the U(1) gauge field $M_\mu$:
\begin{align}\label{relspin}
E^{-1} \mathcal{L}_{\rm rel} = \bar\Psi\slashed{D}\Psi -M\,\bar\Psi\Psi +{\rm h.c.}\,,
\end{align}
where the covariant derivative is given by
\begin{align}
D_\mu\Psi = \partial_\mu\Psi - \frac14\,\Omega_\mu{}^{ A B}\gamma_{ A B}\Psi + {\rm{i}}\,M\,M_\mu\,\Psi \,.
\end{align}
Before taking the limit, it is convenient to define projected spinors in terms of the original spinor as follows \cite{Gomis:2004pw}:
\begin{align} \label{eq:spinproj}
\Psi_\pm = \frac12\,\Big(\unity \pm {\rm{i}}\,\gamma^0\Big)\Psi\,, \hskip2cm \bar\Psi_\pm = \bar\Psi \,\frac12\,\Big(\unity \pm {\rm{i}}\,\gamma^0\Big) \,.
\end{align}
Besides redefining the bosonic gravitational fields we also redefine these projected spinors as follows \cite{Gomis:2004pw}:
\begin{align} \label{eq:spinprojresc}
\Psi_+ = \sqrt{\omega}\,\psi_+ \,, \hskip2cm \Psi_- = \frac1{\sqrt \omega}\,\psi_- \,.
\end{align}
Using all these redefinitions and taking $M = \omega m$, one finds that, upon taking the $\omega\rightarrow\infty$ limit, the action \eqref{relspin} reduces to
\begin{align}\begin{split}\label{4dNRdirac}
e^{-1} \mathcal{L}_{\rm non-rel} &= \bar\psi_+\gamma^0\tilde D_0\psi_+ +\bar\psi_+\gamma^{a}\tilde D_{a}\psi_- +\bar\psi_-\gamma^{a}\tilde D_{a}\psi_+
-2\,m\,\bar\psi_-\psi_- +{\rm h.c.}\,,
\end{split}\end{align}
where we have used the covariant derivatives
\begin{align}\begin{split}\label{4dcovD}
\tilde D_\mu\psi_+ &= \partial_\mu\psi_+ -\frac14\,\omega_\mu{}^{ab}\gamma_{ab}\psi_+ -{\rm{i}}\,m\,m_\mu\,\psi_+ \,, \\
\tilde D_\mu\psi_- &= \partial_\mu\psi_- -\frac14\,\omega_\mu{}^{ab}\gamma_{ab}\psi_- +\frac12\,\omega_\mu{}^{a}\gamma_{a0}\psi_+
-{\rm{i}}\,m\,m_\mu\,\psi_- \,.
\end{split}\end{align}
Note that all divergences have canceled.
The invariance of the Lagrangian \eqref{4dNRdirac} under Galilean boosts is not manifest but can be checked by using the transformation rules
\begin{align}\begin{split}\label{nice}
\delta \psi_+ &= \frac14\,\lambda^{ab}\gamma_{ab}\psi_+ + {\rm{i}} \, m \, \sigma \psi_+ \,, \\
\delta \psi_- &= \frac14\,\lambda^{ab}\gamma_{ab}\psi_- -\frac12\,\lambda^{a}\gamma_{a0}\psi_+ + {\rm{i}} \, m \, \sigma \,\psi_- \,,
\end{split}\end{align}
that are easily found by applying all field redefinitions in the relativistic transformation rules and taking the limit $\omega \to \infty$.
The equations of motion corresponding to the non-relativistic Lagrangian \eqref{4dNRdirac} are given by the L\'evy--Leblond equations
\begin{align}\begin{split}
\gamma^0\tilde D_0 \psi_+ +\gamma^{a}\tilde D_{a} \psi_- &=0\,, \\[.1truecm]
\gamma^{a}\tilde D_{a} \psi_+ -2\,m\,\psi_- &=0\,.
\end{split}\end{align}
The second equation can be used to solve for the auxiliary spinor $\psi_-$ and eliminate it from the Lagrangian \eqref{4dNRdirac}. Substituting the solution for $\psi_-$ back into the first equation
we obtain the curved space generalization of the so-called
Schr\"odinger--Pauli equation:
\begin{align} \label{modLL}
\Big[\gamma^0\tilde D_0 +\frac1{2m}\tilde D^{a} \tilde D_{a}
\Big]\psi_+ =0 \,.
\end{align}
\subsection{Massless Spin 1}
We now consider the massless spin 1 case. Our starting point is the Lagrangian of a real, massless, relativistic vector field coupled to gravity:
\begin{equation}
\label{eq:realvecrel}
E^{-1} \mathcal{L}_{\rm rel} = - \frac14\, E^\mu_{ A} E^{\rho A} E^\nu_{ B} E^{\sigma B} F_{\mu\nu} F_{\rho \sigma} \,,
\end{equation}
where $F_{\mu\nu}$ is the usual Maxwell field strength. Like for the spin-0 case, one can take an electric or a magnetic Galilean limit of electrodynamics, as well as an electric and a magnetic Carrollian limit. We will only discuss here the magnetic Galilean limit since it shows the additional feature of an emergent symmetry, something that does not occur in the spin-0 case.
Redefining the gravitational background fields like in the previous section leads to the following Lagrangian
\begin{equation}\label{nrexpansion}
e^{-1} \mathcal{L}_{\rm rel} = \frac{1}{2\omega^2}\tau^\mu\tau^\nu F_{\mu a}F_{\nu }{}^a - \frac14\, F_{ab} F^{ab} \,.
\end{equation}
Taking the limit $\omega \to \infty$, we obtain for a flat spacetime
\begin{equation}
\label{eq:realvecnonrel}
\mathcal{L}_{\rm non-rel} = - \frac14\, F_{ij} F^{ij} \,.
\end{equation}
Due to the absence of the field $A_0$ this Lagrangian has an emergent
Stueckelberg symmetry $\delta A_0(x) = \rho(x)$ while the
corresponding field equation of $A_0$ does not follow directly from
the non-relativistic Lagrangian \eqref{eq:realvecnonrel}. The
situation is very similar to what happens when taking the limit of
Neveu-Schwarz gravity where the Poisson equation of the Newton
potential is missing \cite{Bergshoeff:2021bmc}. The missing equation
of motion can be obtained by taking the limit of the relativistic
equations of motion. The complete set of non-relativistic equations of
motion form a reducible but indecomposable representation under
Galilean boosts which means that the equation of motion corresponding
to $A_0$ transforms to the equations of motion corresponding to $A_i$
but not the other way around. This shows the following connection
between the equation of motion corresponding to $A_0$ and the
Lagrangian \eqref{eq:realvecnonrel}: the missing equation of motion
corresponding to $A_0$ is not invariant under Galilean boosts by
itself but instead transforms into the equations of motion that
follow from the non-relativistic Lagrangian \eqref{eq:realvecnonrel}.
\subsection{Massless Spin 1 with an additional scalar field}
Allowing the option to add extra fields to the Lagrangian, there is yet another way to obtain a non-relativistic Lagrangian from a relativistic one.
To be precise, extending the relativistic Lagrangian with a massless scalar $\rho$ we consider the following Lagrangian \cite{Bergshoeff:2015sic}:
\begin{equation}
\label{eq:realvecscalrel}
E^{-1} \mathcal{L}_{\rm rel} = - \frac14\, E^\mu_{ A} E^{\rho A} E^\nu_{ B} E^{\sigma B} F_{\mu\nu} F_{\rho \sigma} - \frac12\, E^\mu_{ A} E^{\nu A} \partial_\mu \rho\, \partial_\nu \rho \,.
\end{equation}
Defining two fields $A$ and $B$ as follows:
\begin{equation} \label{eq:AB}
A = E^\mu{}_0 A_\mu - \rho \,, \qquad B = E^\mu{}_0 A_\mu + \rho \,,
\end{equation}
one can redefine the bosonic background fields like in the previous section supplemented with the redefinitions
\begin{equation} \label{eq:ABtilde}
A = \frac{1}{\omega} \tilde{A} \,, \qquad B = \omega \tilde{B} \,,
\end{equation}
to obtain the following non-relativistic Lagrangian in the $\omega \rightarrow \infty$ limit\,\footnote{For a flat spacetime, the same Lagrangian can be obtained by a null reduction \cite{Festuccia:2016caf}. A `T-dual way' to obtain the same Lagrangean is to take a so-called string limit of Maxwell in one dimension higher and to reduce over the spatial direction longitudinal to the string \cite{Bergshoeff:2021tfn}.}
\begin{equation}
\label{eq:realvecscalnonrel}
e^{-1} \mathcal{L}_{\rm non-rel} = \frac18\,\partial_0\tilde{B}\,\partial_0\tilde{B}
+\frac12\,\tilde{D}_{a}\tilde{A}\,\partial^{a}\tilde{B} -\frac14\,F_{ab} F^{ab}
-\frac12\,\tilde{D}^{a} A_{a}\, \partial_0 \tilde{B} \,,
\end{equation}
where the following derivatives were used
\begin{align} \label{eq:halfcovders}
\tilde{D}_\mu \tilde{A} &= \partial_\mu \tilde{A} + \omega_\mu{}^{a} A_{a} \,, \nonumber \\
\tilde{D}_\mu A_{a} &= \partial_\mu A_{a} - \omega_{\mu\, a}{}^{b} A_{b} + \frac12\, \omega_\mu{}_{a} \tilde{B} \,.
\end{align}
Note that the basic variables are a spatial vector $A_{a}$ with spatial flat indices and two extra fields $\tilde{A}$, $\tilde{B}$. These fields transform non-trivially under local spatial rotations and Galilean boosts as follows:
\begin{align}
\delta \tilde{A} &= -\lambda^{a} A_{a} \,, \qquad \qquad \qquad \delta \tilde{B} = 0\,, \nonumber \\
\delta A_{a} &= \lambda_{a}{}^{b} A_{b}- \frac12\, \lambda^{a} \tilde{B} \,,
\end{align}
while they transform as scalars under general coordinate transformations.
It is with respect to these transformations that the above derivatives (\ref{eq:halfcovders}) are defined.
The Lagrangian (\ref{eq:realvecscalnonrel}) is also invariant under the U(1) gauge transformation
\begin{equation}
\label{eq:nonrelU1}
\delta \tilde{A} = \tau^\mu \partial_\mu \Lambda \,, \qquad\quad \delta A_{a} = e^\mu{}_{a} \partial_\mu \Lambda \,,
\end{equation}
although this invariance is not manifest.
To get a better physical understanding of the Lagrangian
(\ref{eq:realvecscalnonrel}), we consider the equations of motion when
restricted to the flat background \eqref{eq:flatbackground} (such that
$i = a$):
\begin{equation}
\begin{split}
\partial^i \partial_i \tilde{B} &= 0 \,, \\
\partial_i \partial_t \tilde{B} + \partial^j F_{ji} &= 0 \,, \\
\partial_t \partial_t \tilde{B} - 2\, \partial^i \partial_i \tilde{A} + 2\, \partial_t \partial^i A_i &= 0 \,.
\end{split}
\end{equation}
One can consistently set $\tilde{B}$ to zero in these equations since
this constraint is invariant under all the symmetries of the theory.
The remaining equations for $\tilde{A}$ and $A_i$ then coincide with
the equations of Galilean Electromagnetism in the magnetic limit,
where $\tilde{A}$ plays the role of the electric potential. This
theory is not only invariant under the Galilei group, but also under
the Galilean conformal group \cite{Niederer:1972zz, Hagen:1972pd,
Duval:1990hj, Henkel:1993sg}. The latter is the conformal extension
of the Galilei group that is obtained by performing an
In\"on\"u-Wigner contraction of the relativistic conformal
group. Since the relativistic Lagrangian we started from is
conformally invariant when restricted to flat space, it is not
surprising to see that the non-relativistic limit is invariant under
Galilean conformal transformations.
\section{Conclusion}
\label{sec:conclusion}
In this review we summarized the basic properties of a number of
non-Lorentzian theories. We first discussed the kinematical spaces and
corresponding symmetry algebras of these non-Lorentzian theories. We
next constructed a number of actions describing the dynamics of
particles moving in these kinematical spaces. For this, we applied the
method of nonlinear realisations and explained the relation between
this method and the co-adjoint orbit method. We have also analysed
the non-Lorentzian particles as a suitable non-relativistic limit of
relativistic particles. We also discussed three types of
non-Lorentzian gravity theories: Galilei gravity, Newton-Cartan
gravity and Carroll gravity. We not only showed how these gravity
theories can be obtained by applying a gauging procedure to an
underlying non-relativistic Lie algebra but also by taking a special
non-relativistic limit of general relativity. Introducing matter, we
discussed the coupling of gravity to field theories describing particles
of different spin. We achieved this by starting from the relativistic
field theories coupled to general relativity and taking a
non-relativistic limit.
There are several ways to extend the results presented in this review
some of which are discussed in the other articles in this volume. As
mentioned in the introduction, one could extend the degenerate
geometries we considered here to geometries that are characterized by
a foliation of a higher co-dimension. In particular, the geometries
with a co-dimension two foliation play an important role in describing
non-relativistic string theory, see the article by Oling and Yan
\cite{Oling:2022fft}.
\section*{Acknowledgments}
We acknowledge many enlightening conversations on non-Lorentzian
topics with the following people: Roberto Casalbuoni, Can Görmez, Ross
Grassie, Jelle Hartong, Emil Have, Yannick Herfray,
Axel Kleinschmidt, Johannes Landsteiner, Stefan Prohazka, Luca Romano,
Jan Rosseel, Jakob Salzer, Dieter Van den Bleeken and Kevin van Helden.
The work of JG has been supported in part by MINECO
FPA2016-76005-C2-1-P and PID2019-105614GB-C21 and from the State
Agency for Research of the Spanish Ministry of Science and Innovation
through the Unit of Excellence Maria de Maeztu 2020-203 award to the
Institute of Cosmos Sciences (CEX2019-000918-M).
Wilson and Cowan \cite{wandc} proposed a model for describing the dynamics of localized populations of excitatory and inhibitory neurons. This model is a coarse-grained description of the overall activity of a large-scale neuronal network, employing just two differential equations \cite{Kilpatrick}. It is used in the developing of multi-scale mathematical model of cortical electric activity with realistic mesoscopic connectivity \cite{epilepsy}.
On the other hand, sudden changes and the instantaneous perturbations in a neural network at a certain time, which are identified by external elements, are examples of impulsive phenomena which may influence the evolutionary process of the neural network \cite{akca}. In fact, the existence of impulse is often a source of richness for a model. That is to say, the impulsive neural networks will be an appropriate description of
symptoms of sudden dynamic changes. Therefore, the models considered in this paper have impulsive moments.
The singularly perturbed problems depend on a small positive parameter, which is in front of the derivative, such that the solution varies rapidly in some regions and varies slowly in other regions. They arise in the various processes and phenomena such as chemical kinetics, mathematical biology, neural networks, fluid dynamics and in a variety models for control theory \cite{segel,hek,owen,terman,fluid, kokotovic,gondal}. In this article, we will investigate the Wilson-Cowan model with singular impulsive function in which singular perturbation method has been used to analyze the dynamics of neuronal models.
Local bifurcations are ubiquitous in mathematical biology \cite{kuang1} and mathematical neuroscience \cite{hop,diego,dias}, because they provide a framework for understanding behavior of the biological networks modeled as dynamical systems. Moreover, a local bifurcation can affect the global dynamic behavior of a neuron \cite{hop}. There are many neuronal models to consider the bifurcation analysis, for instance, the bifurcation for Wilson-Cowan model is discussed in the book of Hoppensteadt and Izhikevich \cite{hop} in which they consider the model of the following type
\[\dot{x}=-x+S(\rho+cx),\]
where $x\in \mathbb{R}$ is the activity of the neuron, $\rho\in \mathbb{R}$ is the external input to the neuron, the feedback parameter $c\in \mathbb{R}$ characterizes the non-linearity of the system, and $S$ is a sigma shaped function. This system consists only one neuron or one population of neurons. When the bifurcation parameter $\rho$ changes, the saddle-node bifurcation occurs. In our paper, we will discuss two and four of population of neurons. These systems have impulses of prescribed moments of time. We will observe the local bifurcation in these models.
The attractors observed in our simulations do not resemble any attractors which have already been observed in the literature. This is why, we need to introduce a new terminology to describe an ultimate behavior of motion in the model.
We call the recently introduced components of constructed attractors as medusas and rings.
This ``zoological" approach to dynamics is not unique in differential equations. For example, canards
are cycles of singularly perturbed differential equations \cite{krupa,szmolyan,de}. They were discovered in the van der Pol oscillator by Benoit et al \cite{benoit}. This phenomenon explains the very fast transition upon variation of a parameter from a small amplitude limit cycle to a relaxation oscillation \cite{krupa}.
The fast transition is called canard explosion and happens within an exponentially small range of the control parameter. Because this phenomenon is hard to detect it was nicknamed a canard, after the French newspaper slang word for hoax. Furthermore, the shape of these periodic orbits in phase space resemble a duck; hence the name ``canard," the French word
for duck. So the notion of a canard cycle was born and the chase after these creatures began \cite{campbell}. It is important to note that both canards and medusas appear in the singularly perturbed systems.
Bifurcation occurred in this paper cannot be reduced to the existing local bifurcations in the literature, namely, saddle-node, pitchfork, Hopf bifurcations, etc. First of all, we are talking about the change of an attractor set in the four subpopulations of neurons of Wilson-Cowan model with impulses depending on the change of the small parameter. This time the bifurcation parameter is also the parameter of the singularity. Moreover, it is a parameter of the singularity not only in the differential equations of the model, but also in the impulsive part of it. Thus, the cause of bifurcation is not the change of eigenvalues, but it relates to the singular compartment and the impulsive dynamics of the model. This is why, theoretical approvement of the observed bifurcations has not been done in the paper. However, we see that the abrupt changes in the phase portrait through simulations. Additionally, we notice that in the numerical study attractors of the model can be described through the new picture's elements which we call as medusa, medusa without ring and rings, which, in general, may not be considered invariant for solutions of the model despite that the elements are introduced for the first time. We are confident that they are very generic for differential equations with impulses and they will give a big benefit in the next investigations of discontinuous neural networks.
We will start by defining the membrane time constant since it will be used as the parameter of singularity and bifurcation.
\section{Membrane Time Constant}
The role of the membrane time constant is important in Wilson-Cowan models. In these models the frequency of the oscillation is determined primarily by the membrane time constants \cite{ramesh}. Let us define the membrane time constant $\mu$ for a simple circuit.
Suppose that the membrane is characterized by a single membrane capacitance $C$ in parallel with a single voltage-independent membrane resistance $R,$ see Fig. \ref{fig:RC}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.07]{RC}
\vspace{-10pt}
\caption{A simple RC circuit.}
\label{fig:RC}
\end{figure}
Then, by Kirchhoff's and Ohm's laws, the potential $V$ across this circuit in response to a current injection $I$ evolves according to
\begin{equation*}
RC\frac{dV}{dt}=-V+IR,
\end{equation*}
which has the solution \[V(t)=IR(1-e^{-\frac{t}{RC}})\] for the initial value $V(0)=0.$
The membrane time constant is here defined as the product of the membrane resistance and the membrane capacitance, $\mu=RC.$ The potential $V(t)$ relaxes exponentially toward
the steady state $V=IR$, and the relaxation becomes faster as $\mu \to 0.$
The membrane time constant is used to understand how quickly a neuron's voltage level changes after it receives an input signal.
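As a simple numerical illustration of the role of the membrane time constant (an illustrative sketch; the values of $R$, $C$ and $I$ below are hypothetical and not taken from a specific neuron), one may evaluate the solution above and verify the familiar rule of thumb that $V$ reaches about $63\%$ of its steady state after one time constant:
\begin{verbatim}
import numpy as np

# Membrane dynamics: RC dV/dt = -V + I*R with V(0) = 0.
R, C, I = 10.0, 0.05, 2.0             # illustrative values only
mu = R * C                            # membrane time constant

t = np.linspace(0.0, 5.0 * mu, 1001)
V = I * R * (1.0 - np.exp(-t / mu))   # exact solution

# After one time constant V is at ~63% of the steady state I*R,
# after five time constants it is within 1% of it.
print(V[np.searchsorted(t, mu)] / (I * R))   # ~0.632
print(V[-1] / (I * R))                       # ~0.993
\end{verbatim}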
\section{Singular Model with Singular Impulsive Function}
The dynamics of excitatory and inhibitory neurons are described as follows \cite{wandc}
\begin{equation}\label{model}
\begin{split}
\mu_e\frac{dE}{dt}&=-E+(k_e-r_eE)S_e(c_1E-c_2I+P), \\
\mu_i\frac{dI}{dt}&=-I+(k_i-r_iI)S_i(c_3E-c_4I+Q),
\end{split}
\end{equation}
where $E(t)$ and $I(t)$ are the proportion of excitatory and inhibitory cells firing per unit time at time $t$, respectively, $c_1$ and $c_2$ are the connectivity coefficients, which are both positive and represent the average number of excitatory and inhibitory synaptic inputs
per cell, $P(t)$ represents the external input to the excitatory subpopulation, the quantities $c_3,c_4$ and $Q(t)$ are defined similarly for the inhibitory subpopulation. The nonzero quantities $\mu_e$ and $\mu_i$ represent the membrane time constants while $k_e,k_i,r_e$
and $r_i$ are associated with the refractory terms. Moreover, $S_e(x)$ is the sigmoid function of the following form
\begin{equation}
S_e(x)=\frac{1}{1+\exp[-a_e(x-\theta_e)]}-\frac{1}{1+\exp(a_e\theta_e)},
\end{equation}
where $\theta_e$ is the position of the maximum slope of $S_e(x)$ and $\max[\dot{S}_e(x)] = a_e / 4,$ and $S_i$ is defined similarly.
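Indeed, writing $S_e(x)=\sigma\big(a_e(x-\theta_e)\big)-\sigma(-a_e\theta_e)$ with the logistic function $\sigma(u)=1/(1+e^{-u})$, one computes
\begin{equation*}
\dot{S}_e(x) = a_e\,\sigma\big(a_e(x-\theta_e)\big)\Big(1-\sigma\big(a_e(x-\theta_e)\big)\Big)\,,
\end{equation*}
which attains its maximum value $a_e/4$ at $x=\theta_e$.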
Since the external inputs influence the neuronal activities, $E(t)$ and $I(t)$ can change abruptly. It is therefore natural to supplement the continuous dynamics above, in which the membrane time constants are involved in the electrical processes, with impulsive equations of the form
\begin{equation}\label{impulsefunction2}
\begin{split}
&\Delta E|_{t=\theta_i}=\bar{K}(E,I), \\
&\Delta I|_{t=\theta_i}=\bar{J}(E,I),
\end{split}
\end{equation}
where the impulse moments $\theta_i$ are distinct, $\theta_i \in (0,T)$, and $\Delta E|_{t=\theta} = E(\theta+)-E(\theta-)$ denotes the jump operator, in which $t=\theta$ is the time when the external input influences $E(t)$, $E(\theta-)$ is the pre-impulse value and $E(\theta+)$ is the post-impulse value. Moreover, if one considers the impulsive equations as limit cases of the differential equations, then at some moments the impulsive changes of the activities can depend on the membrane time constants, similarly to the ones for system \eqref{model}. More precisely, we will also study equations of the form
\begin{equation}\label{impulsefunction1}
\begin{split}
\mu_e\Delta E|_{t=\eta_j}=K(E,I,\mu_e), \\
\mu_i\Delta I|_{t=\eta_j}=J(E,I,\mu_i),
\end{split}
\end{equation}
where the moments $\eta_j$ are, in general, different from the $\theta_i$. Finally, gathering all the dynamical details formulated above, our single Wilson-Cowan model with impulses has the following form
\begin{equation}\label{modelg}
\begin{split}
&\mu_e\frac{dE}{dt}=-E+(k_e-r_eE)S_e(c_1E-c_2I+P), \\
&\mu_i\frac{dI}{dt}=-I+(k_i-r_iI)S_i(c_3E-c_4I+Q),\\
&\Delta E|_{t=\theta_i}=\bar{K}(E,I), \\
&\Delta I|_{t=\theta_i}=\bar{J}(E,I), \\
&\mu_e\Delta E|_{t=\eta_j}=K(E,I,\mu_e), \\
&\mu_i\Delta I|_{t=\eta_j}=J(E,I,\mu_i),
\end{split}
\end{equation}
with the initial activity $(E(0),I(0))=(E_0,I_0).$
Define the function $F(E,I)=\begin{pmatrix}
-E+(k_e-r_eE)S_e(c_1E-c_2I+P) \\
-I+(k_i-r_iI)S_i(c_3E-c_4I+Q)
\end{pmatrix}.$
Suppose that $E,I \in \mathbb{R}$, $t \in [0, T],$ $F(E,I)$ is continuously differentiable on $D$, $K(E,I,\mu_e),J(E,I,\mu_i)$ are continuous on $D\times[0,1]$ and $\bar{K}(E,I),\bar{J}(E,I)$ are continuous on $D$, $D$ is the domain $D=\{0\leq t\leq T, |E|<d,|I|<d \},$ $\theta_i,i=1,2,\dots,p,$ and $\eta_j,j=1,2,\dots,\bar{p},$ are distinct discontinuity moments in $(0,T).$
Substituting $\mu_e=\mu_i=0$ in \eqref{model} and \eqref{impulsefunction1}, we obtain
$F(E,I)=0$ and
\begin{equation}\label{impulsefunction1mu0}
\begin{split}
&0=K(E,I,0), \\
&0=J(E,I,0).
\end{split}
\end{equation}
Assume that the equations $F(E,I)=0$ and \eqref{impulsefunction1mu0} have the steady states \[(E_1,I_1), (E_2,I_2),...(E_k,I_k),(E_{k+1},I_{k+1}),\dots,(E_l,I_l)\] all of which are real and isolated in $\bar{D}$. They are considered to be states of low-level background activity, since such activities seem ubiquitous in neural tissue. $E(t)$ and $I(t)$ will be used to refer to the activities in the respective subpopulations.
The following condition is required for system \eqref{model}.
\begin{itemize}
\item[(C1)] Jacobian matrices of $F(E,I)$ at the points $(E_1,I_1), (E_2,I_2),...,(E_k,I_k)$ are Hurwitz matrices (they have eigenvalues whose real parts are negative).
\end{itemize}
This condition implies that the states $(E_1,I_1), (E_2,I_2),...,(E_k,I_k)$ are stable steady states of the differential equation \eqref{model}. Moreover, for the impulsive functions we need the following conditions.
\begin{itemize}
\item[(C2)] For each $j \in \{1,2,\dots,k\}$ there exists $i \in \{1,2,\dots,k\}$ such that
\[
\begin{pmatrix}
E_j \\
I_j
\end{pmatrix}
+
\begin{pmatrix}
\bar{K}(E_j,I_j) \\
\bar{J}(E_j,I_j)
\end{pmatrix}
=
\begin{pmatrix}
E_i \\
I_i
\end{pmatrix} .
\]
\end{itemize}
That is, after each impulse moment $\theta_j$ the activity $(E(t),I(t))$ will be close to another stable steady state $\begin{pmatrix} E_i \\ I_i \end{pmatrix}.$
\begin{itemize}
\item[(C3)] \[ \lim_{\substack{(E,I)\to (E_j,I_j) \\ \mu_{e,i} \to 0}}
\begin{pmatrix}
\frac{K(E,I,\mu_{e})}{\mu_{e}} \\
\frac{J(E,I,\mu_{i})}{\mu_{i}}
\end{pmatrix}=
\begin{pmatrix}
0 \\ 0
\end{pmatrix}
, \quad j=1,2,\dots,k.\]
\end{itemize}
In the denominator of the limit we have the small parameters $\mu_e$ and $\mu_i$, which tend to zero. The last condition is needed in order to avoid a blow-up. In addition, the vanishing of the limit guarantees that the activities stay in the domains of attraction of the stable steady states.
Denote by $D_j$ the domain of attraction of the stable steady state $(E_j,I_j),j=1,2,\dots,k,$ such that $D_i\cap D_j=\emptyset$ if $i\neq j$ and $D_j\subset D,j=1,2,\dots,k.$ Also, $z_j(t)$ will denote the solution of $F(E,I)=0$ and \eqref{impulsefunction1mu0}
such that if the initial value $(E_0,I_0)\in D_j,$ then $z_j(t)=(E_j,I_j)$ for $t\in (0,\theta_1]$ and it switches to the other stable steady states in accordance with condition (C2) on the next intervals $(\theta_i,\theta_{i+1}],i=1,2,\dots,p-1.$
\begin{theorem} \label{thml}
Suppose that conditions (C1)-(C3) are true. If the initial value $(E_0,I_0)$ is located in the domain of attraction $D_j$ of the steady state $(E_j,I_j),j=1,2,\dots,k,$ then the solution $(E(t),I(t))$ of \eqref{modelg} with $(E_0,I_0)$ exists on $[0,T]$ and satisfies the limit relation
\begin{equation}\label{lim}
\lim_{\mu_{e,i} \to 0}(E(t),I(t))=z_j(t) \quad \text{for} \quad 0<t\leq T,
\end{equation}
where $j=1,2,\dots,k(k-1)^p.$
\end{theorem}
The proof follows from the proof in \cite{cag}.
\textbf{Example. }
Now, let us take the external forces $P(t)=Q(t)=0$, $\mu_e=\mu_i=\mu,$ and other coefficients in \eqref{model} as follows: $c_1=12, c_2=4, c_3=13, c_4=11, a_e=1.2, a_i=1,\theta_e=2.8,\theta_i=4, r_e=1, r_i=1, k_e=0.97, k_i=0.98.$ Then, one obtains
\begin{equation}\label{model0}
\begin{split}
\mu\frac{dE}{dt}&=-E+(0.97-E)S_e(12E-4I), \\
\mu\frac{dI}{dt}&=-I+(0.98-I)S_i(13E-11I).
\end{split}
\end{equation}
Taking $\mu=0,$ one has the three equilibria (see Fig. \ref{fig:wlc}), namely
\[
\begin{pmatrix}
E \\
I
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0
\end{pmatrix}
,
\begin{pmatrix}
0.44234 \\
0.22751
\end{pmatrix}
,
\begin{pmatrix}
0.18816 \\
0.067243
\end{pmatrix}.
\]
\begin{figure}[H]
\centering
\vspace{-10pt}
\includegraphics[scale=0.25]{wlc}
\vspace{-10pt}
\caption{E-I phase plane of \eqref{model0}. The green curve represents the nullcline $-E+(0.97-E)S_e(12E-4I)=0$ and the red curve represents the nullcline $-I+(0.98-I)S_i(13E-11I)=0.$}
\label{fig:wlc}
\end{figure}
We have $F(E,I)=\begin{pmatrix}
-E+(0.97-E)S_e(12E-4I) \\
-I+(0.98-I)S_i(13E-11I)
\end{pmatrix}.$ Then, the Jacobian matrices of $F(E,I)$ at the steady states are
\[
\begin{pmatrix}
-0.5468 & -0.1511\\
0.2250 & -1.1904
\end{pmatrix}
,
\begin{pmatrix}
-0.9895 & -0.2829\\
2.1299 & -3.1045
\end{pmatrix}
,\]
\[
\begin{pmatrix}
1.0001 & -0.7469\\
0.9879 & -1.9096
\end{pmatrix},
\]
respectively. All eigenvalues of the first two matrices are negative, but the last matrix has a positive eigenvalue. Therefore, the first two steady states are stable, while the third one is unstable.
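These equilibria and eigenvalues are easy to reproduce numerically. The following illustrative Python sketch (not part of the original computations; the seeds for the root finder are read off from Fig. \ref{fig:wlc} and the Jacobian is approximated by central differences) locates the three equilibria of \eqref{model0} and prints the eigenvalues of the Jacobian at each of them:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def S(x, a, th):
    return 1/(1 + np.exp(-a*(x - th))) - 1/(1 + np.exp(a*th))

def F(z):
    E, I = z
    return [-E + (0.97 - E)*S(12*E - 4*I, 1.2, 2.8),
            -I + (0.98 - I)*S(13*E - 11*I, 1.0, 4.0)]

for seed in [(0.0, 0.0), (0.45, 0.23), (0.19, 0.07)]:
    eq = fsolve(F, seed)
    h, J = 1e-6, np.zeros((2, 2))
    for j in range(2):            # central-difference Jacobian
        dz = np.zeros(2); dz[j] = h
        J[:, j] = (np.array(F(eq + dz)) - np.array(F(eq - dz)))/(2*h)
    print(eq, np.linalg.eigvals(J))
\end{verbatim}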
We extend model \eqref{model0} with the following impulse functions
\begin{equation}\label{impulse1}
\begin{split}
&\Delta E|_{t=\theta_i}=-2E+0.44234, \\
&\Delta I|_{t=\theta_i}=-2I+0.22751.
\end{split}
\end{equation}
\begin{equation}\label{impulse2}
\begin{split}
&\mu \Delta E|_{t=\eta_i}=-\mu E^{1/2}(E-0.44234)^2-\sin(\mu^2)I, \\
&\mu \Delta I|_{t=\eta_i}=-\mu I^{1/3}(I-0.22751)^3-\sin(\mu^2)E,
\end{split}
\end{equation}
where $\theta_i=\frac{2i}{3}, \eta_i=\frac{2i-1}{3}, i=1,2,\dots,20.$ Let us check the conditions of Theorem \ref{thml}. We have shown that the states
$\begin{pmatrix}
0 \\
0
\end{pmatrix}
,
\begin{pmatrix}
0.44234 \\
0.22751
\end{pmatrix}$
are stable. Moreover, they satisfy the equations \eqref{impulse2} if $\mu=0.$ Condition (C2) holds since
\[
\begin{pmatrix}
0 \\
0
\end{pmatrix}
+
\begin{pmatrix}
0.44234\\
0.22751
\end{pmatrix}
=
\begin{pmatrix}
0.44234\\
0.22751
\end{pmatrix}
\]
and
\[
\begin{pmatrix}
0.44234\\
0.22751
\end{pmatrix}
+
\begin{pmatrix}
-0.88468+0.44234\\
-0.45502+0.22751
\end{pmatrix}
=
\begin{pmatrix}
0\\
0
\end{pmatrix}.
\]
Lastly, let us check the condition (C3):
\[ \lim_{\substack{(E,I)\to (E_j,I_j) \\ \mu \to 0}}
\begin{pmatrix}
-E^{1/2}(E-0.44234)^2-\frac{1}{\mu}\sin(\mu^2)I\\
-I^{1/3}(I-0.22751)^3-\frac{1}{\mu}\sin(\mu^2)E
\end{pmatrix}=
\begin{pmatrix}
0 \\ 0
\end{pmatrix}
, \quad j=1,2.\]
Clearly, all conditions are satisfied. Therefore, if the initial value $(E_0,I_0)$ is in the domain of attraction of the steady state $(0,0),$ then the activities $(E(t),I(t))$ approach the steady states as $\mu \to 0$, that is to say,
\[
\lim_{\mu \to 0}(E(t,\mu),I(t,\mu))=\begin{cases} (0,0) \text{ if } t\in (0,\theta_1]\cup(\theta_2,\theta_3]\cup \dots\\ (0.44234,0.22751) \text{ if } t\in (\theta_1,\theta_2]\cup(\theta_3,\theta_4]\cup \dots \end{cases} ,
\]
and if it is in the domain of attraction of the steady state $(0.44234,0.22751),$ then
\[\lim_{\mu \to 0}(E(t,\mu),I(t,\mu))=\begin{cases} (0.44234,0.22751) \text{ if } t\in (0,\theta_1]\cup(\theta_2,\theta_3]\cup \dots\\ (0,0) \text{ if } t\in (\theta_1,\theta_2]\cup(\theta_3,\theta_4]\cup \dots \end{cases}.\]
To demonstrate the results via simulation, we take $(E_0,I_0)=(0.25,0),$ which is in the domain of attraction of $(0.44234,0.22751).$ The conclusions of the theorem are clearly visible in Fig. \ref{fig:wcex}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{wcex}
\vspace{-10pt}
\caption{Coordinates of \eqref{model0},\eqref{impulse1},\eqref{impulse2} with the initial value $(0.25,0),$ where the red, blue and black lines correspond to the values $\mu=0.1,0.2,0.3,$ respectively.}
\label{fig:wcex}
\end{figure}
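The simulation behind Fig. \ref{fig:wcex} can be sketched as follows (illustrative code, not the original script of the figure; the fixed-step fourth-order Runge-Kutta integrator, the step size, and the clipping of the activities to nonnegative values before taking the fractional powers are our own choices):
\begin{verbatim}
import numpy as np

mu = 0.1

def S(x, a, th):
    return 1/(1 + np.exp(-a*(x - th))) - 1/(1 + np.exp(a*th))

def f(z):                            # right-hand side of the model
    E, I = z
    return np.array([-E + (0.97 - E)*S(12*E - 4*I, 1.2, 2.8),
                     -I + (0.98 - I)*S(13*E - 11*I, 1.0, 4.0)])/mu

def jump_theta(E, I):                # impulses at theta_i = 2i/3
    return np.array([-2*E + 0.44234, -2*I + 0.22751])

def jump_eta(E, I):                  # singular impulses at eta_i = (2i-1)/3,
    E, I = max(E, 0.0), max(I, 0.0)  # dividing the impulse equation by mu;
    return np.array([                # the model assumes E, I >= 0
        -E**0.5*(E - 0.44234)**2 - np.sin(mu**2)*I/mu,
        -I**(1/3)*(I - 0.22751)**3 - np.sin(mu**2)*E/mu])

def rk4_to(z, t0, t1, h=1e-3):       # integrate f from t0 to t1
    t = t0
    while t < t1 - 1e-12:
        dt = min(h, t1 - t)
        k1 = f(z); k2 = f(z + dt/2*k1); k3 = f(z + dt/2*k2); k4 = f(z + dt*k3)
        z = z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return z

moments = sorted([(2*i/3, jump_theta) for i in range(1, 21)] +
                 [((2*i - 1)/3, jump_eta) for i in range(1, 21)])
z, t = np.array([0.25, 0.0]), 0.0
for tm, jump in moments:             # integrate, then apply the jump
    z = rk4_to(z, t, tm)
    z = z + jump(*z)
    t = tm
print(z)
\end{verbatim}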
\section{Bifurcation of New Attractor Composed of Medusa}
In discontinuous dynamics, we show that a new type of attractor consisting of medusas, medusas without rings, and rings exists.
For this purpose, we study a pair of coupled Wilson-Cowan models. Here, we have four subpopulations, since each system has an excitatory and an inhibitory subpopulation. The first system admits three stable steady states and is subject to singular impulses. The second one has a limit cycle and no impulse effects. Also, in the latter system, the membrane time constants are equal to 1.
The first Wilson-Cowan model with impulsive singularity is of the following form:
\begin{equation}\label{3states}
\begin{split}
&\mu_e\frac{dE}{dt}=-E+(0.97-E)S_e(13E-4I), \\
&\mu_i\frac{dI}{dt}=-I+(0.98-I)S_i(22E-2I),\\
&\Delta E|_{t=\theta_i}=6.741E^2-3.58612E+0.45064, \\
&\Delta I|_{t=\theta_i}=6.6087I^2-3.85682I+0.49, \\
&\mu_e \Delta E|_{t=\eta_i}=-\mu_e E^{1/2}(E-0.44234)^2-\sin(\mu_e^2)I, \\
&\mu_i \Delta I|_{t=\eta_i}=-\mu_i I^{1/3}(I-0.22751)^3-\sin(\mu_i^2)E,
\end{split}
\end{equation}
where the sigmoid functions are
\[
\begin{split}
&{S_e}(x)=\frac{1}{1+\exp[-1.5(x-2.5)]}-\frac{1}{1+\exp(3.75)},\\
&{S_i}(x)=\frac{1}{1+\exp[-6(x-4.3)]}-\frac{1}{1+\exp(25.8)}.
\end{split}
\]
and the impulse moments are $\theta_i=2i+4.95, \eta_i=2i-1+4.95, i=1,2,\dots,50.$ The differential equations in \eqref{3states} have three stable steady states
\[
\begin{pmatrix}
0 \\
0
\end{pmatrix}
,
\begin{pmatrix}
0.20353 \\
0.18691
\end{pmatrix}
,
\begin{pmatrix}
0.45064 \\
0.49
\end{pmatrix}
\]
and two unstable steady states
\[
\begin{pmatrix}
0.096205 \\
0
\end{pmatrix}
,
\begin{pmatrix}
0.37647 \\
0.49
\end{pmatrix}.
\]
The second model, which has a limit cycle, is of the form
\begin{align}\label{periodic}
\begin{split}
&\frac{de}{dt}=-e+(0.97-e)\tilde{S_e}(16e-12i+1.25), \\
&\frac{di}{dt}=-i+(0.98-i)\tilde{S_i}(15e-3i),\\
\end{split}
\end{align}
where
\[
\begin{split}
&\tilde{S_e}(x)=\frac{1}{1+\exp[-1.3(x-4)]}-\frac{1}{1+\exp(5.2)},\\ &\tilde{S_i}(x)=\frac{1}{1+\exp[-2(x-3.7)]}-\frac{1}{1+\exp(7.4)}.
\end{split}
\]
We couple system \eqref{3states} and \eqref{periodic} as follows
\begin{align}\label{coupled}
\begin{split}
&\mu_e\frac{dE}{dt}=-E+(0.97-E)S_e(13E-4I), \\
&\mu_i\frac{dI}{dt}=-I+(0.98-I)S_i(22E-2I),\\
&\frac{de}{dt}=-e+(0.97-e)\tilde{S_e}(16e-12i+1.25), \\
&\frac{di}{dt}=-i+(0.98-i)\tilde{S_i}(15e-3i),\\
&\Delta E|_{t=\theta_i}=6.741E^2-3.58612E+0.45064, \\
&\Delta I|_{t=\theta_i}=6.6087I^2-3.85682I+0.49, \\
&\mu_e \Delta E|_{t=\eta_i}=-\mu_e E^{1/2}(E-0.44234)^2-\sin(\mu_e^2)I, \\
&\mu_i \Delta I|_{t=\eta_i}=-\mu_i I^{1/3}(I-0.22751)^3-\sin(\mu_i^2)E.
\end{split}
\end{align}
It is already known that the differential equations in \eqref{3states} have three stable steady states. Suppose that the membrane time constants in \eqref{coupled} are equal, $\mu_e=\mu_i=\mu,$ and that the initial condition is $(0.4656,0.1101,0.1101,0.04766).$ In Fig. \ref{fig:diadem050}, one can clearly observe that a medusa exists for the parameter value $\mu=0.05.$ Note that this is a single trajectory whose form resembles a medusa.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{diadam050}
\vspace{-10pt}
\caption{(E,e,i)-coordinates of system \eqref{coupled} for the initial value $(0.4656,0.1101,0.1101,0.04766)$ and the parameter $\mu=0.05.$}
\label{fig:diadem050}
\end{figure}
Fig. \ref{fig:diadem050} is formed as follows. The (E,e,i)-coordinates, which start at the given initial value, approach the cycle. They move around the cycle until the impulse moment $\eta_1.$ When the time reaches $t=\eta_1,$ the impulse function makes the coordinates jump to $(E(\eta_1+,\mu),e(\eta_1+,\mu),i(\eta_1+,\mu)).$ They again approach the cycle and move along it until the impulse moment $t=\theta_1.$ Then the coordinates jump to $(E(\theta_1+,\mu),e(\theta_1+,\mu),i(\theta_1+,\mu))$ and approach the cycle once more. The (E,e,i)-coordinates continue in this pattern, and finally the medusa in Fig. \ref{fig:diadem050} is observed. The pattern is visualized in Fig. \ref{fig:medusa}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{medusa.eps}
\vspace{-10pt}
\caption{Formation of Fig. \ref{fig:diadem050}. }
\label{fig:medusa}
\end{figure}
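To make this formation procedure concrete, the following minimal Python sketch (our own, not the code used to produce the figures; the function names are hypothetical) simulates the impulsive $(E,I)$-subsystem of \eqref{coupled}: it integrates the differential equations between consecutive impulse moments and applies the corresponding jump map at each moment. It assumes nonnegative activations, so that the fractional powers in the singular impulses remain real.
\begin{verbatim}
# A minimal sketch: integrate between consecutive impulse moments and
# apply the jump maps at each moment. Assumes nonnegative activations.
import numpy as np
from scipy.integrate import solve_ivp

def S(x, s, t0, cap):                  # sigmoids S_e, S_i of the text
    return 1/(1 + np.exp(-s*(x - t0))) - 1/(1 + np.exp(cap))

def rhs(t, u, mu):                     # singularly perturbed vector field
    E, I = u
    dE = (-E + (0.97 - E)*S(13*E - 4*I, 1.5, 2.5, 3.75))/mu
    dI = (-I + (0.98 - I)*S(22*E - 2*I, 6.0, 4.3, 25.8))/mu
    return [dE, dI]

def jump_theta(E, I):                  # Delta E, Delta I at t = theta_i
    return (E + 6.741*E**2 - 3.58612*E + 0.45064,
            I + 6.6087*I**2 - 3.85682*I + 0.49)

def jump_eta(E, I, mu):                # singular impulse at t = eta_i
    return (E - E**0.5*(E - 0.44234)**2 - np.sin(mu**2)*I/mu,
            I - I**(1/3)*(I - 0.22751)**3 - np.sin(mu**2)*E/mu)

def simulate(u0, mu, t_end=30.0):
    moments = sorted([(2*i - 1 + 4.95, 'eta') for i in range(1, 51)] +
                     [(2*i + 4.95, 'theta') for i in range(1, 51)])
    t0, u, ts, us = 0.0, list(u0), [], []
    for tm, kind in moments:
        if tm > t_end:
            break
        sol = solve_ivp(rhs, (t0, tm), u, args=(mu,), max_step=0.01)
        ts.extend(sol.t); us.extend(sol.y.T)
        E, I = sol.y[:, -1]
        u = list(jump_eta(E, I, mu) if kind == 'eta' else jump_theta(E, I))
        t0 = tm
    return np.array(ts), np.array(us)
\end{verbatim}
Coupling in the $(e,i)$-equations of \eqref{periodic} extends the same scheme to the full four-dimensional system \eqref{coupled}.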
In the following figures, we will see that, for different values of the parameter $\mu$ and for different initial conditions, we obtain different medusas and rings. First of all, consider the system \eqref{coupled} with the initial values $(-0.01,0,0.17,0.25),$ $(0.21,0.20,0.20,0.15),$ and with the parameter $\mu=1$ to get Fig. \ref{fig:diadem10}. In this figure, there are two medusas without ring and a cycle. Indeed, these pieces are disconnected in the geometrical sense, but they are connected in the dynamical sense.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{diadam10}
\vspace{-10pt}
\caption{(E,e,i)-coordinates of system \eqref{coupled} for the initial values $(-0.01,0,0.17,0.25),$ $(0.21,0.20,0.20,0.15),$ and for the parameter $\mu=1.$ The blue and red trajectories correspond to the two initial values, respectively. It is seen that two medusas without ring and one cycle are formed. The cycle lies between the two medusas without ring.}
\label{fig:diadem10}
\end{figure}
Next, we change the parameter to $\mu=0.2$ and use the different initial activations $(-0.01,0,0.17,0.25),$ $(0.21,0.20,0.20,0.15),$ and $(0.5,0.5,0.3,0.3).$
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{diadam021}
\vspace{-10pt}
\caption{Attractor consisting of one medusa and two different rings. Blue, red and magenta trajectories represent solutions in the coordinates (E,e,i) for the given initial values $(-0.01,0,0.17,0.25),(0.21,0.20,0.20,0.15),(0.5,0.5,0.3,0.3),$ respectively, and $\mu=0.2.$ }
\label{fig:diadam021}
\end{figure}
In Fig. \ref{fig:diadam021}, one medusa and two different rings emerge. Geometrically, the attractor is disconnected. However, it is connected in the dynamical sense, since it is a single attractor with three parts. There does not exist any limit cycle; the cycles that look like limit cycles are just parts of the whole trajectory.
Let us consider Fig. \ref{fig:diadam011}. In this figure, the initial activations are the same as in Fig. \ref{fig:diadam021}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{diadam011}
\vspace{-10pt}
\caption{Attractor consisting of only one medusa. Blue, red and magenta trajectories represent solutions in the coordinates (E,e,i) for the given initial values $(-0.01,0,0.17,0.25),(0.21,0.20,0.20,0.15),(0.5,0.5,0.3,0.3),$ respectively, and $\mu=0.1.$ The isolated red cycle is not an attractor; it is just a part of the trajectory.}
\label{fig:diadam011}
\end{figure}
The parameter is fixed at $\mu=0.1.$ Although the initial values are different, the trajectories eventually attain the shape of the same medusa.
There is an isolated red cycle; it is a part of the whole red trajectory. Therefore, it is neither a limit cycle nor a ring, since the trajectory never returns to its neighborhood.
Finally, fix the parameter $\mu=0.05.$ In Fig. \ref{fig:diadem052}, every trajectory from the different initial values $(-0.01,0,0.17,0.25)$ (blue), $(0.21,0.20,0.20,0.15)$ (red), and $(0.5,0.5,0.3,0.3)$ (magenta) ultimately takes the form of the red or the blue medusa. The blue and the magenta trajectories converge to the same medusa. This is why we say that the attractor consists of two disjoint medusas. They are disjoint, since no single trajectory traces both medusas.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{diadam052}
\vspace{-10pt}
\caption{Trajectories of system \eqref{coupled} in coordinates (E,e,i) for different initial values $(-0.01,0,0.17,0.25),(0.21,0.20,0.20,0.15),
(0.5,0.5,0.3,0.3)$ and for the fixed parameter $\mu=0.05.$ Blue, red and magenta trajectories represent solutions for the given initial values, respectively.
}
\label{fig:diadem052}
\end{figure}
Note that, as the parameter decreases, the trajectory flattens along the E-coordinate.
In conclusion, we see that the neural dynamics have the following properties: in the case $\mu=1,$ two medusas without ring and a cycle are obtained; when $\mu=0.2,$ one medusa and two rings emerge; when $\mu=0.1,$ a single medusa emerges; finally, if $\mu=0.05,$ two medusas emerge. These results demonstrate that, for different values of $\mu,$ qualitative changes occur in the ultimate behavior of the trajectories of \eqref{coupled}. Therefore, we have a bifurcation. It is important to note that
this bifurcation occurs because of the singularity and the impulses. This is why one cannot explain the bifurcations in this paper through the traditional types of bifurcations: saddle-node, pitchfork, Hopf bifurcation, etc. For example, the change in the numbers of medusas and rings in the local phase portrait depends on the sizes of the impulsive jumps. The bifurcation here also depends on the positions of the cycles of the unperturbed system.
\section{Conclusion}
For the first time, a single small parameter $\mu$ causes not only a singularity but also a bifurcation. The singularity in this paper is of a new kind, since it emerges both in the differential equations and in the impulse functions. It is also important that the small parameter $\mu$ is a natural parameter, coming from the membrane time constant in the Wilson-Cowan neuron model.
We have shown the existence of the bifurcation through simulations. Theoretical proofs are not given, since it is difficult to analyze the discontinuous dynamics of a model in which a single parameter causes both singularity and bifurcation. Accordingly, the bifurcation is not caused by a change of eigenvalues, but relates to the singular compartment and the impulsive dynamics of the model.
A new type of attractor, consisting of medusas, medusas without ring, and rings, has been defined. The name comes from the resemblance of the trajectory's shape to a medusa.
\section{Introduction}
Most software systems today are configurable. Despite the undeniable benefits
of configurability,
large configuration spaces challenge developers, maintainers, and users. In the face of hundreds of configuration options, it is difficult to keep track of the effects of individual configuration options and their mutual interactions. So, predicting the performance of individual system configurations or determining the optimal configuration is often more guess work than engineering. In their recent paper, Xu et al.\ documented the difficulties developers face
with understanding the configuration spaces of their systems~\cite{xu2015hey}. As a result, developers tend to ignore over $5/6$ths of the configuration options, which leaves considerable optimization potential untapped and induces major economic cost~\cite{xu2015hey}.
Addressing the challenge of performance prediction and optimization in the face of large configuration spaces, researchers have developed a number of approaches that rely on sampling and machine learning~\cite{siegmund2012predicting,guo2013variability,sarkar2015cost}.
While gaining some ground, state-of-the-art approaches face two problems:
(a)~they require far too many sample configurations for learning or (b)~they are prone to large variances in their predictions. For example, prior work on predicting performance scores using regression trees had to compile and execute hundreds to thousands of specific system configurations~\cite{guo2013variability}.
A more balanced approach by Siegmund et al.\ is able to learn predictors for configurable systems~\cite{siegmund2012predicting} with low mean errors, but with large variances of prediction accuracy (e.g.\ in half of the results, the performance predictions for the Apache Web server were up to 50\,\% wrong).
Guo et al.~\cite{guo2013variability} also proposed an incremental method to build a predictor model, which uses incremental random samples with steps equal to the number of configuration options (features) of the system. This approach also
suffered from unstable predictions (e.g., predictions had a mean error of up to 22\,\%, with a standard deviation of up to 46\,\%). Finally, Sarkar et al.~\cite{sarkar2015cost} proposed a projective-learning approach (using fewer measurements than Guo et al.\ and Siegmund et al.) to quickly compute the number of sample configurations for learning a stable predictor. However, as we will discuss, after making that prediction, the total number of samples required for learning the predictor is comparatively high (up to hundreds of samples).
The problems of large sample sets and large variances in prediction can be avoided using the {\bf WHAT}\xspace spectral learner, which is our main contribution.
{{\bf WHAT}\xspace}'s innovation is the use of the spectrum (eigenvalues) of the distance matrix
between the configurations of a configurable system, to perform dimensionality reduction. Within that
reduced configuration space, many closely associated configurations can be studied
by measuring only a few samples.
In a number of experiments, we compared {\bf WHAT}\xspace against the state-of-the-art approaches of Siegmund et al.~\cite{siegmund2012predicting}, Guo et al.~\cite{guo2013variability}, and Sarkar et al.~\cite{sarkar2015cost} by means of six real-world configurable systems: Berkeley DB (in its C and Java editions), the Apache Web server, SQLite, the LLVM compiler, and the x264 video encoder.
We found that {\bf WHAT}\xspace performs as well or better than prior approaches,
while requiring far fewer samples (just a few dozen).
This is significant and most surprising, since some of the systems explored here have up to millions of possible configurations.
Overall, we make the following contributions:
\begin{itemize}
\item We present a novel sampling and learning approach for predicting the performance of software configurations in the face of large configuration spaces. The approach is based on a
{\em spectral
learner} that uses an approximation to the first principal component of the configuration space to recursively cluster it, relying only on a few points as representatives of each cluster.
\item We demonstrate the practicality and generality of our approach by conducting experiments on six real-world configurable software systems (see Figure ~\ref{fig:systems}). The results show that our approach is more accurate (lower mean error) and more stable (lower standard deviation) than state-of-the-art approaches. A key finding is the utility of the principal component of a configuration space to find informative samples from a large configuration space.
All materials required for reproducing this work are available at \url{https://goo.gl/689Dve}.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/motivation}
\caption{The variation of performance scores of BDBC. }
\label{fig:motivation}
\end{figure}
\section{Background \& Related Work}
\label{sect:addit}
We use the configurable system, BDBC, as an example to motivate our approach. BDBC is an embedded database system written in C. In this example, we consider 18 features or configuration options of BDBC, which the user can configure. We use the response time to indicate the performance of BDBC in different configurations. These 18 configuration options lead to 2,560 configurations.
In Figure~\ref{fig:motivation}, we show the performance distribution of all the configurations of BDBC. It is worth noting the difference between the best performing configuration (lower left corner) and the worst performing (top right corner). The figure shows that having a good configuration reduces the response time by a factor of 40 when compared to the worst possible configuration.
An important point is that, in practice, configurations are often selected in an uninformed manner and may not be the best or near-best configurations. Moreover, with more configuration options added with every release~\cite{xu2015hey}, it is important to have an automated approach to find the best or near-best configuration. Another aspect of this problem is the cost of evaluating a particular configuration, which may be very high. Hence, an ideal method should find the best or near-best performing configuration with the fewest number of evaluations. Our approach {\bf WHAT}\xspace{} is effective in building accurate as well as stable performance models while using fewer sample configurations than the state of the art.
A configurable software system has a set $X$ of Boolean configuration options,\footnote{In this paper, we concentrate on Boolean options, as they make up the majority of all options; see Siegmund et al., for how to incorporate numeric options~\cite{SGA+15}.} also referred to as features or independent variables in our setting.
We denote the number of features of system $S$ as $n$. The configuration space of $S$ can be represented by a Boolean space $\mathbb{Z}_{2}^{n}$, which is denoted by $F$. All valid configurations of $S$ belong to a set $V$,
which is represented by vectors $\vec{C_i}$ (with $1\leq i\leq \left\vert{V}\right\vert$) in $\mathbb{Z}_{2}^{n}$. Each element of a configuration represents a feature, which can either be \emph{True} or \emph{False}, based on whether the feature is selected or not.
Each valid instance of a vector (i.e., a configuration) has a corresponding performance score associated to it.
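To ground this notation, the code sketches added throughout this paper (all our own, in Python) assume the following simple encoding; the names and values are purely illustrative:
\begin{verbatim}
# Illustrative encoding (ours, not from any subject system): a valid
# configuration C_i is a tuple of n Booleans, one entry per feature,
# and each measured configuration maps to its performance score.
measured = {
    (1, 0, 1, 1, 0, 0, 0, 1, 0): 870.0,   # two hypothetical scores for
    (1, 1, 1, 1, 0, 0, 0, 1, 0): 912.5,   # 9-feature configurations
}
\end{verbatim}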
The literature offers two approaches to performance prediction of software configurations: a {\em maximal sampling} and a {\em minimal sampling} approach:
With {\em maximal sampling}, we compile all possible configurations and record the associated performance scores.
Maximal sampling can be impractically slow. For example, the performance data used in our experiments required 26 days of CPU time for measuring (and much longer, if we also count the time required for compiling the code prior to execution).
Other researchers have commented that, in real-world scenarios, acquiring the optimal configuration is overly expensive and time-consuming~\cite{weiss2008maximizing}.
If collecting performance scores of all configurations is impractical, {\em minimal sampling}
can be used to intelligently select and execute just enough configurations (i.e., samples) to build a
predictive model.
For example, Zhang et al.~\cite{zhang2015performance} approximate the
configuration space as a Fourier series, after which they can derive an expression showing how many configurations must be studied
to build predictive models with a given error. While a theoretically satisfying result, that approach still needs thousands to hundreds of thousands of executions of sample
configurations.
Another set of approaches are the four ``additive'' {\em minimal sampling} methods of Siegmund et al.~\cite{siegmund2012predicting}.
Their first method, called feature-wise sampling ({\em FW}), is their basic method.
To explain {\em FW}, we note that, from a configurable software system, it is theoretically possible to enumerate many or all of the valid configurations\footnote{Though, in practice, this can be very difficult. For example, in models like the Linux Kernel such an enumeration is practically impossible ~\cite{sayyad13b}.}.
Since each configuration ($\vec{C_i}$) is a vector of $n$ Booleans, it is possible to use this information to isolate examples of how much each feature individually contributes to the total run time:
\begin{enumerate}
\item Find a pair of configurations $\vec{C_1}$ and $\vec{C}_2$, where $\vec{C}_2$ uses exactly the same features as $\vec{C_1}$, plus one extra feature $f_i$.
\item Set the run time $\Pi(f_i)$ for feature $f_i$ to be the difference in the performance scores between $\vec{C_2}$ and $\vec{C_1}$.
\item The run time for a new configuration $\vec{C}_i$ (with $1\leq i\leq \left\vert{V}\right\vert$) that has not been sampled before is then the sum of the run time of its features, as determined before:
\begin{equation}
\Pi(C_i) = \sum_{f_j \in C_i}\Pi(f_j)
\end{equation}
\end{enumerate}
When many pairs, such as ${\vec{C_1},\vec{C}_2}$, satisfy the criteria of point~1, Siegmund et al.\ used the
pair that covers the {\em smallest} number of features. Their minimal sampling method, {\em FW},
compiles and executes only these smallest ${\vec{C_1}}$ and ${\vec{C_2}}$ configurations.
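As an illustration, the following Python sketch (a simplified reading of {\em FW}, our own, and not Siegmund et al.'s implementation) derives the per-feature contributions from such minimal pairs in the measured table introduced earlier and sums them to predict an unsampled configuration:
\begin{verbatim}
# Sketch of FW on the Boolean-tuple encoding above: estimate each
# feature's contribution from minimal pairs differing in exactly one
# selected feature, then predict new configurations by summation.
def feature_deltas(measured):
    deltas = {}   # feature index -> (delta, size of smaller config)
    for c2, p2 in measured.items():
        for c1, p1 in measured.items():
            diff = [j for j in range(len(c1)) if c1[j] != c2[j]]
            if len(diff) == 1 and c2[diff[0]] == 1:   # c2 = c1 + f
                f, size = diff[0], sum(c1)
                if f not in deltas or size < deltas[f][1]:
                    deltas[f] = (p2 - p1, size)       # smallest pair
    return {f: d for f, (d, _) in deltas.items()}

def predict_fw(config, deltas):
    return sum(deltas.get(f, 0.0)
               for f in range(len(config)) if config[f])
\end{verbatim}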
Siegmund et al.\ also offer three extensions to the basic method, which are based on sampling
not just the smallest pairs, but also additional configurations covering certain kinds of {\em interactions} between features.
All the following minimal sampling policies compile and execute valid configurations selected via one of three heuristics:
\begin{description}
\item[{\em PW (pair-wise):}] For each pair of features, try to find a configuration that contains the pair and has a minimal number of features selected.
\item[{\em HO (higher-order):}] Select extra configurations, in which three features, $f_1,f_2,f_3$, are selected if two of the following pair-wise interactions exist: $(f_1,f_2)$ and $(f_2,f_3)$ and $(f_1,f_3)$.
\item[{\em HS (hot-spot):}] Select extra configurations that contain features that are
frequently interacting with other features.
\end{description}
Guo et al.~\cite{guo2013variability} proposed a progressive random sampling approach, which samples the configuration space in steps of the number of features of the software system in question. They used the sampled configurations to train a regression tree, which is then used to predict the performance scores of other system configurations. The termination criterion of this approach is based on a heuristic, similar to the {\em PW} heuristics of Siegmund et al.
Sarkar et al.~\cite{sarkar2015cost} proposed a cost model for predicting the effort (or cost) required to generate an accurate predictive model. The user can use this model to decide whether to go ahead and build the predictive model. This method randomly samples configurations and uses a heuristic based on feature frequencies as termination criterion. The samples are then used to train a regression tree; the accuracy of the model is measured by using a test set (where the size of the training set is equal to size of the test set). One of four projective functions (e.g., exponential) is selected based on how correlated they are to accuracy measures. The projective function is used to approximate the accuracy-measure curve, and the elbow point of the curve is then used as the optimal sample size. Once the optimal size is known, Sarkar et al.\ uses the approach of Guo et al.\ to build the actual prediction model.
The advantage of these previous approaches is that, unlike the results of Zhang et al., they require only dozens to hundreds of samples. Also, like our approach, they do not require to enumerate all configurations, which is important for highly configurable software systems.
That said, as shown by our experiments (see Section~\ref{sec:experiments}), these approaches produce estimates with larger mean errors and partially larger variances than our approach. While sometimes the approach by Sarkar et al. results in models with (slightly)
lower mean error rates, it still requires a considerably larger number of samples (up to hundreds), while {\bf WHAT}\xspace requires only a few dozen.
\section{Approach}
\subsection{Spectral Learning}\label{sect:spect}
The minimal sampling method we propose here is based on a spectral-learning algorithm
that explores the spectrum (eigenvalues) of the distance matrix between configurations in the configuration space.
In theory, such spectral learners are an appropriate method to handle noisy, redundant, and tightly inter-connected variables, for the following reasons:
When data sets have many irrelevancies or closely associated data parameters $d$, then
only a few eigenvectors $e$, $e \ll d$ are required to characterize the data.
In this reduced space:
\begin{itemize}
\item
Multiple inter-connected variables $i,j,k \subseteq d$ can be represented
by a single eigenvector;
\item
Noisy variables from $d$ are
ignored, because they do not contribute to the signal in the data;
\item
Variables become (approximately) parallel lines
in $e$ space. For redundancies \mbox{$i,j \in d$}, we
can ignore $j$
since effects that change over $j$ also
change in the same way over $i$;
\end{itemize}
That is, in theory, samples of configurations drawn via an eigenspace sampling method
would not get confused by noisy, redundant, or tightly inter-connected variables. Accordingly,
we expect predictions built from that sample to have lower mean errors and lower variances on that error.
Spectral methods have been used before for a variety of data mining applications~\cite{kamvar2003spectral}.
Algorithms, such as PDDP~\cite{boley98}, use spectral methods, such as principal component analysis (PCA), to
recursively divide data into smaller regions. Software-analytics researchers use spectral methods (again, PCA) as a pre-processor prior to data mining to reduce noise in software-related data sets~\cite{theisen2015approximating}.
However, to the best of our knowledge, spectral methods have not been used before as a basis of a minimal sampling method.
{\bf WHAT}\xspace is somewhat different from other spectral
learners explored in, for instance, image processing applications~\cite{shi2000normalized}.
Work on image processing does not aim at
defining a minimal sampling policy to predict performance scores.
Also, a standard spectral method requires an $O(N^2)$ matrix multiplication to compute the components
of PCA~\cite{ilin10}. Worse, in the case of hierarchical division methods, such as PDDP,
the polynomial-time inference must be repeated at every level of the hierarchy.
Competitive results can be achieved
using an $O(2N)$ analysis that we have developed previously~\cite{me12d}, which is based on a heuristic proposed by Faloutsos and Lin~\cite{Faloutsos1995} (which Platt has shown computes a Nystr\"om approximation to the first component of PCA~\cite{platt05}).
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[height=35mm, width=35mm]{Figures/1} & \includegraphics[height=35mm, width=35mm]{Figures/2} & \includegraphics[height=35mm, width=35mm]{Figures/3} \\ \begin{tabular}[c]{@{}l@{}}(a) Feature space of a system \\with two configurations\end{tabular} & \begin{tabular}[c]{@{}l@{}}(b) Choosing a random point \\from the feature space\end{tabular} & \begin{tabular}[c]{@{}l@{}}(c) Find {\em West} farthest from the\\selected random point\end{tabular}\\[6pt]
\includegraphics[height=35mm, width=35mm]{Figures/4} &
\includegraphics[height=35mm, width=35mm]{Figures/5} &
\includegraphics[height=35mm, width=35mm]{Figures/6}
\\
\begin{tabular}[c]{@{}l@{}}(d) Find {\em East}, a point\\farthest away from {\em West}.\end{tabular} & \begin{tabular}[c]{@{}l@{}}(e) Line joining {\em East} and {\em West}\\is the first principal component\end{tabular} & \begin{tabular}[c]{@{}l@{}}(f) Projection of a point ({\em x})\\is calculated\end{tabular} \\[6pt]
\end{tabular}
\caption{Spectral learning in {\bf WHAT}\xspace{}, illustrated on the example of a system with two configuration options.}
\label{fig:spectral_desc}
\end{figure}
Figure~\ref{fig:spectral_desc} describes the procedure used to calculate the projection of a configuration. {\bf WHAT}\xspace receives $N$ (with $1\leq \left\vert{N}\right\vert\leq \left\vert{V}\right\vert$)
valid configurations ($\vec{C}$), $N_1,N_2,...$, as input (as shown in Figure~\ref{fig:spectral_desc}(a)) and then:
\begin{enumerate}
\item
Picks any
point $N_i$ ($1\leq i \leq\left\vert{N}\right\vert$) at random (as shown in Figure~\ref{fig:spectral_desc}(b));
\item
Finds
the point {\em West}~$\in N$ that is
furthest away from $N_i$ (as shown in Figure~\ref{fig:spectral_desc}(c));
\item Finds the point {\em East}~$\in N$
that is furthest from {\em West} (as shown in Figure~\ref{fig:spectral_desc}(d)).
\end{enumerate}
The line joining {\em East}
and {\em West} is our approximation for the first principal component (as shown in Figure~\ref{fig:spectral_desc}(e)).
Using the distance calculation shown in Equation~\ref{eq:dist},
we define $\mathit{c}$ to be the distance between {\em East}~(x)
and {\em West}~(y).
{\bf WHAT}\xspace uses this distance ($\mathit{c}$) to divide all the configurations as follows:
The value $x_i$ is the projection of $N_i$
on the line running from {\em East} to {\em West} (as shown in Figure~\ref{fig:spectral_desc}(f))\footnote{The projection of $N_i$ can be calculated in the following way:\newline $a = \mathit{dist}(\mathit{East}, N_i); b = \mathit{dist}(\mathit{West}, N_i); x_i = \frac{a^2 + \mathit{c}^2 - b^2}{2\mathit{c}}$ (the cosine-rule projection used by Faloutsos and Lin~\cite{Faloutsos1995}).
}. We divide
the examples based on the median value of the projections $x_i$. Now, we have two clusters of data, divided based on the projection values (of $N_i$) on the line joining {\em East} and {\em West}. This process is applied recursively on these clusters until a predefined stopping condition is met. In our study, the recursive splitting of the $N_i$'s stops when a sub-region
contains fewer than $\sqrt{|N|}$ examples.
\begin{equation}
\mathit{dist}(x, y) =
\begin{cases}
\sqrt{\sum_i(x_i-y_i)^2}
& \text{if $x_i$ and $y_i$ is numeric}\\
\begin{cases}
0, & \text{ if $x_i = y_i$}\\
1, & \text{ otherwise}\\
\end{cases}
& \text{if $x_i$ and $y_i$ is Boolean}\\
\end{cases}
\label{eq:dist}
\end{equation}
We explore this approach for three reasons:
\begin{itemize}
\item
{\em It is very fast}:
This process requires only $2|N|$ distance comparisons
per level of recursion, which is far less than the $O(N^2)$
required by PCA~\cite{Du2008}
or other algorithms such as K-Means~\cite{hamerly2010making}.
\item
{\em It is not domain-specific}:
Unlike traditional PCA, our approach is general in that it does not assume that all the variables are numeric. As shown in Equation~\ref{eq:dist},\footnote{In our study, $\mathit{dist}$ accepts a pair of configurations ($\vec{C}$) and returns the distance between them. If $x_i, y_i \in \mathbb{R}^n$, then the distance function is the same as the standard Euclidean distance.} we can approximate distances for both numeric and non-numeric data (e.g., Boolean).
\item
{\em It reduces the dimensionality problem}:
This technique explores the underlying dimension (first principal component) without getting confused by noisy, related, and highly associated variables.
\end{itemize}
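The following Python sketch (our own simplification; the helper names are hypothetical) summarizes this recursive division. For Boolean vectors, the Euclidean distance below coincides with Equation~\ref{eq:dist}, since each squared coordinate difference is 0 or 1.
\begin{verbatim}
# Sketch of WHAT's recursive spectral division: pick pivots West/East
# with two distance scans, project every point onto the West-East
# line, split at the median projection, and recurse until a cluster
# holds fewer than sqrt(|N|) points.
import math, random
import numpy as np

def dist(x, y):
    return math.sqrt(((np.asarray(x) - np.asarray(y))**2).sum())

def split(points):
    rand = random.choice(points)
    west = max(points, key=lambda p: dist(rand, p))  # farthest from rand
    east = max(points, key=lambda p: dist(west, p))  # farthest from West
    c = dist(east, west) or 1e-12
    proj = lambda p: (dist(east, p)**2 + c*c - dist(west, p)**2)/(2*c)
    points = sorted(points, key=proj)
    mid = len(points)//2
    return points[:mid], points[mid:]

def leaf_clusters(points, min_size):
    if len(points) < min_size:
        return [points]
    left, right = split(points)
    if not left or not right:                        # degenerate split
        return [points]
    return leaf_clusters(left, min_size) + leaf_clusters(right, min_size)
\end{verbatim}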
\subsection{Spectral Sampling}\label{sect:sample}
When the above clustering method terminates, our sampling policy (which we call $S_1$) is then applied:
\begin{description}
\item[{\em Random sampling ($S_1$):}] compile and execute one configuration, picked at random, from each leaf cluster;
\end{description}
We use this sampling policy, because (as we will show later) it performs better than:
\begin{description}
\item[{\em East-West sampling ($S_2$):}] compile and execute the {\em East} and {\em West} poles of the leaf clusters;
\item[{\em Exemplar sampling ($S_3$):}] compile and execute all items in all leaves and return the one
with the lowest performance score.
\end{description}
Note that $S_3$ is {\em not} a {\em minimal} sampling policy (since it executes all configurations).
We use it here as one baseline
against which we can compare the other, more minimal, sampling policies. In the results
that follow, we also compare our
sampling methods against another baseline using information gathered after executing
all configurations.
\subsection{Regression-Tree Learning} \label{rtlearning}
After collecting the data using one of the sampling policies ($S_1$, $S_2$, or $S_3$) described in Section~\ref{sect:sample}, we use a CART regression-tree learner~\cite{breiman1984} to build a performance predictor. Regression-tree learners seek the attribute-range split that most increases
our ability to make accurate predictions.
CART explores splits that divide the $N$ samples into two sets $A$ and $B$, whose standard deviations on the target variable are $\sigma_1$ and $\sigma_2$, respectively.
CART finds the ``best'' split, defined as the split that minimizes $\frac{|A|}{N}\sigma_1 + \frac{|B|}{N}\sigma_2$.
Using this best split, CART divides the data recursively.
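As a toy illustration of this criterion (our own, not CART's actual implementation), the following function scans the candidate thresholds of one numeric attribute and returns the split that minimizes the weighted standard deviation:
\begin{verbatim}
# Over the thresholds of one attribute x, minimize
# (|A|/N)*sigma_1 + (|B|/N)*sigma_2 on the target scores y.
import numpy as np

def best_split(x, y):
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    n, best = len(y), (float('inf'), None)
    for i in range(1, n):
        if x[i] == x[i - 1]:              # no threshold between ties
            continue
        score = (i/n)*y[:i].std() + ((n - i)/n)*y[i:].std()
        if score < best[0]:
            best = (score, (x[i - 1] + x[i])/2)
    return best                           # (weighted sigma, threshold)
\end{verbatim}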
In summary, {\bf WHAT}\xspace combines:
\begin{compactitem}
\item
The FASTMAP method of Faloutsos and Lin~\cite{Faloutsos1995}, which, rather than $N^2$ comparisons, performs only $2N$, where $N$ is the number of configurations in the configuration space;
\item A spectral-learning algorithm initially inspired by Boley's PDDP system~\cite{boley98}, which we modify
by replacing PCA with FASTMAP (called
``WHERE'' in prior work ~\cite{me12d});
\item
The sampling policy that explores the leaf clusters found by this recursive division;
\item
The CART regression-tree learner that converts the data from the samples collected by sampling policy
into a run-time prediction model~\cite{breiman1984}.
\end{compactitem}
That is,
\begin{center}
\begin{tabular}{rcl}
WHERE& = &PDDP $-$ PCA $+$ FASTMAP\\[1.5ex]
{\bf WHAT}\xspace& = & WHERE $+$ \{ $S_1, S_2, S_3$ \} $+$ CART
\end{tabular}
\end{center}
This unique combination of methods has not been previously explored in the
software-engineering literature.
\subsection{Approach as a pipeline}
The components of {\bf WHAT}\xspace{} can be used to sample configurations of a configurable software system, and these samples can then be used to generate an accurate and stable performance model. We test {\bf WHAT}\xspace{} in the following way:
\begin{itemize}
\item All possible configurations of a system are enumerated;
\item The configurations are split into training and testing datasets based on a predefined ratio (as discussed in Section~\ref{sec:exp_rig}) -- at this point, none of the configurations is measured;
\item {\bf WHAT}\xspace{} (Section~\ref{sect:spect}) is used to sample configurations (Section~\ref{sect:sample}) from the training dataset, and the corresponding performance is measured for each sampled configuration;
\item The sampled configurations and their performance scores are used to build a performance model using the regression-tree learner (Section~\ref{rtlearning}), as sketched after this list;
\item The accuracy (in terms of MRE) of the built performance model is measured using the configurations of the testing set.
\end{itemize}
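The following minimal sketch (our own) ties these steps together; it reuses the leaf_clusters helper from Section~\ref{sect:spect} and stands in for CART with scikit-learn's DecisionTreeRegressor, reporting the error measure of Equation~\ref{eq:err}:
\begin{verbatim}
# End-to-end sketch of the pipeline above; assumes leaf_clusters from
# the earlier sketch and positive performance scores.
import math, random
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sample_S1(configs):                   # policy S1: one per leaf
    leaves = leaf_clusters(list(configs), math.sqrt(len(configs)))
    return [random.choice(leaf) for leaf in leaves]

def mre(predicted, actual):               # mean relative error, percent
    return float(np.mean(np.abs(predicted - actual)/actual)*100)

def run_pipeline(train, test):            # train/test: {config: score}
    picked = sample_S1(list(train))       # few configs to "measure"
    model = DecisionTreeRegressor().fit(  # CART on the sampled scores
        np.array(picked), np.array([train[p] for p in picked]))
    return mre(model.predict(np.array(list(test))),
               np.array(list(test.values())))
\end{verbatim}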
In the next section, we formulate our research questions and discuss the experimental setup.
\section{Experiments}
\label{sec:experiments}
\subsection{Research Questions}
We formulate our research questions in terms of the challenges of
exploring large complex configuration spaces.
As our approach explores the spectral space, our hypothesis is that only a small
number of samples is required to explore the whole space.
However, a prediction model built from a very small sample of the configuration space might
be very inaccurate and unstable, that is, it may exhibit very large mean prediction errors and variances on the prediction error.
Also, if we learn models from small regions of the training data,
it is possible that a learner will miss {\em trends} in the data
between the sample points. Such trends are useful when building {\em optimizers}
(i.e., systems that receive one configuration as input and propose an alternate
configuration that has, for instance, a better performance). Such optimizers might
need to evaluate hundreds to millions of alternate configurations.
To speed up that process, optimizers can use a {\em surrogate model}\,\footnote{Also known as response surface methods, meta models, or emulators.}
that mimics the outputs of a system of interest, while being computationally cheap(er) to evaluate~\cite{loshchilov13}. For example, when optimizing
performance scores, we might ask a CART for a performance
prediction (rather than compile and execute
the corresponding configuration). Note that such surrogate-based
reasoning critically depends on how well the surrogate can guide optimization.
Therefore, to assess feasibility of our sampling policies, we must consider:
\begin{itemize}
\item Performance scores generated from our minimal sampling policy;
\item The variance of the error rates when comparing predicted performance scores with actual ones;
\item The optimization support offered by the performance predictor (i.e., can the model work in tandem with other off-the-shelf optimizers to generate useful solutions).
\end{itemize}
The above considerations lead to four research questions:
\begin{description}
\item[{\em RQ1:}] {\em Can {\bf WHAT}\xspace generate good predictions after
examining only a small number of configurations?}
\end{description}
Here, by ``good'' we mean that the predictions made by models that were trained using sampling with {\bf WHAT}\xspace are as accurate as, or more accurate than,
predictions generated from models supplied with more samples.
\begin{description}
\item[{\em RQ2:}] {\em
Does using less data to build the prediction models cause larger variances in the predicted performance scores?}
\item[{\em RQ3:}] {\em
Can ``good'' surrogate models (to be used in optimizers)
be built from minimal samples?}
\end{description}
Note that RQ2 and RQ3 are of particular concern with our approach,
since our goal is to sample as little as possible from the configuration space.
\begin{description}
\item[{\em RQ4:}] {\em How good is {\bf WHAT}\xspace compared to the state of the art of
learning performance predictors from configurable software systems?}
\end{description}
\begin{table}
\centering
\caption{Subject systems used in the experiments.}\label{fig:systems}\footnotesize
\rotatebox{90}{
\begin{tabular}{p{1.5cm}p{2.25cm}p{0.75cm}p{0.5cm}p{6.5cm}p{1cm}p{3cm}p{1cm}}
\toprule
& \textbf{Description} & \textbf{LOC} & \textbf{\#\,Feat.} & \textbf{Configurations} & \textbf{\#\,Config.}& \textbf{Benchmark Used} & \textbf{Performance Metric} \\ \cmidrule{1-8}
\textbf{Apache} & Apache is a prominent open-source Web server with numerous configuration options. & 230,277 & 9 & Base, HostnameLookups, KeepAlive, EnableSendfile, FollowSymLinks, AccessLog, ExtendedStatus, InMemory, Handle & 192 & We used the tools autobench and httperf to generate load on the Web server. We increased the load until the server could not handle any further requests & Maximum load \\
\\ \textbf{Berkeley~DB\newline C Edition\newline (BDBC)} & BDBC is an embedded database system written in C. & 219,811 & 18 & HAVE\_CRYPTO, HAVE\_HASH, HAVE\_REPLICATION, HAVE\_VERIFY, HAVE\_SEQUENCE, HAVE\_STATISTICS, DIAGNOSTIC, PAGESIZE, PS1K, PS4K, PS8K, PS16K, PS32K, CACHESIZE, CS32MB, CS16MB, CS64MB, CS512MB & 2,560 & Benchmark provided by the vendor & Response time \\
\\ \textbf{Berkeley~DB\newline Java Edition\newline (BDBJ)} &BDBJ is a complete re-development of BDBC in Java with full SQL support. & 42,596 & 32 &Base, Persistence, IO, OldIO, NewIO, NIOBase, NIOType, ChunkedNIO, SingleWriteNIO, DirectNIO, LogSize, S100MiB, S1MiB, Checksum, BTreeFeatures, INCompressor, IEvictor, Evictor, Critical\_Eviction, Verifier, ITracing, Tracing, TracingLevel, Severe, Finest, Statistics & 400 & Benchmark provided by the vendor & Response time \\
\\ \textbf{LLVM} & LLVM is a compiler infrastructure written in C++. & 47,549 & 11 & time\_passes, gvn, instcombine, inline, jump\_threading, simplifycfg, sccp, print\_used\_types, ipsccp, iv\_users, licm & 1,024 & As benchmark, we measured the time to compile LLVM’s test suite & Time to\newline compile LLVM’s test suite \\
\\ \textbf{SQLite} & SQLite is an embedded database system deployed over several millions of devices. & 312,625 & 39 & OperatingSystemCharacteristics,SQLITE\_SECURE\_DELETE, ChooseSQLITE\_TEMP\_STORE,SQLITE\_TEMP\_STOREzero, SQLITE\_TEMP\_STOREone, SQLITE\_TEMP\_STOREtwo, SQLITE\_TEMP\_STOREthree, AutoVacuumOff, AutoVacuumOn, SetCacheSize, StandardCacheSize, LowerCacheSize, HigherCacheSize, LockingMode, ExclusiveLock, NormalLockingMode, PageSize, StandardPageSize, LowerPageSize, HigherPageSize, HighestPageSize & 3,932,160 & Benchmark provided by the vendor & Response time \\
\\ \textbf{x264} & x264 is a video encoder in C that provides configuration options to adjust the output quality of encoded video files. & 45,743 & 16 & no\_asm, no\_8x8dct, no\_cabac, no\_deblock, no\_fast\_pskip, no\_mbtree, no\_mixed\_refs, no\_weightb, rc\_lookahead, rc\_lookahead\_20, rc\_lookahead\_40, rc\_lookahead\_60, ref, ref\_1, ref\_5, ref\_9 & 1,152 & As benchmark, we encoded the Sintel trailer (735 MB) from AVI to the H.264 codec & Encoding time \\ \bottomrule
\end{tabular}
}
\end{table}
\noindent To answer RQ4, we will compare {\bf WHAT}\xspace
against approaches presented by Siegmund et al.~\cite{siegmund2012predicting}, Guo et al.~\cite{guo2013variability}, and Sarkar et al.~\cite{sarkar2015cost}.
\subsection{Subject Systems}
\label{sec:subject_systems}
The configurable systems we used in our experiments are described in Table~\ref{fig:systems}.
All systems are real-world systems and representative of different domains with different configuration mechanisms and implemented using different programming languages.
Note, with ``predicting performance'', we
mean predicting performance scores of the subject systems while executing test suites provided by the developers or the community, as described in Table~\ref{fig:systems}.
To compare the predictions of our and prior approaches with actual performance measures, we use data sets that have been obtained by
measuring {\em nearly all} configurations\footnote{http://openscience.us/repo/performance-predict/cpm.html}.
We say {\em nearly all} configurations for the following reason: for
all except one of our subject systems, the total number of valid configurations
was tractable (192 to 2,560). However, SQLite has 3,932,160
valid configurations (its 39 configuration options span $2^{39}$ possible combinations), which is an impractically large number of configurations to test whether our predictions are accurate and stable. Hence, for SQLite, we use 4,500 samples for testing prediction accuracy and stability, which we could collect in one day of CPU time. Taking this into account, we will pay particular attention to the variance of the SQLite results.
\subsection{Experimental Rig}\label{sec:exp_rig}
RQ1 and RQ2 require the construction and assessment of numerous runtime predictors from small samples
of the data. The following rig implements that construction process.
For each configurable software system, we built a table of data, one row per valid configuration. We then ran all configurations of all software systems
and recorded the performance scores (i.e., the scores reported by the respective benchmark).
The exception is SQLite for which we measured only the
configurations needed to detect interactions and additionally
100 random configurations.
To this table, we added a column showing the performance score obtained from the actual measurements for each configuration.
Note that the following procedure ensures that
we \textit{never} test any prediction model on the data that we used to learn this model. Next, we repeated the following procedure 20 times (the figure of 20 repetitions was
selected using the Central Limit Theorem):
For each subject system in \{BDBC, BDBJ, Apache, SQLite, LLVM, x264\}
\begin{itemize}
\item Randomize the order of the rows in their table of data;
\item For $X$ in \{10, 20, 30, ... , 90\};
\begin{itemize}
\item Let {\em Train} be the first $X$\,\% of the data
\item Let {\em Test} be the rest of the data;
\item Pass {\em Train} to {\bf WHAT}\xspace to select sample configurations;
\item Determine the performance scores associated with these configurations. This corresponds to a table lookup, but would entail compiling and executing a system configuration in a practical setting.
\item Using the {\em Train} data and their performance scores, build a performance predictor using CART.
\item Using the {\em Test} data, assess the accuracy of the predictor using the error
measure of \eq{err} (see below).
\end{itemize}
\end{itemize}
The validity of the predictors built by CART is verified on testing data.
For each test item, we determine how long it {\em actually} takes to run the corresponding system configuration and compare the actual measured performance to the {\em prediction} from CART. The resulting prediction error is then computed using:
\begin{equation}\label{eq:err}
\mathit{error}=\frac{\mid\mathit{predicted} - \mathit{actual}\mid}{\mathit{actual}} \cdot 100
\end{equation}
(Aside: It is reasonable to ask why we use this metric and not some of the others proposed
in the literature (e.g., sum of absolute residuals). In short, our results are stable
across a range of different metrics. For example, the results of this paper have
been repeated using the sum of absolute residuals and, in those other results,
we have seen the same ranking of methods; see \url{http://tiny.cc/sumAR}.)
RQ2 requires testing the standard deviation of the prediction error rate. To support that test, we:
\begin{itemize}
\item Determine the $X$-th point in the above experiments, where all predictions stop improving (elbow point);
\item Measure the standard deviation of the error at this point, across our 20 repeats.
\end{itemize}
As shown in Figure~\ref{fig:sampling_accuracy}, all our results plateaued after studying $X=40$\,\% of the valid configurations\footnote{Just to clarify one frequently asked question about this work, we note
that our rig ``studies'' 40\,\% of the data. We do not mean that our predictive models
require accessing the performance scores from the 40\,\% of the data. Rather, by ``study'' we mean reflect
on a sample of configurations to determine what minimal subset of that
sample deserves to be compiled and executed.}.
Hence to answer { RQ2}, we will compare all 20 predictions at $X=40$\,\%.
{ RQ3} uses the learned regression tree as a {\em surrogate model} within an optimizer;
\begin{itemize}
\item Take $X=40\,\%$ of the configurations;
\item Apply {\bf WHAT}\xspace to build a CART model using some minimal sample taken from that 40\,\%;
\item Use that CART model within some standard optimizer while searching for
configurations with least runtime;
\item Compare the faster configurations found in this manner with the fastest configuration
known for that system.
\end{itemize}
This last item requires access to a ground truth of performance scores for a
large number of configurations. For this experiment, we have access to that ground truth
(since we have access to all system configurations, except for SQLite). Note that such a ground truth
would not be needed when practitioners choose to use {\bf WHAT}\xspace in their own work (it is only for our empirical investigation).
For the sake of completeness, we explored
a range of optimizers seen in the literature: DE~\cite{storn1997differential}, NSGA-II~\cite{deb00afast},
and our own GALE~\cite{krall2014gale,zuluaga2013active} system. Normally,
it would be reasonable to ask
why we used those three, and not the hundreds of other
optimizers described in the literature~\cite{fletcher13,harman12}. However,
as shown below, all these optimizers in this
domain exhibited very similar
behavior (all found configurations close to the
best case performance). Hence, the specific
choice of optimizer is not a critical
variable in our analysis.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/SamplingAccuracy}
\caption{Errors of the predictions made by {\bf WHAT}\xspace with four different
sampling policies. Note that, on the y-axis, {\em lower} errors are {\em better}.
}
\label{fig:sampling_accuracy}
\end{figure}
\section{Results}
\subsection{RQ1}
\begin{center}
{\em Can {\bf WHAT}\xspace generate good predictions after
examining only a small number of configurations?}
\end{center}
\noindent \fig{sampling_accuracy} shows the mean errors of the predictors learned
after taking $X$\,\% of the configurations, then asking {\bf WHAT}\xspace and some sampling method ($S_1$, $S_2$, and $S_3$)
to (a)~find what configurations to measure; then (b)~asking CART to build a predictor
using these measurements. The horizontal axis of the plots shows what $X$\,\%
of the configurations are studied; the vertical axis shows the mean relative error ($\mu$) from \eq{err}.
In this figure:
\begin{itemize}
\item
The $\times$\hspace{-2pt}---\hspace{-2pt}$\times$ lines in \fig{sampling_accuracy} show a {\em baseline} result
where data from the performance scores of 100\,\% of configurations were used by CART
to build a runtime predictor.
\item
The other lines show the results using the sampling methods defined in Section~\ref{sect:sample}.
Note that these sampling methods used runtime data only from a
subset of the performance scores of the configurations seen
in the range from 0 to $X$\,\%.
\end{itemize}
In \fig{sampling_accuracy}, {\em lower} y-axis values are {\em better} since this means lower
prediction errors. Overall, we find that:
\begin{itemize}
\item Some of the subject systems exhibit large variances in their error rate below $X=40$\,\% (e.g., BDBC and BDBJ).
\item Above $X=40$\,\%, the choice of sampling method has little effect on the overall error.
\item
Mostly, $S_3$ shows the highest overall error,
so it cannot be recommended.
\item The $\times$\hspace{-2pt}---\hspace{-2pt}$\times$ baseline always shows the lowest errors, which is to be
expected, since predictors built on the baseline have access to all data.
\item
We see a trend that the errors of $S_1$ and $S_2$ are within $5$\,\% of the {\em baseline} results.
Hence, we can recommend these two minimal sampling methods.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{Figures/evaluation_graph}
\caption{Comparing the number of evaluations of the different sampling policies. We see that the number of configurations evaluated for $S_2$ is twice that of $S_1$, as it selects two points from each cluster, whereas $S_1$ selects only one point.}\label{fig:Evaluations}
\end{figure}
\fig{Evaluations} provides information about which of $S_1$ or $S_2$ we should recommend.
This figure shows data taken from the $X=40$\,\% point of \fig{sampling_accuracy} and displays
how many performance scores are needed by our sampling methods (while
reflecting on the configurations seen in the range $0\le X \le 40$). Note that:
\begin{itemize}
\item
$S_3$ needs up to thousands of performance scores,
so it cannot be recommended as a minimal sampling policy;
\item $S_2$ needs twice as many performance scores as
$S_1$ ($S_2$ uses {\em two} samples per leaf cluster, whereas
$S_1$ uses only {\em one});
\item $S_1$ needs performance scores for only a few dozen (or fewer) configurations to generate
predictions with the low errors seen in \fig{sampling_accuracy}.
\end{itemize}
Combining the results of \fig{sampling_accuracy} and \fig{Evaluations}, we conclude that:
\begin{myshadowbox}
$S_1$ is our preferred spectral sampling method. Furthermore,
the answer to RQ1 is ``yes'', because applying {\bf WHAT}\xspace{}, we can (a)~generate runtime predictors
using just a few dozen sample performance scores;
and (b)~these predictions have error rates
within 5\,\% of the error rates seen if predictors are built from information about all performance scores.
\end{myshadowbox}
\subsection{RQ2}
\begin{center}
{\em
Does using less data to build the prediction models cause larger variances in the predicted values?}
\end{center}
Two competing effects can cause increased or decreased variances in
performance predictions. In our study, we report the standard deviation ($\sigma$) as a measure of the variance in the performance predictions.
The less we sample the configuration space,
the less we constrain model generation in that space. Hence, one effect that can be expected
is that models learned
from too few samples exhibit large variances.
But,
a compensating effect can be introduced by sampling from the spectral space
since that space contains fewer confusing or correlated variables than the raw configuration space.
\fig{Variance} reports which of these two competing effects is dominant.
It shows that, after some initial fluctuations,
once $X=40$\,\% of the configurations have been seen, the variances in the prediction errors reduce to nearly zero, mirroring the results in Figure~\ref{fig:sampling_accuracy}.
\begin{figure}[t]
\includegraphics[width=\columnwidth, height=10cm]{Figures/Variance}
\centering
\caption{Standard deviations seen at various points of \fig{sampling_accuracy}.}\label{fig:Variance}
\end{figure}
\begin{myshadowbox}
Based on the results of Figure~\ref{fig:Variance}, we answer RQ2 with ``no'': selecting a small number of samples does not necessarily increase the variance (at least not in this domain).
\end{myshadowbox}
\subsection{RQ3}
\begin{center}
{\em
Can ``good'' surrogate models (to be used in optimizers)
be built from minimal samples?}
\end{center}
The results of RQ1 and RQ2 suggest using {\bf WHAT}\xspace~(with $S_1$) to build runtime predictors from a small sample of data. RQ3
asks whether that predictor can be used by an optimizer to infer which {\em other} configurations correspond to system configurations with fast performance scores.
To answer this question, we ran a random set of 100
configurations, 20 times, as a baseline and related that baseline to three optimizers (GALE~\cite{krall2014gale}, DE~\cite{storn1997differential}, and NSGA-II~\cite{deb00afast}) using their
default parameters.
When these three optimizers mutated existing configurations to suggest new ones,
these mutations were checked for validity. Any mutants that violated the system's constraints (e.g., a feature excluding another feature) were rejected
and the survivors were ``evaluated'' by asking the CART surrogate model.
These evaluations either rejected the mutant or used it in generation $i+1$, as the basis for a search for more, possibly
better mutants.
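In outline, this surrogate-based search can be sketched as follows (our own simplification, not GALE, DE, or NSGA-II; is_valid stands for the system's constraint checker and model for the trained CART surrogate):
\begin{verbatim}
# Mutate configurations, reject invalid mutants, and rank survivors
# with the CART surrogate instead of compiling and executing them.
import random

def optimize(pop, model, is_valid, generations=50):
    for _ in range(generations):
        mutants = []
        for c in pop:
            m = list(c)
            j = random.randrange(len(m))
            m[j] = 1 - m[j]               # flip one Boolean option
            if is_valid(m):               # e.g., feature exclusions
                mutants.append(tuple(m))
        ranked = sorted(pop + mutants,
                        key=lambda c: model.predict([list(c)])[0])
        pop = ranked[:len(pop)]           # keep the predicted-fastest
    return pop[0]                         # best configuration found
\end{verbatim}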
\fig{performance_graph} shows the configurations found by the three optimizers projected onto the ground truth of the performance scores of nearly
all configurations (see Section~\ref{sec:subject_systems}). Again note that, while we use that ground truth for the validation of these results, our optimizers
used only a small part of that ground-truth data in their search for the fastest configurations (see the {\bf WHAT}\xspace + $S_1$
results of \fig{Evaluations}).
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth, height=8cm]{Figures/optimizer_result}
\caption{Solutions found by GALE, NSGA-II, and DE (shown as points) laid against the ground truth (all known configuration performance scores).
It can be observed that all the optimizers find configurations with low performance scores.}\label{fig:performance_graph}
\end{figure}
The important information in \fig{performance_graph} is that all the optimized configurations fall within 1\,\% of the fastest
configuration according to the ground truth (see the left-hand-side dots on each plot). Table~\ref{fig:external_validity} compares the performance of the optimizers
used in this study. Note that the performances are nearly identical, which leads to the following conclusion:
\begin{myshadowbox}
Based on the results of Figure~\ref{fig:performance_graph}, the answer to RQ3 is ``yes'': for optimizing performance scores, we can use surrogates built from few runtime samples. The choice of the optimizer does not critically affect this conclusion.
\end{myshadowbox}
\begin{table}[tbh]
\centering
\caption{The table shows how the minimum performance scores found by GALE, NSGA-II, and DE vary over 20 repeated
runs. Mean values are denoted by $\mu$, and IQR denotes the 25th--75th percentile. A low IQR suggests that the surrogate model built by {\bf WHAT}\xspace is stable and can be used by off-the-shelf optimizers to find performance-optimal configurations.
}
\label{fig:external_validity}
\vspace{2ex}
\begin{tabular}{lrrrrrr}
\toprule
\multirow{3}{*}{} & \multicolumn{6}{c}{Searcher} \\ \cmidrule{2-7}
& \multicolumn{2}{c}{GALE} & \multicolumn{2}{c}{DE} & \multicolumn{2}{c}{NSGAII} \\ \cmidrule{2-7}
& Mean & IQR & Mean & IQR & Mean & IQR \\ \midrule
\textbf{Apache} & 870 & 0 & 840 & 0 & 840 & 0 \\
\textbf{BDBC} & 0.363 & 0.004 & 0.359 & 0.002 & 0.354 & 0.005 \\
\textbf{BDBJ} & 3139 & 70 & 3139 & 70 & 3139 & 70 \\
\textbf{LLVM} & 202 & 3.98 & 200 & 0 & 200 & 0 \\
\textbf{SQLite} & 13.1 & 0.241 & 13.1 & 0 & 13.1 & 0.406 \\
\textbf{X264} & 248 & 3.3 & 244 & 0.003 & 244 & 0.05 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure*}[h]
\begin{minipage}{4in}
{\small
\begin{tabular}{l@{~~~~}l@{~~~~}r@{~~~~~}r@{~~~~~}c@{}r}
\multicolumn{1}{l}{Rank}& Approach & Mean MRE($\mu$) & STDev($\sigma$) & \textbf{} & \#Evaluations \\
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{Apache} & \textbf{} & \textbf{} & \textbf{}&\textbf{}& \\\hline
1 & Sarkar & 7.49 & 0.82 & \quart{6}{3}{7} & 55 \\
1 & Guo(PW) & 10.51 & 6.85 & \quart{3}{33}{22} & 29 \\
1 & Siegmund & 10.34 & 11.68 & \quart{0}{55}{21} & 29\\
1 & \textbf{WHAT} & 10.95 & 2.74 & \quart{16}{13}{24} & 16 \\
1 & Guo(2N) & 13.03 & 15.28 & \quart{7}{72}{34} & 18\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{BDBC} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Sarkar & 1.24 & 1.46 & \quart{0}{1}{0} & 191\\
\hline 2 & Siegmund & 6.14 & 4.41 & \quart{4}{5}{6} & 139\\
2 & \textbf{WHAT} & 6.57 & 7.40 & \quart{4}{9}{7} & 64\\
2 & Guo(PW) & 10.16 & 10.6 & \quart{2}{13}{11} & 139\\
\hline 3 & Guo(2N) & 49.90 & 52.25 & \quart{16}{63}{59} & 36\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{BDBJ} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Guo(2N) & 2.29 & 3.26 & \quart{0}{29}{9} & 52\\
1 & Guo(PW) & 2.86 & 2.72 & \quart{2}{25}{14} & 48\\
1 & \textbf{WHAT} & 4.75 & 4.46 & \quart{12}{40}{31} & 16\\
\hline 2 & Sarkar & 5.67 & 6.97 & \quart{6}{62}{39} & 48\\
2 & Siegmund & 6.98 & 7.13 & \quart{16}{63}{51} & 57\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{LLVM} & \textbf{} & \textbf{} & \textbf{}&\textbf{}& \\\hline
1 & Guo(PW) & 3.09 & 2.98 & \quart{0}{21}{10} & 64\\
1 & \textbf{WHAT} & 3.32 & 1.05 & \quart{9}{7}{12} & 32\\
1 & Sarkar & 3.72 & 0.45 & \quart{13}{3}{15} & 62\\
1 & Guo(2N) & 4.99 & 5.05 & \quart{11}{36}{24} & 22\\
\hline 2 & Siegmund & 8.50 & 8.28 & \quart{21}{58}{49} & 43\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{SQLite} & \textbf{} & \textbf{} & \textbf{} & \textbf{}&\\\hline
1 & Sarkar & 3.44 & 0.10 & \quart{0}{0}{0} & 925\\
\hline 2 & \textbf{WHAT} & 5.60 & 0.57 & \quart{7}{2}{8} & 64\\
\hline 3 & Guo(2N) & 8.57 & 7.30 & \quart{2}{28}{19} & 78\\
3 & Guo(PW) & 8.94 & 6.24 & \quart{6}{24}{20} & 566\\
\hline 4 & Siegmund & 12.83 & 17.0 & \quart{16}{63}{35} & 566\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{x264} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Sarkar & 6.64 & 1.04 & \quart{4}{2}{5} & 93\\
1 & \textbf{WHAT} & 6.93 & 1.67 & \quart{4}{4}{6} & 32\\
1 & Guo(2N) & 7.18 & 7.07 & \quart{0}{15}{6} & 32\\
1 & Guo(PW) & 7.72 & 2.33 & \quart{4}{5}{8} & 81\\
\hline 2 & Siegmund & 31.87 & 21.24 & \quart{32}{47}{61} & 81\\
\hline
\end{tabular}}
\end{minipage}
\caption{Mean MRE($\mu$) seen in 20 repeats. Mean MRE is the prediction error as described in Equation~\ref{eq:err} and STDev ($\sigma$) is the standard deviation of the MREs found during multiple repeats.
Lines with a dot in the middle
(e.g. \protect \quartex{3}{13}{13})
show the mean as a round dot within the IQR (if the IQR is very small, only the round dot is visible).
All results are sorted by their mean values: a lower mean MRE is better than a larger one.
The left-hand column ({\textit rank}) ranks the various techniques; e.g., when comparing the techniques for Apache, all techniques have the same rank, since their mean values are not statistically different. \textit{Rank} is computed using Scott-Knott, bootstrap 95\% confidence, and the A12 test.
}
\label{fig:stats}
\end{figure*}
\subsection{RQ4}
\begin{center}
{\em How good is {\bf WHAT}\xspace compared to the state of the art of learning performance predictors from configurable software systems?}
\end{center}
We compare {\bf WHAT}\xspace with the three state-of-the-art predictors proposed in the literature~\cite{siegmund2012predicting}, \cite{guo2013variability}, \cite{sarkar2015cost}, as discussed in Section~\ref{sect:addit}. Note that all approaches use regression-trees as predictors, except Siegmund's approach, which uses a regression function derived using linear programming.
The results were studied using non-parametric tests, as also used by Arcuri and Briand at ICSE
'11~\cite{mittas13}. For testing statistical significance,
we used a non-parametric bootstrap test with 95\% confidence~\cite{efron93} followed by
an A12 test to check that any observed differences were not trivially small effects;
i.e., given two lists $X$ and $Y$, count how often there are larger
numbers in the former list (and, where there are ties, add a half mark):
$a=\frac{\#(x>y) + 0.5\cdot\#(x=y)}{|X|\cdot|Y|}$, counted over all pairs $x\in X$, $y\in Y$
(as per Vargha~\cite{Vargha00}, we say that a ``small'' effect has $a <0.6$).
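To make this statistic concrete, the following Python sketch illustrates the A12 computation (an illustration of the formula above, not necessarily the exact implementation used in our scripts):
\begin{verbatim}
def a12(xs, ys):
    """Vargha-Delaney A12: chance that a value drawn from xs
    is larger than one drawn from ys (ties add a half mark)."""
    wins = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(xs) * len(ys))

# a < 0.6 is a "small" effect, i.e., the difference between
# two samples of MRE scores is negligible in practice.
\end{verbatim}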
Lastly, to generate succinct reports, we use the Scott-Knott test to recursively
divide our optimizers. This recursion used A12 and bootstrapping
to group together subsets that are (a)~not significantly different and are (b)~not
just a small effect different to each other. This use of Scott-Knott is endorsed
by Mittas and Angelis~\cite{mittas13}
and by Hassan et al.~\cite{7194626}.
As seen in Figure~\ref{fig:stats}, the FW heuristic of Siegmund et al. (i.e., the sampling approach using the fewest number of configurations) has the highest error rates and the highest standard deviation on those error rates (four out of six times). Hence, we cannot recommend this method or, if one wishes to use this method, we recommend using the other sampling heuristics (e.g., HO, HS) to make more accurate predictions (but at the cost of many more measurements). Moreover, the size of the standard deviation of this method causes further difficulties in estimating which configurations are those exhibiting a large prediction error.
As to the approach of Guo et al. (with PW), it does not stand out on any of our measurements. Its error results are within 1\% of {\bf WHAT}\xspace; its standard deviations are usually larger, and it requires much more data than {\bf WHAT}\xspace (see the Evaluations column of Figure~\ref{fig:stats}).
In terms of the number of measured samples required to build a model, the right-hand column of Figure~\ref{fig:stats} shows that {\bf WHAT}\xspace requires the fewest samples except for two cases: the approach of Guo et al. (with 2N) working on BDBC and LLVM. In both these cases, the mean error and standard deviation on the error estimate are larger than those of {\bf WHAT}\xspace. Furthermore, in the case of BDBC, the error values
are $\mu=14\,\%$, $\sigma=13\,\%$, which are much larger
than {\bf WHAT}\xspace{}'s error scores of $\mu=6\,\%$, $\sigma=5\,\%$.
Although the approach of Sarkar et al. produces an error rate that is sometimes lower than that of {\bf WHAT}\xspace, it requires the highest number of measurements. Moreover, {\bf WHAT}\xspace's accuracy is close to Sarkar's approach (1\% to 2\% difference). Hence, we cannot recommend this approach either.
Table~\ref{tab:measurements} shows the number of evaluations used by each approach. We see that most state-of-the-art approaches often require many more samples than
{\bf WHAT}\xspace{}. Using those few samples, {\bf WHAT}\xspace comes
within 1 to 2\,\% of the lowest standard deviations
and within 1 to 2\,\% of the lowest error rates.
The exception is Sarkar's approach, which has 5\,\% lower mean error
rates (in BDBC, see the Mean MRE column of Figure~\ref{fig:stats}). However,
as shown in right-hand side of Table~\ref{tab:measurements}, Sarkar's approach needs nearly three times
more measurements than {\bf WHAT}\xspace.
\noindent To summarize, there are two cases in Figure~\ref{fig:stats} where {\bf WHAT}\xspace performs worse than, at least, one
other method:
\begin{itemize}
\item
SQLite: The technique proposed by Sarkar et al. does better than {\bf WHAT}\xspace (3.44 vs 5.6)
but, as shown in the final column of Figure~\ref{fig:stats},
does so at the cost of $\frac{925}{64} \approx 15$ times more evaluations than {\bf WHAT}\xspace.
In this case, a pragmatic engineer could well prefer our solution over that of Sarkar et al., since
the number of evaluations performed by Sarkar et al.\ is more than an order of magnitude larger than that of {\bf WHAT}\xspace.
\item BDBC: Here again, {\bf WHAT}\xspace is not doing the best but, compared to the number of evaluations required by all other solutions, it is not doing particularly badly.
\end{itemize}
\noindent Given
that the overall reduction of the error is small (5\,\% difference
between Sarkar and {\bf WHAT}\xspace in mean error), tripling the
data-collection cost is
often not feasible in a practical context and might not justify the small additional benefit in accuracy.
\begin{myshadowbox}
Based on the results of Figure~\ref{fig:stats}, we answer {\bf RQ4} with ``yes'',
since {\bf WHAT}\xspace yields predictions that are similar to or more accurate than prior
work, while requiring fewer samples.
\end{myshadowbox}
\begin{table}[t]
\caption{Comparison of the number of the samples
required with the state of the art. The grey colored cells indicate the approach that requires the lowest number of samples. We notice that WHAT and Guo (2N) use less data compared to the other approaches. The high fault rate of Guo (2N), accompanied by high variability in the predictions, makes WHAT our preferred method.}\label{tab:measurements}
\vspace{2ex}
\centering
\small
\begin{tabular}{lrrrrr}
\toprule
& \multicolumn{5}{c}{{Samples}} \\ \cmidrule{2-6}
\multirow{-2}{*}{\textbf{}} & {Siegmund} & {Guo (2N)} & {Guo (PW)} & {Sarkar} & {WHAT} \\ \midrule
\textbf{Apache} & 29 & 181 & 29 & 55 & \cellcolor[HTML]{C0C0C0}16 \\
\textbf{BDBC} & 139 & \cellcolor[HTML]{C0C0C0}36 & 139 & 191 & 64 \\
\textbf{BDBJ} & 48 & 52 & 48 & 57 & \cellcolor[HTML]{C0C0C0}16 \\
\textbf{LLVM} & 62 & \cellcolor[HTML]{C0C0C0}22 & 64 & 43 & 32 \\
\textbf{SQLite} & 566 & 78 & 566 & 925 & \cellcolor[HTML]{C0C0C0}64 \\
\textbf{X264} & 81 & \cellcolor[HTML]{C0C0C0}32 & 81 & 93 & \cellcolor[HTML]{C0C0C0}32 \\ \bottomrule
\end{tabular}
\end{table}
\section{Why does it work?}
In this section, we present an in-depth analysis to understand why our sampling technique (based on a spectral learner) achieves such low mean fault rates while being stable (low variance). We hypothesize that the configuration space of a configurable system lies on a low-dimensional manifold.
\subsection{History}
Menzies et al.~\cite{me12d} demonstrated how to exploit the underlying dimension to cluster data to find local homogeneous data regions in an otherwise heterogeneous data space. The authors used an algorithm called WHERE (see Section~\ref{rtlearning}), which recurses on two dimensions synthesized in linear time using a technique called FASTMAP~\cite{Faloutsos1995}. The use of the underlying dimension has been endorsed by various other researchers~\cite{bettenburg2012think, deiters2013using, bettenburg2015towards, zhang2016cross}. There are numerous other methods in the literature that are used to learn the underlying dimensionality of a data set, such as Principal Component Analysis (PCA)~\cite{jolliffe2002principal}~\footnote{WHERE is an approximation of the first principal component.}, Spectral Learning~\cite{shi2000normalized}, and Random Projection~\cite{bingham2001random}. These algorithms use different techniques to identify the underlying, independent/orthogonal dimensions to cluster the data points and differ with respect to computational complexity and accuracy. We use WHERE, since it is computationally efficient ($O(2N)$), while still being accurate.
\subsection{Testing Technique}
Given our hypothesis that the configuration space lies on a lower-dimensional hyperplane, it is imperative to demonstrate that the intrinsic dimensionality of the configuration space is less than the actual dimension. To formalize this notion, we borrow the concept of correlation dimension from the domain of physics~\cite{grassberger2004measuring}. The correlation dimension of a dataset with $k$ items is found by computing the number of items found within radius $r$ (where $r$ is the Euclidean distance between two configurations) while varying $r$. This is then normalized by the number of connections between the $k$ items to find the expected number of neighbors at distance $r$. This can be written as:
\begin{equation}
C(r) = \frac{2}{k(k-1)} \displaystyle\sum_{i=1}^{k} \displaystyle\sum_{j=i+1}^{k} I(||x_i - x_j|| < r)
\end{equation}
where
$$
I(x < y) = \begin{cases}
1, & \text{if } x < y\\
0, & \text{otherwise}
\end{cases}
$$
Given the dataset with $k$ items and range of distances [$r_0$--$r_{max}$], we estimate the intrinsic dimensionality as the mean slope between $\ln(C(r))$ and $\ln(r)$.
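As an illustration, the following Python sketch (our own; the radius grid is an illustrative assumption) estimates the correlation dimension of a set of configurations:
\begin{verbatim}
import numpy as np

def correlation_dimension(X, n_radii=20):
    """Mean slope of ln C(r) versus ln r over a grid of radii."""
    X = np.asarray(X, dtype=float)
    k = len(X)
    # All pairwise Euclidean distances between the k configurations.
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    pair_d = dists[np.triu_indices(k, 1)]
    radii = np.linspace(pair_d.min() + 1e-9, pair_d.max(), n_radii)
    # C(r) = 2/(k(k-1)) * number of pairs within distance r.
    C = np.array([(pair_d < r).mean() for r in radii])
    mask = C > 0  # drop radii with no pairs (ln(0) undefined)
    return float(np.gradient(np.log(C[mask]), np.log(radii[mask])).mean())
\end{verbatim}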
\subsection{Evaluation}
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{Figures/underlying_dimension}
\caption{The actual dimensions are shown on the x-axis and the intrinsic dimensionality is shown on the y-axis. The points are annotated with the names of the corresponding software systems. The intrinsic dimensionality of the systems is much lower than the actual dimensionality (number of columns in the dataset).}
\label{fig:underlying_d}
\end{figure}
On the configuration space of our subject systems, we observe that {the intrinsic dimensionality of the software system is much lower than the actual dimension}. Figure~\ref{fig:underlying_d} presents the intrinsic dimensionality along with the actual dimensions of the software systems. If we take a look at the intrinsic dimensionality and compare it with the actual dimensionality, then it becomes apparent that the configuration space lies on a lower dimensional hyperplane. For example, SQLite has 39 configuration options, but the intrinsic dimensionality of the space is just 1.61 (this is a fractal dimension). At the heart of {\bf WHAT}\xspace is WHERE (a spectral clusterer), which uses the approximation of the first principal component to divide the configuration space and hence can take advantage of the low intrinsic dimensionality.
As a summary, our observations indicate that the intrinsic dimension of the configuration space is much lower than its actual dimension. Hence, clustering based on the intrinsic dimensions rather than the actual dimensions would be more effective. In other words, configurations with similar performance values lie closer to the intrinsic hyperplane, when compared to the actual dimensions, which may be the reason why {\bf WHAT}\xspace achieves empirically good results.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figures/param_tuning}
\caption{The trade-off between the number of evaluations (affected by the size of the sub-region) and the performance (MRE) of the model generated.}
\label{fig:param_tuning}
\end{figure}
\section{Discussion}
\subsection{What is the trade-off between the MRE and the number of measurements?}
{\bf WHAT}\xspace{} requires the practitioner to define a stopping criterion (the size of the sub-region) before the
process commences.
The stopping criterion preempts the process of recursive division of regions
based on projection values of the configurations. In our experiments, the
number of measurements or the size of the training set depends
on the stopping criterion. An early termination of the
sampling process would lead to a very inaccurate performance model, while
late termination would result in resource wastage. Hence, it is very
important to discuss the trade-off between the upper bound of the size of the sub-region and
the MRE of the model built. In Figure~\ref{fig:param_tuning}, we show the trade-off
between the MRE found and the number
of measurements (size of training set). The trade-off characterizes
the relationship between two conflicting objectives. For example, the point for Apache with size of sub-region $=4\cdot \sqrt{N}$ requires very few measurements, but the MRE of the resulting model is the highest, whereas the point with size of sub-region $=\frac{1}{4}\cdot \sqrt{N}$ requires a large
number of measurements, but the MRE of the resulting model is the lowest. Since
our objective is to minimize the number of measurements while
reducing MRE, we assign the value of $\sqrt{N}$ to the upper bound of the size of the sub-region for the
purposes of our experiments.
\subsection{What is the relationship between intrinsic dimensionality and difficulty of a problem (or dataset)?}
Houle et al.~\cite{houle2012generalized} observe a clear correlation between the dimensionality of a problem space and loss of performance; that is, a problem represented in lower dimensions is easier to model than the same problem represented in higher dimensions. In a similar vein, Domingos~\cite{domingos2012few} explains how our intuitions fail in higher dimensions and how algorithms that work in lower dimensions do not work in higher dimensions. This is because the size of the training data required to create a generalized `model' for such a high-dimensional space is exponentially large~\footnote{Another challenge of a high-dimensional search space is the amount of noise induced by irrelevant dimensions.}. This is generally referred to as the ``curse of dimensionality'', but what counteracts this is the ``blessing of non-uniformity''. The blessing of non-uniformity refers to the fact that the valid solutions in a space are not spread uniformly across the problem space but concentrated on or near a lower-dimensional manifold. Hence, it is a rule of thumb among machine-learning practitioners to reduce the dimension of a data set by projecting the data onto a lower-dimensional orthogonal subspace that captures the variation in the data. Burges~\cite{burges2010dimension} mentions that, if data lies in a lower-dimensional space (with lower intrinsic dimensions), then modeling the data directly in that lower-dimensional manifold makes it much easier to model. Our results are in line with the observation made by Burges, as we show that few samples are enough to model a large (sometimes millions of configurations) space, which can be attributed to the low intrinsic dimensionality of the space.
~\\
There are several other techniques (similar to {\bf WHAT}\xspace) that also exploit the non-uniformity of the data points, such as Random Projections~\cite{dasgupta2000experiments} and Auto-encoders~\cite{hinton2006reducing}. The central intuition is similar to our work: problems that contain intrinsic lower dimensions should be easier/cheaper to model than those with higher intrinsic dimensionality.
That said, to the best of our knowledge, we are the first to propose exploring the lower intrinsic dimensionality of configuration spaces, and exploit those lower dimensions for the purposes of sampling.
\subsection{What are the limitations of WHAT?}
The limitations of {\bf WHAT}\xspace{} are:
\begin{itemize}
\item {\bf WHAT}\xspace{} cannot be used for non-numeric configuration options. However, it can be used for numeric configuration options, not just Boolean options (most related work only supports Boolean options).
\item The configurable systems used in this paper are fairly easy to model using machine-learning techniques such as CART, but there exist software systems that cannot be modeled using CART (even using 40\% of all possible configurations). For these systems, {\bf WHAT}\xspace{} cannot be used to build accurate performance models.
\item The effectiveness of {\bf WHAT}\xspace{} depends on projecting the configurations onto the approximated first principal component. The approximation of the first principal component requires calculating the farthest points (the points that are most dissimilar) in the configuration space using the Euclidean distance. However, there may be systems where the Euclidean distance (as used in this paper) cannot find the most dissimilar points~\cite{chen2016sampling}. For such systems, {\bf WHAT}\xspace{} in its current form will not be effective (addressing this is part of our future work).
\item Finding a near-optimal configuration can become challenging when the configuration space is non-convex. However, we did not encounter such systems during our empirical evaluations. We also stress that we wanted to build a tool that can differentiate between good and not-so-good configurations using few evaluations; our goal is not to find the best configuration but rather near-optimal solutions.
\end{itemize}
\section{Reliability and Validity}\label{sect:construct}
{\em Reliability} refers to the consistency of the results obtained
from the research. For example, how well could independent researchers
reproduce the study? To increase external
reliability, we took care to either clearly define our
algorithms or use implementations from the public domain
(SciKitLearn)~\cite{scikit-learn}. Also, all the data used in this work are available
on-line in the PROMISE\footnote{\url{http://openscience.us/repo/performance-predict/cpm.html}} code repository and all our algorithms
are on-line at github.com/ai-se/where.
{\em Validity} refers to the extent to which a piece of research actually
investigates what the researcher purports to investigate~\cite{SSA15}.
{\em Internal validity} checks if the differences found in
the treatments can be ascribed to the treatments under study.
One threat to internal validity of our experiments is the choice
of {\em training and testing} data sets discussed in
\fig{systems}. Recall that, while all our learners used the same
{\em testing} data set, our untuned learners were only given
access to {\em training} data.
Another threat to internal validity is {\em instrumentation}. The very low $\mu$ and $\sigma$ error values
reported in this study are so small that it is reasonable to ask whether they are due to some instrumentation
quirk, rather than due to using a clever sample strategy:
\begin{itemize}
\item
Our low $\mu$ values are consistent with prior work~\cite{sarkar2015cost};
\item
As to our low $\sigma$ values, we note that, when the error values are so close to 0\,\%, the standard
deviation of the error is ``squeezed'' between zero and those errors. Hence, we would expect that
experimental rigs
that generate error values on the order of 5\,\% using \eq{err} should have $\sigma$ values of $0\le \sigma \le 5$ (e.g., like those seen in our introduction).
\end{itemize}
Regarding SQLite, we cannot measure all possible configurations in reasonable time. Hence, we sampled only 100 configurations to compare prediction and actual performance values. We are aware that this evaluation leaves room for outliers.
Also, we are aware that measurement bias can cause false interpretations~\cite{me12d}. Since we aim at predicting performance for a special workload, we do not have to vary benchmarks.
We aimed at increasing the {\em external validity} by choosing software systems from different domains with different configuration mechanisms and implemented with different programming languages. Furthermore, our subject systems are deployed and used in the real world. Nevertheless, assuming the evaluations to be automatically transferable to all configurable software systems is not fair. To further strengthen external validity, we ran the model (generated by \textit{{\bf WHAT}\xspace + $S_1$}) against other optimizers, such as NSGA-II and differential evolution~\cite{storn1997differential}. That is, we validated whether the learned models are applicable not only to GALE-style perturbation. In Table~\ref{fig:external_validity}, we see that the models developed are valid for all optimizers, as all optimizers are able to find near-optimal solutions.
\section{Related Work}
\label{sect:related}
In 2000, Shi and Malik~\cite{shi2000normalized} claimed the term ``spectral clustering'' as a reference to their normalized-cuts
image
segmentation algorithm that partitions data through a spectral (eigenvalue) analysis of the
Laplacian representation of the similarity graph between instances in the data.
In 2003, Kamvar et al.~\cite{kamvar2003spectral} generalized that definition saying that ``spectral learners''
were any data-mining algorithm that first replaced the raw
dimensions with those inferred from the spectrum (eigenvalues) of the affinity (a.k.a.\ distance)
matrix of the data, optionally adjusted via some normalization technique.
Our clustering based on the first principal component splits the data on an approximation to an eigenvector, found at each recursive level
of the data (as described in \tion{spect}).
Hence, this method is a ``spectral clusterer'' in the general Kamvar sense.
Note that,
for our data, we have
not found that Kamvar's normalization matrices are needed.
Regarding sampling, there is a wide range of methods known as experimental designs or designs of experiments~\cite{pukelsheim2006optimal}. They usually rely on fractional factorial designs, as in the combinatorial testing community~\cite{Kuhn:2013}.
Furthermore, there is a recent approach that learns {\em per\-for\-mance-influence models} for configurable software systems~\cite{SGA+15}. While this approach can handle even numeric features, it has similar sampling techniques for the Boolean features as reported in their earlier work~\cite{siegmund2012predicting}. Since we already compared to that earlier work and do not consider numeric features, we did not compare our work to performance-influence models.
\section{Conclusions \& Future Work}
Configurable software systems today are widely used in practice, but they impose challenges
regarding finding performance-optimal configurations. State-of-the-art approaches require too
many measurements or are prone to large variances in their performance predictions. To overcome
these limitations, we have proposed a fast spectral learner, called {\bf WHAT}\xspace, along with three
new sampling techniques. The key idea of {\bf WHAT}\xspace is to explore the configuration space with
eigenvalues of the features used in a configuration to determine exactly those configurations
for measurement that reveal key performance characteristics.
This way, we can study many closely associated configurations with only a few measurements.
We evaluated our approach on six real-world configurable software systems borrowed from the
literature. Our approach achieves similar or lower error rates, while being more stable, when
compared to the state of the art. In particular, with the exception of Berkeley DB, our
approach is more accurate than the state-of-the-art approaches by Siegmund et
al.~\cite{siegmund2012predicting} and Guo et al.~\cite{guo2013variability}. Furthermore, we
achieve a similar prediction accuracy and stability as the approach by Sarkar et
al~\cite{sarkar2015cost}, while requiring a far smaller number of configurations to be
measured. We also demonstrated that our approach can be used to build cheap and stable
surrogate prediction models, which can be used by off-the-shelf optimizers to find the
performance-optimal configuration. We use the correlation dimension to demonstrate how the high dimensional configuration space of our subject systems has a low intrinsic dimensionality, which might be the reason why {\bf WHAT}\xspace performs so well on these datasets.
As to future work, we plan to explore the implications of {\bf WHAT}\xspace{}. Currently, {\bf WHAT}\xspace{} uses a static number of evaluations based on the total number of possible configurations ($\sqrt{N}$), which may not be useful for systems that are more difficult to model than the systems used in this study. Hence, we need a progressive strategy that can progressively sample new configurations and stop the sampling process based on either the performance score achieved or the budget allocated. Finally, the current version of {\bf WHAT}\xspace{} assumes that all features are of similar importance and uses a Euclidean distance to differentiate between good and `not-so-good' solutions. There are certainly systems where not all features are equally important or where there is redundancy among configuration options. Hence, future work could use feature-weighting techniques to find the weight (importance) of configuration options and use that information to differentiate between configurations.
\begin{acknowledgement}
The work is partially funded by NSF awards \#1506586. Sven Apel's work has been supported by the German Research Foundation (AP 206/4 and AP 206/6). Norbert Siegmund's work has been supported by the German Research Foundation (SI 2171/2).
\end{acknowledgement}
\bibliographystyle{plain}
\input{activeconfig.bbl}
\end{document}
\section{Introduction}
Most software systems today are configurable. Despite the undeniable benefits
of configurability,
large configuration spaces challenge developers, maintainers, and users. In the face of hundreds of configuration options, it is difficult to keep track of the effects of individual configuration options and their mutual interactions. So, predicting the performance of individual system configurations or determining the optimal configuration is often more guess work than engineering. In their recent paper, Xu et al.\ documented the difficulties developers face
with understanding the configuration spaces of their systems~\cite{xu2015hey}. As a result, developers tend to ignore over $5/6$ths of the configuration options, which leaves considerable optimization potential untapped and induces major economic cost~\cite{xu2015hey}.
Addressing the challenge of performance prediction and optimization in the face of large configuration spaces, researchers have developed a number of approaches that rely on sampling and machine learning~\cite{siegmund2012predicting,guo2013variability,sarkar2015cost}.
While gaining some ground, state-of-the-art approaches face two problems:
(a)~they require far too many sample configurations for learning or (b)~they are prone to large variances in their predictions. For example, prior work on predicting performance scores using regression trees had to compile and execute hundreds to thousands of specific system configurations~\cite{guo2013variability}.
A more balanced approach by Siegmund et al.\ is able to learn predictors for configurable systems~\cite{siegmund2012predicting} with low mean errors, but with large variances of prediction accuracy (e.g.\ in half of the results, the performance predictions for the Apache Web server were up to 50\,\% wrong).
Guo et al.~\cite{guo2013variability} also proposed an incremental method to build a predictor model, which uses incremental random samples with steps equal to the number of configuration options (features) of the system. This approach also
suffered from unstable predictions (e.g., predictions had a mean error of up to 22\,\%, with a standard deviation of up to 46\,\%). Finally, Sarkar et al.~\cite{sarkar2015cost} proposed a proj\-ective-learning approach (using fewer measurements than Guo et al.\ and Siegmund et al.) to quickly compute the number of sample configurations for learning a stable predictor. However, as we will discuss, after making that prediction, the total number of samples required for learning the predictor is comparatively high (up to hundreds of samples).
The problems of large sample sets and large variances in prediction can be avoided using the {\bf WHAT}\xspace spectral learner, which is our main contribution.
{{\bf WHAT}\xspace}'s innovation is the use of the spectrum (eigenvalues) of the distance matrix
between the configurations of a configurable system, to perform dimensionality reduction. Within that
reduced configuration space, many closely associated configurations can be studied
by measuring only a few samples.
In a number of experiments, we compared {\bf WHAT}\xspace against the state-of-the-art approaches of Siegmund et al.~\cite{siegmund2012predicting}, Guo et al.~\cite{guo2013variability}, and Sarkar et al.~\cite{sarkar2015cost} by means of six real-world configurable systems: Berkeley DB (in its C and Java editions), the Apache Web server, SQLite, the LLVM compiler, and the x264 video encoder.
We found that {\bf WHAT}\xspace performs as well or better than prior approaches,
while requiring far fewer samples (just a few dozen).
This is significant and most surprising, since some of the systems explored here have up to millions of possible configurations.
Overall, we make the following contributions:
\begin{itemize}
\item We present a novel sampling and learning approach for predicting the performance of software configurations in the face of large configuration spaces. The approach is based on a
{\em spectral
learner} that uses an approximation to the first principal component of the configuration space to recursively cluster it, relying only on a few points as representatives of each cluster.
\item We demonstrate the practicality and generality of our approach by conducting experiments on six real-world configurable software systems (see Figure ~\ref{fig:systems}). The results show that our approach is more accurate (lower mean error) and more stable (lower standard deviation) than state-of-the-art approaches. A key finding is the utility of the principal component of a configuration space to find informative samples from a large configuration space.
All materials required for reproducing this work are available at \url{https://goo.gl/689Dve}.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/motivation}
\caption{The variation of performance scores of BDBC. }
\label{fig:motivation}
\end{figure}
\section{Background \& Related Work}
\label{sect:addit}
We use the configurable system, BDBC, as an example to motivate our approach. BDBC is an embedded database system written in C. In this example, we consider 18 features or configuration options of BDBC, which the user can configure. We use the response time to indicate the performance of BDBC in different configurations. These 18 configuration options lead to 2,560 configurations.
In Figure~\ref{fig:motivation}, we show the performance distribution of all the configurations of BDBC. It is worth noting the difference between the best performing configuration (lower left corner) and the worst performing (top right corner). The figure shows that having a good configuration reduces the response time by a factor of 40 when compared to the worst possible configuration.
An important point is that, in practice, configurations are often selected in an uninformed manner and may not be the best or a near-best configuration. Moreover, with more configuration options added with every release~\cite{xu2015hey}, it is important to have an automated approach to find the best or a near-best configuration. Another aspect of this problem is the cost of evaluating a particular configuration, which may be very expensive. So, an ideal method should be able to find the best or near-best performing configuration with the least number of evaluations. Our approach {\bf WHAT}\xspace{} is effective in building accurate as well as stable performance models while using fewer sample configurations than the state of the art.
A configurable software system has a set $X$ of Boolean configuration options,\footnote{In this paper, we concentrate on Boolean options, as they make up the majority of all options; see Siegmund et al., for how to incorporate numeric options~\cite{SGA+15}.} also referred to as features or independent variables in our setting.
We denote the number of features of system $S$ as $n$. The configuration space of $S$ can be represented by a Boolean space $\mathbb{Z}_{2}^{n}$, which is denoted by $F$. All valid configurations of $S$ belong to a set $V$,
which is represented by vectors $\vec{C_i}$ (with $1\leq i\leq \left\vert{V}\right\vert$) in $\mathbb{Z}_{2}^{n}$. Each element of a configuration represents a feature, which can either be \emph{True} or \emph{False}, based on whether the feature is selected or not.
Each valid instance of a vector (i.e., a configuration) has a corresponding performance score associated to it.
The literature offers two approaches to performance prediction of software configurations: a {\em maximal sampling} and a {\em minimal sampling} approach:
With {\em maximal sampling}, we compile all possible configurations and record the associated performance scores.
Maximal sampling can be impractically slow. For example, the performance data used in our experiments required 26 days of CPU time for measuring (and much longer, if we also count the time required for compiling the code prior to execution).
Other researchers have commented that, in
real world scenarios, the cost of acquiring the optimal configuration is overly expensive and time consuming \cite{weiss2008maximizing}.
If collecting performance scores of all configurations is impractical, {\em minimal sampling}
can be used to intelligently select and execute just enough configurations (i.e., samples) to build a
predictive model.
For example, Zhang et al.~\cite{zhang2015performance} approximate the
configuration space as a Fourier series, after which they can derive an expression showing how many configurations must be studied
to build predictive models with a given error. While a theoretically satisfying result, that approach still needs thousands to hundreds of thousands of executions of sample
configurations.
Another set of approaches are the four ``additive'' {\em minimal sampling} methods of Siegmund et al.~\cite{siegmund2012predicting}.
Their first method, called feature-wise sampling ({\em FW}), is their basic method.
To explain {\em FW}, we note that, from a configurable software system, it is theoretically possible to enumerate many or all of the valid configurations\footnote{Though, in practice, this can be very difficult. For example, in models like the Linux Kernel such an enumeration is practically impossible ~\cite{sayyad13b}.}.
Since each configuration ($\vec{C_i}$) is a vector of $n$ Booleans, it is possible to use this information to isolate examples of how much each feature individually contributes to the total run time (sketched in code after the following list):
\begin{enumerate}
\item Find a pair of configurations $\vec{C_1}$ and $\vec{C}_2$, where $\vec{C}_2$ uses exactly the same features as $\vec{C_1}$, plus one extra feature $f_i$.
\item Set the run time $\Pi(f_i)$ for feature $f_i$ to be the difference in the performance scores between $\vec{C_2}$ and $\vec{C_1}$.
\item The run time for a new configuration $\vec{C}_i$ (with $1\leq i\leq \left\vert{V}\right\vert$) that has not been sampled before is then the sum of the run time of its features, as determined before:
\begin{equation}
\Pi(C_i) = \sum_{f_j \in C_i}\Pi(f_j)
\end{equation}
\end{enumerate}
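The following Python sketch (our illustration; the data structures are hypothetical) captures this feature-wise estimation:
\begin{verbatim}
def feature_deltas(measured):
    """measured: dict mapping frozenset-of-selected-features to a
    performance score. Derives Pi(f) from pairs of configurations
    that differ in exactly one feature."""
    pi = {}
    for c1, t1 in measured.items():
        for c2, t2 in measured.items():
            extra = c2 - c1
            if len(extra) == 1 and c1 <= c2:  # c2 = c1 plus feature f
                (f,) = extra
                pi[f] = t2 - t1
    return pi

def predict(config, pi):
    """Run time of an unsampled configuration: sum over its features."""
    return sum(pi.get(f, 0.0) for f in config)
\end{verbatim}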
When many pairs, such as ${\vec{C_1},\vec{C}_2}$, satisfy the criteria of point~1, Siegmund et al.\ used the
pair that covers the {\em smallest} number of features. Their minimal sampling method, {\em FW},
compiles and executes only these smallest ${\vec{C_1}}$ and ${\vec{C_2}}$ configurations.
Siegmund et al.\ also offers three extensions to the basic method, which are based on sampling
not just the smallest pairs, but also additional configurations covering certain kinds of {\em interactions} between features.
All the following minimal sampling policies compile and execute valid configurations selected via one of three heuristics:
\begin{description}
\item[{\em PW (pair-wise):}] For each pair of features, try to find a configuration that contains the pair and has a minimal number of features selected.
\item[{\em HO (higher-order):}] Select extra configurations, in which three features, $f_1,f_2,f_3$, are selected if two of the following pair-wise interactions exist: $(f_1,f_2)$ and $(f_2,f_3)$ and $(f_1,f_3)$.
\item[{\em HS (hot-spot):}] Select extra configurations that contain features that are
frequently interacting with other features.
\end{description}
Guo et al.~\cite{guo2013variability} proposed a progressive random sampling approach, which samples the configuration space in steps of the number of features of the software system in question. They used the sampled configurations to train a regression tree, which is then used to predict the performance scores of other system configurations. The termination criterion of this approach is based on a heuristic, similar to the {\em PW} heuristics of Siegmund et al.
Sarkar et al.~\cite{sarkar2015cost} proposed a cost model for predicting the effort (or cost) required to generate an accurate predictive model. The user can use this model to decide whether to go ahead and build the predictive model. This method randomly samples configurations and uses a heuristic based on feature frequencies as termination criterion. The samples are then used to train a regression tree; the accuracy of the model is measured by using a test set (where the size of the training set is equal to size of the test set). One of four projective functions (e.g., exponential) is selected based on how correlated they are to accuracy measures. The projective function is used to approximate the accuracy-measure curve, and the elbow point of the curve is then used as the optimal sample size. Once the optimal size is known, Sarkar et al.\ uses the approach of Guo et al.\ to build the actual prediction model.
The advantage of these previous approaches is that, unlike the results of Zhang et al., they require only dozens to hundreds of samples. Also, like our approach, they do not require to enumerate all configurations, which is important for highly configurable software systems.
That said, as shown by our experiments (see Section~\ref{sec:experiments}), these approaches produce estimates with larger mean errors and partially larger variances than our approach. While sometimes the approach by Sarkar et al. results in models with (slightly)
lower mean error rates, it still requires a considerably larger number of samples (up to hundreds), while {\bf WHAT}\xspace requires only few dozen.
\section{Approach}
\subsection{Spectral Learning}\label{sect:spect}
The minimal sampling method we propose here is based on a spectral-learning algorithm
that explores the spectrum (eigenvalues) of the distance matrix between configurations in the configuration space.
In theory, such spectral learners are an appropriate method to handle noisy, redundant, and tightly inter-connected variables, for the following reasons:
When data sets have many irrelevancies or closely associated data parameters $d$, then
only a few eigenvectors $e$, $e \ll d$ are required to characterize the data.
In this reduced space:
\begin{itemize}
\item
Multiple inter-connected variables $i,j,k \subseteq d$ can be represented
by a single eigenvector;
\item
Noisy variables from $d$ are
ignored, because they do not contribute to the signal in the data;
\item
Variables become (approximately) parallel lines
in $e$ space. For redundancies \mbox{$i,j \in d$}, we
can ignore $j$
since effects that change over $j$ also
change in the same way over $i$;
\end{itemize}
That is, in theory, samples of configurations drawn via an eigenspace sampling method
would not get confused by noisy, redundant, or tightly inter-connected variables. Accordingly,
we expect predictions built from that sample to have lower mean errors and lower variances on that error.
Spectral methods have been used before for a variety of data mining applications~\cite{kamvar2003spectral}.
Algorithms, such as PDDP~\cite{boley98}, use spectral methods, such as principal component analysis (PCA), to
recursively divide data into smaller regions. Software-analytics researchers use spectral methods (again, PCA) as a pre-processor prior to data mining to reduce noise in software-related data sets~\cite{theisen2015approximating}.
However, to the best of our knowledge, spectral methods have not been used before as a basis of a minimal sampling method.
{\bf WHAT}\xspace is somewhat different from other spectral
learners explored in, for instance, image processing applications~\cite{shi2000normalized}.
Work on image processing does not aim at
defining a minimal sampling policy to predict performance scores.
Also, a standard spectral method requires an $O(N^2)$ matrix multiplication to compute the components
of PCA~\cite{ilin10}. Worse, in the case of hierarchical division methods, such as PDDP,
the polynomial-time inference must be repeated at every level of the hierarchy.
Competitive results can be achieved
using an $O(2N)$ analysis that we have developed previously~\cite{me12d}, which is based on a heuristic proposed by Faloutsos and Lin~\cite{Faloutsos1995} (which Platt has shown computes a Nystr\"om approximation to the first component of PCA~\cite{platt05}).
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[height=35mm, width=35mm]{Figures/1} & \includegraphics[height=35mm, width=35mm]{Figures/2} & \includegraphics[height=35mm, width=35mm]{Figures/3} \\ \begin{tabular}[c]{@{}l@{}}(a) Feature space of a system \\with two configurations\end{tabular} & \begin{tabular}[c]{@{}l@{}}(b) Choosing a random point \\from the feature space\end{tabular} & \begin{tabular}[c]{@{}l@{}}(c) Find {\em West} farthest from the\\selected random point\end{tabular}\\[6pt]
\includegraphics[height=35mm, width=35mm]{Figures/4} &
\includegraphics[height=35mm, width=35mm]{Figures/5} &
\includegraphics[height=35mm, width=35mm]{Figures/6}
\\
\begin{tabular}[c]{@{}l@{}}(d) Find {\em East}, a point\\farthest away from {\em West}.\end{tabular} & \begin{tabular}[c]{@{}l@{}}(e) Line joining {\em East} and {\em West}\\is the first principal component\end{tabular} & \begin{tabular}[c]{@{}l@{}}(f) Projection of a point ({\em x})\\is calculated\end{tabular} \\[6pt]
\end{tabular}
\caption{Spectral learning in {\bf WHAT}\xspace{}, illustrated with the example of a system with two configuration options.}
\label{fig:spectral_desc}
\end{figure}
Figure~\ref{fig:spectral_desc} describes the procedure used to calculate the projection of a configuration. {\bf WHAT}\xspace receives $N$ (with $1\leq \left\vert{N}\right\vert\leq \left\vert{V}\right\vert$)
valid configurations ($\vec{C}$), $N_1,N_2,...$, as input (as shown in Figure~\ref{fig:spectral_desc}(a)) and then:
\begin{enumerate}
\item
Picks any
point $N_i$ ($1\leq i \leq\left\vert{N}\right\vert$) at random (as shown in Figure~\ref{fig:spectral_desc}(b));
\item
Finds
the point {\em West}~$\in N$ that is
furthest away from $N_i$ (as shown in Figure~\ref{fig:spectral_desc}(c));
\item Finds the point {\em East}~$\in N$
that is furthest from {\em West} (as shown in Figure~\ref{fig:spectral_desc}(d)).
\end{enumerate}
The line joining {\em East}
and {\em West} is our approximation for the first principal component (as shown in Figure~\ref{fig:spectral_desc}(e)).
Using the distance calculation shown in Equation~\ref{eq:dist},
we define $\mathit{c}$ to be the distance between {\em East}~(x)
and {\em West}~(y).
{\bf WHAT}\xspace uses this distance ($\mathit{c}$) to divide all the configurations as follows:
The value $x_i$ is the projection of $N_i$
on the line running from {\em East} to {\em West} (as shown in Figure~\ref{fig:spectral_desc}(f))\footnote{The projection of $N_i$ can be calculated in the following way (by the cosine rule):\newline $a = \mathit{dist}(\mathit{East}, N_i); b = \mathit{dist}(\mathit{West}, N_i); x_i = \frac{a^2 - b^2 + \mathit{c}^2}{2\mathit{c}}$.
}. We divide
the examples based on the median value of the projections $x_i$. Now, we have two clusters of data divided based on the projection values (of $N_i$) on the line joining {\em East} and {\em West}. This process is applied recursively on these clusters until a predefined stopping condition is met. In our study, the recursive splitting of the $N_i$'s stops when a sub-region
contains less than $\sqrt{|N|}$ examples.
\begin{equation}
\mathit{dist}(x, y) =
\begin{cases}
\sqrt{\sum_i(x_i-y_i)^2}
& \text{if $x_i$ and $y_i$ are numeric}\\
\begin{cases}
0, & \text{ if $x_i = y_i$}\\
1, & \text{ otherwise}\\
\end{cases}
& \text{if $x_i$ and $y_i$ are Boolean}\\
\end{cases}
\label{eq:dist}
\end{equation}
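Putting these pieces together, the following Python sketch is our own simplified rendering of this recursive division (not the released implementation; Booleans are treated as 0/1 so that the mismatch case of Equation~\ref{eq:dist} coincides with the squared difference):
\begin{verbatim}
import math, random

def dist(x, y):
    """Mixed numeric/Boolean distance of the equation above."""
    return math.sqrt(sum(
        (0 if xi == yi else 1) if isinstance(xi, bool)
        else (xi - yi) ** 2
        for xi, yi in zip(x, y)))

def where(configs, min_size):
    """Recursively split configs on the East-West projection;
    stop when a sub-region holds at most min_size examples."""
    if len(configs) <= min_size:
        return [configs]
    pivot = random.choice(configs)
    west = max(configs, key=lambda n: dist(n, pivot))
    east = max(configs, key=lambda n: dist(n, west))
    c = dist(east, west)
    if c == 0:  # all points coincide; nothing to split
        return [configs]
    def proj(n):  # cosine-rule projection onto the East-West line
        a, b = dist(east, n), dist(west, n)
        return (a * a - b * b + c * c) / (2 * c)
    ordered = sorted(configs, key=proj)
    mid = len(ordered) // 2  # split at the median projection
    return where(ordered[:mid], min_size) + where(ordered[mid:], min_size)

# leaves = where(configs, min_size=math.sqrt(len(configs)))
\end{verbatim}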
We explore this approach for three reasons:
\begin{itemize}
\item
{\em It is very fast}:
This process requires only $2|N|$ distance comparisons
per level of recursion, which is far less than the $O(N^2)$
required by PCA~\cite{Du2008}
or other algorithms such as K-Means~\cite{hamerly2010making}.
\item
{\em It is not domain-specific}:
Unlike traditional PCA, our approach is general in that it does not assume that all the variables are numeric. As shown in Equation~\ref{eq:dist},\footnote{In our study, $\mathit{dist}$ accepts a pair of configurations ($\vec{C}$) and returns the distance between them. If $x_i$ and $y_i$ $\in \mathbb{R}^n$, then the distance function is the same as the standard Euclidean distance.} we can approximate distances for both numeric and non-numeric data (e.g., Boolean).
\item
{\em It reduces the dimensionality problem}:
This technique explores the underlying dimension (first principal component) without getting confused by noisy, related, and highly associated variables.
\end{itemize}
\subsection{Spectral Sampling}\label{sect:sample}
When the above clustering method terminates, our sampling policy (which we call $S_1$) is then applied:
\begin{description}
\item[{\em Random sampling ($S_1$):}] compile and execute one configuration, picked at random, from each leaf cluster;
\end{description}
We use this sampling policy, because (as we will show later) it performs better than:
\begin{description}
\item[{\em East-West sampling ($S_2$):}] compile and execute the {\em East} and {\em West} poles of the leaf clusters;
\item[{\em Exemplar sampling ($S_3$):}] compile and execute all items in all leaves and return the one
with lowest performance score.
\end{description}
Note that $S_3$ is {\em not} a {\em minimal} sampling policy (since it executes all configurations).
We use it here as one baseline
against which we can compare the other, more minimal, sampling policies. In the results
that follow, we also compare our
sampling methods against another baseline using information gathered after executing
all configurations.
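For illustration, the three policies can be sketched as follows (in Python; \texttt{east\_west} and \texttt{measure} are hypothetical helpers standing in for the pole computation and for compiling and benchmarking a configuration):
\begin{verbatim}
import random

def s1_random(leaves):
    """S1: one configuration, picked at random, per leaf cluster."""
    return [random.choice(leaf) for leaf in leaves]

def s2_east_west(leaves, east_west):
    """S2: the East and West poles of each leaf cluster."""
    return [pole for leaf in leaves for pole in east_west(leaf)]

def s3_exemplar(leaves, measure):
    """S3 (baseline, NOT minimal): measure every item in every leaf
    and return the one with the lowest performance score."""
    return min((cfg for leaf in leaves for cfg in leaf), key=measure)
\end{verbatim}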
\subsection{Regression-Tree Learning} \label{rtlearning}
After collecting the data using one of the sampling policies ($S_1$, $S_2$, or $S_3$), as described in Section \ref{sect:sample}, we use a CART regression-tree learner~\cite{breiman1984} to build a performance predictor. Regression-tree learners seek the attribute-range split that most increases
our ability to make accurate predictions.
CART explores splits that divide $N$ samples into two sets $A$ and $B$, where each set has a standard deviation on the target variable of $\sigma_1$ and $\sigma_2$.
CART finds the ``best'' split defined as the split that minimizes $\frac{A}{N}\sigma_1 + \frac{B}{N}\sigma_2$.
Using this best split, CART divides the data recursively.
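That split criterion can be sketched as follows (an illustration; off-the-shelf CART implementations use an equivalent variance-based criterion):
\begin{verbatim}
import numpy as np

def split_score(y_a, y_b):
    """Size-weighted sum of standard deviations of the target
    (performance) after splitting N samples into sets A and B;
    CART greedily picks the split minimizing this score."""
    n = len(y_a) + len(y_b)
    return (len(y_a) / n) * np.std(y_a) + (len(y_b) / n) * np.std(y_b)

# e.g., for a Boolean feature column x and performance vector y:
# score = split_score(y[x == 0], y[x == 1])
\end{verbatim}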
In summary, {\bf WHAT}\xspace combines:
\begin{compactitem}
\item
The FASTMAP method of Faloutsos and Lin~\cite{Faloutsos1995}, which rather than $N^2$ comparisons only performs $2N$ where $N$ is the number of configurations in the configuration space;
\item A spectral-learning algorithm initially inspired by Boley's PDDP system~\cite{boley98}, which we modify
by replacing PCA with FASTMAP (called
``WHERE'' in prior work ~\cite{me12d});
\item
The sampling policy that explores the leaf clusters found by this recursive division;
\item
The CART regression-tree learner that converts the data from the samples collected by sampling policy
into a run-time prediction model~\cite{breiman1984}.
\end{compactitem}
That is,
\begin{center}
\begin{tabular}{rcl}
WHERE& = &PDDP $-$ PCA $+$ FASTMAP\\[1.5ex]
{\bf WHAT}\xspace& = & WHERE $+$ \{ $S_1, S_2, S_3$ \} $+$ CART
\end{tabular}
\end{center}
This unique combination of methods has not been previously explored in the
software-engineering literature.
\subsection{Approach as a pipeline}
Different components of {\bf WHAT}\xspace{} can be used to sample configurations of a configurable software system, which can then be used to generate an accurate and stable performance model. We test {\bf WHAT}\xspace{} in the following way (a sketch of this pipeline follows the list):
\begin{itemize}
\item All possible configurations of a system are enumerated.
\item The configurations are split into training and testing datasets based on a predefined ratio (as discussed in Section~\ref{sec:exp_rig}) -- at this point, none of the configurations is measured.
\item {\bf WHAT}\xspace{} (Section~\ref{sect:spect}) is used to sample configurations (Section~\ref{sect:sample}) from the training dataset, and for each sampled configuration the corresponding performance is measured.
\item The sampled configurations and their corresponding performance scores are used to build a performance model using a regression-tree learner (Section~\ref{rtlearning}).
\item The accuracy (in terms of MRE) of the built performance model is measured using the configurations of the testing set.
\end{itemize}
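A minimal end-to-end sketch of this pipeline, assuming a NumPy array of configurations and hypothetical stand-ins \texttt{measure} (compiling and benchmarking one configuration) and \texttt{what\_sampler} (the spectral sampling described above):
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def run_pipeline(configs, measure, what_sampler, train_frac=0.4, seed=1):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(configs))
    cut = int(train_frac * len(configs))
    train, test = configs[idx[:cut]], configs[idx[cut:]]
    sampled = what_sampler(train)  # the few configurations to measure
    model = DecisionTreeRegressor().fit(
        sampled, [measure(c) for c in sampled])
    actual = np.array([measure(c) for c in test])
    predicted = model.predict(test)
    mre = float(np.mean(np.abs(predicted - actual) / actual) * 100)
    return model, mre
\end{verbatim}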
In the next section, we formulate our research questions and discuss the experimental setup.
\section{Experiments}
\label{sec:experiments}
\subsection{Research Questions}
We formulate our research questions in terms of the challenges of
exploring large complex configuration spaces.
As our approach explores the spectral space, our hypothesis is that only a small
number of samples is required to explore the whole space.
However, a prediction model built from a very small sample of the configuration space might
be very inaccurate and unstable, that is, it may exhibit very large mean prediction errors and variances on the prediction error.
Also, if we learn models from small regions of the training data,
it is possible that a learner will miss {\em trends} in the data
between the sample points. Such trends are useful when building {\em optimizers}
(i.e., systems that receive one configuration as input and propose an alternate
configuration that has, for instance, a better performance). Such optimizers might
need to evaluate hundreds to millions of alternate configurations.
To speed up that process, optimizers can use a {\em surrogate model}\,\footnote{Also known as response surface methods, meta models, or emulators.}
that mimics the outputs of a system of interest, while being computationally cheap(er) to evaluate~\cite{loshchilov13}. For example, when optimizing
performance scores, we might ask a CART for a performance
prediction (rather than compile and execute
the corresponding configuration). Note that such surrogate-based
reasoning critically depends on how well the surrogate can guide optimization.
Therefore, to assess feasibility of our sampling policies, we must consider:
\begin{itemize}
\item Performance scores generated from our minimal sampling policy;
\item The variance of the error rates when comparing predicted performance scores with actual ones;
\item The optimization support offered by the performance predictor (i.e., can the model work in tandem with other off-the-shelf optimizers to generate useful solutions).
\end{itemize}
The above considerations lead to four research questions:
\begin{description}
\item[{\em RQ1:}] {\em Can {\bf WHAT}\xspace generate good predictions after
examining only a small number of configurations?}
\end{description}
Here, by ``good'' we mean that the predictions made by models that were trained using sampling with {\bf WHAT}\xspace are as accurate, or more accurate,
as predictions generated from models supplied with more samples.
\begin{description}
\item[{\em RQ2:}] {\em
Does less data used in building the prediction models cause larger variances in the predicted performance scores?}
\item[{\em RQ3:}] {\em
Can ``good'' surrogate models (to be used in optimizers)
be built from minimal samples?}
\end{description}
Note that RQ2 and RQ3 are of particular concern with our approach,
since our goal is to sample as little as possible from the configuration space.
\begin{description}
\item[{\em RQ4:}] {\em How good is {\bf WHAT}\xspace compared to the state of the art of
learning performance predictors from configurable software systems?}
\end{description}
\begin{table}
\centering
\caption{Subject systems used in the experiments.}\label{fig:systems}\footnotesize
\rotatebox{90}{
\begin{tabular}{p{1.5cm}p{2.25cm}p{0.75cm}p{0.5cm}p{6.5cm}p{1cm}p{3cm}p{1cm}}
\toprule
& \textbf{Description} & \textbf{LOC} & \textbf{\#\,Feat.} & \textbf{Configurations} & \textbf{\#\,Config.}& \textbf{Benchmark Used} & \textbf{Performance Metric} \\ \cmidrule{1-8}
\textbf{Apache} & Apache is a prominent open-source Web server with numerous configuration options. & 230,277 & 9 & Base, HostnameLookups, KeepAlive, EnableSendfile, FollowSymLinks, AccessLog, ExtendedStatus, InMemory, Handle & 192 & We used the tools autobench and httperf to generate load on the Web server. We increased the load until the server could not handle any further requests & Maximum load \\
\\ \textbf{Berkeley~DB\newline C Edition\newline (BDBC)} & BDBC is an embedded database system written in C. & 219,811 & 18 & HAVE\_CRYPTO, HAVE\_HASH, HAVE\_REPLICATION, HAVE\_VERIFY, HAVE\_SEQUENCE, HAVE\_STATISTICS, DIAGNOSTIC, PAGESIZE, PS1K, PS4K, PS8K, PS16K, PS32K, CACHESIZE, CS32MB, CS16MB, CS64MB, CS512MB & 2,560 & Benchmark provided by the vendor & Response time \\
\\ \textbf{Berkeley~DB\newline Java Edition\newline (BDBJ)} &BDBJ is a complete re-development of BDBC in Java with full SQL support. & 42,596 & 32 &Base, Persistence, IO, OldIO, NewIO, NIOBase, NIOType, ChunkedNIO, SingleWriteNIO, DirectNIO, LogSize, S100MiB, S1MiB, Checksum, BTreeFeatures, INCompressor, IEvictor, Evictor, Critical\_Eviction, Verifier, ITracing, Tracing, TracingLevel, Severe, Finest, Statistics & 400 & Benchmark provided by the vendor & Response time \\
\\ \textbf{LLVM} & LLVM is a compiler infrastructure written in C++. & 47,549 & 11 & time\_passes, gvn, instcombine, inline, jump\_threading, simplifycfg, sccp, print\_used\_types, ipsccp, iv\_users, licm & 1,024 & As benchmark, we measured the time to compile LLVM’s test suite & Time to\newline compile LLVM’s test suite \\
\\ \textbf{SQLite} & SQLite is an embedded database system deployed over several millions of devices. & 312,625 & 39 & OperatingSystemCharacteristics, SQLITE\_SECURE\_DELETE, ChooseSQLITE\_TEMP\_STORE, SQLITE\_TEMP\_STOREzero, SQLITE\_TEMP\_STOREone, SQLITE\_TEMP\_STOREtwo, SQLITE\_TEMP\_STOREthree, AutoVacuumOff, AutoVacuumOn, SetCacheSize, StandardCacheSize, LowerCacheSize, HigherCacheSize, LockingMode, ExclusiveLock, NormalLockingMode, PageSize, StandardPageSize, LowerPageSize, HigherPageSize, HighestPageSize & 3,932,160 & Benchmark provided by the vendor & Response time \\
\\ \textbf{x264} & x264 is a video encoder in C that provides configuration options to adjust the output quality of encoded video files. & 45,743 & 16 & no\_asm, no\_8x8dct, no\_cabac, no\_deblock, no\_fast\_pskip, no\_mbtree, no\_mixed\_refs, no\_weightb, rc\_lookahead, rc\_lookahead\_20, rc\_lookahead\_40, rc\_lookahead\_60, ref, ref\_1, ref\_5, ref\_9 & 1,152 & As benchmark, we encoded the Sintel trailer (735 MB) from AVI to the H.264 codec & Encoding time \\ \bottomrule
\end{tabular}
}
\end{table}
\noindent To answer RQ4, we will compare {\bf WHAT}\xspace
against approaches presented by Siegmund et al.~\cite{siegmund2012predicting}, Guo et al.~\cite{guo2013variability}, and Sarkar et al.~\cite{sarkar2015cost}.
\subsection{Subject Systems}
\label{sec:subject_systems}
The configurable systems we used in our experiments are described in Table~\ref{fig:systems}.
All systems are real-world systems and representative of different domains with different configuration mechanisms and implemented using different programming languages.
Note, with ``predicting performance'', we
mean predicting performance scores of the subject systems while executing test suites provided by the developers or the community, as described in Table~\ref{fig:systems}.
To compare the predictions of our and prior approaches with actual performance measures, we use data sets that have been obtained by
measuring {\em nearly all} configurations\footnote{http://openscience.us/repo/performance-predict/cpm.html}.
We say {\em nearly all} configurations, for the following reason: For
all except one of our subject systems, the total number of valid configurations
was tractable (192 to 2,560). However, SQLite has 3,932,160
valid configurations (it has 39 configuration options, i.e., $2^{39}$ combinations in total), which is an impractically large number to test whether our predictions are accurate and stable. Hence, for SQLite, we use 4,500 samples for testing prediction accuracy and stability, which we could collect in one day of CPU time. Taking this into account, we will pay particular attention to the variance of the SQLite results.
\subsection{Experimental Rig}\label{sec:exp_rig}
RQ1 and RQ2 require the construction and assessment of numerous runtime predictors from small samples
of the data. The following rig implements that construction process.
For each configurable software system, we built a table of data, one row per valid configuration. We then ran all configurations of all software systems
and recorded the performance scores (i.e., the scores measured by the respective benchmark).
The exception is SQLite for which we measured only the
configurations needed to detect interactions and additionally
100 random configurations.
To this table, we added a column showing the performance score obtained from the actual measurements for each configuration.
Note that the following procedure ensures that
we \textit{never} test any prediction model on the data that we used to learn this model. Next, we repeated the following procedure 20 times (the figure of 20 repetitions was
selected using the Central Limit Theorem):
For each subject system in \{BDBC, BDBJ, Apache, SQLite, LLVM, x264\}
\begin{itemize}
\item Randomize the order of the rows in their table of data;
\item For $X$ in \{10, 20, 30, ... , 90\};
\begin{itemize}
\item Let {\em Train} be the first $X$\,\% of the data
\item Let {\em Test} be the rest of the data;
\item Pass {\em Train} to {\bf WHAT}\xspace to select sample configurations;
\item Determine the performance scores associated with these configurations. This corresponds to a table lookup, but would entail compiling and executing a system configuration in a practical setting.
\item Using the {\em Train} data and their performance scores, build a performance predictor using CART.
\item Using the {\em Test} data, assess the accuracy of the predictor using the error
measure of \eq{err} (see below).
\end{itemize}
\end{itemize}
The validity of the predictors built by CART is verified on testing data.
For each test item, we determine how long it {\em actually} takes to run the corresponding system configuration and compare the actual measured performance to the {\em prediction} from CART. The resulting prediction error is then computed using:
\begin{equation}\label{eq:err}
\mathit{error}=\frac{\mid\mathit{predicted} - \mathit{actual}\mid}{\mathit{actual}} \cdot 100
\end{equation}
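A minimal sketch of one repeat of this rig, assuming scikit-learn's CART implementation (\texttt{DecisionTreeRegressor}) and a list \texttt{rows} of (configuration vector, measured performance) pairs per system; for brevity, the {\bf WHAT}\xspace sampling step is elided, so training on all of {\em Train} corresponds to the baseline, and plugging a {\bf WHAT}\xspace-selected subset in its place recovers the full procedure:
\begin{verbatim}
# One repeat of the rig: shuffle, split X% train / rest test,
# learn a CART predictor, score it with the error of Eq. (err).
import random
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def mre(predicted, actual):
    # Relative error of Eq. (err), in percent.
    return 100.0 * abs(predicted - actual) / actual

def one_repeat(rows, x_percent):
    random.shuffle(rows)                  # randomize the row order
    cut = int(len(rows) * x_percent / 100.0)
    train, test = rows[:cut], rows[cut:]
    # Here WHAT would pick a small sample of `train` to measure;
    # training on all of `train` reproduces the baseline instead.
    model = DecisionTreeRegressor()
    model.fit([cfg for cfg, _ in train], [perf for _, perf in train])
    return np.mean([mre(model.predict([cfg])[0], perf)
                    for cfg, perf in test])

errors_by_x = {x: [one_repeat(list(rows), x) for _ in range(20)]
               for x in range(10, 100, 10)}
\end{verbatim}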
(Aside: It is reasonable to ask why we use this metric and not others proposed
in the literature (e.g., sum of absolute residuals). In short, our results are stable
across a range of different metrics. For example, the results of this paper have
been repeated using the sum of absolute residuals and, in those other results,
we saw the same ranking of methods; see \url{http://tiny.cc/sumAR}.)
RQ2 requires testing the standard deviation of the prediction error rate. To support that test, we:
\begin{itemize}
\item Determine the $X$-th point in the above experiments, where all predictions stop improving (elbow point);
\item Measure the standard deviation of the error at this point, across our 20 repeats.
\end{itemize}
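A minimal sketch of this elbow-and-variance computation, assuming \texttt{errors\_by\_x} maps each $X$ to the list of 20 per-repeat errors produced by the rig above (the improvement threshold \texttt{tol} is our assumption):
\begin{verbatim}
import numpy as np

def elbow_and_sigma(errors_by_x, tol=1.0):
    # Smallest X beyond which the mean error improves by less
    # than `tol` percent; report the spread across repeats there.
    xs = sorted(errors_by_x)
    means = [np.mean(errors_by_x[x]) for x in xs]
    elbow = xs[-1]
    for i in range(1, len(xs)):
        if means[i - 1] - means[i] < tol:  # no real improvement left
            elbow = xs[i - 1]
            break
    return elbow, np.std(errors_by_x[elbow])
\end{verbatim}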
As shown in Figure~\ref{fig:sampling_accuracy}, all our results plateaued after studying $X=40$\,\% of the valid configurations\footnote{Just to clarify one frequently asked question about this work, we note
that our rig ``studies'' 40\,\% of the data. We do not mean that our predictive models
require accessing the performance scores from the 40\,\% of the data. Rather, by ``study'' we mean reflect
on a sample of configurations to determine what minimal subset of that
sample deserves to be compiled and executed.}.
Hence to answer { RQ2}, we will compare all 20 predictions at $X=40$\,\%.
{ RQ3} uses the learned regression tree as a {\em surrogate model} within an optimizer;
\begin{itemize}%[leftmargin=0.4cm]
\item Take $X=40\,\%$ of the configurations;
\item Apply {\bf WHAT}\xspace to build a CART model using some minimal sample taken from that 40\,\%;
\item Use that CART model within some standard optimizer while searching for
configurations with least runtime;
\item Compare the faster configurations found in this manner with the fastest configuration
known for that system.
\end{itemize}
This last item requires access to a ground truth of performance scores for a
large number of configurations. For this experiment, we have access to that ground truth
(since we have access to all system configurations, except for SQLite). Note that such a ground truth
would not be needed when practitioners choose to use {\bf WHAT}\xspace in their own work (it is only for our empirical investigation).
For the sake of completeness, we explored
a range of optimizers seen in the literature: DE~\cite{storn1997differential}, NSGA-II~\cite{deb00afast},
and our own GALE~\cite{krall2014gale,zuluaga2013active} system. Normally,
it would be reasonable to ask
why we used those three, and not the hundreds of other
optimizers described in the literature~\cite{fletcher13,harman12}. However,
as shown below, all these optimizers in this
domain exhibited very similar
behavior (all found configurations close to the
best case performance). Hence, the specific
choice of optimizer is not a critical
variable in our analysis.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/SamplingAccuracy}
\caption{Errors of the predictions made by {\bf WHAT}\xspace with four different
sampling policies. Note that, on the y-axis, {\em lower} errors are {\em better}.
}
\label{fig:sampling_accuracy}
\end{figure}
\section{Results}
\subsection{RQ1}
\begin{center}
{\em Can {\bf WHAT}\xspace generate good predictions after
examining only a small number of configurations?}
\end{center}
\noindent \fig{sampling_accuracy} shows the mean errors of the predictors learned
after taking $X$\,\% of the configurations, then asking {\bf WHAT}\xspace and some sampling method ($S_1$, $S_2$, and $S_3$)
to (a)~find what configurations to measure; then (b)~asking CART to build a predictor
using these measurements. The horizontal axis of the plots shows what $X$\,\%
of the configurations are studied; the vertical axis shows the mean relative error ($\mu$) from \eq{err}.
In this figure:
\begin{itemize}
\item
The $\times$\hspace{-2pt}---\hspace{-2pt}$\times$ lines in \fig{sampling_accuracy} show a {\em baseline} result
where data from the performance scores of 100\,\% of configurations were used by CART
to build a runtime predictor.
\item
The other lines show the results using the sampling methods defined in Section~\ref{sect:sample}.
Note that these sampling methods used runtime data only from a
subset of the performance scores seen in configurations
from 0 to $X$\,\%.
\end{itemize}
In \fig{sampling_accuracy}, {\em lower} y-axis values are {\em better} since this means lower
prediction errors. Overall, we find that:
\begin{itemize}
\item Some of the subject systems exhibit large variances in their error rate, below $X=40$\,\% (e.g., BDBC and BDBJ).
\item Above $X=40$\,\%, the choice of sampling method has little effect on the overall error.
\item
Mostly, $S_3$ shows the highest overall error,
so that it cannot be recommended.
\item Always, the $\times$\hspace{-2pt}---\hspace{-2pt}$\times$ baseline shows the lowest errors, which is to be
expected since predictors built on the baseline have access to all data.
\item
We see a trend that the errors of $S_1$ and $S_2$ are within $5$\,\% of the {\em baseline} results.
Hence, we can recommend these two minimal sampling methods.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{Figures/evaluation_graph}
\caption{Comparing evaluations of different sampling policies. We see that the number of configurations evaluated for $S_2$ is twice as high as for $S_1$, as it selects 2 points from each cluster, whereas $S_1$ selects only 1 point. }\label{fig:Evaluations}
\end{figure}
\fig{Evaluations} provides information about which of $S_1$ or $S_2$ we should recommend.
This figure displays data taken from the $X=40$\,\% point of \fig{sampling_accuracy} and displays
how many performance scores of configurations are needed by our sampling methods (while
reflecting on the configurations seen in the range $0\le X \le 40$). Note that:
\begin{itemize}
\item
$S_3$ needs up to thousands of performance scores,
so it cannot be recommended as a minimal-sampling policy;
\item $S_2$ needs twice as many performance scores as
$S_1$ ($S_2$ uses {\em two} samples per leaf cluster while
$S_1$ uses only {\em one}).
\item $S_1$ needs performance scores for only a few dozen (or fewer) configurations to generate
the predictions with the lower errors seen in \fig{sampling_accuracy}.
\end{itemize}
Combining the results of \fig{sampling_accuracy} and \fig{Evaluations}, we conclude that:
\begin{myshadowbox}
$S_1$ is our preferred spectral sampling method. Furthermore,
the answer to RQ1 is ``yes'', because applying {\bf WHAT}\xspace{}, we can (a)~generate runtime predictors
using just a few dozens of sample performance scores;
and (b)~these predictions have error rates
within 5\,\% of the error rates seen if predictors are built from information about all performance scores.
\end{myshadowbox}
\subsection{RQ2}
\begin{center}
{\em
Does using less data to build prediction models cause larger variances in the predicted values?}
\end{center}
Two competing effects can cause increased or decreased variances in
performance predictions. In our study, we report the standard deviation ($\sigma$) as a measure of variance in the performance predictions.
The less we sample the configuration space,
the less we constrain model generation in that space. Hence, one effect that can be expected
is that models learned
from too few samples exhibit large variances.
But,
a compensating effect can be introduced by sampling from the spectral space
since that space contains fewer confusing or correlated variables than the raw configuration space.
\fig{Variance} reports which one of these two competing effects is dominant.
\fig{Variance} shows that, after some initial fluctuations,
once $X=40$\,\% of the configurations have been seen, the variance in prediction errors reduces to nearly zero, mirroring the results in Figure~\ref{fig:sampling_accuracy}.
\begin{figure}[t]
\includegraphics[width=\columnwidth, height=10cm]{Figures/Variance}
\centering
\caption{Standard deviations seen at various points of \fig{sampling_accuracy}.}\label{fig:Variance}
\end{figure}
\begin{myshadowbox}
Based on the results of Figure~\ref{fig:Variance}, we answer RQ2 with ``no'': Selecting a small number of samples does not necessarily increase variance (at least, not in this domain).
\end{myshadowbox}
\subsection{RQ3}
\begin{center}
{\em
Can ``good'' surrogate models (to be used in optimizers)
be built from minimal samples?}
\end{center}
The results of answering RQ1 and RQ2 suggest to use {\bf WHAT}\xspace~(with $S_1$) to build runtime predictors from a small sample of data. RQ3
asks if that predictor can be used by an optimizer to infer what {\em other} configurations correspond to system configurations with fast performance scores.
To answer this question, we ran a random set of 100
configurations, 20 times, and compared that baseline to three optimizers (GALE~\cite{krall2014gale}, DE~\cite{storn1997differential} and NSGA-II~\cite{deb00afast}) using their
default parameters.
When these three optimizers mutated existing configurations to suggest new ones,
these mutations were checked for validity. Any mutants that violated the system's constraints (e.g., a feature excluding another feature) were rejected
and the survivors were ``evaluated'' by asking the CART surrogate model.
These evaluations either rejected the mutant or used it in generation $i+1$, as the basis for a search for more, possibly
better mutants.
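A minimal sketch of this surrogate-assisted search loop; the constraint checker \texttt{is\_valid} is a placeholder, options are assumed binary, and the simple mutate-and-filter loop shown stands in for the full GALE/DE/NSGA-II machinery:
\begin{verbatim}
import random

def surrogate_search(surrogate, seeds, is_valid,
                     generations=50, pop_size=20):
    # `surrogate` is the CART model built by WHAT: it scores a
    # configuration without compiling or running the system.
    pop = [list(c) for c in seeds]
    for _ in range(generations):
        mutants = []
        for cfg in pop:
            m = list(cfg)
            i = random.randrange(len(m))
            m[i] = 1 - m[i]          # flip one (binary) option
            if is_valid(m):          # reject constraint violations
                mutants.append(m)
        # Keep the configurations the surrogate predicts fastest.
        pop = sorted(pop + mutants,
                     key=lambda c: surrogate.predict([c])[0])[:pop_size]
    return pop[0]                    # predicted-fastest configuration
\end{verbatim}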
\fig{performance_graph} shows the configurations found by the three optimizers projected onto the ground truth of the performance scores of nearly
all configurations (see Section~\ref{sec:subject_systems}). Again note that, while we use that ground truth for the validation of these results, our optimizers
used only a small part of that ground-truth data in their search for the fastest configurations (see the {\bf WHAT}\xspace + $S_1$
results of \fig{Evaluations}).
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth, height=8cm]{Figures/optimizer_result}
\caption{Solutions found by GALE, NSGA-II, and DE (shown as points) laid against the ground truth (all known configuration performance scores).
It can be observed that all the optimizers can find the configuration with lower performance scores.}\label{fig:performance_graph}
\end{figure}
The important information of \fig{performance_graph} is that all the optimized configurations fall within 1\,\% of the fastest
configuration according to the ground truth (see all the left-hand-side dots on each plot). Table~\ref{fig:external_validity} compares the performance of the optimizers
used in this study. Note that the performances are nearly identical, which leads to the following conclusions:
\begin{myshadowbox}
Based on the results of Figure~\ref{fig:performance_graph}, the answer to RQ3 is ``yes'': For optimizing performance scores, we can use surrogates built from few runtime samples. The choice of the optimizer does not critically affect this conclusion.
\end{myshadowbox}
\begin{table}[tbh]
\centering
\caption{The table shows how the minimum performance scores found by GALE, NSGA-II, and DE vary over 20 repeated
runs. Mean values are denoted $\mu$ and IQR denotes the 25th--75th percentile. A low IQR suggests that the surrogate model built by {\bf WHAT}\xspace is stable and can be utilized by off-the-shelf optimizers to find performance-optimal configurations.
}
\label{fig:external_validity}
\vspace{2ex}
\begin{tabular}{lrrrrrr}
\toprule
\multirow{3}{*}{} & \multicolumn{6}{c}{Searcher} \\ \cmidrule{2-7}
& \multicolumn{2}{c}{GALE} & \multicolumn{2}{c}{DE} & \multicolumn{2}{c}{NSGAII} \\ \cmidrule{2-7}
& Mean & IQR & Mean & IQR & Mean & IQR \\ \midrule
\textbf{Apache} & 870 & 0 & 840 & 0 & 840 & 0 \\
\textbf{BDBC} & 0.363 & 0.004 & 0.359 & 0.002 & 0.354 & 0.005 \\
\textbf{BDBJ} & 3139 & 70 & 3139 & 70 & 3139 & 70 \\
\textbf{LLVM} & 202 & 3.98 & 200 & 0 & 200 & 0 \\
\textbf{SQLite} & 13.1 & 0.241 & 13.1 & 0 & 13.1 & 0.406 \\
\textbf{X264} & 248 & 3.3 & 244 & 0.003 & 244 & 0.05 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure*}[h]
\begin{minipage}{4in}
{\small
\begin{tabular}{l@{~~~~}l@{~~~~}r@{~~~~~}r@{~~~~~}c@{}r}
\multicolumn{1}{l}{Rank}& Approach & Mean MRE($\mu$) & STDev($\sigma$) & \textbf{} & \#Evaluations \\
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{Apache} & \textbf{} & \textbf{} & \textbf{}&\textbf{}& \\\hline
1 & Sarkar & 7.49 & 0.82 & \quart{6}{3}{7} & 55 \\
1 & Guo(PW) & 10.51 & 6.85 & \quart{3}{33}{22} & 29 \\
1 & Siegmund & 10.34 & 11.68 & \quart{0}{55}{21} & 29\\
1 & \textbf{WHAT} & 10.95 & 2.74 & \quart{16}{13}{24} & 16 \\
1 & Guo(2N) & 13.03 & 15.28 & \quart{7}{72}{34} & 18\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{BDBC} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Sarkar & 1.24 & 1.46 & \quart{0}{1}{0} & 191\\
\hline 2 & Siegmund & 6.14 & 4.41 & \quart{4}{5}{6} & 139\\
2 & \textbf{WHAT} & 6.57 & 7.40 & \quart{4}{9}{7} & 64\\
2 & Guo(PW) & 10.16 & 10.6 & \quart{2}{13}{11} & 139\\
\hline 3 & Guo(2N) & 49.90 & 52.25 & \quart{16}{63}{59} & 36\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{BDBJ} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Guo(2N) & 2.29 & 3.26 & \quart{0}{29}{9} & 52\\
1 & Guo(PW) & 2.86 & 2.72 & \quart{2}{25}{14} & 48\\
1 & \textbf{WHAT} & 4.75 & 4.46 & \quart{12}{40}{31} & 16\\
\hline 2 & Sarkar & 5.67 & 6.97 & \quart{6}{62}{39} & 48\\
2 & Siegmund & 6.98 & 7.13 & \quart{16}{63}{51} & 57\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{LLVM} & \textbf{} & \textbf{} & \textbf{}&\textbf{}& \\\hline
1 & Guo(PW) & 3.09 & 2.98 & \quart{0}{21}{10} & 64\\
1 & \textbf{WHAT} & 3.32 & 1.05 & \quart{9}{7}{12} & 32\\
1 & Sarkar & 3.72 & 0.45 & \quart{13}{3}{15} & 62\\
1 & Guo(2N) & 4.99 & 5.05 & \quart{11}{36}{24} & 22\\
\hline 2 & Siegmund & 8.50 & 8.28 & \quart{21}{58}{49} & 43\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{SQLite} & \textbf{} & \textbf{} & \textbf{} & \textbf{}&\\\hline
1 & Sarkar & 3.44 & 0.10 & \quart{0}{0}{0} & 925\\
\hline 2 & \textbf{WHAT} & 5.60 & 0.57 & \quart{7}{2}{8} & 64\\
\hline 3 & Guo(2N) & 8.57 & 7.30 & \quart{2}{28}{19} & 78\\
3 & Guo(PW) & 8.94 & 6.24 & \quart{6}{24}{20} & 566\\
\hline 4 & Siegmund & 12.83 & 17.0 & \quart{16}{63}{35} & 566\\
\hline
\rowcolor{lightgray}\arrayrulecolor{lightgray}
\textbf{x264} & \textbf{} & \textbf{} & \textbf{}& \textbf{}&\\\hline
1 & Sarkar & 6.64 & 1.04 & \quart{4}{2}{5} & 93\\
1 & \textbf{WHAT} & 6.93 & 1.67 & \quart{4}{4}{6} & 32\\
1 & Guo(2N) & 7.18 & 7.07 & \quart{0}{15}{6} & 32\\
1 & Guo(PW) & 7.72 & 2.33 & \quart{4}{5}{8} & 81\\
\hline 2 & Siegmund & 31.87 & 21.24 & \quart{32}{47}{61} & 81\\
\hline
\end{tabular}}
\end{minipage}
\caption{Mean MRE($\mu$) seen in 20 repeats. Mean MRE is the prediction error as described in Equation~\ref{eq:err} and STDev ($\sigma$) is the standard deviation of the MREs found during multiple repeats.
Lines with a dot in the middle
(e.g. \protect \quartex{3}{13}{13})
show the mean as a round dot within the IQR (and if the IQR is very small, only a round dot will be visible).
All the results are sorted by the mean values: a lower mean MRE is better than a larger one.
The left-hand side column (\textit{rank}) ranks the various techniques; e.g., when comparing the various techniques for Apache, all the techniques have the same rank since their mean values are not statistically different. \textit{Rank} is computed using Scott-Knott, bootstrapping at 95\% confidence, and the A12 test.
}
\label{fig:stats}
\end{figure*}
\subsection{RQ4}
\begin{center}
{\em How good is {\bf WHAT}\xspace compared to the state of the art of learning performance predictors from configurable software systems?}
\end{center}
We compare {\bf WHAT}\xspace with the three state-of-the-art predictors proposed in the literature~\cite{siegmund2012predicting}, \cite{guo2013variability}, \cite{sarkar2015cost}, as discussed in Section~\ref{sect:addit}. Note that all approaches use regression-trees as predictors, except Siegmund's approach, which uses a regression function derived using linear programming.
The results were studied using non-parametric tests, as also used by Arcuri and Briand at ICSE
'11~\cite{mittas13}. For testing statistical significance,
we used a non-parametric bootstrap test at 95\% confidence~\cite{efron93} followed by
an A12 test to check that any observed differences were not trivially small effects;
i.e., given two lists $X$ and $Y$, count how often numbers in the former list are larger
(and where there are ties, add a half mark):
$a=\frac{1}{|X|\cdot|Y|}\sum_{x\in X}\sum_{y\in Y}\left(\#(x>y) + 0.5\cdot\#(x=y)\right)$
(as per Vargha~\cite{Vargha00}, we say that a ``small'' effect has $a <0.6$).
Lastly, to generate succinct reports, we use the Scott-Knott test to recursively
divide our optimizers. This recursion used A12 and bootstrapping
to group together subsets that are (a)~not significantly different and are (b)~not
just a small effect different to each other. This use of Scott-Knott is endorsed
by Mittas and Angelis~\cite{mittas13}
and by Hassan et al.~\cite{7194626}.
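A minimal sketch of the A12 computation (the bootstrap and the Scott-Knott recursion are elided):
\begin{verbatim}
def a12(xs, ys):
    # Vargha-Delaney effect size: probability that a value drawn
    # from `xs` beats one from `ys`, counting ties as half a win.
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)
    return wins / (len(xs) * len(ys))

# As per Vargha, a12(...) < 0.6 is read as a "small" effect.
\end{verbatim}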
As seen in Figure~\ref{fig:stats}, the FW heuristic of Siegmund et al. (i.e., the sampling approach using the fewest number of configurations) has the highest error rate and the highest standard deviation on that error rate (four out of six times). Hence, we cannot recommend this method or, if one wishes to use this method, we recommend using the other sampling heuristics (e.g., HO, HS) to make more accurate predictions (but at the cost of many more measurements). Moreover, the size of the standard deviation of this method causes further difficulties in estimating which configurations are those exhibiting a large prediction error.
As to the approach of Guo et al. (with PW), it does not stand out on any of our measurements. Its error results are within 1\% of {\bf WHAT}\xspace{}'s; its standard deviations are usually larger; and it requires much more data than {\bf WHAT}\xspace (see the Evaluations column of Figure~\ref{fig:stats}).
In terms of the number of measure samples required to build a model, the right-hand column of Figure~\ref{fig:stats} shows that {\bf WHAT}\xspace requires the fewest samples except for two cases: the approach of Guo et al. (with 2N) working on BDBC and LLVM. In both these cases, the mean error and standard deviation on the error estimate is larger than {\bf WHAT}\xspace. Furthermore, in the case of BDBC, the error values
are $\mu=14\,\%$, $\sigma=13\,\%$, which are much larger
than {\bf WHAT}\xspace{}'s error scores of $\mu=6\,\%$, $\sigma=5\,\%$.
Although the approach of Sarkar et al. produces an error rate that is sometimes less than that of {\bf WHAT}\xspace, it requires the highest number of measurements. Moreover, {\bf WHAT}\xspace\textquotesingle s accuracy is close to Sarkar\textquotesingle s approach (1\% to 2\% difference). Hence, we cannot recommend this approach either.
Table~\ref{tab:measurements} shows the number of evaluations used by each approach. We see that most state-of-the-art approaches often require many more samples than
{\bf WHAT}\xspace{}. Using the fewest samples, {\bf WHAT}\xspace achieves
standard deviations within 1--2\,\% of the lowest
and error rates within 1--2\,\% of the lowest.
The exception is Sarkar's approach, which has 5\,\% lower mean error
rates (in BDBC, see the Mean MRE column of figure~\ref{fig:stats}). However,
as shown on the right-hand side of Table~\ref{tab:measurements}, Sarkar's approach needs nearly three times
more measurements than {\bf WHAT}\xspace.
\noindent To summarize, there are two cases in Figure~\ref{fig:stats} where {\bf WHAT}\xspace performs worse than, at least, one
other method:
\begin{itemize}
\item
SQLite: The technique proposed by Sarkar et al. does better than {\bf WHAT}\xspace (3.44 vs 5.6)
but, as shown in the final column of Figure~\ref{fig:stats},
does so at the cost of $\frac{925}{64} \approx 15$ times more evaluations than {\bf WHAT}\xspace.
In this case, a pragmatic engineer could well prefer our solution over that of Sarkar et al. (since
the number of evaluations performed by Sarkar et al.\ is more than an order of magnitude larger than that of {\bf WHAT}\xspace).
\item BDBC: Here again, {\bf WHAT}\xspace is not doing the best but, compared to the number of evaluations required by all other solutions, it is not doing particularly bad.
\end{itemize}
\noindent Given
that the overall reduction of the error is small (5\,\% difference
between Sarkar and {\bf WHAT}\xspace in mean error), tripling the data-collection cost is
often not feasible in a practical context and might not justify the small additional benefit in accuracy.
\begin{myshadowbox}
Based on the results of figure~\ref{fig:stats}, we answer {\bf RQ4} with ``yes'',
since {\bf WHAT}\xspace yields predictions that are similar to or more accurate than prior
work, while requiring fewer samples.
\end{myshadowbox}
\begin{table}[t]
\caption{Comparison of the number of the samples
required with the state of the art. The grey colored cells indicate the approach that requires the lowest number of samples. We notice that WHAT and Guo (2N) use less data compared to the other approaches. The high error rate of Guo (2N), accompanied by high variability in the predictions, makes WHAT our preferred method.}\label{tab:measurements}
\vspace{2ex}
\centering
\small
\begin{tabular}{lrrrrr}
\toprule
& \multicolumn{5}{c}{{Samples}} \\ \cmidrule{2-6}
\multirow{-2}{*}{\textbf{}} & {Siegmund} & {Guo (2N)} & {Guo (PW)} & {Sarkar} & {WHAT} \\ \midrule
\textbf{Apache} & 29 & 181 & 29 & 55 & \cellcolor[HTML]{C0C0C0}16 \\
\textbf{BDBC} & 139 & \cellcolor[HTML]{C0C0C0}36 & 139 & 191 & 64 \\
\textbf{BDBJ} & 48 & 52 & 48 & 57 & \cellcolor[HTML]{C0C0C0}16 \\
\textbf{LLVM} & 62 & \cellcolor[HTML]{C0C0C0}22 & 64 & 43 & 32 \\
\textbf{SQLite} & 566 & 78 & 566 & 925 & \cellcolor[HTML]{C0C0C0}64 \\
\textbf{X264} & 81 & \cellcolor[HTML]{C0C0C0}32 & 81 & 93 & \cellcolor[HTML]{C0C0C0}32 \\ \bottomrule
\end{tabular}
\end{table}
\section{Why does it work?}
In this section, we present an in-depth analysis to understand why our sampling technique (based on a spectral learner) achieves such low mean error rates while being stable (low variance). We hypothesize that the configurations of a system lie on a low-dimensional manifold.
\subsection{History}
Menzies et al.~\cite{me12d} demonstrated how to exploit the underlying dimension to cluster data to find local homogeneous data regions in an otherwise heterogeneous data space. The authors used an algorithm called WHERE (see Section~\ref{rtlearning}), which recurses on two dimensions synthesized in linear time using a technique called FASTMAP~\cite{Faloutsos1995}. The use of the underlying dimension has been endorsed by various other researchers~\cite{bettenburg2012think, deiters2013using, bettenburg2015towards, zhang2016cross}. There are numerous other methods in the literature that are used to learn the underlying dimensionality of a data set, such as Principal Component Analysis (PCA)~\cite{jolliffe2002principal}\footnote{WHERE is an approximation of the first principal component.}, Spectral Learning~\cite{shi2000normalized} and Random Projection~\cite{bingham2001random}. These algorithms use different techniques to identify the underlying, independent/orthogonal dimensions to cluster the data points and differ with respect to computational complexity and accuracy. We use WHERE since it is computationally efficient ($O(2N)$) while still being accurate.
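A minimal sketch of the FASTMAP-style projection at the heart of WHERE: pick any point, find the point farthest from it (``east''), then the point farthest from east (``west''), and place every configuration on the east--west axis via the cosine rule; WHERE's recursion on the resulting projections is elided, and the helper name \texttt{fastmap\_axis} is ours:
\begin{verbatim}
import numpy as np

def fastmap_axis(X):
    # X: (n_configs, n_options) array; assumes east != west.
    d = lambda a, b: np.linalg.norm(a - b)
    anyone = X[0]
    east = max(X, key=lambda p: d(p, anyone))  # farthest from anyone
    west = max(X, key=lambda p: d(p, east))    # farthest from east
    c = d(east, west)
    # Cosine rule: coordinate of each point on the east-west axis.
    return np.array([(d(p, east)**2 + c**2 - d(p, west)**2) / (2 * c)
                     for p in X])
\end{verbatim}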
\subsection{Testing Technique}
Given our hypothesis --- that the configuration space lies on a lower-dimensional hyperplane --- it is imperative to demonstrate that the intrinsic dimensionality of the configuration space is less than the actual dimension. To formalize this notion, we borrow the concept of correlation dimension from the domain of physics~\cite{grassberger2004measuring}. The correlation dimension of a dataset with $k$ items is found by computing the number of item pairs within radius $r$ (where $r$ is the Euclidean distance between two configurations) while varying $r$. This is then normalized by the number of connections between the $k$ items to obtain the expected number of neighbors at distance $r$. This can be written as:
\begin{equation}
C(r) = \frac{2}{k(k-1)} \displaystyle\sum_{i=1}^{k} \displaystyle\sum_{j=i+1}^{k} I(\|x_i - x_j\| < r)
\end{equation}
where
$$
I(x < y) = \begin{cases}
1, & \text{if } x < y\\
0, & \text{otherwise}
\end{cases}
$$
Given the dataset with $k$ items and a range of distances $[r_0, r_{\max}]$, we estimate the intrinsic dimensionality as the mean slope between $\ln(C(r))$ and $\ln(r)$.
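A minimal sketch of this estimate, assuming Euclidean distances between configuration vectors and at least two radii with $C(r)>0$:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, radii):
    # X: (k, n_options) array; radii: increasing values of r.
    dists = pdist(X)              # each pair counted exactly once
    k = X.shape[0]
    C = np.array([np.sum(dists < r) / (k * (k - 1) / 2.0)
                  for r in radii])
    keep = C > 0                  # log C(r) undefined where C(r)=0
    slopes = np.gradient(np.log(C[keep]),
                         np.log(np.asarray(radii)[keep]))
    return slopes.mean()          # intrinsic (fractal) dimension
\end{verbatim}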
\subsection{Evaluation}
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{Figures/underlying_dimension}
\caption{The actual dimensions are shown on the x-axis and intrinsic dimensionality is shown on the y-axis. The points are annotated with the names of the corresponding software system. The intrinsic dimensionality of the systems are much lower than the actual dimensionality (number of columns in the dataset).}
\label{fig:underlying_d}
\end{figure}
On the configuration space of our subject systems, we observe that {the intrinsic dimensionality of the software system is much lower than the actual dimension}. Figure~\ref{fig:underlying_d} presents the intrinsic dimensionality along with the actual dimensions of the software systems. If we take a look at the intrinsic dimensionality and compare it with the actual dimensionality, then it becomes apparent that the configuration space lies on a lower dimensional hyperplane. For example, SQLite has 39 configuration options, but the intrinsic dimensionality of the space is just 1.61 (this is a fractal dimension). At the heart of {\bf WHAT}\xspace is WHERE (a spectral clusterer), which uses the approximation of the first principal component to divide the configuration space and hence can take advantage of the low intrinsic dimensionality.
In summary, our observations indicate that the intrinsic dimension of the configuration space is much lower than its actual dimension. Hence, clustering based on the intrinsic dimensions rather than the actual dimensions is more effective. In other words, configurations with similar performance values lie closer together along the intrinsic hyperplane than along the actual dimensions, which may be the reason why {\bf WHAT}\xspace achieves empirically good results.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figures/param_tuning}
\caption{The trade-off between the number of evaluations (affected by the size of the sub-region) and the performance (MRE) of the model generated.}
\label{fig:param_tuning}
\end{figure}
\section{Discussion}
\subsection{What is the trade-off between the MRE and the number of measurements?}
{\bf WHAT}\xspace{} requires the practitioner to define a stopping criterion (the size of the sub-region) before the
process commences.
The stopping criterion preempts the process of recursive division of regions
based on projection values of the configurations. In our experiments, the
number of measurements or the size of the training set depends
on the stopping criterion. An early termination of the
sampling process would lead to a very inaccurate performance model, while
late termination would result in resource wastage. Hence, it is very
important to discuss the trade-off between the upper bound of the size of the sub-region and
the MRE of the model built. In Figure~\ref{fig:param_tuning}, we show the trade-off
between the MRE found and the number
of measurements (size of training set). The trade-off characterizes
the relationship between two conflicting objectives. For example,
for Apache, the point with size of sub-region $=4\cdot \sqrt{N}$ requires very few measurements, but the MRE of the resulting model is the highest, whereas the point with size of sub-region $=\frac{1}{4}\cdot \sqrt{N}$ requires a large
number of measurements, but the MRE of the resulting model is the lowest. Since
our objective is to minimize the number of measurements while
reducing MRE, we assign the value of $\sqrt{N}$ to the upper bound of the size of the sub-region for the
purposes of our experiments.
\subsection{What is the relationship between intrinsic dimensionality and difficulty of a problem (or dataset)?}
Houle et al.~\cite{houle2012generalized} observe a clear correlation between the dimensionality of a problem space and loss of performance; that is, a problem represented in lower dimensions is easier to model than the same problem represented in higher dimensions. In a similar vein, Domingos~\cite{domingos2012few} explains how our intuitions fail in higher dimensions and how algorithms that work in lower dimensions do not work in higher dimensions. This is because the size of the training data required to create a generalized `model' for such a high-dimensional space is exponentially large\footnote{Another challenge of a high-dimensional search space is the amount of noise induced by irrelevant dimensions.}. This is generally referred to as the ``curse of dimensionality'', but what counteracts it is the ``blessing of non-uniformity''. The blessing of non-uniformity refers to the fact that the valid solutions in a space are not spread uniformly across the problem space but concentrated on or near a lower-dimensional manifold. Hence, it is a rule of thumb among machine learning practitioners to reduce the dimension of a data set by projecting the data onto a lower-dimensional orthogonal subspace that captures the variation in the data. Burges~\cite{burges2010dimension} mentions that, if data lies in a lower-dimensional space (with lower intrinsic dimensions), then modeling the data directly in the lower-dimensional manifold makes it much easier to model. Our results are in line with the observation made by Burges, as we show that few samples are enough to model a large (sometimes millions of configurations) space, which can be attributed to the low intrinsic dimensionality of the space.
~\\
There are several other techniques (similar to {\bf WHAT}\xspace) that also exploit the non-uniformity of the data points, such as Random Projections~\cite{dasgupta2000experiments} and Auto-encoders~\cite{hinton2006reducing}. The central intuition is similar to our work: problems that contain intrinsic lower dimensions should be easier/cheaper to model than those with higher intrinsic dimensionality.
That said, to the best of our knowledge, we are the first to propose exploring the lower intrinsic dimensionality of configuration spaces, and exploit those lower dimensions for the purposes of sampling.
\subsection{What are the limitations of WHAT?}
The limitations of {\bf WHAT}\xspace{} are:
\begin{itemize}
\item {\bf WHAT}\xspace{} cannot be used for non-numeric configuration options. However, it can be used for numeric configuration options, not just Boolean options (most related work only supports Boolean options).
\item The configurable systems used in this paper are fairly easy to model using machine learning techniques such as CART, but there exist software systems that cannot be modeled using CART (even using 40\% of all possible configurations). For these systems, {\bf WHAT}\xspace{} cannot be used to build accurate performance models.
\item The effectiveness of {\bf WHAT}\xspace{} depends on projecting the configurations onto the approximated first principal component. The approximation of the first principal component requires calculating the farthest points (the points that are most dissimilar) in the configuration space using the Euclidean distance. However, there may be systems where the Euclidean distance (as used in this paper) cannot find the most dissimilar points~\cite{chen2016sampling}. For such systems, {\bf WHAT}\xspace{} in its current form will not be effective (addressing this is part of our future work).
\item Finding a near-optimal configuration can become challenging when the configuration space is non-convex. However, we did not find such systems during our empirical evaluations. The other point we would like to stress is that we wanted to build a tool that can differentiate between good and not-so-good configurations using few evaluations; our goal is not to find the best configuration but rather near-optimal solutions.
\end{itemize}
\section{Reliability and Validity}\label{sect:construct}
{\em Reliability} refers to the consistency of the results obtained
from the research. For example, how well could independent researchers
reproduce the study? To increase external
reliability, we took care to either clearly define our
algorithms or use implementations from the public domain
(SciKitLearn)~\cite{scikit-learn}. Also, all the data used in this work are available
on-line in the PROMISE\footnote{\url{http://openscience.us/repo/performance-predict/cpm.html}} code repository and all our algorithms
are on-line at github.com/ai-se/where.
{\em Validity} refers to the extent to which a piece of research actually
investigates what the researcher purports to investigate~\cite{SSA15}.
{\em Internal validity} checks if the differences found in
the treatments can be ascribed to the treatments under study.
One threat to internal validity of our experiments is the choice
of {\em training and testing} data sets discussed in
\fig{systems}. Recall that, while all our learners used the same
{\em testing} data set, our untuned learners were only given
access to {\em training} data.
Another threat to internal validity is {\em instrumentation}. The very low $\mu$ and $\sigma$ error values
reported in this study are so small that it is reasonable to ask whether they are due to some instrumentation
quirk, rather than due to using a clever sample strategy:
\begin{itemize}
\item
Our low $\mu$ values are consistent with prior work~\cite{sarkar2015cost};
\item
As to our low $\sigma$ values, we note that, when the error values are so close to 0\,\%, the standard
deviation of the error is ``squeezed'' between zero and those errors. Hence, we would expect that
experimental rigs
that generate error values on the order of 5\,\% using \eq{err} should have $\sigma$ values of $0\le \sigma \le 5$ (e.g., like those seen in our introduction).
\end{itemize}
Regarding SQLite, we cannot measure all possible configurations in reasonable time. Hence, we sampled only 100 configurations to compare prediction and actual performance values. We are aware that this evaluation leaves room for outliers.
Also, we are aware that measurement bias can cause false interpretations~\cite{me12d}. Since we aim at predicting performance for a special workload, we do not have to vary benchmarks.
We aimed at increasing the {\em external validity} by choosing software systems from different domains with different configuration mechanisms and implemented with different programming languages. Furthermore, our subject systems are deployed and used in the real world. Nevertheless, assuming the evaluations to be automatically transferable to all configurable software systems is not fair. To further strengthen external validity, we run the model (generated by \textit{{\bf WHAT}\xspace + $S_1$}) against other optimizers, such as NSGA-II and differential evolution~\cite{storn1997differential}. That is, we validated whether the learned models are not only applicable for GALE style of perturbation. In Table~\ref{fig:external_validity}, we see that the models developed are valid for all optimizers, as all optimizers are able to find the near optimal solutions.
\section{Related Work}
\label{sect:related}
In 2000, Shi and Malik~\cite{shi2000normalized} claimed the term ``spectral clustering'' as a reference to their normalized cuts
image
segmentation algorithm that partitions data through a spectral (eigenvalue) analysis of the
Laplacian representation of the similarity graph between instances in the data.
In 2003, Kamvar et al.~\cite{kamvar2003spectral} generalized that definition saying that ``spectral learners''
were any data-mining algorithm that first replaced the raw
dimensions with those inferred from the spectrum (eigenvalues) of the affinity (a.k.a.\ distance)
matrix of the data, optionally adjusted via some normalization technique.
Our clustering based on the first principal component splits the data on an approximation to an eigenvector, found at each recursive level
of the data (as described in \tion{spect}).
Hence, this method is a ``spectral clusterer'' in the general Kamvar sense.
Note that,
for our data, we have
not found that Kamvar's normalization matrices are needed.
Regarding sampling, there is a wide range of methods known as experimental designs or designs of experiments~\cite{pukelsheim2006optimal}. They usually rely on fractional factorial designs, as in the combinatorial testing community~\cite{Kuhn:2013}.
Furthermore, there is a recent approach that learns {\em per\-for\-mance-influence models} for configurable software systems~\cite{SGA+15}. While this approach can handle even numeric features, it has similar sampling techniques for the Boolean features as reported in their earlier work~\cite{siegmund2012predicting}. Since we already compared to that earlier work and do not consider numeric features, we did not compare our work to performance-influence models.
\section{Conclusions \& Future Work}
Configurable software systems today are widely used in practice, but they impose challenges
regarding finding performance-optimal configurations. State-of-the-art approaches require too
many measurements or are prone to large variances in their performance predictions. To overcome
these limitations, we have proposed a fast spectral learner, called {\bf WHAT}\xspace, along with three
new sampling techniques. The key idea of {\bf WHAT}\xspace is to explore the configuration space with
eigenvalues of the features used in a configuration to determine exactly those configurations
for measurement that reveal key performance characteristics.
This way, we can study many closely associated configurations with only a few measurements.
We evaluated our approach on six real-world configurable software systems borrowed from the
literature. Our approach achieves similar to lower error rates, while being stable when
compared to the state of the art. In particular, with the exception of Berkeley DB, our
approach is more accurate than the state-of-the-art approaches by Siegmund et
al.~\cite{siegmund2012predicting} and Guo et al.~\cite{guo2013variability}. Furthermore, we
achieve a similar prediction accuracy and stability as the approach by Sarkar et
al~\cite{sarkar2015cost}, while requiring a far smaller number of configurations to be
measured. We also demonstrated that our approach can be used to build cheap and stable
surrogate prediction models, which can be used by off-the-shelf optimizers to find the
performance-optimal configuration. We use the correlation dimension to demonstrate how the high dimensional configuration space of our subject systems has a low intrinsic dimensionality, which might be the reason why {\bf WHAT}\xspace performs so well on these datasets.
As to future work, we plan to explore the implications of {\bf WHAT}\xspace{}. Currently, {\bf WHAT}\xspace{} uses a static number of evaluations based on the total number of possible configurations ($\sqrt{N}$), which may not be useful for systems that are more difficult to model than the systems used in this study. Hence, we need a progressive strategy that can progressively sample new configurations and stop the sampling process based on either the performance score achieved or the budget allocated. Finally, the current version of {\bf WHAT}\xspace{} assumes that all features are of similar importance and uses a Euclidean distance to differentiate between good and `not-so-good' solutions. There are certainly systems where not all features are equally important or where there is redundancy in terms of configuration options. Hence, future work could use feature-weighting techniques to find the weight (importance) of configuration options and use that information to differentiate between configurations.
\begin{acknowledgement}
The work is partially funded by NSF awards \#1506586. Sven Apel's work has been supported by the German Research Foundation (AP 206/4 and AP 206/6). Norbert Siegmund's work has been supported by the German Research Foundation (SI 2171/2).
\end{acknowledgement}
\bibliographystyle{plain}
\input{activeconfig.bbl}
\end{document}
\section{Introduction}
The computation of solutions maximizing the \emph{social welfare}, i.e., maximizing the total ``happiness'' of the advertisers, in sponsored search auctions (SSAs) strongly depends on how such happiness is defined. Clearly, the more clicks their ads receive, the more content advertisers are. A naive measure to forecast clicks, named click through rate (CTR), would only consider the \emph{quality} of the ad itself (``better'' ads receive more clicks). However, one cannot overlook the importance of \emph{externalities} in this context: specifically, \emph{slot-dependent externalities} (i.e., ads positioned higher in the list have a higher chance to get a click) and \emph{ad-dependent externalities} (e.g., the ad of a strong competitor -- e.g., BMW -- shown in the first slot can only decrease the number of clicks that the ad -- e.g., of Mercedes -- in the second slot gets). Much research focused on modeling externalities in SSAs and providing algorithms for the resulting optimization problem.
At one end of the scale, there is the simple, yet neat, \emph{cascade} model \cite{cascade,DBLP:conf/wine/AggarwalFMP08}. In the cascade model, users are assumed to scan the ads \emph{sequentially} from top to bottom and the probability with which a user clicks on the ad $a_i$ shown in slot $s_m$ is the product of the intrinsic quality $q_i$ of the ad, the relevance $\lambda_m$ of slot $s_m$ ({slot-dependent externality}) and of \emph{all} the ads allocated to slots $s_1$ through $s_{m-1}$. A host of results is proved in this model as the input parameters vary (e.g., $\lambda_m \in \{0,1\}$ rather than $\lambda_m \in [0,1]$). In its more general version, the optimization problem of social welfare maximization is conjectured to be NP-hard, shown to be in APX (i.e., a $1/4$-approximation algorithm is given) and shown to admit a QPTAS (a quasi-polynomial time approximation scheme) \cite{cascade}. In addition to its unknown computational complexity, the cascade model has two main limitations as a model of externalities in SSAs. First, it assumes that users have unlimited ``memory'' and that, consequently, an ad in slot $s_1$ exerts externalities on ads many slots below. This is experimentally disproved in \cite{Segal}, wherein it is observed that the \emph{distance} between ads matters. Second, it assumes that the externality of an ad is the same no matter which ad it is exerted on. Nevertheless, while BMW can have a strong externality on Mercedes since both makers attract the high end of the market, the externality on makers in a different price bracket, e.g., KIA, is arguably much less strong.
At the other end of the scale, we can find models
that try to address these limitations.
In \cite{Fotakis} Fotakis \emph{et al.} propose a model whereby users have limited memory, i.e., externalities occur only within a \emph{window} of $c$ consecutive slots, and consider the possibility that externalities boost CTRs (\emph{positive} externalities) as well as reduce CTRs (\emph{negative} externalities).
In particular, the externalities of an ad apply to ads displayed $c$ slots below (\emph{forward} externalities) and ads displayed $c$ slots above (\emph{backward} externalities).
Moreover, in order to model the fact that externalities might have ad-dependent effect, they introduce the concept of \emph{contextual graph}, whereby vertices represent ads and edge weights represent the externality between the endpoints.
Their model turned out to be too rich to allow tight and significant algorithmic results
(their main complexity results apply to the arguably less interesting case of forward positive externalities).
\subsection{Our contribution}
The present work can be placed in the middle of this imaginary spectrum of models for externalities in SSAs. Our main aim is to enrich the literature by means of more general ways to model slot- and ad-dependent externalities, while giving a (nearly) complete picture of the computational complexity of the problem. We do not attempt to explicitly model the user's behavior but
bridge the aforementioned models in order to overcome the respective weaknesses.
In detail, we enrich the naive model of SSAs by adding the concepts of window
and contextual externalities, while keeping ad- and slot-dependent externalities factorized as in the cascade model.
We also complement much of the known literature by studying a model wherein the externalities coming from ads and slots cannot be expressed as a product.
Our study gives rise to a number of novel and rich models for which we can provide (often tight) approximability results (see Table \ref{tab:results} for an overview).\footnote{It is important to notice that, as common in the literature on SSAs, the number of slots is a parameter of the problem (rather than fixed) for otherwise the computational problem becomes easy (by, e.g., running the color coding algorithm).}
Since the case of \emph{selfish} advertisers is of particular relevance in this context, we also initiate the study of mechanism design for the optimization problems introduced and consider the \emph{incentive-compatibility} of our algorithms, i.e., whether they can be augmented with payment functions so to work also with selfish advertisers.
For the version in which slot- and ad-dependant externalities cannot be factorized and externalities occur in a window of size $c$, we prove that the optimization problem is in $P$, if $c$ is a constant. We consider the LP relaxation of the ILP describing the problem and prove that the integrality gap is 1.
\begin{table*}[t]
\centering
\begin{tabular}{l|c|c|c|c|c|}
\cline{2-6}
& \multicolumn{2}{c|}{\textbf{FNE$_{aa}$$(c)$}} & \multicolumn{2}{c|}{\textbf{FNE$_{aa}$$(K)$}} & \multirow{2}{*}{\textbf{FNE$_{sa}$$(c)$}} \\ \cline{2-5}
& \textbf{nr} & \textbf{r} & \textbf{nr} & \textbf{r} & \\ \hline
\multicolumn{1}{|l|}{\textbf{LB}} & APX-hard & \multirow{2}{*}{\rule{0ex}{13pt} APX-complete} & \multirow{2}{*}{\rule{0ex}{13pt} poly-APX-complete} & \multirow{2}{*}{\rule{0ex}{13pt} APX-complete} & \multirow{2}{*}{\rule{0ex}{13pt} P$\;^\star$} \\ \cline{1-2}
\multicolumn{1}{|l|}{\textbf{UB}} & \rule{0ex}{13pt}$\frac{\log(N)}{2\min\{N,K\}}^\star$ & & & & \\ \hline
\multicolumn{1}{|l|}{\textbf{SP}} & \rule{0ex}{13pt} $\frac{\log(N)}{2\min\{N,K\}}^\star$ & $1/2$ & $1/K$ & $1/2$ & $1\;^\star$ \\ \hline
\end{tabular}
\caption{Summary of our results: LB (UB, resp.) stands for lower (upper, resp.) bound on the approximation of the problem; the row SP, instead, contains the approximation guarantees we obtain with truthful mechanisms. Results marked by `$\star$' require $c=O(1)$. APX-completeness of a subclass of FNE$_{aa}$$(c)$-nr is also given. (See the model for details on the notation.)}\label{tab:results}
\end{table*}
For the variant of the problem with factorized
externalities, contextual ad-dependent externalities and window of $c$ slots, a distinction on the effects that empty slots have on users' behavior is useful. In a sort of whole page optimization fashion~\cite{wholepage}, we think of those slots as occupied by a \emph{special} (fictitious) ad used to refresh (e.g., by means of pictures) the user's attention.
If the special ad cannot be used (or, equivalently, the user's attention cannot be reset) we prove that the allocation problem is poly-APX-complete whenever users have a ``large'' memory (i.e., the window equals the number of slots $K$). Specifically, we give an approximation preserving reduction from the Longest Path problem and design an
approximation algorithm using several different ideas and sources of approximation; interestingly, its approximation guarantee matches the best known approximation guarantee for Longest Path. However, we prove that this algorithm cannot be used in any truthful mechanism and note that a simple single-item second price auction gives a weaker, yet close, truthful approximation. We complement the results for this model with the identification of tractable instances for which we provide an exact polynomial-time algorithm. For $c<K$, instead, we are unable to determine the exact hardness of approximating the problem in general. We pair the APX-hardness proof with a number of approximation algorithms that assume constant $c$. The first, based on color coding \cite{colorcoding}, returns a non-constant approximation on any instance of SSA. The second assumes that the contextual graph is complete and returns a solution which (roughly) guarantees a $\gamma_{\min}^c$ fraction of the optimum social welfare, $\gamma_{\min}$ being the minimum edge weight in the graph. Interestingly, this algorithm shows the APX-completeness of the subclass of instances having constant $\gamma_{\min}$ (we indeed further provide a hardness result for instances with complete contextual graphs). We believe the tight result for this subclass of instances to be quite relevant. In fact, complete contextual graphs are quite likely to occur in real life: the results returned by a keyword search are highly related to one another, and, as such, each pair of ads has a non-null externality, however small.
If the special ad can be used, the problem becomes easier and turns out to be APX-complete, for any $c$. We first prove the problem with $c=K$ to be APX-hard, via a reduction from (a subclass of) ATSP (i.e., asymmetric version of TSP) and then surprisingly connect instances with $c<K$ to instances with $c=K$ by reducing the case with $c=1$ to the case with $c=K$ and \emph{binary} externalities (intuitively, the weights of the edges of the contextual graph can be either 0 or 1). We finally observe how a simple greedy algorithm cleverly uses the special ad to return $1/2$-approximate solutions and leads to a truthful mechanism.
\section{Model}
In a SSA we have $N$ ads and $K$ slots. We assume that each ad corresponds to an advertiser; this is w.l.o.g. from the optimization point of view.
We denote each ad by $a_i$ with $i \in \mathcal{N}$, where $\mathcal{N}=\{1,\dots, N\}$ is the set of indices of the ads.
We introduce a fictitious ad, denoted by $a_\bot$, s.t., when allocated, the slot is left empty.
The $K$ slots are denoted by $s_m$ with $m \in \mathcal{K}$, $\mathcal{K}=\{1,\dots, K\}$ being the set of slot indices s.t. $s_1$ is the slot at the top of the page and $s_K$ is at the bottom.
We also have a fictitious slot, denoted by $s_\bot$ s.t. an ad allocated to $s_\bot$ is not displayed in the webpage. Each ad $a_i$ is characterized by: (\emph{i}) the \emph{quality} $q_i \in [0,1]$, i.e., the probability a user clicks on ad~$a_i$ when he observes it, irrespectively of other externalities; (\emph{ii}) the valuation $v_i \in \mathbb{R}^+$ advertiser~$i$ associates to his ad being clicked by a user.
The fictitious ad $a_\bot$ has $q_\bot = v_\bot = 0$.
A feasible allocation of ads to slots, denoted as $\theta$, consists of an ordered sequence of ads $\theta=\langle a_1, \ldots, a_K\rangle$ s.t. the ads are ordered by increasing slot number, i.e., $a_1$ is allocated to the top slot, $a_K$ to the bottom one. Every ad $a_i$ can be allocated to at most one slot, whereas $a_\bot$ can be allocated to more than one slot. The set of all possible feasible allocations is denoted as $\Theta$. With a slight abuse of notation, we let (\emph{i}) $\theta(a_i)$ denote the index of the slot ad $a_i$ is allocated to, and (\emph{ii}) $\theta(s_m)$ denote the index of the ad allocated to $s_m$.
Given $\theta \in \Theta$, the \emph{click through rate} of ad $a_i$, denoted as $CTR_i(\theta)$, is the probability ad $a_i$ is clicked by the user taking externalities into consideration. The optimal allocation $\theta^*$ is the one maximizing the \emph{social welfare}, namely: $\theta^* \in \arg \max_{\theta \in \Theta} SW(\theta)$, where $$SW(\theta)= \sum_{i \in \mathcal{N}} CTR_i(\theta) v_i.$$ A $1/\alpha$-approximate solution $\theta$ satisfies $SW(\theta) \geq SW(\theta^*)/\alpha$.
Typically, $CTR_i(\theta)$ defines how the quality $q_i$ of ad $a_i$ is ``perturbed'' by the externalities in terms of click probability. Accordingly,
in general
$CTR_i(\theta)=q_i \Gamma_i(\theta)$, $\Gamma_i(\theta)$ being a function encoding the effect of externalities. E.g., in the cascade model, $$\Gamma_i(\theta)=\Lambda_{\theta(a_i)} \prod_{l=1}^{\theta(a_i)-1}\gamma_{\theta(s_l)},$$ where $\Lambda_{\theta(a_i)}=\prod_{l=1}^{\theta(a_i)}\lambda_l$, $\lambda_m \in [0,1]$, called the \emph{factorized prominence} of~$s_m$, denotes the slot-dependent externality and $\gamma_i\ \forall i \in \mathcal{N}$, called \emph{continuation probability}, denotes the ad-dependent externality. (W.l.o.g., we assume $\Lambda_1=\lambda_1=1$.)
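For concreteness, here is a minimal sketch of the cascade-model CTR computation in Python (0-indexed slots; the lists \texttt{q}, \texttt{lam}, and \texttt{gamma} hold the $q_i$, $\lambda_m$, and $\gamma_i$ above, and the helper name is ours):
\begin{verbatim}
def cascade_ctrs(theta, q, lam, gamma):
    # theta: allocated ad indices, top slot first (lam[0] = 1).
    ctrs, Lam, path = [], 1.0, 1.0
    for m, ad in enumerate(theta):
        Lam *= lam[m]                 # Lambda_m = lam_1 * ... * lam_m
        ctrs.append(q[ad] * Lam * path)
        path *= gamma[ad]             # externality on the ads below
    return ctrs
\end{verbatim}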
Our conceptual contribution rests upon novel and richer ways to define $\Gamma_i(\theta)$, along three main dimensions.
The \emph{first dimension} concerns the \emph{user memory}, a.k.a. \emph{window}. We let $c$ be the number of ads displayed above $a_i$ in $\theta$, from $s_{\theta(a_i)-1}$ to $s_{\theta(a_i)-c}$, that affect $\Gamma_i(\theta)$.
The \emph{second dimension} concerns a generalization of the externalities.
Here we propose two alternative families of externalities, called \emph{sa} (for slot-ad) and \emph{aa} (for ad-ad). The sa-externalities remove the factorization in slot- and ad--dependent externalities: i.e.,
$\lambda_m$ and $\gamma_i$ are substituted by parameters $\gamma_{m,j}\in [0,1]$, $m \in \mathcal{K}$ and $j \in \mathcal{N}$.
When the window is $c$, the CTR is defined as $CTR_i(\theta) = q_i \Gamma_i(\theta)$, where $$\Gamma_i(\theta)= \prod_{m = \max\{1,\theta(a_i)-c\}}^{\theta(a_i)-1} \gamma_{m,\theta(s_m)}.$$
This definition captures the situation in which an ad can affect the ads displayed below it in a different way according to the position in which it is displayed. %
For the aa-externalities, on the other hand,
we preserve the factorization in $\lambda_m$ and $\gamma_i$, but redefine these latter parameters as $\gamma_{i,j}\in[0,1]$ where $a_j$ is the ad that is displayed in the slot just below $\theta(a_i)$.
It is convenient to see the $\gamma_{i,j}$'s as the weights of the \emph{contextual graph} $G=(\mathcal{N},\mathcal{E})$ where the directed edges $(i,j)$ have weight $\gamma_{i,j}>0$ and represent the way ad $a_i$ influences $a_j$. Note that non-edges of $G$ correspond to the pairs of ads $a_i$, $a_j$ s.t. $\gamma_{i,j}=0$. Here, with window $c$, $$\Gamma_i(\theta) = \Lambda_{\theta(a_i)} \prod_{l=\max\{1,\theta(a_i)-c\}}^{\theta(a_i)-1} \gamma_{\theta(s_l),\theta(s_{l+1})}$$ where $\Lambda_m$ is defined as above.
This definition captures the situation in which each ad can affect each other ad in a different way.
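As an illustration (ours, under the same notation), the following Python sketch evaluates $\Gamma_i(\theta)$ for the aa-externalities with window $c$, with \texttt{theta} a top-to-bottom list of ad indices and \texttt{pos} the (0-based) slot of $a_i$:
\begin{verbatim}
def gamma_aa(theta, pos, c, Lam, gamma):
    g = Lam[pos]                           # slot-dependent part
    for l in range(max(0, pos - c), pos):  # at most c edges above a_i
        g *= gamma[theta[l]][theta[l + 1]]
    return g
\end{verbatim}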
The \emph{third dimension} concerns the definition of $\gamma_{m,\bot}$ for the sa-externalities and $\gamma_{i,\bot}$ and $\gamma_{\bot, i}$ for the aa-externalities.
In the model \emph{with reset} we have $\gamma_{m,\bot} = 1$ for sa and $\gamma_{i,\bot} = \gamma_{\bot, i} = 1$ $\forall i \in \mathcal{N} \cup \{\bot\}$ for aa. This variant captures the situation in which slots can be distributed in the page in different positions (a.k.a., slates) and, in order to raise the user's attention, we can allocate some content, e.g. pictures,
that nullifies the externality between the ads allocated before and after the content. In the model \emph{without reset}, $\gamma_{m,\bot} = 0$ for sa and $\gamma_{i,\bot}=\gamma_{\bot, i}=0$ $\forall i \in \mathcal{N} \cup \{\bot\}$ for aa, thus capturing the situation in which leaving a slot empty between two allocated slots does not provide any advantage.
We let FNE$_x(c)$-y be the problem of optimizing the social welfare in our model with \emph{F}orward \emph{N}egative \emph{E}xternalities with window $c$, $x \in \{sa, aa\}$-ex\-ter\-nal\-i\-ties and y $ \in \{$r, nr$\}$ reset (r stands for reset; nr for no reset). When the value of y is not relevant for our results, we talk about FNE$_x(c)$.
We are interested in two particular subclasses of FNE$_{aa}(c)$, namely: (\emph{i}) subclass FNE$_{aa}^+(c)$-y, defined upon a complete contextual graph and such that $0 < \gamma_{\min} = \min_{i,j \in \mathcal{N}, i \neq j} \gamma_{i,j}$ and (\emph{ii}) subclass $\mathcal{B}$--FNE$_{aa}(c)$-y, where $\gamma_{i,j}$ can take values in $\{0,1\}$.
\subsection{Mechanism design}
We use the theory of mechanism design to study the incentive-compatibility of our algorithms \cite{book}.
A \emph{mechanism} ${M}$ is a pair $(A,P)$, where $A: (\mathbb{R}^+)^N \rightarrow \Theta$ is an algorithm that associates to any vector $\mathbf{v}=(v_1, \ldots, v_N)$ of valuations a feasible outcome in $\Theta$
(only valuations are private knowledge). The payment function $P_i:(\mathbb{R}^+)^{N} \rightarrow \mathbb{R}^+$ maps valuation vectors to monetary charges for advertiser~$i$. The aim of each advertiser is to maximize his own utility $u_i(\mathbf{v}, v_i) = CTR_i(A(\mathbf{v})) v_i - P_i(\mathbf{v})$. An advertiser could misreport his true valuation and declare $\hat{v}_i \not = v_i$ when $u_i((\hat{v}_i,\mathbf{v}_{-i}), v_i) > u_i(\mathbf{v}, v_i)$, $\mathbf{v}_{-i}$ being the vector of the valuations of all the agents but $i$. We are then interested in truthful mechanisms.
A mechanism is \emph{truthful} if for any $i \in \mathcal{N}$, $\mathbf{v}_{-i} \in (\mathbb{R}^+)^{N-1}$, $v_i, \hat{v}_i \in \mathbb{R}^+$, $u_i((\hat{v}_i,\mathbf{v}_{-i}), v_i) \leq u_i(\mathbf{v}, v_i)$.
In this setting, a monotone algorithm \emph{must} be used in truthful mechanisms \cite{tardos}.
Algorithm $A$ is monotone if for any $i \in \mathcal{N}$, $\mathbf{v}_{-i} \in (\mathbb{R}^+)^{N-1}$, $CTR_i(A(\hat{v}_i,\mathbf{v}_{-i}))$ is non-decreasing in $\hat{v}_i$. Important for our work is also the family of VCG-like mechanisms, a.k.a., \emph{Maximal In Range (MIR)} mechanisms. An algorithm $A$ is MIR if there exists $\Theta' \subseteq \Theta$ s.t. $A(\mathbf{v}) \in \arg$ $\max_{\theta\in \Theta'} SW(\theta)$ $\forall \mathbf{v} \in \mathbb{R}^{N}$~\cite{NisamRonen}. These algorithms can be augmented with a VCG-like payment so to obtain truthful mechanisms. (VCGs are MIR mechanisms wherein $\Theta' = \Theta$.) We are interested in mechanisms for which both $A$ and $P$ are computable in polynomial time. MIR mechanisms run in polynomial-time if the MIR algorithm does.
As usual in the context of SSA, we adopt a pay-per-click payment scheme, i.e., we charge ${P_i(\mathbf{v})}/{CTR_i(A(\mathbf{v}))}$ when a user clicks on~$a_i$.
\section{FNE$_{sa}(c)$ is in $P$ for constant $c$}
Our presentation focuses on FNE$_{sa}$$(1)$-nr to simplify the notation.
The more general cases when $c>1$ and the reset model is considered are easily obtainable by generalization from FNE$_{sa}$$(1)$, but require a more cumbersome notation without significant new ideas (see discussion at the end of this section).
We first give the ILP formulation of FNE$_{sa}$$(1)$-nr and prove that if there is an optimal fractional solution, then there are at least two feasible integral solutions with the same value of social welfare. Since it is well known, by LP theory, that the ellipsoid algorithm can be forced (in polynomial-time) to output an integral optimal solution, we are able to prove the following:
\begin{theorem}\label{thm:sainP}
For $c=O(1)$, there is a polynomial-time optimal algorithm for FNE$_{sa}$$(c)$.
\end{theorem}
\noindent FNE$_{sa}$$(1)$-nr can be formulated as the following ILP:
{\allowdisplaybreaks
\begin{align}
\max\sum_{m=2}^K \sum_{i \in \mathcal{N}} \sum_{{j \in \mathcal{N}, j \not = i}} \gamma_{m-1, j} q_i v_i x_{j,m,i} & + \sum_{i \in \mathcal{N}} x_{1,i} q_i v_i \nonumber \\
\textrm{subject to:} \hskip 11.4em & \nonumber \\
\sum_{m=2}^K \sum_{{j \in \mathcal{N}, j \not = i}} x_{j,m,i} + x_{1,i} \leq 1 \quad & \quad \forall i \in \mathcal{N} \nonumber \\
x_{1,i} = \sum_{{j \in \mathcal{N}, j \not = i}} x_{i, 2, j} \quad & \quad \forall i \in \mathcal{N} \nonumber \\
\sum_{j \in \mathcal{N}, j \not = i} x_{j,m,i} = \sum_{{j \in \mathcal{N}, j \not = i}} x_{i,m+1,j} \quad & \quad \forall i \in \mathcal{N}, \nonumber \\
\quad & \quad 2 \leq m < K \nonumber \\
\sum_{i \in \mathcal{N}} x_{1, i} = 1 \quad & \quad \label{eq:sumone} \\
\sum_{j \in \mathcal{N}} \sum_{{i \in \mathcal{N}, i \not = j}} x_{j, m, i} = 1 \quad & \quad \forall m \in \mathcal{K}\setminus\{1\} \nonumber \\
x_{1,i} \in \{0,1\} \quad & \quad \forall i \in \mathcal{N} \nonumber \\
x_{j,m,i} \in \{0,1\} \quad & \quad \forall 2 \leq m \leq K, \nonumber \\
\quad & \quad i, j \in \mathcal{N}, i \neq j \nonumber
\end{align}
}
where $x_{j,m,i}=1$ iff $a_i$ is allocated to slot $s_m$, $m>1$, and $a_j$ is allocated to slot $s_{m-1}$; $x_{1,i}=1$ iff $a_i$ is allocated to $s_1$. The objective function and the constraints are rather straightforward and, hence, their description is omitted here.
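Before turning to the proof, we note that the continuous relaxation is easy to set up computationally. A minimal Python sketch is given below (ours; it assumes $K \leq N$, that \texttt{scipy} is available, and that \texttt{gamma[m][j]} stores $\gamma_{m,j}$ with $1$-based slot index, row $0$ unused):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_relaxation(q, v, gamma, K):
    N = len(q)
    trip = [(j, m, i) for m in range(2, K + 1)
            for j in range(N) for i in range(N) if i != j]
    pos = {t: N + k for k, t in enumerate(trip)}  # x_{j,m,i} columns
    nvar = N + len(trip)                          # first N cols: x_{1,i}

    c = np.zeros(nvar)
    for i in range(N):
        c[i] = q[i] * v[i]
    for (j, m, i), k in pos.items():
        c[k] = gamma[m - 1][j] * q[i] * v[i]

    A_eq, b_eq = [], []
    row = np.zeros(nvar); row[:N] = 1             # exactly one ad in s_1
    A_eq.append(row); b_eq.append(1)
    for m in range(2, K + 1):                     # one pair per slot m
        row = np.zeros(nvar)
        for (j, mm, i), k in pos.items():
            if mm == m:
                row[k] = 1
        A_eq.append(row); b_eq.append(1)
    for i in range(N):                            # flow conservation
        if K >= 2:
            row = np.zeros(nvar); row[i] = 1
            for j in range(N):
                if j != i:
                    row[pos[(i, 2, j)]] -= 1
            A_eq.append(row); b_eq.append(0)
        for m in range(2, K):
            row = np.zeros(nvar)
            for j in range(N):
                if j != i:
                    row[pos[(j, m, i)]] += 1
                    row[pos[(i, m + 1, j)]] -= 1
            A_eq.append(row); b_eq.append(0)

    A_ub, b_ub = [], []                           # each ad used at most once
    for i in range(N):
        row = np.zeros(nvar); row[i] = 1
        for (j, m, ii), k in pos.items():
            if ii == i:
                row[k] = 1
        A_ub.append(row); b_ub.append(1)

    return linprog(-c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=(0, 1), method="highs")
\end{verbatim}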
The next proposition proves Theorem \ref{thm:sainP} since it shows that we can solve the above ILP in polynomial-time, despite its similarities with the 3D-assignment, a well-known ${NP}$-hard problem.
\begin{proposition}\label{prop:csa1:poly}
The continuous relaxation of the above ILP always admits integral optimal solutions.
\end{proposition}
\begin{proof}
We show that, if there is an optimal fractional solution $x$, then there are at least two feasible integral solutions with the same value of social welfare.
Specifically, we prove that $x$ is equivalent to a probability distribution over integral allocations $\theta = \langle a_1, \ldots, a_K\rangle $. The probability $\mathbb{P}(\theta)$ given to $\theta$ is:
\begin{align*}
\mathbb{P}(\theta) & = \prod_{i=1}^K \mathbb{P}\left( \theta(a_i)=s_i \Big| \bigwedge_{j<i} \theta(a_j) = s_j\right) \\ & = x_{1,1}\prod_{l=2}^K \frac{x_{l-1,l,l}}{\sum\limits_{m \geq l} x_{l-1,l,m}}.
\end{align*}
In order to show that $\mathbb{P}(\theta)$ is actually a probability distribution over allocations, we show that $\sum_{\theta \in \Theta} \mathbb{P}(\theta) = 1$.
The proof is recursive. Let $\Theta'$ be the set of allocations $\theta$ with the same first $K-1$ ads.
The allocations in $\Theta'$ differ only for the ad allocated to $s_K$. To fix the notation, for $\theta \in \Theta'$ let $\theta(s_l)=a_l$, for $l<K$. We have:
\begin{align*}
\sum_{\theta\in\Theta'} \mathbb{P}(\theta) & = x_{1,1}\prod_{l = 2}^{K-1} \left(\frac{x_{l-1,l,l}}{\sum_{m \geq l} x_{l-1,l, m}}\right) \sum_{h \geq K}\frac{x_{K-1,K, h}}{\sum\limits_{m \geq K} x_{K-1,K,m}} & \\
&= x_{1,1}\prod_{l = 2}^{K-1} \left(\frac{x_{l-1,l,l}}{\sum_{m \geq l} x_{l-1,l,m}}\right) \frac{\sum_{h \geq K}x_{K-1,K,h}}{\sum_{m \geq K} x_{K-1,K,m}} \\ & = x_{1,1}\prod_{l = 2}^{K-1} \left(\frac{x_{l-1,l,l}}{\sum_{m \geq l} x_{l-1,l,m}}\right).
\end{align*}
\noindent By applying recursively the same argument above from $\Theta'' \supset \Theta'$, the set of all allocations $\theta$ satisfying $\theta(s_l)=a_l$, for $l \leq K-2$, down to the set of allocations having only the same first ad, we have $\sum_{\theta:\theta(s_1)=a_1} \mathbb{P}(\theta) = x_{1,1}$. Since (\ref{eq:sumone}) forces $\sum_{i\in \mathcal{N}}x_{1,i}=1$, we have $\sum_{\theta \in \Theta} \mathbb{P}(\theta) = \sum_{i \in \mathcal{N}} x_{1,i} = 1$. This shows that $\mathbb{P}(\theta)$ is a well defined probability distribution. The proof concludes by observing that all integral solutions are indeed feasible.
\end{proof}
\noindent To solve the problem when $c>1$, we just need to modify the ILP and allow each variable $x$ to depend on $c+2$ indices to take into account the (at most) $c$ indices of all the ads that precede the ad of interest. The reset model for $c=1$ instead requires the introduction of $K$ additional variables for $a_\bot$ to be visualized in each slot (together with some constraints to fix each variable for $a_\bot$ to a slot).
Theorem \ref{thm:sainP} implies that mechanism design becomes an easy problem for FNE$_{sa}$$(c)$ and $c=O(1)$, since the optimal algorithm can be used to obtain a truthful VCG mechanism.
\section{FNE$_{aa}$$(K)$-nr is Poly--APX--Complete}
\subsection{Easy Instances}
As a warm-up, we identify a significant class of instances of FNE$_{aa}$$(K)$-nr for which we can design a polynomial-time optimal algorithm. These instances are characterized by the fact that the underlying contextual graph is a DAG, thus modeling nearly oligopolistic markets in which the ads can be organized hierarchically. The idea of Algorithm \ref{alg:polytime_on_DAG} is that, since DAGs can be topologically sorted in polynomial time, we can \emph{rename} the ads as $a_1,\ldots,a_N$ so as to guarantee that for any pair of ads $a_i, a_j$, if $i<j$ then $(a_j,a_i)\notin \mathcal{E}$. We can then prove that we can focus w.l.o.g. on
\emph{ordered} allocations $\theta$, i.e., allocations such that for any pair of allocated ads $a_i, a_j$, with $i<j$, $\theta(a_i) \leq \theta(a_j)$.
Consider an unordered $\theta$ and let $a_i$ be the first
ad (from the top) for which there exists $a_j$, $i<j$, such that
$\theta(a_i) > \theta(a_j)$. Since $\gamma_{j,i}=0$ then all the ads $a_k$ s.t. $\theta(a_k) \geq \theta(a_i)$ have $CTR_k(\theta)=0$ and, therefore, we can prune $\theta$ of (i.e., substitute with $a_\bot$) $a_i$ and all the subsequent ads without any loss in the social welfare. But then in the class of ordered allocations, the optimum has an optimal substructure and we can use dynamic programming.
Let $D[i, m]$ be the value of the optimal ordered allocation that uses only slots $s_m, \ldots, s_K$ and allocates ad $a_i$ in $s_m$. It is not hard to see that $D[i,m] = \Lambda_m q_i v_i + \max_{j>i} \gamma_{i,j} D[j,m+1]$ and that the optimum is $\max_{i \in [N]} D[i,1]$. In the pseudo-code of the algorithm, we simply construct the table $D$ after the topological sort of the contextual graph (with renaming of the ads) is done. The algorithm runs in time $O(KN^2)$.
\begin{algorithm}
\begin{algorithmic}[1]
\STATE $\textsc{TopologicalSort}(G)$ \label{s:par2o}
\FOR{all $m \leq K$} \label{s:lrow}
\STATE $D[N,m] = \Lambda_m q_N v_N$ \label{s:endlrow}
\ENDFOR
\FOR{all $i \leq N$} \label{s:lcol}
\STATE $D[i,K] = \Lambda_K q_i v_i$ \label{s:endlcol}
\ENDFOR
\FOR{$i = N -1$ to $1$} \label{s:table}
\FOR{$m = K-1$ to $1$}
\STATE $D[i,m] = \Lambda_m q_i v_i + \max_{j>i} \gamma_{i,j} D[j,m+1]$ \label{s:endtable}
\ENDFOR
\ENDFOR
\RETURN{$(\max_{i \in [N]} D[i,1])$} \label{s:max1slot}
\end{algorithmic}
\caption{}\label{alg:polytime_on_DAG}
\end{algorithm}
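A runnable Python transcription of Algorithm \ref{alg:polytime_on_DAG} is sketched below (ours; it assumes the ads have already been renamed by the topological sort, that \texttt{Lam[m]} is the prominence of slot $s_{m+1}$, and that \texttt{gamma[i][j]}$\,=0$ encodes a non-edge):
\begin{verbatim}
def best_ordered_allocation(q, v, Lam, gamma, K):
    N = len(q)
    D = [[0.0] * K for _ in range(N)]
    for i in range(N):
        D[i][K - 1] = Lam[K - 1] * q[i] * v[i]   # bottom slot
    for m in range(K - 2, -1, -1):
        for i in range(N):
            best = 0.0               # truncating (pruning) is lossless
            for j in range(i + 1, N):
                best = max(best, gamma[i][j] * D[j][m + 1])
            D[i][m] = Lam[m] * q[i] * v[i] + best
    return max(D[i][0] for i in range(N))
\end{verbatim}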
\noindent Since social welfare maximization is a utilitarian problem, and given that the algorithm above is optimal, we can use the VCG mechanism to obtain a polynomial-time optimal truthful mechanism.
\subsection{Hardness}
We now prove the hardness of approximating FNE$_{aa}$$(K)$-nr.
\begin{theorem}
FNE$_{aa}$$(K)$-nr is poly--APX--hard.
\end{theorem}
\begin{proof}
We reduce from the Longest Path problem. An instance of the Longest Path problem consists of a directed graph $G' = (T,A)$ where $T$ is the set of vertices of the graph and $A \neq \emptyset$ is the set of unweighted edges. The problem demands to compute a \emph{longest simple path}, i.e., a maximum-length path that visits each vertex of the graph at most once.
This problem is poly--APX--complete~\cite{LONGESTPATH} and the best known asymptotic approximation is ${\log |T|}/{|T|}$.
From an instance $G'=(T,A)$ of Longest Path we obtain an instance of FNE$_{aa}$$(K)$-nr as follows.
For each vertex $t_i\in T$ we add an ad $a_i$, with $q_i=v_i=1$ and for each directed arc $(t_i,t_j)\in A$ we add an arc $(i,j)$ in $\mathcal{E}$.
Furthermore, we set $\gamma_{i,j} = 1$ if $(i,j)\in \mathcal{E}$ and $\gamma_{i,j}=0$ otherwise.
Finally, we set $N=K=|T|$ and $\Lambda_m=1$, $\forall m \in [K]$.
Given an ordered sequence of vertices $\rho = ( t_1, t_2, \ldots, t_N )$, we denote as $len(\rho)$ the length of the path that starts in $t_1$ and visits the nodes in $\rho$ till the first node $t_j$ s.t. $(t_j, t_{j+1}) \not \in A$ is reached.
Let us denote as $\rho^*$ the sequence that describes the longest path in $G'$ and as $\theta^*$ the allocation that maximizes the social welfare in the instance of FNE$_{aa}$$(K)$-nr defined upon $G'$.
It is easy to check that $len(\rho^*) = SW (\theta^*) - 1$.
Indeed, $\theta^*$ allocates sequentially from the first slot the ads that correspond to the vertices composing the longest path. Conversely, we can transform an allocation $\theta$ into a sequence of vertices $\rho$ just by substituting the ads with their corresponding vertices until the first $a_\bot$ in $\theta$ is found. Thus, we have that for $\theta$ and the corresponding $\rho$ it holds $len(\rho) = SW(\theta) - 1$.
Consider a generic $\alpha$-approximate allocation $\theta_{\alpha}$ for FNE$_{aa}$$(K)$-nr: $SW(\theta_{\alpha}) \geq \alpha SW(\theta^*)$.
Since $A$ is non-empty, there is a solution $\theta_2$ to FNE$_{aa}$$(K)$-nr of social welfare at least $2$.
Let $\theta_{\beta}$ denote the solution in $\{\theta_\alpha, \theta_2\}$ with maximum social welfare. As $\theta_\alpha$ is an $\alpha$-approximate solution so is $\theta_{\beta}$. By letting $\rho_{\beta}$ denote the path constructed from $\theta_{\beta}$ as described above, we prove that the reduction preserves the approximation (up to a constant factor):
$len(\rho_{\beta}) = SW(\theta_{\beta}) - 1 \geq \frac{1}{2} SW(\theta_{\beta}) \geq \frac{\alpha}{2} SW(\theta^*) = \frac{\alpha}{2} \left(len\left(\rho^*\right) + 1\right) \geq \frac{\alpha}{2} len(\rho^*).$ \qed
\end{proof}
\subsection{Approximation algorithm}
We show that the problem is in poly--APX, with an approximation ratio that is asymptotically the same as the best guarantee known for Longest Path. Our algorithm combines the Color Coding (CC) algorithm \cite{colorcoding} together with three approximation steps.
Let $C$ be a set containing $K$ different colors. CC is a randomized algorithm: it randomly assigns colors from $C$ to the ads, and then finds the best \emph{colorful} (i.e., no pair of ads has the same color) allocation.
To find the best colorful allocation, given a random coloring we do the following. For $S\subseteq C$, we define $(S,a_i)$ as the set of partial allocations with the properties of having the same number $|S|$ of allocated ads (each colored with a different color of $S$) in the first $|S|$ slots and having ad~$a_i$ in slot~$s_{|S|}$. We start from $S=\emptyset$ where no ad is allocated. Then, allocating one of the ads in the first position, we add one color to $S$ until $S=C$. Iteratively, the algorithm extends the allocations in $(S, a_i)$ appending a new ad, say $a_j$, with a color not in $S$ in slot $s_{|S|+1}$ obtaining $(S \cup \{o_j\}, a_j)$ where $o_j$ is the color of $a_j$. Each partial allocation in $(S, a_i)$ is characterized by the values of $SW$ and $\Gamma_i$. We can safely discard all the Pareto dominated partial allocations: given two allocations $\theta_1$ and $\theta_2$ in $(S, a_i)$, we say that $\theta_2$ is Pareto dominated by $\theta_1$ iff $SW(\theta_1) \geq SW(\theta_2)$ and $\Gamma_i(\theta_1) \geq \Gamma_i(\theta_2)$. However, there is no guarantee that the number of allocations in $(S,a_i)$ is polynomially bounded and, in principle, all the generated $O(N^K)$ partial allocations may be Pareto efficient. The complexity per coloring is $O(2^KN^{K+1}K^2)$. CC generates $e^K$ random colorings, but it can be derandomized with a cost of $\log^2(N)$ and a total complexity $O((2e)^K K^2 N^{K+1} (\log N)^2)$. To make the algorithm polynomial, we apply three approximation steps. Initially, we briefly sketch these three approximations and, subsequently, we provide the details. Firstly, we run CC over a reduced number $K'$ of slots where $K' = \min(\lceil\log (N)\rceil, K)$. Secondly, we discard all the allocations $\theta$ in which the probability to click on the last allocated ad is smaller than a given $\delta \in [0,1]$. Finally, we discretize the $\gamma_{i,j}$'s. We prove in the following that the running time is indeed polynomial and the approximation ratio is $(1-\delta)(1-\epsilon)\frac{\log (N)}{2 \min \{N,K\}}$, $\epsilon$ controlling the granularity of the $\gamma_{i,j}$ discretization. All the three approximations are necessary in order to obtain a polynomial-time algorithm. This algorithm is not monotone as we show below. However, a simple $1/K$-approximate truthful mechanism can be obtained, via a single-item second price auction. From here on, we provide the details of the algorithms and we prove its approximation ratio.
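A sketch of one CC trial, with Pareto pruning but before the three approximation steps (so the number of stored fronts may still be exponential), could look as follows in Python (ours; \texttt{gamma[i][j]}$\,=\gamma_{i,j}$ and \texttt{Lam[m]} is the prominence of slot $s_{m+1}$):
\begin{verbatim}
import random

def cc_trial(q, v, Lam, gamma, K):
    N = len(q)
    color = [random.randrange(K) for _ in range(N)]
    # state[(S, i)]: Pareto front of (SW, Gamma) pairs for partial
    # allocations using colors S with ad i in the last filled slot
    state = {}
    for i in range(N):
        state[(frozenset([color[i]]), i)] = [(Lam[0] * q[i] * v[i], 1.0)]
    for size in range(1, K):
        new = {}
        for (S, i), front in state.items():
            if len(S) != size:
                continue
            for j in range(N):
                if j == i or color[j] in S:
                    continue
                for sw, gam in front:
                    g = gam * gamma[i][j]
                    entry = (sw + Lam[size] * g * q[j] * v[j], g)
                    new.setdefault((S | {color[j]}, j), []).append(entry)
        for key, front in new.items():           # Pareto pruning
            front.sort(key=lambda p: (-p[0], -p[1]))
            pruned, gmax = [], -1.0
            for sw, gam in front:
                if gam > gmax:
                    pruned.append((sw, gam)); gmax = gam
            state[key] = pruned
    return max(sw for front in state.values() for sw, _ in front)
\end{verbatim}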
\smallskip \noindent \underline{\emph{Approximation 1}.} We apply CC over a reduced number $K'$ of slots, where $K' = \min(\lceil\log (N)\rceil, K)$, implying the following approximation ratio.
\begin{proposition} \label{p:K'}
Given $\theta^*$, the optimal allocation over $K$ slots, and $\theta^*_{K'}$, the optimal allocation over the first $K'\leq \min\{N,K\}$ slots, we have $SW\left(\theta^*_{K'}\right) \geq \frac{1}{2} \frac{K'}{\min\{N,K\}} SW\left(\theta^*\right)$.
\end{proposition}
\begin{proof}
We partition the $K'' = \min\{N,K\}$ slots into groups of $K'$ consecutive slots. The remaining slots, if any, constitute a last group with fewer than $K'$ slots. The number of groups in which the $K''$ slots are divided is $NG=\lceil \frac{K''}{K'} \rceil$. Let $G_i = \{(i-1)K'+1, \ldots, \min(i K', K)\}$, for $i \in [NG]$, be the $i$-th group of indices of $K'$ slots.
We let $SW(\theta|G_i) = \sum_{m \in G_i} \Lambda_m \Gamma_{\theta(m)}(\theta) q_{\theta(m)} v_{\theta(m)}$, for any $\theta \in \Theta$. Since $SW(\theta^*) = \sum_{i=1}^{NG} SW(\theta^*|G_i)$, there must exist a group $G_i$ s.t. $SW(\theta^*|G_i) \geq \frac{1}{NG} SW(\theta^*)$. Observing that $\lceil \frac{K''}{K'} \rceil \leq \frac{K''}{K'} + 1$ and $K' \leq K''$ we get $SW(\theta^*|G_i) \geq \frac{K'}{2K''} SW(\theta^*)$. The proof concludes by noting that, by optimality, $SW(\theta^*_{K'}) \geq SW(\theta^*|G_i)$.
\end{proof}
\smallskip \noindent \underline{\emph{Approximation 2}.} In CC, we discard allocations $\theta$ in which $\Gamma_i(\theta)$ of the last allocated ad $a_i$, $i \in [N]$, is less than a given $\delta \in [0,1]$, implying the following approximation ratio.
\begin{proposition} \label{p:>=delta}
Given $\theta^*_{K'}$, the optimal allocation over $K'$ slots, and $\theta^{\delta}_{K'}$ the optimal allocation among the allocations $\theta \in \Theta$ where the last allocated ad $a_i$, $i \leq N$, satisfies $\Gamma_i(\theta) \geq \delta$, we have $SW\left(\theta^{\delta}_{K'}\right) \geq \left(1-\delta\right) SW\left(\theta^*_{K'}\right)$.
\end{proposition}
\begin{proof}
Consider the allocation $\theta^*_{K'}$ and assume that the last ad satisfying $\Gamma_i(\theta^*_{K'})\geq \delta$ is the one in slot $s_l$.
Recalling the notation $SW(\theta|S)$ for $S \subseteq [K]$, provided in the proof of Proposition~\ref{p:K'}, by optimality of $\theta^*_{K'}$ we have $SW(\theta^*_{K'}) \geq \frac{1}{\Gamma_{\theta^*_{K'}(l+1)}} SW(\theta^*_{K'}|\{l+1,\ldots,K\})$. Indeed, on the r.h.s. we have a lower bound on the social welfare that the ads allocated by $\theta^*_{K'}$ in slots $s_{l+1}, \ldots, s_{K'}$ would have if shifted to the first slot. If this were bigger than $SW(\theta^*_{K'})$ then $\theta^*_{K'}$ would not be optimal. But then since $\Gamma_{\theta^*_{K'}(l+1)} < \delta$, we have
$\delta SW(\theta^*_{K'}) \geq SW(\theta^*_{K'}|\{l+1,\ldots,K\})$.
Finally we have that $\theta^{\delta}_{K'}$, the allocation that removes from $\theta^*_{K'}$ the ads allocated from $s_{l+1}$ to $s_{K'}$, has $SW(\theta^{\delta}_{K'}) = SW(\theta^*_{K'}) - SW(\theta^*_{K'}|\{l+1,\ldots,K\})
\geq SW(\theta^*_{K'}) - \delta SW(\theta^*_{K'}) = (1-\delta) SW(\theta^*_{K'})$.
\end{proof}
\smallskip \noindent \underline{\emph{Approximation 3}.} In CC, we use rounded values for $\gamma_{i,j}$.
More precisely, we use $\lfloor \frac{1}{\tau}\log \frac{1}{\gamma_{i,j}} \rfloor$ in place of $\log \frac{1}{\gamma_{i,j}}$, where the normalization constant $\tau$ is defined below.
The constraint due to Proposition~\ref{p:>=delta} is now a capacity constraint of the form $\sum_{m \in [K]: m < l} \lfloor \frac{1}{\tau}\log \frac{1}{\gamma_{\theta(m),\theta(m+1)}} \rfloor \leq \lfloor \frac{1}{\tau}\log \frac{1}{\delta} \rfloor$.
Notice that, with rounded values, the capacity can assume a finite number of values (i.e., $\lfloor \frac{1}{\tau}\log \frac{1}{\delta} \rfloor$) and therefore we can now bound the number of allocations to be stored in $(S,a_i)$.
More precisely, for each value of capacity, we can discard all the allocations except one maximizing the social welfare measured with rounded values. This step has the following consequences on the approximation guarantee.
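A tiny Python sketch of this discretization (ours; logs are taken in base $2$, consistently with the $2^{-\xi}$ notation used below):
\begin{verbatim}
import math

def rounded_exponent(g, tau):          # floor((1/tau) log(1/gamma))
    return math.floor(math.log2(1.0 / g) / tau)

def within_capacity(gammas_on_path, delta, tau):
    cap = math.floor(math.log2(1.0 / delta) / tau)
    return sum(rounded_exponent(g, tau) for g in gammas_on_path) <= cap
\end{verbatim}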
\begin{proposition}
Given $\theta^\delta_{K'}$, defined as in Proposition \ref{p:>=delta}, and $\theta^{\delta\epsilon}_{K'}$, the optimal allocation when the rounding procedure is applied, we have that, choosing $\tau = \frac{1}{K'}\log \frac{1}{1 - \epsilon}$, $SW\left(\theta^{\delta\epsilon}_{K'}\right) \geq \left(1-\epsilon\right) SW\left(\theta^{\delta}_{K'}\right)$.
\end{proposition}
\begin{proof}
Let $\xi^{x}_{m, m+1}$ be a shorthand for $\log \frac{1}{\gamma_{\theta_{K'}^{x}(m),\theta_{K'}^{x}(m+1)}}$ and $x(i)$ be a shorthand for $\theta^x_{K'}(a_i)$, for $x \in \{\delta \epsilon,\delta \}$.
By definition:
\begin{align*}
SW\left(\theta^{\delta\epsilon}_{K'}\right) & = \sum\limits_{i \in [N]} \Lambda_{{\delta\epsilon}(i)} \Gamma_{i}\left(\theta^{\delta\epsilon}_{K'}\right) q_i v_i
\\ & = \sum\limits_{i \in [N]} \Lambda_{{\delta\epsilon}(i)}
\prod_{m < {\delta\epsilon}(i)} 2^{-\xi^{\delta \epsilon}_{m, m+1}}
q_i v_i.
\end{align*}
\noindent Since $\xi^{\delta \epsilon}_{m, m+1} \leq \tau (\lfloor \frac{1}{\tau}\xi^{\delta \epsilon}_{m, m+1} \rfloor +1)$, we then have
\begin{align*}
SW\left(\theta^{\delta\epsilon}_{K'}\right) & \geq \sum\limits_{i \in [N]} \Lambda_{{\delta \epsilon}(i)} \prod_{m < {\delta\epsilon}(i)} 2^{-\tau\left( \left \lfloor \frac{1}{\tau} \xi^{\delta \epsilon}_{m, m+1} \right \rfloor + 1 \right)} q_i v_i\\
& \geq \sum\limits_{i \in [N]} \Lambda_{{\delta}(i)} \prod_{m < {\delta}(i)} 2^{-\tau\left( \left \lfloor \frac{1}{\tau}\xi^{\delta}_{m, m+1} \right \rfloor + 1 \right)} q_i v_i,
\end{align*}
\noindent where the latter inequality follows from optimality of $\theta^{\delta}_{K'}$. Given that $\lfloor y \rfloor \leq y$ we can conclude that $SW\left(\theta^{\delta\epsilon}_{K'}\right)$ is bounded from below by:
\begin{align*}
& \sum\limits_{i \in [N]} \Lambda_{\delta(i)} \left(\prod_{m < {\delta}(i)} 2^{\log \gamma_{\theta_{K'}^{\delta}(m),\theta_{K'}^{\delta}(m+1)} -\tau }\right) q_i v_i\\
& \geq 2^{-K'\tau}\cdot \sum\limits_{i} \Lambda_{{\delta}(i)} \Gamma_i\left(\theta_{K'}^\delta\right) q_i v_i\\
& = (1-\epsilon)\cdot \sum\limits_{i} \Lambda_{{\delta}(i)} \Gamma_i\left(\theta_{K'}^\delta\right) q_i v_i =\left(1-\epsilon\right) SW\left(\theta^{\delta}_{K'}\right).
\end{align*}
\noindent This concludes the proof.
\end{proof}
The approximation ratio of the algorithm is thus $(1-\delta)(1-\epsilon)\frac{\log (N)}{2 \min \{N,K\}}$, asymptotically the same as the best known approximation ratio of the Longest Path once $N=K$. The complexity instead can be derived as follows. The maximum number of allocations that can be stored in each $(S, a_i)$ is $O(\frac{\log \frac{1}{\delta}}{\tau})$ with $\tau = \frac{\log \frac{1}{1 - \epsilon}}{K'}$ thanks to dominations. Thus, given that $\log(\frac{1}{1-\epsilon}) \rightarrow \epsilon$ as $\epsilon \rightarrow 0$, the number of elements is $O(K' \frac{1}{\epsilon})$. Thus, the complexity when $K' = \log (N)$ is $O((2e)^{\log (N)} \frac{1}{\epsilon} \log(\frac{1}{\delta})N^2 \log^4 (N))=O(\frac{1}{\epsilon\delta}N^3 \log^4 (N))$.
Notice that all the three above approximations are necessary in order to obtain a polynomial--time algorithm. Approximation~2 and Approximation~3 allow us to bound the number of the allocations stored per pair $(S,a_i)$ and would lead, if applied without Approximation~1, to a complexity $O((2e)^KK^2N^2\log^2(N)\frac{1}{\epsilon\delta})$. Notice also that, without Approximation~2, the possible values for the capacity are not upper bounded. Approximation~1 allows us to remove the exponential dependence on $K$ and to obtain polynomial complexity.
\subsubsection*{Non--monotonicity of the approximation algorithm}
\input{071-nomon}
\section{FNE$_{aa}$$(K)$-r is APX-complete}
In this section we will prove the APX-hardness of FNE$_{aa}$$(K)$-r and provide a $1/2$-approximation algorithm.
\subsection{Hardness}
In this section we prove that
FNE$_{aa}$$(K)$-r is APX--hard.
\begin{theorem}\label{thm:CNFE_inapproximability}
FNE$_{aa}$$(K)$-r cannot be approximated within a factor of $\frac{1}{1+\alpha}$, for $\alpha < \frac{1}{412}$, unless $P=NP$.
\end{theorem}
\begin{proof}
We reduce from the Asymmetric TSP with weights in $\{1,2\}$, hereinafter denoted as $ATSP(1,2)$.
The $ATSP(1,2)$ problem demands finding a minimum cost Hamiltonian tour in a complete directed weighted graph $G'=(T,A)$ where $T$ is the set of nodes of $G'$, $A$ is the set of edges and the weight function $w_{i,j}\in \{1,2\}$ for all edges $(i,j)\in A$.
$ATSP(1,2)$ cannot be approximated in polynomial time within a factor of $\frac{1}{1+\beta}$, with $\beta<1/206$~\cite{ATSP1_2}.
Below, we denote as $\tau$ a solution of an $ATSP(1,2)$ instance, as $cost(\tau)$ its cost and as $\tau^*$ the optimal tour.
Given an instance of $ATSP(1,2)$ on graph $G'=(T,A)$ we construct an instance of FNE$_{aa}$$(K)$-r as follows: (\emph{i}) for each vertex $t_i \in T$ we generate an ad $a_i$ with $q_i=v_i=1$, then we have $N=|T|$; (\emph{ii}) the contextual graph is $G=([N],\mathcal{E})$, where $(i,j)\in \mathcal{E}$ iff $w_{i,j}=1$; (\emph{iii}) for all $(i,j)\in \mathcal{E}$, $\gamma_{i,j} = 1$; and finally (\emph{iv}) the number of slots is equal to the cost of the optimal tour $\tau^*$ in $ATSP(1,2)$, i.e. $K = cost(\tau^*)$.
We will show at the end of the proof how we can deal with the fact that we do not know $cost(\tau^*)$. Observe that with $K=cost(\tau^*)$, we have $SW(\theta^*) = N$, $\theta^*$ denoting the optimal solution of the FNE$_{aa}$$(K)$-r instance constructed.
The definition of the reduction is completed by observing that an allocation $\theta$ for the FNE$_{aa}$$(K)$-r that allocates all the $N$ ads can be easily mapped back to a tour $\tau$ for the $ATSP(1,2)$ by simply substituting the ad with the corresponding vertex of the graph $G'$.
Let us suppose for the sake of contradiction that there exists a $\frac{1}{1+\alpha}$-approximate algorithm for FNE$_{aa}$$(K)$-r, with $\alpha<\frac{\beta}{2}<\frac{1}{412}$.
Let $\theta_{\alpha}$ be the $\frac{1}{1+\alpha}$--approximate solution returned by such an algorithm, i.e., $SW(\theta_{\alpha}) \geq \frac{1}{1+\alpha} SW(\theta^*) = \frac{N}{1+\alpha}$.
It is easy to check that $\theta_\alpha$ consists of $\lceil \frac{N}{1+\alpha} \rceil$ ads, each providing a contribution of 1 to the social welfare, while there are $SW(\theta^*) - \lceil \frac{N}{1+\alpha} \rceil$ ads that w.l.o.g. we can consider empty.
Moreover, since $\alpha < 1$, $\frac{N}{1+\alpha} \geq cost(\tau^*) - \frac{N}{1+\alpha}$ holds.
For the sake of conciseness, hereinafter we omit the ceiling notation.
Let $\tau_\beta$ be the tour obtained from $\theta_\alpha$.
We state that in $\tau_{\beta}$ there are, at least, $\frac{2N}{1+\alpha} - cost(\tau^*) - 1$ edges of weight 1.
Divide the ads allocated in $\theta_\alpha$ into two sets: the $\frac{N}{1+\alpha}$ allocated ads $a_i$, $i \in [N]$, and the copies of $a_{\bot}$.
Allocate in alternation one of the $\frac{N}{1+\alpha}$ ads $a_i$, with $i \in [N]$, and one of the $cost(\tau^*) - \frac{N}{1+\alpha}$ ads $a_\bot$.
When the slot index $2(cost(\tau^*) - \frac{N}{1+\alpha})$ is reached, the available $a_\bot$ are finished, thus, in the following $cost(\tau^*) - 2(cost(\tau^*) - \frac{N}{1+\alpha}) = \frac{2N}{1+\alpha} - cost(\tau^*)$ slots, only non-fictitious ads $a_i$, $i \in [N]$, are consecutively allocated (no slots are left empty).
This means that in $\theta_\alpha$, where the ads are disposed in a different way, we still have the guarantee that there are $\frac{2N}{1+\alpha} - cost(\tau^*) - 1$ pairs of consecutive ads $(a_i,a_j)$ s.t. $\gamma_{i,j}=1$.
Thus, in the tour $\tau_{\beta}$ there are, at least, $\frac{2N}{1+\alpha} - cost(\tau^*) - 1$ edges of weight 1.
Therefore, given that a tour is composed of $N$ edges, in $\tau_{\beta}$ there can be at most $N -\frac{2N}{1+\alpha} + cost(\tau^*) + 1$ edges of weight 2.
The length of $\tau_{\beta}$ is upper-bounded by $cost(\tau_{\beta}) \leq \frac{2N}{1+\alpha} - cost(\tau^*) - 1 + 2 (N -\frac{2N}{1+\alpha} + cost(\tau^*) + 1) = cost(\tau^*) + \frac{2N\alpha}{1+\alpha} + 1$.
Now we can state:
$
cost(\tau_\beta) \leq cost(\tau^*) + \frac{2\alpha N}{1+\alpha} + 1
\leq cost(\tau^*)+2\alpha N
\leq cost(\tau^*)+2\alpha\, cost(\tau^*)
= (1+2\alpha)\, cost(\tau^*)
< (1+\beta)\, cost(\tau^*),
$
where: (\emph{i}) the second inequality holds for $N\geq \frac{1+\alpha}{2\alpha^2}$; (\emph{ii}) the third inequality holds since $N\leq cost(\tau^*)$ and (\emph{iii}) the last inequality holds since, by assumption, $\alpha<\frac{\beta}{2}$.
Thus, for the instances where $N\geq \frac{1+\alpha}{2\alpha^2}$, if there were an algorithm that $\frac{1}{1+\alpha}$--approximates FNE$_{aa}$$(K)$-r with $\alpha<\frac{1}{412}$, there would be a $\frac{1}{1+\beta}$-approximation of $ATSP(1,2)$ with $\beta < \frac{1}{206}$: a contradiction.
We finally show that we can deal with the non-existence of an oracle returning $cost(\tau^*)$. For all the instances of $ATSP(1,2)$ with $N$ vertices, $N \leq cost(\tau^*) \leq 2N$. So,
we run the polynomial $\frac{1}{1+\alpha}$--approximation algorithm of FNE$_{aa}$$(K)$-r for all the values $K=m$ with $m \in \{N, \ldots, 2N\}$, obtain a
tour $\tau_{\beta}^m$ for each $m$ and
set $\tau_{\beta} = \arg\min_{ m \in \{N,\ldots,2N\}} cost(\tau_{\beta}^m)$, guaranteeing $cost(\tau_{\beta})\leq cost(\tau_{\beta}^{cost(\tau^*)})$.
\end{proof}
\subsection{$\frac{1}{2}$-Approximate Greedy Algorithm for FNE$_{aa}$$(c)$-r, for any $c$}
The algorithm orders the ads in nonincreasing order of $q_i v_i$ and allocates them to the odd slots, starting from the one with the highest product; even slots are left empty.
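In Python the algorithm amounts to a few lines (our sketch; \texttt{None} encodes $a_\bot$ and \texttt{Lam[m]} is the prominence of slot $s_{m+1}$):
\begin{verbatim}
def greedy_reset(q, v, Lam, K):
    order = sorted(range(len(q)), key=lambda i: q[i] * v[i], reverse=True)
    alloc, sw = [None] * K, 0.0
    for slot, i in zip(range(0, K, 2), order):  # odd slots only
        alloc[slot] = i
        sw += Lam[slot] * q[i] * v[i]  # reset: Gamma_i = 1 in these slots
    return alloc, sw
\end{verbatim}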
\begin{proposition}\label{prop:greedy}
The greedy algorithm above is $\frac{1}{2}$-approximate for FNE$_{aa}$$(c)$-r, for any $c$.
\end{proposition}
\begin{proof}
Let $\theta_{.5}$ be the allocation obtained by the algorithm. We want to prove that $SW(\theta_{.5}) \geq SW (\theta^*)/2$. W.l.o.g., rename the ads so that $q_1 v_1 \geq q_2 v_2 \geq \ldots \geq q_N v_N$. Let $K'= \left \lceil {K/2} \right \rceil$. We have $SW(\theta_{.5}) = \sum_{m \in [K']} \Lambda_{2m-1} q_m v_m$. On the other hand, $SW(\theta^*) \leq \sum_{m \in [K]} \Lambda_m q_m v_m$. Since $\Lambda_i q_i v_i \geq \Lambda_{i+1} q_{i+1} v_{i+1}$, we have $\Lambda_i q_i v_i \geq {1/2} \sum_{m=i,i+1} \Lambda_m q_m v_m$. We conclude:
\begin{align*}
SW(\theta_{.5}) = & \sum_{m\in [K']} \Lambda_{2m-1} q_m v_m \geq \\
& \sum_{m\in [K']} \Lambda_{2m-1} q_{2m-1} v_{2m-1} \geq \\
& {1/2} \sum_{m\in [K]} \Lambda_m q_m v_m \geq SW(\theta^*)/2. \qedhere
\end{align*}
\end{proof}
\noindent The greedy algorithm above is a MIR, range $\Theta'$ being all the allocations that leave even slots empty. The solution output is indeed the one guaranteeing maximum social welfare in $\Theta'$. We therefore have proved the existence of a $1/2$-approximate truthful polynomial-time mechanism for FNE$_{aa}$$(c)$-r.
\section{FNE$_{aa}$(c) is APX-hard} \label{sec:c<K.APX-hard}
We now prove that FNE$_{aa}(1)$-r (Proposition \ref{prop:FNE_1-r}) and FNE$_{aa}(1)$-nr (Proposition \ref{prop:FNE_1-nr}) are APX-hard.
First we state two auxiliary lemmata.
Hereinafter, for the sake of notation, we will denote as $SW_1(\theta)$ and $SW_K(\theta)$ the objective function of $\mathcal{B}$--FNE$_{aa}(1)$-r and $\mathcal{B}$--FNE$_{aa}(K)$-r, respectively.
\begin{lemma}\label{lemma:no_gamma_0}
Let $\theta$ be an allocation (possibly containing empty slots)
and let $\theta'$ be the allocation obtained from $\theta$ by
replacing, for each pair $(a_{i-1},a_{i})$ in $\theta$ such that $\gamma_{i-1,i}=0$, ad $a_{i-1}$ with $a_\bot$. Then $SW_1(\theta)=SW_1(\theta')$.
\end{lemma}
\begin{proof}
Let $(a_{i-1},a_i)$ be the first pair of ads in $\theta$ with the property that $\gamma_{i-1,i}=0$, and let $\theta''$ be the allocation obtained from $\theta$ by substituting $a_{i-1}$ with $a_\bot$.
Let $SW_1^A (\theta)=\sum_{j=1}^{i-2} CTR_j(\theta)v_j$ and $SW_1^B (\theta)=\sum_{j=i+1}^{K} CTR_j(\theta)v_j$ denote the contributions to the $SW$ of the ads allocated, respectively, above and below the pair $(a_{i-1},a_i)$.
We can write $SW_1(\theta) = SW_1^A(\theta)+SW_1^B(\theta)+CTR_{i-1}(\theta)v_{i-1}+CTR_{i}(\theta)v_{i}$.
By assumption, we have $CTR_{i-1}(\theta)v_{i-1}=1$ (as $CTR_{i-1}(\theta)=1$ and $a_{i-1}\neq a_\bot$) and $CTR_{i}(\theta)v_{i}=0$.
We note that $SW_1^A(\theta'')=SW_1^A(\theta)$ and $SW_1^B(\theta'')=SW_1^B(\theta)$.
Furthermore, we note that the two corresponding slots in $\theta''$ again contribute $CTR_{\bot}(\theta'')v_{\bot}+CTR_{i}(\theta'')v_{i}=1$, as $v_{\bot}=0$ and $CTR_{i}(\theta'')=1$.
So we can conclude that $SW_1(\theta)=SW_1(\theta'')$.
By repeatedly applying the above procedure on $\theta''$ we can obtain an allocation $\theta'$ containing no pair of ads $(a_{i-1},a_i)$ where $\gamma_{i-1,i}=0$ and such that $SW_1(\theta) = SW_1(\theta')$.
\end{proof}
\begin{lemma}\label{lemma:no_gamma_0_1_equals_no_gamma_0_k}
Let $\theta$ be an allocation such that no pair of ads $(a_{i-1},a_i)$ exists where $\gamma_{i-1,i}=0$.
Then $SW_1(\theta)=SW_K(\theta)$.
\end{lemma}
\begin{proof}
The claim follows from the fact that $\forall i\in \mathcal{N}$, $CTR_i(\theta) = 1$ for both $\mathcal{B}$--FNE$_{aa}$$(1)$-r and $\mathcal{B}$--FNE$_{aa}$$(K)$-r if $\theta$ does not contain any pair of ads $(a_{i-1},a_i)$ for which $\gamma_{i-1,i}=0$.
\end{proof}
\begin{proposition}\label{prop:FNE_1-r}
FNE$_{aa}$$(1)$-r is APX-hard.
\end{proposition}
\begin{proof}
We prove that the subproblem $\mathcal{B}$--FNE$_{aa}(1)$-r is APX--hard via an approximation preserving reduction from the APX-hard problem $\mathcal{B}$--FNE$_{aa}(K)$-r (Theorem \ref{thm:CNFE_inapproximability}). In particular, we will show that computing an approximate solution for $\mathcal{B}$--FNE$_{aa}(1)$-r is not easier than $\mathcal{B}$--FNE$_{aa}$$(K)$-r on the same instance.
We will first prove that $SW_K(\theta^*_K)\leq SW_1(\theta^*_1)$ holds, where $\theta^*_K$ and $\theta^*_1$ denote, respectively, the optimal allocation for $\mathcal{B}$--FNE$_{aa}$$(K)$-r and $\mathcal{B}$--FNE$_{aa}$$(1)$-r.
For the sake of contradiction, let us suppose that $SW_K(\theta^*_K) > SW_1(\theta^*_1)$.
We can assume without loss of generality that $\theta^*_K$ does not contain a pair $(a_{i-1},a_{i})$ such that $\gamma_{i-1,i}=0$, as replacing $a_{i-1}$ with $a_\bot$ would yield an allocation with a non-decreasing SW value.
By Lemma \ref{lemma:no_gamma_0_1_equals_no_gamma_0_k} and by hypothesis we have that $ SW_1(\theta^*_K)= SW_K(\theta^*_K) >SW_1(\theta^*_1)$, which contradicts the optimality of $\theta^*_1$.
We are now going to prove that, given an $\alpha$--approximate solution $\theta_1^\alpha$ to the objective of $\mathcal{B}$--FNE$_{aa}(1)$-r, we can compute in polynomial time an approximate solution $\theta_K^\alpha$ to the objective of $\mathcal{B}$--FNE$_{aa}(K)$-r such that $SW_1(\theta_1^\alpha)\leq SW_K(\theta_K^\alpha)$. This is easily done by replacing $a_{i-1}$ with $a_\bot$ for each pair of ads $(a_{i-1},a_i)$ in $\theta_1^\alpha$ such that $\gamma_{i-1,i}=0$, thus obtaining $\theta'^\alpha_1$.
By Lemmata \ref{lemma:no_gamma_0} and \ref{lemma:no_gamma_0_1_equals_no_gamma_0_k} we finally conclude that $SW_1(\theta_1^\alpha)=SW_1(\theta'^\alpha_1)=SW_K(\theta'^\alpha_1)$.
\end{proof}
\begin{proposition}\label{prop:FNE_1-nr}
FNE$_{aa}$$(1)$-nr is APX-hard.
\end{proposition}
\begin{proof}
We conduct the proof by reduction from problem $\mathcal{B}$--FNE$_{aa}$$(1)$-r.
In particular, we add $K$ new ads $\{a_{N+1},\ldots,a_{N+K}\}$ to the instance of $\mathcal{B}$--FNE$_{aa}$$(1)$-r such that: (\emph{i}) $v_{j} =0$ for all $j\in \{N+1, \ldots, N+K\}$ and (\emph{ii}) $\gamma_{i,j}=\gamma_{j,i}=1$ for all $i\in \{1, \ldots, N+K\}$ and $j\in \{N+1, \ldots, N+K\}$.
Let $\theta_{nr}^\alpha$ be an $\alpha$-approximate solution for the so-defined FNE$_{aa}$$(1)$-nr problem.
We can assume w.l.o.g. that $\theta_{nr}^\alpha$ does not contain any $a_\bot$, as in the no-reset model we can always allocate any non-allocated ad to an empty slot obtaining a non-decreasing $SW$ value.
We observe that, from a generic allocation $\theta_{nr}$, it is possible to obtain an allocation $\theta_r$ by substituting any ad $a_j$, $j \in \{N+1, \ldots, N+K\}$, in $\theta_{nr}$ with $a_\bot$ s.t. $SW^r(\theta_r)=SW^{nr}(\theta_{nr})$, and vice versa. Thus, from $\theta_{nr}^\alpha$ we can obtain an allocation $\theta_{r}^\alpha$ s.t. $SW^r(\theta_r^\alpha) = SW^{nr}(\theta_{nr}^\alpha)$; $SW^{x}(\theta)$ denoting the social welfare of $\theta \in \Theta$ in the model with reset $x\in \{r,nr\}$.
Furthermore, let $\theta^*_r$ and $\theta^*_{nr}$ be the optimal solutions, respectively, for $\mathcal{B}$--FNE$_{aa}$$(1)$-r and the FNE$_{aa}$$(1)$-nr defined by our reduction.
According to the observations above, it is easy to check that $SW^r(\theta^*_r)=SW^{nr}(\theta^*_{nr})$ holds.
In fact, let $\tilde{\theta}_{nr}$ be the solution obtained from $\theta_r^*$ by substituting each $a_\bot$ with an ad $a_j$, $j \in \{N+1, \ldots, N+K\}$.
Then $SW^r(\theta^*_r)=SW^{nr}(\tilde{\theta}_{nr})$.
Furthermore, $SW^{nr}(\tilde{\theta}_{nr})=SW^{nr}(\theta^*_{nr})$, as otherwise if $SW^{nr}(\tilde{\theta}_{nr})<SW^{nr}(\theta^*_{nr})$ we could translate $\theta^*_{nr}$ into a solution $\tilde{\theta}_r$ for $\mathcal{B}$--FNE$_{aa}$$(1)$-r such that $SW^r(\theta^*_r)<SW^r(\tilde{\theta}_r)$.
A similar argument holds if we consider the allocation $\tilde{\theta}_{r}$ obtained by substituting all ads $a_j$, $j \in \{N+1, \ldots, N+K\}$, in $\theta^*_{nr}$ with $a_\bot$. Finally, $SW^r(\theta_r^\alpha) = SW^{nr}(\theta_{nr}^\alpha) \geq \alpha SW^{nr}(\theta_{nr}^*) = \alpha SW^r(\theta_{r}^*)$.
\end{proof}
\section{FNE$_{aa}^+(c)$-nr is APX-complete for constant $\gamma_{min}$}
\begin{theorem}
FNE$_{aa}^{+}$(1)-nr is APX-hard.
\end{theorem}
\begin{proof}
Let $\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr denote the subclass of FNE$_{aa}^{+}$(1)-nr
where $\gamma_{ij}\in\{\gamma_{min},1\}$ for all $i,j\in \mathcal{N}$ and
a given $0<\gamma_{min}<1$. We prove the APX-hardness of FNE$_{aa}^{+}$(1)-nr by an approximation
preserving reduction from problem $\mathcal{B}$-FNE$_{aa}$$(1)$-nr (proved APX-hard in Proposition \ref{prop:FNE_1-nr}) to problem $\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr:
we prove the existence of an $\alpha$-approximate algorithm for $\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr to imply
the existence of a $2\alpha$-approximate algorithm for $\mathcal{B}$-FNE$_{aa}$$(1)$-nr.
The instance of $\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr is obtained from the
instance of $\mathcal{B}$-FNE$_{aa}$$(1)$-nr by simply setting $\gamma'_{i,j}=\gamma_{min}=\frac{1}{K-1}$
for all $i,j\in \mathcal{N}$ such that $\gamma_{i,j}=0$ in the given instance
of $\mathcal{B}$-FNE$_{aa}$$(1)$-nr, $\gamma'_{i,j}=1$ otherwise.
Let $\theta_{\gamma_{min}}^{*}$ and $\theta_{\mathcal{B}}^{*}$ be an optimal solution for problems $\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr
and $\mathcal{B}$-FNE$_{aa}$$(1)$-nr, respectively. We have $SW(\theta_{\mathcal{B}}^{*})\leq SW(\theta_{\gamma_{min}}^{*})$. Indeed,
if there is no $(a_{i-1},a_{i})\in\theta_{\mathcal{B}}^{*}$
s.t. $\gamma_{i-1,i}=0$ then $SW(\theta_{\mathcal{B}}^{*})=SW(\theta_{\gamma_{min}}^{*})$, whereas if there is a pair $(a_{i-1},a_{i})\in\theta_{\mathcal{B}}^{*}$
s.t. $\gamma_{i-1,i}=0$ then $SW(\theta_{\mathcal{B}}^{*})<SW(\theta_{\gamma_{min}}^{*})$.
Let now $\theta_{\gamma_{min}}$ be an $\alpha$-approximation of
$\{\gamma_{min},1\}$-FNE$^+_{aa}(1)$-nr and let $\theta_{\mathcal{B}}$
be the corresponding solution for $\mathcal{B}$-FNE$_{aa}$$(1)$-nr. (I.e., $\theta_{\mathcal{B}}$ is the solution $\theta_{\gamma_{min}}$ where the $\gamma_{min}$ externalities weigh 0.) We now prove that $SW(\theta_{\gamma_{min}})\leq 2 SW(\theta_{\mathcal{B}})$.
We have $SW(\theta_{\mathcal{B}})=1+\mathcal{P}(\theta_{\mathcal{B}})$,
where $\mathcal{P}(\theta_{\mathcal{B}})\leq K-1$ denotes the number of
pairs $(a_{i-1},a_{i})$ of ads in $\theta_{\mathcal{B}}$ such that
$\gamma_{i-1,i}=1$. Likewise, $SW(\theta_{\gamma_{min}})=1+\mathcal{P}(\theta_{\gamma_{min}})+(K-1-\mathcal{P}(\theta_{\gamma_{min}}))\cdot\gamma_{min}$.
By construction, $\mathcal{P}(\theta_{\mathcal{B}})=\mathcal{P}(\theta_{\gamma_{min}})=\mathcal{P}$,
from which it follows that $SW(\theta_{\gamma_{min}})\leq 2 \cdot SW(\theta_{_{\mathcal{B}}})$
is equivalent to $1+\frac{K-1-\mathcal{P}}{1+\mathcal{P}}\gamma_{min}\leq 2$. This is proved by noticing that
$1+\frac{K-1-\mathcal{P}}{1+\mathcal{P}}\gamma_{min} \leq 1+\frac{K-1}{1+\mathcal{P}}\gamma_{min}=\frac{\mathcal{P}+2}{\mathcal{P}+1}$, where the last equality follows from the definition of $\gamma_{min}$.
\end{proof}
\subsection{Approximation algorithm}
We now prove that any $\alpha$-approximate algorithm for Weighted 3-Set Packing (W3SP) can be turned into an $(\alpha\gamma_{min}^c)$--approximation algorithm for FNE$^+_{aa}(c)$--nr.
Given a universe $U$ and a collection of its subsets, each of cardinality at most 3 and associated to a weight, W3SP consists of finding a sub-collection of pairwise-disjoint subsets of maximal weight. Several constant-ratio approximation algorithms are known in the literature for this problem, e.g., the algorithm in \cite{Berman00} provides a $1/2$-approximation.
We now present a reduction from FNE$^+_{aa}(c)$-nr to W3SP, similar in spirit to that defined, for positive-only externalities, in \cite{Fotakis}. \begin{theorem}\label{th:apx_3-set-pack}
Given an $\alpha$--approximate algorithm for problem W3SP, we can obtain an $(\alpha \gamma_{min}^c)$-approximation algorithm for problem FNE$^+_{aa}(c)$-nr.
\end{theorem}
\begin{proof}
Given an instance of FNE$^+_{aa}(c)$-nr, we obtain an instance of W3SP by means of the following reduction.
To simplify the presentation, we suppose that $K$ is even (the proof can be easily extended for an odd $K$).
We divide $K$ into $K/2$ blocks of two slots each.
We construct a collection of $\frac K 2\cdot \binom{N}{2}$ sets, each set having the form $\{a_i, a_j, p\}$, where $p\in\{1,3,5,\ldots,K-1\}$ and $i, j \in \mathcal{N}$. The weight of a set is defined as the maximum social welfare that ads $a_i$ and $a_j$ can provide when assigned to slots $s_p$ and $s_{p+1}$ without taking into consideration the externalities of $a_i$ and $a_j$ on the ads allocated to slots $s_m$, $m \neq p, p+1$. Specifically,
$ W(a_i, a_j, p) = \max\{ \Lambda_p q_i v_i + \Lambda_{p+1} \gamma_{i,j} q_j v_j, \Lambda_p q_j v_j + \Lambda_{p+1} \gamma_{j,i} q_i v_i \}.
$
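In code, the set weights of the reduction are immediate (our sketch, with $1$-based \texttt{Lam} indexing so that \texttt{Lam[p]} is the prominence of $s_p$):
\begin{verbatim}
def w3sp_weight(i, j, p, q, v, Lam, gamma):
    a = Lam[p] * q[i] * v[i] + Lam[p + 1] * gamma[i][j] * q[j] * v[j]
    b = Lam[p] * q[j] * v[j] + Lam[p + 1] * gamma[j][i] * q[i] * v[i]
    return max(a, b)                  # best within-block order
\end{verbatim}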
Note that there is an immediate mapping between solutions of W3SP and FNE$^+_{aa}(c)$-nr. For a solution $\theta_S$ of W3SP, let $W(\theta_S)$ denote its total weight.
Now, let $\theta_S^*$ and $\theta^*$ denote, respectively, an optimal allocation for W3SP and an optimal allocation for FNE$^+_{aa}(c)$-nr. Furthermore, let $\theta_S^{\alpha}$ be an $\alpha$-approximate solution for W3SP, and $\theta^{\alpha}$ be the corresponding solution to FNE$^+_{aa}(c)$-nr. Since in W3SP, outer-block externalities are not taken into consideration, we have:
$W(\theta^*_S) \geq SW(\theta^*)$ and
$SW(\theta^\alpha) \geq \gamma_{\min}^c W(\theta_S^\alpha)$.
From these inequalities we obtain: $SW(\theta^{\alpha}) \geq \gamma_{\min}^c W(\theta_S^{\alpha}) \geq \alpha \gamma_{\min}^c W(\theta^*_S) \geq \alpha \gamma_{min}^c SW(\theta^*)$.
\end{proof}
\begin{corollary}\label{corl:constant_apx}
If $\gamma_{min}$ is bounded from below by a constant (i.e., $\gamma_{min}\in \Omega(1)$), then FNE$^+_{aa}(c)$-nr is approximable within a constant factor.
\end{corollary}
It can be easily shown that the above algorithm is not monotone.
\begin{theorem}
The algorithm of Theorem \ref{th:apx_3-set-pack} is not monotone
\end{theorem}
\begin{proof}
Consider an instance $I$ of FNE$^+_{aa}(1)$-nr with $N=K=4$ wherein $\Lambda_3 \gamma_{z,4} < \Lambda_4 \gamma_{3,4}$, for $z \in \{1,2\}$, $v_1, v_2 \gg v_3, v_4$ and $\gamma_{1,2}=\gamma_{2,1}=1$ so that $W(a_1, a_2, 1)$ is much bigger than any other $W(a_i, a_j, 1)$. Therefore, any reasonable approximation of the W3SP instance constructed upon $I$ must return sets $\{a_1, a_2, 1\}$ and $\{a_3, a_4, 3\}$. Additionally consider $v_4 < \frac{\Lambda_4 \gamma_{4,3}}{\Lambda_3^2-\Lambda_3\Lambda_4 \gamma_{3,4}}$ so that $W(a_3, a_4, 3)=\Lambda_3 q_3 v_3 + \Lambda_{4} \gamma_{3,4} q_4 v_4$. So the solution $\theta$ returned by the algorithm run on $I$ places $a_4$ in $s_4$, resulting in $CTR_4(\theta)=q_4 \Lambda_4 \gamma_{3,4}$. Take now the instance $I'$ defined as $I$ except that $v_1, v_2 \gg v_4' > \frac{\Lambda_4 \gamma_{4,3}}{\Lambda_3^2-\Lambda_3\Lambda_4 \gamma_{3,4}} > v_4$. As before, the approximation algorithm for W3SP will return sets $\{a_1, a_2, 1\}$ and $\{a_3, a_4, 3\}$ but this time $W'(a_3, a_4, 3)=\Lambda_3 q_4 v_4 + \Lambda_{4} \gamma_{4,3} q_3 v_3$. Therefore, the solution $\theta'$ returned by the algorithm run on $I'$ places ad $a_4$ in slot $s_3$, i.e., $CTR_4(\theta')=q_4 \Lambda_3 \gamma_{z,4}$, where $z \in \{1,2\}$ is the ad placed in slot $s_2$ in the allocation $\theta'$. The algorithm is therefore not monotone and cannot be used to design a truthful mechanism.
\end{proof}
\section{Approximating FNE$_{aa}$$(c)$-nr}
Similarly to the case $c=K$, Color Coding can be applied to design an exponential-time algorithm finding the optimal solution, and a simple modification of such an algorithm returns a $\frac{\log(N)}{2\min\{N,K\}}$ approximation in polynomial time. While the basic idea is the same, some details change here.
We denote by $S\subseteq C$ a subset of colors and by $\delta(a)$ a function returning the color assigned to $a$.
Given a coloring $\delta$, the best colorful allocation
is found by dynamic programming.
For $|S|>c$, $W(S,\langle a_{h_0},\ldots,a_{h_{c}} \rangle)$ contains the value of the best allocation with colors in $S$ in which the last $c+1$ ads are $a_{h_0},\ldots,a_{h_{c}}$ from top to bottom. (The definition naturally extends for $|S| \leq c$.) Starting from $W(\emptyset, \langle \rangle)=0$,
we can compute $W$ recursively. For instance, for $|S|>c$, $W(S\cup\{\delta(a_{h_c})\},\langle a_{h_0}, \ldots,a_{h_{c}} \rangle) = \Lambda_{|S|+1}v_{h_c}q_{h_c}\prod_{i=0}^{c-1}\gamma_{h_{i},h_{i+1}} +\max_{a}W(S,\langle a, a_{h_0},\ldots,$ $a_{h_{c-1}} \rangle)$ if $\delta(a_{h_c}) \not\in S$ and
$-\infty$ otherwise. Given a random coloring, the probability that the ads composing the best allocation are colorful is $\frac{K!}{K^K}$. Thus, repeating the procedure $r{e^K}$ times, where $r\geq 1$, the probability of finding the best allocation is $1-e^{-r}$.
The complexity is $O((2e)^KKN^{c+2})$.
The algorithm can be derandomized with an additional cost of $O(\log^2(N))$.
By applying the above algorithm to the first $K'$ slots, $K' = \min\{K,\lceil \log(N) \rceil \}$, we obtain an algorithm with complexity $O(K^{3.5}N^{c+2}\log_2^2(N))$. We observe that if $c$ is not a constant, the complexity is exponential. It is not too hard to note that such an algorithm is $\frac{\log(N)}{2\min\{N,K\}}$-approximate. Moreover, this algorithm is MIR and as such can be used to design a truthful mechanism.
\section{Conclusions}
We enrich the literature on externalities in SSAs by introducing more general ways to model slot- and ad-dependent externalities, while giving a (nearly) complete picture of the computational complexity of the problem.
In detail, we enrich the naive model of SSAs by adding: (\emph{i}) the concept of limited user memory,
(\emph{ii}) contextual externalities and (\emph{iii})
refreshable user memory (i.e., the reset model).
This gives rise to the FNE$_{sa}${} model, where ad- and slot-dependent externalities are factorized as in the cascade model, and the FNE$_{aa}${} model, where the externalities are not factorized.
We satisfactorily solve the problem for FNE$_{sa}${}, whereas our results leave unanswered a number of interesting questions, with regard to both approximation and truthfulness for FNE$_{aa}${}. The parameter $c$ is central to this list.
If $c$ is constant, then we do not know whether a constant approximation algorithm for FNE$_{aa}$$(c)$ exists; this holds also for the special case of FNE$^+_{aa}(c)$-nr when $\gamma_{min}$ is not a constant. In the latter case, when $\gamma_{\min}$ is instead constant we are not aware of any truthful constant approximation mechanism.
Motivated by the fact that FNE$_{aa}${}-r is, apparently, an easier problem than FNE$_{aa}${}-nr, we believe that an interesting direction for future research is to study reset in more detail in order to understand its role w.r.t. the relatively harder FNE$_{aa}${}-nr.
\section{Introduction}
The dynamics of molecular motors is an important topic in biophysics
and nanotechnology. In both the living and the artificial nanoscale
world, fast non-diffusive directed transport or rotary motion
constitutes a key ingredient of any complex structure. Molecular motors
are the ``nanomachines'' which perform these tasks
\cite{Schliwa,Dekker, Vale,Howard}. This definition covers a
considerable number of different molecules: motor proteins, such as
myosin and kinesin, RNA polymerases, topoisomerases, ...
In this paper we focus on the problem of directed motion over a
substrate, which is exemplified by kinesin \cite{Carter,Asbury2}.
Active transport in eukaryotic cells is driven by complex proteins
like kinesin, which moves cargo inside cells away from the nucleus
along microtubules, transforming chemical fuel (the ATP molecule) into
mechanical work. Kinesin is a two-head protein linked by a domain
(the neck) and a tail which attaches a cargo or vesicle to be carried. The
two heads perform a processive walk over the substrate (the
microtubule). The way in which this process is performed attracts great
interest in research in molecular biology as well as in biological
physics. In order to understand how kinesin works, two properties that
arise from the structure \cite{julicher, Howard} of the microtubules
cannot be forgotten: they have a regular, periodic structure and
structural polarity -- they are asymmetric with respect to their two
ends, which determines the direction of kinesin motion.
In the last fifteen years, experimental molecular biology has provided
a lot of new results which allow one to elucidate, at the mesoscopic level,
the main mechanisms of directed transport. This experimental
evidence is mostly based on single-molecule
experiments~\cite{Visscher, Ritort}. The interpretation of these
results is not always easy and many times they are not conclusive on the
detailed way in which the motor walks. Two basic mechanisms have been
proposed to explain the kinesin motion: ``inchworm'' and
``hand-over-hand'' motion (see figure \ref{fig:hand_inch}). In the
first case, one head does not overtake the other one. In this case the
period of the motion is one period of the microtubule structure ($l_0$
in the figure). In the hand-over-hand mechanism one head overtakes the
other. Now the period for each head is doubled ($2l_0$). In both
cases the center of mass advances the same distance. Although the first
single-molecule experiments were compatible with both mechanisms, more
recent experiments have shown \cite{Yildiz,Asbury,Schief,Hua} in a
very clever way that hand-over-hand motion may be more plausible.
\begin{figure}
\includegraphics[width=8.5cm]{fig1_bis.eps}
\caption{\label{fig:hand_inch} (Color online) Schematic representation
of the possible mechanisms of motion for the kinesin motor. In the
hand-over-hand case each head moves a distance equal to $2l_0$
whereas in the inchworm the period of motion is $l_0$.}
\end{figure}
Two strategies can be devised in order to model the motion of
molecular motors~\cite{Fisher,Chow}. On the one hand, continuous models
based on mirror-symmetry-breaking potentials (ratchet
potentials~\cite{Reimann}) or time-symmetry-broken driving
forces~\cite{Chacon}. On the other hand, discrete kinetic models,
which are based on the solution of master equations associated to
different states of the motor (see~\cite{Fisher} and references
therein). Using either approach both mechanisms have been
studied: inchworm~\cite{Cilla,Sancho2} or
hand-over-hand~\cite{SaoGao,Ping,Sancho1,Sasaki}.
In this work we study a minimalist mechanical continuous model for
hand-over-hand motion that, we believe, captures the main features of
biological motors. The model also takes into account the properties of
the microtubule substrate. The article is organised as follows: first
we cast a two-dimensional model which can mimic the motion of the
motor. Within a reasonable range of parameter values we explore different
regimes of motion. In the conclusions section we will discuss the
validity of the results to model a molecular motor.
\section{2-D MODEL}
In order to find a suitable model for the kinesin motor, its
properties must be studied carefully. Ref. \cite{Carter} summarizes
all these features: kinesin is a two-head protein which moves along
the microtubule with $8.3\,nm$ steps, matching the repeat distance of
the microtubule lattice; each step needs one ATP molecule, which is hydrolyzed, and
the movement stalls when a backward load of $7\,pN$ is
applied. Experiments reported in \cite{Yildiz} show, by marking one of
the heads, that the motion follows the hand-over-hand mechanism,
as a $16.6\,nm$ step is observed for each head, thus ruling out the
movement proposed in the inchworm mechanism.
The description of the movement is rather simple: the two heads of the
kinesin are attached to the microtubule \cite{Mori} in two neighboring
monomers until one ATP molecule is hydrolyzed by the rear
head. This energy frees the head, which moves to a new binding
place ahead of the other one. Two complementary mechanisms to understand
how the released head is able to find the next binding site have
been proposed~\cite{Carter,Tomi}: (a) The \emph{neck linker} mechanism
assumes a conformational change in the neck between heads which moves
the free head from one place to the next forwards. (b) The
\emph{diffusional search} relies on the assumption that the noise
associated to the thermal bath that surrounds the particle makes the
free particle move, and this movement is preferentially forwards, being
biased by the particle ahead, which is attached to the microtubule.
Thermal fluctuations play a central role in the whole process. At the
nanometer length scale and at room temperature, motion is governed
by the randomness induced by the environment (in this case the cytosol,
made up mainly of water). At this scale damping and thermal noise are
dominant and the dynamics can be studied by an overdamped Langevin
equation:
\begin{equation}
\gamma \frac{d{\bf r}}{dt} = -{\bf \nabla} V({\bf r}) + {\bf F}
({\bf r},t)+{\boldsymbol \xi}(t).
\label{eq:overdampedlangevin}
\end{equation}
Here ${\bf F}$ stands for external forces and ${\boldsymbol \xi}$ for thermal
noise, with
\begin{equation}
< \xi_j(t)\xi_k(t') > = 2\,\gamma\,k_B\,T\, \delta(t-t') \delta_{jk}
\label{eq:autocorrelacion}
\end{equation}
($\xi_j$ and $\xi_k$ are Cartesian components of the vector
${\boldsymbol \xi}$).
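For concreteness, the following minimal sketch (in Python, with a
placeholder force function and parameter values) shows how
Eq.~(\ref{eq:overdampedlangevin}) could be integrated with a simple
Euler--Maruyama step; the actual simulations below use a higher-order
stochastic Runge--Kutta scheme instead:
\begin{verbatim}
import numpy as np

def euler_maruyama(r0, force, gamma, kBT, dt, nsteps, rng):
    # Integrate gamma * dr/dt = F(r, t) + xi(t) in the overdamped limit.
    # The autocorrelation of xi discretizes to Gaussian increments of
    # standard deviation sqrt(2 * gamma * kBT * dt).
    r = np.asarray(r0, dtype=float).copy()
    traj = np.empty((nsteps + 1, r.size))
    traj[0] = r
    for n in range(nsteps):
        xi = rng.normal(0.0, np.sqrt(2.0 * gamma * kBT * dt), size=r.size)
        r += (force(r, n * dt) * dt + xi) / gamma
        traj[n + 1] = r
    return traj
\end{verbatim}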
\subsection{Energy potentials}
We will model the kinesin as two interacting particles moving in the
plane under the effect of flashing ratchet substrate potentials (two
particles moving in two dimensions).
The potential energy of the system is given by
\begin{equation}
V({\bf r_1},{\bf r_2})=V_1({\bf r_1},t)+V_2({\bf r_2},t)+V_{12}({\bf
r_1}-{\bf r_2})
\end{equation}
The two heads of the kinesin are linked through a modified version of
the Finite Extensible Non-linear Elastic (FENE) interaction
\cite{FENE}:
\begin{equation}
V_{12}(r)=-\,\frac{1}{2}K\,R_0^2\,\log \left( 1 - \frac{(r-l_0)^2}{R_0^2} \right),
\label{FENEeq}
\end{equation}
where $r=|{\bf r_1}-{\bf r_2}|$, $K$ is the stiffness of the neck,
$l_0$ is the equilibrium distance between the heads, and $R_0$ determines
the maximum allowed separation, $l_0-R_0<r<l_0+R_0$.
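As an illustration, Eq.~(\ref{FENEeq}) and the corresponding radial
force translate directly into code (a sketch; the function names are
ours):
\begin{verbatim}
import numpy as np

def fene_potential(r, K, l0, R0):
    # Modified FENE neck potential; valid for |r - l0| < R0.
    return -0.5 * K * R0**2 * np.log(1.0 - (r - l0)**2 / R0**2)

def fene_force(r, K, l0, R0):
    # Radial force -dV12/dr; it diverges as |r - l0| -> R0,
    # enforcing the finite extensibility of the neck.
    return -K * (r - l0) / (1.0 - (r - l0)**2 / R0**2)
\end{verbatim}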
With respect to the substrate potentials, in order to model the
characteristics observed, two periodic flashing ratchet potentials
lagged half a period in the $x$ direction will be used.
\begin{equation}
V_j({\bf r},t)=V_j({\bf r})f_j(t).
\end{equation}
where $j=1,2$. In the $x$ direction the potentials are periodic with
period $2l_0$, and $V_2$ is displaced by $l_0$, the period of the
microtubule lattice~\cite{l0}, with respect to $V_1$:
\begin{equation}
V_1({\bf r}+2l_0 {\bf \hat{x}})=V_1({\bf r})=V_2({\bf r}+l_0 {\bf \hat{x}}).
\label{period}
\end{equation}
The mathematical description of the 2d potential associated with
particle 1 [see Fig.~(\ref{fig:potential})] is the following:
\begin{equation}
V_1(x,y) = V_{1x}(x) + V_{1y}(y)
\end{equation}
with
\begin{equation}
V_{1x}(x)=\left\{ \begin{array}{clc} \frac{x}{x_M}V_0 & {\rm if} & 0
\leq x \leq x_M \\
\\ \frac{2l_0-x}{2l_0-x_M}V_0 & {\rm if} & x_M \leq x
\leq 2l_0. \\
\end{array}
\right.
\end{equation}
$x_M$ controls the asymmetry of the potential; if $x_M=l_0$ the
potential is symmetric.
In order to confine the particles in the microtubule channel, we
choose a simple parabolic dependence in the $y$ direction:
\begin{equation}
V_{1y}(y)=\frac{1}{2}k_y\cdot y^2
\end{equation}
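A sketch of the full substrate potential of particle 1 (and of $V_2$,
obtained through the shift of Eq.~(\ref{period})) may help clarify the
geometry; all names here are ours:
\begin{verbatim}
import numpy as np

def V1(x, y, V0, l0, xM, ky):
    # Asymmetric sawtooth in x (period 2*l0, maximum V0 at x = xM,
    # minima at x = 2*n*l0) plus a parabolic confinement in y.
    xp = np.mod(x, 2.0 * l0)
    V1x = np.where(xp <= xM,
                   V0 * xp / xM,
                   V0 * (2.0 * l0 - xp) / (2.0 * l0 - xM))
    return V1x + 0.5 * ky * y**2

def V2(x, y, V0, l0, xM, ky):
    # V2 is V1 displaced by l0 along x (half the spatial period).
    return V1(x - l0, y, V0, l0, xM, ky)
\end{verbatim}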
\begin{figure}[]
\includegraphics[width=8.5cm]{fig2.eps}
\caption{(Color online) Surface plot of the 2d substrate potential
$V_1$. Minima correspond to $\widetilde{x}=2n$ ($n=0,\pm 1,...$) and
$\widetilde{y}=0$.}
\label{fig:potential}
\end{figure}
\begin{figure}[t]
\includegraphics[width=8cm]{fig3.eps}
\caption{(Color online) Time sequence for the flashing ratchet
substrate potentials. Each potential acts on a different
particle. Note that this sequence follows the attach--detach
pattern of the hand-over-hand motion,
Fig.~\ref{fig:hand_inch}.}
\label{fig:time}
\end{figure}
We still have to define $f_j(t)$. The idea is to reproduce a cyclic
motion. Such a cycle has 4 steps, see Fig.~(\ref{fig:time}). First
($t=0$) both particles are confined close to the minima of their
respective potentials and are thus separated by an average distance $l_0$
(the natural length of the neck). After a given time $t_{\rm on}$ some
energy arrives at, for instance, particle 1, which then does not see its
substrate potential for a time $t_{\rm off}$. During this time the
particle undergoes thermal diffusion, subjected only to the interaction
with the other particle. When $V_1$ is switched on again at $t=t_{\rm
on}+t_{\rm off}$, the particle slides down towards some minimum energy
position. This step lasts another $t_{\rm on}$ time and then, at
$t=2t_{\rm on}+t_{\rm off}$, $V_2$ is switched off for a time $t_{\rm off}$,
closing the cycle. The total period of this cycle is $T=2t_{\rm
on}+2t_{\rm off}$. As we will see, thanks to the asymmetric character
of the potential a directed motion is obtained.
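The switching functions $f_j(t)$ of this cycle can be sketched as
follows (a minimal illustration; the implementation details are ours):
\begin{verbatim}
def potentials_on(t, t_on, t_off):
    # Flashing sequence, total period T = 2*(t_on + t_off):
    #   [0, t_on)                       both potentials on
    #   [t_on, t_on + t_off)            V1 off: particle 1 diffuses
    #   [t_on + t_off, 2*t_on + t_off)  both on: slide to a minimum
    #   [2*t_on + t_off, T)             V2 off: particle 2 diffuses
    s = t % (2.0 * (t_on + t_off))
    f1 = 0.0 if t_on <= s < t_on + t_off else 1.0
    f2 = 0.0 if s >= 2.0 * t_on + t_off else 1.0
    return f1, f2
\end{verbatim}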
In order to compute the efficiency of the motion we define an efficiency
parameter given by:
\begin{equation}
\varepsilon = \frac{\langle \Delta x_1 \rangle}{2l_0} \times 100,
\end{equation}
where $\langle \Delta x_1 \rangle$ is the average advance of particle
1 (for instance) per cycle of the potential. Note that our definition
of efficiency is basically a measure of the velocity of the motion; in
fact, the mean velocity can be computed as $v_{\rm mean}=\frac{\varepsilon}{100}
\times 2l_0/(t_{\rm on}+t_{\rm off})$. It is not related to the ratio
between input energy and output work.
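In practice, $\varepsilon$ is estimated from a long simulated
trajectory; a trivial sketch (names are ours):
\begin{verbatim}
def efficiency(x1, l0, n_cycles):
    # <dx1> per flashing cycle of particle 1, as a percentage of the
    # full step 2*l0; x1 is the recorded x trajectory of particle 1.
    return 100.0 * (x1[-1] - x1[0]) / n_cycles / (2.0 * l0)
\end{verbatim}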
The important parameter here is $t_{\rm off}$, the time a particle has
for its diffusive motion. $t_{\rm on}$ only needs to be long enough
to allow relaxation towards a minimum, which in overdamped
dynamics happens very fast. Thus, in our simulations we have explored
different values of $t_{\rm off}$ and set $t_{\rm on}=t_{\rm
off}$. This value corresponds to a duty ratio $r=t_{\rm on}/(t_{\rm
on}+t_{\rm off})=0.5$, which guarantees the processivity of the
motion~\cite{Howard,Chow}.
\subsection{\label{ss:norm}Normalization}
We will measure distances in units of $l_0=8.3$\,nm, the distance
between monomers in the microtubule, see also~\cite{l0}. Energy is measured
in units of $V_0$, the maximum value of the substrate potential. We
choose $V_0 \simeq E_{\rm ATP} \simeq 20$ $k_BT$ (at 300\,K)
\cite{com1}. The natural unit of time is then $\tau=l_0^2 \gamma / V_0
\simeq 40$\,ns. Here, $\gamma$ is the damping coefficient used in the
Langevin equation ($\gamma=6\pi\eta r= 4.7 \cdot 10^{-11}
\frac{kg}{s}$, with $\eta=10^{-3}$\,Pa\,s the viscosity of water and
$r=25\,\AA$ the size of the head).
From now on we will use \;$\widetilde{}$\; signs for normalized variables:
\begin{eqnarray}
\widetilde{x}=\frac{x}{l_0} \qquad ; \qquad \widetilde{t}=\frac{t}{\tau} \quad
; \qquad \widetilde{V}=\frac{V}{V_0} \nonumber \\
\widetilde{T}=\frac{k_B\,T}{V_0} \qquad {\rm and} \qquad \widetilde{Q}=\frac{l_0\, Q}{V_0}
\label{eq:normalizaciones}
\end{eqnarray}
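As a quick check of the numbers quoted above (in SI units):
\begin{verbatim}
import numpy as np

kB, T = 1.380649e-23, 300.0          # J/K, K
l0 = 8.3e-9                          # m, microtubule period
V0 = 20.0 * kB * T                   # J, of the order of E_ATP
eta, r_head = 1.0e-3, 25e-10         # Pa s (water), m (head radius)

gamma = 6.0 * np.pi * eta * r_head   # Stokes drag: ~4.7e-11 kg/s
tau = l0**2 * gamma / V0             # natural time unit: ~40 ns
\end{verbatim}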
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=8.5cm]{fig4a_bis.eps}\\\includegraphics[width=8.5cm]{fig4b.eps}\\\includegraphics[width=8.5cm]{fig4c.eps}\\
\end{tabular}
\caption{\label{fig:dynamics} $\widetilde{x}$($\widetilde{t}$) for the two
particles (top and middle) and trajectory in the $\widetilde{x}$--$\widetilde{y}$
plane (bottom). The middle figure also shows the flashing dynamics of the
substrate potentials (the base lines correspond to the {\em on} periods).}
\end{figure}
\section{\label{sec:results} RESULTS}
We present our results based on the numerical integration
of the normalized system of equations for the two particles. The
integration algorithm we use is a version of the Runge-Kutta algorithm
for the integration of stochastic differential equations ($3_O\,4_S\,2_G$)
\cite{sde1,sde2}. With respect to the different constants and
parameters, unless otherwise stated, the default normalized
parameters are $\widetilde{T}=0.05$ (300 K), $\widetilde{t}_
{\rm off}=\widetilde{t}_ {\rm on}=20$, $\widetilde{K}=10$,
$\widetilde{k}_y=1$, $\widetilde{R}_0=0.4$ and $\widetilde{x}_M=0.5$.
\subsection{\label{sub:dynamics} Dynamics of the system}
Fig.~\ref{fig:dynamics} shows a typical example of the dynamics of the
system at the parameter values listed above. There we can see that the
simulations reproduce the expected mechanism, a hand-over-hand net
advance of the molecule. The middle panel shows a detail of the top one.
When both potentials are on, the particles perform random motions around
the minimum potential energy positions. However, as one of the potentials
is turned off, the particle linked to it starts to diffuse in 2d. The
importance of the asymmetric mechanism is fully understood here. After
$t_{\rm off}$, when the potential is turned on again, most of the time
the particle sits to the right of the maximum of the asymmetric
potential and then typically moves down to the nearest minimum
position. As we have said, due to the asymmetry of the potential, this
minimum more frequently corresponds to the one to the right of the
original one. Clearly, the more asymmetric the potential is, the more
likely the system moves forward.
In the bottom graph of Fig.~\ref{fig:dynamics} we show the trajectories
of the particles in the $\widetilde{x}$--$\widetilde{y}$ plane. The
distance between the heads fluctuates around the rest distance $l_0$.
Motion in the $x$ direction usually happens when one of the substrate
potentials is off; otherwise the particles stay most of the time close
to a minimum energy position.
\subsection{Efficiency as a function of $t_{\rm off}$ and $T$}
\begin{figure}
\includegraphics[width=8.5cm]{fig5.eps}
\caption{\label{fig:histo_x} $x$-axis projection, at different
normalized times $\widetilde{t}$, of the diffusion of a particle attached
to another one fixed at $(1,0)$, when no substrate potential is being
applied. Data obtained at $\widetilde{T}=0.05$.}
\end{figure}
A first estimation of the time needed for a particle to reach the next
minimum can be easily worked out by using the 2D diffusion equation for the
particle probability distribution $p(\varphi,t)$,
\begin{equation}
\label{eq:diffusion}
\frac{\partial p}{\partial t} = D \, \nabla^2 p.
\end{equation}
Writing [\ref{eq:diffusion}] in polar coordinates and assuming that
the distance between heads $r$ is constant, the equation reads
\begin{equation}
\label{eq:polar}
\frac{\partial p}{\partial t} = D\,\left[ \frac{\partial^2
p}{\partial r^2} + \frac{1}{r}\,\frac{\partial p}{\partial r} +
\frac{1}{r^2}\,\frac{\partial^2 p}{\partial \varphi^2} \right]_{r=const.} =
\frac{D}{r^2}\frac{\partial^2 p}{\partial \varphi^2}
\end{equation}
This equation can be solved (by Fourier transformation, for
instance) with the appropriate initial condition
\begin{equation}
\label{eq:ini_cond}
\left. p(\varphi,t) \right|_{t=0} = \delta (\varphi)
\end{equation}
to give the normalized distribution $p(\varphi,t)$,
\begin{equation}
\label{eq:sol_dif}
p(\varphi,t)=\frac{r}{2\sqrt{\pi\,D\,t}} \exp\left( \frac{-r^2
\varphi^2}{4\,D\,t} \right)
\end{equation}
From this result, the mean squared angle reached at time $t$ is given by
\begin{equation}
\langle \varphi^2 \rangle = \frac{2 D t }{r^2} = \frac{2 k_B\,T t }{\gamma r^2}
\end{equation}
where we have used the Stokes-Einstein relation:
\begin{equation}
D=\frac{k_B\,T}{\gamma}.
\end{equation}
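The histograms of Fig.~\ref{fig:histo_x} can be reproduced by sampling
Eq.~(\ref{eq:sol_dif}) directly. A sketch in normalized units (where
$\widetilde{D}=\widetilde{T}$, here $0.05$), with the geometry of the
figure (fixed head at $(1,0)$, free head starting at the origin,
$\widetilde{r}=1$):
\begin{verbatim}
import numpy as np

def x_projection(D, t, r=1.0, n=100000, seed=0):
    # phi is Gaussian with variance 2*D*t/r**2 (Eq. eq:sol_dif).
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, np.sqrt(2.0 * D * t) / r, size=n)
    # Free head on a circle of radius r around the fixed head at (1,0);
    # it starts at x = 0, and x exceeds xM = 0.5 when |phi| > pi/3.
    return 1.0 - r * np.cos(phi)
\end{verbatim}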
Let $\varphi_M$ be the angle at which the maximum of the potential is
placed, which is determined by the $x$ position of that maximum,
$x_M$. If the particle is at an $x$ position smaller than $x_M$ when the
potential turns on, it will return to its original position. However,
if $x>x_M$ the particle will move forward. Assuming that the maximum
is placed at $\widetilde{x}_M=0.5$, and
$\widetilde{r}=1$, we obtain $\varphi_M=\pi/3$. Figure
\ref{fig:histo_x} shows the time evolution of the probability
distribution (projected on the $x$-axis). It is clearly observed that,
as time goes on, the probability that the particle has crossed the
maximum $x_M$ increases. The time at which the crossing probability
reaches $1/2$ is simply given by
\begin{equation}
\label{eq:t_half}
t = 0.674 \frac{r^2 \varphi_M^2 \gamma}{2 k_B\,T}.
\end{equation}
With the values given above, the dimensionless time (for temperature
$\widetilde{T}=0.05$) is $\widetilde{t}\sim 16$. For this $t_{\rm off}$ time
the efficiency of the motor is half of its maximum, which is fixed
by $x_M$ (see below).
Finally, we analyze the behavior of the efficiency with temperature,
Fig.~\ref{fig:T_t_int}. For low temperatures we need long $t_{\rm
off}$ times to reach a reasonable efficiency, as expected from equation
[\ref{eq:t_half}], and we do not reach an asymptotic limit. For intermediate
temperatures, $\widetilde{T}=0.03-0.05$, the highest efficiency is
achieved. Moreover, in the limit of temperatures high compared to the
2d potential barrier and long $t_{\rm off}$, the efficiency starts to fall,
as backward movement becomes more likely (the free particle can drag
the confined one).
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig6.eps}
\caption{\label{fig:T_t_int} Efficiency as a function of
$\widetilde{t}_{\rm off}$ for different normalized temperatures
$\widetilde{T}$.}
\end{figure}
\begin{figure}
\includegraphics[width=8.5cm]{fig7.eps}
\caption{
\label{fig:x_m_t_int} Efficiency as a function of $\widetilde{t}_{\rm off}$ at
different positions of the maximum $\widetilde{x}_M$. When
$\widetilde{x}_M=1.0$, the potential becomes symmetric and no rectified
movement is observed.}
\end{figure}
\subsection{Efficiency at different asymmetries}
Here we present results on the behavior of the system as the asymmetry
of the potential changes, with $x_M$ the parameter that controls it
($x_M$ fixes the position of the maximum within the $2l_0$-periodic
potential, so $\widetilde{x}_M=1$ corresponds to the symmetric
case). At a given $t_{\rm off}$ time, the efficiency depends strongly
on this parameter. The mechanism is inefficient for a symmetric
potential, and the largest efficiency is obtained for the most
asymmetric one.
Fig.~\ref{fig:x_m_t_int} shows the numerically simulated
efficiency as a function of $t_{\rm off}$ for different values of
$x_M$. As we reduce the asymmetry the efficiency tends to zero, as
shown by the $\widetilde{x}_M=1.0$ line, which corresponds to a symmetric
potential. On the other hand, the efficiency of the mechanism increases
as we make the potential more asymmetric. In all cases, when we
increase $t_{\rm off}$ the efficiency grows from zero and saturates at
its maximum value for long enough values of this parameter.
\begin{figure}
\includegraphics[width=8.5cm]{fig8.eps}
\caption{\label{fig:carga_t_int} Efficiency as a function of the
external load applied, $\widetilde{Q}$. Each line refers to a
different $\widetilde{t}_{\rm off}$.}
\end{figure}
\begin{figure}
\includegraphics[width=8.5cm]{fig9.eps}
\caption{\label{fig:carga_T} Efficiency as a function of the external
load applied $\widetilde{Q}$, for different normalized temperatures
$\widetilde{T}$.}
\end{figure}
\subsection{Dynamics under external loads}
In this section we want to explore the experimental results reported
in Ref.~\cite{Carter}, where backward stepping was observed under
high backward loads. It is thus worth studying how the system behaves
under the effect of an external force.
Modeling the effect of such a load is not trivial. We have to decide
how the total load $Q$ is divided between the two heads of the
protein. It seems obvious that a head can oppose the applied force
only when it is fixed to the
microtubule. Therefore, the following mechanism is proposed: if only
one head has its potential switched on, it bears the
whole opposing load. On the other hand, if both heads have their
potentials on, each bears a force $Q/2$.
The expected behavior of the system is the following: as one potential
turns off, its associated head starts diffusing. The other particle
feels a force $Q$, which doubles the previous $Q/2$. Therefore, if that
force is strong enough, the particle starts climbing the potential
slope. The asymmetric potential again plays an important role: if the
external force is positive, the particle faces the steepest slope of
the potential, so a larger force than in the negative case is needed.
Fig.~\ref{fig:carga_t_int} shows the relationship between the external
load and $t_{\rm off}$. The most important characteristic is the value
of the load at which the system does not move (zero efficiency), i.e.,
the stall force. For negative loads, as $t_{\rm off}$ shortens, greater
forces are needed to make the particle move backwards. On the other hand,
when long times are employed, the mechanism seems to reach a limit around
$\widetilde{Q}=-0.5$.
We have also studied the effect of temperature on the
mechanism. Results are shown in Fig.~\ref{fig:carga_T}, where the
efficiency versus the external load for a given value of
$\widetilde{t}_{\rm off}=20$ is plotted at different temperatures. An
almost linear relation between the critical load and the temperature is
obtained in this range.
\begin{figure}
\includegraphics[width=8.5cm]{fig10.eps}
\caption{\label{fig:l0_t_int} Efficiency as a function of the natural
length of the neck, $\widetilde{l}_0'$, for different values of $\widetilde{t}_{\rm
off}$.}
\end{figure}
\subsection{Varying the natural length of the neck}
Up to now we have studied the case where the two space lengths of the
system, the distance between monomers in the microtubule and the
natural length of the neck, are equal (both are $l_0$). In this
section we have extended our work to the study of the case when the
natural distance between the heads is different from the spatial unit,
fixed by the distance between monomers in the microtubule. Then, in
our model, $l_0$ need to be replaced by $l_0'$ in Eq.~(\ref{FENEeq}).
Fig.~\ref{fig:l0_t_int} provides strong evidence of the striking
behavior observed as the natural length of the neck
tends to 0: $100\%$ efficiency is achieved. This almost deterministic
mechanism can be understood with the help of Fig.~\ref{fig:l0_moves}
and consists of three steps: (a) We start with one particle sitting at a
minimum of the potential and the other one ahead (the latter feels a small
force since the potential slope there is also small). (b) As the first
potential disappears, the second particle moves to its minimum,
dragging the other one. (c) Now the potential turns on again, making
the first particle move ahead, and we recover a situation equivalent
to step (a).
\begin{figure}
\includegraphics[width=5.5cm]{fig11.eps}
\caption{\label{fig:l0_moves} (Color online) Schematic explanation of
the almost deterministic motion observed when $l_0'$ is close to zero.}
\end{figure}
\begin{figure}
\includegraphics[width=8.5cm]{fig12.eps}
\caption{\label{fig:l0_T} Efficiency versus $\widetilde{l}_0'$ for
different temperatures, $\widetilde{T}$.}
\end{figure}
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig13.eps}
\caption{\label{fig:l0_K} Efficiency versus $\widetilde{l}_0'$ for
different values of $\widetilde{K}$.}
\end{figure}
Fig.~\ref{fig:l0_T} shows results at different temperatures. First of
all, the deterministic $T=0$ limit must be carefully explained. In
this limit, there are only two possible values for the efficiency:
$0\%$, associated with the range $\widetilde{l}_0'\in \left( 0.5,1.5
\right)$, and $100\%$ for $\widetilde{l}_0'\in \left[ 0.0,0.5 \right)\cup
\left( 1.5,2.0 \right]$. These two regions can be fully explained
using the mechanism described above. In the $0\%$ case, switching off
one of the potentials makes the other particle move to a minimum,
but not to the minimum ahead, so no net forward movement is produced.
There is just one parameter left to be discussed, namely the
stiffness of the linker between the heads of the motor. The study of the
efficiency as a function of $l_0'$ at different values of $K$ is shown
in Fig.~\ref{fig:l0_K}. For $\widetilde{l}_0' < 0.5$, the stiffness of the
neck determines whether the particles prefer to sit at their minima,
no matter how far apart they are, or at an intermediate position, as
sketched in Fig.~\ref{fig:l0_moves}(a).
\section{CONCLUDING REMARKS}
We have studied a simple mechanical model for hand-over-hand motion in
two dimensions. This model takes into account some important
characteristics of two-headed biological motors such as kinesin. These
characteristics are incorporated into the model in a simple but
realistic way. The hand-over-hand mechanism requires a two-dimensional
space. Unidirectionality is provided by the ratchet potential in the
advance direction. The balance between on and off times controls the
efficiency and processivity of the motion. With all these ingredients
we have been able to simulate the most remarkable features of kinesin
motion within reasonable values of the parameters. Specifically, we
have clearly observed a stochastic directed motion in which the particles
alternate with each other (hand-over-hand). Moreover, a strong dependence
of the stall force on the off time and the temperature has been
found. Temperature decreases the stall force with respect to the one
expected from energetic calculations. This decrease in the motor
efficiency agrees with experimental observations \cite{Carter,
Fisher}.
Several improvements to the model can be considered in future work. A
link between $t_{\rm off}$ and the ATP concentration could be
established. This would imply a random flashing force instead of the
periodic one used here. Another interesting extension of the model
could allow the motor to change lanes along the $y$ axis. This could be
easily implemented by using a periodic potential in the transverse
direction.
Finally, we have to stress that the characterization of the behavior
and properties of these motors and of the mechanisms behind them is an
initial step toward the construction of synthetic nanoscale motors.
This is a very active field in the nanoscience world. There have been
some successful achievements in this field, including triptycene
motors \cite{motor1}, helicene motors \cite{motor2} and a nanotube
nanomotor \cite{nanotubo}. In this article, we have shown the
conditions under which a nanowalker can work.
\begin{acknowledgments}
We thank L.~M. Flor\'{\i}a for helpful comments and discussion. Work
is supported by the Spanish DGICYT Project FIS2005-00337.
\end{acknowledgments}
\section{Introduction}
This work presents a study of ancient stained glass windows as they might have been used in Cistercian art in France in the middle of the XIIIth century. Using a model of the abbey church of Royaumont, optical simulations are carried out with Virtuelium, a physically-based open-source rendering software developed at Ecole Centrale Paris (France). More specifically, our interest is to produce consistent visualizations based on some hypotheses about the visual appearance of Cistercian stained glass windows as they could have been seen in the church around 1250 AD. We use several samples of medieval glass for this study, many of them coming from the abbey of Maubuisson, which is very close to Royaumont in both geographic location and time. The methodology also required more recent glasses, although manufactured in a very traditional way, such as the complete set of Saint-Just glasses (Saint-Gobain group) that we also used.
Interactions between natural lighting and medieval glass materials are hard to render in their full complexity because of the heterogeneity of these materials in terms of optical behavior. For instance, measurements on glass tiles revealed a non-uniform thickness and the presence of micro-particles of air inside the tiles, which are responsible for an irregular volume scattering of light. Traditional physically-based rendering engines~\cite{Pharr:2010:PBR:1854996, LuxRender, Shirley:2012:BPR:2407783.2407785} use extrinsic properties of materials, such as spectral reflectances and transmittances. These approaches are not complete enough to efficiently address our problem. One originality of the Virtuelium approach consists in coupling both extrinsic and intrinsic properties (optical constants).
The specific methodology of acquisition on glass samples is thus described in the first section. Section 2 presents some rendering algorithms, including those used in Virtuelium; we then give details about how these algorithms are computed in parallel. Finally, a presentation of the resulting images and speedups closes our demonstration.
\section{Data acquisition}
Extrinsic and intrinsic properties of glass materials can be measured with two distinct methods. Spectrophotometry~\cite{bass1995handbook} is commonly used to acquire spectral responses in reflection or transmission. This approach mainly consists in lighting the surface to be measured under known spectral (the emission spectrum) and geometrical (the incident angle) conditions. By analyzing the energetic quantities measured over a wavelength range, we can deduce the spectral functions for reflectance or transmittance. The material is then fully characterized at one point of its surface by repeating single measurements over a regular sampling of both incident and view angles. In the case of glass materials, the obtained function is called the Bidirectional Transmittance Distribution Function (BTDF), and it belongs to a larger family of spectral distribution functions.
Another method based on spectroscopic ellipsometry can be used to determine a BTDF. This second method offers a way to measure the complex index of refraction
\begin{equation}
\label{eq:optical}
\tilde{n}(\lambda) = n(\lambda)+ i k(\lambda) = n(\lambda) (1 + i \kappa(\lambda))
\end{equation}
which is referred to as the ``optical constants''~\cite{palik1985handbook, callet1998couleur}.
The quantity $n(\lambda)$ denotes the optical index, $k(\lambda)$ stands for the index of absorption and $\lambda$ represents the wavelength. In contrast to extrinsic properties, optical constants truly define the electronic behavior of dielectric materials, and not just a spectral response. Besides, they are often used to simulate the visual appearance of metallic surfaces~\cite{cgaBerger, Woollam199444, 5111815}, as these materials fully satisfy the Fresnel conditions (non-scattering and homogeneous). This, however, is not true for glass materials, and particularly for medieval glass materials.
In order to deal with the heterogeneity of medieval glass materials, input data are measured from a piece of an eight-hundred-year-old stained glass window coming from the Maubuisson abbey. On the other hand, a library of modern samples (from the Saint-Just Corporation) satisfying the Fresnel conditions is also used. Then, by following a rigorous colorimetric protocol, we select appropriate modern glass materials which are visually close to the ancient one. More details can be found in~\cite{Cerise:2012:NLM:2426256.2426337}.
\section{Rendering algorithms}
To simplify the following discussion, we only consider equations for the Bidirectional Reflectance Distribution Function (BRDF). The reasoning for the BTDF is the same, except that incident light is considered over the entire sphere rather than over a dome, since the studied surface is non-opaque. The global rendering equation is deduced from the Radiative Transfer Equation (RTE) and was described by Kajiya~\cite{Kajiya:1986:RE:15886.15902} as follows:
\begin{equation}
\label{eq:radiative_theo}
L_{r}(\vec{\omega_{o}}) = \int_{\Omega} F_{r}(\vec{\omega_{i}}, \vec{\omega_{o}}) L_{i}(\vec{\omega_{i}}) \, \vec{n}\cdot\vec{\omega_{i}} \, d \omega_{i}
\end{equation}
where $\Omega$ is the dome of incident directions. Literally, a radiative balance is calculated: the re-emitted light $L_{r}$ in a direction $\vec{\omega_{o}}$ depends on all the incident light $L_{i}$ arriving through the dome. It is then sufficient to know the BRDF, $F_{r}$, to compute the re-emitted light from a given incident light. In practice, equation (\ref{eq:radiative_theo}) can be simplified as follows if we consider only point or directional light-sources:
\begin{equation}
\label{eq:radiative_appli}
L_{r}(\vec{\omega_{o}}) = \sum_{s=1}^N F_{r}(\vec{\omega_{s}}, \vec{\omega_{o}}) L_{s}(\vec{\omega_{s}}) \, \vec{n}\cdot\vec{\omega_{s}}
\end{equation}
where $N$ is the number of light-sources, $L_{s}$ the emission spectrum of light-source $s$, and $\vec{\omega_{s}}$ the corresponding incident direction.
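As an illustration, the sum in Eq.~(\ref{eq:radiative_appli}) at a hit
point can be sketched as follows (names are ours; each $L_s$ is assumed
sampled on a common wavelength grid, see the spectral discussion in the
parallelization section below):
\begin{verbatim}
import numpy as np

def direct_lighting(brdf, normal, w_o, lights):
    # Direct lighting from point/directional sources; each light is a
    # pair (w_s, L_s), with L_s a spectrum stored as a numpy array, so
    # the accumulation is done per wavelength bin.
    L_r = 0.0
    for w_s, L_s in lights:
        cos_t = np.dot(normal, w_s)
        if cos_t > 0.0:          # back-facing sources do not contribute
            L_r = L_r + brdf(w_s, w_o) * L_s * cos_t
    return L_r
\end{verbatim}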
At the scale of a whole 3D scene, image rendering engines deal with two major kinds of interaction between lights and geometric objects. We talk about local illumination when the only light hitting an object comes directly from the light sources, without any other interaction. In contrast, the mutual contributions of all the objects present in the 3D scene are responsible for global illumination. As a result, rendering algorithms can be sorted into those which compute global illumination and those which ignore it. For now, Virtuelium provides one algorithm of each kind: the ``Scanline rendering'' technique for local illumination and ``Photon Mapping'' for global illumination.
\subsection{Without global illumination}
The "Scanline rendering"~\cite{Wylie:1967:HPD:1465611.1465619} is based on the idea of inverse ray tracing algorithm~\cite{Arvo86backwardray}. Indeed it is easier not to traverse light paths in the logic direction, but from the camera to light-sources. As the image to be computed can be viewed as a matrix of pixels, a light-ray is emitted from each pixel, orthogonally to the image plane. Once emitted, each light-ray evolves independently. When a ray intersects with the closest object on his road, two actions can be executed. In a first time, we have to evaluate the received luminance at the given viewed direction. In order to achieve this goal, new rays are shot from the hit point to each light-source, thus determining all the needed incident directions. Then, secondary rays are thrown regarding to reflection and/or refraction laws and the process is repeated. The algorithm stops when the energetic value attached to the ray goes bellow a threshold or after the ray has bounced a predetermined number of times. Additional information retrieved from a hit point also allows more complex computations. By example, this can be texture coordinates for spatial distribution maps.
The complexity of ray tracers is mostly contained in the complexity of the intersectors. That is why the main difference between algorithms of this family resides in the way the polygons of objects are sorted. Sorting in ``Scanline rendering'' is achieved by projecting every polygon onto the image plane. Then, the image is computed line by line, from top to bottom, determining the color of each pixel by considering the closest polygons around it. Other very common algorithms also exist (see the ``Z-buffer'' technique~\cite{Catmull:1974:SAC:907242}, which is nowadays implemented by default on graphics cards). The main advantage of ``Scanline rendering'' is that each pixel is only evaluated once. In return, the memory cost is high because all the polygons of the scene must be loaded at the same time, leading to bad performance for scenes with complex geometries.
\subsection{With global illumination}
Including global illumination (GI) in image rendering processes is a major step in the quest for photo-realism, but great differences exist between GI techniques. For example, ``Radiosity'' methods~\cite{Wallace:1987:TSR:37402.37438, Sillion:1989:GTM:74333.74368} transform the phenomenon of global illumination into a system of linear equations, with different ways of solving it: direct resolution~\cite{journals:vc:BuD89} (very effective but with a high complexity) or iterative algorithms~\cite{Cohen:1988:PRA:54852.378487}. In another direction, the stochastic ``Monte-Carlo'' algorithm~\cite{134595} is sometimes used despite its slower convergence. ``Path Tracing'' methods~\cite{CGF:CGF1863} launch random rays from the pixels of the image plane until one hits an object; they can be bi-directional (shooting rays from the camera and the sources simultaneously). The ``Metropolis Light Transport'' (MLT) algorithm~\cite{Hachisuka:2008:PPM:1409060.1409083} optimizes ``Path Tracing'' by replacing the random shooting with heuristics. Modern versions of these algorithms are progressive.
This means they are not limited by a maximum number of bounces but instead continue to converge for as long as the user allows (or until a quality condition is reached).
The version of the ``Photon Mapping'' algorithm implemented in Virtuelium is not progressive yet but is simpler to implement. It was first defined by Jensen in 1996 and has been improved since then~\cite{Jensen:1996:GIU:275458.275461, Jensen:2004:PGG:1103900.1103920}.
It decomposes the rendering process into two steps which are executed sequentially. In the pre-rendering step, the positions of the photons (light-rays launched from the light-sources) hitting objects are stored in appropriate structures (photon maps). At least two photon maps are needed, one for the global illumination itself and one for caustics. During the next step, four different contributions are evaluated, based on the fact that $L_{r}(\vec{\omega_{o}})$ can be decomposed into a sum of different integrals. First, the direct and specular contributions are computed in the same way as in ``Scanline rendering''. Then, the caustic and indirect diffuse contributions are deduced from the two photon maps. The most recent versions of Photon Mapping are nevertheless progressive~\cite{Hachisuka:2008:PPM:1409060.1409083, Hachisuka:2011:RAP:2019627.2019633}.
\section{Standard parallelization}
The most common way of parallelizing an image rendering algorithm consists in decomposing the image grid. Indeed, as each pixel can always be treated separately without any interaction, it is simple to distribute the computations of several of them over multiple CPUs for a fully asynchronous execution. However, some areas of the image probably take longer to render than others because of the heterogeneous distribution of objects, materials and light-sources in the scene.
Thus, the first optimization to bring in is a dynamic job-balancing mechanism allowing faster threads to work more and ensuring that there is no inactivity period for any of them. Something similar can be applied to the global illumination pre-rendering step, replacing the image grid by the list of light-sources. Besides, because many photons are shot in several directions from the sources, rays can also be distributed dynamically in order to maintain constant activity on every thread.
The main problem with this fully distributed solution is that the whole scene geometry must be known by each computational node. Predetermining the whole light path of a ray is obviously nearly impossible. Furthermore, each polygon in the scene can be hit several times by different rays. For these reasons, every polygon has to be copied into the memory of each thread, which is not optimal. Hybrid computing (distributed + shared memory) can be used to ensure that only a single copy exists on each computational node (and not one per thread), but the problem remains on multi-node architectures. Copying modern complex scenes with several million polygons can become a limiting factor, depending on the network bandwidth.
When dealing with full spectral rendering engines like Virtuelium, another idea which is simple to implement consists in decomposing the spectral data themselves. Whereas RGB or RGBA values have only 3 or 4 components, the spectral values used in Virtuelium are often fixed to an array of 81 scalar values covering the visible part of the wavelength range (380 to 780 nm with a sampling step of 5 nm). Since the rendering techniques described above always treat wavelengths separately, it is possible to cut the spectral array into groups of $n$ sub-values (for instance, $n$ can be equal to 3 or 4 to return to something very close to the RGB world). Only fluorescence phenomena introduce interactions between wavelengths, but this problem is solved if we group the wavelengths carefully so as to take this physical law into account. Nevertheless, the same problem remains: the scene geometry still has to be copied on every computational node.
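A sketch of this spectral splitting (assuming no fluorescence, so that
the groups are fully independent; the names are ours):
\begin{verbatim}
import numpy as np

wavelengths = np.arange(380, 781, 5)   # 81 samples, 380-780 nm, 5 nm step

def spectral_chunks(spectrum, n=3):
    # Cut the 81-component spectral array into independent groups of n
    # wavelengths; each group can be rendered by a different process.
    return [spectrum[i:i + n] for i in range(0, len(spectrum), n)]
\end{verbatim}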
The solution we propose is thus to apply the ray-tracing domain decomposition method introduced in~\cite{magoules:patent:2011}.
\section{Domain Decomposition Method}
By splitting a global domain into several small sub-domains, domain decomposition methods~\cite{SBG1996}, \cite{QV1999}, \cite{TW2005}, \cite{Jar2007} allow input data to be loaded and results to be gathered in parallel, as each sub-domain can be associated with a unique processor. This splitting can be done only once for multiple executions of the processing algorithm. That is an important advantage, particularly when considering very large models; besides, in such cases, memory allocation problems are avoided if the sub-domains are small enough.
A basic approach consists in simply splitting the set of pixels into multiple sets, one per sub-domain. In each sub-domain the light rays are shot into the whole geometry. A great part of the work is duplicated, especially the intersection detection and the loading/preprocessing of the model. Since there is absolutely no communication between the processes, this is a good candidate for largely distributed systems.
However, this method raises some load balancing issues since the processing time of each sub-domain can vary a lot. Another idea thus consists in loading sub-domains on demand, but it requires computing a hierarchical acceleration structure so that only the first levels are loaded at the start. During the traversal of this hierarchical structure, the data corresponding to a node are loaded when the node is reached: if it is an interior node, these data are the child node information; if it is a leaf, they are the corresponding mesh and material data. This system is rather complicated to implement, needs a specific pre-computed data structure, and the latency of node loading would have to be hidden. Such a method is useful in a context where the light rays launched by a process do not spread out of a specific bounded part of the model; otherwise the process would tend to load the complete geometry.
The method described in~\cite{magoules:patent:2011} takes greater advantage of some efficient domain decomposition techniques~\cite{magoules:journal-auth:4}, \cite{magoules:journal-auth:16}, \cite{magoules:journal-auth:21}. Besides the splitting of the global geometry itself, information along the interfaces is shared between the computational units which process neighboring sub-domains. A continuous approach~\cite{Des1993}, \cite{Gha1997}, \cite{CN1998}, \cite{magoules:journal-auth:28}, \cite{magoules:journal-auth:23}, \cite{magoules:journal-auth:18}, \cite{magoules:journal-auth:14} can be used to design efficient interface conditions.
Similarly, a discrete approach~\cite{magoules:journal-auth:8}, \cite{magoules:proceedings-auth:6}, \cite{magoules:journal-auth:29}, \cite{magoules:journal-auth:12}, \cite{magoules:journal-auth:20}
can be used, which may significantly increase the performance of the algorithm.
The link between the continuous and discrete interface conditions can be established as in~\cite{magoules:journal-auth:17}.
In this work, we split the geometry of the model into multiple sub-domains and base our method on the domain decomposition method of~\cite{magoules:journal-auth:24}, \cite{magoules:journal-auth:9}, \cite{magoules:journal-auth:10}, where the interface conditions ensure the continuity of the light ray properties (such as direction, amplitude, angle, etc.) from one sub-domain to another, as detailed in~\cite{magoules:patent:2011}. We only analyze the rays passing through the interfaces between sub-domains. From a processor point of view, one could replace the models of all neighboring sub-domains by simplified versions in order to be more efficient.
Yet, unlike classical domain decomposition methods, a computational unit does not process only one sub-domain. Our concern here is load balancing, which could be very bad if, for instance, there were only one light-source: in such a case, most of the workload would be held by the unit processing the sub-domain containing the light-source. This is why we use a less static load-balancing scheme. From a processor point of view, the idea is to start by loading a certain number of sub-domains according to the memory limitations. When few light rays remain to be handled in a sub-domain, this sub-domain is unloaded if there are still other, currently unhandled, sub-domains with many unprocessed rays. Then the processor starts loading one or more of these sub-domains while handling another sub-domain already available in memory. Unloading sub-domains allows most of the result gathering to be done during the processing of other rays. This overlap of gathering and processing is efficient since the gathering mainly uses the communication system. A more complete description of an efficient implementation can be found in~\cite{magoules:patent:2011}.
\section{Results and discussions}
An image rendered with Virtuelium is presented in figure~\ref{fig:virtuelium01}. Although it is not fully viewable here, the whole church model has been rendered using our new parallelization method. The model we used for representing the stained glass windows is quite particular. Indeed, instead of directly applying a texture, we used a distribution map of optical constants. In accordance with medieval Cistercian art, nearly clear glass materials were used rather than highly colored tiles. Nevertheless, with our distribution map we can create diversity between tiles, which is particularly visible on the right windows (where reflections of the architecture and of other windows are visible). The yellow edgings are also created only with the map. Distribution maps can also be used to distribute other information along the object geometry; this is the case here for the glass thicknesses. Only the refractive properties of the glass materials are simulated for now. One of our future objectives is to develop the material model in order to extend our simulations to a larger part of the physical phenomena (for example, light scattering is needed). Again, new distribution maps could be used to accurately represent the material complexity.
\begin{table}
\centering
{\small
\begin{tabular}{|l|c|c|c|c|}
\hline
& 16 & 32 & 64 & 128 \\%header
& threads & threads & threads & threads \\%header
\hline %
{1 sub-domain} & 10.6 & 16.7 & 25.4 & 20.2 \\
{2 sub-domains} & 11.9 & 22.1 & 34.3 & 45.4 \\
{4 sub-domains} & 10.4 & 22.3 & 35.1 & 50.7 \\
{8 sub-domains} & 11.2 & 24.2 & 39.8 & 66.9 \\
\hline
\end{tabular}
}
\caption{Speedup of the Virtuelium DDM program (Ethernet) with respect to the number of threads and sub-domains.}
\label{tab:ddm_virtuelium}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{|l|c|c|c|c|}
\hline
& 16 & 32 & 64 & 128 \\%header
& threads & threads & threads & threads \\%header
\hline %
{1 sub-domain} & 14.5 & 22.8 & 27.4 & 21.0 \\
{4 sub-domains} & 14.6 & 26.6 & 46.1 & 49.2 \\
{8 sub-domains} & 14.7 & 26.4 & 47.9 & 75.9 \\
\hline
\end{tabular}
}
\caption{Speedup of the acoustic DDM program (Ethernet) with respect to the number of threads and sub-domains.}
\label{tab:ddm_acoustic}
\end{table}
Speedups of the Virtuelium execution are shown in table~\ref{tab:ddm_virtuelium}. They are very close to those we obtained with the acoustic simulation software presented in table~\ref{tab:ddm_acoustic}~\cite{6636420}. Simulations were run on a hybrid (both distributed and shared memory) computational platform consisting of 4 nodes, each containing a quad-core processor (a total of 16 cores). Each node was provided with 8 gigabytes of RAM (Random Access Memory). As we expected, DDM techniques significantly improved the performance of the parallelization. Although the speedups of the acoustic simulation are somewhat better, we can notice that in both cases, going from 16 to 128 threads, the 8-sub-domain decomposition multiplied the speedup by nearly 6, while the classical parallelization only reached a factor of less than 2. On the other hand, for a fixed number of threads, the speedup keeps increasing as the number of sub-domains does.
\begin{figure}
\centering
\scalebox{1.0}{\includegraphics[width=0.45\textwidth]{./virtuelium01.pdf}}
\caption{Illustration of the image rendering in the church of Royaumont abbey (interior view)}
\label{fig:virtuelium01}
\end{figure}
\section{Conclusion}
In this paper, we proposed an original ray-tracing domain decomposition method for image rendering with natural lighting. Following the principle of domain decomposition methods, the light ray characteristics are matched as interface constraints between neighboring sub-domains. We presented a test case on a model of the church of the Royaumont abbey, where we particularly dealt with medieval glass material properties. It outlined the performance and efficiency of our method on multi-core architectures.
\section*{Acknowledgements}
The authors acknowledge the Foundation Royaumont for its help and in particular Jerome Johnson and Nathalie Le Gonidec for the helpful discussions and comments.
\section{Introduction}\label{sec:int}
Although ``invented'' around the same time as conventional calculus, fractional calculus did not attract much attention from researchers until very recently. Due to the nonlocal nature of fractional integral and differential operators, numerical schemes for solving fractional partial differential equations (FPDEs) give rise to dense stiffness matrices and/or long tails in time, or a combination of both, which results in high computational complexity and large memory requirements.
This is one of the main reasons why FPDE models have not been widely used.
However, it has been shown recently that fractional integrals and derivatives possess better modeling capabilities for describing challenging phenomena in physics, material science, biology, stochastic computation, finance, etc.; see, for example, \cite{BenWhe00b,DelCar,GW10,Mag,MeeSik,MetKla00,MetKla04,Pod,RSM}.
In particular, time-fractional partial differential equations (TFPDEs) are typically used to model subdiffusion phenomena.
Because of the fractional time derivative of the state variable in the model, the solution at a time instance $t$ is related to the solution at all times previous to $t$. Thus, the corresponding numerical schemes yield a long tail in time.
As a result, numerical simulation by classical numerical methods could become too expensive to be feasible, especially in problems requiring long-time modeling and of large scale. Hence, in terms of computational complexity and memory requirements, it is of great importance to seek efficient and reliable numerical techniques to solve the TFPDEs.
So far, there have been few publications developing fast algorithms for TFPDEs: for example, in \cite{KNS15,LPS15}, based on the block lower triangular Toeplitz matrix with tri-diagonal blocks resulting from the finite difference discretization, an approximate inversion method and a divide-and-conquer strategy are developed, respectively; a parareal algorithm combined with a spectral method is presented in \cite{XHC15}; and in \cite{ZZK16}, several second-order-in-time fast Poisson solvers for high-dimensional subdiffusion problems are proposed to reduce the computational complexity in physical space.
One of the main challenges in applying TFPDEs is to identify certain free parameters of the model.
For example, the fractional order of TFPDEs
is typically related to the fractal dimension of the media and is usually unknown {\em a priori} \cite{glockle1995fractional,MeeSik}.
The related identification process can be formulated as an inverse problem:
given some experimental data, find the parameter value by minimizing the difference, in a certain norm, between the numerical output of the TFPDE and the data.
Some research has been done in this direction:
for instance, Liu et al. \cite{CLJTB} proposed a fast finite difference scheme for identifying the fractional derivative orders of a two-dimensional (2D) space-fractional diffusion model;
Zhuang et al. \cite{ZYJ15} considered a time-fractional heat conduction problem for an experimental
heat conduction process in a 3-layer composite medium, where the time-fractional order was numerically identified by the Levenberg-Marquardt (L-M) method;
Cheng et al. \cite{CNYY} presented a theoretical proof of the uniqueness of the diffusion coefficient in an inverse problem for a one-dimensional (1D) time-fractional diffusion equation; Jin et al. studied an inverse problem of recovering a spatially varying potential term in a 1D time-fractional diffusion equation in \cite{JR12};
Wei et al. \cite{WWZ} proposed a Tikhonov regularization method for solving a backward problem of the time-fractional diffusion equation; and
a coupled method was developed to solve the inverse source problem of a spatial fractional anomalous diffusion equation in \cite{WCSL}.
Overall, tackling the inverse problem through an optimization approach involves many runs of the forward problem, which solves the TFPDE at different values of the parameters.
Since the forward problem simulation is already computationally expensive, the optimization process could become computationally prohibitive.
To overcome this issue, model reduction techniques, such as proper orthogonal decomposition (POD), the balanced truncation method, the reduced basis method and related variations, and CVT-based approaches (\cite{Ant05,burkardt2006pod,HLB96,maday2002reduced,patera2007reduced}), have great potential.
In this paper, we propose a reduced order modeling approach for TFPDEs by using the POD method and the discrete empirical interpolation method (DEIM).
The POD has been widely used in providing a computationally inexpensive, yet accurate surrogate model for large-scale simulations of PDEs (for example, \cite{bui2007goal,carlberg2011low,daescu2008dual,HLB96,iollo2000stability,KV01,SK04,LMQR14}).
The main idea of the POD is to extract a handful of optimal, global basis functions from given snapshots and obtain a reduced-order approximation on the subspace spanned by the basis set.
Since the dimension of the resulting system is low, the computational cost could be greatly reduced.
When systems involve non-polynomial nonlinearities, the DEIM could be used to further reduce the computational complexity for evaluating the nonlinear terms \cite{chaturantabut2010nonlinear}.
To our knowledge, the performance of POD/DEIM has not been well investigated in the context of FPDEs.
Thus, in this paper,
we first develop a POD/DEIM reduced-order model (ROM) for TFPDEs,
and then design a ROM-based optimization strategy for the parameter identification problem.
The rest of the paper is organized as follows. In Section \ref{sec:mod}, we present a model problem governed by TFPDEs and develop a full-order model (FOM) by using finite difference approximations.
In Section \ref{sec:pod}, we construct the POD/DEIM ROM and test its numerical performance. Several numerical experiments show that the ROM yields accurate approximations over long-time simulations; hence it provides a natural, efficient alternative model for TFPDEs in practice.
In Section \ref{sec:par}, an inverse problem for identifying the order of the fractional derivative of TFPDEs is presented, which is then formulated as an optimization problem.
Taking the POD/DEIM ROM as a surrogate, the optimization problem is then solved by an algorithm combining an L-M regularization iterative method and the Armijo rule.
We carry out numerical experiments in Section \ref{sec:num}, which demonstrate the effectiveness and
efficiency of the proposed method.
A few concluding remarks are drawn in the last section.
\section{The Full-Order Model}\label{sec:mod}
In this paper, we consider the following time-fractional diffusion-reaction partial differential equation
\begin{equation}\label{TFPDE:e1}
\left\{
\begin{array}{ll}
{}_0^C D_t^{\beta}u({\bf x},t)-\nabla\cdot(\mu({\bf x}) \nabla u({\bf x}, t)) + g(u({\bf x}, t)) = f({\bf x}, t), \quad& {\bf x} \in \Omega, ~ 0 < t \le T,\\
u({\bf x}, t) = 0, & {\bf x} \in \partial\Omega, ~ 0 \le t \le T,\\
u({\bf x}, 0) = u_0({\bf x}), & {\bf x}\in \Omega,
\end{array}
\right.
\end{equation}
where $\Omega\subset \mathbb{R}^d$ for $d= 1, 2, 3$,
${}_0^C D_t^{\beta}u$ is the Caputo fractional derivative of order $\beta$ ($0 < \beta < 1$) defined by (see \cite{Pod})
\begin{equation}\label{FDE:e2}
{}_0^C D_t^{\beta}u({\bf x},t) := \frac{1}{\Gamma(1-\beta)} \int_0^t \frac{\partial u({\bf x},s)}{\partial s}(t-s)^{-\beta} \,d s,
\end{equation}
$\mu({\bf x})$ is a diffusion coefficient that is bounded from below and above by
$$0<\mu_{\min}\le \mu({\bf x}) \le \mu_{\max}<\infty,$$
$g(u({\bf x},t))$ is a nonlinear reaction term that depends on the unknown $u({\bf x},t)$,
$f({\bf x},t)$ accounts for external sources and sinks, and $u_0({\bf x})$ is prescribed initial data.
To shorten our presentation, in the following, we consider the 1D case, i.e., $d=1$.
However, higher dimensional cases can be treated in a similar manner.
To seek a numerical solution to the TFPDE (\ref{TFPDE:e1}), we use a finite difference scheme.
The time interval $I := [0, T]$ is divided into $M$ equal subintervals with the time step $\Delta t = \frac{T}{M}$.
The spatial domain $\Omega:=[a, b]$ is partitioned uniformly with the mesh size $h = \frac{b - a}{N+1}$, where $N$ is the number of interior grids.
Denote by $u_i^m$ the finite difference approximation to $u(x_i, t_m)$, where $x_i = a + i h$ for $0\leq i\leq N+1$ and $t_m = m \Delta t$ for $m =0, 1, \ldots, M$.
We define $\mu_{i+\frac{1}{2}} :=\mu(x_{i+\frac{1}{2}})$, introduce $F(u, x, t):= g(u(x,t))-f(x, t)$ and let $F_i^m := F(u_i^m, x_i, t_m)$.
As pointed out in \cite{LinXu}, the Caputo fractional derivative (\ref{FDE:e2}) can be approximated by the $L1$ scheme as follows:
\begin{equation}\label{FODE:e4}
{}_0^C D_t^{\beta}u(x_i,t_{m}) = \frac{1}{\Gamma(2-\beta)}\sum_{j=0}^{m-1} b_j \frac{u_i^{m-j}-u_i^{m-j-1}}{\Delta t^\beta}
+\mathcal{O}(\Delta t^{2-\beta}),
\end{equation}
where
$b_{j}=(j+1)^{1-\beta}-j^{1-\beta}$ for $j=0,1,\cdots, m-1$ with the following properties:
$b_{j}>0$, $1=b_0>b_1>\cdots>b_m, ~b_m\rightarrow 0 ~ \textrm{as}~ m\rightarrow \infty$, and
$\sum_{j=0}^{m-1} (b_j-b_{j+1})+b_{m}=1$.
Indeed, other methods such as the Gr\"{u}nwald-Letnikov scheme can also be used here to approximate the Caputo fractional time derivative; the proposed reduced-order modeling naturally extends to them.
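The weights $b_j$ above are straightforward to generate; a minimal sketch:
\begin{verbatim}
import numpy as np

def l1_weights(m, beta):
    # b_j = (j+1)**(1-beta) - j**(1-beta), j = 0, ..., m-1, used in
    # the L1 approximation; b_0 = 1 and the sequence decreases to zero.
    j = np.arange(m, dtype=float)
    return (j + 1.0)**(1.0 - beta) - j**(1.0 - beta)
\end{verbatim}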
Meanwhile, the 1D diffusion operator
in (\ref{TFPDE:e1}) can be approximated by the standard centered-difference scheme
\begin{equation}\label{SCD}
\begin{split}
\frac{\partial}{\partial x} \left(\mu \frac{\partial u}{\partial x}\right)\bigg|_{\scriptsize \begin{array}{c}x=x_i\\t=t_m\end{array}}=\frac{\mu_{i+\frac{1}{2}}u_{i+1}^m-(\mu_{i+\frac{1}{2}}+\mu_{i-\frac{1}{2}})u_{i}^m+ \mu_{i-\frac{1}{2}}u_{i-1}^m}{h^2}+ \mathcal{O}(h^2).
\end{split}
\end{equation}
Substituting the approximations (\ref{FODE:e4})-(\ref{SCD}) into (\ref{TFPDE:e1}), we get
\begin{equation}\label{TFPDE:e2}
\begin{split}
\frac{1}{\Gamma(2-\beta)}\sum_{j=0}^{m-1} b_j \frac {u_i^{m-j}-u_i^{m-j-1}}{\Delta t^\beta}
&-\frac{\mu_{i+\frac{1}{2}} u_{i+1}^m-(\mu_{i+\frac{1}{2}}+\mu_{i-\frac{1}{2}})u_{i}^m + \mu_{i-\frac{1}{2}}u_{i-1}^m}{h^2} \\
& +F_i^m = 0.
\end{split}
\end{equation}
Denote $\gamma:=\Delta t^\beta \Gamma(2-\beta)$ and $\eta_{i+\frac{1}{2}}:=\mu_{i+\frac{1}{2}} /h^2$, then (\ref{TFPDE:e2}) can be rewritten as, for $i = 1,\cdots,N$ and $m =1, \cdots, M$,
\begin{eqnarray}\label{TFPDE:e3}
-\eta_{i-\frac{1}{2}} \gamma u_{i-1}^m
+\left(1+ \eta_{i-\frac{1}{2}} \gamma + \eta_{i+\frac{1}{2}}\gamma \right)u_{i}^m
&-&\eta_{i+\frac{1}{2}} \gamma u_{i+1}^m + \gamma F_i^m \nonumber \\
&=&\sum_{j=1}^{m-1} (b_{j-1}-b_j) u_i^{m-j} + b_{m-1} u_i^{0}
\end{eqnarray}
with
$$u_{0}^m=u_{N+1}^m=0,\quad u_i^{0}=u_0(x_i).$$
Letting ${\bf u}^{m}= [u_1^m, u_2^m, \cdots, u_{N}^m]^\top$ and
$\mathbf{F}^{m}=[F_1^m, F_2^m, \cdots, F_N^m]^\top$,
we can write the finite difference scheme (\ref{TFPDE:e3}) in the following matrix-vector formulation:
\begin{equation}\label{TFPDE:e5}
\left(\mathbf{I}_{N} + \gamma\mathbf{A}\right){\bf u}^{m} +\gamma \mathbf{F}^{\,m}
= \sum_{j=1}^{m-1} (b_{j-1}-b_j){\bf u}^{m-j} + b_{m-1}{\bf u}^{0},
\end{equation}
where $\mathbf{I}_{N}$ is the identity matrix of order $N$, and $\mathbf{A}$ is a tri-diagonal stiffness matrix of order $N$ such that
\begin{equation}\label{TFPDE:e6}
\begin{split}
\mathbf{A}&=
\left[\begin{array}{ccccc}
\eta_{\frac{1}{2}}+\eta_{\frac{3}{2}} & -\eta_{\frac{3}{2}} \\%[4pt]
-\eta_{\frac{3}{2}} & \eta_{\frac{3}{2}}+\eta_{\frac{5}{2}} & -\eta_{\frac{5}{2}} \\%[4pt]
& \ddots & \ddots & \ddots \\%[4pt]
& & -\eta_{N-\frac{3}{2}} & \eta_{N-\frac{3}{2}}+\eta_{N-\frac{1}{2}} & -\eta_{N-\frac{1}{2}} \\%[4pt]
& & & -\eta_{N-\frac{1}{2}} & \eta_{N-\frac{1}{2}}+\eta_{N+\frac{1}{2}}
\end{array}\right].
\end{split}
\end{equation}
When $g(u)= 0$, the system (\ref{TFPDE:e5}), referred to as the FOM, is
a tri-diagonal linear system of order $N$.
It can be solved directly by the Thomas algorithm in $\mathcal{O}(N)$ flops per time step.
Since the L1 history sum at step $m$ involves all previous solutions, the total computational complexity of the full-order simulation is $\mathcal{O}(M^2N)$ flops.
The required memory storage is $\mathcal{O}(MN)$ due to the nonlocal property of the time-fractional derivative.
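For concreteness, the following is a minimal Python sketch of this linear time stepping; the grid sizes, the diffusion coefficient \texttt{mu}, the source \texttt{f}, and the initial datum \texttt{u0} in the commented usage line are illustrative assumptions, not the settings of any particular experiment below.
\begin{verbatim}
import math
import numpy as np

def l1_weights(m, beta):
    # b_j = (j+1)^(1-beta) - j^(1-beta), j = 0, ..., m-1
    j = np.arange(m, dtype=float)
    return (j + 1.0) ** (1.0 - beta) - j ** (1.0 - beta)

def thomas(lo, di, up, rhs):
    # O(N) solve of a tridiagonal system (no pivoting)
    n = di.size
    d = di.astype(float).copy()
    r = rhs.astype(float).copy()
    for i in range(1, n):
        w = lo[i - 1] / d[i - 1]
        d[i] -= w * up[i - 1]
        r[i] -= w * r[i - 1]
    x = np.empty(n)
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - up[i] * x[i + 1]) / d[i]
    return x

def solve_fom_linear(beta, N, M, T, mu, f, u0, a=0.0, b=1.0):
    h = (b - a) / (N + 1)
    dt = T / M
    x = a + h * np.arange(1, N + 1)
    # eta_{i+1/2} = mu(x_{i+1/2}) / h^2 for i = 0, ..., N
    eta = mu(a + h * (np.arange(N + 1) + 0.5)) / h ** 2
    gam = dt ** beta * math.gamma(2.0 - beta)
    lo = -gam * eta[1:N]                  # sub-diagonal of I + gam*A
    up = -gam * eta[1:N]                  # super-diagonal (A is symmetric)
    di = 1.0 + gam * (eta[:N] + eta[1:])  # main diagonal
    U = [u0(x)]                           # full history, needed by the L1 sum
    for m in range(1, M + 1):
        w = l1_weights(m, beta)
        rhs = w[m - 1] * U[0] + gam * f(x, m * dt)  # F^m = -f when g = 0
        for j in range(1, m):
            rhs += (w[j - 1] - w[j]) * U[m - j]
        U.append(thomas(lo, di, up, rhs))
    return x, np.array(U)

# Hypothetical usage with manufactured data:
# x, U = solve_fom_linear(0.5, N=63, M=64, T=1.0,
#                         mu=lambda x: 1.0 + x,
#                         f=lambda x, t: 0.0 * x,
#                         u0=lambda x: np.sin(np.pi * x))
\end{verbatim}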
When $g(u)\neq 0$, the system is nonlinear. To find a solution, we apply the Gauss-Newton iterative method at each time step.
The Jacobian of the system \eqref{TFPDE:e5} is
\begin{equation}\label{TFPDE:e7}
\begin{split}
\mathbf{J}({\bf u}^{m}):= \mathbf{I}_N + \gamma\mathbf{A} + \gamma \mathbf{D}_\mathbf{F}({\bf u}^{m}),
\end{split}
\end{equation}
where $\mathbf{D}_\mathbf{F}({\bf u}^{m})$ is a diagonal matrix given by
\begin{equation}\label{TFPDE:e8}
\begin{split}
\mathbf{D}_\mathbf{F}({\bf u}^{m}):= \textsl{diag}\{F'(u_1^{m}), F'(u_2^{m}),\ldots, F'(u_N^{m})\} \in \mathbb{R}^{N\times N}
\end{split}
\end{equation}
and $F'=\frac{\partial F}{\partial u}$.
Denote
\begin{equation}
\begin{split}
\mathbf{r}_{(l)}^{m}:=\left(\mathbf{I}_N + \gamma\mathbf{A}\right){\bf u}_{(l)}^{m} + \gamma \mathbf{F}({\bf u}_{(l)}^{m}) - \sum_{j=1}^{m-1} (b_{j-1}-b_j){\bf u}^{m-j} - b_{m-1}{\bf u}^{0},
\end{split}
\end{equation}
the Gauss-Newton method finds the search step $\mathbf{d}_l $ at the $l$-th iteration satisfying
\begin{equation}\label{TFPDE:e9}
\mathbf{J}\left({\bf u}_{(l)}^{m}\right) \mathbf{d}_l = -\mathbf{r}_{(l)}^{m}
\end{equation}
and updates the approximation
$$ {\bf u}_{(l+1)}^{m}={\bf u}_{(l)}^{m}+ \mathbf{d}_l $$
until a prescribed tolerance is satisfied.
Note that the linearized system \eqref{TFPDE:e9} is a tri-diagonal system of order $N$, which can also be solved by the Thomas algorithm in $\mathcal{O}(N)$ flops per iteration. Thus, the computational complexity for the full-order simulation is $\mathcal{O}(M^2NK)$ flops, where $K$ is the total number of Newton iterations used in the simulation.
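To complement the linear sketch above, here is the corresponding Newton update at a single time step; it reuses \texttt{thomas} and the diagonals \texttt{lo}, \texttt{di}, \texttt{up} from the previous sketch, and the nonlinearity \texttt{g}, its derivative \texttt{dg}, and the stopping tolerance are illustrative assumptions.
\begin{verbatim}
def newton_step(u_init, hist_rhs, gam, lo, di, up, x, t, f, g, dg,
                tol=1e-10, max_iter=20):
    # Solve (I + gam*A) u + gam*(g(u) - f(x,t)) = hist_rhs for u.
    # lo/di/up are the diagonals of I + gam*A, as in solve_fom_linear.
    u = u_init.copy()
    for _ in range(max_iter):
        Au = di * u                       # (I + gam*A) u, tridiagonal product
        Au[:-1] += up * u[1:]
        Au[1:] += lo * u[:-1]
        r = Au + gam * (g(u) - f(x, t)) - hist_rhs
        if np.linalg.norm(r) < tol:
            break
        # The Jacobian differs from I + gam*A only on the diagonal.
        d = thomas(lo, di + gam * dg(u), up, -r)
        u += d
    return u
\end{verbatim}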
\section{The POD/DEIM Reduced-Order Model}\label{sec:pod}
For the purpose of real-time control or optimization, the full-order model (\ref{TFPDE:e5}) needs to be simulated many times at different values of control inputs or parameters.
To obtain an efficient yet reliable surrogate model, we develop a POD reduced-order model for the TFPDEs in this section.
\subsection{The POD Method}
Let the space $L^2(\Omega)$ be endowed with the inner product $(\cdot, \cdot)$ and the norm $\|\cdot\|_{0}$.
Assume that the data $\mathcal{V}$ (the so-called snapshots) is a collection of time-varying functions $u(x, t) \in L^2(0, T; L^2(\Omega))$; the POD method then seeks a low-dimensional basis, $\varphi_1(x), \ldots, \varphi_r(x) \in L^2(\Omega)$, that optimally approximates the data.
Mathematically, for any positive $r$, the POD basis is determined by minimizing the error between the data and its projection onto the basis, that is,
\begin{equation}
\min_{ \{\varphi_j\}_{j=1}^r }
\int_0^T
\Big\| u(\cdot, t) -
\sum_{j=1}^r \left( u(\cdot, t), \varphi_j(\cdot) \right) \, \varphi_j(\cdot)
\Big\|_{0}^2\, d t,
\label{pod_min}
\end{equation}
subject to the conditions that $(\varphi_i, \varphi_j) = \delta_{ij}, \ 1 \leq i, j \leq r$, where $\delta_{ij}$ is the Kronecker delta.
This is equivalent to finding the basis function $\varphi(x)$ that maximizes the ensemble average of the inner product between $u(x, t)$ and $\varphi(x)$:
\begin{equation}
\max
\int_0^T
\left|\left( u(\cdot, t), \varphi(\cdot) \right) \right|^2\, d t \quad \text{ s.t. }\quad \|\varphi\|_{0}^2= 1.
\label{pod_max}
\end{equation}
In the context of the calculus of variations, the functional of this constrained variational problem is
\begin{equation}
J[\varphi] = \int_0^T
\left|\left( u(\cdot, t), \varphi(\cdot) \right) \right|^2\, d t - \lambda(\|\varphi\|_{0}^2-1)
\label{pod_func}
\end{equation}
and a necessary condition for extrema is that the functional derivative vanishes for all admissible variations
$\psi(x)\in L^2(\Omega)$ and any $\epsilon \in \mathbb{R}$:
\begin{equation}
\frac{d}{d\epsilon}J[\varphi+\epsilon \psi]\Big|_{\epsilon=0} = 0.
\label{pod_func_der}
\end{equation}
It can be shown that the POD basis $\{\varphi_1, \ldots, \varphi_r\}$ is the first $r$ dominant eigenfunctions of the integral equation
\begin{equation}
\int_{\Omega}R(x, x') \varphi(x')\, dx' = \lambda \varphi(x),
\label{pod_corr}
\end{equation}
where the kernel is the averaged autocorrelation $R(x, x')= \int_0^T u(x, t)u^*(x', t)\,dt$.
For more details on POD, the reader is referred to \cite{HLB96}.
Once the POD basis functions are obtained, the state variable $u(x, t)$ can be approximated by
$$u_r(x, t) = \sum_{i=1}^r a_i(t) \varphi_i(x) = \boldsymbol{\varphi}(x) {\bf a}(t),$$
where $\boldsymbol{\varphi}(x)= [\varphi_1(x), \varphi_2(x), \ldots, \varphi_r(x)]$ and ${\bf a}(t)= [a_1(t), a_2(t), \ldots, a_r(t)]^\top$.
By substituting $u_r$ into the equation (\ref{TFPDE:e1}), we get a reduced-order approximation
\begin{equation}\label{FODE_rom}
{}_0^C D_t^{\beta}\boldsymbol{\varphi}(x) {\bf a}(t) -\nabla\cdot(\mu(x) \nabla \boldsymbol{\varphi}(x) ){\bf a}(t) + F(\boldsymbol{\varphi}(x){\bf a}(t), x, t) = 0,
\end{equation}
where $F(\boldsymbol{\varphi}(x){\bf a}(t), x, t)=g\left(\boldsymbol{\varphi}(x){\bf a}(t)\right) - f(x, t)$ and ${\bf a}(0)= (u_0(x), \boldsymbol{\varphi}(x))$.
\begin{remark}
In numerical simulations we work with the finite-dimensional case, in which the snapshots are collected in the matrix ${\bf U}= [{\bf u}_1, \ldots, {\bf u}_{n_s}]\in \mathbb{R}^{N\times n_s}$.
The $j$-th column of ${\bf U}$ is the trajectory ${\bf u}_j$ at a particular time instance $t_j$ and at certain parameter values.
Then the POD method seeks a low-dimensional basis by minimizing the mean square error in $2$-norm between the snapshot data and its projection onto the basis, that is,
\begin{equation}
\min_{Rank({\boldsymbol \Phi})=r}
\sum_{j=1}^{n_s}
\Big\| {\bf u}_j -
{\boldsymbol \Phi}\bPhi^\top {\bf u}_j
\Big\|^2
\qquad
s.t.
\qquad
{\boldsymbol \Phi}^\top{\boldsymbol \Phi}= {\bf I}_r,
\label{pod_min_d}
\end{equation}
where the POD basis matrix ${\boldsymbol \Phi}=[{\boldsymbol \phi}_1, \ldots, {\boldsymbol \phi}_r]\in \mathbb{R}^{N\times r}$ and ${\bf I}_r$ is an $r\times r$ identity matrix.
The POD basis is typically the first $r$ left singular vectors of the snapshot matrix ${\bf U}$.
Assuming the associated $i$-th dominant singular value is $\sigma_i$, the POD truncation error satisfies
\begin{equation}
\sum_{j=1}^{n_s}
\Big\| {\bf u}_j -
{\boldsymbol \Phi}\bPhi^\top {\bf u}_j
\Big\|^2
= \sum_{i=r+1}^d \sigma_i^2,
\label{pod_min_err}
\end{equation}
where $d$ is the rank of the snapshot matrix ${\bf U}$.
\end{remark}
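In this discrete setting the POD basis is nothing but a truncated SVD of the snapshot matrix, and the identity \eqref{pod_min_err} can be checked directly. A minimal sketch, in which the random snapshot matrix is only a placeholder for actual simulation data:
\begin{verbatim}
import numpy as np

def pod_basis(U, r):
    # First r left singular vectors of the snapshot matrix U.
    Phi, sigma, _ = np.linalg.svd(U, full_matrices=False)
    return Phi[:, :r], sigma

rng = np.random.default_rng(0)
U = rng.standard_normal((63, 100))   # placeholder: N = 63, n_s = 100
Phi, sigma = pod_basis(U, r=4)

# Projection error equals the tail sum of squared singular values.
err = np.linalg.norm(U - Phi @ (Phi.T @ U), 'fro') ** 2
print(np.isclose(err, np.sum(sigma[4:] ** 2)))   # True
\end{verbatim}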
\subsection{The DEIM Approximation}
Because the nonlinear term in ROMs needs to be evaluated at all the grid points, the computational complexity of the reduced-order simulation still depends on the total number of degrees of freedom. Therefore, the discrete empirical interpolation method was developed to reduce such computational cost \cite{chaturantabut2010nonlinear}. It has been successfully applied in many nonlinear ROMs \cite{chaturantabut2011application,chaturantabut2012state,chaturantabut2010nonlinear,cstefuanescu2012pod,wang2015}.
In general, it employs the following ansatz on a nonlinear function $F(u(x, t))$:
\begin{equation}
F(u(x, t)) = \sum\limits_{j=1}^{s} \psi_j(x) c_j(t),
\end{equation}
where $\psi_j(x)$ is the $j$-th nonlinear POD basis obtained by applying the POD method on the nonlinear snapshots.
Define the nonlinear POD basis vectors ${\bf \Psi} = [{\boldsymbol \psi}_1, \ldots, {\boldsymbol \psi}_{s}]\in \mathbb{R}^{N\times s}$,
the DEIM greedily selects a set of interpolation points $\wp := [\wp_1, \ldots, \wp_s]^{\intercal}$ as shown in Algorithm \ref{alg: DEIM}, where $e_{\wp_i}$ is the $\wp_i$-th column of the identity matrix.
The DEIM approximation of the nonlinear term
$${F}({\bf u})=[F(u(x_1,t)), F(u(x_2,t)), \ldots, F(u(x_N,t))]^\top$$
is given by
\begin{equation}
{\bf F}_s = {\bf \Psi}({\bf P}^\intercal {\bf \Psi})^{-1} {\bf P}^\intercal {F}({\bf u}),
\label{eq:deim}
\end{equation}
where ${\bf P} = [e_{\wp_1}, \ldots, e_{\wp_s}]\in \mathbb{R}^{N\times s}$ is the matrix for selecting the corresponding $s$ indices $\wp_1, \ldots, \wp_s$.
For a detailed description of the DEIM method, the reader is referred to \cite{chaturantabut2010nonlinear}.
\begin{algorithm}
\label{alg: DEIM}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{DEIM}\label{alg: DEIM, selection of interpolation points}
\vspace{.3cm}
\Input{$\{{\boldsymbol \psi}_{\ell}\}_{\ell=1}^{s} \subset \mathbb{R}^{N}$ linearly independent}
\Output{$\wp = [\wp_1, \ldots, \wp_s]^{\intercal} \in \mathbb{R}^s$}
$[|\rho|,\, \wp_1] = \max\{|{\boldsymbol \psi}_1|\}$\;
${\boldsymbol \Psi} = [{\boldsymbol \psi}_1], {\bf P} = [{\bf e}_{\wp_1}], \wp = [\wp_1]$\;
\For{$\ell = 2$ \KwTo $s$}{
Solve $({\bf P}^{\intercal} {\boldsymbol \Psi}){\bf c} = {\bf P}^{\intercal} {\boldsymbol \psi}_{\ell}$ for $\bf c$ \;
${\bf r}={\boldsymbol \psi}_{\ell}- {\boldsymbol \Psi} {\bf c}$\;
$\left[ |\rho|, \wp_{\ell} \right] = \max\{|{\bf r}|\}$\;
${\boldsymbol \Psi} \leftarrow [{\boldsymbol \Psi} \quad {\boldsymbol \psi}_{\ell}], {\bf P}\leftarrow [{\bf P}\quad {\bf e}_{\wp_{\ell}}], \wp \leftarrow
\left[\begin{array}{c} \wp \\ \wp_{\ell}\end{array}\right]$\;
}
\end{algorithm}
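A direct Python transcription of Algorithm \ref{alg: DEIM} is sketched below; note that ${\bf P}^\intercal {\bf \Psi}$ is simply a row selection, so the matrix ${\bf P}$ is never formed explicitly.
\begin{verbatim}
import numpy as np

def deim(Psi):
    # Greedy selection of s interpolation points for the columns of Psi.
    N, s = Psi.shape
    pts = [int(np.argmax(np.abs(Psi[:, 0])))]
    for l in range(1, s):
        # Interpolate psi_l at the current points with the current basis.
        c = np.linalg.solve(Psi[pts, :l], Psi[pts, l])
        r = Psi[:, l] - Psi[:, :l] @ c
        pts.append(int(np.argmax(np.abs(r))))
    return np.array(pts)

# The DEIM approximation of a nonlinear vector F then reads
#   F_s = Psi @ np.linalg.solve(Psi[pts, :], F[pts]).
\end{verbatim}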
\subsection{The POD/DEIM ROM}
In what follows, we consider a full discretization of the POD/DEIM ROM and regard the fractional order $\beta$ of the time derivative as a parameter belonging to the domain $[\underline{\beta}, \overline{\beta}] \subset (0,1)$.
To construct a discrete ROM, we first select several representative samples $\beta_1, \cdots, \beta_k$ in the parameter space and solve the corresponding full-order models respectively.
For example, we choose the samples uniformly in the parameter space and use the same grid for the spatial discretization in all the full-order simulations.
The snapshot set is then composed of the corresponding numerical solutions at selected time instances.
Depending on the choice of time integration in each simulation, the number of snapshots for parameter $\beta_j$ may differ.
Let $M_j$ be the number of snapshots for the parameter $\beta_j$, and
denote by ${\bf u}^{m, \beta_j}$ the vector of values of $u(\cdot, t_m)$ for $m= 1, \ldots, M_j$.
Let the snapshot matrix
$$\mathfrak{U}=[{\bf u}^{1, \beta_1}, {\bf u}^{2, \beta_1}, \ldots, {\bf u}^{M_1, \beta_1},
\ldots,
{\bf u}^{1, \beta_k}, {\bf u}^{2, \beta_k}, \ldots, {\bf u}^{M_k, \beta_k}],$$
and the nonlinear snapshot matrix
\begin{equation*}
\begin{split}
\mathfrak{F}:=\Big[F\left({\bf u}^{1, \beta_1}\right), F\left({\bf u}^{2, \beta_1}\right), \ldots, F\left({\bf u}^{M_1, \beta_1}\right),
\ldots,
\\
F\left({\bf u}^{1, \beta_k}\right), F\left({\bf u}^{2, \beta_k}\right), \ldots, F\left({\bf u}^{M_k, \beta_k}\right)
\Big].
\end{split}
\end{equation*}
Correspondingly, the POD basis matrix ${\boldsymbol \Phi}\in \mathbb{R}^{N\times r}$
and
the nonlinear POD basis matrix ${\boldsymbol \Psi}\in \mathbb{R}^{N\times s}$ are computed from $\mathfrak{U}$ and $\mathfrak{F}$, respectively.
We use the same symbol ${\bf a}$ to denote the unknown POD basis coefficient ${\bf a}(t)=[a_1(t), \ldots, a_r(t)]^\top$, then the POD approximation ${\bf u}_r(t)= {\boldsymbol \Phi} {\bf a}(t)$.
With the same numerical discretization as (\ref{TFPDE:e5}), we
use the POD method and the DEIM approximation \eqref{eq:deim}, and construct the POD/DEIM ROM as follows.
\begin{equation}\label{TFPDE:rom}
\begin{aligned}
\left( \mathbf{I}_N + \gamma \mathbf{A}\right)\,{\boldsymbol \Phi} {\bf a}^{m}
&+ \gamma \, {\bf \Psi}({\bf P}^\intercal {\bf \Psi})^{-1} {\bf P}^\intercal {F}({\boldsymbol \Phi} {\bf a}^{m})\\
&= \sum_{j=1}^{m-1} (b_{j-1}-b_j) {\boldsymbol \Phi} {\bf a}^{m-j} + b_{m-1} {\boldsymbol \Phi} {\bf a}^{0},
\end{aligned}
\end{equation}
where ${\bf a}^m:={\bf a}(t_m).$
Multiplying both sides of the above equation by ${\boldsymbol \Phi}^\top$
and using ${\boldsymbol \Phi}^\top {\boldsymbol \Phi}= \mathbf{I}_{r}$,
we have the following Galerkin projection-based POD/DEIM ROM
\begin{equation}\label{TFPDE:rom3}
\begin{aligned}
\left( \mathbf{I}_r + \gamma\, {\boldsymbol \Phi}^\intercal \mathbf{A}\,{\boldsymbol \Phi}\right) {\bf a}^{m}
&+ \gamma\, {\boldsymbol \Phi}^\intercal{\bf \Psi}({\bf P}^\intercal {\bf \Psi})^{-1} {\bf P}^\intercal {F}({\boldsymbol \Phi} {\bf a}^{m})
\\
&= \sum_{j=1}^{m-1} (b_{j-1}-b_j) {\bf a}^{m-j} + b_{m-1} {\bf a}^{0},
\end{aligned}
\end{equation}
for $m =1, \cdots, M$ and initial condition
${\bf a}^0= {\boldsymbol \Phi}^\intercal {\bf u}^0$.
The Gauss-Newton iterative method can also be used to solve the POD/DEIM ROM \eqref{TFPDE:rom3} for ${\bf a}^m$.
The Jacobian matrix of the ROM reads
\begin{equation}\label{TFPDE:rom4}
\begin{split}
\mathbf{\tilde{J}}({\bf a}^{m}):= \mathbf{I}_r + \gamma\, {\boldsymbol \Phi}^\intercal \mathbf{A}\,{\boldsymbol \Phi} + \gamma\, {\boldsymbol \Phi}^\intercal{\bf \Psi}({\bf P}^\intercal {\bf \Psi})^{-1} {\bf P}^\intercal \mathbf{\widetilde{D}}_\mathbf{F}({\boldsymbol \Phi} {\bf a}^{m}),
\end{split}
\end{equation}
where
$\mathbf{\widetilde{D}}_\mathbf{F}({\boldsymbol \Phi} {\bf a}^{m}):= \textsl{diag}\{F'_1, F'_2,\ldots, F'_N\}\,{\boldsymbol \Phi} \in \mathbb{R}^{N\times r}$
with $F'_j= \frac{\partial F}{\partial u} (\sum_{i=1}^r ({\boldsymbol \phi}_i)_{j} a_i^{m} )$.
Denote
\begin{equation}
\begin{aligned}
\mathbf{\tilde{r}}_{(l)}^{m}:=( \mathbf{I}_r + \gamma\, {\boldsymbol \Phi}^\intercal \mathbf{A}\,{\boldsymbol \Phi}) {\bf a}_{(l)}^{m}
&+ \gamma\, {\boldsymbol \Phi}^\intercal{\bf \Psi}({\bf P}^\intercal {\bf \Psi})^{-1} {\bf P}^\intercal {F}({\boldsymbol \Phi} {\bf a}_{(l)}^{m})
\\
&- \sum_{j=1}^{m-1} (b_{j-1}-b_j) {\bf a}^{m-j} - b_{m-1} {\bf a}^{0}.
\end{aligned}
\end{equation}
At the $l$-th iteration, the Gauss-Newton algorithm finds the step $\mathbf{\tilde{d}}_{(l)}$ and updates the solution ${\bf a}_{(l+1)}^{m}$ as follows:
\begin{equation}\label{TFPDE:rom6}
\left\{
\begin{aligned}
&\mathbf{\tilde{J}} \left( {{\bf a}_{(l)}^{m}} \right) \mathbf{\tilde{d}}_{(l)} = -\mathbf{\tilde{r}}_{(l)}^{m},\\
&{\bf a}_{(l+1)}^{m}={\bf a}_{(l)}^{m}+ \mathbf{\tilde{d}}_{(l)}.
\end{aligned}
\right.
\end{equation}
For each iteration, it takes $\mathcal{O}(r^3 + rs + mr)$ flops to solve (\ref{TFPDE:rom6}).
The simulation requires a total memory storage of $\mathcal{O}(Mr+s)$.
Compared with the FOM, the POD/DEIM ROM \eqref{TFPDE:rom3} is computationally much more attractive since $r, s\ll N$, especially for problems requiring repeated large-scale simulations in control and optimization applications.
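The claimed independence of the online cost from $N$ comes from contracting every $N$-dimensional quantity once, offline. The following sketch assumes the full stiffness matrix \texttt{A}, the bases \texttt{Phi} and \texttt{Psi}, the DEIM points \texttt{pts}, the pointwise nonlinearity \texttt{g}, and the source sampled at the DEIM points \texttt{f\_pts} are available from the previous steps:
\begin{verbatim}
import numpy as np

def build_rom_operators(A, Phi, Psi, pts):
    # Offline: r x r and r x s quantities are formed once and stored.
    Ar = Phi.T @ (A @ Phi)                         # reduced stiffness
    E  = Phi.T @ Psi @ np.linalg.inv(Psi[pts, :])  # Phi^T Psi (P^T Psi)^{-1}
    Pr = Phi[pts, :]                               # rows of Phi at the points
    return Ar, E, Pr

def reduced_F(a, t, E, Pr, g, f_pts):
    # Online: O(sr) flops, independent of N.
    u_pts = Pr @ a            # P^T (Phi a): state at the s DEIM points only
    return E @ (g(u_pts) - f_pts(t))
\end{verbatim}
In practice one would solve against \texttt{Psi[pts, :]} rather than form the explicit inverse; the inverse is kept here only to mirror \eqref{eq:deim}.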
\subsection{Verification of ROMs}
The goal of this subsection is to test the numerical performance of the reduced-order model for the TFPDEs.
Both linear and nonlinear equations are considered.
The error at the final time in the discrete $L^2$ norm is used as the criterion; that is, for any $u$ and $v$,
\begin{equation}\label{error:e1}
\|u- v\|_{L^2} := \Big (\sum_{i=1}^{N} h \big | u(x_i) - v(x_i) \big |^2 \Big )^{1/2}.
\end{equation}
For cases in which the exact solution $u$ is known, we compare the full-order approximation error, $\|u-u_{h}\|$, with the reduced-order approximation error, $\|u-u_{h, r}\|$.
For cases in which the exact solution is unknown, we compare the difference between the full-order and reduced-order solutions, $\|u_h-u_{h, r}\|$.
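The criterion \eqref{error:e1} is a one-line function; a sketch, where \texttt{h} is the grid spacing of the corresponding experiment:
\begin{verbatim}
def l2_err(u, v, h):
    # Discrete L2 norm: (sum_i h |u_i - v_i|^2)^(1/2)
    return np.sqrt(h * np.sum((u - v) ** 2))
\end{verbatim}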
\paragraph{Test I.} In this test, we consider the 1D linear TFPDEs with $g(u)=0$, $\mu(x)=1+x$, and the exact solution depends on the parameter
$\beta$ that is given by
$$u(x,t)=t^{1+\beta} \sin (\pi x)\quad \text{ on } [0,1]\times [0,T].$$
The corresponding source term is
\begin{equation}\label{test:e1}
\begin{aligned}
f(x,t)=\frac{\Gamma(2+\beta)}{\Gamma(2)} t \sin (\pi x)+ t^{1+\beta}[(1+x) \pi^2 \sin (\pi x)-\pi \cos(\pi x)].
\end{aligned}
\end{equation}
Assume a prescribed range of the parameter $\beta \in (0, 1)$.
To construct the ROM, we first solve the FOM at several sampling parameters.
We uniformly select $\beta=0.2, 0.4, 0.6, 0.8$ for simplicity.
In these simulations, mesh size $h$ and time step $\Delta t$ are taken as $1/64$.
The obtained solutions are collected as snapshots and the POD basis functions are obtained correspondingly.
The first four basis functions are shown in Figure \ref{fig:test-3}.
These basis functions are then used to derive the $r$-dimensional ROM \eqref{TFPDE:rom3}.
Note that the ROM is linear since $g=0$.
It is observed that $r=2$ yields accurate reduced-order approximations.
\begin{figure}[htp]
\centering
\includegraphics[width=.5\textwidth]{./test-3.eps}
\caption{The first four POD basis functions in {\it Test I}. }
\label{fig:test-3}
\end{figure}
The numerical performance of $r$-dimensional ROMs is investigated at different values of $\beta$, including both the samples and non-sample points.
The numerical errors at $t=1$ of the FOM, $\|u-u_h\|$, and of the 2-dimensional ROM, $\|u-u_{h, 2}\|$, are listed in Table \ref{tab:test2T1}.
It is observed that the reduced-order solutions achieve the same accuracy as that of the FOM; and the reduced-order approximations have the same order of accuracy at all the tested parameter values.
To study the long-term behavior of the ROM, we change the final time to $T=10$.
The results at $t=10$ are listed in Table \ref{tab:test2T10}, which shows that the ROM is also competitive even for long time modeling.
\begin{table}[htp]
\begin{center}
\caption{Error comparison of FOM and ROM at $t=1$ for different $\beta$ in {\it Test I}.}
\label{tab:test2T1}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c|} \hline
$\beta$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9\\ \hline
$\|u-u_h\|$ & --- & 1.31E-4 & --- & 1.36E-4 & --- & 1.66E-4 & --- & 3.07E-4 & --- \\
$\|u-u_{h, 2}\|$ & 1.31E-4 & 1.31E-4 & 1.32E-4 & 1.36E-4 & 1.45E-4 & 1.66E-4 & 2.12E-4 & 3.07E-4 & 5.02E-4\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htp]
\begin{center}
\caption{Error comparison of FOM and ROM at $t=10$ for different $\beta$ in {\it Test I}.}
\label{tab:test2T10}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c |} \hline
$\beta$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline
$\|u-u_h\|$ & --- & 2.12E-3 & --- & 3.40E-3 & --- & 5.46E-3 & --- & 8.79E-3& --- \\
$\|u-u_{h, 2}\|$ & 1.67E-3 & 2.12E-3 & 2.69E-3 & 3.40E-3 & 4.31E-3 & 5.46E-3 & 6.92E-3 & 8.79E-3 & 1.13E-2\\
\hline
\end{tabular}
\end{center}
\end{table}
\paragraph{Test II.} In this test, we consider a 1D nonlinear TFPDE model with $g(u)=\sin(u)$, $\mu=0.05$ and an analytic solution
$$u(x,t)=4t^2x(1-x)\exp(-50(x-0.5)^2) \text{ on } [0,1]\times [0,T].$$
The related source term is
\begin{equation}\label{test:e2}
\begin{aligned}
& f(x,t)=\sin(u(x,t))+\frac{4\Gamma(3)}{\Gamma(3-\beta)} t^{2-\beta} x(1-x)\exp(-50(x-0.5)^2)\\
&-4\mu t^2(- 10000x^4 + 20000x^3 - 12000x^2 + 2000x + 98)\exp(-50(x-0.5)^2).
\end{aligned}
\end{equation}
We postulate $\beta \in (0, 1)$ and construct the POD/DEIM ROM based on the full-order simulations at $\beta= 0.2, 0.4, 0.6, 0.8$ and mesh sizes $h=\Delta t =1/64$.
When the final time $T=1$, the first four POD basis functions, the first four nonlinear POD basis functions, and the corresponding DEIM points are shown in Figure \ref{fig:test-2}.
We generate the POD/DEIM ROM using $r=4$ POD basis functions and $s=10$ DEIM points.
To study the performance of the ROM, we vary the length of simulation time by taking $T=1$ and $T=10$ separately, and test the values of $\beta$ from 0.1 to 0.9.
The numerical errors of the POD-DEIM simulations at the final time are listed in Tables \ref{tab:test2t1} and \ref{tab:test2t10}.
It is found that, similar to the linear case, the nonlinear reduced-order approximation achieves the same accuracy as that of the full-order solution; and the reduced-order approximation errors keep the same order of magnitude at all the tested parameter values.
\begin{figure}[!ht]
\centering
\includegraphics[width=.45\textwidth]{./test-21.eps}
\hspace{.2cm}
\includegraphics[width=.45\textwidth]{./test-22.eps}
\caption{\footnotesize The first four POD basis functions (left) and the first four nonlinear POD basis functions with DEIM points in {\it Test II}.}
\label{fig:test-2}
\end{figure}
\begin{table}[!ht]
\begin{center}
\caption{Error comparison of FOM and ROM at $t=1$ for different $\beta$ in {\it Test II}.}
\label{tab:test2t1}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c |} \hline
$\beta$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline
$\|u-u_h\|$ & --- & 6.68E-4 & --- & 6.72E-4 & --- & 7.41E-4 & --- & 1.18E-3 & --- \\
$\|u-u_{h, 4}\|$ & 6.71E-4 & 6.68E-4 & 7.92E-4 & 6.72E-4 & 7.84E-4 & 7.41E-4 & 7.67E-4 & 1.18E-3 & 1.80E-3 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\caption{Error comparison of FOM and ROM at $t=10$ for different $\beta$ in {\it Test II}.}
\label{tab:test2t10}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c |} \hline
$\beta$ & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline
$\|u-u_h\|$ & --- & 7.17E-2 & --- & 7.31E-2 & --- & 7.46E-2 & --- & 7.63E-2 & --- \\
$\|u-u_{h, 4}\|$ & 7.18E-2 & 7.21E-2 & 7.25E-2 & 7.31E-2 & 7.38E-2 & 7.45E-2 & 7.54E-2 & 7.62E-2 & 7.73E-2\\ \hline
\end{tabular}
\end{center}
\end{table}
The preceding two numerical tests demonstrate that the POD/DEIM ROM (\ref{TFPDE:rom3}) yields a reliable approximation and thus can be regarded as an alternative model for TFPDEs.
\section{Parameter Identification} \label{sec:par}
Many application problems demand the identification of parameters of mathematical models.
A typical example is the order of time derivative $\beta$ in the TFPDEs, which is not known {\em a priori}.
Therefore, one obtains certain measurements through physical/mechanical experiments, and uses the data to calibrate the parameters in the mathematical model.
This is an inverse problem: given the source function $f(x,t)$ and the initial value $u_0(x)$ of the TFPDE (\ref{TFPDE:e1}), together with certain observation (or desired) data such as the values ${\bf g}$ of the state variable at the final time, one seeks the order $\beta$ of the time-fractional PDE.
In this section, we formulate the inverse problem as an optimization and develop a Levenberg--Marquardt regularization method (see, \cite{Ch09,NW,SY06}) to iteratively identify the parameter.
It is known that the inverse problem usually requires multiple runs of the forward problem, in each of which a parameter value is chosen and the TFPDE is solved.
Since the computational cost of a single forward solve is already high, the inverse problem can become computationally infeasible.
Therefore, we use the POD/DEIM ROM developed in Section \ref{sec:pod} as a surrogate model and design an efficient ROM-based optimization algorithm for parameter identification.
\subsection{L-M Regularization Method}\label{sec:LM}
The parameter identification of $\beta$ can be formulated as follows: to find $\beta_{inv}$ satisfying
\begin{equation}\label{model:ls}
\begin{aligned}
\beta_{inv}=\arg \min_{\beta \in (0,1)} \mathcal{F}(\beta):=\frac{1}{2}\sum_{i=1}^N \left(u(x_i,T;\beta)-g_i\right)^2,
\end{aligned}
\end{equation}
where $g_i$ is the value of observations ${\bf g}$ at the point $x_i$.
An iterative algorithm such as Newton method with line searching could be employed
to find the solution of (\ref{model:ls}). Basically,
the Newton algorithm for minimizing (\ref{model:ls}) uses the first and second derivatives of the objective function $\mathcal{F}(\beta)$:
\begin{equation}\label{model:2s}
\begin{aligned}
\beta_{k+1}=\beta_{k}-\frac{\mathcal{F}'(\beta_k)}{\mathcal{F}^{''}(\beta_k)},
\end{aligned}
\end{equation}
where $k$ represents the $k$th iteration.
Note that $\mathcal{F}'(\beta_k)=\mathbf{J}_{k}^\top\mathbf{r}_{k}$; neglecting the second-order terms in $\mathcal{F}''(\beta_k)$, the update (\ref{model:2s}) reduces to the Gauss-Newton iteration
\begin{equation}\label{model:GN}
\begin{aligned}
\beta_{k+1}=\beta_{k}-(\mathbf{J}_{k}^\top\mathbf{J}_{k})^{-1}\mathbf{J}_{k}^\top \mathbf{r}_{k},
\end{aligned}
\end{equation}
where $\mathbf{r}_k=(r_1,\cdots,r_N)^\top$ with $r_i=u(x_i,T;\beta)-g_i$ and
\begin{equation}\label{model:Jac}
\begin{aligned}
\mathbf{J}_{k}=\left(\frac{\partial u(x_1,T;\beta)}{\partial \beta}, \cdots, \frac{\partial u(x_N,T;\beta)}{\partial \beta}\right)^\top \in \mathbb{R}^N.
\end{aligned}
\end{equation}
Note that in practice, one may use the finite difference $\frac{u(x_i,T; \beta+\delta)-u(x_i,T; \beta)}{\delta}$ with a small enough $\delta$ to approximate the derivatives in (\ref{model:Jac}).
However, the Newton method may fail to work because $\mathbf{J}_{k}^\top\mathbf{J}_{k}$ may be nearly zero.
In that case the search direction $d_{k}:=-\mathbf{J}_{k}^\top \mathbf{r}_{k}/\mathbf{J}_{k}^\top\mathbf{J}_{k}$ may be extremely large or undefined.
A common technique to overcome this problem is the L-M algorithm (or Levenberg algorithm, since a single-parameter case is considered in this paper), which modifies (\ref{model:GN}) by introducing a regularization term:
\begin{equation}\label{model:LM}
\begin{aligned}
\beta_{k+1}=\beta_{k}-(\mathbf{J}_{k}^\top\mathbf{J}_{k}+\alpha_{k})^{-1}\mathbf{J}_{k}^\top \mathbf{r}_{k},
\end{aligned}
\end{equation}
where $\alpha_{k}$ is a positive penalty parameter.
The method coincides with the Newton algorithm when $\alpha_k=0$; and it gives a step close to the gradient descent direction when $\alpha_k$ is large.
\begin{table}[!ht]
\begin{center}
{\sc Algorithm 4.1. ROM-based parameter identification algorithm.}
\begin{tabular}{l} \hline
Given the observation data ${\bf g}$ and other information of the TFPDE;\\
\textbf{Offline. } Select some samples in the parameter space $[\underline{\beta}, \overline{\beta}] \subset (0,1)$,
solve the FOM
\\
problem (\ref{TFPDE:e5}) for each sample, and construct the ROM (\ref{TFPDE:rom3}) using the $r$ POD basis functions.\\
\textbf{Online. } Given an initial guess $\beta_0$ and choose $\rho \in (0,1)$, $\sigma \in (0, \frac{1}{2})$, $\alpha_0>0$ and $\delta$ small enough. \\
For $k=0,1,\cdots$, $K_{max}$\\
\textbf{$\diamond$ Step 1. } Solve the ROM problem (\ref{TFPDE:rom3}) corresponding to $\beta_k$ and $\beta_k+\delta$ respectively\\
\quad to obtain $u_r(\cdot,T; \beta_k)$ and $u_{r}(\cdot,T; \beta_k+\delta)$ .\\
\textbf{$\diamond$ Step 2. } Compute $\mathbf{J}_{k}$ and $\mathbf{r}_{k}$, and update the search direction $d_{k}:=-\mathbf{J}_{k}^\top \mathbf{r}_{k}/(\mathbf{J}_{k}^\top\mathbf{J}_{k}+\alpha_k)$.\\
\textbf{$\diamond$ Step 3. } Determine the search step $\rho^m$ by the Armijo rule: \\
\centerline{$\mathcal{F}(\beta_k+\rho^m d_k) \le \mathcal{F}(\beta_k) + \sigma \rho^m d_k\mathbf{J}_{k}^\top \mathbf{r}_{k}$}\\
\quad where $m$ is the least nonnegative integer.\\
\textbf{$\diamond$ Step 4. } If $|\rho^m d_k|\le $ Tol, then stop and let $\beta_{inv}:=\beta_{k}$. Otherwise update \\
\centerline{$\beta_{k+1}:=\beta_{k}+\rho^m d_k, ~\alpha_{k+1}:=\alpha_{k}/2$} \\
\quad and go to \textbf{Step 1} again. \\ \hline
\end{tabular}
\end{center}
\end{table}
The proposed approach to the inverse parameter identification is summarized in {\sc Algorithm 4.1}, which includes the details of the L-M method.
In particular, the Armijo rule \cite{A66} in Step 3 of the online process, one of the inexact line search techniques, is imposed to ensure that the objective function $\mathcal{F}$ decreases sufficiently.
Other rules and the related convergence theory can be found in \cite{SY06}.
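The online loop of {\sc Algorithm 4.1} can be sketched as follows; \texttt{rom\_solve(beta)} is assumed to return the ROM solution at $t=T$ lifted to the $N$ grid points, and the default parameters and the iteration cap \texttt{kmax} simply mirror the choices made in Section \ref{sec:num}.
\begin{verbatim}
import numpy as np

def identify_beta(rom_solve, g_obs, beta0, rho=0.75, sigma=0.25,
                  alpha=1.0, delta=1e-3, tol=1e-7, kmax=100):
    # Levenberg-Marquardt with Armijo line search for a scalar parameter.
    beta = beta0
    for _ in range(kmax):
        r = rom_solve(beta) - g_obs
        J = (rom_solve(beta + delta) - rom_solve(beta)) / delta
        grad = J @ r                    # F'(beta) = J^T r (beta is scalar)
        d = -grad / (J @ J + alpha)     # regularized L-M direction
        F0 = 0.5 * (r @ r)
        step = 1.0                      # Armijo backtracking: step = rho^m
        while 0.5 * np.sum((rom_solve(beta + step * d) - g_obs) ** 2) \
                > F0 + sigma * step * d * grad:
            step *= rho                 # increase m until sufficient decrease
        if abs(step * d) <= tol:
            return beta
        beta += step * d
        alpha /= 2.0
    return beta
\end{verbatim}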
\subsection{Numerical Experiments}\label{sec:num}
Next, we test the proposed method for numerically identifying the parameter $\beta$.
Denote by $\beta^*$ the exact order of the time-fractional derivative in (\ref{TFPDE:e1}), by
$\beta_0$ an initial guess for the optimization, and by $\beta_{inv}$ the numerical result.
Let `Itr.' be the number of iterations, and `CPU time' represent the online time for implementing {\sc Algorithm 4.1}.
To test the algorithm, we take the observation data ${\bf g}$ to be the solution of the FOM (\ref{TFPDE:e5}) at $t=T$ when the fractional order is $\beta^*$.
Since realistic data may be contaminated by noise, we also consider cases in which the data has a small random perturbation, i.e.,
\begin{equation}
\begin{aligned}
g^{\epsilon}(x_i)=g(x_i)(1+\epsilon\% \cdot randn(i)),
\end{aligned}
\end{equation}
for $i = 1,\cdots,N$, where $\epsilon$ is the noise level (in percent) and $randn(i)$ denotes a sample drawn from the standard normal distribution.
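In NumPy, such contaminated data can be generated, for instance, as follows, where \texttt{g\_obs} is assumed to hold the noise-free observation vector:
\begin{verbatim}
eps = 1.0     # noise level in percent
noise = np.random.default_rng().standard_normal(g_obs.size)
g_noisy = g_obs * (1.0 + eps / 100.0 * noise)
\end{verbatim}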
Assume $\beta \in (0, 1)$ and $\beta^*=0.75$.
In the following tests, we use a four-dimensional ($r= 4$) ROM generated offline based on the full-order solutions corresponding to $\beta=0.2, 0.4, 0.6, 0.8$;
and select the parameters $\alpha_0=1$, $\rho = 0.75$, $\sigma =0.25$, $\delta=10^{-3}$, and Tol $= 10^{-7}$ in the online process.
Test cases in 1D and 2D spatial domains are considered.
\subsubsection{One Dimensional Cases}
We revisit some examples used in Section \ref{sec:pod}.
The space-time domain is chosen as $[0,1]^2$ and the mesh sizes are $h=\Delta t =1/64$.
\paragraph{Example 1.} The exact solution, initial condition and source function in this example are the same as those in {\it Test I}.
Varying the initial guess $\beta_0$ and the noise level $\epsilon$, we test the proposed algorithm ({\sc Algorithm 4.1}) on this linear problem.
The associated output $\beta_{inv}$ and approximation error $|\beta^*-\beta_{inv}|$, and iteration numbers of the optimization process are listed in Table \ref{tab:1dex1}.
\begin{table}[!ht]
\begin{center}
\caption{Numerical observation of $\beta^*=0.75$ with $\epsilon\%$-level noise-contaminated data in {\it Example 1}.}
\label{tab:1dex1}
\begin{tabular}{| c | c | c | c | c || c | c | c | c |} \hline
$\epsilon\%$ &$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. &$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. \\ \hline
& 0.1 & 7.5000E-1 & 8.8659E-9 & 12 & 0.7 & 7.5000E-1 & 6.2172E-8 & 11 \\
0\% & 0.3 & 7.5000E-1 & 6.3319E-9 & 12 & 0.8 & 7.5000E-1 & 6.6172E-8 & 11 \\
& 0.5 & 7.5000E-1 & 3.7111E-9 & 12 & 0.9 & 7.5000E-1 & 2.8085E-9 & 12 \\ \hline
& 0.1 & 7.4971E-1 & 2.8815E-4 & 12 & 0.7 & 7.5026E-1 & 2.5526E-4 & 11 \\
0.01\% & 0.3 & 7.5006E-1 & 5.7065E-5 & 12 & 0.8 & 7.5007E-1 & 7.0675E-5 & 11 \\
& 0.5 & 7.5043E-1 & 4.3908E-4 & 12 & 0.9 & 7.5010E-1 & 1.0379E-4 & 12 \\ \hline
& 0.1 & 7.5104E-1 & 1.0463E-3 & 12 & 0.7 & 7.4556E-1 & 4.4472E-3 & 11 \\
0.1\% & 0.3 & 7.4978E-1 & 2.2298E-4 & 12 & 0.8 & 7.5236E-1 & 2.3619E-3 & 11 \\
& 0.5 & 7.5078E-1 & 7.8280E-4 & 12 & 0.9 & 7.5734E-1 & 7.3391E-3 & 12 \\ \hline
& 0.1 & 7.3621E-1 & 1.3791E-2 & 12 & 0.7 & 7.2562E-1 & 2.4375E-2 & 11 \\
1\% & 0.3 & 7.6237E-1 & 1.2373E-2 & 12 & 0.8 & 7.1960E-1 & 3.3040E-2 & 11 \\
& 0.5 & 7.0238E-1 & 4.7617E-2 & 12 & 0.9 & 7.6846E-1 & 1.8461E-2 & 12 \\\hline
\end{tabular}
\end{center}
\end{table}
For cases in which the data is uncontaminated and contaminated by random noise at a relative $1\%$-level, we plot the change of parameter errors and values of the objective function with respect to the number of iterations in Figures \ref{fig:1dex1unc}-\ref{fig:1dex1c}, respectively.
Note that a different random noise realization is imposed for each run of the algorithm; thus, the data used differs between the inverse problems as the initial guess changes.
Therefore, we can see that, for example, in the $1\%$-level case with the initial guesses 0.1 and 0.3, the outputs $\beta_{inv}$ are different.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./err_k_u-2.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_u-2.eps}
\caption{$\beta^*=0.75$ for uncontaminated observation data in {\it Example 1}.}
\label{fig:1dex1unc}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./err_k_c-2.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_c-2.eps}
\caption{$\beta^*=0.75$ for 1\%-level noise contaminated observation data in {\it Example 1}.}
\label{fig:1dex1c}
\end{center}
\end{figure}
It is seen that (\textbf{i}) the proposed algorithm achieves a close approximation of the desired parameter $\beta^*$ for different initial guesses; in particular, $\beta_0=0.1$ and $0.9$ lie outside the range of the sampling set;
(\textbf{ii}) the optimization process takes only a few iterations to reach the tolerance;
(\textbf{iii}) when the observation data ${\bf g}$ is contaminated by random noise, the algorithm still produces satisfactory results, but with lower accuracy than in the uncontaminated case. For example, with the initial guess $\beta_0=0.7$, the numerical
result $\beta_{inv}$ equals $7.5000\times10^{-1}$, $7.5026\times10^{-1}$, $7.4556\times10^{-1}$, and $7.2562\times10^{-1}$, respectively, for the uncontaminated data, the 0.01\%-level contaminated data, the 0.1\%-level contaminated data, and the 1\%-level contaminated data.
This is because the real parameter $\beta^*$ has been slightly perturbed by the noise on the observation data.
Such influence becomes more obvious when the noise level increases.
\paragraph{Example 2.} We consider {\it Test II} again and perform the same type of tests as in {\it Example 1}.
The algorithm output $\beta_{inv}$ and approximation error $|\beta^*-\beta_{inv}|$, and iteration numbers of the optimization process are listed in Table \ref{tab:1dex2}.
For cases in which the data is uncontaminated and contaminated by random noise at a relative $1\%$-level, we plot the change of parameter errors and values of the objective function with respect to the number of iterations in Figures \ref{fig:1dex2unc} and \ref{fig:1dex2c}, respectively.
The same conclusions as that of {\it Example 1} can be drawn in this case.
\begin{table}[htp]
\begin{center}
\caption{Numerical observation of $\beta^*=0.75$ with $\epsilon\%$-level noise-contaminated data in {\it Example 2}.}
\label{tab:1dex2}
\begin{tabular}{| c | c | c | c | c || c | c | c | c |} \hline
$\epsilon\%$ &$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. &$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. \\ \hline
& 0.1 & 7.5000E-1 & 9.9664E-10 & 8 & 0.7 & 7.5000E-1 & 2.9806E-8 & 7 \\
0\% & 0.3 & 7.5000E-1 & 3.4394E-10 & 8 & 0.8 & 7.5000E-1 & 3.6300E-8 & 7 \\
& 0.5 & 7.5000E-1 & 1.6732E-9 & 8 & 0.9 & 7.5000E-1 & 1.0195E-7 & 7 \\ \hline
& 0.1 & 7.4998E-1 & 1.5424E-5 & 8 & 0.7 & 7.4989E-1 & 1.1190E-4 & 7 \\
0.01\% & 0.3 & 7.4997E-1 & 3.0629E-5 & 8 & 0.8 & 7.5006E-1 & 5.6022E-5 & 7 \\
& 0.5 & 7.5003E-1 & 2.8158E-5 & 8 & 0.9 & 7.5007E-1 & 6.5169E-5 & 7 \\ \hline
& 0.1 & 7.5044E-1 & 4.3990E-4 & 8 & 0.7 & 7.5012E-1 & 1.2027E-4 & 7 \\
0.1\% & 0.3 & 7.5025E-1 & 2.4610E-4 & 8 & 0.8 & 7.4959E-1 & 4.0968E-4 & 7 \\
& 0.5 & 7.5007E-1 & 7.4076E-5 & 8 & 0.9 & 7.4968E-1 & 3.1646E-4 & 7 \\ \hline
& 0.1 & 7.5440E-1 & 4.3964E-3 & 8 & 0.7 & 7.5120E-1 & 1.2030E-3 & 7 \\
1\% & 0.3 & 7.5246E-1 & 2.4605E-3 & 8 & 0.8 & 7.4590E-1 & 4.0992E-3 & 7 \\
& 0.5 & 7.5074E-1 & 7.4073E-4 & 8 & 0.9 & 7.4683E-1 & 3.1669E-3 & 8 \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htp]
\begin{center}\includegraphics[width=.48\linewidth]{./err_k_u-31.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_u-31.eps}
\caption{\footnotesize $\beta^*=0.75$ for uncontaminated observation data in {\it Example 2}.}
\label{fig:1dex2unc}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./err_k_c-31.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_c-31.eps}
\caption{\footnotesize $\beta^*=0.75$ for 1\%-level noise contaminated observation data in {\it Example 2}.}
\label{fig:1dex2c}
\end{center}
\end{figure}
\subsubsection{Two Dimensional Cases}
In this subsection, we consider an application of the ROM-based algorithm ({\sc Algorithm 4.1}) for 2D TFPDEs (\ref{TFPDE:e1}).
A linear equation is considered in {\em Example 3} and a nonlinear case is considered in {\em Example 4.}
The goal of these tests is two-fold: we check the accuracy of the estimated parameter; and measure the efficiency of the proposed ROM-based algorithm by comparing the CPU time with a FOM-based L-M algorithm.
\paragraph{Example 3.}
First, a linear TFPDE is considered, that is, $g=0$ in \eqref{TFPDE:e1}.
Let $\Omega=[-1,1]^2$, $T=1$, $\mu=1$, $f=0$, and the initial condition $u_0(x,y)=(x-1)(x+1)(y-1)(y+1)$.
The forward problem is solved at parameter samples $\beta= 0.2, 0.4, 0.6, 0.8$ to generate snapshots.
The space-time domain is decomposed into a $64\times 64 \times 64$ grid.
This means that one has to solve a sequence of 3969-by-3969 linear algebraic systems when the FOM-based L-M algorithm is used.
The offline construction work of a four-dimensional POD-ROM takes about 195 seconds.
The four POD basis functions are shown in Figure \ref{fig:2dpod}.
Since the dimension is low, the computational cost of the online implementation is greatly reduced.
\begin{figure}[!ht]
\begin{center}\includegraphics[width=.48\linewidth]{./pod-bs-1.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./pod-bs-2.eps}\\
\includegraphics[width=.48\linewidth]{./pod-bs-3.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./pod-bs-4.eps}
\caption{\footnotesize The first four POD basis functions in {\it Example 3}.}
\label{fig:2dpod}
\end{center}
\end{figure}
Since the linear algebraic systems are all symmetric positive definite, we use the preconditioned conjugate gradient (PCG) iterative solver in these tests. In Tables \ref{tab:2dunc} and \ref{tab:2dc}, we show the numerical results for the parameter estimation problem based on the FOM and the ROM when the data is uncontaminated and contaminated by $1\%$-level random noise, respectively.
The ideal observation data and one example of a $1\%$-level noise are shown in Figure \ref{fig:2dexdata}.
The error $|\beta^*-\beta_{inv}|$ and the objective function $\mathcal{F}(\beta)$ versus the number of iterations for different initial guesses are plotted in Figures \ref{fig:2dexerr}-\ref{fig:2dexf}, respectively.
It is seen that the proposed ROM-based algorithm achieves the same accuracy as the FOM-based L-M algorithm, and both algorithms converge after a few iterations.
However, the CPU time is reduced markedly: for instance, with the initial guess $\beta_0=0.5$ and noise-free data, the FOM-based algorithm takes 529 seconds while the ROM-based algorithm takes only 34 seconds of online time.
When the observation data is contaminated by $1\%$-level noise, the CPU time for the FOM can increase considerably, while the ROM-based approach still takes only about 35\,s to complete the optimization process.
Of course, for large-scale or long-time modeling problems, the ROM-based approach will become more competitive.
\begin{table}[htp]
\begin{center}
\caption{Comparison of FOM and ROM with uncontaminated data in {\it Example 3}.}
\label{tab:2dunc}
\begin{tabular}{| c | c | c | c | c | c |} \hline
&$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. & CPU time \\ \hline
& 0.5 & 7.5000E-1 & 2.0301E-8 & 5 & 529s \\
& 0.6 & 7.5000E-1 & 3.4855E-9 & 5 & 528s \\
FOM & 0.7 & 7.5000E-1 & 2.8237E-8 & 4 & 418s \\
& 0.8 & 7.5000E-1 & 6.9110E-10 & 5 & 505s \\
& 0.9 & 7.5000E-1 & 8.7546E-9 & 5 & 494s \\ \hline
& 0.5 & 7.5000E-1 & 2.0300E-8 & 5 & 34s \\
& 0.6 & 7.5000E-1 & 3.4840E-9 & 5 & 35s \\
ROM-4 & 0.7 & 7.5000E-1 & 2.8235E-8 & 4 & 28s \\
& 0.8 & 7.5000E-1 & 6.8953E-10 & 5 & 35s \\
& 0.9 & 7.5000E-1 & 8.7531E-9 & 5 & 35s \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htp]
\begin{center}
\caption{Comparison of FOM and ROM with fixed $1\%$-level noise-contaminated data in {\it Example 3}.}
\label{tab:2dc}
\begin{tabular}{| c | c | c | c | c | c |} \hline
&$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. & CPU time \\ \hline
& 0.5 & 7.4986E-1 & 1.3766E-4 & 5 & 556s \\
& 0.6 & 7.4986E-1 & 1.3768E-4 & 5 & 1,162s \\
FOM & 0.7 & 7.4986E-1 & 1.3765E-4 & 4 & 442s \\
& 0.8 & 7.4986E-1 & 1.3768E-4 & 5 & 506s \\
& 0.9 & 7.4986E-1 & 1.3767E-4 & 5 & 695s \\ \hline
& 0.5 & 7.4986E-1 & 1.3766E-4 & 5 & 34s \\
& 0.6 & 7.4986E-1 & 1.3768E-4 & 5 & 33s \\
ROM-4 & 0.7 & 7.4986E-1 & 1.3765E-4 & 4 & 26s \\
& 0.8 & 7.4986E-1 & 1.3768E-4 & 5 & 35s \\
& 0.9 & 7.4986E-1 & 1.3767E-4 & 5 & 33s \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./observe-data.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./1-level-noise.eps}
\caption{The observation data and the fixed $1\%$-level noise in {\it Example 3}.}
\label{fig:2dexdata}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}\includegraphics[width=.48\linewidth]{./err_k_u-4.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_u-4.eps}
\caption{$\beta^*=0.75$ for uncontaminated observation data in {\it Example 3}.}
\label{fig:2dexerr}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./err_k_c-4.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_c-4.eps}
\caption{$\beta^*=0.75$ for fixed 1\%-level noise contaminated observation data in {\it Example 3}.}
\label{fig:2dexf}
\end{center}
\end{figure}
\paragraph{Example 4.} Next, we consider a nonlinear TFPDE model \eqref{TFPDE:e1} with $\Omega=[0,1]^2$, $T=1$,
and ${\boldsymbol\mu}=\left[
\begin{array}{cc}
1 &0 \\
0 &2
\end{array}
\right]$, $g(u)=u^3$ and the source term
\begin{equation}\label{test:e3}
\begin{aligned}
f(x,t)=u(x,t)^3 +6\pi^2 u(x,t)+ \left(\frac{\Gamma(3+\beta)}{\Gamma(3)} t^2 + \frac{\Gamma(3)}{\Gamma(3-\beta)} t^{2-\beta}\right)\sin(2\pi x)\sin(\pi y)
\end{aligned}
\end{equation}
such that the analytic solution is
$u(x,t)=(t^{2+\beta}+t^2+1)\sin(2\pi x)\sin(\pi y)$.
The same spatial and temporal discretization as in {\em Example 3} is used for this test.
The set of parameter samples for constructing the POD/DEIM ROM is also selected to be the same as used in {\em Example 3}.
We construct a 4-dimensional POD/DEIM ROM, which uses $r= 4$ leading POD basis functions as shown in Figure \ref{fig:2dpod2}, $s=10$ nonlinear POD basis (the first four ones are plotted in Figure \ref{fig:2ddeim}), and $10$ DEIM points as shown in Figure \ref{fig:2ddeimp}.
The offline time of the reduced-order simulations is about 528 seconds.
\begin{figure}[!ht]
\begin{center}\includegraphics[width=.48\linewidth]{./pod-bs-11.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./pod-bs-21.eps}\\
\includegraphics[width=.48\linewidth]{./pod-bs-31.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./pod-bs-41.eps}
\caption{The first four POD basis functions in {\it Example 4}. }
\label{fig:2dpod2}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}\includegraphics[width=.48\linewidth]{./deim-bs-11.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./deim-bs-21.eps}\\
\includegraphics[width=.48\linewidth]{./deim-bs-31.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./deim-bs-41.eps}
\caption{The first four POD basis functions for the nonlinear function $F(u)$ in {\it Example 4}. }
\label{fig:2ddeim}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}\includegraphics[width=.48\linewidth]{./deim_pts.eps}
\caption{The first ten DEIM points for the nonlinear function $F(u)$ in {\it Example 4}. }
\label{fig:2ddeimp}
\end{center}
\end{figure}
\begin{table}[htp]
\begin{center}
\caption{Comparison of FOM and ROM with uncontaminated data in {\it Example 4}.}
\label{tab:2dund}
\begin{tabular}{| c | c | c | c | c | c |} \hline
&$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. & CPU time \\ \hline
& 0.01 & 7.5000E-1 & 4.6031E-9 & 8 & 2803s \\
& 0.1 & 7.5000E-1 & 3.9489E-9 & 8 & 2784s \\
& 0.3 & 7.5000E-1 & 2.6262E-9 & 8 & 2753s \\
FOM & 0.5 & 7.5000E-1 & 1.4268E-9 & 8 & 2725s \\
& 0.8 & 7.5000E-1 & 2.9511E-8 & 7 & 2334s \\
& 0.9 & 7.5000E-1 & 8.8339E-8 & 7 & 2334s \\
& 0.99 & 7.5000E-1 & 1.3489E-9 & 8 & 2647s \\ \hline
& 0.01 & 7.5000E-1 & 4.6045E-9 & 8 & 9s \\
& 0.1 & 7.5000E-1 & 3.9490E-9 & 8 & 9s \\
ROM & 0.3 & 7.5000E-1 & 2.6276E-9 & 8 & 9s \\
& 0.5 & 7.5000E-1 & 1.4268E-9 & 8 & 9s \\
& 0.8 & 7.5000E-1 & 2.9511E-8 & 7 & 8s \\
& 0.9 & 7.5000E-1 & 8.8337E-8 & 7 & 8s \\
& 0.99 & 7.5000E-1 & 1.3504E-9 & 8 & 9s \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htp]
\begin{center}
\caption{Comparison of FOM and ROM with fixed $1\%$-level noise-contaminated data in {\it Example 4}.}
\label{tab:2dd}
\begin{tabular}{| c | c | c | c | c | c |} \hline
&$\beta_0$ & $\beta_{inv}$ & $|\beta^*-\beta_{inv}|$ & Itr. & CPU time \\ \hline
& 0.01 & 7.3045E-1 & 1.9554E-2 & 8 & 2821s \\
& 0.1 & 7.3045E-1 & 1.9554E-2 & 8 & 3038s \\
& 0.3 & 7.3045E-1 & 1.9554E-2 & 8 & 2789s \\
FOM & 0.5 & 7.3045E-1 & 1.9554E-2 & 8 & 2761s \\
& 0.8 & 7.3045E-1 & 1.9554E-2 & 7 & 2381s \\
& 0.9 & 7.3045E-1 & 1.9554E-2 & 8 & 3503s \\
& 0.99 & 7.3045E-1 & 1.9554E-2 & 8 & 2682s \\ \hline
& 0.01 & 7.3045E-1 & 1.9554E-2 & 8 & 10s \\
& 0.1 & 7.3045E-1 & 1.9554E-2 & 8 & 10s \\
ROM & 0.3 & 7.3045E-1 & 1.9554E-2 & 8 & 10s \\
& 0.5 & 7.3045E-1 & 1.9554E-2 & 8 & 10s \\
& 0.8 & 7.3045E-1 & 1.9554E-2 & 7 & 9s \\
& 0.9 & 7.3045E-1 & 1.9554E-2 & 8 & 10s \\
& 0.99 & 7.3045E-1 & 1.9554E-2 & 8 & 9s \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./observe-data1.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./1-level-noise1.eps}
\caption{The observation data and the fixed $1\%$-level noise in {\it Example 4}.}
\label{fig:2dexdata2}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}\includegraphics[width=.48\linewidth]{./err_k_u-41.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_u-41.eps}
\caption{$\beta^*=0.75$ for uncontaminated observation data in {\it Example 4}.}
\label{fig:2dexerr2}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.48\linewidth]{./err_k_c-41.eps}
\hspace{.1cm}
\includegraphics[width=.48\linewidth]{./obj_k_c-41.eps}
\caption{$\beta^*=0.75$ for fixed 1\%-level noise contaminated observation data in {\it Example 4}.}
\label{fig:2dexf2}
\end{center}
\end{figure}
The numerical results for the parameter identification problem based on FOM and ROM are listed in Tables \ref{tab:2dund}-\ref{tab:2dd}, for cases in which the data is uncontaminated and contaminated by 1\% level random noise, respectively.
The ideal observation data and one example of a $1\%$-level noise are shown in Figure \ref{fig:2dexdata2}.
The error $|\beta^*-\beta_{inv}|$ and the objective function $\mathcal{F}(\beta)$ versus the number of iterations for different initial guesses are plotted in Figures \ref{fig:2dexerr2}-\ref{fig:2dexf2}.
The proposed ROM-based algorithm achieves the same accuracy as the FOM-based L-M algorithm, and both algorithms converge after a few iterations.
However, the CPU time is dramatically decreased: for instance, with the initial guess $\beta_0=0.01$ and noise-free data, the FOM-based algorithm takes 2803 seconds while the ROM-based algorithm takes only 9 seconds of online time.
Similar speed-up factors are also obtained for the noise-contaminated data.
\section{Conclusions}
As a first step of investigations on the reduced-order modeling of fractional partial differential equations, a POD/DEIM-based reduced-order model is proposed for time-fractional diffusion problems in this paper.
The numerical study on the reduced-order simulations shows that the POD/DEIM ROM is able to achieve the same accuracy as the full-order model, but greatly reduces the associated computational complexities.
Motivated by realistic applications of the time-fractional diffusion problems, in which the fractional order $\beta$ of TFPDEs is usually unknown {\em a priori}, we consider an inverse problem for parameter identification.
Based on the POD/DEIM ROM of TFPDEs and the Levenberg-Marquardt algorithm, we developed a ROM-based optimization algorithm for seeking an optimal $\beta$ so that our model output can match the experimental observations.
Numerical tests verify the effectiveness of the proposed algorithm on both linear and nonlinear TFPDEs.
As a next step, we will extend the idea to more general FPDEs, including the case $\beta>1$, and apply the proposed methods to application problems in engineering and scientific computing.
\begin{acknowledgements}
The first author would like to thank the China Scholarship Council for supporting his visit to the Interdisciplinary Mathematics Institute at the University of South Carolina from 2015 to 2016.
\end{acknowledgements}
\section{Introduction}
A set of simple closed curves on a surface is said to fill if it cuts the surface into topological disks and once-punctured disks. Any such filling set must contain at least two curves; by a simple topological argument (see for instance \cite{ts}), if two curves fill, they must intersect at least $|\chi|$ times, where $\chi$ is the Euler characteristic of the surface. So if we bound the number of times the curves can intersect and increase the complexity of the surface, we will need more curves; but how many? The main goal of this paper is to give an answer to this question.
For closed surfaces of genus $g$, it is known \cite{app} that the number $N$ of curves in a filling set of curves that pairwise intersect at most $k$ times satisfies
$$N^2-N\geq\frac{4g-2}{k}.$$
Moreover the bound is essentially sharp.
In this paper we study finite type surfaces of negative Euler characteristic and with punctures. Somewhat surprisingly, the bounds we obtain differ depending on the parity of $k$, the number of times curves are allowed to pairwise intersect. For even $k$, we obtain a similar result to the one for closed surfaces mentioned above.
\begin{thmintro}\label{thmintro:even}
Let $S$ be a surface with at least one puncture and let $k$ be a positive even integer. Any filling set of curves on $S$ pairwise intersecting at most $k$ times has cardinality at least $N$, where $N$ is the smallest integer satisfying
$$N(N-1)\geq \frac{2}{k}|\chi(S)|.$$
Furthermore, if $g(S)\leq 1$ then there exists a filling set of curves pairwise intersecting at most $k$ times of size $N$. However if the surface has genus at least two then there exists a filling set of size less than
$$
\sqrt{\frac{2|\chi(S)|}{k} +\frac{1}{4} }+ 6.
$$
\end{thmintro}
Note that the above formulas determine the order of growth (as a function of the Euler characteristic) of a minimal filling set (the leading term being $\sqrt{\frac{2|\chi(S)|}{k}}$).
In contrast, for odd $k$ we show that the order of growth is different; to explain our result we need to say a little about a related problem. We denote by $M_g(k)$ the {\it maximum} number of curves that pairwise intersect at most $k$ times on a closed genus $g$ surface. Determining $M_g(k)$ is a surprisingly hard problem. Although bounds are known (see the work of Przytycki in \cite{przytycki} and also Aougab in \cite{aougab}, Juvan--Malni\v{c}--Mohar in \cite{jmm}, and Malestein--Rivin--Theran \cite{mrt}), even the rough order of growth of $M_g(k)$ is not known. For $k=1$, its growth in terms of the genus is known to be somewhere between quadratic and cubic. This somewhat mysterious quantity appears in our next theorem.
\begin{thmintro}\label{thmintro:odd} Let $S$ be a surface of genus $g$ with at least one puncture. Then a filling set of curves pairwise intersecting at most $k$ times has cardinality at least $N$, where $N$ is the smallest integer satisfying
$$\frac{k}{2}N(N-1)-\frac{N}{2}\left(\frac{N}{M_g(k)}-1\right)\geq|\chi(S)|.$$
\end{thmintro}
Note that the order of growth is really different from the one in Theorem \ref{thmintro:even} because of the extra term $-\frac{N}{2}\left(\frac{N}{M_g(k)}-1\right)$.
In light of the problem of determining and realizing the quantity $M_g(k)$, when $k$ is odd we are not able to give explicit constructions for small filling sets of curves pairwise intersecting at most $k$ times that match our lower bound, with one notable exception.
When $g=1$ and $k=1$, it is not difficult to show that $M_g(1) = 3$. Using this, when the surface is a punctured torus and $k=1$ we can prove a precise result.
\begin{thmintro}\label{thmintro:torus}
Let $ \delta_1,\hdots, \delta_N$ be a filling set of curves that pairwise intersect at most once on a torus with $n$ punctures. Then
$$
N \geq \sqrt{3n}.
$$
Conversely for a torus with $n$ punctures, there exists a filling set of $N$ isotopically distinct simple curves $ \delta_1,\hdots, \delta_N$ that pairwise intersect at most once with
$$
N \leq \sqrt{3n+1}.
$$
\end{thmintro}
The results described up until now are purely topological. One motivation for understanding the topology of curves that pairwise intersect at most a small number of times comes from the study of {\it systoles} on surfaces. A systole is a shortest closed non-contractible curve that is not isotopic to boundary. For any given Riemannian metric, systoles pairwise intersect at most twice (and at most once on a closed surface). An important class of metrics is complete finite area hyperbolic surfaces; we consider systoles on these.
In particular, we can ask what happens to our previous bounds if one requires that the curves be systoles and we show that the growth of the lower bound is very different.
\begin{thmintro}\label{thmintro:systoles}
Let $S$ be a hyperbolic surface of signature $(g,n)$ and systole length $\ell$. If $\gamma_1,\dots,\gamma_M$ is a filling set of systoles, then
$$M\geq \frac{2\pi(2g-1)+\pi(n-2)}{4\ell}.$$
\end{thmintro}
We also give examples of constructions of surfaces with a filling set of systoles of cardinality linear in the Euler characteristic, showing that the order of growth of Theorem \ref{thmintro:systoles} is roughly correct.
The paper is organized as follows. In Section \ref{prelim} we give the main definitions and prove Theorem \ref{thmintro:torus}. In the subsequent section, we prove Theorem \ref{thmintro:even}, treating separately the constructions for the case of spheres, tori and higher genus surfaces. Theorem \ref{thmintro:odd} is proven in Section \ref{odd} and the final section is dedicated to the lower bounds on the number of filling systoles.
\section{The case of genus $1$}\label{prelim}
In this section we introduce some of the objects of interest and illustrate our objective by proving Theorem \ref{thmintro:torus}. Although it is relatively straightforward, it contains many of the main steps that will be used in the sequel.
A simple closed curve is {\it essential} if it is not homotopic to a point or to a puncture. Throughout the paper, by {\it curve} we will mean a simple, closed, essential curve.
A set $\Gamma$ of pairwise non-homotopic curves on a surface $S$ {\it fills} if the complement $S\setminus \Gamma$ is a union of disks and once-punctured disks. A {\it $k$-filling set} is a set of pairwise non-homotopic curves which fill and pairwise intersect at most $k$ times.
With this notation, Theorem \ref{thmintro:torus} can be restated as follows.
\begin{theorem}\label{thm:torus}
Let $ \delta_1,\hdots, \delta_N$ be a $1$-filling set on a torus $T$ with $n$ punctures. Then
$$
N \geq \sqrt{3n}.
$$
Conversely for $T$ a torus with $n$ punctures, there exists a $1$-filling set of cardinality $N$ with
$$
N \leq \sqrt{3n+1}.
$$
\end{theorem}
\begin{proof}
We begin by recalling the well-known fact that for $n=0$ (or $n=1$), there can be at most $3$ topologically distinct curves that pairwise intersect at most once. Associated to $T$ is the torus $T^{0}$ obtained by forgetting the punctures of $T$. This induces a map on curves, sending curves on $T$ to curves on $T^{0}$, called the forgetful map. Note that if two curves on $T$, say $\delta$ and $\tilde{\delta}$, intersect at most once, their images on $T^{0}$ do as well.
Now given a set of curves $ \delta_1,\hdots, \delta_N$ that pairwise intersect at most once, let's consider the curves obtained on $T^{0}$ via the forgetful map. The image consists of at most three curves. Let's denote these curves $\alpha, \beta$ and $\gamma$ (if the image is smaller, we arbitrarily choose the remaining curves so that they pairwise intersect at most once). We can split the curves $ \delta_1,\hdots, \delta_N$ into three sets depending on whether they are preimages of $\alpha$, $\beta$ or $\gamma$. Up to renumbering, let's assume that the preimages of $\alpha$ are
\delta_1,\hdots,\delta_a,
$$
those of $\beta$ are
$$
\delta_{a+1},\hdots,\delta_{a+b},
$$
and those of $\gamma$ are
$$
\delta_{a+b+1},\hdots,\delta_{a+b+c}
$$
where $N=a+b+c$.
Observe that $\inter(\delta_i,\delta_j) = 1$ if and only if $\delta_i$ and $\delta_j$ are the preimages of different curves among the set $\alpha,\beta,\gamma$. As such the total number of pairwise intersections among the curves $\delta_1,\hdots,\delta_N$ is
$$
a b + bc + a c.
$$
We can assume no intersection points coincide on $T$; by an Euler characteristic argument the number above is also the number of connected components of $T \setminus \{\delta_1,\hdots,\delta_N\}$. As there must be at least as many connected components as punctures we obtain:
$$
n \leq a b + bc + a c.
$$
By Lagrange multipliers, the quantity $ab + bc + ac$ is maximal among all $a,b,c$ satisfying $a+b+c=N$ when $a,b,c$ are equal. Thus
$$
n\leq \frac{1}{3} N^2
$$
which proves the first assertion.
Note that if $N$ is not divisible by $3$ we can get a slightly better bound. Indeed, in this case the maximum that can be achieved is $\frac{N^2-1}{3}$ (for instance for $a=\lfloor\frac{N}{3}\rfloor$, $b=\lfloor\frac{N}{3}\rfloor+1$ and $c=a$ or $b$, depending on whether $N\equiv 1$ or $2$ modulo $3$). So in this case we get
$$n\leq \frac{N^2-1}{3}.$$
To prove the second assertion it suffices to reverse engineer the above process. Consider a torus $T$ with three curves $\alpha,\beta$ and $\gamma$ which all pairwise intersect at most once. We begin by choosing the minimal $N$ satisfying the above inequality and take $a:=\lfloor{\frac{N}{3}\rfloor}$ parallel copies of $\alpha$, $b:= \lfloor{\frac{N}{3}\rfloor} + d$ parallel copies of $\beta$ and $c:= \lfloor{\frac{N}{3}\rfloor} + d'$ parallel copies of $\gamma$, where $0\leq d,d'\leq 1$ are integers and $a+b+c =N$. We now have a collection of $N$ curves on $T$.
Now as above, the number of connected components of the complementary regions to all of the curves is $a b + bc + a c$. We place at most one puncture in each of the connected regions for a total of $n$ punctures. The result is an $n$-times punctured torus with a filling set of curves that satisfies the desired inequality.
\end{proof}
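As a sanity check, the counting in the proof is easy to verify numerically. The sketch below (in Python, with a hypothetical helper name) finds, for a given $n$, the smallest $N$ whose balanced split $a+b+c=N$ produces at least $n$ complementary regions, and checks it against the proven lower bound $N\geq\sqrt{3n}$.
\begin{verbatim}
# A numerical sanity check of the counting in the proof (hypothetical
# helper name). For a given n, find the smallest N whose balanced split
# a + b + c = N yields ab + bc + ca >= n complementary regions, and
# check the proven lower bound N >= sqrt(3n).
import math

def min_filling_cardinality(n):
    N = 1
    while True:
        a, r = divmod(N, 3)
        b = a + (1 if r >= 1 else 0)
        c = a + (1 if r == 2 else 0)
        if a * b + b * c + a * c >= n:
            return N, (a, b, c)
        N += 1

for n in [1, 3, 12, 27, 100]:
    N, split = min_filling_cardinality(n)
    assert N >= math.sqrt(3 * n) - 1e-9, (n, N)
    print(n, N, split)
\end{verbatim}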
\section{The topological setup: case $k$ even}\label{even}
In this section we will always assume $k$ to be an even positive integer.
We begin by proving a lower bound on the number of curves in a $k$-filling set of curves on any punctured surface.
\begin{theorem}\label{thm:lower_even}
Let $S$ be a surface with at least one puncture and of negative Euler characteristic. Any $k$-filling set of curves on $S$ has cardinality at least $N$, where $N$ is the smallest integer satisfying
$$N(N-1)\geq \frac{2}{k}|\chi(S)|.$$
\end{theorem}
\begin{proof}
Suppose $\{\gamma_1,\dots,\gamma_m\}$ is a $k$-filling set of curves on a surface $S$ of signature $(g,n)$. Up to homotopy, we can assume no three curves intersect in the same point and each intersection is transversal. This implies that the curves define a $4$-valent graph $G$ on $S$, with the intersections as vertices and edges given by the arcs of the curves. Denote by $v(G)$ and $e(G)$ the number of vertices and edges of $G$, and by $f(G)$ the number of connected components of $S\setminus G$. By the hand-shaking lemma, since $G$ is $4$-valent, we have
$$e(G)=2v(G).$$
Any two curves pairwise intersect at most $k$ times, so the number of vertices satisfies
$$v(G)\leq k {m\choose 2}=k\frac{m(m-1)}{2}.$$
Moreover, since the set of curves is filling and the surface has $n$ punctures, there are at least $n$ connected components of $S\setminus G$, i.e.\ $f(G)\geq n$. Computing the Euler characteristic on the closed surface $\bar{S}$ obtained by filling the punctures, we get
$$2-2g=v(G)-e(G)+f(G)=-v(G)+f(G),$$
so that $f(G)\geq n$ yields
$$k\frac{m(m-1)}{2}\geq v(G)=f(G)-2+2g\geq n+2g-2=|\chi(S)|,$$
which is the desired lower bound.
\end{proof}
\begin{rmk}
Note that the lower bound of Theorem \ref{thm:lower_even} holds for odd $k$ as well, but, as we will show later, for $k$ odd we can get a better bound.
\end{rmk}
We begin with the case of the sphere, where we can show that the lower bound of Theorem \ref{thm:lower_even} is sharp.
\begin{theorem}\label{thm:spheres}
Let $S$ be a sphere with $n\geq 4$ punctures. There exists a $k$-filling set of curves on $S$ of cardinality $N$, where $N$ is the smallest integer satisfying
$$N(N-1)\geq \frac{2n-4}{k}.$$
\end{theorem}
\begin{proof}
Fix $k$; we start by constructing the set of curves in the case in which
$$n=\frac{k N(N-1)+4}{2}$$
for some integer $N$.
Consider the rectangle $[0,k\pi]\times [-1,1]\subseteq \mathbb{R}^2$ and the graphs of the functions $f_s(x)=\sin(x+s\varepsilon)$ for $s\in\{0,1,\dots ,N-1\}$ and $\varepsilon$ small. Note that we can choose $\varepsilon$ small enough so that any two of the above graphs intersect exactly $k$ times and there are no triple intersections (as in Figure \ref{fig:sine}).
\begin{figure}[H]
\includegraphics{cylinder.pdf}
\caption{The graphs of $f_0, f_1,f_2$ and $f_3$ on the rectangle $[0,12\pi]\times[-1,1]$}\label{fig:sine}
\end{figure}
Consider the cylinder obtained by identifying $(0,t)$ with $(k\pi,t)$ for any $t\in[-1,1]$. On this cylinder, the graphs project to $N$ curves, all pairwise intersecting exactly $k$ times, with no three curves intersecting in the same point. We glue disks to the two boundary components of the cylinder to obtain a sphere. As in the proof of the lower bound, we consider the graph $G$ induced by the curves on the sphere. Again it is $4$-valent, so $e(G)=2v(G)$. Since all curves pairwise intersect exactly $k$-times and no three curves have a common intersection, we have
$$v(G)=k {N\choose 2}=k\frac{N(N-1)}{2}=n-2.$$
Since $2=\chi(\mathbb{S}^2)=v(G)-e(G)+f(G)$, the number of connected components of the complement of $G$ is
$$f(G)=2+v(G)=n.$$
So we add a puncture to each connected component. This gives an $n$-punctured sphere with a $k$-filling set of the desired size.
Now consider $n$ not of the form $\frac{k N(N-1)+4}{2}$ for any $N$. Then there exists an integer $N$ such that
\begin{equation}\label{eqn:boundN}
\frac{k (N-1)(N-2)+4}{2}<n<\frac{k N(N-1)+4}{2}.
\end{equation}
We construct a sphere with $N$ curves pairwise intersecting $k$ times as in the previous case. The difference is that this time we have fewer punctures than connected components. To be sure that no two curves are homotopic, it is enough to place a single puncture to separate the first curve from all of the other curves, then a puncture between the second and the subsequent curves and so on. Hence it is enough to have $n\geq N-1$ punctures, and this inequality holds by the lower bound in \ref{eqn:boundN}. So again we obtain a filling set of curves of the right size.
\end{proof}
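The intersection pattern of the shifted sine graphs can be checked numerically: since $f_s(x)-f_t(x)=2\cos\big(x+\tfrac{(s+t)\varepsilon}{2}\big)\sin\big(\tfrac{(s-t)\varepsilon}{2}\big)$, the difference vanishes exactly $k$ times on $[0,k\pi)$. The following sketch (sampling-based, so a heuristic check rather than a proof) counts sign changes around the cylinder.
\begin{verbatim}
# Count pairwise intersections of f_s(x) = sin(x + s*eps) on the
# cylinder obtained from [0, k*pi) by gluing the two ends (k even).
# Transversal zeros of the difference correspond to cyclic sign changes.
import numpy as np

def pairwise_intersections(k, N, eps=1e-3, samples=400000):
    x = np.linspace(0.0, k * np.pi, samples, endpoint=False)
    counts = {}
    for s in range(N):
        for t in range(s + 1, N):
            signs = np.sign(np.sin(x + s * eps) - np.sin(x + t * eps))
            counts[(s, t)] = int(np.sum(signs != np.roll(signs, 1)))
    return counts

assert all(c == 4 for c in pairwise_intersections(k=4, N=4).values())
print("every pair of graphs meets exactly k = 4 times")
\end{verbatim}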
With the same techniques we can prove a similar statement for tori.
\begin{theorem}
Let $T$ be a torus with $n\geq 1$ punctures and $k$ be even. There exists a $k$-filling set of curves on $T$ of cardinality $N$, where $N$ is the smallest integer satisfying
$$N(N-1)\geq \frac{2n}{k}.$$
\end{theorem}
\begin{proof}
The proof is essentially the same as for spheres. The only difference is that instead of gluing two disks to turn the cylinder with the curves into a sphere, we glue its two boundary components to get a torus.
\end{proof}
To prove the result in the case of surfaces of genus at least two we combine the idea of the construction in the cases of spheres and tori and a known result about $k$-filling sets on closed surfaces from \cite{app}.
\begin{theorem}\label{thm:genus}
Let $S$ be a surface of signature $(g,n)$, with $g\geq 2$ and $n\geq 1$. For any even $k\geq 2$, there is a $k$-filling set on $S$ of size $N$ satisfying
$$\frac{5}{2}+\sqrt{\frac{1}{4}+\frac{2|\chi(S)|}{k}}\leq N<6+\sqrt{\frac{1}{4}+\frac{2|\chi(S)|}{k}}.$$
\end{theorem}
\begin{proof} We construct a $k$-filling set of the desired size.\\
Consider the closed surface $S^0$ obtained by filling in the punctures. By a result in \cite{app}, we know that there exists a $k$-filling set $\mathcal{C}^0$ of cardinality $x$ or $x+1$, where $x$ is the smallest integer satisfying
\begin{equation}\label{eqn:boundx}
x(x-1)\geq \frac{4g-2}{k}.
\end{equation}
In the construction of \cite{app}, most curves pairwise intersect exactly $k$ times. In fact, the curves are constructed algorithmically and this property is true for all but (possibly) the final two curves in the construction. Pick a curve $\gamma$ from the construction that is not one of the final two and replace it with a thin cylinder with a set of $y+1$ curves (as in the construction of Theorem \ref{thm:spheres}). We obtain a set of curves $\mathcal{C}$ of cardinality $x+y$ or $x+y+1$. Choose $y$ to be the smallest integer such that $S^0\setminus \mathcal{C}$ has at least $n$ connected components. Since at least $x+y-2$ curves of $\mathcal{C}$ pairwise intersect exactly $k$ times, the number of components of $S^0\setminus \mathcal{C}$ is at least
$$2-2g+k{x+y-2\choose 2}.$$
So we choose $y$ to be the smallest integer such that
\begin{equation}\label{eqn:boundy}
2-2g+k{x+y-2\choose 2}\geq n.
\end{equation}
Using the two inequalities \ref{eqn:boundx} and \ref{eqn:boundy}, one can compute that $x+y$ satisfies
$$\frac{5}{2}+\sqrt{\frac{1}{4}+\frac{2|\chi(S)|}{k}}\leq x+y<5+\sqrt{\frac{1}{4}+\frac{2|\chi(S)|}{k}}.$$
Set $N=|\mathcal{C}|$; we know that $N=x+y$ or $x+y+1$. Moreover, the complement of $\mathcal{C}$ is a union of disks. We want to place at most one puncture per connected component. By construction, we have enough components. Also, all curves are pairwise non-homotopic, except possibly for the $y+1$ curves in the thin cylinder. To be sure these are pairwise non-homotopic, it is enough to have at least $y$ punctures, i.e.\ it is enough to have $n\geq y$. This is true if $y=0$ or $y=1$ (by assumption). If we have only two punctures, it is enough to have two curves in the cylinder, so $y\leq 1$. This means that if $y=2$ or $y=3$, we must have had $n\geq 3$. So if $y\leq 3$, the condition $n\geq y$ is satisfied.
Assume now $y\geq 4$. By the minimality of $y$, we know that
$$2-2g+k{x+y-3\choose 2}< n$$
which implies, using inequality \ref{eqn:boundx} and basic computations,
$$n\geq 2+\frac{k}{2}(y-3)^2+\frac{k}{2}(y-3).$$
So it is enough to have
$$2+\frac{k}{2}(y-3)^2+\frac{k}{2}(y-3)\geq y,$$
which holds under our assumption $y\geq 4$.
Thus we can add punctures in chosen connected components and we obtain a filling set of size $N$.
\end{proof}
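The bookkeeping for $x$ and $y$ in the proof can be traced in a few lines. The sketch below (hypothetical helper names, checking a single instance) picks the smallest $x$ satisfying \ref{eqn:boundx}, then the smallest $y$ satisfying \ref{eqn:boundy}, and verifies that $x+y$ falls in the claimed window.
\begin{verbatim}
# Bookkeeping for x and y in the proof, checked on one instance.
import math

def smallest_x(g, k):
    x = 2
    while x * (x - 1) < (4 * g - 2) / k:
        x += 1
    return x

def smallest_y(g, n, k, x):
    y = 0
    while 2 - 2 * g + k * math.comb(x + y - 2, 2) < n:
        y += 1
    return y

g, n, k = 3, 20, 2
x = smallest_x(g, k)
y = smallest_y(g, n, k, x)
chi = 2 * g - 2 + n                     # |chi(S)| for signature (g, n)
low = 2.5 + math.sqrt(0.25 + 2 * chi / k)
high = 5 + math.sqrt(0.25 + 2 * chi / k)
assert low <= x + y < high, (x, y, low, high)
print(x, y, x + y)
\end{verbatim}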
\section{The topological setup: case $k$ odd}\label{odd}
In this section we will always assume $k$ to be an odd positive integer.
A {\it $k$-system} on a surface $S$ is a set of curves which pairwise intersect at most $k$ times. We set $M_g(k)$ to be the maximum cardinality of a $k$-system on a closed surface of genus $g$.
\begin{theorem}
Let $S$ be a surface of signature $(g,n)$, with $g\geq 1$ and $n\geq 1$. Then a $k$-filling set has cardinality at least $N$, where $N$ is the smallest integer satisfying
$$\frac{k}{2}N(N-1)-\frac{N}{2}\left(\frac{N}{M_g(k)}-1\right)\geq|\chi(S)|.$$
\end{theorem}
\begin{proof}
Let $\Gamma=\{\gamma_1,\dots ,\gamma_N\}$ be a $k$-filling set on $S$; up to isotopy we can assume that there are no triple intersection points and all intersections are transverse. As in the proof of the lower bound in Theorem \ref{thm:torus}, we consider the associated surface $S^0$ obtained by forgetting the punctures and the forgetful map $\pi:S\rightarrow S^0$. Let $\delta_1,\dots,\delta_M$ be the isotopy classes in $\pi(\Gamma)$ and consider the families $\mathcal{F}_i=\pi^{-1}(\delta_i)$. Note that if two curves in $\Gamma$ belong to the same family $\mathcal{F}_i$, they are isotopic on $S^0$, so they can only have an even number of intersections. Since $k$ is odd, this means that they intersect at most $k-1$ times. Let $a_i$ be the cardinality of $\mathcal{F}_i$.
As in the proof of Theorem \ref{thm:lower_even}, we consider the graph $G$ induced by $\Gamma$ on $S^0$. Again it is $4$-valent, thus $e(G)=2v(G)$ and $f(G)=\chi(S^0)+v(G)\geq n$. We have
$$
v(G)=\underbrace{\sum_{i=1}^M\left(\sum_{\alpha,\beta\in \mathcal{F}_i}|\alpha\cap\beta|\right)}_{\stackrel{\mbox{\small intersections between curves}}{\mbox{\small in the same family}}}+\underbrace{\sum_{i<j}\left(\sum_{\alpha\in\mathcal{F}_i,\beta\in \mathcal{F}_j}|\alpha\cap\beta|\right)}_{\stackrel{\mbox{\small intersections between curves}}{\mbox{\small in different families}}}
$$
By what we said before, the intersections $|\alpha\cap\beta|$ in the first sum are bounded by $k-1$ and the ones in the second sum simply by $k$. So
$$
v(G)\leq\sum_{i=1}^M\left(\sum_{\alpha,\beta\in \mathcal{F}_i}(k-1)\right)+\sum_{i<j}\left(\sum_{\alpha\in\mathcal{F}_i,\beta\in \mathcal{F}_j}k\right)
=(k-1)\sum_{i=1}^M{a_i\choose 2}+k\sum_{i<j}a_ia_j$$
By Lagrange multipliers, $(k-1)\sum_{i=1}^M{a_i\choose 2}+k\sum_{i<j}a_ia_j$ is maximized for $a_1=\dots=a_M=\frac{N}{M}$. Using this and the fact that $M\leq M_g(k)$ we get
$$v(G)\leq \frac{k}{2}N(N-1)-\frac{N}{2}\left(\frac{N}{M_g(k)}-1\right).$$
Combining this estimate with $\chi(S^0)+v(G)\geq n$ we obtain our claim.
\end{proof}
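In practice, the smallest admissible $N$ can be computed directly from the inequality; the value of $M_g(k)$ has to be supplied, e.g.\ from known bounds on $k$-systems. A minimal sketch:
\begin{verbatim}
# Smallest N with (k/2) N(N-1) - (N/2)(N/M_g(k) - 1) >= |chi(S)|.
def min_N_odd(chi_abs, k, Mgk):
    N = 1
    while k / 2 * N * (N - 1) - N / 2 * (N / Mgk - 1) < chi_abs:
        N += 1
    return N

# Example with |chi(S)| = 24, k = 3 and a hypothetical value M_g(3) = 12.
print(min_N_odd(24, 3, 12))
\end{verbatim}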
\section{Systoles of punctured surfaces}\label{systoles}
In this section we prove bounds on how the minimum number of filling systoles grows as a function of the number of punctures of a (finite area complete) hyperbolic surface. Our bounds will show that, as in the case of closed surfaces (see \cite{app}), the topological condition of intersecting at most once or twice is very far from the geometric condition of being systoles.
\begin{theorem}\label{thm:systoles}
Let $S$ be a hyperbolic surface of signature $(g,n)$ and systole length $\ell$. If $\gamma_1,\dots,\gamma_M$ is a filling set of systoles, then
$$M\geq \frac{2\pi(2g-1)+\pi(n-2)}{4\ell}.$$
\end{theorem}
\begin{rmk} Theorem \ref{thm:systoles}, together with the systole bounds in \cite{schmutz} and \cite{fp}, implies that if $g$ is fixed and $n$ goes to infinity, then $M\geq An$, for some constant $A$. If $n$ is fixed and $g$ goes to infinity, $M\geq B\frac{g}{\log g}$, for some constant $B$.
\end{rmk}
\begin{proof}
The main idea is to use the isoperimetric inequality of the hyperbolic plane.
Consider a hyperbolic surface $S$ with $n$ punctures and its set of filling systoles $\gamma_1,\hdots,\gamma_M$ of length $\ell$. We begin by considering the unique hyperbolic metric $S'$ with a cone angle of $\pi$ at every puncture, conformally equivalent to $S$ outside of the cone points. This surface is uniquely determined by the conformal structure of $S$ (see Troyanov \cite{troyanov}) and, via the Schwarz-Pick inequality (see Troyanov \cite{troyanov2}), enjoys a certain number of properties. All closed curves on $S'$ are of length strictly less than the corresponding curves of $S$ and in particular
\begin{equation}\label{eqn:S'}
\ell_{S'}(\gamma_k) < \ell_{S}(\gamma_k)=\ell
\end{equation}
for all $k=1,\hdots,M$.
Because the curves $\gamma_k$, $k=1,\hdots,M$ fill $S$, they also fill $S'$. Cutting along the curves then produces a collection of polygons, each with at most one cone point in its interior. We want to apply the isoperimetric inequality of the hyperbolic plane to this set, but the cone points are an obstruction.
To get rid of this obstruction we perform the following covering operation on those polygons with a cone point of angle $\pi$: such a polygon is the quotient of a centrally symmetric polygon by an involution, so we replace it by its double cover, which is a genuine hyperbolic polygon. We now have a full collection of hyperbolic polygons $P_1,\hdots,P_p$.
By the isoperimetric inequality the boundary lengths of the polygons satisfy
$$
\sum_{k=1}^{p} \ell(\partial P_k) > \sqrt{\area(S')^2+4\pi\area(S')}>\area(S')=2\pi(2g-1)+\pi(n-2).
$$
We'll now look at how the sum above relates to the sum of the $\ell_S(\gamma_k)$s. Using inequality \ref{eqn:S'}, the fact that each $\gamma_k$ contributes exactly twice to the total boundary length, and finally the fact that the length of a $\partial P_j$ may have been doubled, we have:
$$
\sum_{j=1}^{p} \ell(\partial P_j) \leq 4 \sum_{k=1}^{M} \ell_S (\gamma_k)=4M\ell.
$$
Putting the two inequalities above together gives the result.
\end{proof}
Actually, the growth of the lower bound in Theorem \ref{thm:systoles} is roughly correct. Indeed, we can construct families of surfaces with a filling set of systoles growing linearly in $g+n$.
The first example is the family of surfaces $\{S_{g,n(g)}\}_{g\geq 2}$ constructed in Lemma $3.5$ of \cite{fp}. For every $g\geq 2$, $S_{g,n(g)}$ has genus $g$, $n(g)=46(g-1)$ cusps and an ideal triangulation where all but one vertex have degree $6$ and the remaining vertex has degree $12g-6$. Systoles correspond to edges between two vertices of degree $6$, so one can show that there are $36g-54$ systoles. As these correspond to all edges of the triangulation, except the ones incident to a single vertex, they fill. Moreover, an explicit computation shows that the length of a systole is precisely $\arccosh 3$ (and thus independent of $g$). Note that Theorem \ref{thm:systoles} gives, for these surfaces, that a set of filling systoles on $S_{g,n(g)}$ must have at least
$$\frac{25\pi}{2\arccosh 3}(g-1)$$
curves, so the construction gives surfaces with fewer than twice the necessary number of curves.
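Explicitly, plugging $n(g)=46(g-1)$ and $\ell=\arccosh 3$ into Theorem \ref{thm:systoles} gives
$$
\frac{2\pi(2g-1)+\pi\big(46(g-1)-2\big)}{4\arccosh 3}=\frac{50\pi(g-1)}{4\arccosh 3}=\frac{25\pi}{2\arccosh 3}\,(g-1)\approx 22.3\,(g-1),
$$
to be compared with the $36g-54=36(g-1)-18$ filling systoles of the construction.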
The second example is a family of spheres with a filling set of systoles of cardinality equal to the number of punctures.
\begin{prop}
For any $n\geq 4$, there is an $n$-punctured sphere with a set of filling systoles of cardinality $n$.
\end{prop}
\begin{proof}
Consider an ideal maximally symmetric $n$-gon in the hyperbolic plane. In the Poincar\'e disk model, we can think of it as the $n$-gon with ideal vertices $v_k=e^{i\frac{2\pi k}{n}}$, for $k$ from $0$ to $n-1$. Take two copies of it and glue them such that the endpoints of orthogonals between two non-consecutive sides are identified. In particular this means that the orthogonals give simple closed geodesics on the sphere. We will show that these are the only systoles. Since there are $n$ of these curves and they fill the surface, this concludes the proof.
Consider the center of the polygon (in the Poincar\'e disk model this is the origin). To compute its distance $d$ to any side, consider the right-angled triangle given by the orthogonal from the center to a side, the geodesic from the center to one of the two vertices of the side and the part of the side from the vertex to the foot of the orthogonal. By hyperbolic trigonometry, the distance $d$ satisfies
$$\cosh d=\frac{1}{\sin\frac{\pi}{n}}.$$
\begin{figure}[H]
\begin{center}
\begin{overpic}[scale=1]{systoles.pdf}
\put (20,10) {$d$}
\put (17,24) {$\frac{\pi}{n}$}
\put (69,13) {$d$}
\put (84,12) {$d_k$}
\put (73,26) {$\frac{\pi k}{n}$}
\end{overpic}
\end{center}
\caption{Computing $d$ and $d_k$}
\end{figure}
Consider now two non-consecutive sides $a$ and $b$. Suppose $k-1$ is the minimum number of sides between them. Then the smallest angle between the two orthogonals from the center to $a$ and $b$ is $\frac{2\pi k}{n}$. These two orthogonals and the common orthogonal $d_k$ between $a$ and $b$ determine a pentagon with four right angles and a $\frac{2\pi k}{n}$ angle. By taking the orthogonal from the center to $d_k$, we cut the pentagon into two quadrilaterals with three right angles and a $\frac{\pi k}{n}$ angle. By hyperbolic trigonometry, we find that
$$\cosh\frac{d_k}{2}=\cosh d\sin\frac{\pi k}{n}.$$
In particular, if $k>k'$ then $d_k>d_{k'}$, and two non-consecutive sides which are adjacent to the same side are closer to each other than any other two.
Consider now any simple closed geodesic on the surface. It cannot be contained in one of the two polygons, otherwise it would be contractible. So it needs to cross two sides. It cannot cross only two consecutive sides, otherwise it would be homotopic to a puncture. Hence it needs to cross two non-consecutive sides, so it contains at least two arcs of length at least $d_2$. Moreover, it is of length exactly $2d_2$ only if it is given by exactly two orthogonals between sides adjacent to the same side.
As such the curves we are considering are the only systoles.
\end{proof}
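The monotonicity claim for the $d_k$ is also easy to confirm numerically from the two trigonometric identities above; a small sketch:
\begin{verbatim}
# Distances in the ideal symmetric n-gon: d from the center to a side,
# d_k between two sides separated by k - 1 sides; check monotonicity in k.
import math

def d(n):
    return math.acosh(1.0 / math.sin(math.pi / n))

def d_k(n, k):
    return 2.0 * math.acosh(math.cosh(d(n)) * math.sin(math.pi * k / n))

n = 8
vals = [d_k(n, k) for k in range(2, n // 2 + 1)]
assert all(u < v for u, v in zip(vals, vals[1:]))
print([round(v, 4) for v in vals])
\end{verbatim}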
\bibliographystyle{alpha}
\chapter*{Dedication}
\addcontentsline{toc}{chapter}{Dedication}
\input{./chapters/Dedication}
\chapter*{Acknowledgement}
\addcontentsline{toc}{chapter}{Acknowledgement}
\input{./chapters/Acknowledgement}
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract}
\input{./chapters/Abstract}
\chapter*{List of Publications}
\addcontentsline{toc}{chapter}{List of Publications}
\input{./chapters/publications.tex}
\tableofcontents
{\countdef\interlinepenalty1000
\listoffigures
}
{\countdef\interlinepenalty1000
\listoftables
}
\newpage
\pagenumbering{arabic}
\chapter{Introduction}
\include{./chapters/chapter1}
\chapter{Multiwavelength searches for dark matter}
\include{./chapters/chapter2}
\chapter{Fermi Large Area Telescope (Fermi-LAT) Gamma-Ray Observatory}
\include{./chapters/chapter3}
\chapter{Likelihood Analysis of LAT Data}
\include{./chapters/chapter4}
\chapter{Constraints on dark matter models from the Fermi-LAT observation of Triangulum-II}
\include{./chapters/chapter5}
\chapter{Analysis of Fermi-LAT data from Tucana-II: an intriguing hint of a signal}
\include{./chapters/chapter6}
\chapter{Multiwavelength analysis of low surface brightness galaxies}
\include{./chapters/chapter7}
\chapter{Synchrotron and gamma-ray radiation from few ultra faint dwarf galaxies}
\include{./chapters/chapter10}
\chapter{Discussion and Concluding Remarks}
\include{./chapters/chapter12}
\chapter{Appendix}
\include{./chapters/chapter9}
\bibliographystyle{unsrt}
\section{Instrumental Requirements}
\noindent The possible mass of the DM candidates varies from tens of GeV to a few hundred TeV
depending on the theoretical models \cite{Bertone:2010zza}. Hence, the gamma-ray
detector should have a number of capabilities. The gamma-ray detector for DM searches
should have good sensitivity over a wide energy range, together with good angular and energy resolution.
With good angular resolution, it would be possible to detect a faint gamma-ray
emission originating from WIMP annihilation. With good energy
resolution, we can distinguish an annihilation spectrum from
astrophysical backgrounds. Moreover,
the instrument should have a large field-of-view (FOV) because that would help
it to observe a vast region of sky at once. Lastly, the instrument
should have a good timing resolution and a high observing cadence, so that it can identify variable sources such as pulsars (high frequency) or
active galactic nuclei (low frequency).
\noindent In section~1.6.2, we have briefly discussed the
detection methods of various space-based telescopes which are especially dedicated to
searching for the indirect signature of WIMP annihilation/decay. The
telescopes are designed to meet most of the necessary features that we have
discussed.
In this thesis, to investigate the DM signature in gamma rays, we use the data recorded by the Fermi Large Area Telescope (LAT).
In the following sections, we will discuss its working principle in detail.
\section{The Large Area Telescope}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{figures/fermi7.jpg}
\caption{A schematic diagram of the Large Area Telescope (LAT).}
\end{center}
\end{figure}
\noindent Fermi-LAT is designed to survey the gamma-ray sky over the entire
celestial sphere with considerably better sensitivity than earlier gamma-ray
missions. The Fermi-LAT team has made significant improvements in the angular
resolution, effective area, FOV, energy resolution and time resolution of the
detector. These advanced features allow Fermi-LAT to address several unresolved
issues in high-energy gamma-ray astrophysics.
\noindent The LAT scans the whole sky every $\approx$ 192 minutes from a
low-Earth orbit at 565 km altitude, 25.6-degree inclination and eccentricity
$<$0.01 \cite{Atwood:2009ez}. It was launched on June 11, 2008, by a Delta II
Heavy launch vehicle from Cape Canaveral.
\noindent The principal objective of the Fermi-LAT is to conduct long-term,
high-sensitivity observations of celestial sources over a wide energy band,
i.e.\ from $\approx$ 20 MeV to $>$ 500 GeV. It has a large effective area
combined with good energy, angular and time resolution, and its low dead time
makes it well suited to study transient phenomena. Some key properties of
Fermi-LAT are described in Table~3.1
\footnote{\tiny{
https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone{\_
}Introduction/LAT{\_}overview.html}}.
\begin{center}
\begin{table}
\caption{Properties of Fermi-LAT.} \label{Table-1}
\begin{tabular}{|p{4cm}|p{11cm}|}
\hline
\hline
Parameter & Value or Range \\
\hline
Energy Range & $\approx$ 20 MeV to $>$ 500 GeV \\
\hline
Energy Resolution & $<$ 15 $\%$ at energies $>$ 100 MeV \\
\hline
Effective Area & $>$ 8,000 $cm^{2}$ maximum effective area at normal incidence
\\
\hline
Single Photon Angular Resolution & $<~0.15^{\circ}$, on-axis, 68$\%$ space
angle
containment radius for E $>$ 10 GeV; $<~3.5^{\circ}$, on-axis, 68$\%$ space
angle containment radius for E = 100 MeV \\
\hline
Field of View & 2.4 sr \\
\hline
Source Location Determination & $<$ 0.5 arcmin for high-latitude source\\
\hline
Point Source Sensitivity & $<~6 \times 10^{-9}~ph~cm^{-2}~s^{-1}$ for E $>$ 100
MeV, 5$\sigma$ detection after 1 year sky survey\\
\hline
Time Accuracy & $<$ 10 microseconds, relative to spacecraft time\\
\hline
Background Rejection (after analysis) & $<~10\%$ residual contamination of a
high latitude diffuse sample for E = 100 MeV - 500 GeV.\\
\hline
Dead Time & $<$ 100 microseconds per event\\
\hline
\hline
\end{tabular}
\end{table}
\end{center}
\noindent These features have allowed the LAT to explore new physics associated with $\gamma$-ray emission.
\subsection{Observational Constraint}
\noindent The LAT has a very large FOV and can change its direction of
observation with ease, but its detectors have observational constraints that
need to be handled carefully. Fermi-LAT should avoid pointing at or near the
Earth, because that would contaminate the data with a large number of albedo
gamma rays produced by cosmic-ray interactions in the atmosphere. At low
energy, Fermi-LAT may nevertheless sometimes observe the Earth's limb to detect
the albedo gamma rays for instrument calibration. Applying a zenith angle cut
eliminates the photons coming from the Earth's limb.
Another strict precaution is that Fermi-LAT should not record any events while
it transits the South Atlantic Anomaly (SAA), a region with a high
concentration of charged particles trapped by the Earth's magnetic field.
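In practice, both of these selections are applied during data reduction. As an illustration, the following is a minimal sketch using the \texttt{gt\_apps} Python interface to the Fermi ScienceTools, assuming its standard parameter names; the file names and cut values are placeholders rather than a complete, validated selection.
\begin{verbatim}
# Sketch: zenith-angle cut against Earth-limb photons (gtselect) and a
# good-time-interval filter that excludes SAA passages (gtmktime).
# File names are placeholders; parameter values are illustrative only.
from gt_apps import filter, maketime

filter['infile'] = 'events.fits'        # placeholder event file
filter['outfile'] = 'events_zcut.fits'
filter['zmax'] = 90                     # zenith-angle cut (degrees)
filter['emin'] = 100                    # MeV
filter['emax'] = 500000                 # MeV
filter.run()

maketime['scfile'] = 'spacecraft.fits'  # placeholder spacecraft file
maketime['evfile'] = 'events_zcut.fits'
maketime['outfile'] = 'events_gti.fits'
maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'no'
maketime.run()
\end{verbatim}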
\subsection{Detection Methodology}
\noindent The Fermi-LAT is a pair conversion detector. During the observation,
the incident gamma rays penetrate the detector and then interact with a
high-Z converter material. For Fermi-LAT, tungsten foil is used to convert
the gamma rays into electron-positron pairs. These then pass through the
silicon strip detectors that track the position of the electron-positron pair. As the energy
of
gamma-ray is much larger than the rest mass of the electron and positron, the
daughter products (i.e.\ the charged pair) also predominantly follow the
direction of the incoming gamma-ray. The accuracy of the reconstructed
direction of the gamma rays is limited by multiple scattering of the
electron-positron pair in the tracker and also by the spatial resolution of the
tracker material.
At the bottom of the LAT, the charged particles are deposited into a
calorimeter
made of CsI. The calorimeter is thick enough to measure the energy of the
pairs in the LAT energy band.
\noindent The charged particles deposit their energy in different parts of the
tracker and the
calorimeter and then the instrument produces the pulse-height signal as the
output. In
order to reconstruct the trajectory of the charged particles and their amount
of energy losses, one needs to combine
the pulse heights with the x-y coordinates from each silicon strip detector
where the charged
particles hit.
Both
on-board and ground analysis reconstruct the tracks of the charged particles
from their output pulsed data. The Data Acquisition System (DAQ) characterizes
the interaction that produced the charged particles and also tries to
distinguish the photons from the background events. Meanwhile, this process
also
determines the direction of the incident photon and its estimated energy.
\subsection{The LAT Instrument}
\noindent The Fermi-LAT consists of three primary instruments: i) a
segmented
anti-coincidence detector (ACD), ii) 16 precision tracker/converter modules, and iii)
16 imaging calorimeter modules (Figure 3.2). The tracker and the calorimeter form the
central structure of Fermi-LAT, while the ACD surrounds the tracker
and the calorimeter. The ACD is again covered by a micrometeorite shield and thermal
blanket. Trackers and calorimeters act all together to calculate the direction
of incident particles and their respective energy. The main principle of the
ACD
is to identify the incoming charged particles and to distinguish them from
gamma-rays. The LAT consists of 4$\times$4 arrays of 16 tracker/calorimeter
modules. The instrument has nearly $10^{6}$ electronic channels operated on
a power budget of $\approx$ 650 W \cite{Atwood:2009ez}. The working principle of the LAT instrument is depicted in Figure 3.3.
\begin{figure}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/fermi5.jpg}}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/pc_telescope.pdf}}
\caption{A schematic diagram of the LAT instrument. The dimensions of
the LAT are $1.8m~\times~1.8m~\times~0.72m$. A cutaway image of the LAT module
shows its
tracker and calorimeter components, while the anticoincidence detector covers
the tracker and the upper third of the calorimeter. The image is adapted from
Atwood \textit{et al.}, 2009.}
\end{figure}
\subsubsection{Anti-coincidence Detector (ACD)}
\noindent The cosmic charged particles passing through the LAT can
generally outnumber the gamma rays by factors of $10^{2}$-$10^{5}$. Those charged particles can be recorded by
the LAT and, as a result, the background counts would increase. In order to
eliminate such background events resulting from charged particles, the LAT
instrument is surrounded by an ACD. The
ACD consists of 89 plastic scintillator tiles which are used to identify
background events and to
issue a veto signal. ACD detects the veto signal through wave-length shifting
fibers by
two photomultiplier tubes (PMT). In order to detect the charged particles, for
maximum
ACD efficiency, the tiles are overlapped in one direction and gaps in the other
direction
are filled by scintillating fiber ribbons.
\noindent ACD covers the entire internal system of the LAT instrument, thus one
of the responsibilities of the ACD is to identify the charged particles with an
efficiency of 0.9997 \cite{Moiseev:2007hk}, while ACD also simultaneously needs
to avoid the ``self-vetoes'' resulting from the backsplash effect. To examine
the actual energy of the source, the effect of backsplash must be considered
seriously. Secondary charged particles generated by an incident high-energy
photon in the calorimeter (potentially a valid event) can travel back up
through the tracker and cross the ACD. These particles can Compton scatter and
thereby create signals from the recoiled electrons. This effect is called the
backsplash effect, and because of it valid gamma rays would be vetoed by the
ACD. Hence, for reducing
the effect of backsplash, the LAT team has designed the segmented structure of
ACD. With the segmented structure, ACD would now only veto those events which
would trigger an ACD tile in the projected path of the incident photon. The
segmentation helps to achieve a uniform threshold and also significantly
increases the sensitivity of Fermi-LAT, especially for high-energy gamma rays.
\noindent There are two types of the output signals generated by the ACD
photomultiplier: (1) the fast veto pulses that are accessed by on-board LAT
trigger electronics and (2) the slower pulse-shaped signals that are used for
charged particle rejection method on the ground. For protecting the ACD from
the
space environment, it is covered by a micrometeorite shield and a thermal
blanket.
\subsubsection{The Tracker (TKR)}
\noindent The principal role of the LAT tracker/converter (TKR) is to convert
the
incident $\gamma$ rays into electron-positron pairs and then accurately track
the resulting particles \cite{Atwood:2007ra}. The TKR consists of 18 XY
detector planes. Each tracker consists of two orthogonal x-y layers that have
an
array of silicon strip detectors (SSDs) for tracking the charged particles. TKR maintains a perfect balance between the thin
converter for preserving the angular resolution at low energy and the thick
converter for maximizing $\gamma$-ray conversion efficiency at high energy. For
this purpose, the TKR is segmented into `FRONT' and `BACK' section. The FRONT
section consists of 12 planes covering the thin tungsten foil converter of
0.035
radiation lengths, while the BACK section consists of 4 planes covering the
thick tungsten foil converter of 0.18 radiation lengths. For preserving the
triggering efficiency for $\gamma$ rays that convert in the final thick
converter, the last two planes, placed immediately in front of the calorimeter,
do not have any converter. In order to localize the track of
charged
particles, each plane of SSDs has two planes of silicon strips, one is along
the
x-direction and the other along the y-direction. In one of the TKR's converting
tungsten plates, the incoming gamma rays are converting into a pair of electron
and positron.
\noindent After the conversion point, SSD planes record the
directions of the incoming electron and positron pair. But the multiple scattering of the
charged particles in the conversion plane would
affect the angular resolution of the system, especially in the low energy range.
Apart from the electron-positron pair, the cosmic rays also interact inside
the TKR modules. Thus TKR needs to accurately identify the nature of each
passing particle
and their reconstructed energy. The advantage of using the thick converters is
that it can also
partially shield the FRONT portion of the TKR from the effect of low-energy
calorimeter backsplash.
The on-axis depth for the TKR module is around 1.5 radiation lengths and that
increases the probability of the $\gamma$-ray conversion by $\approx 63\%$
\cite{Atwood:2007ra}.
\subsubsection{The Calorimeter (CAL)}
\noindent The basic function of the Fermi-LAT calorimeter (CAL) is to estimate
the energy
deposited by the electron-positron pair \cite{Atwood:2009ez}. Each CAL
module contains 96 CsI crystals which are arranged in eight alternating
orthogonal layers where the total number of crystals is 1536. The output of the
crystals is recorded on each end by both large and small photodiodes. This
structure and the segmentation of CAL provide a large dynamic energy range for
each crystal (2 MeV to 70 GeV) and a precise derivation of the
three-dimensional
position of particle shower. The on-axis depth of the CAL is about 8.6
radiation
lengths, and for a significant fraction of gamma rays with energy $\gtrsim$ 100
GeV, part of the shower falls outside the active region of the CAL. But it is
very interesting to note that the imaging efficiency of the CsI crystals
provides a precise estimation of the shape of the electromagnetic shower and
their energy \cite{Bruel:2012bt}.
\subsection{The LAT's Data Acquisition System (DAQ)}
\noindent The Data Acquisition System (DAQ) has a crucial role in interpreting the signal detected by the LAT. In order to limit the number of background events transmitted to the ground, the DAQ conducts onboard filtering of the observed
data. This system converts the detected events into a data stream with a speed
of around 1.2 Mbps. Apart from that, the DAQ also executes the controlling of
the system and instrument monitoring such as housekeeping and power switching.
Sometimes, for improving the performance of the processing, the working
onboard system is modified by uploading new software.
\noindent Amongst all the penetrated particles through the LAT trackers, the
astrophysical
photons only share a very tiny portion. The LAT on-board analysis system
decreases the raw LAT trigger rate (i.e. from 10 kHz to $\approx$ 400 Hz) and
then sends the signal to the ground for further analysis. From those
$\approx$400 Hz counts, only a very small portion (i.e. between $\approx$ 2-5
Hz) are
astrophysical photons. When the reprocessed data for an event pass through the
on-board analysis, all the conservative cuts, the time stamp and the
information on the signals obtained from various LAT components are saved in a
packet.
\noindent As the number of signals recorded for an event varies, the data
packets have different lengths. The data packets are the primary version of
the data product. LAT further transfers these data packets to the Solid State
Recorder (SSR) of
spacecraft
\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/p7rep/analysis/documentation/Cicerone/Cicerone{\_}Introduction/LAT{\_}overview.html}}.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{figures/drlica3.png}
\caption{Event display of a simulated 27 GeV $\gamma$-ray interacting with the
LAT instrument. Clusters of hit TKR strips are represented by black crosses,
while the location and magnitude of energy depositions in the CAL crystals are
represented by variable-size blue squares. Hit ACD tiles are represented by
coloured boxes, with a colour corresponding to the amount of energy deposited.
The
dotted line represents the true
$\gamma$-ray direction, the dashed lines represent reconstructed TKR tracks,
and
the solid line represents the CAL axis. Figure from Ackermann et al., 2012.}
\end{center}
\end{figure}
\subsection{LAT Instrument Performance}
\noindent The instrument response functions (IRFs) of a detector provide the
mapping between the incoming photon flux and the detected events, where the
detected events depend on the LAT hardware and the analysis process. The
analysis process determines the event parameters from the observables and then
assigns the probability of the event being a photon. In Fermi-LAT, the IRF is represented by a set of parameters such
as instrument coordinates, observed event energy ($E^{\prime}$), and incident
direction ($\widehat{v}^{\prime}$) as a function of true event energy (E), and
incident direction ($\widehat{v}$).
\noindent The LAT response function is derived by a dedicated GEANT4-based
Monte
Carlo simulation of $\gamma$ rays interacting with the LAT detector and Fermi
spacecraft \cite{Atwood:2009ez}. In order to cover all possible photon
inclination angles and energies of photons with good statistics, a large number
of gamma-ray events are being simulated. The Fermi-LAT team designed a separate
set of IRFs for each event class and event type selection and we need to select
the correct IRF at the time of performing analysis.
\noindent In LAT, the performance of the IRF is factorized into three terms: 1)
efficiency in terms of the detector's effective area (A(E, $\widehat{v}$)), 2)
resolution as given by the point-spread function (PSF, P($\widehat{v}^\prime|E,
\widehat{v}$)), and 3) energy dispersion (D($E^{\prime}|E, \widehat{v}$)).\\
\begin{itemize}
\item A(E, $\widehat{v}$) is the product of the geometric
collection area, $\gamma$-ray conversion probability, and the efficiency of a
given event selection.\\
\item P($\widehat{v}^\prime|E, \widehat{v}$) is the probability density to reconstruct an event direction $\widehat{v}^\prime$, for a given true
energy (E) and direction $\widehat{v}$.\\
\item D($E^{'}|E, \widehat{v}$) is the probability density to reconstruct an event energy $E^{'}$, for a given true
energy (E) and direction $\widehat{v}$.\\
\end{itemize}
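\noindent Schematically, and consistently with the definitions above, the three factors combine to map a source flux $F(E,\widehat{v})$ into the expected distribution of observed events:
$$
N(E^{\prime},\widehat{v}^{\prime})=\int dE \int d\widehat{v}\;
A(E,\widehat{v})\,P(\widehat{v}^{\prime}|E,\widehat{v})\,
D(E^{\prime}|E,\widehat{v})\,F(E,\widehat{v}),
$$
so that the effective area sets the normalization while the PSF and the energy dispersion smear the true direction and energy into the observed ones.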
\noindent The Fermi ScienceTools provide multiple IRFs and allow the user to
choose among them according to the type of analysis. The most recent version of
IRFs released by the LAT team is
``Pass 8''\footnote{\tiny{https://www.slac.stanford.edu/exp/glast/groups/canda/lat\_Performance.htm}}. In comparison to the earlier version, ``Pass 8'' improves the LAT analysis by using a completely new set of
event-level reconstruction algorithms, which effectively decreases pile-up effects. The performance of ``Pass 8'' is shown in Figure 3.4.
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.5\linewidth]{figures/effective_area1.png}}
\subfigure[]
{ \includegraphics[width=0.5\linewidth]{figures/containment_angle1.png}}
\caption{The performance of the Pass 8 at normal incidence as a function of incident photon energy is shown here.
(a) the effective area and (b) the points spread function.
The figure is adapted from Bruel et al., 2018.}
\end{figure}
\section{Introduction}
\label{sec:intro}
The presence of an abundant non-baryonic dark matter (DM) component in
our universe has not only been established through a variety of
independent observations\cite{Peter:2012rz}, but is also expected to
have played a key role in the evolution of structure at a multitude of
scales\cite{Primack:1997av}. While a particulate nature of the DM is
called for, such an interpretation necessitates physics beyond the
Standard Model (SM) not only in terms of the identity of the said
particle, but also in terms of the dynamics that determines its relic
density. Several different classes of scenarios have been proposed
with the WIMP (Weakly Interacting Massive Particles) paradigm being
one of the most theoretically appealing ones. This is so because not
only are the parameters (masses and couplings) naturally at the
electroweak scale, but the very interaction that decides the relic
density of the DM particle $\chi$ (largely through processes like
$\chi\chi \to {\cal SS}$, where ${\cal S}$ is a generic SM field)
would also render amenable their detection in terrestrial
experiments \cite{Lisanti:2016jxe,Goodman:Witten,Schumann:2019eaa}.
While much effort has gone into theoretical studies that seek to
reproduce a relic density commensurate with what the cosmological data
suggests, unfortunately no confirmatory signals have been seen in any
laboratory experiments. These include not only dedicated {\em Direct
Search} experiments (based on $\chi {\cal S} \to \chi {\cal
S}$)~\cite{Angle:2008we, Aprile:2016swn,
Aguilar-Arevalo:2016ndq,Amole:2017dex,Cui:2017nnn,Akerib:2017kat,Agnes:2018ves},
but also those at collider facilities (exploiting, typically ${\cal
S}_1{\cal S}_2 \to \chi\chi {\cal S}_3$) , whether it be the
LHC~\cite{Kahlhoefer:2017dnp}, or low-energy facilities such as
Belle~\cite{Borodatchenkova:2005ct, Seong:2018gut,Choudhury:2019sxt}
or Da$\Phi$Ne~\cite{Borodatchenkova:2005ct}. Such non-observation
tends to militate against the requirements for obtaining the correct
relic-density, calling into question the entire paradigm. It should be
realised, though, that event rates in such terrestrial experiments are
governed by many assumptions about the nature of the interaction ({\em
e.g.}, spin-dependence) or the identity of ${\cal S}$ for the dominant
process. For example, if ${\cal S}$ were a third generation fermion,
the signal rates would be quite suppressed, without unduly affecting
cosmological evolution. Similarly, if $\chi$ were very heavy, not only
would production at colliders be suppressed, but the consequent
reduction in its number density (so as to maintain the requisite
energy density) would also suppress the direct detection rates.
This, then brings us to the realm of indirect searches, nominally
carried out through satellite-based and ground-based
experiments~\cite{Buckley:2013bha,Gaskins:2016cha,Fermi-LAT:2016uux,Colafrancesco:2015ola,Acharyya:2020sbj}.
The DM particles in open space can annihilate into a pair of SM
particles and impinge on space-based detectors either directly, or at
the end of a cascade of decays. Free from the aforementioned
suppressions that often plague the other avenues, indirect searches
proffer one of the most promising ways to not only detect the DM, but
also to determine its
properties. Signatures~\cite{Fermi-LAT:2016uux,Ahnen:2016qkx,TheFermi-LAT:2017vmf}
include anomalous photon, positron or antiproton production that could
be observed over and above known astrophysical sources.
In this paper, we concentrate on photon signals, which serve to
provide some of the most robust and stringent constraints on DM
annihilation into a variety of final
states~\cite{Bringmann:2012ez,Cirelli:2015gux}. As can be easily
appreciated, the simplest and most unmistakable signal would be
monochromatic photon emission resulting from either a two body final
state (where the second particle need not be a photon) or from
internal
bremsstrahlung~\cite{Bergstrom:2004cy,Bergstrom:2004nr,Bergstrom:2005ss,
Bringmann:2007nk,Bringmann:2011ye}. Of course, since the DM is electrically
neutral\footnote{While millicharged DM can be accommodated in a
specific class of models, we shall not consider such.},
pair-annihilation into a two-body state with a photon can proceed only
as a loop-level process. A second possibility exists, though, in the
form of, say, $\chi \chi \to e^+ e^-$ with the positron subsequently
annihilating with a low-energy (and astrophysically produced)
electron to lead to a nearly monochromatic photon line. In either
case, the effective cross section is small.
Much larger cross sections are obtained for processes that lead to a
gamma-ray continuum (rather than a line
emission)~\cite{Colafrancesco:2005ji, Colafrancesco:2006he}. These
can, roughly, be divided into two classes, namely
\begin{itemize}
\item {\em prompt emissions} from dark matter annihilations
into any SM pair ${\cal S \bar S}$, with photon(s) being either
radiated off or emanating from the decay of a particle that has arisen
either directly from a primary decay product or as a result
of its hadronization (such as $\pi^0 \to \gamma\gamma$). The photon
energy could range from $E_{\rm max}\, (M_\chi)$ all the way down.
\item {\em secondary emissions} from the
(quasi)stable, charged particles
produced in the primary annihilation event.
Inverse Compton scattering of relativistic particles with the cosmic microwave
background radiation (CMBR) as well as starlight can give photons with energies in the X-ray band, whereas in the presence of astrophysical magnetic fields,
synchrotron radiation (typically in the radio band) results.
\end{itemize}
Various astrophysical systems can be used to look for those
aforementioned signal of DM. For example, the central region of
ordinary Galaxies (like our own Milky Way or M31) are considered
interesting arenas for WIMP searches. Signals from these galactic
central (GC) regions may also be used to understand the well known
{\it cusp/core} issue of the innermost part of the DM distribution
profile. However, photon signals of DM annihilation in the GC region
can be contaminated by galactic diffuse emissions as well as
electromagnetic radiations from various other nearby astrophysical
objects such as supernova remnants, pulsars, etc. Owing to such
unresolved backgrounds, GCs are not ideal for
effecting precision studies~\cite{Petrovic:2014xra, Gaggero:2017jts, Gaggero:2015nsa,
Cholis:2015dea, Carlson:2015ona}.
N-body simulations indicate the existence of a large number of
DM-dominated sub-halos around typical galaxies~\cite{Kuhlen:2009jv,
Drlica-Wagner:2013bhh}. Some of these sub-halos might be massive
enough to host a dwarf galaxy \cite{Kuhlen:2009jv}, and these appear
as dwarf spheroidal galaxies (dSphs). Constituting the largest
galactic substructures around the Milky Way, these dSphs are expected to
harbour the densest DM distributions in the galactic halo, with a
mass-to-light ratio lying in the range (100--1000) $M_{\odot}/L_{\odot}$, where $M_{\odot}$ and $L_{\odot}$ are the solar mass and
the solar luminosity respectively. Their overwhelming DM content,
minimal Galactic foreground emission, and lack of astrophysical
radiation \cite{Mateo:1998wg, Grcevich:2009gt} render the dSphs
promising targets for the indirect detection of DM.
We consider, here, electromagnetic radiation over a wide range, from gamma down to radio frequencies, emanating from
dSphs.
Surrounded by large-scale magnetic fields, the sheer size
thereof confines the $e^\pm $ produced by WIMP annihilation
long enough for these to radiate substantially. Since
the dSphs targeted by the \textit{Fermi} Large Area
Telescope (Fermi-LAT) gamma-ray searches have no
millisecond pulsars associated with them \cite{Winter:2016wmy}, the astrophysical gamma-ray background is essentially negligible.
Consequently, the non-observation of such high energy gamma-rays or
radio-emission
from nearby dSphs may be used to put strong constraints on the dark
matter annihilation/decay rates. A decade after the Sloan Digital Sky
Survey (SDSS) revealed a population of ``ultrafaint'' dwarf galaxies
(UFD) in the northern hemisphere (e.g., \cite{Willman:2004kk,
Zucker:2006he, Belokurov:2006ph}), a new generation of sky surveys has
begun charting the southern hemisphere. In
the past few years, nearly two dozen UFDs have been discovered using
data from Pan-STARRS (\cite{Laevens:2015kla, Laevens:2015una}), the
Dark Energy Survey (\cite{Bechtol:2015cbp, kim:2015abc, Kim:2015ila,
Koposov:2015cua}), and other surveys using the Dark Energy Camera at
Cerro Tololo (\cite{Kim:2015xoa, Kim:2016cgf, Kim:2015ghi,
Martin:2015xla}).
Their small halo mass and negligible baryonic
mass render UFDs extremely valuable laboratories
for exploring the nature of dark matter. The southern UFDs provide new
opportunities to address unanswered old questions about the nature and
origin of Galactic substructure, and, in particular, the galactic
luminosity function. While the latter continues to be revised, its
faint end sets boundary conditions for galaxy formation within a given
cosmological model (\cite{Koposov:2009ru}).
Due to their proximity, high dark-matter content, and the apparent
absence of non-thermal processes, the Milky Way dwarf spheroidal
satellite galaxies are excellent targets for the indirect detection of
dark matter (\cite{Evans:2003sc,Bonnivard:2015xpq}). To this end, we analyse nearly eleven years of gamma-ray
data collected by the space-based \textit{Fermi} Large Area
Telescope \footnote{http://fermi.gsfc.nasa.gov}.
In particular,
we consider fourteen recently discovered dwarf spheroidal
galaxies. Amongst them, 13 dSphs are Milky Way satellites and one
(Eridanus II) is from the Local field (LF) (Local Field dwarf
spheroidal galaxies are not bound to Milky Way and M31).
The remainder of the paper is organized as follows: in Section \ref{sec:gamma_flux}, we
describe the different components of the $\gamma$-ray flux and
the astrophysical $J$-factor for the different dSphs. This is followed, in
Section \ref{sec:DM_profile}, by a discussion
of different possible dark matter profiles, and the consequent $J$-factors. In
Section \ref{sec:analysis}, we analyze the Fermi-LAT data
for different dwarf spheroidal galaxies to obtain
upper limits on the annihilation
cross sections for different channels. We also examine
the uncertainty accruing from the determination of
astrophysical parameters and the choice of the dark matter profile.
In Section \ref{sec:synchr}, we focus on the synchrotron radiation from the
dwarf spheroidal galaxies, especially in the context of existing radio telescopes such as the Giant Metrewave Radio Telescope (GMRT) and the Very Large Array (VLA), and the projected sensitivity of the
Square Kilometer Array (SKA). We calculate the upper limits on the DM annihilation cross section using the radio flux upper limits from GMRT and VLA.
We calculate the synchrotron flux for the dwarf spheroidal galaxies and compare them with the sensitivity curves of SKA.
In Section \ref{sec:synchrotron_uncertainty} we study the
uncertainty in the predicted synchrotron flux because of the
uncertainties in different astrophysical parameters. We discuss our
conclusions in Section \ref{section:conclusion}.
\section{Source details}\label{section:source_details}
\noindent In Table~\ref{table:astro_fundamental_param_dwarfs}, we have described the properties of the dwarf spheroidal galaxies considered in our study. We have principally chosen these dwarf galaxies because of their very high mass-to-light ratios and the moderately large velocity dispersions of their member stars. A high mass-to-light ratio and a large velocity dispersion generally confirm that a UFD might be very rich in dark matter content \cite{Baumgardt:2008zt}. \\
\noindent The properties of our selected dwarf galaxies, obtained from the corresponding spectroscopic and photometric studies, are listed in Table~\ref{table:astro_fundamental_param_dwarfs}. In this table, $M_{\odot}$ and $L_{\odot}$ represent the mass and the luminosity of the Sun, respectively, while M/L, d, $r_{1/2}$ and $\sigma$ denote the mass-to-light ratio, heliocentric distance, half-light radius and velocity dispersion of each UFD galaxy, respectively. The values are obtained from Ref.~\cite{Pace:2018tin}. \\
\noindent Aquarius-II was detected on the fringes of the VST ATLAS and SDSS surveys (\cite{Torrealba:2016svf}). Carina-II was discovered in the vicinity of the Large Magellanic Cloud (LMC) in data from the Magellanic Satellites Survey (MagLiteS) (\cite{Torrealba:2018svf}). Draco-II was discovered in a search for compact stellar over-densities in the photometric catalog of the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS 1) 3$\pi$ survey (\cite{Laevens:2015kla}). Eridanus-II, Grus-I, Horologium-I, Reticulum-II, Tucana-II and Tucana-III were first discovered by the Dark Energy Survey (\cite{Koposov:2015cua}). Ref.~\cite{Walker:2016mcs} also confirmed Tucana-II to be a UFD and not part of a globular cluster. Eridanus-II is one of the most distant and least luminous dwarfs discovered by the Dark Energy Survey (\cite{Li:2016utv}). For Grus-I, the most significant outlier reported by Ref.~\cite{Walker:2016mcs} is that it contains four stars (out of seven measured) with [Fe/H] $>$ $-1.4$. Hydra-II was found serendipitously in DECam data taken for the Survey of the Magellanic Stellar History (\cite{Martin:2015xla}); this satellite is compact and faint, but well within the realm of dwarf galaxies. Leo-V is a spectroscopically confirmed ultra-faint dwarf galaxy, but it may experience significant tidal stripping as its member stars orbit the Milky Way (\cite{collins:2017}). Pegasus-III was first observed in archival SDSS data (\cite{Ahn:2013gms, Kim:2015xoa}) and its stellar overdensity was later confirmed with deeper photometry from DECam (\cite{Kim:2015xoa}). The kinematic and chemical properties of Pisces-II obtained by Keck/DEIMOS spectroscopy of its stars suggest that Pisces-II is a dwarf galaxy (\cite{Kirby:2015ija}). Triangulum-II was first detected by the Pan-STARRS Survey (\cite{Laevens:2015una}). Hydra-II, Triangulum-II and Tucana-III may be dwarfs, but either no spectroscopy has been published for them or the available data are inconclusive.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{||p{2.5cm}|p{2cm}|p{2cm}|p{1.8cm}|p{2.5cm}|p{2.5cm}||}
\hline
\hline
Galaxy & M/L $(M_{\odot}/L_{\odot})$ & d (kpc) & $r_{1/2}~(pc)$ & $\sigma~(km~s^{-1})$ & $\theta_{max}$ (deg) \\
\hline
Aquarius~II & $1330^{+3242}_{-227}$ & $107.9^{+3.3}_{-3.3}$ & $123^{+22}_{-22}$ & $6.2^{+2.6}_{-1.7}$ & 0.11134 \\
\hline
Carina~II & $369^{+309}_{-161}$ & $37.4^{+0.4}_{-0.4}$ & $76^{+8}_{-8}$ & $3.4^{+1.2}_{-0.8}$ & 0.23\\
\hline
Draco~II & $501^{+1083}_{-421}$ & $20.0^{+3.0}_{-3.0}$ & $12^{+5}_{-5}$ & $3.4^{+2.5}_{-1.9}$ & 0.1\\
\hline
Eridanus~II & $420^{+210}_{-140}$ & $366.0^{+17.0}_{-17.0}$ & $176^{+14}_{-14}$ & $7.1^{+1.2}_{-0.9}$ & 0.062 \\
\hline
Grus~I & $<~2645$ & $120.2^{+11.1}_{-11.0}$ & $52^{+25}_{-25}$ & $4.5^{+5.0}_{-2.8}$ & 0.093\\
\hline
Horologium~I & $570^{+1154}_{-112}$ & $87.0^{+8.0}_{-8.0}$ & $32^{+5}_{-5}$ & $5.9^{+3.3}_{-1.8}$ & 0.0619 \\
\hline
Hydra~II & $<~315$ & $151.0^{+8.0}_{-8.0}$ & $71^{+11}_{-11}$ & $<6.82$ & 0.08509 \\
\hline
Leo~V & $264^{+326}_{-264}$ & $173.0^{+5.0}_{-5.0}$ & $30^{+16}_{-16}$ & $4.9^{+3.0}_{-1.9}$ & 0.077 \\
\hline
Pegasus~III & $1470^{+5660}_{-1240}$ & $215.0^{+12}_{-12}$ & $37^{+14}_{-14}$ & $7.9^{+4.4}_{-3.1}$ & 0.03049\\
\hline
Pisces~II & $370^{+310}_{-240}$ & $183.0^{+15}_{-15}$ & $48^{+10}_{-10}$ & $4.8^{+3.3}_{-2.0}$ & 0.06861\\
\hline
Reticulum~II & $467^{+286}_{-168}$ & $30^{+2}_{-2}$ & $32^{+3}_{-3}$ & $3.4^{+0.7}_{-0.6}$ & 0.24\\
\hline
Tucana~II & $1913^{+2234}_{-950}$ & $57.5^{+5.3}_{-5.3}$ & $115^{+32}_{-32}$ & $7.3^{+2.6}_{-1.7}$ & 0.225\\
\hline
Tucana~III & $<~240$ & $25.0^{+2}_{-2}$ & $43^{+6}_{-6}$ & $<2.18$ & 0.2\\
\hline
Triangulum~II & $<~2510$ & $30^{+2}_{-2}$ & $28^{+8}_{-8}$ & $<6.36$ & 0.15\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Properties of our selected UFD galaxies (\cite{Pace:2018tin})}
\label{table:astro_fundamental_param_dwarfs}
\end{table}
\section{$\gamma$-ray flux from pair-annihilation of WIMPs}
\label{sec:gamma_flux}
A given scenario may include multiple new WIMP candidates (${\cal
W}_i$), of which perhaps only one could constitute the DM. Indirect
detection of the said WIMPs depends on two key processes : $(i)$ the
pair annihilation of the WIMPs through processes such as ${\cal W}_i
{\cal W}_j \to \sum_k {\cal S}_k $, or $(ii)$ the decay\footnote{In
the second case, clearly ${\cal W}_i$ is not protected by a discrete
symmetry. It could have arisen, for example, from the
pair-annihilation of a $Z_2$-odd DM, namely $\chi\chi \to {\cal W}_i
{\cal W}_j$. It should be realised though that a decaying DM is also
admissible as long as the decay is very slow, as, for example, happens
in gravity-mediated decays~\cite{Arun:2017zap}.} of a WIMP into SM
particles, namely ${\cal W}_i \to \sum_k {\cal S}_k $. In each case, a
bunch of energetic standard model particles result in the final state,
either directly from the aforementioned processes or as a result of
the subsequent decay cascades of the ${\cal S}_k$. It should be
noted, however, that neither of the two processes is guaranteed in a
given particle dark matter scenario. Furthermore, they bear relevance
(apart from in the cosmological context) only if they occur at a rate
that is detectable at experimental facilities, whether earth-bound or
satellite-based. This relevance is determined jointly by their rates
and the detectability, in terms of the final particle identities and
the energies they carry. These, in turn, are
determined by the details of the underlying particle physics model,
both in terms of the spectrum as well the sizes, or even the
existence, of the couplings. In addition to these, several
astrophysical input parameters also play a major role in the final
observation (both in the annihilation rate as well as in the
propagation of various standard model particles through the
space). Some of these astrophysical parameters, understandably,
contain large uncertainties rendering any conclusive prediction of the
indirect signature of the dark matter very challenging\cite{Conrad:2014nna}.
In this analysis, we focus on the annihilation of Majorana dark matter
in an almost model independent manner, with the simplifying assumption
that the pair-annihilation proceeds into a single channel alone, with
a $100\%$ probability. The consequent individual
sensitivities/constraints can, then, be convoluted, in a relatively
straightforward manner, to that applicable for a given realistic
model. The DM annihilation rate per unit volume is given by
$\langle \sigma v\rangle \rho^2_{\rm DM}/2 m^2_{\rm DM}$, where
$\rho_{\rm DM}$ is the density, $\sigma$ the cross section and $v$ the
velocity of the second DM particle in the rest frame of the
first\footnote{For a DM annihilation rate of $\langle\sigma
v\rangle \sim 10^{-26}~{\rm cm^3/s}$---one that is consistent with the
correct relic density for $m_{\rm DM} \sim {\cal O}(10^{2-3})\, {\rm
GeV}$---an observable flux of gamma rays is obtained.}. The thermal
average $\langle\sigma v\rangle$ is estimated using the knowledge of
particle physics and is model-specific. On the other hand, assuming
that the DM density has a spherical symmetry (a very good
approximation), the radial dependence of $\rho_{\rm DM}$ is modelled
based on astrophysical observations, as we shall discuss later. For a
specific energy $ E (\equiv E_\gamma)$, the differential $\gamma$-ray
flux $\phi_{\rm{WIMP}} (E, \Delta \Omega)$ (in units of photons
cm$^{-2}$s$^{-1}$GeV$^{-1}$) lying within a solid angle
$\Delta \Omega$ and having arisen from the annihilations of WIMPs of
mass $m_{\rm DM}$, can be expressed~\cite{Abdo:2010ex} as a product of
two terms, one each accounting for the particle-physics and
astrophysics factors, namely
\begin{equation}
\phi_{\rm{WIMP}}(E, \Delta \Omega)~ = ~ \Phi^{pp}(E) \times J(\Delta \Omega) \ .
\label{eqn:dm_flux}
\end{equation}
The particle physics factor can be written as \cite{Abdo:2010ex}:
\begin{equation}
\Phi^{pp}(E)~ = ~\frac{\langle\sigma v\rangle}{8 \pi ~m^{2}_{\rm{DM}}} \sum_{f}
\frac{dN_{\gamma,f}}{dE}B_{f} \ ,
\label{eqn:dm_pp}
\end{equation}
where $dN_{\gamma, f}/dE$ is the differential photon spectrum (per
annihilation event) for a given final state '$f$', and $B_{f}$ is the
corresponding branching fraction. Several numerical packages, such as
Pythia~\cite{Sjostrand:2007gs}, DarkSUSY~\cite{Gondolo:2004sc} and
DMFit~\cite{Jeltema:2008hf}, are designed to estimate the differential
photon yield from each annihilation channel. While the selection of
standard model final states, through which annihilation would occur
(e.g. $b\bar{b}$, $\tau^{+}\tau^{-}$, $\mu^{+}\mu^{-}$ etc.), is
theoretically motivated, as stated above, we remain agnostic and
consider only a single channel dominance.
Thus, unless otherwise mentioned, in the rest of the analysis,
only a single final state $(b \bar b, \tau^+\tau^- ,
\mu^+\mu^-)$ will be considered to have been produced ({\em i.e.}, with $100\%$ probability)
in a DM annihilation process, and consequent limits obtained on the
cross sections.
\subsection{Astrophysical Factor (J)}
Since the total flux is proportional to the factor $J$, it behoves us
to examine it closely. As we are primarily concerned with pair
annihilations, it should depend on $\rho^{2}_{\rm DM}$. While the
galactic center, where $\rho_{\rm DM}$ is the largest, is associated
with the highest flux, it is also associated with an intense
astrophysical background. In contrast, the dSphs present features that
make them ideal sources. The typical values of the
$J$-factor \cite{Funk:2013gxa} for the GC are $J\approx
10^{22}-10^{23} {\rm GeV}^2 {\rm cm}^{-5} $, while $J\approx
10^{16}-10^{19} {\rm GeV}^2 {\rm cm}^{-5} $ for dSphs and $ J\approx
10^{15}-10^{19} {\rm GeV}^2 {\rm cm}^{-5} $ for galaxy clusters.
While the aforementioned quadratic dependence is indicative, a true
measure of the effective $J$ involves the line-of-sight (l.o.s)
integration of the same, namely~\cite{Abdo:2010ex}
\begin{equation}
J (\Delta \Omega) = \int \int \rho^{2}_{\rm DM} (r(\lambda)) d\lambda ~ d\Omega
= 2 \pi \int_{\theta_{\rm{min}}}^{\theta_{\rm{max}}} d\theta \,
\rm{sin} \theta \int_{\lambda_{\rm{min}}}^{\lambda_{\rm{max}}} d\lambda \; \rho^{2}_{\rm DM} \left(r(\lambda)\right) \ ,
\label{eqn:Jfactor_analytical}
\end{equation}
where $\lambda$ is the l.o.s
distance and $\theta$ is the angle between the l.o.s and the center of
the UFD. The galactocentric distance $r(\lambda)$ is obtained in terms
of the UFD's distance $d$ from the Sun through
\begin{equation}
r(\lambda) = \sqrt{\lambda^{2} + d^{2} - 2~ \lambda ~d~ \rm{cos \theta}} \ .
\label{eqn:r_lambda}
\end{equation}
The DM density profile in dSphs remains a topic of debate.
Two broad classes are popularly used to fit the observational data,
namely cusp-like profiles~\cite{Navarro:1996gj} and cored profiles~\cite{Burkert:1995yz, Salucci:2011ee, Gunn:1972sv}.
While the lack of sufficient kinematical observations prevents the
selection of a particular profile type, the most recent N-body cosmological
simulations favor the cuspy
Navarro-Frenk-White (NFW) form~\cite{Navarro:1996gj}, especially for
dSphs and UFD galaxies. This is parametrized as~\cite{Navarro:1996gj}
\begin{equation}\label{eqn:density_NFW}
\rho_{DM} (r)=\frac{\rho_{s}r_{s}^{3}}{r(r_{s} + r)^{2}} \ ,
\end{equation}
where $\rho_{s}$ and $r_{s}$ are the characteristic density and scale
radius respectively. These, as well as another one, namely, $r_h$ (that we would
find useful when we discuss synchrotron radiation in Sec.\ref{sec:synchr})
can be obtained using $d$, $r_{1/2}$, $\theta^{\circ}_{max}$ and $\sigma$. While
the details can be found in ref.\cite{Evans:2016xwx}, we discuss the relations
briefly.
To begin with, consider the mass $M_{1/2}$ contained within the
half-light radius ($r_{1/2}$) of the dSph, approximately
expressed in terms of $r_{1/2}$ and the dSph velocity dispersion $\sigma$ as
\begin{equation}\label{eqn:mhalf}
M_{1/2} = M(r_{1/2}) \approx \frac{2.5}{G_N} \sigma^2 r_{1/2} \ ,
\end{equation}
where $G_N$ is Newton's constant. The NFW parameter $r_s$ can
be approximated in terms of $r_{1/2}$, {\em
viz.}
\begin{equation}\label{eqn:rs}
r_s = 5 \, r_{1/2} \ ,
\end{equation}
whereas $\rho_s$ is given by
\begin{equation}\label{eqn:rhos}
\rho_s = \frac{M_{1/2}}{4\pi r_s^3}
\left[\log\left(\frac{r_s + r_{1/2}}{r_s}\right)
-\, \frac{r_{1/2}}{r_s + r_{1/2}}\right]^{-1} \ .
\end{equation}
The distance of the outermost star of the dSph from its center is,
of course,
\begin{equation}\label{eqn:rmax}
r_{\rm max} = d\, \, \sin \theta^\circ_{\rm max}.
\end{equation}
And, finally, the diffusion radius ($r_h$) is
defined as twice the distance of the outermost
star from the center of the dSph, namely
\begin{equation}\label{eqn:rh}
r_h = 2 \, r_{\rm max} = 2\, d\, \, \sin \theta^\circ_{\rm max}.
\end{equation}
Using eqns.(\ref{eqn:mhalf}--\ref{eqn:rh}) with the values of the
parameters $d$, $\sigma$, and $r_{1/2}$ given in
Table~\ref{table:astro_fundamental_param_dwarfs}, we calculate the
parameters $\rho_s$, $r_s$ and $r_h$ and
list them in Table~\ref{table:astro_param_dwarfs}. The parameters in
Table~\ref{table:astro_param_dwarfs} correspond to the central values
of the parameters in Table \ref{table:astro_fundamental_param_dwarfs}.
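
As a cross-check of this chain of relations, the short Python sketch below reproduces the Tucana II row of Table~\ref{table:astro_param_dwarfs} starting from the observables of Table~\ref{table:astro_fundamental_param_dwarfs}. It is only an illustrative sketch: the unit-conversion constants ($G_N$ in astrophysical units and the $M_{\odot}\,{\rm pc}^{-3}\to{\rm GeV}\,{\rm cm}^{-3}$ factor) are standard values, not quantities taken from the text.
\begin{verbatim}
import numpy as np

G_N = 4.3009e-3              # Newton's constant [pc (km/s)^2 / M_sun]
MSUN_PC3_TO_GEV_CM3 = 37.96  # 1 M_sun/pc^3 expressed in GeV/cm^3

# Central values for Tucana II (Table 1)
sigma, r_half = 7.3, 115.0       # km/s, pc
d, theta_max  = 57.5, 0.225      # kpc, deg

# eqn (mhalf): mass within the half-light radius
M_half = 2.5 * sigma**2 * r_half / G_N               # M_sun

# eqns (rs), (rhos): NFW scale radius and characteristic density
r_s = 5.0 * r_half                                   # pc
rho_s = M_half / (4.0 * np.pi * r_s**3) / (
    np.log((r_s + r_half) / r_s) - r_half / (r_s + r_half))
rho_s *= MSUN_PC3_TO_GEV_CM3                         # GeV/cm^3

# eqn (rh): diffusion radius
r_h = 2.0 * d * np.sin(np.radians(theta_max))        # kpc

print(rho_s, r_s / 1e3, r_h)
# -> 3.6 GeV/cm^3, 0.575 kpc, 0.452 kpc (Tucana II row of the table)
\end{verbatim}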
\begin{table}[!ht]
\begin{center}
\begin{tabular}{c c c c c}
dSphs & d (kpc) & $r_h$ (kpc) & $\rho_s$ (GeV/cm$^3$) & $r_s$ (kpc) \\ \hline
Aquarius II & 107.9& 0.42 & 2.27035 & 0.615 \\
Carina II & 37.4 & 0.3 & 1.78834 & 0.38 \\
Draco II & 20 & 0.07 & 71.73 & 0.06 \\
Eridanus II & 366 & 0.792 & 1.454 & 0.88 \\
Grus I & 120.2 & 0.39 & 6.7 & 0.26 \\
Horologium I & 87 & 0.188 & 30.37 & 0.16 \\
Hydra II & 151 & 0.448 & < 8.244 & 0.335 \\
Leo V & 173 & 0.465 & 23.83 & 0.15 \\
Pegasus III & 215 & 0.228 & 40.73 & 0.185 \\
Pisces II & 183 & 0.438 & 8.93 & 0.24 \\
Reticulum II & 30 & 0.251 & 10.08 & 0.16 \\
Tucana II & 57.5 & 0.452 & 3.6 & 0.575 \\
Tucana III & 25 & 0.174 & < 2.296 & 0.215 \\
Triangulum II & 30 & 0.157 & < 46.1 & 0.14 \\ \hline \hline
\textbf{Draco} & 80.0 & 2.5 & 1.4 & 1.0 \\ \hline
\end{tabular}
\end{center}
\caption{ The values of the astrophysical parameters for the 14 newly discovered dwarf galaxies \cite{Pace:2018tin}.
The classical dwarf galaxy Draco is also shown for comparison.
The parameters in this table correspond to the central values of the parameters in Table \ref{table:astro_fundamental_param_dwarfs}.}
\label{table:astro_param_dwarfs}
\end{table}
Given the measurements of $\rho_s$ and $r_s$, it is a straightforward
task to determine $J$. In Table~\ref{table:table-1}, we present this for a
set of dSphs, adopting, in each case, the standard choices of
$\theta_{\rm min} = 0$ and $\theta_{\rm max}= 0.5^\circ$. Also listed,
for comparison, are the corresponding values listed by Pace {\em et
al.}~\cite{Pace:2018tin}, using an empirical scaling relation motivated by
the analytical work of Evans {\em et al.}~\cite{Evans:2016xwx}. For the
NFW profile, the empirical relation reads
\begin{equation}\label{eqn:jfactor_pace}
\frac{J(0.5^{\circ})}{GeV^{2}cm^{-5}} \approx 10^{17.72} \left(\frac{\sigma_{los}}{5kms^{-1}}\right)^{4} \left(\frac{d}{100kpc}\right)^{-2}\left(\frac{r_{1/2}}{100pc}\right)^{-1} \ ,
\end{equation}
where $\sigma_{los}$ and $r_{1/2}$ denote, respectively, the
velocity dispersion and the half-light radius of the UFD. It is worth
pointing out the remarkably good agreement between the exact numerical
result and the empirical formula. Indeed, at the level of accuracy of
our results, the two are virtually indistinguishable. It should be
borne in mind, though, that a different choice of the density profile
would lead to a substantially different value for $J$, a point to which we
return in a later section. It should also be noticed that, for certain UFDs, only
upper bounds on the $J$-factor exist. This can be traced to the
insufficiency of the kinematic data, which leads to only an upper
bound on the velocity dispersion and, hence, on the $J$-factor \cite{Bhattacharjee:2018xem}.
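
The direct integration in Eq.~\ref{eqn:Jfactor_analytical} is equally straightforward to set up numerically. The sketch below evaluates $J(0.5^{\circ})$ for Tucana II with the NFW profile, together with the empirical relation of Eq.~\ref{eqn:jfactor_pace}; truncating the line of sight at $\lambda = 2d$ is our own simplifying assumption, justified by the rapid fall-off of $\rho^{2}_{\rm DM}$ at large radii.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

KPC_TO_CM = 3.0857e21

# Central NFW parameters for Tucana II (Table 2)
rho_s, r_s, d = 3.6, 0.575, 57.5      # GeV/cm^3, kpc, kpc
theta_max = np.radians(0.5)

def rho(r):                            # NFW profile, eqn (density_NFW)
    return rho_s * r_s**3 / (r * (r_s + r)**2)

def integrand(lam, theta):             # eqns (Jfactor_analytical), (r_lambda)
    r = np.sqrt(lam**2 + d**2 - 2.0 * lam * d * np.cos(theta))
    return 2.0 * np.pi * np.sin(theta) * rho(r)**2

# l.o.s. truncated at lambda = 2d; tiny lower theta avoids the lam = d cusp
J, _ = dblquad(integrand, 1e-6, theta_max,
               lambda th: 0.0, lambda th: 2.0 * d)
J *= KPC_TO_CM                         # lambda was measured in kpc
print(np.log10(J))                     # -> ~18.9, matching the tabulated value

# Empirical scaling relation, eqn (jfactor_pace), for comparison
sigma_los, r_half = 7.3, 115.0         # km/s, pc
J_emp = 10**17.72 * (sigma_los / 5.0)**4 / (d / 100.0)**2 / (r_half / 100.0)
print(np.log10(J_emp))                 # agrees to within ~0.1 dex
\end{verbatim}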
\begin{table}[!t]
\centering
\begin{tabular}{|p{2.5cm}|c|c|c|c|}
\hline \hline
Galaxy & \multicolumn{4}{c|}{$\log_{10}(J/{\rm GeV}^2\, {\rm cm}^{-5})$}
\\
\cline{2-5}
& Pace {\em et al}\cite{Pace:2018tin} & \multicolumn{3}{c|}{Direct Integration}\\
\cline{3-5}
& (NFW) & NFW & Burkert & ISO \\
\hline \hline
Aquarius-II & $18.27^{+0.65}_{-0.59}$ & $18.11^{+0.68}_{-0.63}$ & $18.53^{+0.72}_{-0.66}$ & $18.01^{+0.73}_{-0.66}$ \\
\hline \hline
Carina-II & $18.24^{+0.53}_{-0.53}$ & $18.16^{+0.55}_{-0.53}$ & $18.45^{+0.60}_{-0.56}$ & $18.05^{+0.58}_{-0.54}$ \\
\hline \hline
Draco-II & $18.97^{+1.29}_{-1.69}$ & $19.07^{+1.33}_{-1.69}$ & $19.54^{+1.35}_{-1.70}$ & $18.90^{+1.34}_{-1.70}$ \\
\hline \hline
Eridanus-II & $17.29^{+0.35}_{-0.26}$ & $17.14^{+0.35}_{-0.30}$ & $17.68^{+0.35}_{-0.31}$ & $17.06^{+0.35}_{-0.31}$ \\
\hline \hline
Grus-I & $16.87^{+1.52}_{-1.68}$ & $16.94^{+1.57}_{-1.74}$ & $17.48^{+1.60}_{-1.75}$ & $16.76^{+1.54}_{-1.67}$ \\
\hline \hline
Horologium-I & $19.25^{+0.79}_{-0.70}$ & $19.01^{+0.83}_{-0.73}$ & $19.37^{+0.85}_{-0.75}$ & $18.73^{+0.85}_{-0.75}$ \\
\hline \hline
Hydra-II & $<~17.71$ & $<~17.92$ & $<~18.46$ & $<~17.84$ \\
\hline \hline
Leo-V & $17.69^{+0.93}_{-0.99}$ & $17.91^{+1.03}_{-1.06}$ & $18.51^{+1.02}_{-1.08}$ & $17.84^{+1.01}_{-1.07}$ \\
\hline \hline
Pegasus-III & $18.41^{+0.89}_{-1.07}$ & $18.46^{+0.94}_{-1.05}$ & $19.06^{+1.02}_{-1.07}$ & $18.39^{+1.03}_{-1.05}$ \\
\hline \hline
Pisces-II & $17.31^{+0.97}_{-1.07}$ & $17.53^{+1.02}_{-1.09}$ & $18.10^{+1.04}_{-1.09}$ & $17.45^{+1.03}_{-1.09}$ \\
\hline \hline
Reticulum-II & $18.95^{+0.57}_{-0.52}$ & $18.76^{+0.53}_{-0.48}$ & $19.21^{+0.53}_{-0.54}$ & $18.66^{+0.53}_{-0.53}$ \\
\hline \hline
Triangulum-II & $<~19.72$ & $<~19.74$ &$<~20.18$ & $<~19.64$ \\
\hline \hline
Tucana-II & $19.02^{+0.57}_{-0.52}$ & $18.93^{+0.62}_{-0.58}$ & $19.22^{+0.64}_{-0.61}$ & $18.83^{+0.66}_{-0.62}$ \\
\hline \hline
Tucana-III & $<~17.68$ & $<~17.87$ & $<~18.20$ & $<~17.76$ \\
\hline \hline
Draco & $18.83^{+0.10}_{-0.10}$ & $18.85^{+0.12}_{-0.12}$ & $19.08^{+0.13}_{-0.13}$ & $18.75^{+0.13}_{-0.13}$ \\
\hline \hline
\end{tabular}
\caption{The $J$-factors for the various UFDs as obtained by directly integrating
Eq.~\ref{eqn:Jfactor_analytical} for three density profiles and for $\theta_{max}=0.5^{\circ}$.
Also shown, for comparison,
are the values obtained by Pace et al~\cite{Pace:2018tin}, for the NFW profile, using
an approximate scaling relation.}
\label{table:table-1}
\end{table}
\subsection{Dependence of $J$ on the density profiles}
\label{sec:DM_profile}
While the NFW is a traditional benchmark choice for the DM profile
motivated by the $N$-body simulations, its cusp-like nature at the
center of the galaxy is quite different from the alternate cored
profiles. While we use the NFW for most of our numerical
results, it is imperative to ensure that the cuspy nature does not
lead to extreme answers. To this end, we examine two examples of the
second category, namely the pseudo-isothermal (ISO)~\cite{Gunn:1972sv}
profile and the one originally proposed by
Burkert~\cite{Burkert:1995yz, Salucci:2011ee}. These are given by
\begin{equation}
\rho_{\rm ISO}(r)=\frac{\rho_{c} \, r_c^2}{r^{2}+ r_{c}^{2}} \ , \qquad
\rho_{\rm BURK}(r)=\frac{\rho_{B}r_{B}^{3}}{(r_{B}+r)(r_{B}^{2} + r^{2})}
\end{equation}
where $r_c$ and $r_{B}$ represent the respective core radii, whereas
$\rho_{c}$ and $\rho_B$ are the corresponding densities at the very
center. While $ \rho_{\rm BURK}(r) $ resembles an isothermal profile
in the inner regions $(r \ll r_B)$, for large $r$ it falls off much
faster ($r^{-3}$ versus $r^{-2}$). As in the case for the NFW
profile, the parameters are to be determined from observational data
related to a given galaxy. Based on the study of DM dominated
galaxies, ref.\cite{Boyarsky:2009rb} has approximately related the
parameters for the three profiles, namely $r_s \simeq 6.1 r_c \simeq
1.6 r_B$ and $\rho_s \simeq 0.11 \rho_c \simeq 0.37 \rho_B$.
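
As an illustration of how these (approximate) mappings are used, the Tucana II NFW values of Table~\ref{table:astro_param_dwarfs} translate into cored-profile parameters as in the sketch below.
\begin{verbatim}
# r_s ~ 6.1 r_c ~ 1.6 r_B  and  rho_s ~ 0.11 rho_c ~ 0.37 rho_B
rho_s, r_s = 3.6, 0.575                 # Tucana II NFW values (Table 2)
r_c, rho_c = r_s / 6.1, rho_s / 0.11    # pseudo-isothermal (ISO) core
r_B, rho_B = r_s / 1.6, rho_s / 0.37    # Burkert core
print(r_c, rho_c)                       # -> 0.094 kpc, 32.7 GeV/cm^3
print(r_B, rho_B)                       # -> 0.359 kpc,  9.7 GeV/cm^3
\end{verbatim}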
While $N$-body simulations~\cite{Navarro:2008kc, Wagner:2020opz} tend
to favour a cuspy profile (such as the NFW) over the smooth (and
relatively flat at the center) profiles such as the Burkert or the
ISO, insufficient kinematic data (such as those on rotational curves)
prevent us from strongly favoring any one profile. Indeed, some
observations actually favour cored haloes over cuspy ones. Fortunately
for us, the $J$-factor does not have too strong a dependence on the
choice. Indeed, as Table~\ref{table:table-1} demonstrates, the NFW
profile leads to values that are not too far from the average of those
obtained from the three profiles. In other words, the theoretically
favoured choice is also a good representative.
\section{Analysis of $\gamma$-ray fluxes from dwarf spheroidal galaxies}
\label{sec:analysis}
For our study of the high energy gamma-ray signals, originating from
pair annihilations of DM into SM particles within dSphs, we consider the data from the Large Area Telescope (LAT)
onboard the Fermi observatory.
The Fermi-LAT is a space-based $\gamma$-ray pair-conversion
detector that scans the whole sky every 3 hours from a low-Earth
orbit of 565 km altitude at a 25.6-degree inclination with an
eccentricity $<$0.01. Launched on June 11, 2008, by the Delta
II Heavy launch vehicle from Cape Canaveral, the Fermi-LAT has as its
principal objective long-term, high-sensitivity gamma-ray
observations of celestial sources in the energy range from $\approx$
20 MeV to $>$ 500 GeV \cite{Atwood:2009ez}.
In our study, we have analyzed almost eleven years of sky
survey data (2008-09-01 to 2019-02-04) from the direction of each
of the UFDs described in section \ref{section:source_details}.
\subsection{The analysis methodology}
Using the latest version of Fermi ScienceTools (v1.2.1) for our
analysis, we process the data with an improved PASS 8 instrument
response function (IRF), in particular the source class IRF,
$\rm{P8R3\_SOURCE\_V2}$. Furthermore, the tool \textit{`gtmktime'}
has been used to extract the ``good time interval'' data from the
whole dataset. Extracting the LAT data within a $15^{\circ}$ region of
interest (ROI) around each source, we consider only a limited range
for the reconstructed energy $E$, namely $E \in [0.1, 300]$~GeV, so as
to reduce possible uncertainties at low energies on the one hand and
background contamination at high energies on the other. To remove the
possible contamination from the earth albedo, we apply a zenith-angle
cut at $90^{\circ}$ as recommended by the Fermi-LAT analysis team.
With the Earth’s limb lying at a zenith angle of $113^{\circ}$, the
application of a zenith cut at $90^{\circ}$ eliminates a large
fraction of the background atmospheric gamma-rays.
The binned likelihood analysis for the extracted dataset was performed
with the `gtlike' tool \cite{Cash:1979vz, Mattox:1996zz}. To this end,
we first generate a source model file with the inclusion of all the
sources from the 4FGL catalog \cite{Fermi-LAT:2019yla} within a
20$^{\circ}$ ROI from the position of the `source of interest'. Note
that we have extended the spatial coverage up to $20^{\circ}$ ROI to
account for possible overlapping between the point spread functions of
nearby sources. In addition, to
eliminate the possible background effect resulting from galactic and
extragalactic diffuse emission, we have added the Galactic diffuse
model ($\rm{gll\_iem\_v07.fits}$) and the isotropic extragalactic
diffuse model ($\rm{iso\_P8R3\_SOURCE\_V2\_v1.txt}$) to the source
model. The spectral parameters of all the 4FGL sources
\cite{Fermi-LAT:2019yla} within ROI, as well as the normalization
parameters of two diffuse models have been left free in the fitting
procedure. The rest of the background sources within the
$20^{\circ}~\times~20^{\circ}$ ROI have been kept fixed at their
values given in the 4FGL catalog \cite{Fermi-LAT:2019yla}.
Table~\ref{table:fermi_lat_parameters}, lists all the
parameters used at different stages of the analysis of the data.
\begin{table}
\caption{Parameters used in \texttt{Science Tools} for \textit{Fermi}-LAT data analysis}
\begin{tabular}{||p{6.8 cm}p{6.8 cm}||}
\hline \hline
{\bf Parameter for data extraction} &\\
\hline\hline
Parameter & Value\\
\hline \hline
Region of interest (ROI) radius & $15^{\circ}$\\
TSTART (MET) & 241976960 (2008-09-01 15:49:19.000 UTC)\\
TSTOP (MET) & 570987500 (2019-02-04 15:38:15.000 UTC)\\
Energy Range & 100 MeV - 300 GeV\\
\textit{Fermitools} version & \texttt{1.2.1}\\
\hline \hline
$~~~~~~~~~~~~~~~~~~~$\texttt{gtselect} for event selection &\\
\hline \hline
Event class & Source type (128)\\
Event type & Front+Back (3) \\
Maximum zenith angle cut & $90^{\circ}$\\
\hline \hline
$~~~~~~~~~~~~~~~~~~~$\texttt{gtmktime} for time selection & \\%\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtmktime.txt}}
\hline \hline
Filter applied & $\textit{(DATA\_QUAL>0)\&\&(LAT\_CONFIG==1)}$ \\
ROI-based zenith angle cut & No\\
\hline \hline
$~~~~~~~~~~~~~~~~~~~$\texttt{gtltcube} for livetime cube &\\
\hline \hline
Maximum zenith angle cut ($z_{cut}$) & $90^{\circ}$ \\%\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/Exposure.html}}$
Step size in $cos(\theta)$ & 0.025\\
Pixel size (degrees) & 1\\
\hline \hline
$~~~~~~~~~~~~~~~~$\texttt{gtbin} for 3-D counts map &\\
\hline \hline
Size of the X $\&$ Y axis (pixels) & 140\\
Image scale (degrees/pixel) & 0.1\\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
$~~~~~~~~~~~~~~~~~~~$\texttt{gtexpcube2} for exposure map & \\
\hline \hline
Instrument Response Function (IRF) & $\rm{P8R3\_SOURCE\_V2}$ \\%\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8_usage.html}\label{pass8}}$
Size of the X $\&$ Y axis (pixels) & 400\\
Image scale (degrees/pixel) & 0.1\\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
$~~~~~~~~~~~~~~~~~~~$diffuse models and Source model XML file &\\%\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py}} & & &\\
\hline \hline
Galactic diffuse emission model & $\rm{gll\_iem\_v07.fits}$\\
Extragalactic isotropic diffuse emission model & $\rm{iso\_P8R3\_SOURCE\_V2\_v1.txt}$\\%\footref{background}$ & &\\
Source catalog & 4FGL \\
Extra radius of interest & $5^{\circ}$\\
Spectral model & DMFit Function\cite{Jeltema:2008hf} \\%\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html}} & &\\
\hline \hline
\end{tabular}
\label{table:fermi_lat_parameters}
\end{table}
To search for the $\gamma$-ray emission coincident with our targets, we
have first modelled our sources with a power-law spectral model ($dN/dE \propto E^{-\Gamma}$) with spectral index $\Gamma$ = 2 \cite{Bhattacharjee:2018xem}. As a statistical discriminator, we use the ratio of the maximum likelihoods
for two hypotheses, namely, $TS = -2\ln\Big(L_{\rm {(max, 0)}}/L_{\rm
{(max, 1)}}\Big)$, where $L_{\rm {(max, 1)}}$ and $L_{\rm {(max,
0)}}$ respectively denote the maximum likelihood in the presence
of a signal and under the null hypothesis. As no significant gamma-ray signal was observed from the direction of any of the UFDs, we then derive 95$\%$ confidence level (C.L.) upper limits (UL) on the gamma-ray flux from the region of these objects. To estimate the 95$\%$ C.L. flux upper limits, we use the Bayesian approach~\cite{Helene:1990yi}, as this is more sensitive~\cite{Rolke:2004mj, Barbieri:1982eh} than the profile
likelihood method for low statistics. The approach developed in Ref. \cite{Helene:1990yi} is already implemented in the pyLikelihood module of \texttt{ScienceTools}.
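
For concreteness, the core of this likelihood step may be sketched using the pyLikelihood interface of \texttt{ScienceTools}, as below. This is a minimal sketch: the file names stand for the products of the \texttt{gt}-tools of Table~\ref{table:fermi_lat_parameters}, and \texttt{'TucanaII'} is a placeholder for the target entry in the source model.
\begin{verbatim}
# Minimal sketch of the binned likelihood fit and Bayesian upper limit.
# File names are placeholders for the gtselect/gtmktime/gtltcube/gtbin/
# gtexpcube2/gtsrcmaps products described above.
from BinnedAnalysis import BinnedObs, BinnedAnalysis
import IntegralUpperLimit

obs = BinnedObs(srcMaps='srcmap.fits', expCube='ltcube.fits',
                binnedExpMap='expmap.fits', irfs='P8R3_SOURCE_V2')
like = BinnedAnalysis(obs, 'model_4FGL.xml', optimizer='NewMinuit')
like.fit(verbosity=0)

print('TS =', like.Ts('TucanaII'))   # test statistic of the target source
# Bayesian 95% C.L. flux upper limit (Helene 1991), as implemented
# in the pyLikelihood module
ul, results = IntegralUpperLimit.calc_int(like, 'TucanaII', cl=0.95)
print('95% C.L. flux UL:', ul, 'ph cm^-2 s^-1')
\end{verbatim}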
\subsection{Possible DM annihilation constraints with eleven years of Fermi-LAT data}\label{section:gamaray_sigmav_constraint}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/flux_bb.pdf}
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/flux_tt.pdf} \\
\vskip -1in
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/flux_mm.pdf}
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/legend.pdf}
\vskip -0.5in
\caption{$95\%$ C.L. upper limits on the $\gamma$-ray fluxes
(from DM pair-annihilations in UFDs) as a function of
$\rm{m_{DM}}$. In deriving these, the indicated channel is assumed
to be an exclusive one. The curves for Triangulum-II, Hydra-II and
Tucana-III denote their maximum possible upper
limits.} \label{figure:fermi_flux}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/sigma_bb_scaled.pdf}
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/sigma_tt_scaled.pdf} \\
\vskip -1.0in
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/sigma_mm_scaled.pdf}
\includegraphics[width=0.49\textwidth,height=0.44\textwidth]{figures/legend.pdf}
\vskip -0.6in
\label{fig:cross_legends}
\caption{95\% C.L. upper limits on the thermally
averaged WIMP pair-annihilation cross section $\langle \sigma v \rangle$, as derived
from the upper limits on the gamma-ray fluxes from individual UFDs.
In each case, the annihilation channel is assumed to be
exclusive.
The curves for Triangulum-II, Hydra-II and Tucana-III, denote their maximum possible upper limits.}
\label{figure:fermi_cross}
\end{figure}
The aforesaid upper limits on the gamma-ray fluxes from DM
annihilation can be translated to constraints in the two dimensional
plane of the WIMP mass and the thermally averaged pair-annihilation
cross section $\langle \sigma v\rangle$. This exercise, though,
depends on the final states resulting from the annihilation processes
and, hence, on the details of the model. However, as indicated
at the outset, we adopt an agnostic standpoint and consider three
{\em exclusive} channels, namely $b \bar b$, $\tau^+\tau^-$ and $\mu^+\mu^-$.
For estimating the 95$\%$ C.L. flux upper limits and the corresponding limits on the
WIMP pair-annihilation $\langle \sigma v \rangle $, we have fitted the
$\gamma$-ray spectrum arising from the DM-dominated dSphs with an
MC-simulated DM self-annihilation spectrum, DMFitFunction
\cite{{Jeltema:2008hf}}. The DMFit package is based on the particular set
of MC simulations of hadronization and/or decay of the annihilation
products as used by the DarkSUSY \cite{Gondolo:2004sc} team.
The functional form of DMFitFunction
(modified form of Eq.~\ref{eqn:dm_flux}) can be written as
\begin{equation}\label{eqn:dm_spectrum}
\frac{dN}{dE} (E,\Delta \Omega) = \langle \sigma v \rangle ~ J(\Delta \Omega) \left[ B~F(M_{DM},C_{0}) + (1 - B)~F(M_{DM},C_{1}) \right] \ .
\end{equation}
In Eq.~\ref{eqn:dm_spectrum},
$B$, $C_{0}$ and $C_{1}$ denote the branching ratio,
the primary decay channel and the secondary decay channel,
respectively. The DMFitFunction is implemented in Fermi
\texttt{ScienceTools} as the DMFit package
\cite{Jeltema:2008hf} and the values of $F(M_{DM},C)$ are provided by
the Fermi-LAT team. For the J-factors, we have taken the values from Pace et al.,
2019~\cite{Pace:2018tin} (see Table~\ref{table:table-1}).
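
Schematically, the translation from an integrated flux upper limit to a $\langle \sigma v \rangle$ upper limit is simply the inversion of Eqs.~\ref{eqn:dm_flux}--\ref{eqn:dm_pp} for a single exclusive channel, as in the sketch below. The input numbers are purely illustrative, and the photon yield per annihilation, $N_\gamma$, would in practice be tabulated (from Pythia/DMFit) for the channel and mass in question.
\begin{verbatim}
import numpy as np

def sigmav_ul(flux_ul, m_dm, J, N_gamma):
    """Invert phi = <sigma v> J N_gamma / (8 pi m^2) for <sigma v>.

    flux_ul : integrated photon-flux UL over [E_min, m_dm]  [cm^-2 s^-1]
    m_dm    : DM mass [GeV]
    J       : astrophysical factor [GeV^2 cm^-5]
    N_gamma : photons per annihilation in the same energy range
    """
    return 8.0 * np.pi * m_dm**2 * flux_ul / (J * N_gamma)

# Illustrative numbers only (hypothetical flux UL and yield):
print(sigmav_ul(1e-10, 100.0, 10**18.9, 15.0))   # -> ~2e-25 cm^3/s
\end{verbatim}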
In Fig.~\ref{figure:fermi_flux}, we show the variation
of the $95\%$ C.L. flux upper limits of all UFDs with $m_{DM}$ for the
$b\bar{b}$, $\tau^{+}\tau^{-}$ and $\mu^{+} \mu^{-}$ annihilation
channels. For all annihilation channels, the spectrum
from WIMP annihilation shifts to higher energies with increasing mass
\cite{Serpico:2009vz}, so we expect the flux upper limits to vary
comparatively little at high $m_{DM}$;
Fig.~\ref{figure:fermi_flux} confirms this behaviour.
In Fig.~\ref{figure:fermi_cross}, we display the $95\%$
C.L. upper limits on the thermally averaged WIMP pair-annihilation
$\langle \sigma v \rangle $ for the UFDs, as a function of $m_{DM}$ for each of the
annihilation channels mentioned above.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/horologium_cross_comparison.pdf}
\caption{Variation of $\langle \sigma v\rangle$ upper limits of Horologium-I with $m_{DM}$ for three annihilation final states.}
\label{fig:horo_cross}
\end{figure}
In Fig.~\ref{figure:fermi_flux} and
Fig.~\ref{figure:fermi_cross}, we also compare our
upper limits with those for Draco, which is considered one of the best
candidates for indirect DM detection. We would like to
mention that, among all our selected dwarf galaxies, Horologium-I
provides the strongest limit (Fig.~\ref{figure:fermi_cross}) for all
three annihilation final states; indeed, the limits obtained from
Horologium-I place even more stringent constraints on theoretical
WIMP models than those from Draco.
We should keep in mind, however, that, owing to insufficient kinematic
data, the newly discovered Horologium-I comes with large
uncertainties in its J-factor, so we cannot firmly state that
Horologium-I is a better candidate than Draco. Nevertheless, this
result suggests that Horologium-I may turn out to be a
very DM-rich galaxy. In subsection~\ref{section:uncertainties_horo_tuc}, we study
the uncertainty bands for Horologium-I and Tucana-II.
In Fig.~\ref{figure:fermi_flux} and Fig.~\ref{figure:fermi_cross}, the flux upper limits and corresponding $\langle \sigma v \rangle $
limits for Triangulum-II, Hydra-II and Tucana-III represent only the maximum possible upper limits:
as Table~\ref{table:table-1} shows, at present only upper limits on the
J-factor can be estimated for these three UFDs, and hence only limiting
values for the $\gamma$-ray flux upper limits and $\langle \sigma v \rangle $ can be obtained from them.
In Fig.~\ref{fig:horo_cross},
we display the variation of $\langle \sigma v \rangle$ for Horologium-I for the three annihilation channels.
From Fig.~\ref{fig:horo_cross}, we observe that, for the $\gamma$-ray observations, the $100\%$ $b\bar{b}$ channel provides more stringent limits than the other two final states in the ($\langle \sigma v \rangle, m_{DM}$) plane. All our selected UFDs show the same behaviour as
Horologium-I for each annihilation channel; thus, in Fig.~\ref{fig:horo_cross}, we only
show the results for Horologium-I.
Next, we check how the upper limits on the
thermally averaged pair-annihilation $\langle \sigma v \rangle $ change with
the choice of DM density profile. In section~\ref{sec:DM_profile}, we have already
derived the J-factors for the NFW, BURK and ISO profiles. Using
Eq.~\ref{eqn:dm_spectrum} and the J-factor values from
Table~\ref{table:table-1}, we compare the upper
limits on the thermally averaged pair-annihilation $\langle \sigma v \rangle $ of
Horologium-I for the three density profiles, considering for this purpose
the $\rm{100\%~b\overline{b}}$ annihilation channel. From
Fig.~\ref{figure:profile_comparison}, we observe that all three
density profiles provide $\langle \sigma v \rangle $ upper limits of
nearly the same order. Among them, the BURK density profile imposes the
most stringent $\langle \sigma v \rangle $ upper limit on the theoretical models,
whereas the ISO profile produces comparatively weaker limits in the
($\langle \sigma v \rangle, m_{DM}$) plane than the other two density profiles.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth,clip,angle=0]{figures/density_profile_comparison_horo.pdf}
\caption{Comparison between the $\langle \sigma v \rangle $ upper limits for three density profiles for $100\%$ $b\overline{b}$ final state.}
\label{figure:profile_comparison}
\end{figure}
Here we would like to point out that, except for Tucana-II, our targets have not shown any hint of emission from their locations (i.e. TS $\lesssim$ 5). Thus, we only discuss the excess for Tucana-II. In Table~\ref{table:ts_values}, we list the TS peak values of the UFDs for the $b\bar{b}$ and $\tau^{+}\tau^{-}$ annihilation channels. In our recent publication (ref.~\cite{Bhattacharjee:2018xem}), we
reported an intriguing hint of faint $\gamma$-ray emission from the
location of Tucana-II. In that paper (\cite{Bhattacharjee:2018xem}),
we also showed that the TS peak of Tuc-II increases with a
larger dataset, and that the same trend is followed by both the $b\bar{b}$
and $\tau^{+}\tau^{-}$ annihilation channels.
\begin{table}
\begin{center}
\begin{tabular}{|p{2.5cm}|p{2.5cm}|p{2.5cm}|}
\hline \hline
UFDs & $TS_{peak}$ for $b\bar{b}$ & $TS_{peak}$ for $\tau^{+}\tau^{-}$ \\
\hline \hline
Aquarius II & 2.88 & 2.94 \\
Carina II & 1.24 & 1.81 \\
Draco II & 1.37 & 1.88 \\
Eridanus II & 0.81 & 1.23 \\
Grus I & 1.59 & 1.65 \\
Horologium I & 4.21 & 4.71 \\
Hydra II & 2.21 & 2.31 \\
Leo V & 0.88 & 0.92 \\
Pegasus III & 1.91 & 2.13 \\
Pisces II & 1.22 & 1.96\\
Reticulum II & 4.85 & 4.95 \\
Tucana II & 11.87 & 12.47 \\
Tucana III & 4.36 & 4.53 \\
Triangulum II & 1.19 & 1.25 \\
\hline \hline
\end{tabular}
\end{center}
\caption{TS peak value for two annihilation final states}
\label{table:ts_values}
\end{table}
In our present work, we have analyzed nearly eleven years of Fermi-LAT
data, whereas in ref.~\cite{Bhattacharjee:2018xem} we analyzed nearly nine
years of data. We may thus expect the TS value for Tucana-II to
increase further with eleven years of data, and in
Fig.~\ref{figure:ts_tucana} we indeed obtain this expected
result. The observed significance with eleven years of
Fermi-LAT data is still too faint (i.e. TS~$<$~25) to claim anything
strongly. The most encouraging part of this result is that the TS peak
of Tuc-II is continuously increasing with time, and in the future this
could lead to the detection of a real signal, arising either from some
astrophysical source or from WIMP annihilation. \\
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=0.5\textwidth,clip,angle=0]{figures/ts_compare.pdf}}
\caption{The observed TS peak of the excess $\gamma$-ray
emission from the direction of Tuc-II for four different time intervals of
Fermi-LAT data, whereas, the red and blue markers denote the TS peak for
$100\%$ $b\bar{b}$ and $100\%$ $\tau^{+}\tau^{-}$ annihilation channels,
respectively.}
\label{figure:ts_tucana}
\end{center}
\end{figure}
\subsection{Comparison of our Fermi-LAT limits with the limits provided by Planck}
In the early epoch of the universe, WIMP annihilation could
pump electromagnetically interacting particles into the cosmic bath, with
wide-ranging observable effects. The injection of such charged
particles could significantly increase the residual ionization fraction, which
in turn modifies the last scattering surface as well as the observed
CMB anisotropy~\cite{Gruzinov:1998un, Lewis:1999bs}. The WMAP \cite{Bennett_2013} and, most recently, the Planck satellite experiments \cite{Aghanim:2018eyx} are
highly sensitive to any such modifications of the CMB.
Thus, the Planck satellite experiment has placed very
strong and model-independent limits on the aforementioned energy injections by
annihilating WIMPs~\cite{Aghanim:2018eyx}.
These limits on the energy injection have been
further translated into limits on the WIMP annihilation cross-section.
To compare these limits with those obtained from the Fermi-LAT data,
we choose Horologium-I as our target dSph, and in
Fig.~\ref{figure:planck_comparison} we display the comparison. From
Fig.~\ref{figure:planck_comparison}, we observe that the limit
obtained from the Fermi-LAT data on the WIMP annihilation
cross-section for the exclusive $b{\bar b}$ channel
is better than the Planck limit for ${\rm M_{\rm DM}} \sim 10 $ GeV to
10 TeV. On the other hand, for the exclusive
$\mu^{+}\mu^{-}$ annihilation channel, Planck
provides more stringent limits than those obtained from the
Fermi-LAT data. For the $\tau^{+}\tau^{-}$ annihilation
channel, Fermi-LAT imposes stringent limits
in the mass range $\lesssim 400$~GeV, while Planck
imposes the most stringent limits on $\langle \sigma v \rangle $
in the mass range $\gtrsim 400$~GeV.
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=.5\linewidth]{figures/comparison_between_fermi_and_planck.pdf}}
\caption{Comparison between the limits obtained for Horologium-I from Fermi-LAT with the limits provided by Planck. }
\label{figure:planck_comparison}
\end{center}
\end{figure}
\subsection{Uncertainty band in Horologium-I and Tucana-II}\label{section:uncertainties_horo_tuc}
The $\gamma$-ray analysis is one of the most popular routes to the
indirect detection of a DM signature, but the large
uncertainty in the J-factors can be a matter of concern: a large uncertainty in
the J-factor can substantially weaken the DM particle constraints
(i.e. the limits on the cross section). \\
In this section, we therefore show the variation of the 95$\%$
C.L. upper limit on the WIMP pair-annihilation $\langle \sigma v \rangle $ with
increasing $m_{DM}$, using the J-factor together with its uncertainties. For this
purpose, we have chosen Horologium-I and Tucana-II, and we only show
the variation for the $b\bar{b}$ annihilation channel. We consider
Horologium-I because, of all the UFDs, it provides the most
stringent limits; in the case of Tucana-II, we have observed a faint
emission from the source location \cite{Bhattacharjee:2018xem}, so we
examine the variation for Tucana-II as well. The median J-factors
and their uncertainties are taken from Pace et al.,
2019~\cite{Pace:2018tin}. \\
In Fig.~\ref{figure:crossuncertainty} we show the 95$\%$ C.L. upper
limits on the WIMP annihilation $\langle \sigma v \rangle $ for Horologium-I and
Tucana-II. In Fig.~\ref{figure:crossuncertainty}, the dashed lines
correspond to the uncertainty in the J-factor of each UFD, the
solid line depicts the median value of the J-factor, and the region
between the two dashed lines represents the uncertainty in the DM
density profile of the UFD.
From Fig.~\ref{figure:crossuncertainty}, it is evident that both Tucana-II
and Horologium-I carry a large uncertainty band. For the
UFDs, very few member stars have been detected so far, which is the
main obstacle to understanding the DM distribution in
UFDs; the large uncertainties in the J-factor stem from this insufficient
kinematic data. With more precise observations of the internal
structure in the future, we can expect such uncertainty
bands to shrink towards a single upper-limit curve for $\langle \sigma v \rangle $, which
would improve the constraints on physics beyond the Standard Model.
\begin{figure}[h!]
\centering
\includegraphics[width=.45\linewidth]{figures/horologium_uncertainty.pdf}
\includegraphics[width=.45\linewidth]{figures/tucana_uncertainty.pdf}
\caption{Variation of the 95$\%$ C.L. upper limit on the WIMP pair-annihilation $\langle \sigma v \rangle $ with increasing $m_{DM}$ for the $b\bar{b}$ annihilation final state for (a) Horologium-I and (b) Tucana-II. The dashed lines represent the uncertainty in the DM density profiles of the UFDs, while the solid line denotes the 95$\%$ C.L. upper limit on $\langle \sigma v \rangle $ corresponding to the median J-factor. The colors and line-styles of the different curves are indicated in the diagram.}
\label{figure:crossuncertainty}
\end{figure}
\section{Synchrotron radiation from dwarf spheroidal galaxies}
\label{sec:synchr}
A charged particle propagating through the interstellar medium would
lose energy owing to a variety of electromagnetic processes such as
inverse Compton radiation, synchrotron radiation, Coulomb losses and
bremsstrahlung. While this applies to any charged particle, the
radiation would be substantial only if the particle is sufficiently
long-lived, or in other words if it is one of $e^\pm$ or $p, \bar
p$. The contributions from the last two are much smaller, both on
account of their larger masses as well as the small probability for a
quark (from the hard process) fragmenting into a $p, \bar p$. Given this,
we develop the subsequent arguments for $e^\pm$ alone. The other
species can be treated analogously.
A complete treatment of the DM-initiated synchrotron radiation must
consider the diffusion and aforementioned energy loss contributions
from the secondary particles. The formalism,
developed
in refs.\cite{Colafrancesco:2005ji, Colafrancesco:2006he, McDaniel:2017ppt},
can
be summarised by the transport equation for $n_e(r,E)$, the number density
of $e^\pm$ of a given energy
$E$ at the position $\mathbf{r}$ with respect to the center of the
dSph, {\em viz.},
\begin{align}\label{eqn:diffusion}
\frac{\partial}{\partial t} \frac{dn_e(r,E)}{dE}
= \nabla . \Big( D(E,\mathbf{r}) \nabla \frac{dn_e(r,E)}{dE}\Big)
+ \frac{\partial}{\partial E} \Big( b(E,\mathbf{r}) \frac{dn_e(r,E)}{dE}\Big)
+ Q_e (E,\mathbf{r}).
\end{align}
Here, $ D(E,\mathbf{r})$ is the space-dependent diffusion coefficient,
$b(E,\mathbf{r})$ encapsulates the energy loss term and the source
term $Q_e$ is given by
\begin{align}
Q_e (E,\mathbf{r}) = \frac{\rho^2_\chi(\mathbf{r}) \langle \sigma v\rangle}{2 m_\chi^2} \frac{dN_{e}}{dE},
\end{align}
where $dN_{e}/dE$ is the spectrum of $e^{\pm}$ produced per DM
annihilation event.
The energy loss term receives several independent contributions
(from the processes listed earlier) and is given by \cite{Colafrancesco:2005ji,McDaniel:2017ppt}
\begin{align}
b(E) = & b_{IC}(E) + b_{Syn}(E) + b_{Coul}(E) + b_{brem}(E) \nonumber \\
= & b_{IC}^0 E^2 + b_{syn}^0 B^2 E^2 \nonumber \\
& + b_{Coul}^0 n \left( 1 + \log(\gamma /n)/75 \right) + b_{brem}^0 n \left( \log(\gamma/n) + 0.36 \right),
\end{align}
where the dependence on $\mathbf{r}$ has been suppressed.
Here, $B$ is the magnetic field in $\mu G$, whereas $n$ is the number
density of thermal electrons in cm$^{-3}$ and $\gamma = E/m_e$
is the time-dilation factor.
The various energy loss
parameters $b^0_{\rm IC}, b^0_{\rm syn}, b^0_{\rm Coul} $ and $b^0_{\rm brem}$
have values $ 0.25,0.0254,6.13$ and $1.51 $ respectively in the units
of $ 10^{-16}$ GeV s$^{-1}$ \cite{Colafrancesco:2005ji}.
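
The loss term is straightforward to code up. A minimal sketch, with $E$ in GeV, $B$ in $\mu$G and $n$ in ${\rm cm}^{-3}$, and the coefficients quoted above:
\begin{verbatim}
import numpy as np

def b_loss(E, B=1.0, n=1e-6):
    """Total e+/- energy-loss rate b(E) in GeV/s.

    E in GeV, B in micro-Gauss, n in cm^-3; coefficients in 1e-16 GeV/s."""
    b_ic, b_syn, b_coul, b_brem = 0.25, 0.0254, 6.13, 1.51
    gamma = E / 511e-6                   # time-dilation factor (m_e = 511 keV)
    return 1e-16 * (b_ic * E**2 + b_syn * B**2 * E**2
                    + b_coul * n * (1.0 + np.log(gamma / n) / 75.0)
                    + b_brem * n * (np.log(gamma / n) + 0.36))

print(b_loss(10.0))   # ~2.8e-15 GeV/s at 10 GeV: inverse-Compton dominated
\end{verbatim}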
In the absence of detailed knowledge of the structure of the dSph,
the diffusion coefficient $D(E, \mathbf{r})$ may be
assumed to be independent of position. Comparisons of simulations with
data suggest that
the consequent loss of accuracy is not too severe and that
the form
\begin{equation}\label{eqn:diffusion_coefficient}
D(E) = D_{0} \left(\frac{E}{1 \rm GeV}\right)^{\gamma_{D}}
\end{equation}
may be used.
Assuming spherical symmetry, a uniform magnetic field and a uniform
number density of thermal electrons, the stationary state solution of
the diffusion equation is given by
\begin{align}\label{eqn:solutionndifusion}
\frac{dn_e}{dE}(r,E) = \frac{1}{b(E)} \int_{E}^{M_\chi} dE^\prime \,
G\Big(r, v(E)-v(E^\prime)\Big) Q_e(E^\prime,r),
\end{align}
where the Green's function is given by
\begin{equation}
G(r, \Delta v) = \frac{1}{\sqrt{4\pi \Delta v}} \sum_{k=-\infty}^{\infty} (-1)^k
\int_{0}^{r_h} dr^\prime \frac{r^\prime}{r_k}
\left(\frac{\rho_\chi(r^\prime)}{\rho_\chi(r)}\right)^2
\left[ \exp\left(-\frac{(r^\prime -r_k)^2}{4 \Delta v}\right)
- \exp\left(-\frac{(r^\prime + r_k)^2}{4 \Delta v}\right)
\right] \ .
\end{equation}
Here, $r_k = (-1)^{k}\, r + 2\,k\,r_h$ locates the $k$-th image charge (the standard free-escape construction), and $r_h$ defines the diffusion zone of the dSph, namely the
radius at which the free escape boundary condition $dn_e(r_h,E)/dE =
0$ may be imposed. Typically, $r_h$ is approximately twice the radius
of the last stellar component of the galaxy ({\em i.e.}, twice the
distance of the outermost star from center).
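
Numerically, the image-charge sum converges quickly, since the Gaussians are strongly suppressed once $|r_k| \gg \sqrt{\Delta v}$. A minimal sketch is given below; it assumes a callable density profile with all lengths in kpc (so that $\Delta v$ carries kpc$^2$), and the truncation of the sum and the small-$r'$ cutoff are numerical conveniences rather than part of the formalism.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def greens(r, dv, rho, r_h, kmax=10):
    """Green's function with free-escape boundary at r_h (image charges).

    r, r_h in kpc; dv = v(E) - v(E') in kpc^2; rho a callable profile."""
    total = 0.0
    for k in range(-kmax, kmax + 1):
        r_k = (-1)**k * r + 2.0 * k * r_h     # image-charge positions
        f = lambda rp: (rp / r_k) * (rho(rp) / rho(r))**2 * (
            np.exp(-(rp - r_k)**2 / (4.0 * dv))
            - np.exp(-(rp + r_k)**2 / (4.0 * dv)))
        val, _ = quad(f, 1e-4, r_h)           # small-r' cutoff for the cusp
        total += (-1)**k * val
    return total / np.sqrt(4.0 * np.pi * dv)
\end{verbatim}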
The synchrotron power spectrum or the total power radiated per unit
frequency at $\nu$ by an electron of energy $E$ present
in a magnetic field $B$, {\em viz.}, $P_{\rm synch}(\nu,E,B)$ is defined as:
\begin{equation}
P_{\rm synch}(\nu, E, B) = \pi \sqrt{3} r_0 m_e c \nu_0 \, \int_0^\pi \, d\theta \, \sin^2\theta \, F\big(\frac{x}{\sin\theta }\big),
\end{equation}
where $\theta$ is the pitch angle, $r_0 = e^2/(m_e c^2)$ is the
classical electron radius and $\nu_0 = eB/(2\pi m_e c)$ is the
non-relativistic gyro-frequency. While
\begin{equation}
F(y) = y \, \int_y^\infty d\zeta \, K_{5/3}(\zeta) \simeq 1.25 \, y^{1/3}\,
e^{-y} \, (648 + y^2)^{1/12} \ ,
\end{equation}
the quantity $x$ is given by
\begin{equation}
x = \frac{2 \, \nu\, m_e^2 \, (1+z)}{3 \, \nu_0\, E^2}
\end{equation}
with $z$ being the redshift of the source. For the dSphs under consideration,
$z \approx 0 $.
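
The power spectrum is likewise simple to evaluate with the approximate $F(y)$. A minimal sketch in cgs units follows (the numerical constants are standard values; clipping large $y$ merely avoids floating-point overflow):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

R0, MEC, E_ESU = 2.818e-13, 2.731e-17, 4.803e-10  # r_0 [cm], m_e c, e [esu]
ME_GEV = 511e-6                                    # electron mass [GeV]

def P_synch(nu, E, B_muG, z=0.0):
    """Power per unit frequency [erg/s/Hz] radiated at nu [Hz] by an
    electron of energy E [GeV] in a field of B_muG micro-Gauss."""
    nu0 = E_ESU * B_muG * 1e-6 / (2.0 * np.pi * MEC)  # gyro-frequency [Hz]
    x = 2.0 * nu * ME_GEV**2 * (1.0 + z) / (3.0 * nu0 * E**2)
    F = lambda y: 1.25 * y**(1.0/3.0) * np.exp(-y) * (648.0 + y*y)**(1.0/12.0)
    g = lambda th: np.sin(th)**2 * F(min(x / np.sin(th), 600.0))
    integral, _ = quad(g, 1e-6, np.pi - 1e-6)
    return np.pi * np.sqrt(3.0) * R0 * MEC * nu0 * integral

# peak energy for nu = 1.4 GHz, B = 1 micro-Gauss
E = np.logspace(0, 3, 300)                         # GeV
p = [P_synch(1.4e9, e, 1.0) for e in E]
print(E[int(np.argmax(p))])                        # -> a few tens of GeV
\end{verbatim}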
We can, now, estimate the total energy radiated or the local
emissivity (i.e. the amount of energy radiated at a given
$\mathbf{r}$, per unit volume per unit time) at a given frequency
$\nu$ in the form of synchrotron radiation in terms of $P_{\rm synch}$
and $dn_e/dE$ {\em viz.},
\begin{equation}\label{eqn:emissivity}
j_{\rm synch}(\nu, r)
= \int_{m_e}^{M_{\chi}} dE \left(\frac{dn_{e^+}}{dE}
+ \frac{dn_{e^-}}{dE}\right) P_{\rm synch}(\nu,E,B) =
2 \int_{m_e}^{M_{\chi}} dE \, \frac{dn_{e^-}}{dE}\,
P_{\rm synch}(\nu,E,B) \ .
\end{equation}
The integrated synchrotron flux density spectrum is, now, given by
\begin{equation}\label{eqn:syn_flux}
S_{\rm synch}(\nu) = \frac{1}{4\pi d^2}\int d^3 r \, \, j_{\rm synch}(\nu,r),
\end{equation}
where $d$ is the distance to the dSph and
the integration is over the whole diffusion volume.
\subsection{Synchrotron radiation from the newly discovered dwarf spheroidal galaxies}
As we have seen above, unlike in the case of the
gamma-rays,
the synchrotron flux is not simply proportional to the $J$-factor, chiefly on
account of its dependence on the diffusion and energy-loss processes.
Consequently, apart from the particle physics details and the
astrophysical parameters already discussed, the synchrotron flux also
depends on the magnetic field $B$ inside the dSph as well as
the diffusion coefficient parametrized by $D_0$.
Observations indicate that the magnetic field of dSphs is of the
order of 1 $\mu$G
\cite{Colafrancesco:2006he,McDaniel:2017ppt, Spekkens:2013ik}.
Although dSphs lying within the outer reaches of the Milky Way's
magnetic field can have fields larger than 1 $\mu$G
\cite{McDaniel:2017ppt,Natarajan:2015hma}, for our calculations
we assume a uniform profile of strength
$B=1 \, \mu$G~\cite{Colafrancesco:2006he, Jeltema:2008hf}.
The diffusion coefficient, for which we assume the simplified form of
eqn.\ref{eqn:diffusion_coefficient}, has larger uncertainties. For
galaxy clusters, a value for the coefficient $D_0$ as large as
$10^{28}$--$10^{30}\, {\rm cm}^2/{\rm s}$ has been argued
for~\cite{Natarajan:2015hma,Jeltema:2008ax}. Constraints on the Milky
Way diffusion parameters can be inferred from
data~\cite{1992ApJ...390...96W,Baltz:1998xv,Maurin:2001sj} and
typically range between $10^{27}$--$10^{29}\, {\rm cm}^2/{\rm s}$.
Similarly, the parameter $\gamma_D$ is expected to lie in the range
$0\leq \gamma_D \leq 1$ \cite{Jeltema:2008ax}.
To be specific, we choose values close to the geometric means of the
individual ranges, namely
$D_0 = 3 \times 10^{28}\, {\rm cm}^2/{\rm s}$ and $\gamma_D= 0.3$
\cite{McDaniel:2017ppt}, postponing the discussion
of the dependence on the choices until a little later.
For a given DM particle, the synchrotron flux, understandably, depends
on the states to which it pair-annihilates and their subsequent
cascades. As in the preceding sections, we consider three annihilation
channels, {\em i.e.}, $b\bar{b}$, $\tau^+ \tau^-$ and
$\mu^+ \mu^-$, and, for the sake of simplicity, continue to assume that
a single channel dominates overwhelmingly. We use the RX-DMFIT
code \cite{McDaniel:2017ppt} for the calculation of the synchrotron
flux; to illustrate the spectral shape, we use a
typical value for the velocity-averaged DM annihilation cross section, namely
$\langle \sigma v \rangle = 10^{-26} \, {\rm cm}^3/{\rm
s}$, postponing a derivation of constraints on the same until
later. For all the dSphs we have used a thermal electron density
$n \approx 10^{-6}$
cm$^{-3}$ \cite{Colafrancesco:2006he,McDaniel:2017ppt},
and the NFW density profile for the DM distribution within.
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/dnde_200GeV_r_1e-1kpc_Tucana_II.pdf}}
\label{fig:dnde_200GeV}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/dnde_2TeV_r_1e-1kpc_Tucana_II.pdf}}
\label{fig:dnde_2TeV}
\caption{Solution of the diffusion equation at a radial distance $r=0.1$~kpc
for Tucana II for three different exclusive annihilation channels, to
$b\bar{b}$ (red), to $\tau^+ \tau^-$ (green) and to
$\mu^+ \mu^-$ (blue). Two different DM mass values have been
considered: (a) 200 GeV and (b) 2 TeV. For both cases $B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$
and $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s have been used.
NFW density profile has been used for DM distribution inside the dSph. }
\label{figure:dnde}
\end{figure}
Fig.\ref{figure:dnde} shows the stationary electron distribution
spectrum for Tucana II at a radial distance 0.1 kpc for two different
DM mass values, 200 GeV and 2 TeV. With the cascades from a $b$-decay being
capable of producing more $e^\pm$ than a $\tau$ or a $\mu$ can (the latter, only one), it is understandable that the integrated spectrum is much larger for
the $b\bar b$ channel than it is for the others. This also explains
the relative softness of the three spectra in Fig.\ref{figure:dnde}.
\begin{figure}[!h]
\centering
\includegraphics[width=.49\linewidth,height=3in]{figures/sync_power_vs_E.pdf}
\caption{Synchrotron power spectrum for different frequencies with magnetic field 1 $\mu$G.}
\label{figure:synpower}
\end{figure}
For a given frequency, the energy corresponding to the peak of the
synchrotron power spectrum contributes significantly to $j_{\rm
synch}(\nu,r)$ in Eqn.\ref{eqn:emissivity}. The power spectrum $P_{\rm
synch}(\nu,E,B)$ for $B= 1$ $\mu$G and for different frequencies in
the range 5 MHz--50 GHz are shown in
Fig.\ref{figure:synpower}. Understandably, for higher frequencies, the
synchrotron power peaks at a higher value of energy. Clearly, for a
given frequency, the channel resulting in a larger number of $e^\pm$
with energies closer to the peak of the synchrotron power spectrum
will result in a larger synchrotron flux. Hence, for higher
frequencies, the synchrotron flux from a leptonic channel will
dominate over that from a hadronic channel. This feature can be
observed in Fig.\ref{figure:bbbartautau_flux_comp}, where for a given
DM mass ({\em e.g.}, 200 GeV) the $\tau^+ \tau^-$ channel dominates
over the $b \bar{b}$ channel for higher frequencies; by the same
token, for lower frequencies, the $b\bar{b}$ channel dominates over
the $\tau^+ \tau^-$ channel. Since the electrons originating from the
pair-annihilation of a DM of mass $M_\chi$ can have a maximum energy
$M_\chi$, the spectrum corresponding to a heavier DM would be harder
(as shown by Fig.\ref{figure:dnde}). Consequently, for larger
frequencies, the synchrotron power spectrum peaks at a higher value of
the electron energy. This is reflected by
Fig.\ref{figure:bbbartautau_flux_comp} where, for a larger $M_\chi$,
the crossover from $b\bar b$ dominance to $\tau^+ \tau^-$ dominance
occurs at progressively higher frequencies.
\begin{figure}[h!]
\centering
\includegraphics[width=.49\linewidth,height=3in]{figures/Tucana_II_bbbar_tautau_mumu_200GeV_2TeV_flux.pdf}
\caption{Synchrotron flux for three final states for DM annihilation, i.e., 100$\%$ to $b\bar{b}$ (red), 100$\%$ to $\tau^+ \tau^-$ (green) and 100$\%$ $\mu^+ \mu^-$ (blue).
The solid line corresponds to DM mass 200 GeV and the dashed line corresponds to DM mass 2 TeV.
$B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$
and $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s have been used.
NFW density profile has been used for DM distribution inside the dSph. }
\label{figure:bbbartautau_flux_comp}
\end{figure}
\begin{table}[!h]
\centering
\begin{tabular}{|p{2.5cm}|p{4cm}|p{4cm}|}
\hline \hline
Galaxy & From GMRT (unit Jy) & From VLA (unit Jy) \\
& Frequency~=~0.1475 GHz & Frequency~=~1.4 GHz \\
\hline \hline
Aquarius-II & $3.5 \times10^{-3}$ & $4.4 \times10^{-4}$ \\
\hline \hline
Draco-II & $4.6 \times10^{-3}$ & $5.8 \times10^{-4}$ \\
\hline \hline
Eridanus-II & $4 \times10^{-3}$ & X \\
\hline \hline
Grus-I & $2.1 \times10^{-3}$ & X \\
\hline \hline
Hydra-II & $4.5 \times10^{-3}$ & $5.8 \times10^{-4}$ \\
\hline \hline
Leo-V & $3.1 \times10^{-3}$ & $5 \times10^{-4}$ \\
\hline \hline
Pegasus-III & $5.1 \times10^{-3}$ & $4.9 \times10^{-4}$ \\
\hline \hline
Pisces-II & $1.8 \times10^{-3}$ & $4.5 \times10^{-4}$ \\
\hline \hline
Triangulum-II & $3.1 \times10^{-3}$ & $5.5 \times10^{-4}$ \\
\hline \hline
Draco & $3.7 \times10^{-3}$ & $4.7 \times10^{-4}$ \\
\hline \hline
\end{tabular}
\caption{Upper limits on the radio flux density from the dSphs obtained from GMRT and VLA. The absence of data has been indicated by an ``X''.
For Carina-II, Horologium-I, Reticulum-II, Tucana-II and Tucana-III, neither
observatory provides any data.}
\label{table:radio_flux_upper_limits}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=.3\linewidth]{figures/bbbar_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\hskip 10pt
\includegraphics[width=0.9\linewidth]{figures/legends_Exclusion.pdf}
\caption{Upper limits on $\langle \sigma v \rangle$ for different values of the DM mass, obtained using data from GMRT and VLA. The left, center and right
panels are for $\chi \chi \rightarrow b \bar{b}\ , \ \tau^+ \tau^-\ , \ \mu^+\mu^- $ respectively.
For each plot $B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$ have been used.
NFW density profile has been used for DM distribution inside the dSphs.
For each dSph the values of the parameters $d$, $r_h$, $\rho_s$ and $r_s$ have been used from table \ref{table:astro_param_dwarfs}~.
}
\label{figure:sigmav_Exclusion_curve}
\end{figure}
Of the existing radio-frequency observatories, two particularly
important ones are
\begin{itemize}
\item the Giant Metrewave Radio Telescope (GMRT) \cite{Intema:2016jhx} with its sky-survey, covering the expanse over $-53^{\circ}$ to $+90^{\circ}$, with
a particularly useful set corresponding to $\nu =0.1475~{\rm GHz}$, and
\item the NVSS survey by the Very Large Array
(VLA) telescope \cite{condon1998}, over $-40^{\circ}$ to $+90^{\circ}$,
and at $\nu = 1.4~{\rm GHz}$.
\end{itemize}
The non-observation of any emission from the location of the UFDs
under consideration can be translated into 1$\sigma$ upper limits on the fluxes, as
listed in Table \ref{table:radio_flux_upper_limits}. Here we would like to mention that all radio interferometric maps are made per unit beam, where the beam is generally convolved with the respective point spread function (PSF) while making the map; the flux density can therefore be read off directly from the final image in units of Jy. Note here that
the limited sky coverage implies that no statement can be made for
certain UFDs, such as Tucana-II. These limits can, then, be
translated to upper limits on $\langle
\sigma v \rangle$ for different annihilation channels, as shown in
Figure \ref{figure:sigmav_Exclusion_curve}. As with the case of the
gamma-ray observations, no such limit can be derived for Hydra II and
Triangulum II as current observations only admit upper limits on their
densities (see Table \ref{table:table-1}).
For a given dSph the upper limit obtained from VLA is stronger
compared to that obtained from GMRT for higher values of DM mass. The
synchrotron flux for a dSph depends on the choice of magnetic field
(B) and the diffusion coefficient ($D_0$). Hence, the upper limits on
$\langle \sigma v \rangle$ calculated using the upper limit on the
flux will depend on the choice of these parameters. In
Figure \ref{figure:Exclusion_curve_B_D0_dependence} we show the strong
effect of $B$ and $D_0$ on the resulting upper limit on
$\langle \sigma v\rangle$. Since Aquarius II gives the strongest
limit on $\langle \sigma v \rangle$ among the newly discovered UFDs,
we choose it to show the effect of $B$ and $D_0$ on the upper limit.
Figure \ref{figure:Exclusion_curve_VLA_Fermilat_comparison} shows the
comparison between the best limits from VLA and the best limits from
Fermi-LAT for all three annihilation channels, {\em i.e.}, $b\bar{b}$, $\mu^+ \mu^-$ and $\tau^+ \tau^-$. For Fermi-LAT,
Horologium I gives the best limit, while for VLA, Draco gives the best
limit among all the considered dSphs. It can be observed that, over the mass range 10 GeV--10 TeV, VLA gives the stronger limits for $\mu^+ \mu^-$ and $\tau^+ \tau^-$, whereas for
$b\bar{b}$ Fermi-LAT gives the stronger constraint towards lower DM mass.
\begin{figure}[h!]
\centering
\includegraphics[width=.4\linewidth]{figures/Aquarius_II_bbbar_Exclusion_Curve_sigmav_versus_M_B_D0_Dependence.pdf}
\caption{Upper limits on $\langle \sigma v \rangle$ for the $b \bar{b}$ annihilation channel for Aquarius II with different choices of $B$ and $D_0$.
For all three cases we have used a constant value of $\gamma_D = 0.3$.
NFW density profile has been used for DM distribution inside the dSph.}
\label{figure:Exclusion_curve_B_D0_dependence}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=.3\linewidth]{figures/bbbar_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\caption{Comparison of upper limits on $\langle \sigma v \rangle$ from VLA and Fermi-LAT. We have compared the best limit from VLA (Draco)
with the best limit from Fermi-LAT (Horologium I). The left, center and right
panels are for $\chi \chi \rightarrow b \bar{b}\ , \ \tau^+ \tau^-\ \rm{and} \ \mu^+\mu^- $ respectively.
The VLA limit for Draco has been obtained with the astrophysical parameters
$B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s and $\gamma_D = 0.3$. NFW density profile has been used for DM distribution inside the dSph.}
\label{figure:Exclusion_curve_VLA_Fermilat_comparison}
\end{figure}
The Square Kilometre Array (SKA) is the world's largest radio telescope ever
planned, and the search for particle signatures of DM is one of the primary
goals of the project \cite{Bull:2018lat, Colafrancesco:2015ola}. SKA
will work over a wide radio frequency range, 50 MHz--50 GHz. This
enables SKA to observe the synchrotron emission from DM annihilation in
astrophysical structures \cite{Bull:2018lat, Colafrancesco:2015ola}.
We calculate the synchrotron flux from the newly discovered dSphs and
study the possibility of observing these signals at SKA.
Figure \ref{figure:synflux_newgalaxies} shows the resultant synchrotron
fluxes, for the dSphs listed in Table \ref{table:astro_param_dwarfs},
for each of the three exclusive annihilation channels ($b\bar{b}$,
$\tau^+ \tau^-$ and $\mu^+ \mu^-$). Also shown are the predicted
sensitivity curves for 10 hr, 100 hr and 1000 hr of data collecting
time for the SKA telescope. The SKA sensitivity limits have been
obtained from \cite{braun2017ska,braun2019anticipated}. For a given
frequency, if the synchrotron flux from a dSph lies above the SKA
sensitivity curve, then the flux can be observed by the SKA. We have
calculated the synchrotron flux for two different values of DM mass,
i.e., 200 GeV and 2 TeV. For the prediction of the synchrotron flux we
have used the annihilation cross section $\langle \sigma v \rangle $
$=$ $10^{-26}$ cm$^3$/s. For both 200 GeV and 2 TeV DM mass
this value of the cross section satisfies the upper limits obtained
earlier.
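\noindent The detectability test described above amounts to checking, frequency by frequency, whether the predicted spectrum lies above the relevant sensitivity curve. The Python sketch below illustrates this comparison; the file names and column conventions are hypothetical placeholders, the actual sensitivity curves being those of \cite{braun2017ska,braun2019anticipated}.
\begin{verbatim}
import numpy as np

# Sketch of the SKA detectability check. Both curves are assumed to be
# sorted in increasing frequency and expressed in the same flux units.
def detectable(nu_pred, flux_pred, nu_sens, flux_sens):
    """True if the predicted synchrotron flux exceeds the sensitivity
    curve anywhere in the overlapping frequency range."""
    lo = max(nu_pred.min(), nu_sens.min())
    hi = min(nu_pred.max(), nu_sens.max())
    nu = np.logspace(np.log10(lo), np.log10(hi), 200)
    pred = np.interp(nu, nu_pred, flux_pred)
    sens = np.interp(nu, nu_sens, flux_sens)
    return np.any(pred > sens)

# Usage with hypothetical two-column files (frequency, flux density):
# nu_p, f_p = np.loadtxt("synflux_dsph_bbbar_200GeV.txt", unpack=True)
# nu_s, f_s = np.loadtxt("ska_sensitivity_1000hr.txt", unpack=True)
# print(detectable(nu_p, f_p, nu_s, f_s))
\end{verbatim}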
\begin{figure}[h!]
\centering
\includegraphics[width=.3\linewidth]{figures/bbbar_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\includegraphics[width=.3\linewidth]{figures/bbbar_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=0.9\linewidth]{figures/legends.pdf}
\caption{Synchrotron fluxes from the new galaxies including Draco. The left, center and right
panels are for $\chi \chi \rightarrow b \bar{b}\ , \ \tau^+ \tau^-\ , \ \mu^+\mu^- $ respectively, while the top (bottom) rows correspond to $M_\chi = 200$~GeV ($2$~TeV).
For each plot $B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$ and $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s have been used.
NFW density profile has been used for DM distribution inside the dSphs.
For each dSph the values of the parameters $d$, $r_h$, $\rho_s$ and $r_s$ have been taken from Table \ref{table:astro_param_dwarfs}.
}
\label{figure:synflux_newgalaxies}
\end{figure}
Fig.~\ref{figure:synflux_newgalaxies} also contains the synchrotron
flux for one of the well-studied classical dSphs, namely Draco (with a
$J$-factor of
$0.679 \times 10^{19}$ GeV$^2$ cm$^{-5}$ at $\theta = 0.5^{\circ}$). It is worth noting
that for an identical choice of parameters $(\langle\sigma v\rangle, B,
D_0, M_{\chi})$, the flux from Draco is expected to be larger than those
from the newly discovered dSphs.
We observe that for a 200 GeV DM annihilating through any one of the three channels $b \bar{b}$, $\mu^+ \mu^-$ and $\tau^+ \tau^-$,
the synchrotron flux from all 15 dSphs can be observed with 1000 hrs of data collection time at SKA.
For a given annihilation channel, as we go higher in DM mass the synchrotron flux decreases, and the feasibility of observing radio signals at SKA
decreases accordingly. This can be seen from the plots for 2 TeV DM mass in Figure \ref{figure:synflux_newgalaxies}.
For a 2 TeV DM the synchrotron signals from the $b \bar{b}$ channel can be observed with 1000 hr at SKA for all 15 dSphs,
but for most of the 15 dSphs the synchrotron signals from the $\mu^+ \mu^-$ and $\tau^+ \tau^-$ channels
cannot be observed even with 1000 hr of data collection time at SKA.
Only for Draco can the synchrotron signals, for both 200 GeV and 2 TeV DM and for all three annihilation channels,
be observed with only 10 hr of SKA data.
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_d_fundamental_uncertainty.pdf}}
\label{fig:d_uncertainty}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_rhalf_fundamental_uncertainty.pdf}}
\label{fig:rhalf_uncertainty}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_sigma_fundamental_uncertainty.pdf}}
\label{fig:sigma_uncertainty}
\caption{ Uncertainty in the synchrotron flux from Tucana II due to 1$\sigma$ uncertainty in the parameters (a) $d$, (b) $r_{\frac{1}{2}}$
and (c) $\sigma$. NFW density profile has been used for DM distribution inside the dSph.
We have considered a 200 GeV DM annihilating to $b\bar{b}$ final state. For each of the three cases we have used
$B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$ and $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s.
}
\label{figure:uncertainty}
\end{figure}
\subsection{Uncertainty in Synchrotron flux}
\label{sec:synchrotron_uncertainty}
As with the case of the gamma-ray fluxes, the synchrotron fluxes too
are subject to uncertainties on account of the errors in determining
the astrophysical parameters $d$, $r_{1/2}$ and $\sigma$. Using the
$1\sigma$ uncertainties
in these, as listed in
Table \ref{table:astro_fundamental_param_dwarfs}, we show, in
Figure \ref{figure:uncertainty}, the consequent
uncertainties in the synchrotron flux for a 200 GeV DM annihilating to the $b\bar{b}$ final
state in Tucana II. This particular UFD is chosen for the demonstration as it is the
one associated with the highest synchrotron flux.
\begin{figure}[h!]
\centering
\includegraphics[width=.49\linewidth,height=3in]{figures/bbbar_NFW_Burkert_ISO_Tucana_II_200GeV.pdf}
\caption{Synchrotron flux vs. frequency for Tucana II for different dark matter distribution profiles: NFW, Burkert and ISO.
We have considered a 200 GeV DM annihilating to $b\bar{b}$ final state. We have used
$B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$ and $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s.}
\label{figure:sync_profile_dependence}
\end{figure}
A further source of uncertainty is the assumed density distribution of the DM.
While we have, until now, used the NFW profile, in
Fig.~\ref{figure:sync_profile_dependence} we display the differences in
predicted flux for Tucana II for the NFW, Burkert and ISO profiles. It is
interesting to note that while, for the gamma-ray flux, the NFW
profile closely corresponded to the median prediction, in the present
context it leads to the highest fluxes, while the Burkert profile
leads to the lowest.
The values of the magnetic field ($B$), the diffusion constant ($D_0$) and the
exponent ($\gamma_D$) for the dSphs are also not known precisely. In
Section \ref{sec:analysis} we discussed the possible values for
$B$, $D_0$ and $\gamma_D$. To see the effect of these parameters on
the predicted amount of synchrotron flux, in
Figure \ref{fig:flux_diff_BD0gamma} we plot the synchrotron flux for
different values of $B$, $D_0$ and $\gamma_D$. We have taken the
values of $B$ in the range $0.5$--$10$ $\mu$G, $D_0$ in the range
$3\times 10^{26}$--$10^{30}$ cm$^2$/s and $\gamma_D$ in the range
$0.1$--$1$. Since the magnetic field is the origin of the synchrotron radiation,
the amount of flux increases as we go higher in magnetic field. From
Figures \ref{fig:flux_different_D0} and \ref{fig:flux_different_gamma}
we can see how strongly diffusion affects the amount of synchrotron
emission in dSphs \cite{Colafrancesco:2005ji,Colafrancesco:2006he}.
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_different_B.pdf}}
\label{fig:flux_different_B}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_different_D0.pdf}}
\label{fig:flux_different_D0}
\subfigure[]
{\includegraphics[width=0.5\linewidth]{figures/Tucana_II_bbbar_200GeV_different_gamma.pdf}}
\label{fig:flux_different_gamma}
\caption{ Synchrotron flux versus frequency for different values of (a) $B$, (b) $D_0$ and (c) $\gamma_D$.
The mass of DM is 200 GeV and the DM pair annihilates to $b\bar{b}$ final state with $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s.
NFW density profile has been used for DM distribution inside the dSph.}
\label{fig:flux_diff_BD0gamma}
\end{figure}
\section{Conclusions}\label{section:conclusion}
\noindent Dwarf spheroidal galaxies, apart from being dominated by dark matter,
have almost no millisecond pulsars, and yet possess considerable
large scale magnetic fields. These render them ideal aren\ae\ for
indirect detection of dark matter through photonic signals.
In this paper, we have studied such signals in two distinct
parts of the electromagnetic spectrum, namely
high energy gamma rays on the one hand and radio signals on the other.
Eleven years of gamma-ray data collected by the Fermi-LAT
collaboration has failed to show any significant excess from any of
the fifteen ultra faint dwarf spheroidal galaxies. The stringent upper
limits on the $\gamma$-ray flux that these null-results imply are
translated to upper limits on the thermal average of the pair
annihilation cross section $\langle\sigma v\rangle$, for a given value
of $M_{\rm DM}$. We do this for 12 of the UFDs. As for the remaining
three (viz. Triangulum-II, Hydra-II and Tucana-III), since only upper
limits on the corresponding (astrophysical) $J$-factors are available,
the best one can do is to project a {\em best possible} upper bound.
The DM could, of course, annihilate into a plethora of channels, with
the relative probabilities having a strong dependence on the
underlying dynamics governing the system. Rather than be slave to a
particular model, we consider instead that the pair annihilation
proceeds {\em exclusively} through one of three channels, {\em viz.},
$b \bar b$, $\tau^+\tau^- $ and $\mu^+ \mu^-$. The ensuing independent
constraints can, then, be combined to constrain the parameter space of
any model where the annihilation is dominated by one of the three
channels. Indeed, simple scaling arguments can also be used to infer
bounds applicable for annihilations into other light fermions.
As far as the gamma-ray signal is concerned, of all the dwarf
galaxies, Horologium-I imposed the most stringent limits, and, for
the $b \bar b$ channel, easily outdoes the limits obtained by the
Planck collaboration. It needs
to be said here that the limits obtained by us have a significant
dependence on the DM density profile. While most of our results were
derived using the NFW profile, we have considered the Burkert and ISO
profiles as well. In general the Burkert profile led to the strongest
bounds on $\langle \sigma v\rangle$ and the ISO the weakest. Our
limits, thus, represent, typically, the middle-of-the-road
constraints.
On the other hand, electrons produced by the annihilation of
the dark matter to Standard Model particles emit synchrotron
radiation as the electrons travel through the magnetic field of the
dSphs. The synchrotron radiation falls in the radio frequency range.
We calculated the upper limits on the DM annihilation cross section for different annihilation final
states, i.e., $b\bar{b}$, $\tau^+ \tau^-$ and $\mu^+ \mu^-$, for the dSphs using the available radio flux upper limits from the GMRT and VLA telescope data.
We compared these limits with those obtained from Fermi-LAT data and found that Fermi-LAT gives the strongest
limit for the $b\bar{b}$ channel towards lower DM masses, while VLA dominates for the leptonic channels. For all dSphs, we have calculated the synchrotron fluxes for
the aforementioned final states for frequencies ranging from 10 MHz to 100 GHz, assuming
fixed values for the annihilation cross section ($\langle \sigma v
\rangle$), magnetic field $(B)$, diffusion coefficient $(D_0)$ and
the Kolmogorov parameter ($\gamma_D$). To showcase the dependence
of the synchrotron radiation on the dark matter mass we have
considered two representative values of DM mass, i.e., 200 GeV and 2
TeV. The SKA telescope will be sensitive over the 50 MHz
to 50 GHz frequency range. We have compared the predicted
synchrotron fluxes with the sensitivity curves of the SKA telescope for 10
hr, 100 hr and 1000 hr of data collection time.
\section{Reply to Examiner's Comment}
\hrulefill \\
\section{Authors' Reply}
The author of the thesis entitled \textbf{``Study Of Potential Self-Annihilation Signal From Dark Matter Particles In Some Prospective Astrophysical Dark Matter Sources"} thanks the examiner very much for pointing out a few important and valid points. All the responses are given below.
\subsection{Questions to be clarified in the text/during viva-voce.}
\begin{enumerate}
\item Referee's Comments: \\
\noindent Discuss the bounds/exclusions on mass of non-baryonic DM from previous searches.\\
\textbf{Author's response :} \\
\noindent I discussed the possible bounds on DM mass obtained from previous searches in the conclusion section of Chapter 6.\\
\item Referee's Comments: \\
\noindent \textbf{(Section 3.2)} Table 3.1 mentions that the energy resolution is $<$ 15$\%$ at $>$ 100 MeV.
How the energy resolution of the Fermi-LAT depends on the energy of the incident photon?
What are the factors that affect the energy resolution of the detector? How is it possible to
have a good energy resolution, when for a high energy photon ($>$100 GeV), ``most of the
shower fall outside the active region of CAL''?\\
\textbf{Author's response :} \\
\noindent The energy resolution of the Fermi-LAT strongly depends on the energy of the incident photons. Between 1 GeV and 100 GeV (which is the best region for spectral analysis), the energy resolution of Fermi-LAT is $<10\%$, whereas at 100 MeV it is around $20\%$.
At low energy, the spacecraft passes through regions of high particle background, which increases the background activity in the LAT and reduces the photon selection efficiency. At high energy, the limitations of the LAT approach are mostly due to the small statistics available.
In Pass 8 the Fermi-LAT collaboration introduced a clustering stage, which proved to be very effective in improving the energy resolution at high energy.\\
\item Referee's Comments: \\
\noindent \textbf{(Section 6.4)} In Fig. 6.2(a), with nine years of Fermi-LAT data,
for 100$\%$ $b\bar{b}$ channel the TS value peaks at $m_{DM}$ = 14 GeV, while for 100$\%$ $\tau^{+} + \tau^{-}$ it peaks at $m_{DM}$ = 4 GeV [20]. Have you checked, if the peak value of TS varies (14 GeV for $b\bar{b}$ and 4 GeV for $\tau^{+} + \tau^{-}$) with the amount of data used in analysis. Figure 6.2(a) seems to indicate that the
position of the peak value for TS is changing with the amount of data analysed.\\
\textbf{Author's response :} \\
\noindent Yes, we checked whether the peak TS value varies with the period of data used and found that it is more or less constant.\\
\item Referee's Comments: \\
\noindent \textbf{(Figure 6.3)} Comment on how the `error bars' in the figure have been assigned – statistical or statistical+
systematic. The error bars are large in the region of interest (in the right plot). The (apparent) excess is over a rather large range of energy.\\
\textbf{Author's response :} \\
\noindent At lower energy, the large `error bars' are mainly caused by the systematics of the instrument.\\
\item Referee's Comments: \\
\noindent \textbf{(Section 6.4.1, P69)}
There were many earlier studies that already analyzed Tuc-II with six or seven years of Fermi-LAT data [212–215]. But in our analysis, we have studied Tuc-II with nine years of Fermi-LAT data and thus the increase in TS peak values possibly originates from the larger dataset [20]. Hence, such an increase in $\gamma$-ray excess with increasing the time period of analysis seems to encourage the indirect detection of DM annihilation signal [20].
If possible, add results from previous analyses in table 6.5 or figure 6.2. Add a comment
in the text too.\\
\textbf{Author's response :} \\
\noindent I added a few lines in Chapter 6.\\
\item Referee's Comments: \\
\noindent \textbf{(P71)} How p-value is calculated should be defined in chapter 4 or in the annexure.\\
\textbf{Author's response :} \\
\noindent I described the p-value calculation in Chapter 4.\\
\item Referee's Comments: \\
\noindent What improvements in the observations (detector, data analysis, etc) can make your results more robust and put more stringent limits?\\
\textbf{Author's response :} \\
\noindent There are several ways to improve our results. The three most important factors are: 1) longer periods of data, 2) improving the sensitivity of the detector, and 3) improving the energy and angular resolutions.\\
\item Referee's Comments: \\
\noindent \textbf{(P83)} What are the future experiments that can investigate the excess seen in Tuc-II?
Discuss the features of the future experiments which can help resolve the issue.\\
\textbf{Author's response :} \\
\noindent The Cherenkov Telescope Array (CTA) would be the most crucial detector to investigate the excess seen in Tuc-II. I discussed this in Chapters 7 and 8.\\
\item Referee's Comments: \\
\noindent Table 7.8 shows that J values calculated using NFW, ISO and BURKERT vary widely for
UGC 3371, etc. But, Table 8.2 shows that the J values obtained from them are close for
Aquarius II, etc. Please explain the difference.\\
\textbf{Author's response :} \\
\noindent The value of the J-factor can be very different for each source. We can only compare them for similar sources.\\
\end{enumerate}
\hrulefill \\
\subsection{General Editorial Comments}
\begin{enumerate}
\item Referee's comment: \\
\noindent References should appear in sequence. The first reference that appears in the text is
[25] whereas reference [1] comes after [39]. \\
\textbf{Author's response :} \\
\noindent I corrected the reference sequence.\\
\item Referee's comment: \\
\textbf{Use of capital letters in the section titles should be uniform. See the disparity below.}\\
\noindent 1.3 Observational Evidence of Dark Matter.\\
\noindent 1.4.2 Weakly interacting massive particles.\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I corrected this mistake.\\
\item Referee's comment: \\
\noindent Avoid using ‘artist’s rendition’ type figures/diagrams, replace if possible. \\
\textbf{Author's response :} \\
\noindent I modified the images.\\
\item Referee's comment: \\
\noindent Full form and meaning of terms like TSTART (MET), TSTOP (MET) be mentioned.\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I made the changes.\\
\end{enumerate}
\hrulefill \\
\subsection{Specific Editorial Comments}
\begin{enumerate}
\item Referee's comment : \\
\noindent \textbf{Abstract}\\
\noindent \textbf{P5}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentence in the Abstract (P5).\\
\item Referee's comment : \\
\noindent \textbf{Section 1.1}\\
\noindent \textbf{P1}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentence in section 1.1 (P1).\\
\item Referee's comment : \\
\noindent \textbf{P2}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the paragraph (P2).\\
\item Referee's comment : \\
\noindent \textbf{P3 first para}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the paragraph (P3 first para).\\
\item Referee's comment: \\
\noindent `Unfortunately, we are yet to detect this strange material in the laboratory.' Remove this sentence as the same is stated in the next.\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I removed the sentence.\\
\item Referee's comment : \\
\noindent \textbf{Section 1.2}\\
\noindent \textbf{P3}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentence in section 1.2 (P3).\\
\item Referee's comment : \\
\noindent \textbf{Section 1.2:}:\\
\noindent \textbf{P4}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentences in section 2.1 (P4).\\
\item Referee's comment : \\
\noindent \textbf{P5}
\textbf{Author's response :} \\
\noindent I added the abbreviation for HI.\\
\noindent Following the referee's suggestion, I modified the sentences in section 1.2 (P4).\\
\item Referee's comment : \\
\noindent \textbf{Section 1.3}
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions and modified Section 1.3 accordingly.
\item Referee's comment : \\
\noindent \textbf{P6}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentences in P6.\\
\item Referee's comment : \\
\noindent \textbf{P7}
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P7.\\
\item Referee's comment : \\
\noindent \textbf{P8}
\textbf{Author's response :} \\
\noindent I replaced the figure 1.3.\\
\item Referee's comment : \\
\noindent \textbf{Section 1.4}
\noindent \textbf{P9}
\textbf{Author's response :} \\
\noindent I followed all the referee's suggestions and incorporated the changes in P9 accordingly.\\
\item Referee's comment : \\
\noindent \textbf{P10}
\textbf{Author's response :} \\
\noindent I followed all the referee's suggestions and incorporated the changes in P10 accordingly.\\
\item Referee's comment : \\
\noindent \textbf{P11}
\textbf{Author's response :} \\
\noindent I followed all the referee's suggestions and incorporated the changes in P11 accordingly.\\
\item Referee's comment : \\
\noindent \textbf{P12}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P12.\\
\item Referee's comment : \\
\noindent \textbf{Section 1.5}\\
\noindent \textbf{P12}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for Section 1.5 (P12).\\
\item Referee's comment : \\
\noindent \textbf{Section P13}\\
\noindent \textbf{Figure 1.5 – make it bigger.}\\
\textbf{Author's response :} \\
\noindent I increased the size of figure 1.5.\\
\item Referee's comment : \\
\noindent \textbf{P14}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P14.\\
\item Referee's comment : \\
\noindent \textbf{Section 1.6}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentences in section 1.6.\\
\item Referee's comment : \\
\noindent \textbf{Section 1.6.1}\\
\noindent \textbf{P16}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated the changes in section 1.6.1 (P16).\\
\item Referee's comment : \\
\noindent \textbf{Section 1.6.3}\\
\noindent \textbf{P19}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for Section 1.6.3 (P19).\\
\item Referee's comment : \\
\noindent \textbf{Section 2.1}\\
\noindent \textbf{dwarf spheroidal galaxies (dSphs) – this abbreviation is mentioned 3 times in this chapter!}\\
\textbf{Author's response :} \\
\noindent I made the necessary changes.\\
\item Referee's comment : \\
\noindent \textbf{P21}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P21.\\
\item Referee's comment : \\
\noindent \textbf{P22}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P22.\\
\item Referee's comment : \\
\noindent \textbf{Section 2.1}
\noindent \textbf{P23, P24}\\
\noindent \textbf{Explain the symbols properly.}
\textbf{Author's response :} \\
\noindent In the corrected copy, I used different symbols for each density profile and explained them properly.\\
\item Referee's comment : \\
\noindent \textbf{P25, P26}\\
\noindent \textbf{DMFIT, DMFit – please make it uniform}\\
\noindent \textbf{DMFIT, Dark-SUSY, Pythia - references given multiple times - please remove.}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I used a uniform abbreviation. I also removed the repeated references.\\
\item Referee's comment : \\
\noindent \textbf{2.3.2}
\noindent \textbf{P26}\\
\noindent \textbf{Include the definition of $\theta_{min}$ and its numerical value used.}\\
\noindent \textbf{Include the definition of tidal radius $R_{t}$ and its numerical value used.}\\
\noindent \textbf{What is d in equation 2.8?}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I added the definitions of $\theta_{min}$, $R_{t}$ and d in P26.\\
\item Referee's comment : \\
\noindent \textbf{Section 2.3.4}
\noindent \textbf{P28}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for P28.\\
\item Referee's comment : \\
\noindent \textbf{Section 2.4}
\noindent \textbf{P28-P29}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for Section 2.4.\\
\item Referee's comment : \\
\noindent \textbf{P29}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated all the changes in P29.\\
\item Referee's comment : \\
\noindent \textbf{P32}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated all the changes in P32.\\
\item Referee's comment : \\
\noindent \textbf{Section 3.1}
\noindent \textbf{P36}\\
\textbf{Author's response :} \\
\noindent I followed all of the referee's suggestions for Section 3.1.\\
\item Referee's comment : \\
\noindent \textbf{Section 3.2.3}
\noindent \textbf{P38}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I made the necessary changes.\\
\item Referee's comment : \\
\noindent \textbf{Section 3.2.3.2}
\noindent \textbf{P40}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated all the changes in Section 3.2.3.2.\\
\item Referee's comment : \\
\noindent \textbf{Section 3.2.5}
\noindent \textbf{P42}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated all the changes in Section 3.2.5.\\
\item Referee's comment : \\
\noindent \textbf{P42-P43}\\
\noindent \textbf{Give reference for `PASS 8’.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I added the reference for `PASS 8'.\\
\item Referee's comment : \\
\noindent \textbf{Section 4.2.}
\noindent \textbf{P45}\\
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I incorporated all the changes in Section 4.2.\\
\item Referee's comment : \\
\noindent \textbf{Section 5.2}
\noindent \textbf{P52}\\
\noindent \textbf{Explain `event class' and `event type', particularly `event class 128' and `event type 3'.}
\textbf{Author's response :} \\
\noindent I explained the terms `event class' and `event type' in Section 5.2.\\
\item Referee's comment: \\
\noindent \textbf{Section 5.2}
\noindent \textbf{P52; Figure 5.1}\\
\noindent \textbf{The lines in the upper plot should be adequately explained
using legends in the figure itself as well as in the caption and the text. See figure 5.2 for example.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I made the necessary changes in the caption of Figure 5.2 and also added a few lines in Section 5.2.\\
\item Referee's comment : \\
\noindent \textbf{Section 6.1}
\noindent \textbf{P63}\\
\noindent \textbf{Reference for Michigan Magellan Fibre System (M2FS).}
\textbf{Author's response :} \\
\noindent I added the reference for M2FS.\\
\item Referee's comment : \\
\noindent \textbf{Section 6.2.1}
\noindent \textbf{P66; Figure 6.1}\\
\noindent \textbf{Same comment as Figure 5.1.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I made the necessary changes in the caption of Figure 6.1 and also added a few lines in Section 6.2.1.\\
\item Referee's comment: \\
\noindent \textbf{Section 6.3}
\noindent \textbf{P67}\\
\noindent \textbf{``For our work, we have taken the J-factors of Tuc-II, Ret-II and UMi from Evans et al., 2016
[12].'' This is repeat of what is already stated in the same section/paragraph.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I removed this sentence.\\
\item Referee's comment : \\
\noindent \textbf{P71}\\
\noindent \textbf{Give the reference for T-TEST here.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I added the reference for T-TEST.\\
\item Referee's comment : \\
\noindent \textbf{Section 6.4.2}
\noindent \textbf{P73}\\
\noindent \textbf{``Among all sort of above-mentioned background sources,'' to ``Among these'',}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified the sentence.\\
\item Referee's comment : \\
\noindent \textbf{Section 7.2}
\noindent \textbf{P87: Table 7.1}\\
\noindent \textbf{The table caption should be more explicit mentioning the quantities given in the columns.}
\textbf{Author's response :} \\
\noindent Following the referee's suggestion, I modified Table 7.1.\\
\end{enumerate}
\section{Overview of Likelihood}
\noindent For analyzing the Fermi-LAT data, it is very important to construct
the likelihood function. This function is needed to obtain the best-fit
model parameters during the analysis. These best-fitted parameters describe
the source's spectrum, its position and its real significance.
\noindent The likelihood function ($\mathcal{L}$) is the probability
of obtaining
the data under the assigned input model, i.e., assuming that the input model is true. The input model
includes a description of the nature of the gamma-ray sources (i.e. whether
they are point-like, extended or transient), as well as the intensity of each
source and its possible spectrum. For this purpose, we first assume that we
have a thorough knowledge of the mapping of the input model of the gamma-ray
sky
to the data.
\section{The LAT Likelihood Function}
\noindent During analysis, it is advisable to distribute the LAT data into a
large number of bins. The binning is important because the LAT counts are
dependent on many variables, thus despite having a large number of counts,
each bin will have a small number of counts. The observed number of counts in
each bin will follow the Poisson distribution.
\noindent The expression of the likelihood function is:
\begin{equation}
\mathcal{L}(\alpha|\mathcal{D}) \equiv P(\mathcal{D}|\alpha)
\end{equation}
\noindent Here, $\mathcal{L}$ is the probability of obtaining the data
($\mathcal{D}$) for a
given input model with parameters ($\alpha$).
\noindent For binned LAT likelihood analysis, the function $\mathcal{L}$ is
defined as
a product of Poisson likelihoods, i.e. the product of the probabilities of
observing the detected counts in each bin:
\begin{equation}
\mathcal{L}(\alpha|\mathcal{D}) = \prod \limits_{k} \frac{\lambda_{k}^{n_{k}}
e^{-\lambda_{k}}}{n_{k}!}
\end{equation}
$\mathcal{L}(\alpha|\mathcal{D})$ can also be written as:
\begin{eqnarray}
\mathcal{L}(\alpha|\mathcal{D}) &=& \prod \limits_{k} e^{-\lambda_{k}} \prod \limits_{k}
\frac{\lambda_{k}^{n_{k}}}{n_{k}!}\\
&=& e^{-N_{pred}} \prod \limits_{k} \frac{\lambda_{k}^{n_{k}}}{n_{k}!} ,
\end{eqnarray}
\noindent where $N_{pred}$ denotes the total number of counts predicted by the source model.\\
\noindent
Instead of $\mathcal{L}$, the logarithm of $\mathcal{L}$ is comparatively easier to handle, and it is this quantity that is maximized
during the fitting. The log-likelihood can
be
expressed in the following form:
\begin{equation}
\log \mathcal{L} = \sum \limits_{k} n_{k} \log \lambda_{k} - N_{pred}
\end{equation}
\noindent Here, the observed counts in bin $k$ are $n_{k} = n_{k}(\mathcal{D})$
and the counts predicted by the model are $\lambda_{k} = \lambda_{k}(\alpha)$.
If
the bin size is infinitely small, then we can assume $n_{k} = 0~\rm{or}~1$ (i.e. for unbinned
likelihood). In that case, the functional form of $\mathcal{L}$ would be:
\begin{equation}
\mathcal{L}(\alpha|\mathcal{D}) = \prod \limits_{k} \lambda_{k}^{n_{k}} e^{-\lambda_{k}}
\end{equation}
\noindent where $k$ now represents individual photons.
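\noindent As an illustration, the binned Poisson log-likelihood of Eqs.~(4.2)--(4.5) can be coded in a few lines of Python; the counts and model predictions below are toy numbers, not fit results.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def log_likelihood(n, lam):
    """Binned Poisson log-likelihood: n_k are the observed counts per
    bin, lam_k the model-predicted counts. The ln(n_k!) term is a
    constant in the model parameters and cancels in likelihood
    ratios."""
    n, lam = np.asarray(n, float), np.asarray(lam, float)
    return np.sum(n * np.log(lam) - lam - gammaln(n + 1.0))

# Toy example: five energy bins.
n_obs = [12, 7, 3, 1, 0]
lam_model = [10.5, 6.8, 3.1, 1.2, 0.4]
print(log_likelihood(n_obs, lam_model))
\end{verbatim}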
\noindent From Eqs.~(4.1--4.6), we observe that the likelihood function
explicitly depends on the counts predicted by the model, which in turn depend on
the differential $\gamma$-ray flux of each source. In Eq.~4.7, we express the
distribution of a $\gamma$-ray source as $S(E,\widehat{p}|\alpha)$ where
$\widehat{p}$ denotes the direction of a $\gamma$-ray in celestial coordinates.
With the help of the spacecraft orientation, direction in celestial
coordinates can be converted to instrument coordinates, i.e. to
$\widehat{v}(t|\widehat{p})$. The source model can be expressed as:
\begin{equation}
S(E,\widehat{p}|\alpha) = \sum \limits_{k} s_{k}(E)
\delta(\widehat{p}-\widehat{p_{k}}) +
S_{G}(E,\widehat{p}|\alpha)+S_{eg}(E,\widehat{p}|\alpha) + \sum \limits_{l}
S_{l}(E,\widehat{p}|\alpha)
\end{equation}
\noindent Here, $s_{k}(E), S_{G}(E,\widehat{p}|\alpha)$,
$S_{eg}(E,\widehat{p}|\alpha)$ and $S_{l}(E,\widehat{p}|\alpha)$ define the
source model for point source, galactic emission, extragalactic isotropic
emission and other sources, respectively. In case of faint $\gamma$-ray
sources,
it is very important to consider the correct model of the diffuse $\gamma$-ray
background. In Fermi-LAT, the diffuse background is divided into Galactic and
Extragalactic components:
\begin{itemize}
\item \textbf{Galactic Component:} A spatially-structured Galactic component
corresponding to the $\gamma$-ray emission from the interaction of cosmic rays
with interstellar gas, dust, and photon fields. \\
\item \textbf{Isotropic Extragalactic Component:} An isotropic component
corresponding to the combination of extragalactic $\gamma$-ray emission and
instrumental charged particle background.
\end{itemize}
\noindent At the time of searching for a new $\gamma$-ray source, it is
advisable to free the normalization parameters of the two diffuse components
while keeping the spectral shapes fixed. Now, if we want to calculate the
counts predicted by a model for a given bin $k$, we convolve the
differential flux of each $\gamma$-ray source with the IRFs:
\begin{equation}
\lambda_{k}(\alpha) = \sum \int\int\int S(E,\widehat{p}|\alpha)
A(E,\widehat{v})
P(\widehat{v}'|E,\widehat{v}) D(E'|E,\widehat{v}) d\Omega dE dt
\end{equation}
\noindent In Eq.~4.8, we have summed over all $\gamma$-ray sources and then
integrated over the total observing time, the energy range, and the solid angle
with respect to the LAT frame.
\noindent Several simplifying assumptions can be made to lower the
total computational cost of the likelihood calculation. The region of interest
(ROI)
defines a region centred on the location of our source of interest, and during
the
likelihood procedure the $\gamma$-ray emission model is generated only for those
sources which are situated within a few PSF-widths of this region. The duration
of observation and the exposure are precomputed, which allows us to discard the IRF's
dependence
on the azimuthal angle. When the bin size of the
analysis is large compared to the scale of
the energy dispersion (i.e. for binned analysis), we
can ignore the effects of energy dispersion.
\section{The Profile Likelihood}
\noindent For estimating the best-fit parameters of a given model, we need to
maximize
the likelihood with respect to the parameters of interest, i.e.,
\begin{equation}
\widehat{\alpha} = \underset{\alpha}{\arg\max}\ \mathcal{L}(\alpha|\mathcal{D})
\end{equation}
\noindent where $\widehat{\alpha}$ represents the maximum
likelihood estimator (MLE) of the parameters $\alpha$. In practice, performing a
non-linear maximum likelihood fit for a large set of parameters is not
computationally feasible. Hence, a conventional solution is to partition the
set
of parameters $\alpha$ into a set of parameters of interest, $\mu$,
and a set of nuisance parameters, $\theta$, such that $\alpha =
\{\mu,\theta\}$. For instance, when we are trying to determine the possible
spectrum
of a gamma-ray source, the spectral index or flux of that specific $\gamma$-ray
source can be the parameters of interest, while the background $\gamma$-ray
sources or constraints on source characteristics derived from independent
analyses can be the nuisance parameters. The expression of the profile
likelihood is:
\begin{equation}
\mathcal{L}_{p}(\mu|\mathcal{D}) = \sup_{\theta} \mathcal{L}(\mu,\theta|\mathcal{D})
\end{equation}
\noindent The advantage of the profile likelihood is that, by maximizing the
likelihood function with respect to the
nuisance parameters, it reduces the dimensionality of the likelihood. The
profile likelihood function does not disclose the full
distribution of the nuisance parameters of the system, but it still maintains
the
statistical properties of the likelihood function \cite{Bartlett:1953gfw}.
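\noindent A minimal numerical sketch of the profiling step of Eq.~4.10 is given below: for each fixed value of the parameter of interest $\mu$, the nuisance parameter $\theta$ is refitted so as to maximize the likelihood. The Gaussian toy log-likelihood is purely illustrative.
\begin{verbatim}
from scipy.optimize import minimize_scalar

# Toy log-likelihood with one parameter of interest (mu) and one
# nuisance parameter (theta); purely illustrative.
def loglike(mu, theta):
    return -0.5 * ((mu - 1.0) ** 2 + (theta - mu) ** 2 / 4.0)

def profile_loglike(mu):
    # maximize over theta at fixed mu (i.e. minimize -loglike)
    res = minimize_scalar(lambda th: -loglike(mu, th))
    return -res.fun

for mu in (0.0, 0.5, 1.0, 1.5):
    print(mu, profile_loglike(mu))
\end{verbatim}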
\section{The Joint Likelihood}
\noindent Sources which belong to the same class generally
share a common set of physical characteristics, i.e., they have the same
range of luminosity or can be described by the same spectral models, etc. For
such cases, the sensitivity to the characteristics of the sources can be
increased by combining them. This formulation
follows the likelihood-based analysis and leads to the concept of the joint
likelihood. The joint likelihood is the product of the
individual likelihoods, where the function combines the parameters of all individual sources.
With sources labelled by $i$, the expression of the joint likelihood is:
\begin{equation}
\mathcal{L}(\mu, \{\alpha_{i}\}|\mathcal{D})=\prod \limits_{i=1}
\mathcal{L}_{i}(\mu, \alpha_{i}|\mathcal{D})
\end{equation}
\noindent Here, the parameters are divided into a set of parameters shared by
all
sources i.e., $\mu$, and a set of parameters depending on each individual
source i.e., {$\alpha_{i}$}.
\noindent For example, when we consider a set of dSphs for a DM study, the
intrinsic luminosity or spectral model can be treated as the set of shared
parameters, while the distance to each source, the point-like background sources
within each ROI, or the normalization of the diffuse background near each source
are treated as the independent parameters. The joint likelihood
then
acts like the likelihood of a single source, including in the construction of a
profile joint likelihood. In order to obtain the combined limits on the DM
annihilation signal, the individual likelihood functions are weighted by their respective $J$-factors.
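\noindent Schematically, the joint likelihood of Eq.~4.11 is a sum of per-target log-likelihoods sharing the parameter of interest. The sketch below uses a toy per-target counting model as a stand-in for the full per-dSph Fermi-LAT likelihood; all numbers are hypothetical.
\begin{verbatim}
import numpy as np

# Toy per-target model: expected counts scale linearly with
# sigmav * J on top of a target-specific background.
def loglike_single(sigmav, J, n_obs, bkg):
    lam = (sigmav / 1e-26) * (J / 1e19) * 2.0 + bkg
    return n_obs * np.log(lam) - lam

def joint_loglike(sigmav, targets):
    """targets: list of (J, n_obs, bkg) tuples, one per dSph; the
    shared parameter sigmav plays the role of mu in Eq. 4.11."""
    return sum(loglike_single(sigmav, *t) for t in targets)

targets = [(0.68e19, 3, 1.5), (2.0e19, 5, 1.0)]  # hypothetical
print(joint_loglike(1e-26, targets))
\end{verbatim}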
\section{Hypothesis Testing}
\noindent The likelihood formalism allows a robust statistical framework for
hypothesis testing. In hypothesis testing we check how much the parameters of
interest ($\mu$) deviate from their nominal expected value ($\mu_{0}$). From
the ratio of the maximum likelihoods of the two hypotheses
\cite{Neyman:1933wgr}, we can derive the ``test statistic'' (TS). The
expression
of the TS is:
\begin{equation}
TS = 2\ln \Big(
\frac{\mathcal{L}(\widehat{\mu}|\mathcal{D})}{\mathcal{L}(\mu_{0}|\mathcal{D})} \Big) =
2\,\big(\ln \mathcal{L}(\widehat{\mu}|\mathcal{D}) - \ln\mathcal{L}(\mu_{0}|\mathcal{D})\big)
\end{equation}
\noindent where, $\mathcal{L}(\mu_{0}|\mathcal{D})$ is the maximum likelihood
value for a model without an additional source (i.e., the `null hypothesis') and
$\mathcal{L(\widehat{\mu}|D)}$ is the maximum likelihood value for a model with
the source at a specified location.
\noindent Wilks' theorem \cite{Wilks:1938dza} and Chernoff's theorem
\cite{Chernoff:1954eli} state that the asymptotic distribution of TS
values under the null hypothesis (i.e., $\mu = \mu_{0}$) should follow a
$\chi^{2}_{n}$-distribution, where $n$ represents the dimensionality of $\mu$.
Hence, the TS value can be drawn from this
$\chi^{2}_{n}$-distribution if the null hypothesis holds. A large TS
indicates that a source is
present
at the location, i.e., that the null hypothesis is incorrect.
\noindent The most common application of the likelihood ratio test is to check
the significance of a $\gamma$-ray source. The detection significance
($\sigma$)
of a source is approximately equal to the square root of the TS value, i.e.,
$\sigma \approx \sqrt{TS}$. To check the significance of the source,
the parameter of interest is the flux of the gamma-ray source, whereas the null
hypothesis assumes that the gamma-ray flux from the source location is zero.
From Eq.~4.12, the TS value is obtained by maximizing the likelihood
function both with the putative source flux free to vary and with the
putative source flux fixed to zero. As a rule of thumb, the threshold for
detection of a real signal is set at TS $\geq$ 25, corresponding to 5$\sigma$.
However, in some cases the spectral index of the source is left free during
the model fitting, and that can decrease the detection significance of the
source from 5$\sigma$ to $\approx$ 4.2$\sigma$ \cite{Abdo:2010ru}.
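\noindent The computation of the TS and of the associated significance is elementary once the two maximized log-likelihood values are in hand; a sketch with hypothetical fit values:
\begin{verbatim}
import numpy as np

# Eq. 4.12: the TS compares the maximum log-likelihood with the source
# flux free to vary against the null hypothesis (flux fixed to zero).
def test_statistic(lnL_hat, lnL_null):
    return 2.0 * (lnL_hat - lnL_null)

# Hypothetical fit results:
TS = test_statistic(lnL_hat=-1204.3, lnL_null=-1217.8)
print(TS, "->", np.sqrt(TS), "sigma")  # TS = 27 -> ~5.2 sigma
\end{verbatim}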
\noindent Apart from finding the best-fit parameter values of a
source model, the likelihood algorithm can also estimate the uncertainties of
the parameters of interest \cite{Neyman:1937uhy}. From the shape of the
likelihood function, we can determine the uncertainty in the best-fit parameters
of interest. For highly significant sources, where the null hypothesis is not
valid, we choose the two-sided confidence interval on the maximum
likelihood estimate, while for faint sources, where we cannot strongly reject the
null hypothesis, we set a one-sided confidence interval on the maximum
likelihood estimate. Unfortunately, the calculation is not straightforward
for systems with low counts, where we cannot directly use the asymptotic
formulae to estimate the significance of the sources. Several works have
already provided solutions to this issue \cite{Helene:1990yi,
Feldman:1997qc, Roe:1998zi, Rolke:2004mj}, and for our analysis we have
used the delta-log-likelihood method of Rolke et al.
\cite{Rolke:2004mj}.
\section{Derivation of Flux Upper limits}
\noindent From Section 4.5, we find that deriving the significance of any source requires
the likelihood ratio test. In Wilks' theorem, the
null hypothesis means no source exists, while the alternative hypothesis
assumes that the source exists.
In the analysis of faint sources (or, as in our case, for DM searches), we observe little to no gamma-ray emission from
the direction of the target. Hence, in that scenario, the null hypothesis is a good approximation and it
is advisable to estimate an upper bound on the gamma-ray flux.
\noindent In order to estimate the flux upper limit, we generally prefer to use the profile likelihood method
\cite{Rolke:2004mj, Barbieri:1982eh}. If we
assume that the delta-log-likelihood behaves asymptotically, i.e., as $\chi^{2}$, then the 90$\%$ confidence region
corresponds to a change in the log-likelihood of 2.71/2. Here, we need to mention that such a change in the
log-likelihood function corresponds to a two-sided confidence interval.
To derive an
upper limit corresponding to a one-sided 95$\%$ C.L., all the normalization parameters,
along with the two diffuse components, are fitted to the entire dataset until
the logarithmic difference between the two likelihood functions reaches 1.35 \cite{Abdo:2010ex}.
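\noindent Numerically, this prescription amounts to increasing the source flux from its best-fit value until $\ln \mathcal{L}$ drops by 1.35 below its maximum. A minimal sketch, using a toy parabolic profile log-likelihood in place of the full profiled Fermi-LAT likelihood:
\begin{verbatim}
from scipy.optimize import brentq

# Toy profiled log-likelihood of the putative source flux: a parabola
# peaked at a hypothetical best-fit flux of 2e-10 ph/cm^2/s.
def profile_lnL(flux):
    return -0.5 * ((flux - 2e-10) / 1e-10) ** 2

lnL_max = profile_lnL(2e-10)
# One-sided 95% C.L.: find the flux where lnL has fallen by 1.35.
flux_ul = brentq(lambda f: profile_lnL(f) - (lnL_max - 1.35),
                 2e-10, 1e-8)
print(flux_ul)  # ~3.6e-10 for this toy profile
\end{verbatim}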
\section{P-value}
\noindent In statistics, the p-value or calculated probability is the probability of obtaining results at least as extreme as those observed, when the null hypothesis ($H_{0}$) is assumed to be true \cite{pvalue}. However, in statistics, the term `extreme' depends on how we are dealing with the hypothesis test. The null hypothesis ($H_{0}$) is usually a hypothesis of ``no difference'', whereas the alternative hypothesis ($H_{1}$) is the opposite of the null hypothesis. A smaller p-value means that there is stronger evidence in favor of the alternative hypothesis. \\
\noindent P-values are usually found using p-value tables or spreadsheets/statistical software. These calculations are based on the assumed or known probability distribution of the specific statistic being tested. P-values are calculated from the deviation between the observed value and a chosen reference value, given the probability distribution of the statistic, with a greater difference between the two values corresponding to a lower p-value. \\
\noindent In the context of null-hypothesis testing, if we consider that the observed test statistic ($t$) is drawn from the probability distribution ($T$), then the p-value is the probability of observing a test-statistic value at least as ``extreme'' as $t$ when the null hypothesis ($H_{0}$) is true. In a formal significance test, the null hypothesis ($H_{0}$) is rejected if the p-value is less than a threshold alpha level ($\alpha$) or significance level. The term significance level ($\alpha$) refers to a pre-chosen probability, whereas we use the p-value to indicate a probability obtained after a given study. The value of $\alpha$ is set by the researcher before examining the data. By convention, $\alpha$ is commonly set to 0.05, though lower alpha levels are sometimes used. \\
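\noindent As a concrete illustration of these definitions, the p-value quoted later for the Tuc-II TS peak can be reproduced with the asymptotic $\chi^{2}$ distribution; the snippet below is a sketch assuming one degree of freedom.
\begin{verbatim}
from scipy.stats import chi2, norm

# Under the null hypothesis the TS asymptotically follows a chi-square
# distribution (Wilks); the p-value is the survival probability.
TS = 8.61              # observed TS peak for Tuc-II (see the text)
p = chi2.sf(TS, df=1)  # one parameter of interest assumed
print(p)               # ~0.003, matching the quoted value

# Gaussian-equivalent (one-sided) significance of this p-value:
print(norm.isf(p))
\end{verbatim}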
\section{Discussion and Concluding Remarks}
\noindent Finally, we arrive at this chapter to wrap up the thesis. We have performed a detailed study of the potential self-annihilation signals from dark matter particles in some prospective astrophysical dark matter sources. Although we summarized our outcomes at the end of each chapter, we briefly touch upon them again for the sake of completeness. Let us take a quick look back at what has been accomplished so far and what may be possible going forward.
\noindent In our work on Triangulum-II, we have analysed nearly seven years of \texttt{Fermi-LAT} $\gamma$-ray data but could not observe any $\gamma$-ray excess from the location of Tri-II. We then derived the upper limit on the $\gamma$-ray flux for two possible scenarios, namely $\rm{\sigma_{v}= 4.2~km~s^{-1}}$ and $\rm{\sigma_{v} = 3.4~km~s^{-1}}$. For $\rm{\sigma_{v}= 4.2~km~s^{-1}}$, Tri-II constrains the mSUGRA and the MSSM models with low thermal relic densities, whereas the limits constrain the Kaluza-Klein DM in UED and the AMSB models for masses $\lesssim 230$~GeV and $\lesssim 375$~GeV, respectively. Even for $\rm{\sigma_{v} = 3.4~km~s^{-1}}$, Tri-II can constrain the MSSM model with low thermal relic densities and the AMSB model for masses $\lesssim 300$~GeV. Besides, from our work, we found that $\gamma$-ray data from Tri-II can put even stronger limits on the theoretical DM models than UMi. We would also like to point out that our results are entirely based on the standard NFW profile and we do not consider any effect of a boost factor related to substructures in Tri-II, or of the Sommerfeld enhancement of the annihilation cross-section. Finally, from our work, we can state that with more precise observations of Tri-II, in future we can establish Tri-II as a very strong candidate for indirect DM searches.\\
\noindent In the case of Tucana-II, we have studied nearly nine years of \textit{Fermi}-LAT data from the location of Tuc-II to investigate signatures of DM annihilation. We have detected a very faint $\gamma$-ray excess from the location of Tuc-II for both the power-law spectra and the $\gamma$-ray spectrum from DM annihilation. We would also like to report that, for the $\gamma$-ray spectrum from DM annihilation, we have shown the variation of the TS values for Tuc-II with DM mass. We have also observed that for nine years of \textit{Fermi}-LAT data, the TS value of Tuc-II peaks at
$m_{DM}\sim$~14 GeV for the $100\%$ $b\bar{b}$ annihilation channel, while for $100\%$ $\tau^{+}\tau^{-}$ the TS value peaks at $m_{DM}\sim$~4 GeV. In the case of our Galactic Center, the $m_{DM}$ range between 8 GeV and 15 GeV for the $\tau^{+}\tau^{-}$ annihilation channel and the $m_{DM}$ range between 25 GeV and 70 GeV for the $b\bar{b}$ annihilation channel play a crucial role in understanding the $\gamma$-ray emission possibly arising from DM annihilation \cite{Gordon:2013vta, Hooper:2013rwa, Daylan:2014rsa, Zhou:2014lva, Calore:2014xka}. The mass ranges of our obtained TS peaks from the analysis of Tuc-II are slightly lower than the mass ranges required for the DM interpretation of the Galactic Center. \\
\noindent From our analysis, we have also confirmed that the excess from the Tuc-II location increases with increasing time period of data, and such an increase in the TS peak value is approximately
proportional to $\sim\sqrt{t}$ \cite{Charles:2016pgz}, where t is the time period of the Fermi-LAT dataset. The most encouraging result of this analysis is that such a successive increase in the TS peak value of Tuc-II with larger time periods of the dataset can hint at the existence of a real signal, either associated with some astrophysical scenario or resulting from DM annihilation. In the field of indirect DM detection, such hints of $\gamma$-ray emission from Tuc-II may open a new path in DM physics. \\
\noindent When we assume the $\gamma$-ray spectra for DM annihilating through the 100$\%$ $\tau^{+}\tau^{-}$ channel, we have obtained a p-value of $\approx$ 0.003 from the Tuc-II location corresponding to the background models provided by Fermi-LAT. It can be the result of a rare statistical fluctuation in the background. One of the most tantalizing explanations of such an excess is the presence of surrounding unresolved bright sources. Among different types of unresolved sources, blazars are believed to be the main source of background fluctuation, emitting $\gamma$-rays just below the threshold limit for Fermi-LAT. We have searched the BZCAT and the CRATES catalogs and have found that three nearby radio sources lie within the $1^{\circ}$ ROI of Tuc-II; among them, the nearest source, i.e., J225455-592606, lies just $0.55^{\circ}$ away from the location of Tuc-II. We have also checked the 4FGL catalog of Fermi-LAT (\cite{Fermi-LAT:2019yla}) and have noticed that a source, 4FGL 2247.7-5857, lies $0.66^{\circ}$ away from the Tuc-II location. Hence, it is very unlikely that the emission detected from the Tuc-II location would be strongly contaminated by these nearby sources. \\
\noindent We have generated the residual TS maps of Tuc-II for energy $>$ 100 MeV (Figs.~6.4 and 6.5). From these residual TS maps, we have noticed an excess of TS value $\approx$ 6.5
at $0.18^{\circ}$ from the location of Tuc-II. We have also shown that whenever we include Tuc-II in our source model, the excess from that location is greatly reduced. Thus, there is a very high chance that such emission is associated with Tuc-II. We have generated all our residual TS maps for energy $>$ 100 MeV. But the PSF of Fermi-LAT is comparatively large at lower energies, while at higher energies (say, for energy $>$ 500 MeV), 68$\%$ of the photons would be confined within $1^{\circ}$ of the location of the source \footnote{\tiny{http://www.slac.stanford.edu/exp/glast/groups/canda/lat{\_}Performance.htm}}. Thus, to check again the origin of the excess near the Tuc-II location, we have produced a residual TS map for energy $>$ 500 MeV. Interestingly, from this new TS map (Fig 6.11), we find that after including Tuc-II in our source model, the nearby excess region has almost disappeared \cite{Bhattacharjee:2018xem}. This signature would probably hint that, in Figs.~6.4 and 6.5, the excess emission remaining after including Tuc-II in the source model is associated with weak background modelling. Thus, from our result, we can at best conclude that the nearby excess is associated with the Tuc-II location and it might indicate a DM annihilation signal from Tuc-II \cite{Bhattacharjee:2018xem}.\\
\noindent Several Fermi collaboration papers observe that in a large region of the blank sky, excesses of TS $>$ 8.7 are very common. If we only consider the blazars within $1^{\circ}$ of the location of the source, they would roughly account for 10$\%$ of such excesses. DM subhalos may also be responsible for a $\approx$5$\%$--10$\%$ irreducible background. Therefore, we have re-calibrated our obtained significance, and this decreases the TS peak value of Tuc-II from 8.61 to 4.79, i.e., from a p-value of 0.003 to 0.029. At present, with nine years of data, the obtained emission from Tuc-II is much weaker than Fermi-LAT's detection threshold. But from our work, we have also found that the significance of Tuc-II increases with increasing time period of data, and from the TS map we have also observed a localized excess just beside Tuc-II. So, in future, with an even longer period of data and with better background modelling, we can expect to explain the origin of the $\gamma$-ray excess from the location of Tuc-II.
\noindent As we have already reported, the excess observed from the Tuc-II location is below the detection threshold of Fermi-LAT. Thus we have derived the possible upper limit on the pair-annihilation $<\sigma~v>$ of the DM in Tuc-II as a function of DM mass for five annihilation channels. For our purpose, we have adopted the values of the J-factor and their uncertainties
from Evans \textit{et al.}, 2016 \cite{Evans:2016xwx}. \\
\noindent For this, we have analysed a longer period of data compared to previous works on Tuc-II, and thus from our analysis we can expect to provide more stringent limits on the theoretical models. We have observed that for the median J-factor value, Tuc-II imposes a strong constraint on the blue points in both the mSUGRA and the MSSM model, while the
uncertainty band of Tuc-II has begun to constrain the red points. Because of the large uncertainty band, we may not obtain any impressive limits from Tuc-II in the parameter space of ($\sigma v$, $m_{DM}$), but our obtained results stress that with a more detailed understanding of the internal structure, there is a high possibility that in future Tuc-II will provide very strong bounds on theoretically favoured DM models. Our results show that for $>$100 GeV DM mass, Tuc-II imposes a stronger bound than the limits obtained from Ret-II. Thus, we can expect that with a longer period of data and more detailed information on the internal structure, we should be able to reduce the uncertainty band of Tuc-II to a narrow band in the parameter space of
($<\sigma~v>$, $m_{DM}$). Then Tuc-II might be considered as one of the most DM dominated UFDs.
\noindent For this work, we have studied nearly nine years of LAT data but have not detected any emission from the location of the LSB galaxies. With the DMFit tools, we estimated the $\gamma$-ray and $<\sigma v>$ upper limits for four annihilation channels. But because of their low J-factors, the individual limits obtained from the LSB galaxies do not put any stringent constraints on the DM theoretical models. With the hope of increasing the LAT sensitivity, we then performed a joint likelihood analysis on the set of four LSB galaxies. As expected, the stacking method improved the $<\sigma v>$ limits by a factor of 4 over the individual limits obtained from the LSB galaxies. But the combined $<\sigma v>$ limits were still around two orders of magnitude weaker than the $<\sigma v>$ limits obtained in refs.~Ackermann et al.\cite{Ackermann:2015zua} and Steigman et al.\cite{Steigman:2012nb}. \\
\noindent The observational data for our chosen LSB galaxies could not particularly favour the cored profile over the cuspy one. The rotational curves for LSBs are in agreement with the prediction from $\lambda$CDM, and some studies have also indicated that the cuspy profile could provide a reasonable fit to the DM distribution at the internal core. Thus, motivated by all the
observational indications, we have modelled the DM distribution of the LSB galaxies with the NFW profile. We have also performed a comparative study between the NFW, ISO and BURK DM density profiles (check Fig.~7.13) and find that the $<\sigma v>$ limits for the three density profiles overlap with one another. Thus, from our study, we could not favour one profile among the three, but for the median value of the J-factor, the most stringent limits would come from the NFW profile.\\
\noindent For this study, we have used the multiwavelength approach, which is considered complementary to the $\gamma$-ray detection method and is nowadays very popular for the indirect searching of the DM signal. For our analysis, we have preferred to focus on the radio signal and, for that purpose, we have followed the code RX-DMFIT. RX-DMFIT is an extension of the DMFit package and is specially designed to investigate the possible radio and X-ray emission from DM annihilation. LSB galaxies have very low nuclear activity and poor star formation rates, and that makes them suitable targets for examining the diffuse radio emission most possibly coming from DM annihilation/decay. We have estimated the multiwavelength SED plots for the LSB galaxies and have also checked how the nature of the SED varies with the parameter sets (check Figs.~7.8 $\&$ 7.9). We have searched for the radio flux limits for all LSB galaxies in the NVSS sky-survey data, but only the location of UGC 11707 gives detected flux density values; the other three LSBs only provide upper limits to the flux density. With the VLA flux density, we have tried to predict the radio $<\sigma v>$ limits in the ($<\sigma v>$, $m_{DM}$) parameter space (check Fig.~7.10). If we consider the 2$\sigma$ uncertainty band associated with the radio limits, we notice that the radio limits overlap with the limits obtained from the stacking analysis of the LAT data (check Fig.~7.11), and all three annihilation channels show the same nature. Hence, from our analysis, we could, at best, comment that the radio data is competitive with the gamma-ray data. With more detailed observational data and precise analysis, in future, it might be possible for LSB galaxies to impose strong limits on DM models.\\
\noindent We have checked whether, with the next-generation radio (SKA) and gamma-ray (CTA) telescopes, it would be possible to detect any emission from the locations of the LSB galaxies. We have noticed (check Fig.~7.12) that SKA might be able to detect the emission from the locations of the LSB galaxies, and its 1000 hours of observation would have the highest possibility of detecting the emission from LSBs. But we would also like to mention that, in order to claim that SKA would detect the emission from DM annihilation, we first need to perform a simulation study. Besides, the estimated radio emission also depends on various astrophysical scenarios. We need to have well-defined knowledge of the distribution of the diffusion zone, the magnetic fields, the DM density profile, etc. Hence, from our analysis, we could, at best, hint at the possibility of observing the radio signal from the LSB galaxies with SKA. We have also found (Fig.~7.14) that, for energies between 100 GeV and 1 TeV, it might be possible for CTA to observe the $\gamma$-ray emission with the 50 hours of sensitivity curve. But, like SKA, the same conclusion also holds for CTA. A simulation study is needed to examine whether it would be possible for CTA to detect the emission resulting
from the DM annihilation/decay.\\
\noindent Hence, from our work, we can conclude that the $\gamma$-ray data obtained from the Fermi-LAT could not impose strong $<\sigma v>$ limits on the WIMP models. We find that the radio signal possibly originating from WIMP annihilation is quite competitive with the $\gamma$-ray emission observed by the Fermi-LAT. Our analysis, at best, indicates that, to study the $\gamma$-ray and radio signals from the LSB galaxies, SKA and CTA would play a very significant role in
future.
\noindent The UFDs, dominated by their DM content, can also possess a moderately large magnetic field, and that makes them ideal targets for the indirect detection of DM signals through the multiwavelength approach. In recent times, several studies have tried to derive strong limits on the annihilation $\langle \sigma v \rangle$ from gamma-ray and radio data \cite{Hoof:2018hyn, DiMauro:2019frs, Fermi-LAT:2016uux, Beck:2019ukt, Regis:2017oet, Regis:2014tga}. For our work, we have considered the newly discovered UFDs detected by the observations performed by Pan-STARRS, DES and some other spectroscopic surveys \cite{Bhattacharjee:2020phk}. Using both the gamma-ray (detected by Fermi-LAT) and the radio data (detected by VLA and GMRT), we have searched for the WIMP annihilation signal in 15 UFDs. We have also predicted the possible spectra associated with the radio emission and have checked whether it would be possible for SKA to detect any emission from them \cite{Bhattacharjee:2020phk}.
\noindent With eleven years of Fermi-LAT data, we have not detected any significant emission from the locations of the UFDs. Thus, we have derived the upper limits on $\langle\sigma v\rangle$ as a function of DM mass for our chosen DM annihilation channels. We have estimated the limits for 12 UFDs; for Triangulum II, Hydra II and Tucana III, we only have upper limits on the J-factor, so they could not provide any robust limits on the ($m_{DM}$, $\langle\sigma v\rangle$) parameter space \cite{Bhattacharjee:2020phk}. For the gamma-ray data, Horologium I provided the most stringent constraints, but our obtained limits strongly depend on the distribution of DM. Using the NFW profile, we have derived most of the results. Besides, we have also performed a comparative study between the NFW, Burkert and ISO profiles. In view of the gamma-ray analysis, the Burkert profile imposed the strongest limits on $\langle \sigma v\rangle$, while the ISO profile imposed the weakest \cite{Bhattacharjee:2020phk}.
\noindent In view of the synchrotron emission, we have considered the radio-flux limits observed by GMRT and VLA and have predicted the respective $\langle\sigma v\rangle$ upper limits for the $b\bar{b}$, $\tau^+ \tau^-$ and $\mu^+ \mu^-$ final states. We have compared our radio limits with the limits obtained from the gamma-ray data and found that the VLA telescope has the potential to impose more stringent limits than Fermi-LAT.
\noindent We have derived the possible synchrotron fluxes in the UFDs for a wide range of frequencies, i.e., between 10 MHz and 100 GHz, and compared these with the sensitivity curves of SKA. We find that, for a 200 GeV DM mass and the $b \bar b$ final state, it might be possible for SKA to detect the radio emission from our considered UFDs, even with its 10 hours of sensitivity curve. For the $\tau^+\tau^-$ and $\mu^+ \mu^-$ final states, the emission could be detected with the 100 hours of exposure curve of SKA. On the other hand, for comparatively heavy DM masses (say $\sim$ 2 TeV), the synchrotron spectrum would become harder, and thus a longer observation time would be necessary to detect the radio signal.
\noindent We also need to remember that the synchrotron fluxes depend strongly on several astrophysical components, such as the magnetic field, the diffusion coefficient, the distance, etc. But, due to insufficient observations, these values are not very precise. Thus, in order to predict the synchrotron fluxes in UFDs, we must have the most accurate information on the astrophysical parameters, especially the magnetic field and the diffusion coefficient. We have checked how the synchrotron flux in Tucana II varies with $B$, $D_0$ and $\gamma_D$ for a DM mass of 200 GeV and the $b\bar{b}$ annihilation channel, and we have noticed that the synchrotron emission strongly depends on these. Besides, the emission is also controlled by the choice of the DM density distribution in the UFDs. We have found that, for Tucana II, the NFW density profile produces the maximum amount of radio flux among all three density profiles. Our considered UFDs possess large uncertainties in $r_{1/2}$, $d$ and $\sigma$, and the uncertainties in these astrophysical parameters can also affect the synchrotron emission arising from the UFDs. We have performed the respective checks and have found that the largest contribution comes from the uncertainty in $\sigma$.
\noindent Despite the dependence on these uncertainties, we can safely conclude that a very intriguing aspect of the indirect search for the DM signal from UFDs has been discussed in our study. In Fig.~8.13, we have compared the most stringent limits obtained from the VLA sky-survey with the best limits obtained from the Fermi-LAT data for the three final states. From Fig.~8.13, we can notice that, for the $\mu^+ \mu^-$ and $\tau^+ \tau^-$ final states, VLA imposes better limits than Fermi-LAT, while for the $b\bar{b}$ final state Fermi-LAT provides stronger limits than VLA \cite{Bhattacharjee:2020phk}.
\noindent In view of the indirect DM search, we expect that the next-generation $\gamma$-ray telescope, CTA, would play a very crucial role. CTA would have the deepest sensitivity over a very wide range of energies \cite{CTAConsortium:2018tzg} and would be able to probe the thermal $\langle \sigma v \rangle$ rate for several DM-rich targets. Along with CTA, in the radio sky, SKA is expected to become the most sensitive radio telescope in the future. Besides, other radio facilities, such as LOFAR, MeerKAT and ASKAP, would be complementary to CTA and SKA. We can, at best, expect that all of these next-generation telescopes would be able to solve several crucial aspects of dark matter physics.
\section*{A: T-TEST for Unequal Variance}
\noindent The T-TEST is a statistical hypothesis test that is generally applied to
check whether any significant deviation lies between two populations
\cite{student:1908df, Miodrag:2011gh, Kim:2015gh}. When we deal with
small datasets, e.g., $n_{1}$ and/or $n_{2}$ $<$ 30, the T-TEST is
favoured \cite{student:1908df, Miodrag:2011gh, Kim:2015gh}. The shape of the
T-TEST distribution is very similar to the Gaussian distribution and, for
applying the T-TEST, the variables of each sample must be drawn from a
Gaussian distribution.
\noindent To obtain the T-TEST statistics, such as the t-value and the degrees
of freedom (d.o.f.), we need to provide the mean, the standard deviation
and the number of counts of each sample as inputs. The t-value is
the test-statistic value that is derived from the two-sample dataset while
performing the T-TEST hypothesis test. \\
\noindent Depending on the variances of the two samples, there are
generally three types of T-TEST. For our analysis \cite{Bhattacharjee:2018xem}
(ref.~Chapter~6), we have performed the T-TEST on two independent samples with
unequal variance.
Thus, we have considered the T-TEST for unequal variance, which is also known
as Welch's T-TEST \cite{Welch:1947df, Ruxton:2006dh}. Welch's T-TEST is
generally favoured when the two samples have different variances, whether or not
their sample sizes are equal. In that case, the formulae for evaluating the
t-value and the d.o.f. are:\\
\begin{equation}
\mbox{t-value} =
\frac{\rm{mean_{1}}-\rm{mean_{2}}}{\sqrt{\frac{var_{1}}{n_{1}}+\frac{var_{2}}{n_{2}}}}
\end{equation}
\begin{equation}
\rm{d.o.f.} =
\frac{\Big(\frac{var_{1}}{n_{1}}+\frac{var_{2}}{n_{2}}\Big)^{2}}{\frac{(var_{1}/n_{1})^{2}}{n_{1}-1}+\frac{(var_{2}/n_{2})^{2}}{n_{2}-1}} ,
\end{equation}
\noindent where,\\
$\rm{mean_{1}}$ and $\rm{mean_{2}}$ are the mean values of $\rm{sample_{1}}$ and
$\rm{sample_{2}}$, respectively; \\
$\rm{var_{1}}$ and $\rm{var_{2}}$ are the variances of $\rm{sample_{1}}$ and
$\rm{sample_{2}}$, respectively; \\
$\rm{n_{1}}$ and $\rm{n_{2}}$ are the number of counts in $\rm{sample_{1}}$ and
$\rm{sample_{2}}$, respectively. \\
\noindent Once we obtain the t-value, we can then derive the probability, i.e.,
the p-value, from the two-tailed t-distribution. We also have to
assign the significance level $\alpha$ and, for our purpose, we have used
$\alpha$ = 5$\%$. Now, from the p-value, we can determine whether our two
samples agree with the null hypothesis. The p-value lies between 0 and 1. A
small p-value, i.e., p $\le$ 0.05, indicates that we might reject the
null hypothesis, while a large p-value, i.e., p $>$
0.05, hints that it might not be possible to reject the null hypothesis.\\
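\noindent As an illustration, the following is a minimal sketch (in Python, assuming NumPy and SciPy are available) of Welch's T-TEST; the two sample arrays are hypothetical placeholders, not our Tuc-II dataset. It evaluates the two equations above by hand and cross-checks the result against SciPy's built-in implementation.
\begin{verbatim}
import numpy as np
from scipy import stats

sample1 = np.array([2.1, 1.8, 2.5, 2.2, 1.9, 2.4])  # hypothetical sample 1
sample2 = np.array([1.6, 2.0, 1.7, 2.3, 1.5])       # hypothetical sample 2

# Inputs of the test: mean, variance and number of counts of each sample.
mean1, mean2 = sample1.mean(), sample2.mean()
var1, var2 = sample1.var(ddof=1), sample2.var(ddof=1)
n1, n2 = len(sample1), len(sample2)

# t-value and d.o.f. of Welch's T-TEST, as in the equations above.
t_value = (mean1 - mean2) / np.sqrt(var1 / n1 + var2 / n2)
dof = (var1 / n1 + var2 / n2) ** 2 / (
    (var1 / n1) ** 2 / (n1 - 1) + (var2 / n2) ** 2 / (n2 - 1))

# Two-tailed p-value; compare with the significance level alpha = 0.05.
p_value = 2.0 * stats.t.sf(abs(t_value), dof)

# Cross-check with SciPy (equal_var=False selects Welch's T-TEST).
t_check, p_check = stats.ttest_ind(sample1, sample2, equal_var=False)
print(t_value, dof, p_value, t_check, p_check)
\end{verbatim}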
\noindent For Fig.~6.3(b), we find that, both for the full-energy residual
spectrum and for the positive bump at energies above 500 MeV, we obtain p-values
$>$ 0.05. This indicates that, in both cases, we are not able to reject the
null hypothesis \cite{Bhattacharjee:2018xem}.
Thus, from our analysis, we can, at best, conclude that the DM annihilation
spectra provide an acceptable fit to the residual energy spectrum of Tuc-II \cite{Bhattacharjee:2018xem}.
\section*{B: Normality Test of Dataset}
\noindent As we have already mentioned in the above section, we can only perform
the T-TEST if both sample sets follow the Gaussian
distribution \cite{student:1908df, Miodrag:2011gh, Kim:2015gh}. Thus, we have
checked whether our sample data from Fig.~6.3(b) follow the normal distribution \cite{Bhattacharjee:2018xem}. To check
the normality of a sample dataset, there are various statistical tests, such as
the Kolmogorov–Smirnov test \cite{Kolmogorov:1933gh, Smirnov:1948ff}, the Shapiro–Wilk
test \cite{Shapiro:1695hd}, the normal quantile plot \cite{loy:2015fh, wilk:1968fb}, etc.
For our analysis, we have generated the quantile-quantile (Q-Q) plot. The Q-Q
plot is a graphical representation that helps to check whether the datasets
from two samples originate from the same population, following a common
distribution \cite{loy:2015fh, wilk:1968fb}. \\
\noindent In order to check the normality of the sample, the Q-Q plot shows the
quantiles of our dataset versus the quantiles of an ideal Gaussian
distribution \cite{loy:2015fh, wilk:1968fb}. The quantile values obtained from the
theoretical Gaussian distribution are plotted on the horizontal axis, while the
quantile values obtained from our samples are plotted on the vertical axis. If our
sample dataset follows the Gaussian distribution, the Q-Q plot should yield
a straight line, which indicates the correlation between our sample data and the
theoretical data from the Gaussian distribution \cite{loy:2015fh, wilk:1968fb}. In
order to find the exact correlation between the paired dataset used for the Q-Q
plot, the points are fitted with the regression equation (y=ax), and that fitting
returns the value of the coefficient of determination ($R^{2}$). Once we
obtain the value of $R^{2}$, we can then calculate the Pearson correlation
coefficient (r), i.e., r=$(R^{2})^{1/2}$=R \cite{asuero:2006cju, schober:2018fg}.
Ideally, the value of r should lie between 0.9 and 1, which indicates a high
correlation between our sample set and the Gaussian distribution
\cite{asuero:2006cju, schober:2018fg}. An r-value close to 1 indicates that
the sample set has very little deviation from normality, while an r-value close
to 0 denotes a large deviation from normality \cite{asuero:2006cju,
schober:2018fg}.\\
\noindent It is true that no experimental dataset would have an r-value of exactly 1, but,
for a statistical hypothesis check, the dataset should roughly follow the Gaussian
distribution, i.e., the r-value should be $>$ 0.9. For our study (Chapter~6), we have
produced the Q-Q plot and have evaluated the respective r-value for the dataset
that we have considered for the T-TEST \cite{Bhattacharjee:2018xem}. We find that the residual energy
spectrum, both for i) the full energy range and ii) the energy range $>$ 500 MeV,
produces a straight line in the Q-Q plots, and the corresponding r-values are
$>$ 0.94 \cite{Bhattacharjee:2018xem}. Thus, from this test, we find that our sample has some deviation
from normality, but the r-values $>$ 0.94 indicate that we can safely use our
sample sets for the T-TEST goodness-of-fit check \cite{Bhattacharjee:2018xem}.
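\noindent The following is a minimal sketch (in Python, assuming SciPy and Matplotlib are available) of this normality check; the \texttt{residuals} array is a hypothetical placeholder for our residual-spectrum dataset. Note that \texttt{scipy.stats.probplot} fits a straight line with an intercept, which differs slightly from the y=ax regression described above, but it returns the same Pearson correlation coefficient r used in our test.
\begin{verbatim}
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
residuals = rng.normal(size=25)   # hypothetical placeholder dataset

# probplot pairs the ordered sample with the quantiles of an ideal
# Gaussian and returns the least-squares fit plus the correlation r.
(osm, osr), (slope, intercept, r) = stats.probplot(
    residuals, dist="norm", plot=plt.gca())

print("Pearson correlation coefficient r =", r)  # r > 0.9: near-normal
plt.savefig("qq_plot.png")
\end{verbatim}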
\chapter{Acknowledgements}\label{publications}
\noindent Throughout the journey of my PhD, and even in the writing of this thesis, I have fortunately received a great deal of support and assistance from so many people, directly or indirectly, that it is a very difficult task for me to acknowledge them all. So, before proceeding further, I must say that the following is not at all a comprehensive endeavour.\\
\noindent First of all, I want to thank my PhD supervisor Dr. Parthasarathi Joarder for providing me with the opportunity to work in the field of astrophysics. Without his support, attention and guidance, none of this would have been possible. His insightful feedback, motivation, continuous support and above all constant questioning pushed me to sharpen my thinking and brought my work to a higher level. In this process, I have gradually learnt the subject with more accuracy by recognising the pre-possessed contradictions or misconceptions in me.\\
\noindent I also acknowledge the fruitful interactions with my joint supervisor Prof. Dhruba Gupta. Several discussions with him have enriched my comprehension of the subject.\\
\noindent I would also like to pay my special regards to Prof. Pratik Majumdar, without whom my PhD career would not have taken off. His perceptive feedback, consistent patience, guidance and insightful questioning shaped my PhD work. Specifically, I deeply appreciate his approach of rigorously participating in thoughtful argumentative dialogue in order to elicit the ideas and underlying presuppositions within me. This intense, disciplined, yet insouciant questioning deeply helped me in examining any thought logically and determining the validity of those ideas. Without his persistent help, the goal of this thesis would not have been realised. I am also grateful to Prof. Mousumi Das for her valuable insights into my research. I gratefully acknowledge the fruitful interactions with her.\\
\noindent I would like to acknowledge my senior colleague, Dr. Sayan Biswas, for his insightful academic discussion in various stages of my PhD. \\
\noindent I would also like to thank my other collaborators, Prof. Dilip Kumar Ghosh, Prof. Debajyoti Choudhury, Prof. Subinoy Das, Dr. Tulun Ergin, Dr. Kasinath Das, Dr. Lab Saha, for their valuable inputs in my research career. \\
\noindent I will forever be indebted to my two colleagues and friends Sananda Raychaudhuri and Kaushik Naskar. Without them, this journey would not have been completed. I will sincerely miss the friendly, supportive and relaxed environment we maintained over these years. Their constant support and intense academic and non-academic discussions made this bumpy road a smooth one.\\
\noindent I am grateful to my friends in Bose Institute: Pracheta Singha, Sumana Bhattacharjee, Deeptak Biswas, Souradeep Sasmal, Debarshi Das and Som Kanjilal.\\
\noindent In addition, I want to thank the Bose Institute for the infrastructure provided to pursue my PhD. I am also grateful to DST-INSPIRE for financial support. Without their support and funding, this thesis could not have reached its goal.\\
\noindent I gratefully acknowledge my teachers from school and college- Dr. Arup Roy, Dr. Satadal Bhattacharyya, Dr. Upendranath Nandi, Mr. Rabindranath Sasmal, Mr. Swapan Samanta, Ms. Jasmine Sinha for their guidance and inspiration. \\
\noindent I take this opportunity to mention the support and enormous encouragement I have received from my loving husband, Dr. Chowdhury Aminul Islam. He was always there to pamper my idiotic questions, sort out my philosophical confusions and clarify any difficulties I faced. Due to his continuous support and suggestions, I never had a deficiency of interesting and stimulating problems to work on. I would like to whole-heartedly acknowledge that he was always able to put things in perspective and provide inspiration when the rigours of research and thesis writing seemed too much to handle. \\
\noindent I would like to mention the love and care I have received from my family members- my Dida, Dadu, Jethu, Barama, Mama, Mami. At this moment I remember my Jethu, Late Mr. Sanjay Bhattacharjee, who was very supportive of my academic and personal life. My mama, Shyamal Ghosh, was my first teacher. They are the ones who have constantly encouraged me to opt for a research career. My brother, Argha, was always there to provide me with support and affection. I do not have words to express my gratitude to my Maa (Ms. Bina Bhattacharjee) and Baba (Mr. Sudipta Bhattacharjee). Without their constant inspiration, motivation and moral support, I could not continue my PhD work. Thank you both for giving me strength to reach for the stars and chase my dreams.\\
\noindent At this moment I cannot name them all who were always there to provide me with support and affection. You know who you are, and I thank you all wholeheartedly for being there.
\section{Discussion and Concluding Remarks}
\noindent Finally, in this concluding section, we wrap up the thesis. We have performed a detailed study on the indirect detection of DM signatures, which aims to investigate the signal originating from the self-annihilation of DM candidates. The methods for targeting the DM signal are two-fold: on one hand, we explore the gamma rays resulting from DM annihilation; on the other hand, we focus on the complementary radio properties. In the earlier chapters, we have already summarized the outcomes of each work at the end of the respective chapter. In the following section, we briefly cover them again for the sake of completeness. Let us take a quick look back at what has been accomplished so far and what may be possible going forward.
\noindent In our work on Triangulum-II (Tri-II), we analysed nearly seven years of \texttt{Fermi-LAT} data but could not observe any $\gamma$-ray excess from its location. We then derived the upper limits on the $\gamma$-ray flux for two possible scenarios, i.e., for $\rm{\sigma_{v}= 4.2~km~s^{-1}}$ and $\rm{\sigma_{v} = 3.4~km~s^{-1}}$. For $\rm{\sigma_{v}= 4.2~km~s^{-1}}$, Tri-II constrains the mSUGRA and the MSSM models with low thermal relic densities, whereas the limits constrain the Kaluza-Klein DM in UED and the AMSB models for masses $\lesssim 230$~GeV and $\lesssim 375$~GeV, respectively. Even for $\rm{\sigma_{v}~=~3.4~km~s^{-1}}$, Tri-II can constrain the MSSM model with low thermal relic densities and the AMSB model for masses $\lesssim 300$~GeV. Besides, from our work, we found that the $\gamma$-ray data from Tri-II can put even stronger limits on the theoretical DM models than UMi. We would also like to point out that our results are entirely based on the standard NFW profile, and we did not consider any boost-factor effects related to substructures in Tri-II or the Sommerfeld effect in the annihilation cross-section. Finally, from our work, we can state that, with more precise observations of Tri-II, in future we can establish Tri-II as a very strong candidate for indirect DM searches.\\
\noindent In the case of Tucana-II (Tuc-II), we studied a longer period (nearly nine years) of \textit{Fermi}-LAT data to investigate the signatures of DM annihilation. Unlike Tri-II, we detected a very faint $\gamma$-ray excess from the location of Tuc-II, both for the power-law spectra and for the $\gamma$-ray spectrum from DM annihilation. We checked the variation of the gamma-ray excess with DM mass and observed that, for nine years of data, the TS value of Tuc-II peaks at $m_{DM}\sim$~14 GeV for the $100\%$ $b\bar{b}$ annihilation channel, while for $100\%$ $\tau^{+}\tau^{-}$ it peaks at $m_{DM}\sim$~4 GeV. Apart from that, our study also confirmed the successive increase in the TS peak value with increasing time periods of data. This hints at an association with a real signal, either astrophysical or resulting from DM annihilation. We also produced a residual TS map for energies $>$ 500 MeV (Fig.~6.11). From the residual map, we can, at best, conclude that the nearby excess is associated with the Tuc-II location, and this indicates its connection with a DM annihilation signal from Tuc-II \cite{Bhattacharjee:2018xem}. In the field of indirect DM detection, such hints of $\gamma$-ray emission from Tuc-II may open a new path in DM physics. \\
\noindent For the Low Surface Brightness (LSB) galaxies, we studied nearly nine years of LAT data but did not detect any emission from their locations. With the DMFit tools, we estimated the $\gamma$-ray and $<\sigma v>$ upper limits for four annihilation channels. But, because of their low J-factors, the individual limits obtained from the LSB galaxies could not provide any stringent constraints on the theoretical DM models. With the hope of increasing the LAT sensitivity, we then performed a joint likelihood analysis on the set of four LSB galaxies. As expected, the stacking method improved the $<\sigma v>$ bounds by a factor of 4 over the individual limits obtained from the LSB galaxies. But the combined $<\sigma v>$ limits were still around two orders of magnitude weaker than the $<\sigma v>$ limits obtained from refs.~Ackermann et al.\cite{Ackermann:2015zua} and Steigman et al.\cite{Steigman:2012nb}. With the gamma-ray data for our chosen LSB galaxies, we could not particularly favour the cored profile over the cuspy one. The rotational curves for LSBs were in agreement with the prediction from $\lambda$CDM, and some studies also indicated that the cuspy profile could provide a reasonable fit to the DM distribution at the internal core. Thus, motivated by all the observational evidence, we modelled the DM distribution of the LSB galaxies with the NFW profile. We also performed a comparative study between the NFW, ISO and BURK DM density profiles (check Fig.~7.13) and found that the $<\sigma v>$ limits for the three density profiles overlap with one another. Thus, from our study, we could not favour one profile among the three, but for the median value of the J-factor, the most stringent limits would come from the NFW profile.\\
\noindent For this study, we also derived the complementary radio flux upper limits for the indirect search of the DM signal. For the radio analysis, we used the RX-DMFIT tool, which is an extension of the DMFit package. We estimated the multiwavelength SED plots for the LSB galaxies and observed how their SEDs vary with the parameter sets (check Figs.~7.8 $\&$ 7.9). We surveyed the NVSS all-sky data and searched for the radio flux densities of all LSB galaxies, but only the location of UGC 11707 appeared as an excess; the other three LSBs provide only upper limits to the flux density. With the VLA flux density, we tried to predict the radio $<\sigma v>$ limits in the ($<\sigma v>$, $m_{DM}$) parameter space (check Fig.~7.10). When we considered the 2$\sigma$ uncertainty band associated with the radio limits, we noticed that the radio limits overlap with the limits obtained from the stacking analysis of the LAT data (check Fig.~7.11). Hence, from our analysis, we could, at best, comment that the radio data is competitive with the gamma-ray data. With more detailed observational data and precise analysis, in future, it might be possible for LSB galaxies to impose strong limits on DM models. We also checked whether, with the next-generation radio (SKA) and gamma-ray (CTA) telescopes, it would be possible to detect any emission from the locations of the LSB galaxies. We noticed (check Fig.~7.12) that SKA might be able to detect the emission from the locations of the LSB galaxies, and its 1000 hours of observation would have the highest possibility of detecting the emission from LSBs. But we would also like to mention that, in order to claim that SKA would detect the emission from DM annihilation, we first need to perform a simulation study. Besides, the estimated radio emission also depends on various astrophysical scenarios; we need well-defined knowledge of the distribution of the diffusion zone, the magnetic fields, the DM density profile, etc. Hence, from our analysis, we could, at best, hint at the possibility of observing the radio signal from the LSB galaxies with SKA. We also found (Fig.~7.14) that, for energies between 100 GeV and 1 TeV, it might be possible for CTA to observe the $\gamma$-ray emission with the 50 hours of sensitivity curve. But, like SKA, the same conclusion also holds for CTA: a simulation study is needed to examine whether it would be possible for CTA to detect the emission resulting from the DM annihilation/decay. From our work, we can ultimately conclude that the $\gamma$-ray data obtained from the Fermi-LAT could not impose strong $<\sigma v>$ limits on the WIMP models. We found that the radio signal possibly originating from WIMP annihilation is quite competitive with the $\gamma$-ray emission observed by the Fermi-LAT. Our analysis, at best, indicates that, for studying the $\gamma$-ray and radio signals from the LSB galaxies, SKA and CTA would play a very significant role in future. \\
\noindent In our last chapter, we considered the newly discovered UFDs detected by the observations performed by Pan-STARRS, DES and some other spectroscopic surveys \cite{Bhattacharjee:2020phk}. In recent times, several studies have tried to derive strong limits on the annihilation $\langle \sigma v \rangle$ from gamma-ray and radio data \cite{Hoof:2018hyn, DiMauro:2019frs, Fermi-LAT:2016uux, Beck:2019ukt, Regis:2017oet, Regis:2014tga}. Using both the gamma-ray (detected by Fermi-LAT) and the radio data (detected by VLA and GMRT), we searched for the WIMP annihilation signal in 15 UFDs. We also predicted the possible spectra associated with the radio emission and checked whether it would be possible for SKA to detect any emission from them \cite{Bhattacharjee:2020phk}. With eleven years of Fermi-LAT data, we did not detect any significant emission from the locations of the UFDs. Thus, we then derived the upper limits on $\langle\sigma v\rangle$ as a function of DM mass for our chosen DM annihilation channels. We estimated the limits for 12 UFDs; for Triangulum-II, Hydra-II and Tucana-III, we only have upper limits on the J-factor, so they could not provide any robust limits on the ($m_{DM}$, $\langle\sigma v\rangle$) parameter space \cite{Bhattacharjee:2020phk}. For the gamma-ray data, Horologium I provided the most stringent constraints, but our obtained limits strongly depend on the distribution of DM. Using the NFW profile, we derived most of the results. Besides, we also performed a comparative study between the NFW, Burkert and ISO profiles. In view of the gamma-ray analysis, the Burkert profile imposed the strongest limits on $\langle \sigma v\rangle$, while the ISO profile imposed the weakest \cite{Bhattacharjee:2020phk}. \\
\noindent In view of the synchrotron emission, we considered the radio-flux limits observed by GMRT and VLA and predicted the respective $\langle\sigma v\rangle$ upper limits for the $b\bar{b}$, $\tau^+ \tau^-$ and $\mu^+ \mu^-$ final states. We compared our radio limits with the limits obtained from the gamma-ray data and found that the VLA telescope has the potential to impose more stringent limits than Fermi-LAT. We derived the possible synchrotron fluxes in the UFDs for a wide range of frequencies, i.e., between 10 MHz and 100 GHz, and compared these with the sensitivity curves of SKA. We found that, for a 200 GeV DM mass and the $b \bar b$ final state, it might be possible for SKA to detect the radio emission from our considered UFDs, even with its 10 hours of sensitivity curve. For the $\tau^+\tau^-$ and $\mu^+ \mu^-$ final states, the emission could be detected with the 100 hours of exposure curve of SKA. On the other hand, for comparatively heavy DM masses (say $\sim$ 2 TeV), the synchrotron spectrum would become harder, and thus a longer observation time would be necessary to detect the radio signal. We also need to remember that the synchrotron fluxes depend strongly on several astrophysical components, such as the magnetic field, the diffusion coefficient, the distance, etc., but, due to insufficient observations, these values are not very precise. Thus, in order to predict the synchrotron fluxes in UFDs, we must have the most accurate information on the astrophysical parameters, especially the magnetic field and the diffusion coefficient. We checked how the synchrotron flux in Tucana-II varies with $B$, $D_0$ and $\gamma_D$ for a DM mass of 200 GeV and the $b\bar{b}$ annihilation channel, and we noticed that the synchrotron emission strongly depends on these. Besides, the emission is also controlled by the choice of the DM density distribution in the UFDs; we found that, for Tucana II, the NFW density profile produces the maximum amount of radio flux among all three density profiles. Our considered UFDs possess large uncertainties in $r_{1/2}$, $d$ and $\sigma$, and the uncertainties in these astrophysical parameters can also affect the synchrotron emission arising from the UFDs. We performed the respective checks and found that the largest contribution comes from the uncertainty in $\sigma$. \\
\noindent Despite the dependence on these uncertainties, we can safely conclude that a very intriguing aspect of the indirect search for the DM signal from UFDs has been discussed in our study. In Fig.~8.13, we compared the most stringent limits obtained from the VLA sky-survey with the best limits obtained from the Fermi-LAT data for the three final states. From Fig.~8.13, we can notice that, for the $\mu^+ \mu^-$ and $\tau^+ \tau^-$ final states, VLA imposed better limits than Fermi-LAT, while for the $b\bar{b}$ final state Fermi-LAT provided stronger limits than VLA \cite{Bhattacharjee:2020phk}. In view of the indirect DM search, we expect that the next-generation $\gamma$-ray telescope, CTA, would play a very crucial role. CTA would have the deepest sensitivity over a very wide range of energies \cite{CTAConsortium:2018tzg} and would be able to probe the thermal $\langle \sigma v \rangle$ rate for several DM-rich targets. Along with CTA, in the radio sky, SKA is expected to become the most sensitive radio telescope in the future. Besides, other radio facilities, such as LOFAR, MeerKAT and ASKAP, would be complementary to CTA and SKA. We can, at best, expect that all of these next-generation telescopes would be able to solve several crucial aspects of dark matter physics.
\section{Triangulum-II}
\noindent Over the last two decades, the Sloan Digital Sky Survey (SDSS)
\cite{York:2000gk} has discovered many new members of the Milky Way satellite
population. They are ultra-faint and have very high mass-to-light ratios. Thus,
we can assume that they might be very rich in DM content
\cite{Willman:2004kk, Zucker:2006bf, Belokurov:2006ph, Irwin:2007jz,
Walsh:2007tm, Strigari:2007at}. Over the past few years, the Panoramic Survey
Telescope and Rapid Response System (Pan-STARRS) \cite{Kaiser:2002zz} and the
Dark Energy Survey (DES) \cite{Abbott:2005bi} have observed a new population of
dSphs \cite{Laevens:2015una,Bechtol:2015cbp,Kim:2015ila,Drlica-Wagner:2015ufc}
around our Milky Way. Triangulum-II (hereafter referred to as Tri-II) is one of
these newly discovered dSphs \cite{Biswas:2017meq}, detected by the Pan-STARRS
survey \cite{Laevens:2015una}. This survey has concluded that Tri-II is either
an ultra-faint, DM dominated dwarf galaxy or a globular cluster. Several studies
\cite{Genina:2016kzg, Hayashi:2016kcy, Biswas:2017meq} have claimed that
Tri-II may be a very promising target for indirect DM detection.
In this chapter, we describe our findings for Tri-II \cite{Biswas:2017meq}.\\
\noindent For our study, we have considered Tri-II as a metal-poor galaxy with a
large mass-to-light ratio \cite{Biswas:2017meq}. But, so far, only a few member
stars of Tri-II have been detected, and their exact number is still uncertain
\cite{Kirby:2015bxa, Kirby:2017cyb}. Refs.~\cite{Kirby:2015bxa, Martin:2016cyb}
had observed nearly $6$ member stars in Tri-II, while the most recent study
\cite{Kirby:2017cyb} has discovered the existence of $13$ stars, along with a
very low velocity dispersion, $\sigma_{\rm v}<4.2~{\rm {km}}~{\rm s}^{-1}$ and
$<3.4~{\rm {km}}~{\rm s}^{-1}$ at $95\%$ and $90\%$ C.L., respectively.
In Table~5.1, we have listed some important properties of Tri-II.
\begin{table}[h!]
\begin{center}
\caption{Properties of Triangulum-II.}
\begin{tabular}{|p{4 cm}|p{6 cm}|p{4 cm}|}
\hline
\hline
Property & Value & Reference \\
\hline
Galactic longitude & $\rm{141.4^{\circ}}$ & \cite{Laevens:2015una} \\
\hline
Galactic latitude & $\rm{-23.4^{\circ}}$ & \cite{Laevens:2015una} \\
\hline
Galactocentric distance & $\rm{36_{-2}^{+2}~kpc}$ &
\cite{Genina:2016kzg,Kirby:2015bxa}\\
\hline
2D half light radius ($\rm{r_{h}}$) & $\rm{34_{-8}^{+9}~pc}$ &
\cite{Kirby:2015bxa,Kirby:2017cyb} \\
\hline
Velocity relative to galactic standard of rest (GSR) ($\rm{v_{GSR}}$) & -261.7
km $\rm{s^{-1}}$ & \cite{Kirby:2017cyb}\\
\hline
Mean heliocentric velocity $~<\rm{v_{helio}}>$ & $\rm{-381.7\pm2.9~km~s^{-1}}$
& \cite{Kirby:2017cyb}\\
\hline
Stellar Velocity Dispersion ($\rm{\sigma_{v}}$) &
$<~\rm{3.4~km~s^{-1}~(90\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
& $<~\rm{4.2~km~s^{-1}~(95\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
\hline
Mass within 3D half-light radius \Big($\rm{\frac{M_{1/2}}{M_{\odot}}}$\Big) &
$\rm{<~3.7~\times~10^{5}~(90\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
& $\rm{<~5.6~\times~10^{5}~(95\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
\hline
Mass-to-light ratio within 3D half-light radius
\Big($\rm{(M/L_{v})_{1/2}}$\Big)
& $\rm{<~1640~M_{\odot}~L_{\odot}^{-1}~(90\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
& $\rm{<~2510~M_{\odot}~L_{\odot}^{-1}~(95\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
\hline
Density within 3D half-light radius $\rm{\rho_{1/2}}$ &
$\rm{<~2.2~M_{\odot}~pc^{-3}~(90\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
& $\rm{<~3.3~M_{\odot}~pc^{-3}~(95\%~C.L.)}$ & \cite{Kirby:2017cyb}\\
\hline
Metallicity ([$\rm{Fe/H}$]) & $\rm{-2.24\pm0.05}$ & \cite{Kirby:2017cyb}\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\noindent In Table~5.1, $M_{\odot}$ and $L_{\odot}$ denote the mass and the
luminosity of the Sun, respectively. The values of $\rm{M_{1/2}}$,
$\rm{(M/L_{v})_{1/2}}$ and $\rm{\rho_{1/2}}$ have been taken from
refs.~\cite{Kirby:2015bxa,Kirby:2017cyb}.
For our study, we have assumed Tri-II to be spherically symmetric (because of its
low ellipticity) and in a state of dynamical equilibrium \cite{Biswas:2017meq}.
From the observational study, ref.~\cite{Kirby:2017cyb} had obtained that Tri-II
has a large velocity with respect to the galactic standard of rest (GSR).
The observational studies (from \cite{Kirby:2015bxa, Kirby:2017cyb})
suggested that Tri-II might be affected by the tidal field of the Milky Way
\cite{Kirby:2015bxa}. Several studies have also suspected an association of
Tri-II with the Triangulum-Andromeda halo substructures \cite{Laevens:2015una,
Majewski:2004zi} and with the PAndAS stream \cite{Martin:2014dja}, which might
cause a tidal disruption of Tri-II. Indeed, the above-mentioned observations do
not provide any concrete proof that Tri-II is in dynamical equilibrium
\cite{Biswas:2017meq}. But any tidally disrupting galaxy would show a high
ellipticity, whereas Tri-II has a low ellipticity \cite{Biswas:2017meq}.
Moreover, the tidal radius of Tri-II is nearly three times its 3D half-light
radius, and from that observational data we can also infer that the shape of
Tri-II is insulated from Galactic tides. Thus, the high mass-to-light ratio and
the large velocity dispersion indicate a high concentration of DM \cite{Biswas:2017meq}.
\section{The \textit{Fermi}-LAT Data Analysis of Tri-II}
\noindent For examining the possible signal from Tri-II, we have analysed the
gamma-ray data observed by the Fermi-LAT \cite{Biswas:2017meq}.
For our analysis, we have used the \texttt{Fermi
ScienceTools} version \texttt{v10r0p5} (released on June~24,
2015)\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}}.
Here, we have used the fully reprocessed Pass8 dataset
\footnote{\tiny{
https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8{\_}usage.html}
}.
Such Pass8 processed data provide an improved event reconstruction, a wider
energy range, a better energy resolution and a significantly increased effective
area, especially for energies below 100 MeV. We have chosen a
$10^{\circ}$ radius of interest
(ROI)\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data{\_}preparation.html}} \cite{Biswas:2017meq}.
With `gtselect', we have applied an energy cut of $0.1 \le E \le 50$~GeV on the
photon events \cite{Biswas:2017meq}. We have chosen this energy range to avoid
the possible calibration uncertainties below 100 MeV and the background
contamination at high energies. To avoid albedo contamination, we have used a
zenith-angle cut at $\theta~=~90^{\circ}$ \cite{Biswas:2017meq}. Next, we have
performed the `binned likelihood' analysis, implemented in the
\texttt{ScienceTools} \cite{Cash:1979vz,Mattox:1996zz}, on our reconstructed
dataset\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned{\_}likelihood{\_}tutorial.html}}\cite{Biswas:2017meq}.
For our analysis, we have used \textit{event class 128} and \textit{event type 3}
\cite{Biswas:2017meq}. \textit{Event class 128} provides a good sensitivity to
point sources and moderately extended sources, while \textit{event type 3}
selects the $e^{+}$,~$e^{-}$ pair conversions that occur in both the FRONT and
the BACK tracker layers of the Fermi-LAT. Along with the above-mentioned
selections, we have adopted $\rm{P8R2\_SOURCE\_V6}$ as the
IRF\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone{\_}LAT{\_}IRFs/IRF{\_}overview.html}}\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/}} \cite{Biswas:2017meq}.\\
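\noindent As an illustration, a minimal sketch of the above event selection, using the \texttt{gt\_apps} Python interface shipped with the \texttt{Fermi ScienceTools}, is given below; the file names are hypothetical placeholders and the quoted (RA, Dec) of Tri-II are approximate values.
\begin{verbatim}
from gt_apps import filter, maketime

# gtselect: ROI, energy range and zenith-angle cut described above.
filter['evclass'] = 128              # event class 128 (SOURCE)
filter['evtype'] = 3                 # FRONT + BACK conversions
filter['ra'], filter['dec'] = 33.32, 36.18   # approximate Tri-II position
filter['rad'] = 10                   # 10 deg ROI
filter['emin'], filter['emax'] = 100, 50000  # 0.1 - 50 GeV (in MeV)
filter['zmax'] = 90                  # zenith-angle cut against albedo
filter['infile'] = '@events.txt'     # hypothetical photon-file list
filter['outfile'] = 'TriII_filtered.fits'
filter.run()

# gtmktime: select good time intervals with the standard quality filter.
maketime['scfile'] = 'spacecraft.fits'
maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'no'
maketime['evfile'] = 'TriII_filtered.fits'
maketime['outfile'] = 'TriII_filtered_gti.fits'
maketime.run()
\end{verbatim}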
\noindent Along with all the sources within the $10^{\circ}$ ROI, we have added a
model for Tri-II to our source model \cite{Biswas:2017meq}.
As there was no pre-existing study of Tri-II by the Fermi-LAT collaboration, we
have modelled Tri-II with a power-law spectrum.
The spectral and spatial models of all the remaining sources in the ROI have
been taken from the 3FGL catalog. Moreover, to account for the galactic
and extragalactic diffuse emission, we have also included the galactic diffuse
model and the isotropic diffuse component in the source model
($gll{\_iem\_v05}.fits$ $\&$ $iso{\_source\_v05}.txt$)
\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}.
During the maximum-likelihood fitting procedure, the normalisation parameters of
these diffuse components are kept free. In that process, the spectral parameters
of all the sources within $5^{\circ}$ of the location of Tri-II are left free,
while the parameters of the remaining sources in the ROI are kept fixed at their
values from the 3FGL catalog. We have also kept the position of
Tri-II fixed \cite{Biswas:2017meq}.
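\noindent A minimal sketch of the binned likelihood fit itself, using the \texttt{pyLikelihood} interface of the \texttt{Fermi ScienceTools}, is shown below; all file names are hypothetical placeholders, and the source maps, livetime cube and exposure map are assumed to have been prepared beforehand with the standard tools.
\begin{verbatim}
from BinnedAnalysis import BinnedObs, BinnedAnalysis

obs = BinnedObs(srcMaps='TriII_srcmaps.fits',
                expCube='TriII_ltcube.fits',
                binnedExpMap='TriII_expmap.fits',
                irfs='P8R2_SOURCE_V6')

# XML model: Tri-II as a power law, diffuse normalisations left free.
like = BinnedAnalysis(obs, 'TriII_model.xml', optimizer='NewMinuit')
loglike = like.fit(verbosity=0)      # returns the minimised -log(likelihood)

print('TS(Tri-II) =', like.Ts('Triangulum-II'))
\end{verbatim}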
\subsection{Results from the Power-law Modelling}
\begin{figure}
\centering
\subfigure[]
{\includegraphics[width=0.8\linewidth]{figures/fig1a.pdf}}
\subfigure[]
{\includegraphics[width=1.0\linewidth]{figures/fig1b.pdf}}
\caption{(a) The spectral fit to the observed data counts and (b) the residual
plot for the location of Tri-II. We have modelled Tri-II with a power-law
spectrum with $\Gamma = 2$. In Figure 5.1(a), the solid red curve displays the
best-fit total spectrum, along with the corresponding LAT-observed data
points (in green); the upper-most solid blue and green curves display the
galactic diffuse background and the isotropic background component, respectively.}
\end{figure}
\noindent We have modelled Tri-II with a power-law spectrum, and the differential
photon flux obtained from Tri-II would be \cite{Abdo:2010ex, Biswas:2017meq}:
\begin{equation}
\rm{\frac{dN}{dA dE dt} = N_{0} \Big(\frac{E}{E_{0}}\Big)^{-\Gamma}},
\end{equation}
\noindent Here $dN$ is the number of photons with reconstructed energies between
$E$ and $E + dE$, incident on an elemental area $dA$ of the detector in an
elemental time interval $dt$. In Eq.~(5.1), $\Gamma$ is the spectral index and
$N_{0}$ is the normalisation parameter. We have fixed the energy scale $E_{0}$
at $100~\rm{MeV}$ \cite{Biswas:2017meq}.
During the model fitting of Tri-II, we have considered five different values of
$\Gamma$, i.e., 1, 1.8, 2, 2.2 and 2.4, and have repeated the binned likelihood
analysis for each $\Gamma$ value \cite{Biswas:2017meq}.
We have used $\Gamma = 1$ because of its connection with the DM annihilation
models (\cite{Essig:2009jx, Biswas:2017meq}), while the other four values of
$\Gamma$ are considered to obtain constraints on general
astrophysical source spectra.\\
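\noindent As a simple illustration of Eq.~(5.1), the sketch below (in Python, with a hypothetical normalisation $N_{0}$) integrates the differential power-law flux over our analysis energy range to obtain a photon flux of the same kind as quoted later in Table 5.3.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

N0 = 1.0e-12      # hypothetical normalisation (cm^-2 s^-1 MeV^-1)
E0 = 100.0        # energy scale fixed at 100 MeV
gamma = 2.0       # spectral index

dnde = lambda E: N0 * (E / E0) ** (-gamma)

# Photon flux (cm^-2 s^-1) integrated between 100 MeV and 50 GeV.
flux, _ = quad(dnde, 100.0, 5.0e4)
print(flux)
\end{verbatim}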
\noindent From our power-law modelling, we have determined the best-fit values of
$N_{0}$, along with the normalisation parameters of the isotropic and the
galactic diffuse models, for each $\Gamma$ \cite{Biswas:2017meq}. In Fig.~5.1(a),
we have displayed the spectral fits to the data from all the sources within the
ROI, along with the two diffuse background models, for $\Gamma~=~2$
\cite{Biswas:2017meq}. In this figure, the sum of the best-fit spectra of all
sources within the $10^{\circ}$ ROI is denoted by the top red curve, while the
best-fit spectra of the galactic and the isotropic components are shown by the
top blue and the top green curves, respectively. In Fig.~5.1(b), we have shown
the residual plot of Tri-II for $\Gamma~=~2$ \cite{Biswas:2017meq}. \\
\noindent In Table~5.2, we have listed our best-fit values of $N_{0}$, their
statistical errors and the $\rm{TS}$ values for all spectral indices
\cite{Biswas:2017meq}. \\
\noindent From Table~5.2, we can see that, for each spectral index, the
normalisation constant $N_{0}$ of Tri-II is smaller than the statistical error
obtained from the fitting procedure \cite{Biswas:2017meq}, and the corresponding
TS value is less than unity. Hence, from Table~5.2, we can conclude that the LAT
has not detected any $\gamma$-ray signal from Tri-II \cite{Biswas:2017meq}.\\
\noindent As no significant emission has been detected by Fermi-LAT from the
direction of Tri-II, we have then derived the upper limit on the possible
$\gamma$-ray flux from the location of Tri-II \cite{Biswas:2017meq}. With the
profile likelihood method \cite{Barbieri:1982eh, Rolke:2004mj}, we have
determined the $\gamma$-ray flux upper limits for the full dataset, where we
have considered the full range of reconstructed photon energies for our
analysis.
The upper limits on $N_{0}$ are evaluated at the $95 \%$ confidence level (C.L.);
in this procedure, $N_{0}$ and the normalisation parameters of the galactic and
isotropic models are fitted to the LAT-obtained spectrum at each step. This
procedure continues until the difference of the logarithm of the likelihood
function reaches the value $1.35$ \cite{Abdo:2010ex}, corresponding to a
one-sided $95\%$ C.L. Next, we have applied the Bayesian method to our dataset
(\cite{Helene:1990yi}). This method is already implemented in the \texttt{Fermi
ScienceTools} \cite{Abdo:2010ex} and is used for obtaining a more appropriate
value of the upper limit on the $\gamma$-ray flux at $95 \%$ C.L.
\cite{Biswas:2017meq}. In Table 5.3, we have displayed the upper limits on the
$\gamma$-ray flux for all five spectral indices \cite{Biswas:2017meq}. From
Table 5.3, we can observe that the gamma-ray flux upper limit for $\Gamma = 1$
is about $16$ times lower than that for $\Gamma = 2.4$ \cite{Biswas:2017meq}.
This result is in agreement with the results derived in Ref.~\cite{Abdo:2010ex}.\\
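\noindent A minimal sketch of the two upper-limit estimates described above, using the \texttt{UpperLimits} and \texttt{IntegralUpperLimit} modules shipped with the \texttt{Fermi ScienceTools}, is given below; here \texttt{like} is assumed to be the configured binned-likelihood object from the previous sketch, and 'Triangulum-II' is the source name used in the XML model.
\begin{verbatim}
from UpperLimits import UpperLimits
import IntegralUpperLimit

# Profile-likelihood limit: step up the normalisation until
# -log(likelihood) increases by 1.35 (one-sided 95% C.L.).
ul = UpperLimits(like)
flux_ul, norm_ul = ul['Triangulum-II'].compute(emin=100, emax=50000,
                                               delta=1.35)

# Bayesian 95% C.L. limit (Helene 1990 prescription).
bayes_ul, results = IntegralUpperLimit.calc_int(like, 'Triangulum-II',
                                                cl=0.95, emin=100,
                                                emax=50000)
print(flux_ul, bayes_ul)
\end{verbatim}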
\begin{table}[!h]
\begin{center}
\caption{Best-fit values of the normalisation parameter of Tri-II and the TS values for the five values of $\Gamma$.}
\begin{tabular}{|p{3cm}|p{5cm}|p{5cm}|}
\hline
\hline
Spectral~Index~($\Gamma$) & $\rm{N_{0} \times 10^{-5}}$ & Test Statistic (TS)
value \\
\hline
$1$ & $(1.41\pm2.75)\times10^{-9}$ & 0.41 \\
\hline
$1.8$ & $(6.66\pm11.49)\times10^{-8}$ & 0.44 \\
\hline
$2$ & $(1.06\pm2.41)\times10^{-7}$ & 0.23 \\
\hline
$2.2$ & $(1.88\pm5.53)\times10^{-7}$ & 0.02 \\
\hline
$2.4$ & $(1.41\pm2.75)\times10^{-11}$ & $-7.45\times 10^{-8}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!h]
\caption{Estimated $\gamma$-ray flux upper limits at $95\%$ C.L.}
\begin{center}
\begin{tabular}{|p{3cm}|p{8cm}|}
\hline
\hline
Spectral~Index~($\Gamma$) &
Flux~upper~limits~at~$\rm{95\%~C.L.~(cm^{-2}~s^{-1})}$ \\
\hline
1 & $8.29\times10^{-11}$ \\
\hline
1.8 & $4.55\times10^{-10}$ \\
\hline
2 & $7.14\times10^{-10}$ \\
\hline
2.2 & $1.04\times10^{-9}$ \\
\hline
2.4 & $1.37\times10^{-9}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{J-factor for Tri-II}
\noindent For this work, we have modelled the DM distribution of Tri-II with the
NFW density profile \cite{Navarro:1996gj}. For estimating the J-factor of
Tri-II, we have used the following simple analytical relation derived in
ref.~\cite{Evans:2016xwx}; the J-factor estimated with this analytical formula
is in good agreement with the numerically estimated values \cite{Biswas:2017meq}:
\begin{equation}
J \approx\frac{25}{8G^{2}} \frac{\sigma_{{\rm{v}}}^{4}\,\theta}{d\,r_{h}^{2}} .
\end{equation}
Here, $G$ is the gravitational constant, $\sigma_{{\rm{v}}}$ is the velocity
dispersion, $d$ is the distance and $r_{h}$ is the 2D projected half-light
radius. For Tri-II, we have considered $\theta = 0.15^{\circ}$
\cite{Genina:2016kzg}.\\
\noindent In Table 5.4, we have shown the two values of the J-factor
\cite{Biswas:2017meq} corresponding to the two different values of
$\sigma_{{\rm{v}}}$ for Tri-II \cite{Kirby:2017cyb}.
\begin{table}[!h]
\begin{center}
\caption{Parameters to calculate the J-factors.}
\begin{tabular}{|p{2cm}|p{3cm}|p{2cm}|p{2cm}|p{4cm}|}
\hline
\hline
d (kpc) \cite{Laevens:2015una} & $\sigma_{{\rm{v}}}$ ($\rm{km~s^{-1}}$)
\cite{Kirby:2017cyb}
& $r_{h}$ (pc) \cite{Kirby:2017cyb}& $\theta$ (deg) \cite{Genina:2016kzg}&
J-factor from
Eq.~(5.2) ($\rm{{GeV^{2}~cm^{-5}}}$)\\
\hline
$\rm{30\pm2}$ & $4.2~(95\%~\rm{C.L.})$ & 34 & $0.15^{\circ}$ &
$\rm{0.17\times10^{20}}$ \\
\hline
$\rm{30\pm2}$ & $3.4~(90\%~\rm{C.L.})$ & 34 & $0.15^{\circ}$ &
$\rm{0.75\times10^{19}}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
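\noindent The analytical estimate of Eq.~(5.2) is straightforward to reproduce; the sketch below (in Python, using only standard constants and unit conversions) recovers the J-factor values quoted in Table 5.4.
\begin{verbatim}
import numpy as np

G = 6.674e-11                    # m^3 kg^-1 s^-2
KPC, PC = 3.0857e19, 3.0857e16   # metres
KG2_TO_GEV2 = (5.6096e26) ** 2   # 1 kg = 5.6096e26 GeV/c^2

def j_factor(sigma_v_kms, d_kpc, rh_pc, theta_deg):
    """J-factor of Eq. (5.2) in GeV^2 cm^-5."""
    sigma = sigma_v_kms * 1.0e3            # m/s
    d, rh = d_kpc * KPC, rh_pc * PC        # m
    theta = np.radians(theta_deg)
    j_si = 25.0 / (8.0 * G**2) * sigma**4 * theta / (d * rh**2)
    return j_si * KG2_TO_GEV2 * 1.0e-10    # kg^2 m^-5 -> GeV^2 cm^-5

print(j_factor(4.2, 30.0, 34.0, 0.15))   # ~0.17e20, cf. Table 5.4
print(j_factor(3.4, 30.0, 34.0, 0.15))   # ~0.75e19, cf. Table 5.4
\end{verbatim}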
\noindent For our J-factor calculation, we did not consider the contribution
from substructures in Tri-II \cite{Biswas:2017meq}, which can increase the value
of the J-factor by a few factors \cite{Martinez:2009jh,Abdo:2010ex}.\\
\noindent We would also like to point out that, in our present calculation, we
did not take into account the effect of the Sommerfeld enhancement
\cite{ArkaniHamed:2008qn,Abdo:2010ex, Feng:2010zp}. Such an enhancement can
increase the $\gamma$-ray flux because of the dependence of the annihilation
cross-section (i.e., $<\sigma~v>$) on the relative velocity of the particles,
which in dSphs is much lower than the relative velocity of the thermal relics at
freeze-out. Thus, the value of $<\sigma~v>$ would differ by a few factors if we
considered the Sommerfeld enhancement; it can increase the cross-section by a
factor of $7$ to $90$ for WIMP masses between $100$ GeV and $3$ TeV
\cite{Feng:2010zp}. To make a conservative approach, we have not included any
such effect in our calculation.
\section{Constraints on Annihilation Cross-section}
\noindent In this section, we examine the possible $\gamma$-ray emission
resulting from DM annihilation in Tri-II. For this purpose, we have determined
the $95\%$ C.L. upper limits on the $\gamma$-ray flux as a function of the WIMP
mass for some specific annihilation channels \cite{Biswas:2017meq}. \\
\noindent To estimate the flux upper limits at 95$\%$ C.L. and the corresponding
upper limits on the thermally averaged pair-annihilation cross-section
$<\sigma v>$ of the WIMPs as a function of the plausible WIMP mass
($\rm{m_{DM}}$), we have used the Bayesian approach (\cite{Helene:1990yi}),
as this is more sensitive~\cite{Rolke:2004mj, Barbieri:1982eh} than the profile
likelihood method for low statistics \cite{Biswas:2017meq}. \\
\noindent For estimating the plausible flux upper limits and the limits on the
$<\sigma v>$ of WIMP pair-annihilation, we have fitted the
$\gamma$-ray spectrum arising from the DM-dominated dSph \cite{Biswas:2017meq}
with an MC-simulated DM self-annihilation spectrum, the DMFitFunction
\cite{{Jeltema:2008hf}}.\\
\noindent The functional form of the DMFitFunction
(a modified form of Eq.~2.5) can be written as\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source{\_}models.html}:
\begin{equation}
\label{eqn:dm_spectrum}
\frac{dN}{dE} (E,\Delta \Omega) = <\sigma v> J(\Delta \Omega) \Big(B~F(M_{DM},C_{0})
+ (1 - B)~F(M_{DM},C_{1})\Big)
\end{equation}
\noindent In Eq. 5.3, B, $C_{0}$ and $C_{1}$ denote the branching ratio, the
primary decay channel and the secondary decay channel,
respectively. The DMFitFunction is implemented in the Fermi
\texttt{ScienceTools} as the DMFit package
\cite{Jeltema:2008hf}, and the values of F($M_{DM}$,C) are provided by
the Fermi-LAT team. For the J-factor, we have taken the values from Table~5.4 \cite{Biswas:2017meq}.\\
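\noindent To make the role of the branching ratio concrete, the sketch below (in Python) shows how B mixes the primary and secondary channels in Eq.~(5.3); the two F functions are purely hypothetical stand-ins, whereas in the actual DMFit package the F($M_{DM}$,C) tables are provided by the Fermi-LAT team.
\begin{verbatim}
import numpy as np

def F_primary(E, m_dm):    # hypothetical stand-in for F(M_DM, C0)
    x = E / m_dm
    return np.where(x < 1.0, x**-1.5 * np.exp(-8.0 * x), 0.0)

def F_secondary(E, m_dm):  # hypothetical stand-in for F(M_DM, C1)
    x = E / m_dm
    return np.where(x < 1.0, x**-1.0 * np.exp(-4.0 * x), 0.0)

def dnde(E, m_dm, sigma_v, J, B):
    """Eq. (5.3): B weights the primary channel, (1 - B) the secondary."""
    return sigma_v * J * (B * F_primary(E, m_dm)
                          + (1.0 - B) * F_secondary(E, m_dm))

# e.g. the 80% b bbar + 20% tau+ tau- final state for a 100 GeV WIMP:
E = np.logspace(-1, 2, 50)                      # GeV
spec = dnde(E, 100.0, 3.0e-26, 1.7e19, 0.8)
\end{verbatim}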
\noindent For this work, we have considered five supersymmetry-motivated
pair-annihilation final states, namely $100\%$ b$\rm{\bar{b}}$, $100\%$
$\rm{\tau^{+} \tau^{-}}$,
$80\%$ b$\rm{\bar{b}} + 20\%$ $\rm{\tau^{+} \tau^{-}}$, $100\%$ $\rm{\mu^{+}
\mu^{-}}$ and $100\%$ $\rm{W^{+} W^{-}}$
\cite{Jungman:1995df}. These annihilation channels are
particularly favoured when we consider the neutralino as the WIMP
candidate \cite{Biswas:2017meq}. Though supersymmetry theory prefers the
neutralino as a valid cold DM candidate, our obtained results would be generic
to all theoretical WIMP models \cite{Biswas:2017meq}. \\
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,clip,angle=0]{figures/Flux_new.pdf}
\caption{The $\gamma$-ray flux upper limits of Tri-II for
the five WIMP annihilation final states.}
\end{center}
\end{figure}
\noindent In Fig.~5.2, we have shown the variation of the flux upper limits at
$95\%$ C.L. with DM mass for all five annihilation channels.
From Fig.~5.2, we can observe that, for all five final annihilation states, the
spectrum from WIMP annihilation shifts to higher energies with increasing mass
\cite{Serpico:2009vz}, and so we can expect the variation of the flux upper
limits to be comparatively small at high $m_{\rm{DM}}$ \cite{Biswas:2017meq}.
Among the five final annihilation states, those producing a hard $\gamma$-ray
spectrum, i.e., the $100\%$ $\tau^{+}\tau^{-}$ and $100\%$ $\mu^{+} \mu^{-}$
final states, produce abundant photon fluxes especially at high energies, where
the diffuse background is comparatively clean \cite{Biswas:2017meq}. From
Fig.~5.2, we can also find that at $m_{\rm{DM}} \sim 1$~TeV the flux upper
limits for all five channels vary within a factor of $3$, whereas for low-mass
WIMPs this variation is more than an order of magnitude \cite{Biswas:2017meq}. \\
\noindent The results that we have shown in Fig.~5.2 depend only on the
annihilation final state and the DM mass \cite{Biswas:2017meq}. Thus, the flux
upper limits of Fig.~5.2 are generic to all DM theoretical models and do not
depend on any particular theory. Next, we have considered a few specific models
to study the annihilation cross-section of the WIMPs \cite{Biswas:2017meq}.\\
\begin{figure}
\centering
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/msugra.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/mssm.pdf}}
\caption{Predictions from (a) the mSUGRA and (b) the MSSM models in the
($m_{DM},<\sigma v>$) parameter plane. The red points denote the thermal relic
abundance related to the DM density, while the blue points denote a lower
thermal relic DM density. The red and blue points have been taken from Abdo
\textit{et al.}, 2010. In both figures, the $<\sigma~v>$ upper limits for the two
velocity dispersion ($\sigma$) values of Tri-II have been estimated at
$95\%$ C.L. The $<\sigma~v>$ upper limits for UMi have been estimated in the
same way. The $<\sigma~v>$ upper limits of Tri-II for the higher value of the
J-factor (Genina \textit{et al.}, 2016) are denoted by the yellow curves. For
UMi, we have used the parameter set mentioned in Abdo \textit{et al.}, 2010.}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=0.5\linewidth]{figures/AMSB_KALUZA.pdf}}
\caption{Predictions from the AMSB and the Kaluza-Klein UED models, shown in the ($m_{DM},<\sigma v>$) parameter plane. The $<\sigma~v>$ upper limits for two velocity dispersion ($\sigma$) values of Tri-II have been estimated at $95\%$ C.L., and the $<\sigma~v>$ upper limits for UMi have been estimated in the same way. The $<\sigma~v>$ upper limits of Tri-II for the higher value of the J-factor (Genina \textit{et al.}, 2016) are denoted by the yellow curves. For UMi, we have used the parameter set mentioned in Abdo \textit{et al.}, 2010.}
\end{figure}
\noindent In Figs.~5.3(a,b) and Fig.~5.4, we have compared the resulting LAT sensitivity for Tri-II for three J values with that of a classical dSph, Ursa Minor (UMi) \cite{Biswas:2017meq}.
In Figs.~5.3(a,b), we have considered two theoretically preferred DM models, namely minimal supergravity (mSUGRA) \cite{Chamseddine:1982jx} and the Minimal Supersymmetric Standard Model (MSSM) \cite{Chung:2003fi}. In the mSUGRA model, the supersymmetry-breaking parameters are defined at a high energy scale, generally of the order of the grand unification scale, $\sim 2 \times 10^{16}$~GeV. In the MSSM, all the supersymmetry-breaking parameters are specified at a low energy scale, i.e., in the electroweak range.
In Fig.~5.4, we have considered two other DM models, namely the anomaly-mediated supersymmetry breaking (AMSB) model \cite{Giudice:1998xp} and the Kaluza-Klein particle of universal extra dimensions (UED) \cite{Cheng:2002ej,Servant:2002aq,Hooper:2007qk}.
In the AMSB model, the supersymmetry-breaking scenario can lead to the production of wino-like neutralinos, or winos. Winos are the neutralino mass eigenstates corresponding to the supersymmetric fermionic partners of the SU(2) gauge bosons. At a wino mass of about $2$~TeV, the thermal relic density generated by the winos agrees with the observed DM density, while several non-thermal production scenarios can connect winos with lighter DM candidates of masses below 1~TeV \cite{Abdo:2010ex}.
In the Kaluza-Klein model, in its most minimal setup, the first-order excitation of the U(1) hypercharge gauge boson, known as $\rm{B}^{(1)}$, is connected with the DM candidate. For the Kaluza-Klein model, there exists a nearly exact relationship between $m_{DM}$ and the pair-annihilation $<\sigma v>$; moreover, from this model the thermal relic rate corresponding to the observed DM density is obtained for DM masses above $700$~GeV \cite{Servant:2002aq}. \\
\noindent In Figs.~5.3(a,b) and Fig.~5.4, we have compared the LAT sensitivity obtained for Tri-II in the ($\rm{m_{DM}}$, $<\sigma v>$) plane with the $<\sigma v>$ values predicted by the four DM models mentioned above, namely mSUGRA, MSSM, Kaluza-Klein DM in UED and wino-like DM in AMSB \cite{Biswas:2017meq}. In Figs.~5.3(a,b), the red points are associated with the thermal relic production, while the blue points are related to a lower thermal relic density \cite{Servant:2002aq}. All these assumptions have been taken from Ref.~\cite{Servant:2002aq}.
\noindent In Figs.~5.3(a,b) and 5.4, we have shown the upper limits on $<\sigma v>$ obtained for Tri-II for its two different values of velocity dispersion \cite{Kirby:2017cyb}, together with the predictions obtained from the mSUGRA, MSSM, AMSB and Kaluza-Klein UED models \cite{Biswas:2017meq}. Ref.~\cite{Kirby:2017cyb} derived an optimistic value of $\sigma_{{\rm{v}}}$ $<$ 4.2~km~s$^{-1}$ at $95\%$ C.L. and a conservative value of $\sigma_{{\rm{v}}}$ $<$ 3.4~km~s$^{-1}$ at $90\%$ C.L. \cite{Biswas:2017meq}. In addition, we have also compared the limits obtained for Tri-II with those for UMi \cite{Biswas:2017meq}. From Figs.~5.3(a,b), we can observe that, even for the velocity dispersion value of 3.4~km~s$^{-1}$, at $m_{DM}= 100$~GeV the constraints obtained on the mSUGRA and MSSM models with low thermal densities are nearly a factor of 2.5 lower than the limits obtained from UMi \cite{Biswas:2017meq}. For the velocity dispersion value of 4.2~km~s$^{-1}$, the constraints improve further by a factor of $\sim$ 6 \cite{Biswas:2017meq}. Moreover, Fig.~5.4 also indicates that for $\sigma_{{\rm{v}}}$ = 4.2~km~s$^{-1}$, the $<\sigma v>$ upper limits obtained from Tri-II disfavour the Kaluza-Klein UED and AMSB models for masses $\lesssim 230$~GeV and $\lesssim 375$~GeV, respectively \cite{Biswas:2017meq}. For $\sigma_{{\rm{v}}}$ = 3.4~km~s$^{-1}$, the limits obtained from Tri-II cannot provide any effective constraint on the Kaluza-Klein UED model, whereas they disfavour the AMSB model for masses $\lesssim 300$~GeV \cite{Biswas:2017meq}.
Here we also want to mention that for the $\gamma$-ray observation, the $100\%$ $b\bar{b}$ channel provides the most stringent limits; thus, in Figs.~5.3(a,b) and 5.4, we have only shown the results for the $100\%$ b$\rm{\bar{b}}$ channel \cite{Biswas:2017meq}.\\
\noindent From Figs.~5.3(a,b) and 5.4, we also note that for an even higher value of the J-factor, say $\rm{0.59\times10^{21}}~\rm{{GeV^{2}~cm^{-5}}}$ (obtained from ref.~\cite{Genina:2016kzg}), Tri-II would put more stringent limits on the theoretical DM models than those obtained from the J values associated with the velocity dispersion of Tri-II. At $m_{\rm{DM}}= 100$~GeV, the $<\sigma~v>$ limit corresponding to J~$= \rm{0.59\times10^{21}}~\rm{{GeV^{2}~cm^{-5}}}$ is nearly a factor of $\sim 30$ lower than the $<\sigma~v>$ limit obtained for J~$= \rm{0.17\times10^{20}}~\rm{{GeV^{2}~cm^{-5}}}$ \cite{Biswas:2017meq}. In addition, this high J value disfavours the Kaluza-Klein UED and AMSB models for masses $\lesssim 700$~GeV and $\lesssim 1000$~GeV, respectively \cite{Biswas:2017meq}.
\begin{figure}
\centering
{\includegraphics[width=0.5\linewidth]{figures/ursa_compare.pdf}}
\caption{Comparison between the $<\sigma~v>$ upper limits for the b$\rm{\bar{b}}$ annihilation channel obtained from our analysis and those obtained by the \textit{Fermi} collaboration for UMi (Ackermann \textit{et al.}, 2015).}
\end{figure}
\noindent We would also like to point out that, to check the reliability of our analysis method, we have compared our analysis result for UMi with the result obtained by the \textit{Fermi} collaboration \cite{Ackermann:2015zua}. For this comparison, we have followed the same data selection and analysis procedure as the \textit{Fermi} collaboration in ref.~\cite{Ackermann:2015zua}. We have shown the comparison in Fig.~5.5, from which we can conclude that our result is in good agreement with that of ref.~\cite{Ackermann:2015zua} \cite{Biswas:2017meq}. This study supports the reliability of the analysis procedure that we have followed in this work \cite{Biswas:2017meq}.
\section{Conclusions $\&$ Discussions}
\noindent In this work, we have analysed nearly seven years of $\gamma$-ray data from the direction of Tri-II with the \texttt{Fermi ScienceTools}, but could not observe any $\gamma$-ray excess from the location of Tri-II. Thus, we have derived the upper limit of the $\gamma$-ray flux for two possible scenarios.\\
\noindent Using the DM annihilation spectra, we have estimated the upper limits on $<\sigma v>$, assuming that the DM candidates are in the form of neutralinos. From our analysis, we have shown that for $\rm{\sigma_{v}= 4.2~km~s^{-1}}$ with the $100\%$ b$\rm{\bar{b}}$ channel, the $<\sigma v>$ limits obtained from Tri-II constrain the mSUGRA and the MSSM models with low thermal relic densities, whereas the limits constrain the Kaluza-Klein DM in UED and the AMSB models for masses $\lesssim 230$~GeV and $\lesssim 375$~GeV, respectively. \\
\noindent Even for the velocity dispersion at $90\%$ C.L., i.e., for $\rm{\sigma_{v} = 3.4~km~s^{-1}}$, Tri-II can constrain the MSSM model with low thermal relic densities and the AMSB model for masses $\lesssim 300$~GeV. Besides, from our work, we have found that the $\gamma$-ray data from Tri-II can put even stronger limits on the theoretical DM models than UMi. We would also like to point out that our results are entirely based on the standard NFW profile, and we do not consider any boost factor related to substructures in Tri-II or any Sommerfeld enhancement of the annihilation cross-section. Finally, from our work, we can state that with more precise observations of Tri-II, in the future we can establish Tri-II as a very strong target for indirect DM searches.
\section{Introduction to Dark Matter}
\begin{quote}{Harley White}
\noindent Dark matter seems to be \\
What isn't there to be seen \\
In between\\
What we see.\\
\noindent They dub it dark since you cannot detect it\\
Nor can they inspect it\\
With telescopy.\\
\noindent Yet, while it can't be described\\
It cannot be denied\\
For equations that irk\\
To work.\\
\end{quote}
\noindent \textit{In our Universe, all the visible things, i.e., planets, stars, asteroids and galaxies, constitute less than $5\%$ of the total Universe. So what are the remaining parts? What constitutes the rest of our Universe? This is the mystery and beauty of our Universe. Several astrophysical and experimental studies suggest that a large part of the Universe is composed of a strange substance known as `dark matter'.} \\
\noindent In human history, one of the most extraordinary intellectual achievements is the construction of the standard model (SM) of particle physics. Most of its particles were discovered during the second half of the 20th century. Experimentally and theoretically, we found that the SM is an answer to a question as old as civilization itself.\\
\noindent Now, the question is: what are the fundamental elements of matter? The SM gives us a very explicit representation of the fundamental constituents of all the matter that is detected in our terrestrial laboratories. We also have an exact theoretical argument, in detailed mathematical form, which explains how the fundamental particles act. In terms of understanding our Universe, one of the most revealing discoveries concerns baryonic matter, mostly in the form of protons and neutrons. But, unfortunately, it is not the dominant form of material in our Universe. Rather, a new mysterious form of ``invisible matter'', or ``dark matter (DM)'', fills our Universe, and from observational evidence it has been found to be roughly five times more abundant than ordinary matter. \\
\noindent Unfortunately, the particle content of the SM - the quarks, the leptons, the mediators of the interactions and the Higgs particle - cannot fill the role of DM. This is evident from cosmological observations.\\
\noindent Observational data accumulated over the past century have established that visible (baryonic) matter constitutes only 4.6$\%$ of the total substance in the Universe, while DM is theorized to account for 24$\%$ and dark energy for the remaining 71.4$\%$. In Fig.~1.1, we show the contents of baryonic matter, DM and dark energy. The invisible matter is termed DM because it neither emits nor absorbs any detectable electromagnetic radiation, and hence it is very difficult to study or identify. It is not possible to detect DM directly with any traditional telescope, but there is ample evidence for its existence \cite{Tegmark:2003uf}. Interestingly, the existence of the missing mass is robustly supported by macroscopic evidence, but the microscopic nature and composition of DM are still much debated. Many ongoing experiments are dedicated to directly detecting and studying the nature of DM candidates, but none has yet succeeded. To fully understand DM, we need to study several branches of physics and astronomy.\\
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/121236_NewPieCharts720.png}
\caption{\small{The multiple components that compose our universe.
Dark energy comprises 71.4$\%$ of the mass energy density of the universe, DM comprises 24$\%$, and
atomic matter makes up 4.6$\%$.}}
\end{figure}
\section{Brief Overview of Thesis}
\noindent The thesis is organized as follows:\\
\noindent To start with, in the following section~1.3 of this chapter, I present a brief introduction to the observational evidence for DM. Next, in section~1.4, I discuss the possible DM candidates. Then, in section~1.5, a brief review of the DM annihilation process is given. For my thesis, I study the DM signal resulting from pair annihilation; in that section, I introduce the reader to the theoretically favoured annihilation final states and to how we can obtain the emission (for example, $\gamma$-ray and radio emission) as an end product of the annihilation channels. In section~1.6, I briefly summarize the DM detection methodology. Direct detection, indirect detection and collider searches are the three popular methods to search for the signature of DM candidates; for my thesis, I focus solely on the indirect detection method.\\
\noindent Next, in Chapter 2, a brief introduction to the methods of multiwavelength searches for DM is given. The mathematical formalism, the notation and the other necessary concepts that I explain in this chapter are used later in my thesis. First, in section 2.1, I discuss the possible DM-dense regions. Later, in sections 2.3 and 2.4, I explain how we can study the electromagnetic radiation arising from DM annihilation over a wide range, from gamma-ray down to radio frequencies. \\
\noindent For my thesis, I concentrate on the DM signature from particular DM sources through indirect detection. For this purpose, we need dedicated and sensitive instruments. In Chapter 3, I describe the working principle of the Fermi Large Area Telescope (Fermi-LAT) in detail. Fermi-LAT is a gamma-ray space telescope that covers the entire celestial sphere, and in view of the indirect detection of a DM signal, it is one of the most sensitive gamma-ray telescopes. For most of my work, I have analysed the gamma-ray data observed by Fermi-LAT. The detector and its working principle are described in Chapter 3, along with a review of its performance.\\
\noindent Next, in Chapter 4, I give an overview of the likelihood function for Fermi-LAT data. The details of the mathematical formulation of the likelihood function and its methodology are explained in this chapter. Here, I also explain how to estimate upper limits when no signal is detected from the source. I use this formulation in my work to estimate the possible signature of DM annihilation. \\
\noindent In Chapter 5, we study Triangulum-II, a newly discovered ultra-faint dwarf galaxy which is assumed to be rich in DM. We examine the gamma-ray signal from the location of Triangulum-II and, from those data, check whether this galaxy can provide a stronger constraint on the annihilation rate than other well-studied sources. We show that Triangulum-II would provide very stringent limits on the theoretical DM models and the thermal annihilation rate, even better than some well-studied dwarf spheroidal galaxies (dSphs).\\
\noindent In Chapter 6, we study Tucana-II. Like Triangulum-II, Tucana-II is a DM-dominated satellite galaxy of our Milky Way. We examine the gamma-ray data from its location and, unlike for most of the dwarf galaxies, we observe a faint emission. We first study the maximum significance of this emission and check how this excess varies with the DM mass, the annihilation channels and the periods of exposure. Furthermore, we investigate the origin of such emission, and our study shows that the excess mostly comes from the Tucana-II location and is most likely related to DM annihilation.\\
\noindent Next, in Chapter 7, we study four low surface brightness (LSB) galaxies. Unlike in the earlier two chapters, for this work we use a multiwavelength approach to investigate the DM signature. LSB galaxies have very diffuse, low surface density stellar disks, and their extended HI (neutral hydrogen) rotation curves indicate the presence of very massive DM halos. We analyse the Fermi-LAT data for high energy gamma rays and the radio flux upper limits from the Very Large Array (VLA) at a frequency of 1.4 GHz to obtain upper limits on the annihilation cross-section $\langle\sigma v\rangle$ at $95\%$ confidence level (C.L.) in a model-independent way. From this study, we show that for LSB galaxies the radio limits on the annihilation rate would be competitive with the limits predicted from Fermi-LAT. We further discuss the projected sensitivity of the upcoming ground-based telescope, the Cherenkov Telescope Array (CTA), and of the radio telescope, the Square Kilometre Array (SKA), and investigate whether they can probe the radiation from LSB galaxies.\\
\noindent In Chapter 8, we consider 14 recently discovered ultra-faint dwarf galaxies and study the electromagnetic radiation arising from them over a wide range, from gamma-ray down to radio frequencies. We also check the $\langle\sigma v\rangle$ limits at $95\%$ C.L. for the gamma-ray and radio flux upper limits observed by Fermi-LAT, the Giant Metrewave Radio Telescope (GMRT) and the VLA. We study the uncertainty in the synchrotron and gamma-ray fluxes arising from various astrophysical parameters. Furthermore, we discuss the projected sensitivity of the SKA radio telescope in probing the synchrotron radiation from the aforementioned dSphs.
\section{Observational Evidence of Dark Matter}
\begin{quote}{Harley White}
\noindent Dark matter exerts gravitational pull.\\
It glues stars together, makes galaxies full.\\
Unlike normal matter it plays hide and seek\\
And so much of it's interactively weak...\\
\end{quote}
\noindent The very first observational hint of DM, or ``missing mass'', came in the early 1930s. In 1932, Jan Oort observed a bizarre motion of the stars of our Milky Way, which hinted at the presence of some form of non-luminous matter far more massive than anyone had ever predicted \cite{Oort:1932gat}. By studying the Doppler shifts of the moving stars in the galactic plane, Oort calculated their velocities. The calculation showed that the stars were moving quickly enough to escape the gravitational pull of the Milky Way. That made Oort suspect the presence of a massive pull in the galactic plane which could hold the stars on their orbits \cite{Oort:1932gat}. \\
\noindent Just one year after Oort's finding, in 1933, the Swiss astronomer F. Zwicky examined a much larger system, the Coma Cluster. From the Doppler effect, he measured the velocity dispersion of the member galaxies of the Coma cluster and noticed that they were moving much faster than expected from their luminous components \cite{Zwicky:1933gu, Zwicky:1937zza}. Zwicky measured the velocity dispersions of the member galaxies (i.e., their kinetic energy) and then, by employing the virial theorem, estimated the total mass of the Coma cluster: the virial theorem relates the total mass of the galaxy cluster to the averaged square of the velocities of its member galaxies. He observed that, in order to maintain the equilibrium of the Coma cluster, a large amount of ``Dunkle Materie'', or DM, must be present to explain theoretically the large velocity dispersion of the system. \\
\noindent The virial theorem denotes the following relation between
gravitational energy and kinetic energy. The expression is:
\begin{equation}
\langle T \rangle = - \frac{1}{2} \langle U \rangle
\end{equation}
\noindent The virial theorem (Eq.~1.1) states that, for a spherically symmetric system in equilibrium, the total kinetic energy ($T$) is equal to minus $\frac{1}{2}$ times the total gravitational potential energy ($U$) \cite{Cari_ena_2012}. Hence, if we know the kinetic energy of the system, we can calculate the gravitational potential energy, and then the total mass of the system can easily be estimated. If the mass obtained for the system is greater than the mass of the total luminous matter, then some invisible, i.e., non-luminous, matter must be present in the system; such invisible matter can interact only gravitationally. From the virial theorem, Zwicky found that the total mass of the cluster was about 400 times greater than the luminous mass. This result led him to propose that some source of invisible matter must be creating this difference with the observational estimate. Studies of the Virgo cluster soon produced very similar results \cite{Smith:1936mlg}.\\
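\noindent A back-of-the-envelope version of Zwicky's argument can be written in a few lines of Python. This is a minimal sketch with illustrative numbers (not Zwicky's data): assuming $2\langle T \rangle + \langle U \rangle = 0$ with $\langle T \rangle \sim \frac{3}{2} M \sigma^{2}$ and $\langle U \rangle \sim -GM^{2}/R$, one obtains $M \sim 3\sigma^{2}R/G$.
\begin{verbatim}
G = 6.674e-11      # m^3 kg^-1 s^-2
MPC = 3.086e22     # metres per Mpc
MSUN = 1.989e30    # kg

sigma = 1.0e6      # line-of-sight velocity dispersion ~1000 km/s (assumed)
radius = 1.0 * MPC # cluster radius ~1 Mpc (assumed)

m_virial = 3.0 * sigma**2 * radius / G
print("Virial mass ~ %.1e M_sun" % (m_virial / MSUN))  # ~7e14 M_sun
\end{verbatim}
\noindent A mass of order $10^{15}~M_{\odot}$, far above the luminous mass of a cluster, is exactly the kind of discrepancy Zwicky reported.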
\noindent Next, roughly 40 years after the discoveries of Oort and Zwicky, beginning in the 1970s, Vera Rubin, Albert Bosma and others studied the orbital velocities of stars in spiral galaxies \cite{Rubin:1970zza, Freeman:1970mx, Einasto:1974njh, Ostriker:1974lna, roberts:1975bhh, roberts:1976bhf, bosma:1978ghf, Rubin:1978kmz, Rubin:1980zd}. Rubin and her collaborators performed an extended study of the rotation curves of around 60 individual galaxies \cite{rubin:1983hgu}. They made detailed measurements of the Doppler shift for their targets and determined their orbital velocities. Their studies showed an extreme deviation from the theoretical prediction based on Newtonian gravity and baryonic matter alone \cite{rubin:1983hgu}. They found that spiral galaxies have flat rotation curves extending out to radii of tens of kpc, and that the orbital velocities did not decrease as expected. From the flat rotation curves, Rubin estimated that the galaxies contain almost 10 times more matter than the visible component. This remarkable finding confirmed the earlier claims by Zwicky. Rubin also predicted that an unobserved huge spherical halo of DM might surround the inner luminous galaxy.\\
\noindent According to Newton's law of gravitation (Newton 1687), the orbital velocity should fall with increasing distance from the center of the galaxy as,
\begin{equation}
v(r)= \sqrt{G \frac{m(r)}{r}} ,
\end{equation}
\noindent where $v(r)$ is the rotation velocity as a function of radius and $m(r)$ is the mass confined within radius $r$. \\
\noindent From Eq. 1.2, we should expect to observe the fall of orbital velocity as: $v(r)~\propto~1/\sqrt{r}$. But interestingly, the
galactic rotation curves, as obtained by Rubin and her collaborators, did not
follow the expected nature. In their publication, Rubin, Kent Ford and Norbert
Thonnard \cite{Rubin:1980zd} reported their observational results for 21 spiral
galaxies. Their study showed that with increasing the distance from the center
of the galaxies the rotational velocity remained constant (or merely increased
for some galaxies). The rotational velocity of any galaxy can only remain constant if
the total mass of the system is increasing with radius from the center. The artistic view of their study is shown in
Fig.~1.2 (a). From this figure, it is
evident that the radial velocity of the galactic system is much larger than
what
would be expected if the gravitational potential of the galaxy came from only
the luminous matter i.e. from the stars and gas.\\
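\noindent The contrast is easy to reproduce numerically. The short Python sketch below compares the Keplerian expectation of Eq.~1.2 for a centrally concentrated luminous mass with the curve produced when the enclosed mass grows linearly with radius, as for an isothermal dark halo; the masses and radii are illustrative assumptions, not fits to any galaxy.
\begin{verbatim}
import numpy as np

G = 4.301e-6                  # gravitational constant in kpc (km/s)^2 / M_sun

r = np.linspace(2.0, 30.0, 8)     # radii in kpc
m_lum = 5e10                      # luminous mass in M_sun, treated as point-like
halo_slope = 5e9                  # dM_halo/dr in M_sun/kpc (assumed)

v_kepler = np.sqrt(G * m_lum / r)                      # falls as 1/sqrt(r)
v_total = np.sqrt(G * (m_lum + halo_slope * r) / r)    # flattens at large r

for ri, vk, vt in zip(r, v_kepler, v_total):
    print("r = %5.1f kpc: Keplerian %6.1f km/s, with halo %6.1f km/s"
          % (ri, vk, vt))
\end{verbatim}
\noindent With the halo term, the velocity tends towards a constant $\approx 147$ km/s at large radii instead of falling as $1/\sqrt{r}$, which is the behaviour Rubin observed.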
\begin{figure}[h!]
\subfigure[]
{ \includegraphics[width=0.4\linewidth, height=2.6in]{figures/M33_rotation_curve_HI.jpg}}
\hfill
\subfigure[]
{ \includegraphics[width=0.4\linewidth, height=2.8in]{figures/longprop4.png}}
\caption{(a) An artist's view of the observed and expected rotation curves of the M33 galaxy. (b) The rotation curve of the spiral galaxy NGC6503.}
\end{figure}
\noindent After that, many scientists carried out similar studies, and all of them confirmed the same behaviour of the galactic rotation curves. The work by Begeman, Broeils and Sanders (1991) \cite{Begeman:1991iy} reported the same: for their study, they chose the spiral galaxy NGC6503 (Fig.~1.2 (b)) and showed the contributions to the rotational velocity from the luminous disk, the gas and the dark halo. Their analysis also showed that the DM halo extends beyond the stellar bulge of the galaxy.\\
\noindent In the 1970s, scientists tried a new way to map the distribution of DM: gravitational lensing. Einstein's theory of relativity postulates that a strong gravitational field can bend the path of light rays, i.e., a massive object bends spacetime and affects the motion of nearby objects. This produces a lensing effect, in which the surrounding objects follow the geodesics of the curved space; the effect is called gravitational lensing \cite{Lynds:1989gry}. Observing gravitational lensing requires a very massive object (say, a cluster of galaxies) and a distant bright light source behind it. If the distant object is located directly behind the massive body, the massive object acts as a gravitational lens and creates multiple images of the distant object. This effect creates an Einstein-ring structure (see the blue ring in Fig.~1.3)
\footnote{\tiny{NASA images from Large Synoptic Survey Telescope (LSST);
http://www.lsst.org/lsst/public}}, in which the massive object sits at the center and the images of the distant object form the ring (Fig.~1.3). In 1979, D. Walsh et al. \cite{Walsh:1979nx} were the first to observe this form of gravitational lensing. Detailed study of the Einstein-ring structure allows astronomers to estimate the total mass of a massive body, such as a galaxy or a cluster of galaxies. Such observational studies show that only $10\%$ of the total mass of the clusters is in the form of individual galaxies; the rest is DM \cite{Walsh:1979nx}.
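\noindent The mass estimate rests on the Einstein-radius relation, $\theta_{E} = \sqrt{4GM D_{ls}/(c^{2} D_{l} D_{s})}$, where $D_{l}$, $D_{s}$ and $D_{ls}$ are the observer-lens, observer-source and lens-source distances. The Python sketch below evaluates it for a cluster-scale lens; the mass and distances are illustrative placeholders, and the flat-geometry shortcut $D_{ls} = D_{s} - D_{l}$ is an additional simplifying assumption.
\begin{verbatim}
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MPC = 3.086e22       # metres per Mpc
MSUN = 1.989e30      # kg

m_lens = 1e14 * MSUN                # cluster-scale lens mass (assumed)
d_l, d_s = 1000 * MPC, 2000 * MPC   # lens and source distances (assumed)
d_ls = d_s - d_l                    # flat-geometry shortcut (assumption)

theta_e = math.sqrt(4 * G * m_lens * d_ls / (C**2 * d_l * d_s))
print("theta_E ~ %.1f arcsec" % (math.degrees(theta_e) * 3600))  # ~20 arcsec
\end{verbatim}
\noindent Inverting the same relation, a measured ring radius of tens of arcseconds directly yields a cluster mass of order $10^{14}$--$10^{15}~M_{\odot}$, which is how the lensing masses quoted above are obtained.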
\begin{figure}
\centering
{ \includegraphics[width=0.5\linewidth]{figures/lensing_Abell_370.jpg}}
\caption{Gravitational lensing of Abell 370 observed by Hubble Space Telescope (HST).}
\end{figure}
\noindent Another very strong piece of evidence for DM is the Bullet cluster (1E 0657-56). This system consists of two colliding clusters of galaxies. While the galaxies crossed paths, the stars within the galaxies and the other visible light passed by each other without being much affected by the collision. But the hot gas clouds, which contain most of the baryonic matter of the two colliding clusters, interact electromagnetically, and due to the friction of the gas molecules, the gas of both clusters slowed down much faster than the stars. When the gas clouds slowed down, the visible parts of the galaxies came into much clearer view, giving scientists the opportunity to examine the total mass of the Bullet cluster. With the data obtained from X-ray telescopes and gravitational lensing observations, scientists found that the mass and the gas do not follow the same distribution \cite{Clowe:2006eq}. Then, by measuring the gravitational lensing effect of the Bullet cluster, they determined that the cluster bent the path of light more than could be expected from the luminous mass. This proved that more mass must be present in the cluster than the visible matter. The composite image of the Bullet cluster (galaxy cluster 1E 0657-56) is shown in Fig.~1.4. The background of this image shows the visible-light view obtained from the Hubble Space and Magellan Telescopes, the pink part denotes the X-ray emission of the colliding clusters recorded by the Chandra Telescope, and the blue part shows the mass distribution of the Bullet cluster estimated from the gravitational lensing effects
\footnote{\tiny{NASA: A matter of fact, August, 2006;
http://www.nasa.gov/vision/universe/starsgalaxies/dark{\_}matter{\_}proven.html}
, \tiny{X-ray: NASA/CXC/CfA/M.Markevitch et al.}, \tiny{Optical: NASA/STScl, Magellan/U.Arizona/D.Clowe et al.; Lensing
Map:
NASA/STScl; EDO WFI; Magellan/U.Arizona/D.Clowe et al.}}.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/Bullet_cluster_all.jpg}
\caption{X-ray image (pink) of Bullet cluster superimposed over a visible light
image (blue).}
\end{figure}
\section{Dark Matter Candidates}
In this section, we discuss the nature of DM and its possible candidates. To reveal the nature of the DM particles, physicists first focused on known astrophysical bodies made of ordinary, baryonic matter. Later, they extended the standard model to explain the non-luminous nature of DM.\\
\noindent From the several pieces of observational evidence, we can summarize the following general properties of DM candidates.
\begin{itemize}
\item They do not emit or absorb light, indicating the absence of electromagnetic interactions; hence, they do not carry electric charge.
\item The majority of them neither participate in the strong interaction nor carry colour charge. (A very small fraction of the DM is assumed to be baryonic, and only that fraction can take part in the strong interaction.)
\item They interact via gravity; the gravitational effect of DM is very important for forming the large-scale structure of the universe.
\end{itemize}
\subsection{Possible DM Candidates:}
\noindent The observational evidence indeed gave us enough hints of the
existence of the DM, but the true nature of the DM remains unknown. Below we
will discuss the possible candidates for DM.\\
\subsubsection{The Standard Model and the Neutrino and Supersymmetry:}
\noindent The standard model (SM) consists of the following particles: six leptons (electron, muon, tau and their corresponding neutrinos), six quarks (up, down, strange, charm, bottom and top) and five force carriers (photon, gluon, Z, $W^{\pm}$ and the Higgs scalar). Each of the above-mentioned leptons and quarks has a respective antiparticle, generally denoted with a bar or an opposite charge sign (for example, the up antiquark's symbol is $\bar{u}$). The Higgs boson, with a mass of $\sim$ 125 GeV, was discovered in 2012 by the ATLAS \cite{ATLAS:2012yve} and CMS \cite{CMS:2012qbp} experiments performed at the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN). \\
\noindent In spite of the success of the SM in explaining the behaviour of the elementary particles, it does not contain any particle that can act as the DM candidate. One of the most stable, neutral and weakly interacting particles of the SM is the neutrino. But the study by Spergel et al. \cite{Spergel:2003cb} completely ruled out the possibility of neutrinos being the entire solution to the missing mass of the Universe. From WMAP, they showed the neutrino mass to be $m_{\nu} < 0.23$~eV, which in turn limits the cosmological density to $\Omega_{\nu}h^{2} < 0.0072$ \cite{Jarosik:2010iu}. Hence, neutrinos account for only a very small fraction of the DM and cannot be its prime constituent.\\
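\noindent The numbers above follow from the standard relic-neutrino relation $\Omega_{\nu}h^{2} = \sum m_{\nu}/93.14~\rm{eV}$, which the short Python check below illustrates (assuming three degenerate species at the quoted mass bound):
\begin{verbatim}
m_nu_bound_ev = 0.23                      # per-species bound quoted above
omega_nu_h2 = 3 * m_nu_bound_ev / 93.14   # Omega_nu h^2 = sum(m_nu)/93.14 eV
print("Omega_nu h^2 < %.4f" % omega_nu_h2)  # ~0.0074, vs Omega_DM h^2 ~ 0.12
\end{verbatim}
\noindent Even at the bound, relic neutrinos supply only a few per cent of the required DM density.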
\noindent Hence, several possible extensions of the SM have been proposed, supersymmetry (SUSY) being one of them \cite{Jungman:1995df}. SUSY assumes an additional symmetry between the fermions and the bosons, i.e., each particle in the SM has a superpartner: fermions have bosons as their superpartners and vice versa. The most encouraging feature of SUSY is that it can propose valid DM candidates. Several particles in SUSY are possible DM candidates, like the neutralino, the sneutrino \cite{Falk:1994es, Hall:1997ah} and the gravitino \cite{Chun:1993vz, Borgani:1996ag}. All three candidates show a WIMP-like (weakly interacting massive particle) nature, i.e., they are weakly interacting and electrically neutral, but sneutrinos \cite{Falk:1994es, Hall:1997ah} would annihilate very rapidly in the early universe, leaving relic densities too low to explain any cosmological phenomenon, whereas the gravitino \cite{Chun:1993vz, Borgani:1996ag} would act as hot DM. In most SUSY models, the lightest neutralino is considered the most promising candidate for DM.\\
\noindent Several exotic particles are also considered as DM candidates - massive compact halo objects
(MACHOs), black holes, WIMPs, axions, etc. Some
theories also suggest that the DM can be both baryonic and non-baryonic and in that case,
MACHOs are considered as the baryonic type. The dominant part of the DM is mainly
composed of non-baryonic candidates, e.g. neutrinos, WIMPs, axions, etc. Based on the
physical properties, there are different types of DM. We will describe them below.\\
\noindent Kinematically, the DM can be divided into three categories based on its velocity at the time of its decoupling from the rest of the universe \cite{silk2000big}. This is important because it has a direct influence on galaxy formation and on the large- and small-scale structure of the universe. \\
\begin{itemize}
\item \textbf{Hot Dark Matter (HDM):} The HDM is made of abundant light particles. The best candidate for HDM is the ordinary light neutrino. The mass of an HDM particle is of the order of an eV or less, $m_{HDM}$ $<$ 1 eV. \\
\item \textbf{Cold Dark Matter (CDM):} The CDM is at the opposite end of the mass-velocity spectrum from HDM: it is non-relativistic at the time of decoupling, and its mass can be of the order of a GeV or larger. There are many proposed candidates for CDM, including weakly interacting massive particles like neutralinos, WIMPZILLAs, solitons, etc. \\
\item \textbf{Warm Dark Matter (WDM):} The WDM is intermediate between the HDM and CDM, consisting of particles of $m_{WDM}$ $>$ 1 keV which may interact even more weakly than a neutrino. It is relativistic at the time of decoupling, but non-relativistic at the radiation-to-matter dominance transition. There are a few possible candidates for WDM, including the sterile neutrino, light gravitinos, the photino, etc.\\
\end{itemize}
\noindent The DM can also be classified according to its production mechanism.
\begin{itemize}
\item \textbf{Thermal relics}: The thermal relic particles are assumed to have been in thermal equilibrium in the early universe, and the mass of a thermal relic is bounded from above by $\sim$340 TeV. Most of the favoured DM candidates are from this category.\\
\item \textbf{Non-thermal relics}: These particles are produced via non-thermal mechanisms, and it is believed that they were never in equilibrium with the thermal bath of the universe. There are several favoured DM candidates which are assumed to be non-thermal relics, such as axions emitted by cosmic strings and superheavy WIMPZILLAs (with masses between $10^{12}$ and $10^{16}$ GeV).\\
\end{itemize}
\subsection{Weakly Interacting Massive Particles}
\noindent One of the leading candidates for DM is the class of weakly interacting massive particles (WIMPs) \cite{Steigman:1984ac, Bertone:2004pz}. The most favoured DM candidates, like the neutralino from supersymmetry, the lightest Kaluza-Klein particle from superstring theory and theories of extra dimensions, and some other candidates from beyond-the-standard-model theories, are assumed to be very massive and to interact only gravitationally and via the weak interaction. These are collectively referred to as WIMPs. They are non-baryonic and well motivated by independent considerations of particle physics \cite{Steigman:1984ac}. Systematic theoretical investigations of their properties, along with experimental searches, have to be carried out. \\
\noindent In the early universe, after the Big Bang, the particles were in chemical and thermal equilibrium. By chemical equilibrium, we mean that every reaction among the particles was reversible (e.g., the pair-production of WIMPs from SM particles and WIMP annihilation were in equilibrium), so the universe as a whole was not changed by any reaction. This equilibrium was maintained until the temperature of the universe became lower than the particle mass, at which point the pair-production of WIMPs stopped. When this equilibrium was broken, the abundance of DM candidates decayed due to annihilation, and this process continued until the annihilation rate fell below the expansion rate of the universe. This epoch is referred to as the ``freeze-out''.\\
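\noindent The freeze-out point can be estimated with the textbook fixed-point iteration $x_{f} = \ln\left[0.038\,(g/\sqrt{g_{*}})\,M_{\rm{Pl}}\,m_{\chi}<\sigma v>\right] - \frac{1}{2}\ln x_{f}$, where $x_{f} = m_{\chi}/T_{f}$. The Python sketch below is a hedged illustration with textbook-typical assumed inputs (a 100 GeV WIMP, $g_{*} \simeq 90$, and the thermal cross-section expressed in natural units), not a result from this thesis.
\begin{verbatim}
import math

M_PL = 1.22e19      # Planck mass in GeV
g_chi = 2.0         # WIMP internal degrees of freedom (assumed)
g_star = 90.0       # relativistic degrees of freedom at freeze-out (assumed)
m_chi = 100.0       # WIMP mass in GeV (assumed)
sigma_v = 2.6e-9    # ~3e-26 cm^3/s converted to GeV^-2

a = 0.038 * (g_chi / math.sqrt(g_star)) * M_PL * m_chi * sigma_v
x_f = 20.0                          # initial guess
for _ in range(20):                 # fixed-point iteration converges fast
    x_f = math.log(a) - 0.5 * math.log(x_f)

print("x_f = m/T_f ~ %.1f" % x_f)   # ~22: freeze-out near T ~ m/20
\end{verbatim}
\noindent The familiar statement that WIMPs freeze out at $T_{f} \sim m_{\chi}/20$ follows directly from this estimate.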
\noindent Another class of particles comprises the superweakly interacting massive particles (superWIMPs), which include sterile (right-handed) neutrinos, the gravitino, etc. These have annihilation cross-sections much smaller than that of the weak interaction.
\section{Dark Matter Annihilation}
\noindent In this section, we discuss ways to detect DM candidates of the WIMP type. One very popular way is to search for the end products of the WIMP annihilation/decay channels. \\
\noindent We denote the DM particle and its antiparticle as $\chi$ and $\bar{\chi}$, respectively; if the DM is a Majorana particle, $\chi$ and $\bar{\chi}$ are identical. Several pieces of observational evidence, as well as theoretical models, propose that the mass of WIMPs lies in the range of GeV to TeV. If $\chi$ is assumed to be a thermal relic DM candidate, then $\chi$ and $\bar{\chi}$ should participate in the evolution of the universe like other SM particles. \\
\noindent In the early universe, through annihilation and pair-production processes, $\chi$ and $\bar{\chi}$ were in equilibrium with the ordinary SM particles (i.e., in equilibrium with fermions ($f$), quarks ($Q$), gauge bosons ($W^{\pm}$, Z), etc.). The annihilation reaction can be written as \cite{Salati:2014rua}\\
$\chi + \chi \rightarrow Q + \bar{Q} \rightarrow f + \bar{f},\; W^{+} + W^{-},\; Z + Z, \ldots$,
where $Q$ and $\bar{Q}$ denote a quark and its antiparticle, respectively. \\
\noindent After the big bang, all of these particles were in equilibrium and
were at the same temperature. The number density of $\chi$ in equilibrium at a
given temperature can be described as:
\begin{equation}
n_{\chi}^{eq} = \frac{g}{(2\pi)^3} \int f(p) \, d^{3}p
\end{equation}
\noindent where $g$ is the number of internal degrees of freedom of $\chi$ and $f(p)$ is a function of the three-momentum $p$ of $\chi$. Depending on the spin of the WIMP, $f(p)$ follows either the Fermi-Dirac or the Bose-Einstein distribution. At very high temperatures, i.e., for $T \gg m_{\chi}$, $n_{\chi}^{eq} \propto T^{3}$, while at lower temperatures, i.e., for $T \ll m_{\chi}$, $n_{\chi}^{eq} \propto \exp(-m_{\chi}/T)$. At $T \ll m_{\chi}$, the production of $\chi\bar{\chi}$ pairs from SM particle pairs is suppressed while the annihilation rate remains the same; hence the number density of $\chi$ is exponentially reduced. As the universe expands, its temperature drops sufficiently low that the system falls out of equilibrium. \\
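\noindent The Boltzmann suppression in the non-relativistic limit, $n_{\chi}^{eq} = g\,(m_{\chi}T/2\pi)^{3/2}\exp(-m_{\chi}/T)$, is easy to tabulate; the sketch below does so in natural units for an assumed 100 GeV WIMP, purely as an illustration of how steeply the equilibrium density collapses once $T$ drops below $m_{\chi}$.
\begin{verbatim}
import math

g = 2.0       # internal degrees of freedom (assumed)
m = 100.0     # WIMP mass in GeV (assumed)

def n_eq_nonrel(T):
    """Non-relativistic equilibrium number density in GeV^3."""
    return g * (m * T / (2.0 * math.pi)) ** 1.5 * math.exp(-m / T)

for x in (3.0, 10.0, 20.0, 30.0):   # x = m/T
    print("x = m/T = %4.1f: n_eq = %.3e GeV^3" % (x, n_eq_nonrel(m / x)))
\end{verbatim}
\noindent Between $x = 3$ and $x = 30$, the density falls by more than ten orders of magnitude, which is why annihilations effectively shut off soon after freeze-out.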
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/dm_density.png}
\caption{Comoving number density evolution as a function of the ratio
$m_{\chi}/T$ in the context of
the thermal freeze-out.}
\end{figure}
\noindent Due to the significant drop in the number density of $\chi$, it becomes very hard for $\chi$ and $\bar{\chi}$ to find each other to annihilate, or to be scattered by ordinary SM particles. Eventually, they are no longer in thermal equilibrium and $\chi$ decouples from the rest of the universe. Then, except on very rare occasions, $\chi$ neither annihilates nor scatters with ordinary particles, but continues to expand freely with the Hubble flow; its number density thereafter scales as $T^{3}$. In Fig.~1.5, we show how the comoving number density of $\chi$ varies with $m_{\chi}/T$ around the epoch of thermal freeze-out.\\
\noindent The overall geometry of the universe \cite{Gaitskell:2004gd} is determined by its density parameter, $\Omega = \rho/\rho_{c}$, where $\rho$ is the observed density and $\rho_{c}$ is the critical density of our universe. The critical density is the average matter density needed for our universe to halt its expansion, and it can be expressed as:
\begin{equation}
\rho_{c} = \frac{3H_{0}^{2}}{8\pi G}
\approx 1.88 \times 10^{-26} \, h^{2} \, \rm{kg} \, \rm{m}^{-3}
\end{equation}
\noindent where $H_{0}$ is the Hubble constant and $h$ is its dimensionless form in units of 100 km/s/Mpc \cite{Gaitskell:2004gd}. From the density parameter of the universe, we can infer the contributions of baryonic matter, DM and dark energy, that is, $\Omega = \Omega_{B} + \Omega_{DM} + \Omega_{\Lambda}$, where $\Omega_{B}$, $\Omega_{DM}$ and $\Omega_{\Lambda}$ are the relative density parameters for normal baryonic matter, DM and dark energy, respectively. The recent observations of the Planck collaboration obtained $\Omega_{B}$ = 0.05, $\Omega_{DM}$ = 0.265 and $\Omega_{\Lambda}$ = 0.685 \cite{Ade:2015xua}. The DM density ($\Omega_{DM}$) depends on the annihilation cross-section ($\sigma$) weighted by the average velocity ($v$) of the particle, i.e., on $<\sigma v>$. In order to match the abundance measured by the Planck collaboration, the DM relic density should equal $\Omega_{DM} h^{2} = 0.1197 \pm 0.0022$ \cite{Ade:2015xua}. The expression for $\Omega_{DM} h^{2}$ is:
\begin{equation}
\Omega_{DM} h^{2} = 0.11 \, \frac{3 \times 10^{-26}~\rm{cm^{3}~s^{-1}}}{<\sigma v>_{0}}
\end{equation}
\noindent From Eq.~(1.5), it is evident that the DM should have an annihilation cross-section $<\sigma v>_{0} \approx 3 \times 10^{-26}~\rm{cm^{3}~s^{-1}}$ at thermal freeze-out \cite{Abeysekara:2014ffg}.\\
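\noindent Equation~(1.5) can be used directly to see why $3 \times 10^{-26}~\rm{cm^{3}~s^{-1}}$ is called the ``thermal'' cross-section; the short Python sketch below simply evaluates it for a few trial values of $<\sigma v>_{0}$:
\begin{verbatim}
def omega_dm_h2(sigma_v_cm3_s):
    """Eq. (1.5): Omega_DM h^2 = 0.11 * (3e-26 cm^3/s) / <sigma v>_0."""
    return 0.11 * 3e-26 / sigma_v_cm3_s

for sv in (1e-26, 3e-26, 1e-25):
    print("<sigma v> = %.0e cm^3/s -> Omega_DM h^2 ~ %.3f"
          % (sv, omega_dm_h2(sv)))
\end{verbatim}
\noindent Only the middle value reproduces $\Omega_{DM}h^{2} \approx 0.11$, close to the measured 0.12; a larger cross-section underproduces DM and a smaller one overproduces it.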
\noindent As already discussed above, WIMPs are thought first to self-annihilate into a quark-antiquark pair, which later decays into several possible SM particles, as shown in Fig.~1.6. From this image, we can see that, as end products of WIMP annihilation, one can obtain $\gamma$-rays, lepton pairs such as muon-antimuon ($\mu^{-}\mu^{+}$) or electron-positron ($e^{-}e^{+}$) pairs, and also boson pairs like $ZZ$ or $W^{+}W^{-}$. Thus, even though the WIMPs themselves are invisible to us, we can try to probe the SM particles originating from WIMP annihilation. We can start our search for a DM signature by looking at the regions of the universe that are thought to be rich in DM; one of the most popular strategies is to scan the universe for the end products that might come from DM annihilation/decay. \\
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/lindhome4.png}
\caption{WIMP annihilation chain and the end products.}
\end{figure}
\noindent For my thesis, we have focused on five theoretically motivated DM annihilation
channels (in the later sections we would discuss this in detail). Those channels
are: $\chi\chi \rightarrow \tau^{+}\tau^{-}$;
$\chi\chi \rightarrow \mu^{+}\mu^{-}$;
$\chi\chi \rightarrow W^{+}W^{-}$;
$\chi\chi \rightarrow b\bar{b}$ and
$\chi\chi \rightarrow 80\%~b\bar{b} + 20\%~\tau^{+}\tau^{-}$.\\
\noindent The lifetime of the tau lepton ($\tau^{-}$) is around $2.9 \times 10^{-13}$~s and its mass is $1776.82~\rm{MeV}/c^{2}$. The $\tau^{-}$ can decay into combinations of neutral pions, tau neutrinos and charged pions ($\pi^{\pm}$). There are multiple possible decay channels for the $\tau^{-}$; five channels account for about $90\%$ of the decays, and the remaining $\sim 10\%$ is shared among about twenty-five different decay modes \cite{Nakamura:2010zzi}. The five dominant $\tau^{-}$ decay channels are: $\tau^{-} \rightarrow e^{-} + \bar{\nu_{e}} + \nu_{\tau}$, $\tau^{-} \rightarrow \mu^{-} + \bar{\nu_{\mu}} + \nu_{\tau}$, $\tau^{-} \rightarrow \pi^{-} + \pi^{0} + \nu_{\tau}$, $\tau^{-} \rightarrow \pi^{+} + 2\pi^{-} + \nu_{\tau}$, and $\tau^{-} \rightarrow \pi^{-} + 2\pi^{0} + \nu_{\tau}$ \cite{Nakamura:2010zzi}. The dominant decay modes of the neutral pion are $\pi^{0} \rightarrow 2\gamma$ (98.82$\%$) and $\pi^{0} \rightarrow \gamma + e^{-} + e^{+}$ (1.17$\%$); thus tau decays generate radiation. \\
\noindent The lifetime of the muon ($\mu^{-}$) is around $2.2 \times 10^{-6}$~s; its mass is 105~$\rm{MeV}/c^{2}$, and it decays via the weak interaction, $\mu^{-} \rightarrow e^{-} + \bar{\nu_{e}} + \nu_{\mu}$, with final-state radiation. \\
\noindent The mediator of the charged weak interaction, the W boson, has a mass of 80.4~$\rm{GeV}/c^{2}$ and decays into a fermion-antifermion pair. \\
\noindent The two heaviest quarks, the top ($\sim 173~\rm{GeV}/c^{2}$) and bottom ($\sim 4.18~\rm{GeV}/c^{2}$) quarks, decay via the weak interaction and produce gamma rays in the final state.\\
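\noindent Using only the $\pi^{0}$ branching fractions quoted above, a one-line Python check gives the mean photon yield per neutral pion, which is why the hadronic $\tau$ modes dominate the $\gamma$-ray output:
\begin{verbatim}
br_2gamma, br_dalitz = 0.9882, 0.0117   # pi0 branching fractions quoted above
photons_per_pi0 = 2 * br_2gamma + 1 * br_dalitz
print("mean photons per pi0 decay ~ %.3f" % photons_per_pi0)   # ~1.99
\end{verbatim}
\noindent Essentially every neutral pion yields two photons, so each hadronic $\tau^{-}$ decay containing $\pi^{0}$'s contributes promptly to the annihilation $\gamma$-ray spectrum.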
\noindent The annihilation channels have been chosen for the following reasons. Because of phase space, we expect the DM particles to annihilate dominantly into the heaviest accessible channels \cite{Abeysekara:2014ffg}; hence, we consider the $\tau^{+}\tau^{-}$ annihilation channel. Several ongoing experiments, such as Fermi-LAT and MAGIC, have studied the $b\bar{b}$ annihilation channel in searches for an indirect DM signal; thus, we have chosen the $b\bar{b}$ channel to allow a direct comparison of results. We have chosen the bosonic $W^{+}W^{-}$ channel because it is widely considered in several experiments. Finally, we have included the $\mu^{+}\mu^{-}$ channel in our analysis because this leptonic channel may explain the observed excess of local positrons \cite{Abeysekara:2014ffg}.\\
\section{Dark Matter Detection}
\begin{quote}{Harley White}
\noindent Physicists hunt for DM, to move it\\
With particle accelerators, to prove it\\
Exists as suspected, from data collected\\
With outcome expected, eureka! projected...\\
\end{quote}
\noindent If the DM is dominated by WIMPs, then there should be a cosmological abundance of WIMPs. Different approaches may be used for the detection of DM particles, both direct and indirect. A schematic diagram of the production and decay of DM is shown in Fig.~1.7.\\
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/longprop6.png}
\caption{Schematic diagram of DM detection through direct, indirect
and its production at Colliders.}
\end{figure}
\noindent It is possible to search for DM particles both from cosmic sources and at colliding-beam experiments like the LHC. It is assumed that WIMPs have a weak-scale scattering cross-section with SM particles, and thus it might be possible to directly detect the nuclear recoil energy from WIMP-nucleon interactions in low-background experiments \cite{Goodman:1984dc, Ahmed:2010hw, Aprile:2012nq}. We can also try to generate WIMPs in accelerators through the collision of SM particles; the distinctive signatures of such events (e.g., the missing transverse energy) are expected to be recorded by the collider experiments \cite{Baltz:2006fm, Aad:2008zzm, Chatrchyan:2008aa}. \\
\noindent Moreover, WIMPs are considered to be thermal relics, and it is expected that they possess a weak-scale self-annihilation cross-section \cite{Jungman:1995df, Gunn:1978gr, Stecker:1978du, Bergstrom:1988fp, Bergstrom:1997fj}. Thus there is a fair prospect of indirectly detecting the WIMP signature through the SM particles (e.g., photons, neutrinos, positrons) originating from annihilation. Different experiments are designed to probe different characteristics of the WIMP, and each has its benefits, difficulties and uncertainties. But, in order to have complete knowledge of WIMPs, for example their eventual detection, identification and characterization, we need to combine the information from all three experimental techniques.\\
\subsection{Direct Detection}
\noindent The basic assumption for the direct detection of DM candidates is that our Universe is filled with an abundance of WIMPs, many of which are continuously passing through our terrestrial surface. Thus, our terrestrial laboratories should register the interaction of WIMPs with matter by observing the recoil energy of nuclei through ionization, scintillation or vibrations (phonons). This method needs a very clean detector material, so that a possible real signal can be separated from the background, and it is also very important to minimize particle backgrounds as much as possible. For a direct-detection experiment, one of the most common setups is an underground site, which effectively reduces the cosmic-ray background. The rate of WIMP detection depends on various prime factors, such as the mass of the WIMP, the local halo density of DM, the velocity distribution in the Milky Way and the cross-section on the target nuclei. The detectors generally consist of a very pure crystal, as in, e.g., CDMS\footnote{\tiny{http://cdms.berkeley.edu/}}, DAMA\footnote{\tiny{http://people.roma2.infn.it/ dama/web/home.html}}, CRESST\footnote{\tiny{http://www.cresst.de/}}, or of a liquid noble gas such as xenon (Xenon100\footnote{\tiny{http://xenon.astro.columbia.edu/xenon100.html}}). From the theoretical prescription, the cross-section of WIMP interactions is predicted to be very small; hence very large detectors are needed (e.g., the Xenon100 detector contains 100 kg of liquid xenon) to detect the interactions.\\
\noindent There are several ongoing experiments, such as DAMA/LIBRA, designed to detect DM using solid scintillators. For detecting particle interactions, the detectors of DAMA/LIBRA use thallium-activated sodium-iodide crystals housed in a low-radioactivity container with several photomultiplier tubes (PMTs) \cite{Aad:2012tfa}. These detectors report an annual modulation of the signal at a confidence level of $\approx$ 8.9$\sigma$ \cite{Bernabei:2010mq}.\\
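\noindent The annual modulation arises because the Earth's orbital velocity alternately adds to and subtracts from the Sun's motion through the DM halo, so the expected rate has the schematic form $S(t) = S_{0} + S_{m}\cos\left[2\pi(t - t_{0})/T\right]$, with $T = 1$~yr and phase $t_{0}$ near early June. The Python sketch below evaluates this form with purely illustrative amplitudes (not DAMA's measured values):
\begin{verbatim}
import math

S0 = 1.00      # unmodulated rate, arbitrary units (assumed)
Sm = 0.02      # modulation amplitude (assumed)
T = 365.25     # period in days
t0 = 152.5     # phase, ~June 2 (day of year)

def rate(t_days):
    return S0 + Sm * math.cos(2.0 * math.pi * (t_days - t0) / T)

for t in (0, 91, 152, 244, 335):    # sample days through the year
    print("day %3d: S(t) = %.4f" % (t, rate(t)))
\end{verbatim}
\noindent The few-per-cent peak near day 152 and the trough half a year later are the fingerprint such experiments search for.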
\subsection{Indirect Detection}
The indirect detection method is one of the popular ways to identify invisible DM signals. As already discussed in the earlier section, WIMPs can self-annihilate (or decay) into SM particles. With this detection method, we probe the SM particles originating from WIMP annihilation and measure the particle spectra they generate; these spectra provide valuable information about the nature of the DM particles. There are several dedicated (ongoing and planned) indirect-detection experiments designed to solve the mystery of the DM. The detection experiments are classified according to the particles they detect.
\begin{itemize}
\item \textbf{Photons:} \\ Gamma rays, including both direct line photons and diffuse photons, provide one of the most popular channels for indirect DM detection. WIMPs annihilate to quark-antiquark pairs, which later produce a jet of particles that generates the gamma-ray spectrum. At high energy, the neutral pions decay to pairs of monoenergetic photons that can create a prompt gamma-ray line. When WIMPs annihilate directly to gamma rays, i.e., $\chi\chi~\rightarrow~\gamma\gamma$, the energy of each photon equals the mass of the WIMP. Since the mass of the WIMP is of the order of GeV or above, this creates very high energy gamma rays, and the detection of such a gamma-ray line would give an unambiguous indication of DM annihilation; in indirect detection, it is referred to as the smoking gun of the DM search \cite{Bergstrom:2012fi, Weniger:2012tx}. Another source of gamma rays is the internal bremsstrahlung of charged particles produced in the annihilation process. Simulations and observational studies suggest that the Galactic centre, dwarf spheroidal galaxies (dSphs), low surface brightness galaxies (LSBs), clusters of galaxies, etc., are ideal targets for indirect searches. The advantage of tracking $\gamma$-rays is that they are electrically neutral and do not interact with magnetic fields; hence, it is possible to trace their origin and energy. In a later chapter, we will discuss this method in detail. \\
\noindent For indirect DM detection, there are many dedicated space-based and ground-based gamma-ray observatories. Examples of space-based observatories are Fermi-LAT (the Fermi Large Area Telescope), AGILE (Astro-rivelatore Gamma a Immagini Leggero \cite{Pittori:2003fgd}) and the planned Gamma-400 \cite{Galper:2012fp}. Examples of ground-based Air Cherenkov Telescopes (ACTs) are the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov \cite{Aleksic:2011bx}) telescope in La Palma, H.E.S.S. (the High Energy Stereoscopic System \cite{Aharonian:2006pe}) in Namibia, and the next-generation telescope CTA (the Cherenkov Telescope Array \cite{Actis:2010bc}). The space-based telescopes directly observe the gamma rays, which convert to electron-positron pairs within their detector, while the ACTs use the atmosphere as part of the detector and record the Cherenkov light from the air showers produced when gamma rays interact with the atmosphere. For our thesis work, we have considered a space-based gamma-ray telescope, but each class has its advantages and disadvantages: ACTs can generally observe much higher energy photons than the space-based telescopes and have a comparatively large collecting area, whereas the space-based telescopes can cover the whole sky and are more sensitive than ACTs in their energy range, while ACTs need to account for atmospheric distortions and cannot observe the whole sky at once.
\item \textbf{Charged particles, positrons and antiprotons:} \\ The possible charged particles originating from WIMP self-annihilation are positrons ($e^{+}$), electrons ($e^{-}$), antiprotons ($\bar{p}$), anti-deuterons ($\bar{d}$), etc. (see Fig.~1.6). The flux of each charged particle and its antiparticle is estimated from the WIMP mass and the annihilation channels. A few experiments have prominently reported an excess of positrons, such as PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) and AMS-02 (the Alpha Magnetic Spectrometer). \\
\noindent PAMELA \cite{Adriani:2008zq, Adriani:2008zr} reported an excess in the positron fraction, and this can be connected to a hint of DM (see, e.g., \cite{Bergstrom:2009fa}). But other theories exist for such a positron excess; some studies show that it can also be explained by a population of pulsars \cite{Malyshev:2009tw}. AMS-02 has also observed an excess in the positron fraction and has re-confirmed the findings of PAMELA \cite{Accardo:2014lma}. Fermi-LAT, a dedicated gamma-ray telescope, can also detect charged particles, and the results obtained by the Fermi-LAT collaboration \cite{Abdo:2009zk} show an excess in the electron-positron spectra in the 100 to 1000 GeV energy range, again confirming the positron excess reported by PAMELA \cite{FermiLAT:2011ab}. But the problem with Fermi-LAT is that the satellite does not have an on-board magnet, so it cannot distinguish the signal of positrons from that of electrons.
\item \textbf{Neutrinos:} \\ Neutrinos ($\nu$) and anti-neutrinos ($\bar{\nu}$) produced in the annihilation of DM particles serve as a good signal of their parent particles. The advantage of searching for neutrinos is that their weak interactions give them a long mean free path. For heavy DM particles, one expects to see high energy neutrinos coming from regions where the concentration of DM is high. Unlike photons, neutrinos can be detected in the controlled environment of underground laboratories, underwater or in ice. Presently, the active neutrino detectors include Super-Kamiokande (Super-K), IceCube \cite{Achterberg:2006md} and ANTARES \cite{Ageron:2011nsa}. The IceCube collaboration has looked for muon-neutrino signals from annihilating DM in nearby galaxies, galaxy clusters, the Galactic centre, the Sun and the Galactic halo \cite{Aartsen:2012kia, Aartsen:2013dxa, Aartsen:2014hva, Abbasi:2012ws}. To date, no signals have been observed in the neutrino channel by Super-K, IceCube or ANTARES.
\end{itemize}
\subsection{Collider Searches}
\noindent It is believed that, under ideal conditions, DM particles can be produced in colliders. The idea behind collider searches is to generate DM candidates from SM particles, i.e., SM+SM $\rightarrow$ DM+DM. Searches for DM in colliding-beam experiments suffer from several disadvantages: the production rate of SM particles is very large compared with that of the possible DM particles, and the DM candidates may not be observed directly, only the SM particles produced in their decays. Since it may not be possible to detect all the SM particles produced in the decay of the DM particles, measuring their masses in a colliding-beam experiment is difficult at best. \\
\noindent The LHC has provided data from proton-proton collisions at centre-of-mass energies of 7, 8 and 13 TeV. ATLAS \cite{Aad:2008zzm} and CMS \cite{Chatrchyan:2008aa}, the two major experiments at the LHC, have performed a number of analyses and have not seen any signal of DM \cite{Abercrombie:2015wmb}. It is hoped that the next run of the LHC will reveal evidence of physics beyond the SM, including the DM candidates.\\
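\noindent The ``missing transverse energy'' signature mentioned above is simply the imbalance in the vector sum of the visible transverse momenta: pair-produced DM particles escape undetected, so the visible system appears to recoil against nothing. The Python sketch below illustrates the bookkeeping with made-up visible momenta (the values are arbitrary, purely for illustration):
\begin{verbatim}
import math

# visible objects as (pT in GeV, azimuthal angle phi in radians) -- assumed
visible = [(120.0, 0.3), (80.0, 2.8), (45.0, -1.9)]

px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

met = math.hypot(px, py)        # magnitude of the momentum imbalance
phi_met = math.atan2(-py, -px)  # direction the invisible system points
print("missing E_T ~ %.1f GeV at phi = %.2f rad" % (met, phi_met))
\end{verbatim}
\noindent A large missing $E_{T}$ recoiling against, e.g., a single jet or photon (a ``mono-X'' event) is the canonical collider DM signature.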
\section{Tucana-II}
\noindent Inspired by the ongoing research interest in the indirect search for a DM signal from UFDs and dSphs, in this work we have studied a recently discovered UFD, namely Tucana-II (Tuc-II) \cite{Bhattacharjee:2018xem}, also referred to as DES~J2251.2-5836 \cite{Koposov:2015cua, Bechtol:2015cbp, Drlica-Wagner:2015ufc}. The observation by ref.~\cite{Walker:2016mcs} has confirmed its status: their study suggested that Tuc-II is a UFD and not a member of any globular cluster. Its high mass-to-light ratio, large half-light radius, large velocity dispersion and luminosity-metallicity relation have all established Tuc-II as a confirmed UFD galaxy \cite{Walker:2016mcs, Gilmore:2006iy, Kirby:2013wna, Straizys:1974jf, Gallagher:2003nx, Grcevich:2009gt, Ji:2016cvd} and make Tuc-II a very promising source for the indirect search for a DM signal \cite{Walker:2016mcs, Drlica-Wagner:2015xua, Hooper:2015ula, Fermi-LAT:2016uux, Calore:2018sdx}. The shape of Tuc-II is somewhat distorted and its outer region appears slightly elongated, but observational noise could be responsible for the distortion \cite{Bechtol:2015cbp, Martin:2008wj, Munoz:2009hj}. \\
\noindent By using the Michigan Magellan Fibre System (M2FS) \cite{Mateo:2012mn}, the spectroscopic
study by Walker \textit{et al.}, 2016 \cite{Walker:2016mcs} identified
some of the member stars in the direction of Tuc-II. The study of
ref.~\cite{Walker:2016mcs}, together with previous photometric observations of
Tuc-II~\cite{Koposov:2015cua, Drlica-Wagner:2015ufc}, identified
eight probable member stars of Tuc-II. These member stars are well enough resolved
to
estimate the internal velocity dispersion of Tuc-II, but they also lead to a
large asymmetric
uncertainty in the velocity dispersion (i.e.,
$\rm{\sigma_{v}}~=~8.6_{-2.7}^{+4.4}~{\rm {km}~{s}}^{-1}$) about a
mean velocity of $-129.1_{-3.5}^{+3.5}~{\rm {km}~{s}}^{-1}$. Some important
properties of Tuc-II obtained by several studies \cite{Koposov:2015cua,
Walker:2016mcs, Chiti:2018cds} are listed in Table~6.1 \cite{Bhattacharjee:2018xem}.
\begin{table}
\begin{center}
\caption{Properties of Tucana-II.}
\label{Table-1}
\begin{tabular}{|p{4 cm}|p{4 cm}|p{2 cm}|}
\hline
\hline
Property & Value & Reference \\
\hline
Galactic longitude & $\rm{328.0863^{\circ}}$ & \cite{Koposov:2015cua} \\
\hline
Galactic latitude & $\rm{-52.3248^{\circ}}$ & \cite{Koposov:2015cua} \\
\hline
Heliocentric distance ([d]) & $\rm{57_{-5}^{+5}~\rm {kpc}}$ &
\cite{Koposov:2015cua}\\
\hline
Metallicity ([$\rm{Fe/H}$]) & $\rm{<0.4}$ & \cite{Walker:2016mcs}\\
\hline
Projected half-light radius ($\rm{R_{h}}$) & $\rm{165^{+27.8}_{-18.5}~pc}$ &
\cite{Koposov:2015cua} \\
\hline
Maximum galactocentric angular distance in the sample of the observed member
stars in Tuc-II, as measured from the observer's position ([$\theta_{\rm
{max}}$]) & $0.30^{\circ}$ & \cite{Chiti:2018cds}\\
\hline
Square-root of the luminosity-weighted square of the line-of-sight stellar
velocity dispersion ($\rm{\sigma_{v}}$) & $~\rm{8.6_{-2.7}^{+4.4}~km~s^{-1}}$ &
\cite{Walker:2016mcs}\\
\hline
Mass within the projected half-light radius
\Big($\rm{\frac{M_{1/2}}{M_{\odot}}}$\Big) &
$\rm{~2.7_{-1.3}^{+3.1}~\times~10^{6}}$ & \cite{Walker:2016mcs}\\
\hline
Dynamical mass-to-light ratio \Big($\rm{(M/L_{v})_{1/2}}$\Big) &
$\rm{~1913_{-950}^{+2234}~M_{\odot}~L_{\odot}^{-1}}$ & \cite{Walker:2016mcs}\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{The \textit{Fermi}-LAT Data Analysis of Tuc-II}
\begin{table}
\caption{The parameter set that we used for our \textit{Fermi}-LAT
analysis.}
\begin{tabular}{||p{7 cm}p{8 cm}||}
\hline \hline
{\bf Parameter for data extraction } &\\
\hline\hline
Parameter & Value \\
\hline \hline
Source & Tucana-II \\
Right Ascension (RA) & 342.9796 \\
Declination (DEC) & -58.5689 \\
Radius of interest (ROI) & $10^{\circ}$ \\
TSTART (MET) & 239557418 (2008-08-04 15:43:37.000 UTC) \\
TSTOP (MET) & 530362359 (2017-10-22 10:52:34.000 UTC) \\
Energy Range & 100 MeV - 300 GeV \\
\hline \hline
\texttt{gtselect} for event selection & \\
\hline \hline
Event class & Source type (128)\\
Event type & Front+Back (3)\\
Maximum zenith angle cut & $90^{\circ}$\\
\hline \hline
\texttt{gtmktime} for time selection &\\
\hline \hline
Filter applied & $\textit{(DATA\_QUAL>0)\&\&(LAT\_CONFIG==1)}$\\
ROI-based zenith angle cut & No\\
\hline \hline
\texttt{gtltcube} for livetime cube &\\
\hline \hline
Maximum zenith angle cut ($z_{cut}$) & $90^{\circ}$\\
Step size in $cos(\theta)$ & 0.025\\
Pixel size (degrees) & 1\\
\hline \hline
\texttt{gtbin} for 3-D (binned) counts map &\\
\hline \hline
Size of the X $\&$ Y axis (pixels) & 140\\
Image scale (degrees/pixel) & 0.1\\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
\texttt{gtexpcube2} for exposure map &\\
\hline \hline
Instrument Response Function (IRF) & $\rm{P8R2\_SOURCE\_V6}$ \\
Size of the X $\&$ Y axis (pixels) & 400\\
Image scale (degrees/pixel) & 0.1\\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
\texttt{gtlike} for likelihood analysis & \\
\hline \hline
Galactic diffuse emission model & $\rm{gll\_iem\_v06.fits}$ \\
Extragalactic isotropic diffuse emission model &
$\rm{iso\_P8R2\_SOURCE\_V6\_v06.txt}$ \\
Source catalog & 3FGL \\
Extra radius of interest & $5^{\circ}$ \\
Response functions & $\rm{P8R2\_SOURCE\_V6}$\\
Optimizer & NEWMINUIT\\
Spectral model of Tucana-II & Power law (in Section-6.2.1) $\&$ DMFit
Function (in Section-6.4) \\
\hline \hline
\end{tabular}
\label{table_1}
\end{table}
\noindent As for Tri-II, we have analysed the gamma-ray data observed
by the Fermi-LAT for Tuc-II and have mostly followed the same analysis method
\cite{Bhattacharjee:2018xem}.\\
\noindent We have used the Fermi ScienceTools version \texttt{v10r0p5}, and the
dataset was
pre-processed with the
improved IRF, P8R2\_SOURCE\_V6, of the Fermi-LAT \cite{Bhattacharjee:2018xem}.\\
\noindent For analysing the possible signal coming from the direction of
Tuc-II,
we have extracted about nine years of Fermi-LAT data in the 100 MeV to 300
GeV energy range and have selected
a $10^{\circ}~\times~10^{\circ}$ ROI centred on the
location of Tuc-II \cite{Bhattacharjee:2018xem}.\\
\noindent In the source model, we have included our source of interest, Tuc-II,
along with
all the sources from the 3FGL catalog~\cite{Acero:2015gva} within a 15$^{\circ}$ ROI
around the location of Tuc-II \cite{Bhattacharjee:2018xem}. Then, with the \texttt{gtlike} tool, we have run the binned
likelihood analysis on our
dataset~\cite{Cash:1979vz, Mattox:1996zz}. During the likelihood fitting, the
spectral parameters of
all the sources within the $10^{\circ}~\times~10^{\circ}$ ROI and the
normalization parameters of the two diffuse background models (i.e.,
$\rm{gll\_iem\_v06.fits}$
and $\rm{iso\_P8R2\_SOURCE\_V6\_v06.txt}$) have been left free \cite{Bhattacharjee:2018xem}. All the
remaining background sources within the
$15^{\circ}~\times~15^{\circ}$ ROI have been kept fixed at their 3FGL catalog
\cite{Acero:2015gva} values. All the necessary information for
performing the
\textit{Fermi}-LAT analysis is given in TABLE~6.2 \cite{Bhattacharjee:2018xem}, where TSTART and TSTOP define the start and the end of the observation in units of Mission Elapsed Time (MET).\\
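\noindent For illustration, the event-selection steps of this pipeline can be scripted by calling the ScienceTools executables with the parameters of TABLE~6.2. The Python sketch below is a minimal, hedged example: the input and output file names are hypothetical placeholders, and only the first two tools are shown, the remaining tools (\texttt{gtltcube}, \texttt{gtbin}, \texttt{gtexpcube2}, \texttt{gtlike}) following analogously.
\begin{verbatim}
import subprocess

def run(tool, **params):
    # Run a Fermi ScienceTools executable with key=value parameters.
    subprocess.run([tool] + [f"{k}={v}" for k, v in params.items()],
                   check=True)

# gtselect: ROI, time, energy, zenith-angle and event class/type cuts
run("gtselect",
    infile="@tuc2_events.txt", outfile="tuc2_filtered.fits",
    ra=342.9796, dec=-58.5689, rad=10,
    tmin=239557418, tmax=530362359,
    emin=100, emax=300000, zmax=90,
    evclass=128, evtype=3)

# gtmktime: good-time-interval selection with the quality filter
run("gtmktime",
    scfile="tuc2_SC.fits", evfile="tuc2_filtered.fits",
    outfile="tuc2_filtered_gti.fits",
    filter="(DATA_QUAL>0)&&(LAT_CONFIG==1)", roicut="no")
\end{verbatim}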
\noindent In the following section, to check for any possible emission from the
Tuc-II location, we first model our source with a power-law spectrum for
different spectral indices.
\subsection{Results of the Power-law Modelling}
\begin{figure}
\centering
\subfigure[]
{ \includegraphics[width=0.8\linewidth]{figures/need1.png}}
\subfigure[]
{ \includegraphics[width=0.9\linewidth]{figures/2_powerlaw.png}}
\caption{(a) The spectral fit to the observed data counts per energy bin and (b) the
residual
plot for the location of Tuc-II. We have modelled Tuc-II with
the power-law spectrum for $\Gamma = 2$. In figure 6.1(a), the solid dark reddish-brown
curve displays the best-fit total spectrum, along with the corresponding LAT-observed data
points (in purple); the dot-dashed sky-blue and orange curves display the galactic diffuse background and
the isotropic background component, respectively; the dot-dashed black curve along with the green points
denotes the spectral fit of Tuc-II. The rest of the curves correspond to various point sources other
than Tuc-II lying within the ROI that are not distinctly labeled in figure 6.1(a).}
\end{figure}
\noindent As in our previous chapter, we have modelled Tuc-II with a power-law
spectrum (Eq.~5.1) and have performed the fitting for five spectral indices
($\Gamma$). In Fig.~6.1, we have shown the fitting results of Tuc-II for the spectral index $\Gamma = 2$.\\
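\noindent For reference, the power-law model of Eq.~5.1 has the standard form
\[
\frac{dN}{dE} \, = \, N_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma},
\]
where $N_{0}$ is the normalisation parameter, $E_{0}$ the energy scale and $\Gamma$ the spectral index.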
\noindent In Fig.~6.1(a), we have shown the spectral fit of all the sources that lie within the ROI \cite{Bhattacharjee:2018xem}.
In this figure, the sum of the best-fit spectrum along with the LAT counts (in purple) is denoted by the solid
dark reddish-brown curve, while the best-fit spectra of the galactic and isotropic components are shown by the `dot-dashed' sky-blue and
orange curves, respectively. The black `dot-dashed' curve along with the green points refers to the best-fit spectrum of Tuc-II, and
the remaining curves correspond to the other sources within the ROI. In Fig.~6.1(b), we display the residual plot of Tuc-II for the
spectral index $\Gamma$=2 \cite{Bhattacharjee:2018xem}.\\
\noindent The best-fit value of the normalisation parameter, $N_{\rm {0}}$,
and the TS value obtained from
Tuc-II for all five spectral indices ($\Gamma$) are shown in TABLE~6.3 \cite{Bhattacharjee:2018xem}.
Among the five spectral indices, $\Gamma = 1$ is
assumed to be connected with DM annihilation
(Ref.~\cite{Essig:2009jx}), and we have chosen the other $\Gamma$ values to
examine the astrophysical spectrum of Tuc-II \cite{Bhattacharjee:2018xem}. From TABLE~6.3 we can observe
that
for $\Gamma = 1$ the statistical error on $N_{\rm {0}}$ is slightly
larger than the value of $N_{\rm {0}}$ itself, and that the TS values of Tuc-II for
all $\Gamma$s are much less than the threshold limit for detection (i.e.,
TS$\ge$25) \cite{Bhattacharjee:2018xem}.
\begin{table}
\begin{center}
\caption{The best-fit normalization parameters ($N_{0}$) of Tuc-II and the TS values for
five spectral indices ($\Gamma$).}
\begin{tabular}{|p{3cm}|p{5cm}|p{3cm}|}
\hline
\hline
Spectral~Index~($\Gamma$) & $\rm{N_{0} \times
10^{-5}~(cm^{-2}~s^{-1}~MeV^{-1})}$ & Test Statistic (TS) value \\
\hline
$1$ & $(2.457\pm11.17)\times10^{-10}$ & 0.056 \\
\hline
$1.8$ & $(1.173\pm1.126)\times10^{-7}$ & 1.215 \\
\hline
$2$ & $(3.146\pm 2.565)\times10^{-7}$ & 2.077 \\
\hline
$2.2$ & $(7.458\pm4.923)\times10^{-7}$ & 2.973 \\
\hline
$2.4$ & $(1.433\pm0.839)\times10^{-6}$ & 3.592 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The $\gamma$-ray flux upper limits in $95\%$ C.L. obtained from Tuc-II for five spectral indices ($\Gamma$).}
\begin{center}
\begin{tabular}{|p{3cm}|p{8cm}|}
\hline
\hline
Spectral~Index~($\Gamma$) &
Flux~upper~limits~in~$\rm{95\%~C.L.~(cm^{-2}~s^{-1})}$ \\
\hline
1 & $3.248\times10^{-11}$ \\
\hline
1.8 & $4.484\times10^{-10}$ \\
\hline
2 & $8.362\times10^{-10}$ \\
\hline
2.2 & $1.401\times10^{-9}$ \\
\hline
2.4 & $2.113\times10^{-9}$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\noindent As we have not detected any excess emission from the Tuc-II location,
we have determined the flux upper limits in $95\%$ C.L. by the profile
likelihood
method
\cite{Barbieri:1982eh, Rolke:2004mj}.\\
\noindent We have next derived the flux upper limits in 95$\%$ C.L. by using
the
semi-Bayesian method with a flat prior \cite{Bhattacharjee:2018xem}. This semi-Bayesian method is developed
from Helene's approach \cite{Helene:1990yi} and is
already implemented in the \texttt{ScienceTools}.\\
\noindent In Table~6.4, we have shown the flux upper limits in 95$\%$ C.L.
derived from the semi-Bayesian method \cite{Bhattacharjee:2018xem}.
From Table~6.4, we can note that the $\gamma$-ray flux upper limit for
$\Gamma=1$ is almost two orders of magnitude lower than the flux upper limit corresponding to
$\Gamma=2.4$ \cite{Bhattacharjee:2018xem}. This result is consistent with our finding for
Tri-II~\cite{Biswas:2017meq, Bhattacharjee:2018xem}. Here, we would like to point out that the flux upper
limits derived from the semi-Bayesian method and the profile likelihood
method
differ by only a factor of 1.2 to 1.3 \cite{Bhattacharjee:2018xem}.\\
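\noindent For illustration, both estimators are exposed through the Python interface of the \texttt{ScienceTools}. The sketch below assumes that a likelihood object \texttt{like} has already been configured for the Tuc-II ROI; the source name is a placeholder, and the exact call signatures may vary between \texttt{ScienceTools} versions.
\begin{verbatim}
# Sketch: 95% C.L. flux upper limits for a source in an existing
# pyLikelihood analysis object `like`.
from UpperLimits import UpperLimits      # profile-likelihood method
import IntegralUpperLimit                # semi-Bayesian (Helene) method

src = "Tucana-II"                        # placeholder source name

# Profile likelihood: scan the normalisation until the log-likelihood
# drops by 1.35 (one-sided 95% C.L.)
ul = UpperLimits(like)
flux_profile, _ = ul[src].compute(emin=100, emax=300000)

# Semi-Bayesian integration with a flat prior (Helene 1990)
flux_bayes, _ = IntegralUpperLimit.calc_int(like, src, cl=0.95,
                                            emin=100, emax=300000)
# The two limits are expected to agree within a factor of ~1.2-1.3.
\end{verbatim}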
\noindent In the next section, we attempt to study the possible DM
signature coming from the location of Tuc-II \cite{Bhattacharjee:2018xem}. Thus, we now model Tuc-II
with the $\gamma$-ray spectrum from DM annihilation
(i.e., with the DMFit function) that is implemented in the \textit{Fermi}
\texttt{ScienceTools}. For comparison, along with Tuc-II, we also
consider two other dSphs, namely
the newly discovered Reticulum-II (Ret-II) and the classical dSph Ursa Minor (UMi) \cite{Bhattacharjee:2018xem}.\\
\section{Estimation of Astrophysical Factor (J-factor) for Tuc-II}
\noindent The main difficulty in studying the newly discovered UFDs is their
insufficient kinematic data. That also questions the reliability of the J-factors
of
the dSphs and UFDs \cite{Bhattacharjee:2018xem}. For our work, we have taken the J-factors of Tuc-II and of the other two dSphs (Ret-II and UMi) from Evans
\textit{et al.}, 2016
\cite{Evans:2016xwx}. Their study suggests that their
analytical formula for the J-factor
yields reasonably accurate results when compared to the spherical
Jeans analysis driven by Markov Chain Monte Carlo
techniques; Evans \textit{et
al.} \cite{Evans:2016xwx} argued that their derived formula for the J-factor can
even reproduce the numerical results.
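\noindent For completeness, the astrophysical J-factor discussed here is the standard line-of-sight integral of the squared DM density over the solid angle $\Delta\Omega$ subtended by the target,
\[
J(\Delta\Omega) \, = \, \int_{\Delta\Omega}\int_{\rm l.o.s.} \rho^{2}\big(r(l,\Omega)\big)\, dl \, d\Omega ,
\]
where $\rho(r)$ is the DM density profile of Tuc-II and $l$ runs along the line of sight; the analytical formula of Evans \textit{et al.} \cite{Evans:2016xwx} approximates this integral in terms of the observed velocity dispersion, distance and half-light radius.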
\section{DM Annihilation Constraints from Tuc-II}
\subsection{Searching for $\gamma$-ray Emission due to DM
Annihilation from Tuc-II}
\begin{figure}
\centering
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/ts_mod.pdf}}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/ts_time2.pdf}}
\caption{(a) The variation of the TS values of Tuc-II with $m_{DM}$ for two WIMP annihilation channels: i) $100\%$ $b\bar{b}$
(blue) and ii) $100\%$ $\tau^{+}\tau^{-}$ (red), shown for three different periods
of LAT data. (b) The peak TS value observed from the location of Tuc-II for the three periods of LAT data; the red and blue markers refer to the peak TS value
for the $b\bar{b}$ and $\tau^{+}\tau^{-}$ WIMP annihilation final states,
respectively.}
\end{figure}
\noindent Here we have first fitted the possible $\gamma$-ray flux arising from
the
Tuc-II location with the $\gamma$-ray spectrum for DM pair annihilation \cite{Bhattacharjee:2018xem}. For
this calculation, we have employed
the MC simulation package DMFit~\cite{Jeltema:2008hf, Gondolo:2004sc}, which is
implemented in the Fermi-\texttt{ScienceTools}.
We have modelled Tuc-II as a point source, and its significance is derived by
the $\Delta TS$ method that we followed in section 6.2.1. \\
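\noindent The test statistic used throughout is the standard likelihood-ratio quantity \cite{Mattox:1996zz},
\[
{\rm TS} \, = \, 2\left[\ln \mathcal{L}({\rm source}) - \ln \mathcal{L}({\rm null})\right],
\]
where $\mathcal{L}({\rm source})$ and $\mathcal{L}({\rm null})$ are the maximum likelihoods with and without the target included in the source model; in the asymptotic limit, $\sqrt{\rm TS}$ approximates the detection significance.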
\noindent In this section, we examine whether, after modelling Tuc-II
with the $\gamma$-ray annihilation spectrum,
we can obtain any excess from the location of Tuc-II. Interestingly, we
have detected a very faint emission from Tuc-II \cite{Bhattacharjee:2018xem}.\\
\noindent From Fig.~6.2(a), we can observe the variation of the detection
significance of the
$\gamma$-ray
excess (i.e., the TS values) from the location of Tuc-II as a function of WIMP
mass ($m_{DM}$) for two pair-annihilation final states, i.e., $100\%$
$b\bar{b}$ and
$100\%$ $\tau^{+}\tau^{-}$ \cite{Bhattacharjee:2018xem}. In Fig.~6.2(b), we
have also shown the variation of the TS values for three, six and nine years of LAT
data \cite{Bhattacharjee:2018xem}.
For this comparison, we have applied the same
analysis method to all three datasets. From Fig.~6.2(b), we can
observe that the peak TS value increases with the size of the dataset, and both annihilation channels
follow the same trend. The observed emission from the Tuc-II location is
indeed too faint (i.e.,
less than TS=25) to claim anything precisely, but the most interesting finding
of this analysis is that the peak TS value gradually increases with the time
period \cite{Bhattacharjee:2018xem}. From this signature, we can expect that in future we may
detect
a real signal from Tuc-II, either due to its connection with
some astrophysical source or resulting from DM annihilation \cite{Bhattacharjee:2018xem}. In Fig.~6.2(a),
with
nine years of \textit{Fermi}-LAT data,
the TS value peaks at $m_{DM}$~=~14 GeV for the $100\%$ $b\bar{b}$ channel, while for $100\%$ $\tau^{+}\tau^{-}$ it
peaks at $m_{DM}$~=~4 GeV \cite{Bhattacharjee:2018xem}.\\
\noindent Many earlier studies have already analysed
Tuc-II
with six or
seven years of \textit{Fermi}-LAT data \cite{Drlica-Wagner:2015xua,
Hooper:2015ula,
Fermi-LAT:2016uux, Calore:2018sdx}. In our analysis, we have studied Tuc-II
with nine years of \textit{Fermi}-LAT data, and thus the increase in the TS peak
value possibly originates from the larger dataset \cite{Bhattacharjee:2018xem}. Such
an increase in the $\gamma$-ray excess with an increasing time period of analysis
is encouraging for the indirect detection of a DM annihilation signal \cite{Bhattacharjee:2018xem}.\\
\begin{table}
\caption{The overview of the TS and the $\Delta$TS values for the two spectral
models considered in this work: 1) the power law (PL) with spectral index $\Gamma=~2.4$ and 2) the
best-fit DM model corresponding to the highest TS value (in
our case, the $\rm{100\%~\tau^{+}\tau^{-}}$ final state at $m_{DM}$= 4 GeV). The p-value
is estimated by assuming the $\chi^{2}$ distribution with 1 degree of freedom.}
\label{Tab-3}
\centering
\begin{tabular}{|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.2cm}
|p{2cm}|}
\hline
Our source & TS for PL & $\sigma~(= \sqrt{TS})$ for PL & p-value for PL & TS
for
DM & $\sigma~(= \sqrt{TS})$ for DM & p-value for DM & $\Delta$ TS (DM-PL)\\
\hline
Tucana-II & 3.59 & 1.89 & 0.05 & 8.61 & 2.93 & 0.003 & 5.02 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\subfigure[]
{ \includegraphics[width=0.48\textwidth, height=0.35\textwidth]{figures/1degreespectra.jpg}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth, height=0.34\textwidth]{figures/compare_tuc1.pdf}}
\caption{(a) The spectral fit to the observed counts and (b) the residual
plot for a $1^{\circ}$ $\times$ $1^{\circ}$ ROI centred on the location of
Tuc-II.
In Fig. 6.3(a), the sum of the best-fit spectrum
along with the Fermi-LAT detected counts (in brown) is shown with the solid purple curve, while the diffuse galactic and isotropic components are displayed by the `dot-dashed' sky-blue and orange curves. In Fig. 6.3(b), the magenta solid curve represents the best-fit DM annihilation spectrum for the 100$\%$ $\tau^{+}\tau^{-}$ channel at DM mass $m_{DM}$ = 4 GeV. The corresponding residual points (in red) within the 100 MeV to 300 GeV energy range are overplotted with error bars.}
\end{figure}
\noindent If we compare the power-law and the DM annihilation spectra,
we
find that the peak TS value is significantly improved for the DM
annihilation hypothesis \cite{Bhattacharjee:2018xem}. Besides, the p-value (defined as the
probability of obtaining signal-like data from background fluctuations)
related to the local significance is reduced for the DM annihilation spectrum. We have
derived the p-value by assuming a $\chi^{2}$ distribution with 1 degree of
freedom \cite{Bhattacharjee:2018xem}. All the necessary details are displayed in TABLE~6.5 \cite{Bhattacharjee:2018xem}. From this
table, we can see that the excess obtained from the Tuc-II location might favour
the DM annihilation scenario over a connection with an astrophysical
phenomenon \cite{Bhattacharjee:2018xem}. However, we also want to mention that for both the DM annihilation
hypothesis and the power law we have obtained comparable values of
-log(Likelihood).
Thus, we cannot firmly rule out the possibility of an astrophysical
connection with the faint excess from Tuc-II \cite{Bhattacharjee:2018xem}. Hence, from our analysis, at
present, we can conclude that our results at best show a hint of a DM signal
from the location of Tuc-II \cite{Bhattacharjee:2018xem}. For the DM annihilation spectrum, we have
obtained $\sigma$ = 2.93; next, we will examine the
effect of surrounding unresolved sources that have not been detected by the
Fermi-LAT and will review whether their effects could decrease the local
significance ($\sigma$) for the DM annihilation model.\\
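\noindent As a quick numerical cross-check of TABLE~6.5, the local significance and p-value follow directly from the $\chi^{2}$ distribution with one degree of freedom; a minimal sketch in Python:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def significance(ts):
    # sigma = sqrt(TS); p-value from chi^2 with 1 degree of freedom
    return np.sqrt(ts), chi2.sf(ts, df=1)

for label, ts in [("PL, Gamma=2.4", 3.59),
                  ("DM, tau tau at 4 GeV", 8.61)]:
    sigma, p = significance(ts)
    print(f"{label}: sigma = {sigma:.2f}, p = {p:.3f}")
# -> sigma = 1.89, p = 0.058 and sigma = 2.93, p = 0.003,
#    consistent with TABLE 6.5
\end{verbatim}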
\noindent So far, we have executed the likelihood analysis over a
$10^{\circ}~\times~10^{\circ}$ ROI, but it is quite impossible to distinguish
any
special features of Tuc-II in that large region of the sky. Hence, in
Fig.~6.3(a,b), we have displayed the best-fit spectra and the corresponding
residual plot of Tuc-II for a $1^{\circ}~\times~1^{\circ}$ ROI \cite{Bhattacharjee:2018xem}. For
obtaining the best-fit spectra for the $1^{\circ}~\times~1^{\circ}$ ROI,
we have fixed all the background sources to the best-fit
values obtained from the $10^{\circ}~\times~10^{\circ}$ ROI fitting \cite{Bhattacharjee:2018xem}. To
investigate any interesting signature originating from the location of
Tuc-II,
in Fig.
6.3(a) we have not included Tuc-II in the source model. \\
\noindent From Fig.~6.3(a), we can check the spectral fit per energy bin of all
the sources within the $1^{\circ}~\times~1^{\circ}$ ROI along with the isotropic
and
the galactic diffuse background models, except for Tuc-II. In Fig.~6.3(b), we
have
over-plotted the best-fit spectrum of Tuc-II with a magenta solid line, and
the
corresponding residuals
between 100 MeV and 300 GeV are shown with the red points. \\
\noindent In Fig.~6.3(b), we have considered the best-fit DM spectrum for the
100$\%$ $\tau^{+}\tau^{-}$ annihilation channel at a DM mass of 4 GeV, as this
channel produces the highest TS peak value for Tuc-II (see Fig.~6.2(a,b)) \cite{Bhattacharjee:2018xem}.
Now,
to check the goodness of fit between the DM annihilation spectrum for the
$\tau^{+}\tau^{-}$ annihilation channel and the data derived from the residual
energy spectrum (Fig.~6.3(b)), we have applied the T-TEST method \cite{Welch:1947df, Ruxton:2006dh}. This method
is generally favoured for systems dealing with a small number of
events. The T-TEST is a statistical hypothesis test that examines whether
there exists any considerable deviation between the means of two samples. Under
the null hypothesis, the T-TEST assumes that both samples are drawn from the
same population (Appendix A and B of Chapter 9; a minimal code sketch of such a
test is given at the end of this subsection). For our case,
with the T-TEST method, we have tried to check whether our selected DM model
spectrum can provide an acceptable fit to the data obtained from the residual
energy spectrum (Fig.~6.3(b)) \cite{Bhattacharjee:2018xem}. In Fig.~6.3(b),
we have combined the residuals from all pixels into the energy bins. From this
figure (Fig.~6.3(b)), we can observe that even for the full energy range (i.e.,
including both the positive bump at energies above 500 MeV and the negative bump at
energies below 500 MeV) the spectrum for the DM annihilation model with the
$\tau^{+}\tau^{-}$ final state can produce an acceptable fit to the residual
energy spectrum, with a p-value of $\approx$ 0.112 (the p-value here refers to the
goodness of fit of the T-TEST) \cite{Bhattacharjee:2018xem}. A p-value of $>0.05$ implies that we are not
in a position to reject the null hypothesis. Thus, we cannot
reject the idea that the DM annihilation spectrum for the
$\tau^{+}\tau^{-}$ final state (for both the positive and the negative bumps)
is
consistent with the residual spectrum. Besides, if we only focus on the
positive
residual energy bump above 500 MeV, we find that the DM annihilation
model
for the $\tau^{+}\tau^{-}$ final state fits the residual energy
spectrum with a p-value of $\approx$ 0.782 \cite{Bhattacharjee:2018xem}. This positive bump in the
residual
energy spectrum indicates an intriguing hint of a DM annihilation signal from
Tuc-II \cite{Bhattacharjee:2018xem}. Here, we would like to mention that in Fig. 6.3(b) the negative bump
at energies below 500 MeV is nearly as significant as the positive bump
at
energies above 500 MeV, and this negative bump at lower energies might be
connected
to the poor modelling of the diffuse background templates. The TS peak values
that we have obtained from the Tuc-II location are much lower than the detection
threshold of
the \textit{Fermi}-LAT. Hence, we cannot completely rule out the possibility
that such an excess comes from statistical fluctuations or that it is
connected with some nearby unassociated sources. In the next section, we
investigate this in detail.
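\noindent Before moving on, here is the promised minimal sketch of such a two-sample test (Welch's unequal-variance t-test \cite{Welch:1947df}) with SciPy; the arrays are hypothetical stand-ins for the binned residual counts and the DM-model prediction, not our actual data:
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the residual counts per energy bin and
# the DM-annihilation model prediction evaluated in the same bins.
residual_counts = rng.normal(loc=1.0, scale=0.3, size=24)
model_counts    = rng.normal(loc=1.0, scale=0.2, size=24)

# equal_var=False selects Welch's test, which does not assume equal
# variances and is suited to small samples of unequal spread.
t_stat, p_value = ttest_ind(residual_counts, model_counts,
                            equal_var=False)

# p > 0.05: no ground to reject the null hypothesis that the two
# samples share a common mean, i.e. the model is an acceptable fit.
print(t_stat, p_value)
\end{verbatim}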
\subsection{Distribution of the Excess Obtained from $\gamma$-ray
Spectra of DM Annihilation}
\begin{table}
\caption{The list of CRATES and BZCAT sources within the 1$^{\circ}$ ROI of
Tuc-II. J225455-592606 is listed in both catalogs; for this source, we have used its
CRATES coordinates.}
\label{Tab-3}
\centering
\begin{tabular}{|p{2cm}|p{6cm}|p{5cm}|}
\hline
Our source & Nearby sources from BZCAT and CRATES catalog & Distance to the
Tuc-II ($^{\circ}$)\\
\hline
Tucana-II & J~225134-580103 & 0.55 \\
\hline
& J~225008-591029 & 0.66 \\
\hline
& J~225455-592606 & 0.95 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The TS values for Tuc-II, 4FGL 2247.7-5857, and the three sources from the BZCAT and CRATES catalogs that lie within $1^{\circ}$ of Tuc-II. For
Tuc-II, we show its TS peak value for the $100\%~\tau^{+}\tau^{-}$
annihilation channel at $m_{DM}$=4 GeV. The three nearby CRATES
sources are modelled with power-law spectra for
$\Gamma=2.2$. In the case of 4FGL 2247.7-5857, we have modelled it with a
power-law spectrum and have used the parameter values from the 4FGL catalog of Fermi-LAT.}
\begin{tabular}{|p{0.8cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{3cm}
|p{2.5cm}|}
\hline
Year & TS value of Tuc-II from the $\Delta~TS$ method & TS value of J225134-580103 & TS
value of
J225008-591029 & TS value of J225455-592606 & TS value of 4FGL 2247.7-5857 & TS
value of Tuc-II after including the three CRATES sources and 4FGL 2247.7-5857 in the
source model & Rescaled TS value of Tuc-II accounting for all possible background
fluctuations.\\
\hline
3 & 3.0868 & 0.05 & 0.027 & 0.49 & 5.61 & 3.04 & $\approx$ 1.7167\\
\hline
6 & 6.8802 & 0.66 & 1.22 & 0.98 & 10.45 & 5.24 & $\approx$ 3.8265\\
\hline
9 & 8.61 & 2.043 & 3.82 & 2.01 & 21.67 & 7.05 & $\approx$ 4.7885\\
\hline
\end{tabular}
\end{table}
\noindent In subsections 6.2.1 and 6.4.1, we have determined the TS value for
Tuc-II, but we have not examined whether there is any nearby background
fluctuation. Such surrounding fluctuations can influence the
significance that we have obtained for
Tuc-II. Moreover, we have obtained only a very faint emission from the
location of Tuc-II (i.e., a TS value of 8.61) \cite{Bhattacharjee:2018xem}. Thus, before claiming its
connection with the spectrum resulting from DM annihilation, we next
carefully examine
the origin and the reliability of this faint excess \cite{Bhattacharjee:2018xem}. \\
\noindent There is a fair chance that the excess that we obtained from the Tuc-II
location
is the result of either surrounding unresolved sources or
deficiencies in the background models \cite{Bhattacharjee:2018xem}. Carlson \textit{et al.},
2015~\cite{Carlson:2014nra} have suggested
that such faint $\gamma$-ray emission from dSphs can plausibly come from
several
nearby unresolved faint $\gamma$-ray sources such as radio galaxies
\cite{Inoue:2011bm}, blazars
\cite{Abdo:2010gqa}, star-forming galaxies
\cite{Ackermann:2012vca, Linden:2016fdd} and millisecond pulsars
\cite{Hooper:2013nhl}. Among these,
blazars are the most likely candidates for such background fluctuations
\cite{Carlson:2014nra}. At high latitude, blazars are the most numerous point
sources,
and thus they are assumed to be the prime source of anisotropy in the extragalactic
gamma-ray sky \cite{Ackermann:2012uf, Abazajian:2010pc, Venters:2010bq,
Venters:2011gg, Cuoco:2012yf,
Harding:2012gk}. A non-negligible amount of
$\gamma$-ray emission can also arise from star-forming and radio
galaxies.\\
\noindent Motivated by the work of Carlson \textit{et al.},
2015~\cite{Carlson:2014nra}, in this section we have performed a detailed
analysis to examine the possible reason for the faint excess from the
location of Tuc-II. For our purpose, we have used two multiwavelength blazar
catalogs, CRATES
\cite{Healey:2007by} and BZCAT \cite{Massaro:2008ye}. The BZCAT catalog consists of
3149 blazars, of which 2274 are located at high galactic latitude,
i.e., $|b|>30^{\circ}$. The CRATES catalog contains
nearly 11,000 bright flat-spectrum radio sources. Within a $1^{\circ}$ ROI of
Tuc-II, we have found one blazar
from the BZCAT catalog and three radio sources from the CRATES catalog \cite{Bhattacharjee:2018xem}. The
source that is included in the BZCAT catalog has also been detected by the
CRATES catalog. For our examination, we have considered the three CRATES sources
J225134-580103, J225008-591029, and J225455-592606. All three sources
are located within a
1$^{\circ}$ ROI of Tuc-II \cite{Bhattacharjee:2018xem}. We have not considered any radio sources
beyond
1$^{\circ}$, because a source beyond
1$^{\circ}$ would not produce any significant changes to the local
emission of dSphs \cite{Carlson:2014nra}. In Table~6.6, we have listed the
CRATES sources within a
1$^{\circ}$ ROI of Tuc-II \cite{Bhattacharjee:2018xem}.\\
\noindent Inspired by Carlson et al.~\cite{Carlson:2014nra}, we have modelled all three
radio sources with a power-law spectrum of index ($\Gamma$)=2.2 and then
derived the TS values of these three radio sources for different time
periods of the Fermi-LAT dataset \cite{Bhattacharjee:2018xem}. In Table~6.7, we have listed our results. After
the inclusion of these three sources, we have observed that the significance of
Tuc-II is reduced by only $\sim$ 10$\%$ \cite{Bhattacharjee:2018xem}. Here we would like to mention that
Carlson
\textit{et al.}, 2015~\cite{Carlson:2014nra} observed the same
reduction. They
concluded that blazars are responsible for only 10$\%$ of the local
TS value of the source, and that the large part of the excess from dSphs is not
related to the nearby radio sources.\\
\noindent To investigate the distribution of the local excess obtained from the
location of Tuc-II, we have generated a $2^{\circ}~\times~2^{\circ}$ residual TS
map around Tuc-II with \textit{`gttsmap'} for the energy range between 100 MeV and
300 GeV \cite{Bhattacharjee:2018xem}. During this process, the spectral parameters of all the sources within
the $10^{\circ}$ ROI were kept fixed at the values obtained from their fits
performed on nine years of Fermi-LAT data \cite{Bhattacharjee:2018xem}, while the normalization values of the
galactic and isotropic models were left free. We have generated the TS map
for three
cases \cite{Bhattacharjee:2018xem}: 1) Fig.~6.4 (extreme left): Tuc-II and the three sources from the BZCAT and
CRATES catalogs that lie within a $1^{\circ}~\times~1^{\circ}$ ROI of Tuc-II were
not included in the source model;
2) Fig.~6.4 (middle): the three radio sources from the BZCAT and CRATES catalogs
that lie within a $1^{\circ}~\times~1^{\circ}$ ROI of Tuc-II were included in the
source model, but Tuc-II was not; 3)
Fig.~6.4 (extreme right): Tuc-II and the three sources from the BZCAT and
CRATES catalogs that lie within a $1^{\circ}~\times~1^{\circ}$ ROI of Tuc-II were
included in the source model. For generating the residual TS map (for the right
image of Fig.~6.4), we have taken the best-fit parameters of Tuc-II obtained
from its DM annihilation spectrum for the $100\%~\tau^{+}\tau^{-}$ annihilation
channel at $m_{DM}$ = 4 GeV (a sketch of the \textit{gttsmap} call follows below).
\begin{figure}
\centering
{ \includegraphics[width=1.0\linewidth]{figures/3.png}}
\caption{The residual TS maps (between 100 MeV and 300 GeV) for a $2^{\circ}~\times~2^{\circ}$
ROI centred on Tuc-II, extracted from the $10^{\circ}$ ROI. The image scale of the TS map is $0.025^{\circ}$ per pixel. In
the left figure, the three CRATES sources and Tuc-II are not added to our source model; in the middle figure, Tuc-II is not added to the source model but the three CRATES sources are included; in the right figure, the three CRATES sources and Tuc-II are
included in our source model. Tuc-II and the three CRATES sources that lie within $1^{\circ}$ are marked with white crosses.}
\end{figure}
\noindent From Fig.~6.4 (extreme left, middle), we can observe a hint of a
localized emission with TS value $\approx$ 6.5, and that region is very close to
the location of Tuc-II \cite{Bhattacharjee:2018xem}. It is true that the region of faint emission is not
exactly localized at the position of Tuc-II,
but it is a mere $0.18^{\circ}$ away \cite{Bhattacharjee:2018xem}.
Interestingly, from Fig.~6.4 (right), we can see that just after including the
three radio sources from the CRATES catalog and Tuc-II in the source model, the
significance of the nearby localized excess is considerably reduced \cite{Bhattacharjee:2018xem}. Thus, from
Fig.~6.4, we can conclude that there is a fair possibility that Tuc-II is
associated with the nearby localized emission \cite{Bhattacharjee:2018xem}.\\
\noindent Apart from that nearby localized excess region of Tuc-II, in Fig.
6.4 we can notice a very bright emission with significance $\approx5\sigma$ at the
bottom-right corner, and after investigating the above three TS maps, we
can safely state that this bright excess is not associated with Tuc-II \cite{Bhattacharjee:2018xem}. Thus,
we
have checked the 4FGL
catalog of Fermi-LAT (\cite{Fermi-LAT:2019yla}) and have noticed that the source
4FGL 2247.7-5857 exactly overlaps with that bright region \cite{Bhattacharjee:2018xem}. Next, we have
produced the $2^{\circ}~\times~2^{\circ}$ residual TS map for four
cases (see Fig. 6.5) \cite{Bhattacharjee:2018xem}. The first and second residual TS maps of Fig. 6.5
are the same
as the first two TS maps of Fig. 6.4. For the third TS map of Fig. 6.5, the three
radio sources from the BZCAT and CRATES catalogs
that lie within a $1^{\circ}~\times~1^{\circ}$ ROI of Tuc-II and 4FGL 2247.7-5857
were included in the source model, but Tuc-II was not. For the last TS
map
(extreme right of Fig.~6.5), 4FGL 2247.7-5857, Tuc-II and the three sources from the
BZCAT
and
CRATES catalogs that lie within a $1^{\circ}~\times~1^{\circ}$ ROI of Tuc-II were
included in the source model.
\begin{figure}
\centering
{ \includegraphics[width=1.0\linewidth]{figures/2.png}}
\caption{The residual TS maps (between 100 MeV and 300 GeV) for a $2^{\circ}~\times~2^{\circ}$
ROI centred on Tuc-II, extracted from the $10^{\circ}$ ROI. The image scale of the TS map is $0.025^{\circ}$ per pixel. In
the extreme left figure, the three CRATES sources, 4FGL 2247.7-5857 and Tuc-II are not added to our source model; in the second figure, Tuc-II and 4FGL 2247.7-5857 are not added to the source model but the three CRATES sources are included; in the third figure, Tuc-II is not added to the source model but 4FGL 2247.7-5857 and the three CRATES sources are included; in the extreme right figure, the three CRATES sources, 4FGL 2247.7-5857 and Tuc-II are
included in our source model. Tuc-II, 4FGL 2247.7-5857 and the three CRATES sources that lie within $1^{\circ}$ are marked with white crosses.}
\end{figure}
\noindent Now, if we check the extreme right image of Fig. 6.5, we can observe
that after the inclusion of 4FGL 2247.7-5857 in the source model, the emission from
the
bright region at the bottom-right corner is greatly decreased. Hence,
this result shows that the bright excess in the residual TS map has an
astrophysical origin and primarily comes from the source 4FGL
2247.7-5857 \cite{Bhattacharjee:2018xem}.\\
\noindent From our analysis, we would like to mention that, even after
including
4FGL 2247.7-5857 and the three radio sources from the CRATES catalog, in Fig.~6.4 and
Fig.~6.5 we can still detect plenty of delocalized excesses \cite{Bhattacharjee:2018xem}. Deficiencies in the
background models for Fermi-LAT can be the reason for this leakage \cite{Bhattacharjee:2018xem}. There is also the
possibility
that these delocalized excess regions are associated with some unresolved
astrophysical
sources; DM
subhalos can also be linked with such emissions. Some studies
argue that, even if we accurately model all the astrophysical sources to an extent, the DM subhalos will still be responsible for an
irreducible background, say $\approx5\%-10\%$, of the gamma-ray sky
\cite{Carlson:2014nra,
Ackermann:2012uf, Lee:2008fm, SiegalGaskins:2009ux}. But with a detailed
multiwavelength study, we can hope to reduce the contamination from
most of the unresolved sources in the blank sky.\\
\noindent In this work, for calculating the TS value, we have considered the
background models provided by Fermi-LAT and not the blank sky. Hence,
there is a high chance that we have overestimated the significance of the
source,
even after including all possible nearby sources in our source model \cite{Bhattacharjee:2018xem}. Several
works by the Fermi collaboration
\cite{Ackermann:2013yva, Fermi-LAT:2016uux} have reported that over a
vast
region of the blank sky, one might observe excesses of TS $>$ 8.7. Such
emission
would decrease the source
significance from 2.95$\sigma$ to 2.2$\sigma$ \cite{Ackermann:2013yva,
Fermi-LAT:2016uux}.
Following this prescription \cite{Ackermann:2013yva, Fermi-LAT:2016uux}, we
have
re-calibrated
the TS value estimation, and this effect reduces the TS value of Tuc-II from
8.61
to 4.79, i.e., the p-value from
0.003 to 0.029 \cite{Bhattacharjee:2018xem}. All our results are listed in Table~6.7 \cite{Bhattacharjee:2018xem}. In
column 2, we show the TS value from the $\Delta TS$ method; in
columns 3, 4, 5 and 6, we give the TS values of the three radio sources
from the CRATES catalog and of 4FGL
2247.7-5857; in column 7, we provide the revised TS value of
Tuc-II after including 4FGL 2247.7-5857 and the three radio sources from the CRATES
catalog in the source model;
and in column 8, we show the re-scaled TS value of Tuc-II accounting for
all probable background fluctuations.
\subsection{Possible DM Annihilation Constraint on Theoretical DM
Models with 9 Years of Tuc-II \textit{Fermi}-LAT Data}
\noindent In the previous sections, we have discussed that the peak TS value
for the $\tau^{+}\tau^{-}$ annihilation channel is lower than the detection
threshold of Fermi-LAT
(i.e., TS~$<$~25). Thus, in this section, we estimate the
$\gamma$-ray flux upper limits in $95~\%$ C.L. for Tuc-II by employing the
$\gamma$-ray spectrum from DM annihilation. For this purpose, we have used the
semi-Bayesian method~\cite{Helene:1990yi}, as described in section 6.2.1. With the DMFit
function,
we have also
determined the upper limits on $<\sigma
v>$ as a function of the DM mass ($\rm{m_{DM}}$) for five pair-annihilation final
states \cite{Jungman:1995df}.
As for Tri-II, in this analysis we have again considered the five
supersymmetry-favoured pair-annihilation final states \cite{Jungman:1995df}:
$100\%$ $b\bar{b}$, $80\%$ $b\bar{b}$+$20\%$ $\tau^{+}\tau^{-}$, $100\%$
$\tau^{+}\tau^{-}$, $100\%$ $\rm{\mu^{+} \mu^{-}}$ and $100\%$ $W^{+}W^{-}$.
In Fig.~6.6(a,b), we show the variation of the $\gamma$-ray
flux
upper limits of Tuc-II in 95 $\%$ C.L. and the corresponding upper limits on
$<\sigma
v>$ as a function of $\rm{m_{DM}}$ for the annihilation final states \cite{Bhattacharjee:2018xem}.
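\noindent The conversion between a flux upper limit and an upper limit on $<\sigma v>$ rests on the standard expression for the $\gamma$-ray flux from pair-annihilating Majorana DM,
\[
\Phi(\Delta\Omega) \, = \, \frac{1}{4\pi}\,\frac{<\sigma v>}{2\,m_{DM}^{2}} \int_{E_{\rm min}}^{E_{\rm max}} \frac{dN_{\gamma}}{dE}\, dE \; \times \; J(\Delta\Omega),
\]
where $dN_{\gamma}/dE$ is the photon spectrum per annihilation for a given final state; for a fixed J-factor, the flux limit therefore maps directly onto a $<\sigma v>$ limit.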
\begin{figure}
\centering
\subfigure[]
{\includegraphics[width=0.49\textwidth,clip,angle=0]{figures/tuc_flux_newrev.pdf}}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/tuc_cross_newrev.pdf}}
\caption{The variations of (a) the $\gamma$-ray flux upper
limits and (b) the respective WIMP pair-annihilation $<\sigma~v>$ in $95\%$ C.L. with the DM mass, $\rm{m_{DM}}$, estimated for five
annihilation channels ``f". The results are produced by considering the median $\rm{J(0.5^{\circ})}$-factor value of
Tuc-II.}
\end{figure}
\begin{figure}
\centering
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/msugra_newrev.pdf}}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/mssm_newrev.pdf}}
\caption{The variation of the $<\sigma v>$ upper limits of Tuc-II with $\rm{m_{DM}}$ for the $b\bar{b}$ annihilation channel, shown in the parameter plane of ($\rm{m_{DM},<\sigma v>}$) for the median value of the J-factor
with its associated uncertainties. The shaded region denotes the uncertainty associated with the DM profiles for Tuc-II. The $<\sigma~v>$ limits obtained from Tuc-II are compared with the limits predicted by (a) the mSUGRA and (b) the MSSM
DM models. In both (a) and (b), the red points correspond to the thermal relic DM density, while the blue points correspond to a higher $<\sigma~v>$ and a low thermal relic DM density. The thermal-relic cross-section rate ($\rm{2.2\times10^{-26}~cm^{3}~s^{-1}}$) estimated by Steigman \textit{et al.}, 2012 is displayed by a green dashed line.}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=0.5\linewidth]{figures/klein_amsb_newrev.pdf}}
\caption{The comparison of the $<\sigma~v>$ upper limits obtained from Tuc-II with the $<\sigma~v>$ limits predicted by the AMSB and the Kaluza-Klein UED DM models. The shaded region denotes the uncertainty associated with the DM profiles for Tuc-II. The thermal-relic cross-section rate ($\rm{2.2\times10^{-26}~cm^{3}~s^{-1}}$) estimated by Steigman \textit{et al.}, 2012 is displayed by a green dashed line.}
\label{fig:fig}
\end{figure}
\noindent In this work, we have also tried to check whether Tuc-II can impose
any strong constraint on theoretically favoured DM models \cite{Bhattacharjee:2018xem}; for that purpose,
we have again considered the mSUGRA~\cite{Chamseddine:1982jx} model, the
MSSM~\cite{Chung:2003fi}, the Kaluza-Klein model in UED~\cite{Cheng:2002ej,
Servant:2002aq, Hooper:2007qk} and the AMSB model~\cite{Giudice:1998xp}.
In Figs.~6.7(a,b) and 6.8, we show the $<\sigma v>$ upper limits of
Tuc-II for the $100\%$ b$\rm{\bar{b}}$ annihilation channel, as a function of
$\rm{m_{DM}}$,
for the median J value and the uncertainties in the J-factor \cite{Evans:2016xwx}.
Here,
we have only considered the $100\%$ b$\rm{\bar{b}}$ annihilation channel,
because
for $\gamma$-ray analyses this channel provides the most stringent limits on the
theoretical models \cite{Bhattacharjee:2018xem}.
In Figs.~6.7(a,b) and 6.8, we have denoted the thermal relic cross-section
rate derived by Steigman et al.~\cite{Steigman:2012nb} with
a horizontal dashed green line. These results are
then compared with the $<\sigma~v>$ values obtained from the mSUGRA (in
Fig.~6.7(a))~\cite{Chamseddine:1982jx} model, the MSSM (in
Fig.~6.7(b))~\cite{Chung:2003fi}, the Kaluza-Klein model in UED
(Fig.~6.8)~\cite{Cheng:2002ej, Servant:2002aq, Hooper:2007qk} and the AMSB
model
(Fig.~6.8)~\cite{Giudice:1998xp}.\\
\noindent From Figs. 6.7(a,b) and 6.8, we can immediately observe that at the
lower
edge of the shaded band, Tuc-II can provide a very strong limit on the
parameter space of all four theoretical DM models \cite{Bhattacharjee:2018xem}. From Figs.~6.7(a,b), it is
very encouraging that, even for the median
J($0.5^{\circ}$)-factor
of Tuc-II (i.e.,
$\rm{\log_{10} J(0.5^{\circ})}$=19.05~$\rm{GeV^{2}~cm^{-5}}$), the upper
limits on $<\sigma~v>$ can significantly constrain the blue points in both the
MSSM and
the mSUGRA model, while the J-factor uncertainty band of Tuc-II has already
started
to constrain the red points of both models \cite{Bhattacharjee:2018xem}. From Fig.~6.8, it is interesting
to
note that the $<\sigma v>$ upper
limit from Tuc-II for the median J($0.5^{\circ}$)-factor value (i.e.,
$\rm{\log_{10} J(0.5^{\circ})}$=19.05~$\rm{GeV^{2}~cm^{-5}}$) disfavours the
Kaluza-Klein UED model and the AMSB model for masses
below $\approx220$~GeV and $\approx400$~GeV, respectively \cite{Bhattacharjee:2018xem}.\\
\noindent For Tuc-II, the insufficient kinematic data are the main reason
behind
the large uncertainties in its J-factor. In future, with more detailed
observations of the structure of Tuc-II, we should be able to reduce this large
uncertainty band
to a single upper-limit curve for $<\sigma~v>$, which would definitely
improve
the $<\sigma~v>$ limits on physics beyond the SM \cite{Bhattacharjee:2018xem}.
\subsection{Comparison of the Constraints on the DM Annihilation
Cross-section ($b\bar{b}$ Channel) Obtained from Tuc-II, Ret-II and UMi}
\noindent In this section, we have introduced two other dSphs, the newly discovered
Ret-II
and the classical dSph UMi. In Fig.~6.9, we show the comparison between Tuc-II, Ret-II
and UMi in the ($<\sigma v>$, $m_{DM}$) plane; for this comparison, we have
again chosen the $b\bar{b}$ annihilation channel \cite{Bhattacharjee:2018xem}.
For obtaining the $<\sigma~v>$ upper limits in 95$\%$ C.L. for Ret-II and UMi, we
have analysed nine years of Fermi-LAT data, following the same method that we
used for Tuc-II (see Table~6.2).\\
\noindent In Fig.~6.9, the median value of the
J-factor is denoted by the dashed lines, while the shaded bands represent the
range of uncertainties in the J-factor
for all three targets \cite{Bhattacharjee:2018xem}. In the case of the newly discovered UFDs, only a few
member stars have been observed, which is the main difficulty in
understanding the DM distribution in UFDs. The large uncertainty bands
actually represent our insufficient knowledge of their internal structures.
\begin{figure}
\begin{center}
\subfigure[]
{\includegraphics[width=0.5\textwidth,clip,angle=0]{figures/tuc_reti_umi.pdf}}
\caption{The variation of $<\sigma v>$ with $\rm{m_{DM}}$ for the $b\bar{b}$ annihilation channel of Tuc-II, UMi and Ret-II, shown in the parameter plane of ($\rm{m_{DM},<\sigma v>}$). The
shaded regions denote the uncertainty associated with the DM profiles, while the dashed lines represent the $<\sigma v>$ upper limits in 95 $\%$ C.L. for the median value of the J-factor.}
\end{center}
\end{figure}
\noindent From Fig.~6.9, we can notice that, compared to UMi and Ret-II, Tuc-II
shows a larger uncertainty in its DM
density profile \cite{Bhattacharjee:2018xem}. We can also observe an overlapping region between the
uncertainty bands of Ret-II, Tuc-II and UMi in the ($<\sigma~v>$,
$m_{DM}$) parameter space.
So, from this scenario, we cannot favour
Tuc-II over the other two galaxies \cite{Bhattacharjee:2018xem}. But from Fig.~6.9, it is also important to note
that
above
$m_{DM}~\sim~100$~GeV, Tuc-II provides a better constraint on the
($<\sigma~v>$,
$m_{DM}$) space than Ret-II for its median J($0.5^{\circ}$)-factor value \cite{Bhattacharjee:2018xem}.
\subsection{Comparative Study between the Limits Obtained from Tuc-II and the Limits Obtained from
Several Collaboration Works on dSphs/UFDs}
\noindent Here, we have performed a comparative study between the upper limits
on
$<\sigma~v>$ obtained from Tuc-II and the $<\sigma~v>$ limits obtained from
several collaboration works on dSphs/UFDs; the related plot is shown in
Fig.~6.10 \cite{Bhattacharjee:2018xem}. For comparison, we have included the
results from the combined analysis~\cite{Ackermann:2015zua} of 15 dSphs with
six
years of
\textit{Fermi}-LAT data, the results obtained by the High
Energy Stereoscopic System (H.E.S.S.) telescope from a combined analysis of $5$
dSphs~\cite{Abramowski:2014tra}, the results obtained by the High Altitude
Water
Cherenkov (HAWC) gamma-ray observatory from the combined analysis of $15$
dSphs~\cite{Proper:2015xya}, the results obtained by the Very Energetic
Radiation Imaging Telescope Array System (VERITAS) from $4$
dSphs~\cite{Archambault:2017wyh}, the results obtained by the Major Atmospheric
Gamma-ray Imaging Cherenkov Telescopes (MAGIC) \cite{Ahnen:2016qkx} for
Segue-I, as well as the results obtained for Segue-I by the combined analysis
of the \textit{Fermi}+MAGIC~\cite{Ahnen:2016qkx}
collaborations. From Fig.~6.10, it is evident that among all observational
results, the combined \textit{Fermi} $\&$ MAGIC analysis of Segue-I imposes
the
best limits on the WIMP pair-annihilation $<\sigma~v>$ over a very wide range of
DM masses. The combined limits from 15 dSphs obtained by the
\textit{Fermi}-LAT collaboration also provide a strong constraint up to a
DM mass of around $1$~TeV;
beyond that mass, because of low statistics, Fermi-LAT does not
perform well. It is also interesting that the $<\sigma~v>$
upper limits obtained by the HAWC and \textit{Fermi}+MAGIC collaborations
tend
to converge around a mass of
$\approx 100$~TeV, which indicates that they are competitive
in searching for the DM signal from
dSphs/UFDs. Thus, from Fig.~6.10, we can conclude that combined data
taken
from several ground- and space-based $\gamma$-ray telescopes can improve the
present limits on the WIMP annihilation $<\sigma~v>$.
\begin{figure}
\centering
{\includegraphics[width=0.6\linewidth]{figures/all_obs.pdf}}
\caption{The comparison of the $<\sigma v>$ upper limits for the $b\bar{b}$ annihilation final state
obtained from Tuc-II with the limits obtained from several collaboration works. For comparison, we have considered the $<\sigma v>$ upper limits obtained from single or combined studies of dSphs by VERITAS, HESS, MAGIC, HAWC,
\textit{Fermi}-LAT+MAGIC and \textit{Fermi}-LAT, respectively.
The shaded region denotes the uncertainty associated with the DM profiles for Tuc-II. The relic cross-section rate obtained by Steigman \textit{et al.}, 2012 is represented by the `dashed' sky-blue coloured line.}
\label{fig:fig}
\end{figure}
\section{Conclusions $\&$ Discussions}
\noindent In this work, we have studied nearly nine years of
\textit{Fermi}-LAT data from the location of Tuc-II to investigate the
signatures of DM annihilation.
We have detected a very faint $\gamma$-ray excess from the location of Tuc-II,
both for the power-law spectra and for the $\gamma$-ray spectrum from DM
annihilation. For the $\gamma$-ray spectrum from DM annihilation, we
have shown the variation of the TS values for Tuc-II with the DM mass. We
have observed that for nine years of \textit{Fermi}-LAT data, the TS value of
Tuc-II peaks at
$m_{DM}\sim$~14 GeV for the $100\%$ $b\bar{b}$ annihilation channel, while for
$100\%$ $\tau^{+}\tau^{-}$ it peaks at $m_{DM}\sim$~4 GeV. In the case of our
Galactic
Center, the $m_{DM}$ range between 8 GeV and 15 GeV for the $\tau^{+}\tau^{-}$
annihilation channel
and the $m_{DM}$ range between 25 GeV and 70 GeV for the $b\bar{b}$ annihilation
channel
play a crucial role in understanding the $\gamma$-ray emission possibly arising
from
DM annihilation \cite{Gordon:2013vta, Hooper:2013rwa, Daylan:2014rsa,
Zhou:2014lva, Calore:2014xka}.
The mass ranges of the TS peaks obtained from our analysis of Tuc-II are slightly lower
than the mass ranges required for the DM interpretation of the Galactic
Center excess. \\
\noindent From our analysis, we have also confirmed that the excess from the Tuc-II
location increases with an increasing time period of data, and that the
growth of the detection significance ($\sigma \approx \sqrt{TS}$) is approximately
proportional to $\sim\sqrt{t}$ \cite{Charles:2016pgz}, where t is the time
period of the Fermi-LAT dataset. The most encouraging result of this analysis is
that such a successive increase in the TS peak value of Tuc-II with a larger
dataset can hint at the existence of a real signal, either
associated with an astrophysical scenario or resulting from DM
annihilation. In the field of indirect DM detection, such hints of
$\gamma$-ray emission from Tuc-II may open a new path in DM physics. \\
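\noindent A minimal numerical check of this scaling, using the TS peak values of Tuc-II from column 2 of Table~6.7:
\begin{verbatim}
import numpy as np

years = np.array([3.0, 6.0, 9.0])           # years of LAT data
ts    = np.array([3.0868, 6.8802, 8.61])    # TS peaks (Table 6.7)

sigma = np.sqrt(ts)
print(sigma / np.sqrt(years))
# roughly constant -> the significance grows approximately as sqrt(t)
\end{verbatim}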
\noindent When we assume the $\gamma$-ray spectrum for DM annihilating through the
100$\%$
$\tau^{+}\tau^{-}$ channel, we obtain a p-value of
$\approx$ 0.003 for the Tuc-II location with respect to the background models provided by Fermi-LAT. This
can be the result of a rare statistical fluctuation in the background. One of the most
tantalizing explanations of such an excess is the presence of surrounding
unresolved bright sources. Among the different types of unresolved sources,
blazars are believed to be the main source of background fluctuations, emitting
$\gamma$-ray
emission just below the detection threshold of Fermi-LAT. We have searched the
BZCAT and the CRATES catalogs and have found that three nearby radio sources lie
within a $1^{\circ}$ ROI of Tuc-II; among them, the nearest source,
J225134-580103, lies just $0.55^{\circ}$ away from the location of
Tuc-II. We have also checked
the 4FGL catalog of Fermi-LAT (\cite{Fermi-LAT:2019yla}) and have
noticed that a source, 4FGL 2247.7-5857, lies 0.66 degrees away from the
Tuc-II location. Hence, it is unlikely that the emission detected from the
Tuc-II location is strongly contaminated by these nearby sources. \\
\noindent We have generated the residual TS maps of Tuc-II for energies $>$ 100
MeV (Figs.~6.4 and 6.5).
From these residual TS maps, we have noticed an excess of TS value $\approx$
6.5
that is
$0.18^{\circ}$ from the location of Tuc-II. We have also shown that whenever we
include Tuc-II in our source model, the excess from that location is
greatly reduced. Thus, there is a very high chance that this emission is
associated with Tuc-II. We have generated all our residual TS maps for energies
$>$ 100 MeV. But the PSF of
Fermi-LAT is comparatively large at lower energies, while at higher energies
(say, above roughly
500 MeV), 68$\%$ of the photons are confined within 1 degree of
the
location of the source
\footnote{\tiny{http://www.slac.stanford.edu/exp/glast/groups/canda/lat{\_}Performance.htm}}.
Thus, to check again the origin of the excess near the Tuc-II location, we have
produced a residual TS map for energies $>$ 500 MeV. Interestingly, in this new
TS map
(Fig. 6.11), we find that after including Tuc-II in our source model,
the
nearby excess region has almost disappeared \cite{Bhattacharjee:2018xem}. This signature probably hints
that in Figs.~6.4 and 6.5,
after including Tuc-II in the source model, the remaining excess emission is
associated with weak background modelling. Thus, from our results, we can
at
best
conclude that the nearby excess is associated with the Tuc-II location and
might indicate a DM annihilation signal from Tuc-II \cite{Bhattacharjee:2018xem}.\\
\noindent Several Fermi collaboration papers observe that, over a large region of
the blank sky, excesses of TS $>$ 8.7 are very common. If we only consider the
blazars within $1^{\circ}$ of the location of the source, they would roughly
account for 10$\%$ of such excesses. The DM
subhalos may also be responsible for a $\approx$5$\%$-10$\%$ irreducible
background. Therefore, we have re-calibrated our obtained significance, which
decreases the TS peak value of Tuc-II from 8.61 to 4.79, i.e., the p-value from 0.003
to 0.029. At present, with nine years of data, the obtained emission from
Tuc-II is much weaker than Fermi-LAT's detection threshold. But from our work,
we have also found that the significance of Tuc-II increases with the
time period of data, and from the TS map we have also observed a localized excess
just beside Tuc-II. So, in future, with an even longer dataset and
with better background modelling, we can hope to explain the origin of the $\gamma$-ray excess from the location of Tuc-II.
\begin{figure}
\centering
{\includegraphics[width=.8\linewidth]{figures/TS_500_1.png}}
\caption{The residual TS maps (between 500 MeV and 300 GeV) for a $1^{\circ}~\times~1^{\circ}$
ROI centred on Tuc-II, extracted from the $10^{\circ}$ $\times$
$10^{\circ}$ ROI. The image scale of the TS map is $0.025^{\circ}$ per pixel. In the left figure, Tuc-II is not included in the source model but 4FGL 2247.7-5857 and the three CRATES sources are added; in the right figure, 4FGL 2247.7-5857, Tuc-II and the three CRATES sources are all added to our source model.}
\end{figure}
\noindent As we have already reported, the excess observed from the Tuc-II location
is below the detection threshold of Fermi-LAT. Thus, we have derived the
possible upper limits on the
pair-annihilation $<\sigma~v>$ of the DM in Tuc-II as a function of the
DM mass for five annihilation channels.
For our purpose, we have adopted the J-factor values and their uncertainties
from Evans \textit{et al.}, 2016 \cite{Evans:2016xwx}. \\
\noindent For this work, we have analysed a longer period of data than
previous studies of Tuc-II, and thus from our analysis we can expect to
provide more stringent limits on the theoretical models. We have observed that
for the median J-factor value, Tuc-II imposes a strong constraint on the blue
points in both the mSUGRA and the MSSM models, while the uncertainty band of
Tuc-II begins to constrain the red points. Because of the large uncertainty
band, we may not obtain any impressive limits from Tuc-II in the ($\sigma v$,
$m_{DM}$) parameter space, but our results stress that, with a more detailed
understanding of its internal structure, there is a high possibility that
Tuc-II will in future provide very strong bounds on theoretically favoured DM
models. Our results show that for DM masses $>$100 GeV, Tuc-II imposes a
stronger bound than the limits obtained from Ret-II. Thus, we can expect that
with a longer dataset and more detailed information on the internal structure,
we should be able to reduce the uncertainty band of Tuc-II to a narrow band in
the ($<\sigma~v>$, $m_{DM}$) parameter space. Then Tuc-II might be considered
one of the most DM dominated UFDs.
\section{Multiwavelength searches for DM}
\section{Dark Matter Rich Targets}
\noindent In order to search for a DM signal, we first need to look for DM
dominated regions. The observational evidence and optical studies indicate
many potential targets, but each of these regions has its own advantages and
challenges, which we explore in detail below. \\
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/skymap_dm.jpg}
\caption{DM rich regions.}
\end{center}
\end{figure}
\noindent The Galactic centre (GC) is assumed to be one of the densest DM
regions. But this region contains a large number of unknown gamma-ray sources
and a very complicated diffuse gamma-ray emission resulting from cosmic rays
interacting with interstellar radiation fields and gas. Hence, it is very hard
to remove all the background emission by modelling, and attempting to do so
simultaneously increases the uncertainties in the analysis. Searching for a DM
signal from the GC therefore requires extra caution to avoid backgrounds from
other astrophysical sources such as pulsars. A spectral line from DM
annihilation/decay should be distinct from the expected background, but the
issue is the low statistics and inadequate instrumental facilities. A few
studies have observed a hint of a line signal from the GC (e.g.
\cite{Weniger:2012tx}), but the significance of that emission was very low
and, unfortunately, has decreased over time \cite{Weniger:2013dya}. Hence, the
true origin of such an excess from the GC is still under debate
\cite{Ackermann:2012qk, Ackermann:2013uma}.\\
\noindent The Galactic halo is also believed to be rich in DM. Although we do
not have much knowledge of the mass and shape of the Galactic halo, the
background in this region is less complicated. The resulting upper limits on
the annihilation cross-section obtained from the Galactic halo are comparable
to the results from dwarf spheroidal galaxies (dSphs), but they carry much
larger uncertainties than those from dSphs \cite{Ackermann:2012rg}.\\
\noindent The high-latitude isotropic diffuse emission, i.e., the combination
of unresolved DM halos and possible Galactic subhalos, can constrain the
extragalactic isotropic DM signal. The resulting upper limits on the
annihilation cross-section obtained from the extragalactic diffuse emission
are comparable to the limits from dSphs for masses above $\approx$ $10^{3}$
GeV \cite{Ackermann:2015tah}, and in some cases the isotropic diffuse emission
can provide even more stringent limits than dSphs. However, the DM profiles
involved have large uncertainties, which makes it difficult to know the true
nature of the unresolved sources and DM halos. Hence, it is hard to
distinguish a genuine DM signal from the diffuse gamma-ray emission produced
by unresolved sources \cite{Conrad:2015bsa}.\\
\noindent Galaxy clusters are also considered DM dominated systems. Most
galaxy clusters are situated at high Galactic latitude, which significantly
reduces the contamination from the Galactic diffuse emission. But several
studies report that in some clusters the emission might originate from
cosmic-ray processes, and DM substructure could also contribute significantly
to the expected signal. This leads to large uncertainties in the astrophysical
factors \cite{Ackermann:2012rg, Ackermann:2013iaq, Pinzke:2011ek}. \\
\noindent In our local universe, low surface brightness (LSB) galaxies can
also be considered DM dominated galaxies. LSB galaxies are metal-poor, hardly
show any signs of star formation \cite{Impey:1997uc}, and their stellar disks
are generally embedded in a rich extended neutral HI gas disk
\cite{deBlok:1996jib, Burkholder:2001sgt, ONeil:2004mqj, Du:2015dfg}. The
discrepancy between the mass estimated from the HI rotation curves and the
visible baryonic mass (i.e., derived from gas and stars) indicates that LSB
galaxies are very rich in DM \cite{deBlok:1997zlw}. These galaxies generally
do not show any significant emission from astrophysical activity, i.e., from
star-forming regions, and are therefore important for indirect searches for DM
candidates \cite{Honey:2018dfr}. Besides, their extended HI rotation curves
and gas kinematics are used to investigate the distribution of DM in the
central halo, which might help resolve the much debated `cusp-core' problem in
the CDM theory of galaxy formation \cite{vandenBosch:2000rza}. But the problem
with LSB galaxies is that they lie at very large distances (of the order of
Mpc), which weakens their astrophysical factors (at least three orders of
magnitude lower than those of dSphs). With such low astrophysical factors, it
is hard to place any strong limit on DM models. We will discuss this in
Chapter 7.\\
\noindent Predictions from cosmological N-body simulations indicate that the
structure of the CDM halos assumed to be formed by WIMPs is not smooth. Recent
simulation results indicate that halos contain a large number of bound
substructures, or sub-halos \cite{Abdo:2010ex, Diemand:2005vz, Kuhlen:2008qj,
Springel:2005nw}. The simulations also hint at the existence of a huge number
of DM sub-halos around the Milky Way (MW) halo \cite{Kuhlen:2009jv,
Drlica-Wagner:2013bhh}, and among all these predicted sub-halos, a few hundred
are assumed to be massive enough to become the dwarf spheroidal galaxies
(dSphs) or the ultra-faint dwarf spheroidal galaxies (UFDs)
\cite{Kuhlen:2009jv}.\\
\begin{enumerate}
\item \textbf{Dwarf Spheroidal Galaxies (dSphs):}\\
The dSphs are considered the largest galactic substructures around the MW.
Their mass-to-light ratios lie between 100--1000 $M_{\odot}/L_{\odot}$, where
$M_{\odot}$ and $L_{\odot}$ are the solar mass and the solar luminosity,
respectively. Hence, the dSphs could be the most DM dominated structures of
the Galactic halo. Their large DM content, minimal Galactic foreground
emission, and lack of astrophysical radiation \cite{Mateo:1998wg,
Grcevich:2009gt} make dSphs promising targets for the indirect detection of
DM. Since the DM content of each dSph can be determined from stellar kinematic
data, it is possible to predict the relative strength and spatial distribution
of the annihilation signal expected from each galaxy. These characteristics
provide a mechanism for distinguishing a DM annihilation signal in dSphs from
conventional astrophysical backgrounds. \\
\item \textbf{Ultra-faint Dwarf Spheroidal Galaxies (UFDs):}\\
Over the last few decades, the Panoramic Survey Telescope and Rapid Response
System (Pan-STARRS) \cite{Kaiser:2002zz, Laevens:2015kla, Laevens:2015una},
the Sloan Digital Sky Survey (SDSS) \cite{York:2000gk, Belokurov:2010rf}, the
Dark Energy Survey (DES) \cite{Abbott:2005bi, Bechtol:2015cbp,
Koposov:2015cua, Kim:2015ila} experiment and the Dark Energy Camera at Cerro
Tololo \cite{Kim:2015xoa, Martin:2015xla} have detected a set of UFD galaxies.
They have very low stellar content, which hints that they could be very rich
in DM \cite{Grebel:2003zq, Evans:2003sc, Bonnivard:2015xpq, York:2000gk,
Belokurov:2010rf, Kaiser:2002zz}. The UFDs are generally characterised by very
old ($\geq$ 12 Gyr) stellar populations with large velocity dispersions and
inferred mass-to-light ratios reaching up to $\approx$ 3000
$M_{\odot}/L_{\odot}$. The high values of velocity dispersion and
mass-to-light ratio support the existence of significant DM in UFDs
\cite{Koch:2011tb}. Hence, by analysing such UFDs we can acquire substantial
knowledge of the nature of the ancient galaxies \cite{Simon:2007dq,
Kirby:2013wna} that were accreted to form the MW halo \cite{Norris:2010zs,
Belokurov:2013aga} and of the origin of the chemical abundances of the stellar
population of the MW halo \cite{Starkenburg:2014hca}. Therefore, the UFDs are
considered the best tracers of early DM sub-halos in the universe
\cite{Kuhlen:2009jv, Drlica-Wagner:2013bhh, Belokurov:2013aga,
Kim:PhDthesis}.
\\
\end{enumerate}
\section{Dark Matter Density Distributions}
\noindent The exact nature of the DM distribution is still under debate, but
several theoretically favoured density profiles fit the N-body simulation data
considerably well. Among the proposed density profiles, the popular ones are
the Navarro-Frenk-White (NFW) profile \cite{Navarro:1996gj}, the Burkert
(BURK) profile \cite{Burkert:1995yz}, the Pseudo-Isothermal (ISO) profile
\cite{Gunn:1972sv}, and the Einasto profile \cite{Einasto:1965czb}.
\noindent The NFW profile is defined as
\begin{equation}
\rho_{NFW}(r)=\frac{\rho_{s}r_{s}^{3}}{r(r_{s} + r)^{2}}
\end{equation}
\noindent where, \\
\noindent $\rho_{s}$ = characteristic density of NFW profile and\\
\noindent $r_{s}$ = scale radius of NFW profile. \\
\noindent The BURK profile is defined as
\begin{equation}
\rho_{BURK}(r)=\frac{\rho_{B}r_{B}^{3}}{(r_{B}+r)(r_{B}^{2} + r^{2})} \\
\end{equation}
\noindent where, \\
\noindent $\rho_{B}$ =central density of BURK profile and\\
\noindent $r_{B}$ = core radius of BURK profile.\\
\noindent The ISO profile is defined as\\
\begin{equation}
\rho_{ISO}(r)=\frac{\rho_{c}}{(1+\frac{r^{2}}{r_{c}^{2}})} \\
\end{equation}
\noindent where, \\
\noindent $\rho_{c}$ = central density of ISO profile and\\
\noindent $r_{c}$ = core radius of ISO profile.\\
\noindent and the Einasto (EINO) profile is defined as
\begin{equation}
\rho_{EINO}(r)= \rho_{e} \exp\Big[\frac{-2((r/r_{e})^\alpha -1)}{\alpha} \Big]
\end{equation}
\noindent where, \\
\noindent $\rho_{e}$ = characteristic density of EINO profile and\\
\noindent $r_{e}$ = scale radius of EINO profile.\\
\noindent For the Einasto profile, $\alpha$ defines the shape of the
distribution. In Fig.~2.2, we show a comparison among the NFW, BURK, ISO and
EINO profiles. The NFW profile describes a cuspy distribution of DM, whereas
BURK and ISO are cored profiles with a constant-density DM core. The rotation
curves of LSB and late-type, gas-rich dwarf galaxies seem to indicate an
approximately constant DM density in the inner parts of galaxies, while N-body
cosmological simulations indicate a steep power-law behaviour. This
controversy is generally known as the ``core-cusp problem'' and to this day
remains one of the unresolved problems in DM distribution, especially for
small-scale structure \cite{deBlok:2009sp}.
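\noindent For illustration, the four profile definitions above translate
directly into code. The following Python sketch (with placeholder parameter
values, not fits to any particular system) tabulates the profiles on a common
radial grid:
\begin{verbatim}
import numpy as np

def rho_nfw(r, rho_s, r_s):
    # NFW: cuspy, diverges as 1/r towards the centre
    return rho_s * r_s**3 / (r * (r_s + r)**2)

def rho_burk(r, rho_b, r_b):
    # Burkert: cored, finite central density rho_b
    return rho_b * r_b**3 / ((r_b + r) * (r_b**2 + r**2))

def rho_iso(r, rho_c, r_c):
    # Pseudo-isothermal: cored
    return rho_c / (1.0 + (r / r_c)**2)

def rho_eino(r, rho_e, r_e, alpha=0.17):
    # Einasto: alpha controls the shape of the distribution
    return rho_e * np.exp(-2.0 * ((r / r_e)**alpha - 1.0) / alpha)

r = np.logspace(-2, 2, 200)          # radial grid in kpc (placeholder)
profiles = {"NFW":     rho_nfw(r, 0.3, 20.0),
            "Burkert": rho_burk(r, 0.3, 10.0),
            "ISO":     rho_iso(r, 0.3, 5.0),
            "Einasto": rho_eino(r, 0.3, 20.0)}
\end{verbatim}
The cuspy versus cored behaviour discussed above shows up immediately at the
small-radius end of such a grid.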
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/DMdensityprofile.png}
\caption{The NFW, Einasto, Isothermal and Burkert galactic DM density
profiles. The profiles are normalized for the Milky Way such that
$\rho_{\odot}$=0.3 GeV $cm^{-3}$ at $r_{\odot}$=8.33 kpc. The diagram is taken
from Ref.~Pierre, 2019.}
\end{center}
\end{figure}
\section{The Gamma-ray Signal Resulting from WIMP Annihilation}
\noindent In chapter 1, we have shown how WIMP can annihilate into the SM
particles and then produce the secondary charged particles and gamma rays as the end
products of the annihilation chain (see, Fig. 1.6). The gamma-ray resulting
from the WIMP annihilation is expected to produce a distinct line spectrum
and
such a line feature would be completely distinguished from any known
astrophysical phenomena. Thus, it is referred to as the
``smoking gun''
signature for the indirect search for DM. The nature of the continuum gamma-ray spectra for four different
annihilation
channels are shown in Fig. 2.3. They are derived from the DMFit package \cite{Jeltema:2008hf} which is
implemented in the Fermi Science Tools. This DMFit code was first derived using
the Dark-SUSY \cite{Gondolo:2004sc} package, but later it has been updated by
Pythia 8.165 \cite{Sjostrand:2007gs} and now this code can consider more
annihilation channels and cover a wide range of DM masses
\footnote{\tiny{http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}}.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{figures/gamma-spectra.pdf}
\caption{Gamma-ray annihilation spectra from WIMPs with masses of 100 GeV
(solid
lines) and 200 GeV (dashed lines), annihilating through four different
annihilation channels (the $b\bar{b}$ channel in blue, $\mu^{+}\mu^{-}$ channel
in red, $\tau^{+}\tau^{-}$ channel in green, and $W^{+}W^{-}$ channel in
magenta). Values are obtained from the DMFit package implemented in the Fermi Science Tools.}
\end{center}
\end{figure}
\noindent The $\gamma$-ray flux originating from DM annihilation depends on
both the distribution of the DM and the particle physics of the pair
annihilation. At a specific energy $E$, the differential $\gamma$-ray flux
$\phi_{\rm{WIMP}} (E, \Delta \Omega)$ (in units of photons
$cm^{-2}s^{-1}GeV^{-1}$) arising from the annihilation of WIMPs of mass
$m_{DM}$ in a region within a solid angle $\Delta \Omega$ can be expressed as
\cite{Abdo:2010ex}:
\begin{equation}
\phi_{\rm{WIMP}}(E, \Delta \Omega)~ = ~ \Phi^{pp}(E) \times J(\Delta \Omega),
\end{equation}
\noindent where, $\Phi^{pp}(E)$ is the particle physics factor and $J(\Delta
\Omega)$ is the astrophysical factor.
\subsection{Particle-Physics Factor}
\noindent The factor $\Phi^{pp}$ depends on the characteristics of the
particles generated through WIMP annihilation. The particle physics factor can
be written as \cite{Abdo:2010ex}:
\begin{equation}
\Phi^{pp}(E)~ = ~\frac{<\sigma v>}{8 \pi ~m^{2}_{\rm{DM}}} \sum_{f}
\frac{dN_{f}}{dE}B_{f}.\\
\end{equation}
\noindent where $<\sigma v>$ is the thermally-averaged annihilation
cross-section and $m_{DM}$ is the WIMP mass. $\frac{dN_{f}}{dE}$ denotes the
differential photon spectrum for each possible pair-annihilation final state
and $B_{f}$ is the branching ratio corresponding to the final state `f'. The
selection of SM final states, through which the annihilation would occur, is
theoretically motivated. Several numerical packages, such as Pythia
\cite{Sjostrand:2007gs}, DarkSUSY \cite{Gondolo:2004sc} and DMFit
\cite{Jeltema:2008hf}, are designed to estimate the differential photon yields
from each annihilation channel.\\
\subsection{Astrophysical Factor (J-factor)}
\noindent The astrophysical factor (or J-factor) characterizes the
astrophysical properties of the DM dominated source. The J-factor depends on
the spatial distribution of the DM and is proportional to the line-of-sight
integral of the square of the DM particle density, i.e. $\propto$
$\rho^{2}$. The expression for the J-factor is \cite{Abdo:2010ex}:
\begin{eqnarray}
J (\Delta \Omega) &=& \int \int \rho^{2}(r(\lambda)) d\lambda ~ d\Omega
\nonumber \\
&=& 2 \pi \int_{\theta_{\rm{min}}}^{\theta_{\rm{max}}}
\rm{sin} \theta \int_{\lambda_{\rm{min}}}^{\lambda_{\rm{max}}}
\rho^{2}(r(\lambda)) d\lambda ~ d\theta .
\end{eqnarray}
\noindent In Eq.~2.7, $\lambda$ and $r(\lambda)$ are the line-of-sight
(l.o.s.) distance and the galactocentric distance, respectively, and $\theta$
is the angle between the l.o.s. and the centre of the target. $\theta_{max}$
is the angle over which we carry out the J-factor integration. We generally
set $\theta_{max}$ to the resolution of the detector; for example, for data
observed by Fermi-LAT we consider $\theta_{max} = 0.5^{\circ}$ for the
J-factor calculation. For $\theta_{min}$, we generally use $0^{\circ}$.\\
\noindent The expression for $r(\lambda)$ is,
\begin{equation}
r(\lambda) = \sqrt{\lambda^{2} + d^{2} - 2~ \lambda ~d~ \rm{cos \theta}}
\end{equation}
\noindent where $d$ is defined as the heliocentric distance of the target.\\
\noindent The maximum and minimum limits of $\lambda$ can be represented as
\cite{Evans:2016xwx}
\begin{eqnarray}
\lambda_{\rm{max}} = d\rm{cos \theta} + \sqrt{R_{\rm{t}}^{2} - d^{2}
\rm{sin^{2} \theta}}
\end{eqnarray}
\noindent and
\begin{eqnarray}
\lambda_{\rm{min}} = d\rm{cos \theta} - \sqrt{R_{\rm{t}}^{2} - d^{2}
\rm{sin^{2} \theta}}
\end{eqnarray}
\noindent respectively. Here, $R_{t}$ is the tidal radius of the DM rich
galaxy. For dSphs, we generally use $R_{t}$ to evaluate the maximum and
minimum range of l.o.s. distances. The tidal radius of the dSph halo in the
gravitational potential of the MW is estimated from the Jacobi limit
\cite{Binney:1987}. But for comparatively large systems, such as low surface
brightness galaxies, we use the virial radius ($R_{vir}$) in place
of $R_{t}$.\\
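\noindent For concreteness, Eq.~2.7, together with the line-of-sight limits
given above, can be evaluated with a straightforward double quadrature. The
sketch below assumes an NFW profile and placeholder values of $\rho_{s}$,
$r_{s}$, $d$ and $R_{t}$; keeping the units consistent (and converting the
result to $\rm{GeV^{2}\,cm^{-5}}$) is left to the user:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def j_factor(rho2, d, R_t, theta_max, theta_min=0.0):
    """Eq. 2.7: J = 2 pi Int sin(t) Int rho^2(r(lambda)) dlambda dt.
    rho2(r) must return the squared DM density at radius r."""
    def los_integral(theta):
        disc = R_t**2 - d**2 * np.sin(theta)**2
        if disc <= 0.0:                  # line of sight misses the halo
            return 0.0
        lam_min = d * np.cos(theta) - np.sqrt(disc)
        lam_max = d * np.cos(theta) + np.sqrt(disc)
        def integrand(lam):
            r = np.sqrt(lam**2 + d**2 - 2.0 * lam * d * np.cos(theta))
            return rho2(max(r, 1e-6))    # regularize the cusp at r -> 0
        return quad(integrand, max(lam_min, 0.0), lam_max, limit=200)[0]
    outer = quad(lambda t: np.sin(t) * los_integral(t),
                 theta_min, theta_max, limit=100)[0]
    return 2.0 * np.pi * outer

rho_s, r_s = 0.3, 20.0                   # placeholder NFW parameters
rho2_nfw = lambda r: (rho_s * r_s**3 / (r * (r_s + r)**2))**2
print(j_factor(rho2_nfw, d=1000.0, R_t=30.0,
               theta_max=np.deg2rad(0.5)))
\end{verbatim}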
\subsection{Expected Gamma-Ray Flux from WIMP Annihilation}
\noindent Several numerical packages, such as DarkSUSY \cite{Gondolo:2004sc},
Pythia \cite{Sjostrand:2007gs} and DMFit \cite{Jeltema:2008hf}, are generally
used to produce the spectra of the events resulting from WIMP annihilation and
can also be used to model the probable interactions between the incoming and
outgoing particles. All of these packages are developed to simulate the
possible interactions between the self-annihilating DM particles and then to
estimate the probable number of end products (such as gamma rays, neutrinos or
secondary charged particles) resulting from the DM annihilation. For our work,
especially for the $\gamma$-ray analysis, we have used the DMFit tool, which
is designed to estimate the possible gamma-ray spectrum resulting from DM
annihilation for any DM mass in the GeV range and all possible annihilation
channels. The DMFit tool is built from a set of MC simulation codes used for
the hadronization and decay of the DM annihilation final products. The same
set of MC simulation codes is also used by the DarkSUSY package
\cite{Gondolo:2004sc}, which uses Pythia 6.154 \cite{Sjostrand:2007gs} as its
event generator. Once such codes estimate the differential yield (i.e.
$\frac{dN_{f}}{dE}$) originating from pair-annihilation, we can determine the
total $\gamma$-ray flux corresponding to the DM signal for any target.\\
\noindent WIMPs can self-annihilate through several possible channels, but for
our analysis we mostly consider five combinations: $\chi\chi \rightarrow
\tau^{+}\tau^{-}$, $\chi\chi \rightarrow \mu^{+}\mu^{-}$, $\chi\chi
\rightarrow W^{+}W^{-}$, $\chi\chi \rightarrow b\bar{b}$ and $\chi\chi
\rightarrow 80\%~b\bar{b} + 20\%~\tau^{+}\tau^{-}$. The reasons for choosing
these annihilation channels are discussed in Section~1.5.
\subsection{The Astrophysical Backgrounds}
\noindent For gamma-ray data analysis, the backgrounds play a very significant
role. Especially for very faint sources, it is very important to investigate
the background region in detail. Otherwise, in the course of the analysis,
emission coming from the surroundings can be classified as gamma-ray counts
from the source location while it is just background from nearby point sources
or from the Galactic and extragalactic foregrounds. Data collected by the
Fermi-LAT have been used for the gamma-ray analysis part of this thesis, as
described in Chapters 3 and 4. Space-based telescopes lie above the Earth's
atmosphere, which helps to reduce the possible background contamination at the
time of data recording and enables the detectors to produce much clearer
images than ground-based telescopes. During the analysis, we can further
screen our data with the event classification process and model out the
remaining background contributions (i.e. gamma-ray emission from the diffuse
Galactic and extragalactic components and from nearby gamma-ray sources).
\noindent When Fermi-LAT detects background particles originating from cosmic
rays (or from cosmic-ray interactions with the Earth's atmosphere), those
particles are initially counted as events \cite{Ackermann:2012kna}. Fermi-LAT
has a segmented anti-coincidence detector which vetoes most of the passing
cosmic rays (briefly described in Section 3.1), and the remaining cosmic rays
are largely removed in the event selection process. The residual cosmic-ray
contamination is included as part of the gamma-ray backgrounds in the source
model. The Fermi-LAT collaboration provides the necessary background models
and source catalogs. The background models consist of a Galactic diffuse
emission template, an extragalactic isotropic diffuse emission template and
the contribution from all the nearby sources that lie within the region of
interest.
\noindent The Galactic diffuse emission template contains both the spatial and
spectral parts of the Galactic emission. The template is derived by fitting
the inverse Compton radiation maps predicted by GALPROP \cite{Strong:2007nh}
and the gamma-ray emissivities obtained from gas density maps, together with
known point sources and a model for the isotropic diffuse
emission \cite{Nolan:2011sjt}\footnote{\tiny{
http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}.
The isotropic template contains the extragalactic residual cosmic-ray
contamination and the emission from unresolved point sources. This isotropic
diffuse model is generated by fitting the extragalactic residual emission in
the high-latitude sky together with the emission from the Galactic template
and from other known gamma-ray sources. These two templates, the Galactic
diffuse emission template and the extragalactic isotropic diffuse emission
template, play an important role in eliminating the possible background
emission.
\noindent There is a high possibility that the events detected by the
Fermi-LAT are contaminated by photons coming from the Earth's albedo. The
Fermi-LAT team has recommended using a zenith-angle cut of $90^{\circ}$ to
reduce the background photons resulting from the Earth's albedo. The Fermi-LAT
team has also provided an Earth-limb template (below 100 MeV) for all
available Fermi Gamma-ray LAT (FGL) source catalogs, i.e. for 2FGL
\cite{Nolan:2011sjt}, 3FGL \cite{Acero:2015gva} and 4FGL
\cite{Fermi-LAT:2019yla}.
\noindent Over time, the Fermi-LAT team has released several versions of its
source catalog. The first published catalog was 1FGL \cite{Abdo:2010ru}, which
was based on 11 months of data. The 1FGL catalog contains 1451 sources, all
modelled with power-law spectra. Several improvements were implemented in the
second source catalog, 2FGL \cite{Nolan:2011sjt}, which is based on 24 months
of data. For that catalog, the Fermi-LAT team used updated diffuse models,
considered extended and non-power-law sources, and employed an improved source
association process. The 2FGL catalog contains 1873 sources.
The third source catalog released by the Fermi-LAT team is the 3FGL
\cite{Acero:2015gva}. This catalog is derived from the first four years of
Fermi-LAT data in the energy range between 100 MeV and 300 GeV, and contains
3033 sources (Fig.~2.4). It includes many new sources which already have known
counterparts in other surveys. The newly identified or associated sources are
from the blazar class (or from active galaxies), supernovae, pulsars and X-ray
binaries \cite{Acero:2015gva}.
\noindent The most recent version of the Fermi-LAT point-source catalog, the
4FGL catalog \cite{Fermi-LAT:2019yla}, consists of 5064 sources (Fig.~2.5).
This catalog is generated from the first eight years of Fermi-LAT data and
covers the energy range from 50 MeV to 1 TeV. Among all the published
Fermi-LAT source catalogs, 4FGL is the deepest in terms of energy range.
Relative to the 3FGL catalog, 4FGL incorporates an improved analysis method
and updated models for the Galactic and isotropic diffuse $\gamma$-ray
emission. The 4FGL catalog contains 1336 unassociated sources; 239 of its
sources are pulsars and more than 3130 of the identified or associated sources
are active galaxies of the blazar class.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/sky_map_3fgl.pdf}
\caption{Sources from the third Fermi-LAT catalog (3FGL) plotted in Aitoff
projection.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/sky_map_4fgl.pdf}
\caption{Sources from the fourth Fermi-LAT catalog (4FGL) plotted in Aitoff
projection.}
\end{center}
\end{figure}
\section{The X-ray and Radio Signal Resulting from WIMP Annihilation}
\noindent For DM detection, it is important to complement the $\gamma$-ray
analysis with multi-wavelength studies, in addition to increasing the time
period of the $\gamma$-ray data. In the case of dSphs, it has already been
pointed out that the observational limits obtained from radio and X-ray data
are competitive with the $\gamma$-ray limits
\cite{Regis:2017oet,Jeltema:2008ax}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/darksusy_100Gev.png}
\caption{The $e^{-}e^{+}$ injection spectra resulting from WIMPs annihilation
with 100 GeV DM mass
for four different annihilation channels (the $b\bar{b}$
channel in red, $\mu^{+}\mu^{-}$ channel in green, $\tau^{+}\tau^{-}$ channel
in
blue, and $W^{+}W^{-}$ channel in black). The spectrum is obtained from
DarkSUSY.}
\end{center}
\end{figure}
\noindent In the WIMP annihilation chain, the secondary charged particles are
generated via various reactions, such as $\pi^{\pm} \rightarrow \mu^{\pm} +
\nu_{\mu}(\overline{\nu_{\mu}})$, with $\mu^{\pm} \rightarrow e^{\pm} +
\overline{\nu_{\mu}}(\nu_{\mu})$. When the charged particles propagate through
the interstellar medium, they lose their energy through a variety of
electromagnetic processes, such as inverse Compton (IC) scattering,
synchrotron radiation, Coulomb losses and bremsstrahlung. Charged particles
passing through the magnetic field of astrophysical objects produce
electromagnetic emission in the radio frequency range \cite{Ginzburg:1967zja,
Longair:2001yy}. For the IC emission, starlight photons and photons from the
2.7~K cosmic microwave background (CMB) interact with the relativistic charged
particles and produce photons in the X-ray range \cite{Ginzburg:1967zja,
Longair:2001yy}. Thus, through various radiation mechanisms
\cite{Colafrancesco:2005ji, Colafrancesco:2006he, Ginzburg:1967zja,
Longair:2001yy}, especially synchrotron emission and IC emission at high
energies, the charged particles originating from WIMP annihilation produce an
energy spectrum in the radio and X-ray frequency ranges.
\noindent In order to examine the radio and X-ray emission, we need to
consider the diffusion coefficient of the region and the energy losses of the
charged particles for the various possible annihilation channels. In Fig.~2.6,
we show the $e^{\pm}$ injection spectra resulting from WIMPs of 100 GeV mass
annihilating to the $b\bar{b}$, $\mu^{+}\mu^{-}$, $\tau^{+}\tau^{-}$ and
$W^{+}W^{-}$ final states. A formalism for solving the transport equation for
the number density $n_e(r,E)$ of $e^\pm$ of a given energy $E$ at position
$\mathbf{r}$ with respect to the centre of the source has been developed in
Refs.~\cite{Colafrancesco:2005ji, Colafrancesco:2006he, McDaniel:2017ppt}. The
transport equation reads
\begin{align}\label{eqn:diffusion}
\frac{\partial}{\partial t} \frac{dn_e}{dE} = \nabla . \Big( D(E,\mathbf{r})
\nabla \frac{dn_e}{dE}\Big)
+ \frac{\partial}{\partial E}
\Big( b(E,\mathbf{r}) \frac{dn_e}{dE}\Big)
+ Q_e (E,\mathbf{r}).
\end{align}
\noindent Here, $D(E,\mathbf{r})$ is the space-dependent diffusion coefficient
and $b(E,\mathbf{r})$ denotes the energy-loss term. The source term $Q_e$ can
be defined as
\begin{align}
Q_e (E,\mathbf{r}) = \frac{\rho^2_\chi(\mathbf{r}) \langle \sigma v\rangle}{2
m_\chi^2} \frac{dN_{e}}{dE},
\end{align}
\noindent where $\frac{dN_e}{dE}$ is the number of $e^{+}/e^{-}$ produced at a
given energy $E$ per DM annihilation. The solution of Eq.~2.11, i.e.
$\frac{dn_e}{dE}(r,E)$, gives the number density of $e^{+}/e^{-}$ per unit
energy at a distance $r$ from the centre of the source.\\
\noindent The energy loss term is given by
\begin{align}
b(E,\mathbf{r}) = & b_{IC}(E) + b_{Syn}(E, \mathbf{r}) + b_{Coul}(E) + b_{brem}(E) \nonumber \\
= & b_{IC}^0 E^2 + b_{syn}^0 B^2 E^2 \nonumber \\
& + b_{Coul}^0 n ( 1 + log(\gamma /n)/75) + b_{brem}^0 n (log(\gamma /
n) + 0.36).
\end{align}
\noindent where the magnetic field $B$ is in units of $\mu G$, $n$ denotes the
number density of the thermal electrons in units of cm$^{-3}$, $\gamma =
E/m_{e}$, and the energy-loss coefficients for the radiative mechanisms are
$b_{IC}^0 = 0.25 \times 10^{-16}$ GeV s$^{-1}$, $b_{syn}^0 = 0.0254 \times
10^{-16}$ GeV s$^{-1}$, $b_{brem}^0 = 1.51 \times 10^{-16}$ GeV s$^{-1}$ and
$b_{Coul}^0 = 6.13 \times 10^{-16}$ GeV s$^{-1}$.
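\noindent The energy-loss expression above translates into a
one-term-per-process function. In the sketch below, $E$ is in GeV, $B$ in
$\mu$G, $n$ in cm$^{-3}$, and the returned rate is in units of $10^{-16}$ GeV
s$^{-1}$:
\begin{verbatim}
import numpy as np

# Loss coefficients in units of 1e-16 GeV/s, as quoted in the text
B_IC, B_SYN, B_COUL, B_BREM = 0.25, 0.0254, 6.13, 1.51
M_E = 0.511e-3                       # electron mass in GeV

def b_loss(E, B=1.0, n=1.0e-3):
    """Total energy-loss rate b(E) for electron energy E [GeV],
    magnetic field B [muG], thermal-electron density n [cm^-3]."""
    gamma = E / M_E
    ic    = B_IC * E**2
    syn   = B_SYN * B**2 * E**2
    coul  = B_COUL * n * (1.0 + np.log(gamma / n) / 75.0)
    brem  = B_BREM * n * (np.log(gamma / n) + 0.36)
    return ic + syn + coul + brem    # in 1e-16 GeV/s
\end{verbatim}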
\noindent As we still do not have detailed knowledge of the structure of the
DM distribution, we assume the diffusion coefficient $D(E, \mathbf{r})$ to be
independent of position. We can then safely consider the Kolmogorov form for
the diffusion coefficient:
\begin{equation}
D(E) = D_0 \Big(E\Big)^{\gamma_{D}}
\end{equation}
\noindent where $D_{0}$ is defined as the diffusion constant.
\noindent If we assume a uniform magnetic field and a stationary state for the
number density of electrons (i.e. $\frac{\partial}{\partial t}
\frac{dn_e}{dE} = 0$), the spherically symmetric solution of the diffusion
equation is given by
\begin{align}\label{eqn:solutionndifusion}
\frac{dn_e}{dE}(r,E) = \frac{1}{b(E)} \int_{E}^{M_\chi} dE^\prime G\Big(r,
v(E)-v(E^\prime)\Big) Q_e(E,r),
\end{align}
\noindent where, the Green's function is given by
\begin{align*}
G(r, \Delta v) = &\frac{1}{\sqrt{4\pi \Delta v}} \sum_{n=-\infty}^{\infty}
(-1)^n
\int_{0}^{r_h} dr^\prime \frac{r^\prime}{r_n}
\Big(\frac{\rho_\chi(r^\prime)}{\rho_\chi(r)}\Big)^2 \\
& \times \Big[ exp\Big(-\frac{(r^\prime -r_n)^2}{4 \Delta
v}\Big) - exp\Big(-\frac{(r^\prime + r_n)^2}{4 \Delta v}\Big)\Big],
\end{align*}
\noindent with $r_n = (-1)^n r + 2nr_h$ and $v(E) = \int_E^{M_\chi} d\tilde{E}
\frac{D(\tilde{E})}{b(\tilde{E})}$. Here, $r_h$ defines the diffusion zone of
the galaxy. Typically, the value of $r_h$ is taken to be twice the radius of
the last stellar component of the galaxy (i.e. twice the distance of the
outermost star from the centre). The solution is obtained with the free-escape
boundary condition $\frac{dn_e}{dE}(r_h,E) = 0$. For evaluating the Green's
function, we consider the average magnetic field strength, so we express the
energy-loss term as a function of $E$ only, i.e.
$b(E,\mathbf{r})~\approx~b(E)$. \\
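\noindent Numerically, the image-charge sum in the Green's function converges
quickly, so truncating it at $|n| \le N$ with a small $N$ is usually
sufficient. A sketch of such an evaluation (for $r > 0$ and a user-supplied
density profile) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def greens(r, dv, rho, r_h, N=5):
    """Green's function with the free-escape boundary at r_h.
    rho(r): DM density profile; dv = v(E) - v(E'); call with r > 0."""
    if dv <= 0.0:
        return 0.0
    total = 0.0
    for n in range(-N, N + 1):
        r_n = (-1)**n * r + 2.0 * n * r_h
        def integrand(rp):
            gauss = (np.exp(-(rp - r_n)**2 / (4.0 * dv))
                     - np.exp(-(rp + r_n)**2 / (4.0 * dv)))
            return (rp / r_n) * (rho(rp) / rho(r))**2 * gauss
        total += (-1)**n * quad(integrand, 0.0, r_h, limit=200)[0]
    return total / np.sqrt(4.0 * np.pi * dv)
\end{verbatim}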
\noindent As already discussed, in the presence of a comparatively strong
magnetic field the synchrotron radiation plays the most dominant role. The
synchrotron power spectrum $P_{\rm synch}(\nu,E,B)$ in the presence of a
magnetic field $B$ is defined as \cite{Longair:2001yy, Storm:2016bfw}:
\begin{equation}
P_{\rm synch}(\nu, E, B) = \pi \sqrt{3} r_0 m_e c \nu_0 \, \int_0^\pi \,
d\theta \, sin^2\theta \, F\big(\frac{x}{\sin\theta }\big),
\end{equation}
\noindent where, $\theta$ is the pitch angle, $r_0 = e^2/(m_e c^2)$ is the
classical electron radius and $\nu_0 = eB/(2\pi m_e c)$ is the non-relativistic
gyro-frequency. While,
\begin{equation}
F(y) = y \, \int_y^\infty d\zeta \, K_{5/3}(\zeta) \simeq 1.25 \, y^{1/3}\,
e^{-y} \, (648 + y^2)^{1/12} \ .
\end{equation}
\noindent The quantity $x$ is given by
\begin{equation}
x = \frac{2 \, \nu\, m_e^2 \, (1+z)}{3 \, \nu_0\, E^2}
\end{equation}
\noindent with $z$ being the redshift of the source.
We can also estimate the local emissivity of the synchrotron radiation, i.e.,
the energy radiated at a given $\mathbf{r}$ per unit volume per unit time at a
given frequency $\nu$, in terms of $P_{\rm synch}$ and $dn_e/dE$,
\begin{equation}\label{eqn:emissivity}
j_{\rm synch}(\nu, r)
= \int_{m_e}^{M_{\chi}} dE \left(\frac{dn_{e^+}}{dE}
+ \frac{dn_{e^-}}{dE}\right) P_{\rm synch}(\nu,E,B) =
2 \int_{m_e}^{M_{\chi}} dE \, \frac{dn_{e^-}}{dE}\,
P_{\rm synch}(\nu,E,B) \ .
\end{equation}
\noindent Then the expression for integrated synchrotron flux density would be
\begin{equation}\label{eqn:syn_flux}
S_{\rm synch}(\nu) = \frac{1}{4\pi d^2}\int d^3 r \, \, j_{\rm synch}(\nu,r),
\end{equation}
\noindent where, $d$ is the distance to the target galaxy.\\
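\noindent As an illustration, the synchrotron power defined above can be
evaluated numerically with the quoted approximation for $F(y)$. The sketch
below works in cgs units ($\nu$ in Hz, $B$ in Gauss) while expressing the
electron energy in GeV; these unit choices are ours, not prescribed by the
text:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F(y):
    # Approximate form of F(y) quoted in the text
    return 1.25 * y**(1.0/3.0) * np.exp(-y) * (648.0 + y**2)**(1.0/12.0)

def p_synch(nu, E, B, z=0.0):
    """Synchrotron power per electron [erg/s]; nu [Hz], E [GeV], B [G]."""
    e, me, c = 4.803e-10, 9.109e-28, 2.998e10   # cgs constants
    r0  = e**2 / (me * c**2)                    # classical electron radius
    nu0 = e * B / (2.0 * np.pi * me * c)        # non-relativistic gyro-freq.
    me_gev = 0.511e-3
    x = 2.0 * nu * me_gev**2 * (1.0 + z) / (3.0 * nu0 * E**2)
    integrand = lambda th: np.sin(th)**2 * F(x / np.sin(th))
    integral = quad(integrand, 1e-6, np.pi - 1e-6)[0]
    return np.pi * np.sqrt(3.0) * r0 * me * c * nu0 * integral
\end{verbatim}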
\noindent For regions with low magnetic fields, the IC radiation process plays
the dominant role. Depending on the mass of the DM candidate, the emission
from the IC mechanism produces a spectral peak between the soft and hard X-ray
bands \cite{Jeltema:2011bd}. The expression for the IC power $P_{\rm
IC}(E_{\gamma},E)$ is:
\begin{equation}
P_{\rm IC}(E_{\gamma},E) = cE_{\gamma} \int d\epsilon \, n(\epsilon) \,
\sigma\big(E_{\gamma},\epsilon, E\big),
\end{equation}
\noindent where $n(\epsilon)$ is the photon number density,
$\sigma(E_{\gamma},\epsilon, E)$ is the IC scattering cross-section and
$\epsilon$ is the energy of the target CMB photons. $E$ is the energy of the
relativistic $e^{\pm}$ pair and $E_{\gamma}$ is the energy of the upscattered
photons. From the Klein-Nishina formula, we can define
$\sigma(E_{\gamma},\epsilon, E)$ as:
\begin{equation}
\sigma(E_{\gamma},\epsilon, E) = \frac{3\sigma_{T}}{4\epsilon \gamma^{2}} G(q,
\Gamma)
\end{equation}
\noindent where, $\sigma_{T}$ is the Thomson cross-section and the expression of
$G(q, \Gamma)$ is:
\begin{equation}
G(q, \Gamma) = \Big[2q \ln q + (1+2q) (1-q) + \frac{(2q)^{2}(1-q)}{2(1+\Gamma
q)}\Big]
\end{equation}
\noindent where $\rm{\Gamma = \frac{4\epsilon \gamma}{m_{e} c^{2}} = \frac{4
\gamma^{2} \epsilon}{E}}$ and $\rm{q = \frac{E_{\gamma}}{\Gamma
(E-E_{\gamma})}}$. The allowed range of $q$ is $1/(4\gamma^{2}) \leq q \leq 1$.\\
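\noindent The Klein-Nishina kernel above translates directly into code; a
sketch, with all energies taken in GeV, is:
\begin{verbatim}
import numpy as np

SIGMA_T = 6.6524e-25        # Thomson cross-section [cm^2]
M_E = 0.511e-3              # electron mass [GeV]

def G_kn(q, Gamma):
    return (2.0 * q * np.log(q) + (1.0 + 2.0 * q) * (1.0 - q)
            + (2.0 * q)**2 * (1.0 - q) / (2.0 * (1.0 + Gamma * q)))

def sigma_ic(E_gamma, eps, E):
    """IC scattering cross-section; E_gamma, eps, E in GeV."""
    gamma = E / M_E
    Gamma = 4.0 * eps * gamma / M_E     # = 4 gamma^2 eps / E
    q = E_gamma / (Gamma * (E - E_gamma))
    if not (1.0 / (4.0 * gamma**2) <= q <= 1.0):
        return 0.0                      # outside the kinematic range
    return 3.0 * SIGMA_T / (4.0 * eps * gamma**2) * G_kn(q, Gamma)
\end{verbatim}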
\noindent Similarly to the synchrotron case, we can find the local emissivity
of the IC emission by folding the power with the equilibrium electron number
density,
\begin{align}\label{eqn:emissivity}
j_{IC}(\nu, r) = \int_{m_e}^{M_{\chi}} dE \Big(\frac{dn_{e^+}}{dE} +
\frac{dn_{e^-}}{dE}\Big) P_{IC}(E,E_{\gamma}) \nonumber \\
= 2 \int_{m_e}^{M_{\chi}} dE \, \frac{dn_{e^-}}{dE}\,
P_{IC}(E,E_{\gamma}),
\end{align}
\noindent The integrated IC flux density spectrum is given by
\begin{align}\label{eqn:ic_flux}
S_{IC}(\nu) = \frac{1}{4\pi d^2}\int d^3 r \, \, j_{IC}(E_{\gamma},r),
\end{align}
\noindent where, $d$ is the distance to the target galaxy.\\
\noindent Here, we would like to mention that, unlike the gamma-ray emission,
the X-ray and synchrotron fluxes are not directly related to the astrophysical
factor (J-factor). They depend primarily on the diffusion mechanism and the
energy-loss processes of the system. In addition, the magnetic field ($B$) and
the diffusion coefficient (characterised by $D_{0}$ and $\gamma_{D}$) inside
the source also play a crucial role.\\
\section{Low Surface Brightness Galaxies}
\noindent In this chapter, we have chosen a set of low surface brightness
(LSB) galaxies, which are thought to be excellent targets for indirect DM
detection \cite{Bhattacharjee:2019jce}. LSB galaxies are diffuse galaxies
whose surface brightness is nearly one order of magnitude lower than that of
the night sky. Most of the baryonic content of LSB galaxies is in the form of
neutral hydrogen (HI) gas \cite{Burkholder:2001bg, ONeil:2004mqj, Du:2015in},
and that hydrogen disk extends 2 to 3 times beyond the stellar disks of LSBs
\cite{Blok:1996dfr, Mishra:2017vgt}. LSB galaxies are metal-poor and generally
consist of dust-free stellar disks \cite{Impey:1997uc} with a very small
amount of molecular hydrogen gas \cite{Das:2006jp}. Hence, LSBs have
negligible or very low star formation rates (SFRs), so the $\gamma$-ray
emission resulting from star formation would not interfere much with the
emission from WIMP annihilation. Supermassive black holes or active galactic
nuclei (AGN) can also be sources of $\gamma$-ray emission, but AGNs are rarely
found in LSB galaxies. Thus, from the astrophysical perspective, we can
consider LSBs as clean sources \cite{Bhattacharjee:2019jce}.\\
\noindent Measurements of their HI rotation curves \cite{Das:2019tea} indicate
very high mass-to-light ratios \cite{ONeil:2004mqj, Honey:2018dfr}, i.e., the
contribution of the stars and the luminous gas is very small compared to the
total mass of an LSB. Studies of the rotation curves of LSB galaxies also hint
at the existence of massive DM halos \cite{Blok:1997dfr}. Even the centres of
LSB galaxies do not show any large overdensities in their stellar components.
Therefore, LSB galaxies are believed to be DM dominated even at their centres,
and that makes them excellent sources for the indirect search for a DM signal
\cite{Bhattacharjee:2019jce}. For the indirect detection of DM candidates, LSB
galaxies satisfy two primary criteria: (i)~they are very rich in DM content,
and (ii)~they do not contain any strong sources of $\gamma$ radiation, for
example AGN and star-forming regions.\\
\noindent The HI rotation curves and gas kinematics of LSB galaxies are also
used to address the `cusp-core' problem in the CDM theory of galaxy formation
\cite{vandenBosch:2000rza, deNaray:2008bs}. N-body simulations generally
favour a cuspy profile for the DM distribution, while for some LSB galaxies a
cored profile provides a better fit to their central DM distribution.\\
\noindent Even though LSB galaxies are very suitable targets for indirect DM
detection, because of their large distances (of the order of Mpc) they have
not been widely studied. There is very little dedicated literature on the
possible $\gamma$-ray emission from LSB galaxies \cite{Gammaldi:2017mio,
Cadena:2017ldx, Hashimoto:2019obg, Bhattacharjee:2019jce}. For our study, we
have chosen four LSB galaxies that are relatively close and have applied a
multiwavelength approach to investigate the possible DM signal from LSB
galaxies at gamma-ray and radio wavelengths \cite{Bhattacharjee:2019jce}.
\section{Sample Selection}
\noindent In this section, we give a brief introduction to our selected LSB
galaxies \cite{Bhattacharjee:2019jce}. They have low B-band luminosities and a
large DM content \cite{vandenBosch:2000rza}, and all are situated within 15
Mpc heliocentric distance. From the optical images, no intense sign of AGN
activity or recent star formation has been observed. In Table~7.1, we list
some observational properties of these LSB galaxies
\cite{Bhattacharjee:2019jce}.\\
1) \textbf{UGC 3371}: UGC 3371, also known as DDO 039, is characterised as an
irregular dwarf galaxy. Several studies show an impressive agreement between
the rotational velocities obtained from H$\alpha$ and HI, respectively.
Initially, however, the rotation curve from the HI disk rises more steeply
than the H$\alpha$ rotation curve. An overcorrection of the beam smearing in
the HI rotation curve is assumed to be the reason for this discrepancy
\cite{Swaters:2002rx}, and thus for UGC 3371 we cannot predict the exact shape
of the rotation curve; it could be either linear or steep. The study of
Ref.~\cite{swaters:PhDthesis} showed that the rotation curve of UGC 3371
provides an impressive fit to a DM halo profile with a steep cusp at the
centre. Thus, for UGC 3371, the DM halo profile is consistent with the CDM
prediction \cite{vandenBosch:2000rza}.\\
2) \textbf{UGC 11707}: UGC 11707 is characterised as a spiral galaxy with
loosely bound broken arms originating from individual stellar clusters. The
observational data point to a very faint bulge (Sd type) at the centre of UGC
11707. There are not many studies of this galaxy, and because of the
insufficient data there are few sample points to define its rotation curve.
The available study indicates that, between the H$\alpha$ and HI rotation
curves, the inner rise of the H$\alpha$ curve is comparatively steeper
\cite{swaters:PhDthesis}. The rotation curves of UGC 11707 also indicate a
discrepancy between the approaching and receding rotational velocities for
radii $\le$ 7 kpc \cite{swaters:PhDthesis}. The DM halo profile for UGC 11707
is consistent with the CDM prediction \cite{vandenBosch:2000rza}.\\
3) \textbf{UGC 12632}: UGC 12632, also known as DDO 217, is characterised as a
weakly barred spiral galaxy (SABm). Its HI rotation curve follows a uniform
distribution, but a distinct high-velocity bump has been observed in the blue
part of the H$\alpha$ curve. The velocity map of UGC 12632 shows a steep rise
in rotational velocity near its centre, a pattern that gradually extends to
the outer region. Thus, the observational findings directly indicate that the
DM halo profile of UGC 12632 is consistent with the CDM prediction
\cite{vandenBosch:2000rza}.\\
4) \textbf{UGC 12732}: Like UGC 12632, UGC 12732 is characterised as a weakly
barred spiral galaxy (SABm). The observational data indicate that the HI and
H$\alpha$ rotation curves are consistent with each other, and the DM halo
profile of UGC 12732 is consistent with the CDM prediction
\cite{vandenBosch:2000rza, Swaters:2009by}.
\begin{table}
\caption{Properties of LSB galaxies. Column~I: Name of LSB galaxies; Column~II: Galactic longitude and latitude of LSB galaxies; Column~III: The adopted distance of the galaxies, based on a Hubble constant ($H_{\circ}$)= 75 $km~s^{-1}~Mpc^{-1}$. We have obtained the value of distance for each LSB galaxies and their corresponding uncertainties from \textit{NASA/IPAC Extragalactic Database}; Column~IV: Observed rotational velocity at last measured point of rotational curve from van den Bosch \textit{et al.}, 2000; Column~V: Scale length of stellar disk from van den Bosch \textit{et al.}, 2000; Column~VI: B band Luminosity of LSBs from OBrien \textit{et al.}, 2011; Column~VII: Location of the last observed data points of LSB galaxies from Swaters \textit{et al.}, 2009; Column~VIII: Observed HI gas masses of LSB galaxies from Swaters \textit{et al.}, 2002.}
\centering
\begin{minipage}{1.0\textwidth}
\begin{tabular}{|p{1cm}||p{2.6cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\hline
\hline
Name & (l,b) & D & $V_{last}$ & $R_{d}$ & $L_{B}$ & $R_{last}$ & $M_{HI}$ \\
[0.5ex]
$ $ & [deg],[deg] & (Mpc) & $(km~s^{-1})$ & (kpc) & $(10^{9}~L_{\odot}^{B})$ &
(kpc) & $(10^{8}~M_{\odot})$ \\ [0.5ex]
\hline
UGC 3371 & 138.43,22.81 & $12.73^{+0.90}_{-0.90}$ & ~~~~~86 & 3.09 & 1.54 &
10.2 & 12.2 \\ [0.5ex]
\hline
UGC 11707 & 74.31,-15.04 & $14.95^{+1.05}_{-1.05}$ & ~~~~~100 & 4.30 & 1.13 &
15.0 & 37.2 \\ [0.5ex]
\hline
UGC 12632 & 106.77,-19.31 & $8.36^{+0.60}_{-0.60}$ & ~~~~~76 & 2.57 & 0.86 &
8.53 & 8.7 \\ [0.5ex]
\hline
UGC 12732 & 103.74,-33.98 & $12.38^{+0.87}_{-0.87}$ & ~~~~~98 & 2.21 & 0.71 &
15.4 & 36.6 \\ [0.5ex]
\hline
\hline
\end{tabular}
\end{minipage}
\end{table}
\pagebreak
\section{\textit{Fermi}-LAT Observation and Data Analysis of LSBs}
\noindent Here we have analysed nearly 9 years of Fermi-LAT data, i.e., from
2008-08-04 to 2017-10-22, for each of our target sources
\cite{Bhattacharjee:2019jce}. For this purpose, we have used the Fermi
ScienceTools version
\textit{v1.2.1}\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}} \cite{Bhattacharjee:2019jce}.
As in our other works, for this study we have used the source-class IRF,
$\rm{P8R3\_SOURCE\_V2}$
\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone{\_}LAT{\_}IRFs/IRF{\_}overview.html}},
\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8{\_}usage.html}} \cite{Bhattacharjee:2019jce}.
The PSF of the LAT is about $4^{\circ}$ and $2.5^{\circ}$ at energies around
500 MeV and 1 GeV,
respectively\footnote{\tiny{https://www.slac.stanford.edu/exp/glast/groups/canda/lat{\_}Performance.html}}.
Thus, in order to reduce the possible uncertainties at low energies and the
background contamination at high energies, we have used an energy range
between 500 MeV and 300 GeV \cite{Bhattacharjee:2019jce}. We have extracted
the LAT data within a $10^{\circ}$ ROI around each source of interest and, for
generating the source model for the likelihood analysis, we have used the
\textit{Fermi} 4FGL source catalog \cite{Fermi-LAT:2019yla} and the most
recent versions of the Galactic ($\rm{gll\_iem\_v07.fits}$) and extragalactic
($\rm{iso\_P8R3\_SOURCE\_V2\_v1.txt}$) diffuse models
\cite{Bhattacharjee:2019jce}.\\
\noindent In Sections~6.2 and 5.2, we have already described the analysis
methodology for Fermi-LAT data.
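\noindent For reference, an analysis of this kind can be scripted with the
\texttt{fermipy} wrapper around the ScienceTools. The sketch below is a
hypothetical minimal setup, not the exact configuration used in this work: the
file names, coordinates and power-law parameters are placeholders, while the
IRF, energy range, zenith cut and diffuse-model names follow the text above:
\begin{verbatim}
import yaml
from fermipy.gtanalysis import GTAnalysis

config = {
  'data':      {'evfile': 'events_list.txt', 'scfile': 'spacecraft.fits'},
  'binning':   {'roiwidth': 10.0, 'binsz': 0.1, 'binsperdec': 8},
  'selection': {'emin': 500, 'emax': 300000, 'zmax': 90,
                'evclass': 128, 'evtype': 3,
                'ra': 352.5, 'dec': 41.0},          # placeholder position
  'gtlike':    {'edisp': True, 'irfs': 'P8R3_SOURCE_V2'},
  'model':     {'galdiff': 'gll_iem_v07.fits',
                'isodiff': 'iso_P8R3_SOURCE_V2_v1.txt',
                'catalogs': ['4FGL']},
}
with open('config.yaml', 'w') as f:
    yaml.dump(config, f)

gta = GTAnalysis('config.yaml', logging={'verbosity': 2})
gta.setup()                        # ScienceTools preparation steps

# Add the target as a point source with a power-law spectrum (Gamma = 2)
gta.add_source('UGC12632', {'ra': 352.5, 'dec': 41.0,
                            'SpectrumType': 'PowerLaw', 'Index': 2.0,
                            'Scale': 1000.0, 'Prefactor': 1e-13,
                            'SpatialModel': 'PointSource'})
gta.optimize()                     # coarse fit of all ROI sources
gta.fit()                          # full binned likelihood fit
sed = gta.sed('UGC12632')          # spectral points and flux upper limits
\end{verbatim}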
\subsection{Results from the Power-law Modelling}
\begin{figure}
\centering
\subfigure[ UGC 3371]
{ \includegraphics[width=0.45\columnwidth]{figures/3371.pdf}}
\subfigure[ UGC 11707]
{ \includegraphics[width=0.45\columnwidth]{figures/11707.pdf}}
\subfigure[ UGC 12632]
{ \includegraphics[width=0.45\columnwidth]{figures/12632.pdf}}
\subfigure[ UGC 12732]
{ \includegraphics[width=0.45\columnwidth]{figures/12732.pdf}}
\caption{The residual plots of the four LSB galaxies for a $10^{\circ}
\times 10^{\circ}$ ROI. Each source is modelled with a power-law spectrum with
$\Gamma = 2$.}
\end{figure}
\noindent In order to check the possible astrophysical constraints from the
LSB galaxies, we first modelled them with a power-law spectrum with spectral
index ($\Gamma$) = 2 \cite{Bhattacharjee:2019jce}.\\
\noindent Fig.~7.1(a,b,c,d) shows the residual fits for the four LSB galaxies
\cite{Bhattacharjee:2019jce}. In Table~7.2, we show the spectral results
obtained for each of our LSB galaxies \cite{Bhattacharjee:2019jce}. From this
table, we can read off the best-fitted values of the Galactic and isotropic
components and of the normalization parameter $N_{0}$ for each galaxy. The
best-fitted values for the two diffuse models are close to 1, which
strengthens the reliability of our analysis method
\cite{Bhattacharjee:2019jce}. From Table 7.2, we can also check that $N_{0}$
is always smaller than its statistical uncertainty. This signifies that
Fermi-LAT has not observed any significant emission from the locations of the
four LSB sources \cite{Bhattacharjee:2019jce}.\\
\noindent Next, we have estimated the upper limits on the $\gamma$-ray flux
with the profile likelihood method \cite{Barbieri:1982eh, Rolke:2004mj,
Bhattacharjee:2019jce}. In Table~7.3, we show the flux upper limits at 95$\%$
C.L. for the energy range between 500 MeV and 300 GeV.
\begin{table}[!h]
\centering
\caption{Best-fit values of the normalization parameters for the LSB galaxies
and for the Galactic and isotropic diffuse components.}
\begin{tabular}{|p{2cm}|p{3cm}|p{3cm}|p{4cm}|}
\hline
\hline
LSB & Galactic & Isotropic & $\rm{N_{0}}\times~10^{-5}$ \\ [0.5ex]
Galaxies & Component & Component & $ $ \\ [0.5ex]
& $cm^{-2} s^{-1} MeV^{-1}$ & $cm^{-2} s^{-1} MeV^{-1}$ & $cm^{-2} s^{-1} MeV^{-1}$ \\ [0.5ex]
\hline
UGC 3371 & $0.95 \pm 0.011$ & $0.95 \pm 0.035$ & $(6.29\pm 21.55)\times10^{-8}$ \\ [0.5ex]
\hline
UGC 11707 & $0.92 \pm 0.001$ & $1.06 \pm 0.001$ & $(0.1099\pm6.06)\times10^{-7}$ \\ [0.5ex]
\hline
UGC 12632 & $0.93 \pm 0.011$ & $1.09 \pm 0.05$ & $(0.334\pm 5.82)\times10^{-6}$ \\ [0.5ex]
\hline
UGC 12732 & $0.97 \pm 0.001$ & $1.004 \pm 0.017$ & $(0.12\pm2.30)\times10^{-8}$ \\ [0.5ex]
\hline
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{The $\gamma$-ray flux upper limits in $95\%$ C.L. of LSB galaxies.}
\begin{tabular}{|p{5cm}|p{5cm}|}
\hline
\hline
LSB galaxies & E~$>$~500 MeV \\ [0.5ex]
$ $ & ($cm^{-2} s^{-1}$) \\ [0.5ex]
\hline
UGC 3371 & $2.43\times10^{-10}$ \\ [0.5ex]
\hline
UGC 11707 & $3.22\times10^{-10}$ \\ [0.5ex]
\hline
UGC 12732 & $3.54\times10^{-10}$ \\ [0.5ex]
\hline
UGC 12632 & $3.06\times10^{-10}$ \\ [0.5ex]
\hline
\hline
\end{tabular}
\end{table}
\section{A Theoretical Framework to Estimate the $\gamma$-ray Flux from Pair-annihilation of WIMPs in LSB Galaxies}
\subsection{Modelling with NFW Density Profile}
\noindent We have used the NFW density profile \cite{Navarro:1996gj} to model
the DM distribution in LSB galaxies. The rotation curves of our selected LSB
galaxies are consistent with the $\Lambda$CDM prediction
\cite{vandenBosch:2000rza, vandenBosch:2001bp, Swaters:2002rx}, and the
observational data obtained from Refs.~\cite{vandenBosch:2000rza,
vandenBosch:2001bp, Swaters:2002rx} show that a cuspy profile can provide a
good fit to the central regions of LSBs. For our J-factor calculation, we have
taken the necessary parameters from Ref.~\cite{vandenBosch:2000rza}. The
expression for the NFW density profile is \cite{Abdo:2010ex, Navarro:1996gj}
\begin{equation}
\rho (r)=\frac{\rho_{s}r_{s}^{3}}{r(r_{s} + r)^{2}}
\end{equation}
\\
\noindent where $\rho_{s}$ and $r_{s}$ are the characteristic density and the
scale radius, respectively, and $r$ is the distance from the centre of the LSB
galaxy. In order to obtain the values of $\rho_{s}$ and $r_{s}$, we have used
the following relations \cite{Bhattacharjee:2019jce}:\\
\noindent The expression of the $\rho_{s}$ is \cite{Lokas:2000mu,
Liddle:1998ew}:
\begin{equation}
\rho_{s} = \rho_{c}^{0} \delta_{\rm{char}}
\end{equation}
\\
\noindent where $\delta_{\rm{char}}$ is a fitting parameter and
$\rho_{c}^{0}$ is the critical density of the Universe. For our calculation,
we have adopted a Hubble constant of $H_{0}$=$75~\rm{km~s^{-1}Mpc^{-1}}$ =
$100h~\rm{km~s^{-1}Mpc^{-1}}$ from Ref.~\cite{vandenBosch:2000rza}, and thus
$\rho_{c}^{0}$ can be expressed as $\rho_{c}^{0}$ = $2.78h^{-1}\times10^{11}$
$\frac{M_{\odot}}{(h^{-1}Mpc)^{3}}$.\\
\noindent The expression of the $\delta_{\rm{char}}$ is:
\begin{equation}
\delta_{\rm{char}} = \frac{v c^{3}g(c)}{3}
\end{equation}
\noindent where,
\begin{equation}
g(c)=\frac{1}{\ln(1+c)- c/(1+c)}
\end{equation}
\\
\noindent In Eqs.~7.3 $\&$ 7.4, $c$ is the concentration parameter, which
defines the shape of the density profile, and the value of the virial
overdensity $v$ is assumed to be $\approx$ 178 \cite{vandenBosch:2000rza}.\\
\noindent $R_{\rm{vir}}$ (or $r_{200}$) is the virial radius, at which the
mean density is 200 times the present critical density ($\rho_{c}^{0}$) of our
Universe. The circular velocity at $R_{\rm{vir}}$ is defined as
\cite{Lokas:2000mu, vandenBosch:2000rza}
\begin{equation}
V_{200} = \frac{R_{vir}}{h^{-1}}
\end{equation}
\\
\noindent The expression of scale radius is \cite{vandenBosch:2000rza}:
\begin{equation}
r_{s} = \frac {R_{\rm{vir}}}{c}
\end{equation}
\\
\noindent Thus, using the Eqs.~7.2 to 7.6, we can derive $\rho_{s}$ and
$r_{s}$. \\
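\noindent Chaining Eqs.~7.2 to 7.6 together is straightforward; a sketch, with
$V_{200}$ in km s$^{-1}$ and lengths in Mpc, is:
\begin{verbatim}
import numpy as np

def nfw_params(c, V200, h=0.75, v_overdensity=178.0):
    """Derive (rho_s, r_s) from Eqs. 7.2-7.6.
    Returns rho_s in M_sun/Mpc^3 and r_s in Mpc."""
    rho_crit = 2.78e11 * h**2                          # M_sun / Mpc^3
    g_c = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))      # Eq. 7.4
    delta_char = v_overdensity * c**3 * g_c / 3.0      # Eq. 7.3
    rho_s = rho_crit * delta_char                      # Eq. 7.2
    R_vir = V200 / h * 1.0e-3   # Eq. 7.5: R_vir [kpc] = V200 / h -> Mpc
    r_s = R_vir / c                                    # Eq. 7.6
    return rho_s, r_s

# Example with the UGC 12632 values of Table 7.4
print(nfw_params(c=15.6, V200=51.4))
\end{verbatim}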
\noindent For our case, we have taken $\theta_{\rm{min}} = 0^{\circ}$ and
$\theta_{max} = \rm{sin}^{-1}\Big(\frac{R_{vir}}{d}\Big)$
\cite{Bhattacharjee:2019jce}. The J-factor allows us to estimate the
annihilation rate in LSB galaxies for theoretically favoured DM models.\\
\noindent In Table~7.4, we have listed the parameters necessary for estimating
the J-factors from Eq.~2.7 \cite{Bhattacharjee:2019jce}. We have adopted the
values of $c$ and $V_{200}$ from Ref.~\cite{vandenBosch:2000rza}.\\
\noindent In Table~7.4, we also show the uncertainties associated with the
J-factors. For deriving these uncertainties, we have taken the distributions
of the distance ($d$) and the concentration parameter ($c$) given in
Table~7.4 and developed an algorithm to find the limiting values of the
J-factor at the 2$\sigma$ level by a Monte Carlo method
\cite{Bhattacharjee:2019jce}. As the concentration parameter of each LSB
galaxy lies within asymmetric limits, we have considered an asymmetric normal
distribution about the mean, with two different values of the standard
deviation on either side of the mean \cite{Bhattacharjee:2019jce}. We first
generated random numbers for the user-defined distribution and then, by
performing the Smirnov transform on a set of uniformly distributed random
numbers, we generated the uncertainty limits on the J-factor at 2$\sigma$ or
95$\%$ C.L. \cite{Bhattacharjee:2019jce}.
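\noindent Schematically, the procedure looks as follows. For brevity, this
sketch replaces the Smirnov transform by direct two-sided Gaussian sampling
and the full integral of Eq.~2.7 by a proxy proportional to
$\rho_{s}^{2}r_{s}^{3}/d^{2}$; only the extraction of the 2$\sigma$ interval
is meant literally:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def sample_asym_normal(mean, sig_plus, sig_minus, size):
    """Two-sided Gaussian: width sig_minus below the mean, sig_plus
    above, with side probabilities proportional to the widths."""
    side = rng.random(size) < sig_minus / (sig_minus + sig_plus)
    lo = mean - np.abs(rng.normal(0.0, sig_minus, size))
    hi = mean + np.abs(rng.normal(0.0, sig_plus, size))
    return np.where(side, lo, hi)

def j_proxy(d, c, V200=51.4, h=0.75):
    """Stand-in for Eq. 2.7: J scales as rho_s^2 r_s^3 / d^2."""
    g_c = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))
    rho_s = 2.78e11 * h**2 * 178.0 * c**3 * g_c / 3.0
    r_s = (V200 / h * 1.0e-3) / c
    return rho_s**2 * r_s**3 / d**2

N = 100000
d = rng.normal(8.36, 0.60, N)              # UGC 12632, Table 7.4
c = np.clip(sample_asym_normal(15.6, 15.5, 10.9, N), 1.1, None)
j = j_proxy(d, c)
lo, hi = np.percentile(j, [2.5, 97.5])     # 2-sigma (95%) interval
\end{verbatim}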
\begin{table}
\centering
\caption{The necessary parameter values for calculating the J-factor from
Eq.~2.7
($h_{0}=0.75$).}
\begin{tabular}{|p{1.5cm}|p{2cm}|p{1.5cm}|p{2cm}|p{1.5cm}|p{3cm}|p{4cm}|}
\hline \hline
Galaxy & Distance & ~~~~c & ~~$V_{200}$ & $\theta_{max}$ & J~factor\\
name & ~~Mpc & $ $ & ~~$km~s^{-1}$ & ~~~$^{\circ}$ & $\times10^{16}$~$\frac{GeV^{2}}{cm^{5}}$\\
\hline \hline
UGC 3371 & $12.73^{+0.90}_{-0.90}$ & $14.5^{+14.6}_{-10.2}$ & ~~69.8 & $0.42$ & $0.739^{+2.87}_{-0.63}$ \\
\hline
UGC 11707 & $14.95^{+1.05}_{-1.05}$ & $14.7^{+14.6}_{-10.3}$ & ~~66.9 & $ 0.34$ & $0.485^{+1.85}_{-0.42}$ \\
\hline
UGC 12632 & $8.36^{+0.60}_{-0.60}$ & $15.6^{+15.5}_{-10.9}$ & ~~51.4 & $0.47$ & $0.795^{+3.25}_{-0.716}$ \\
\hline
UGC 12732 & $12.38^{+0.87}_{-0.87}$ & $14.3^{+14.4}_{-10}$ & ~~73.3 & $0.45$ & $0.880^{+3.40}_{-0.75}$ \\
\hline \hline
\end{tabular}
\end{table}
\subsection{J-factor Derived from the Toy Model}
\noindent In this section, we estimate the J-factors of the LSB galaxies using
the toy model proposed by Charbonnier et al., 2011 \cite{Charbonnier:2011ft}.
The sole purpose of using the toy model is to check the reliability of the
J-factor values derived from Eq.~2.7. In Fig.~7.2, we show a sketch of the toy
model for the J-factor calculation \cite{Charbonnier:2011ft}. The vertically
hatched region denotes the contribution from the integration, while the
cross-hatched region refers to the toy model.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{figures/toy.pdf}
\caption{The diagram of the toy model for calculating J-factor.}
\end{figure}
\noindent In Fig.~7.2, $d$ is the distance of the LSB galaxy from the observer
and $\alpha_{int}$ is the integration angle, with $r_{\alpha_{int}}
= d~\rm{sin}\alpha_{int}$. The toy model assumes that roughly $90\%$ of the
clump luminosity is contained within the scale radius $r_{s}$, without any
direct dependence on the DM density profile. We can rewrite Eq.~7.1 as:\\
\begin{equation}
\rho_{approx} = \rho_{s} r_{s}/r~ \rm{for} ~r_{sat}< r \le r_{s}
\end{equation}
where, $r_{sat}$ is the saturation distance. The corresponding approximate form
of J-factor is:
\begin{eqnarray}
J_{approx} &=&\frac{4 \pi} {d^{2}} \int_{0}^{\rm{min}[r_{\alpha_{int}}, r_{s}]} \rho_{approx}^{2} r^{2}~ dr \nonumber\\
&=& \frac{4 \pi} {d^{2}} \rho_{s}^{2}r_{s}^{2}(\rm{min}[r_{\alpha_{int}}, r_{s}]).
\end{eqnarray}
\noindent If $r_{\alpha_{int}}\gtrsim r_{s}$, the density profile falls faster
than $1/r$ for $r \sim r_{s}$. The toy model then advises us to stop the
integration at $r_{x}$, where $\rho_{true} = \frac{\rho_{approx}}{x}$, with $x = 2$ and $r_{x} = r_{s}[\sqrt2 - 1]$ \cite{Charbonnier:2011ft}:
\begin{equation}
J_{approx}= \frac{4 \pi} {d^{2}} \rho_{s}^{2}r_{s}^{2}(\rm{min}[r_{x}, r_{\alpha_{int}}]).
\end{equation}
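\noindent The toy-model expression is simple enough to write down directly; a
sketch is:
\begin{verbatim}
import numpy as np

def j_toy(d, rho_s, r_s, alpha_int):
    """Toy-model J-factor; units follow those of rho_s, r_s and d."""
    r_alpha = d * np.sin(alpha_int)      # projected integration radius
    r_x = r_s * (np.sqrt(2.0) - 1.0)     # where rho_true = rho_approx / 2
    return 4.0 * np.pi / d**2 * rho_s**2 * r_s**2 * min(r_x, r_alpha)
\end{verbatim}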
\noindent The comparison between the J values obtained from the toy model and
from the integration method is shown in Table~7.5
\cite{Bhattacharjee:2019jce}. Charbonnier et al., 2011
\cite{Charbonnier:2011ft} proposed that the difference between the J values
obtained from these two methods should lie within a factor of 2, and from
Table~7.5 it is clear that our results are consistent
\cite{Bhattacharjee:2019jce} with the study of Ref.~\cite{Charbonnier:2011ft}.
\begin{table}
\centering
\caption{J-factor obtained from the integration method and the Toy model for
$h_{0}$=0.75.}
\begin{tabular}{|p{2.5cm}|p{3cm}|p{4.5cm}|}
\hline \hline
Galaxy & Integration method & Toy model \\
name & ($\rm{GeV^{2}/cm^{5}}$) & ($\rm{GeV^{2}/cm^{5}}$) \\
\hline \hline
UGC 3371 & $0.739^{+2.87}_{-0.63}\times10^{16}$ & $0.918^{+3.47}_{-0.82}\times10^{16}$ \\
\hline
UGC 11707 & $0.485^{+1.85}_{-0.42}\times10^{16}$ & $0.603^{+2.20}_{-0.54}\times10^{16}$ \\
\hline
UGC 12632 & $0.795^{+3.08}_{-0.68}\times10^{16}$ & $0.987^{+3.84}_{-0.88}\times10^{16}$ \\
\hline
UGC 12732 & $0.880^{+3.40}_{-0.75}\times10^{16}$ & $1.09^{+4.37}_{-0.97}\times10^{16}$ \\
\hline
\end{tabular}
\end{table}
\subsection{Constraints on the Annihilation Cross-section}
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_bb_flux.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_tt_flux.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_mm_flux.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/ugc_12632_flux_paper.pdf}}
\caption{The $\gamma$-ray flux upper limits of all four LSB galaxies for three
pair-annihilation channels: (a) $100\%$ $b\overline{b}$, (b) $100\%$
$\rm{\tau^{+} \tau^{-}}$ and (c) $100\%$ $\rm{\mu^{+} \mu^{-}}$. Panel (d)
shows the variation of the $\gamma$-ray flux upper limits for UGC 12632 with
the DM mass, $m_{DM}$, for four annihilation channels. We have considered the
median J-factor values from Table~7.4.}
\end{figure}
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_bb_cross_fermipy.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_tt_cross_fermipy.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_mm_cross_fermipy.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/ugc_12632_cross_paper.pdf}}
\caption{The $<\sigma~v>$ upper limits of all four LSB galaxies for three
annihilation channels: (a) $100\%$ $b\overline{b}$, (b) $100\%$
$\rm{\tau^{+} \tau^{-}}$ and (c) $100\%$ $\rm{\mu^{+} \mu^{-}}$. Panel (d)
shows the variation of the upper limits on $<\sigma~v>$ for UGC 12632 with the
DM mass, $m_{DM}$, for four annihilation channels. We have considered the
median J-factor values from Table~7.4.}
\end{figure}
\noindent In this section, we have derived the possible $\gamma$-ray flux
upper limits at 95$\%$ C.L. resulting from WIMP annihilation, along with the
corresponding thermally averaged pair-annihilation cross-section $<\sigma v>$,
as a function of DM mass ($\rm{m_{DM}}$) and WIMP annihilation final state (f)
\cite{Bhattacharjee:2019jce} for each LSB galaxy using the DMFit tool
\cite{Jeltema:2008hf, Gondolo:2004sc}. For that purpose, we have chosen four
WIMP pair-annihilation final states (f): $100\%$ $b\overline{b}$, $100\%$
$\tau^{+}\tau^{-}$, $100\%$ $\rm{\mu^{+} \mu^{-}}$ and $100\%$ $W^{+}W^{-}$
\cite{Jungman:1995df}. \\
\noindent In Figs.~7.3 (a,b,c) and 7.4 (a,b,c), we have displayed the
$\gamma$-ray flux and the $<\sigma~v>$ upper limits as a function of DM mass,
$m_{DM}$, for three pair-annihilation channels, while in Figs.~7.3 (d) and
7.4 (d) we have presented the channel-to-channel variation for UGC 12632
\cite{Bhattacharjee:2019jce}. From Fig.~7.3(d), we find that at high energies,
where the diffuse background is comparatively low, the $\rm{\mu^{+} \mu^{-}}$
and $\rm{\tau^{+} \tau^{-}}$ channels provide the best $\gamma$-ray flux limits
\cite{Bhattacharjee:2019jce}. From Fig.~7.3(d), we can also notice that at
around 1~TeV DM mass the gamma-ray flux upper limits for the four annihilation
channels vary within a factor of $2$, whereas for low DM mass this variation
increases to a factor of 4 \cite{Bhattacharjee:2019jce}. All our sources show
the same behaviour, thus in Figs.~7.3(d) and 7.4(d) we have only shown the
result for UGC 12632. For Figs.~7.3 and 7.4, we have used the median J-values
(see Table~7.4) \cite{Bhattacharjee:2019jce}.
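\noindent The connection between these two sets of limits is the standard
annihilation-flux relation, $\Phi = \langle\sigma v\rangle\, J\,
N_{\gamma}/(8\pi m_{DM}^{2})$ for self-conjugate DM, so a flux upper limit maps
linearly onto a $<\sigma v>$ upper limit at fixed mass and channel. The sketch
below illustrates this scaling; the flux limit and photon yield are
placeholders, not our fitted values:
\begin{verbatim}
# Illustrative mapping of a photon-flux upper limit to a <sigma v>
# upper limit via Phi = <sigma v> J N_gamma / (8 pi m^2).
# phi_ul and N_gamma below are placeholders, not our fitted values.
import math

phi_ul  = 1.0e-10     # flux upper limit [ph cm^-2 s^-1] (assumed)
J       = 0.739e16    # median J-factor of UGC 3371 [GeV^2 cm^-5]
m_dm    = 100.0       # DM mass [GeV]
N_gamma = 20.0        # photons per annihilation in band (assumed)

sigmav_ul = 8.0 * math.pi * m_dm**2 * phi_ul / (J * N_gamma)
print(f"<sigma v> < {sigmav_ul:.2e} cm^3 s^-1")
\end{verbatim}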
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{figures/bb_cross_errors.pdf}
\caption{The variation of the $<\sigma v>$ upper limits at 95$\%$ C.L. with
$m_{DM}$ for the $b\overline{b}$ annihilation channel of the four LSB galaxies,
shown in the ($m_{DM}$, $<\sigma v>$) plane for the median value of the
J-factor along with its uncertainties. The shaded region refers to the
uncertainty of the DM profiles of our LSB galaxies.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{figures/lsb_bb_fermi_thermal.pdf}
\caption{The variation of the $<\sigma v>$ upper limits at 95$\%$ C.L. with
$m_{DM}$ for the $b\overline{b}$ annihilation channel of the four LSB galaxies,
shown in the ($m_{DM}$, $<\sigma v>$) plane for the median value of the
J-factor. The relic-abundance cross-section rate, i.e.,
$2.2~\times~10^{-26}~cm^{3}~s^{-1}$, derived by Steigman \textit{et al.}, 2012 and the combined
$<\sigma v>$ upper limits obtained from the Fermi-LAT analysis of 15 dSphs by Ackermann \textit{et al.}, 2015 are overplotted here.}
\end{center}
\end{figure}
\noindent In Fig.~7.5, we have shown the variation of
$<\sigma v>$ at 95$\%$ C.L. with ${m_{DM}}$ for the median value of the
J-factor and its 2$\sigma$ C.L. uncertainties \cite{Bhattacharjee:2019jce}. We have only considered the
100$\%$ $b\overline{b}$ annihilation channel because, for the gamma-ray
analysis, it puts the most stringent limits on the (<$\sigma$~v>, $m_{DM}$)
parameter space. From Fig.~7.5, it is evident that the LSB galaxies impose a
large uncertainty on the (<$\sigma$~v>, $m_{DM}$) parameter space and that the
uncertainty bands of all the LSB galaxies overlap with each other
\cite{Bhattacharjee:2019jce}. Thus, from this plot, we cannot favour any one of
the four LSB galaxies \cite{Bhattacharjee:2019jce}. The very low star formation
rate and poor nuclear activity are considered the primary reasons for the large
uncertainties associated with the DM distribution in LSB galaxies. \\
\noindent Next, we have performed a comparative study between the $<\sigma v>$
limits obtained from the LSB galaxies and the limits derived by
Ackermann et al.\cite{Ackermann:2015zua}, as shown in Fig.~7.6
\cite{Bhattacharjee:2019jce}. Ackermann et al.\cite{Ackermann:2015zua}
performed a combined analysis of 15 dSphs with six years of LAT data. In
Fig.~7.6, we have also compared the limits from the LSB galaxies with the
thermal relic cross-section rate estimated by~Steigman et al.\cite{Steigman:2012nb}. \\
\noindent In Fig.~7.6, the thermal cross-section rate
obtained by the study of~Steigman et al.\cite{Steigman:2012nb} is denoted by the blue
``dot-dashed'' line, while the $<\sigma v>$ limits
derived by Ackermann et al.\cite{Ackermann:2015zua} are represented by the red ``dotted''
line. From Fig.~7.6, it is clear that the $<\sigma v>$ limits obtained from our
four LSB galaxies are roughly 3 orders of magnitude weaker than the limits
achieved by Ackermann et al.\cite{Ackermann:2015zua} and Steigman et al.\cite{Steigman:2012nb}. In the next
section, we estimate the stacking limits for the LSB galaxies \cite{Bhattacharjee:2019jce}.
\subsection{Stacking Analysis}
\noindent In Section 7.4.3, we have estimated the $<\sigma v>$ upper limits for
the individual LSB galaxies, and from Fig.~7.6 we have seen that these
individual $<\sigma v>$ limits are around 3 orders of magnitude weaker
\cite{Bhattacharjee:2019jce} than the limits estimated by the combined analysis of
Ackermann et al.\cite{Ackermann:2015zua} and the annihilation rate for relic abundances
derived by Steigman et al.\cite{Steigman:2012nb}. In this section, in order to increase the sensitivity of the limits, we have derived
the stacking limits from the individual $<\sigma v>$ limits obtained for each LSB galaxy \cite{Bhattacharjee:2019jce}. In Chapter 4, we have already
discussed the formalism of the stacking likelihood function. For this work, to
estimate the stacking limits, we have used Eq.~4.11. \\
\noindent The J-factor provides a rough estimate of the WIMP signal coming from
DM-rich sources, and thus the stacking analysis should generate a more
stringent result than the limits obtained from any individual LSB galaxy \cite{Bhattacharjee:2019jce}. Even
for the combined analysis, we have not observed any gamma-ray emission from the
locations of the LSB galaxies. Thus we have computed the $<\sigma v>$ upper
limits at 95$\%$ C.L. by the delta-likelihood method \cite{Bhattacharjee:2019jce}.
In Fig.~7.7(a), we have shown the $<\sigma v>$ upper limits as a function of
$m_{DM}$ obtained from the stacking analysis and have compared them with the
individual limits of the LSB galaxies for the $100\%$ $b\overline{b}$ final
state. In Fig.~7.7(b), we compare the stacking $<\sigma v>$ limits for the
LSBs and the $<\sigma v>$ limits taken from Ackermann et al.\cite{Ackermann:2015zua} with the thermal annihilation rate from
Steigman et al.\cite{Steigman:2012nb}. In Fig.~7.7(b), the 2$\sigma$ uncertainty band
associated with the stacking limits on $<\sigma v>$ is also displayed \cite{Bhattacharjee:2019jce}.
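\noindent Schematically, the stacking of Eq.~4.11 amounts to summing the
per-galaxy log-likelihood profiles in $<\sigma v>$ and reading the 95$\%$ C.L.
point off the combined curve where $-2\Delta\ln\mathcal{L}=2.71$. The toy
sketch below illustrates the procedure; the parabolic per-galaxy profiles are
made up for illustration and are not our LAT likelihoods:
\begin{verbatim}
# Toy sketch of the delta-log-likelihood stacking of Eq. 4.11.
# The per-galaxy lnL profiles below are placeholders, not LAT fits.
import numpy as np

sv = np.logspace(-26, -20, 600)          # <sigma v> grid [cm^3 s^-1]
sv_scale = [3e-22, 5e-22, 2e-22, 4e-22]  # placeholder sensitivities
lnL = [-(sv / s)**2 for s in sv_scale]   # fake per-galaxy profiles

lnL_tot = np.sum(lnL, axis=0)            # stacking = sum of lnL's
dlnL = 2.0 * (lnL_tot.max() - lnL_tot)   # -2 Delta lnL
sv_95 = sv[np.searchsorted(dlnL, 2.71)]  # 95% one-sided crossing
print(f"stacked 95% UL ~ {sv_95:.2e} cm^3 s^-1")
\end{verbatim}
\noindent Because every galaxy's likelihood penalises large $<\sigma v>$, the
combined curve falls off faster than any single one, which is why the stacked
limit is tighter than the best individual limit.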
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_stack_bb_paper.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/lsb_stack_thermal_limits.pdf}}
\caption{(a) The comparison between the upper limits on $<\sigma v>$ obtained
from the stacking analysis of the LSB galaxies and the $<\sigma v>$ limits
obtained from the individual LSB galaxies for the $100\%$ $b\overline{b}$
annihilation channel.
(b) The comparison between the upper limits on $<\sigma v>$ obtained from the
stacking analysis of the LSB galaxies, the relic-abundance
cross-section rate, i.e., $2.2~\times~10^{-26}~cm^{3}~s^{-1}$, derived by Steigman \textit{et al.}, 2012 and the combined
$<\sigma v>$ upper limits obtained from the Fermi-LAT analysis of 15 dSphs by Ackermann \textit{et al.}, 2015. The shaded region refers to the uncertainty associated
with the stacking limits.}
\end{figure}
\noindent From Fig.~7.7, it is evident that the stacking limit on $<\sigma~v>$
improves on the individual limits obtained from the LSB galaxies by a factor of
$\approx$ 4, but it is still nearly two orders of magnitude weaker \cite{Bhattacharjee:2019jce} than
the limits obtained by
Ackermann et al. \cite{Ackermann:2015zua} and Steigman et al. \cite{Steigman:2012nb}. \\
\noindent We may then conclude that at present, due to their low
J-values (roughly 2-3 orders of magnitude lower than the standard values for dSphs/UFDs), the
$\gamma$-ray $<\sigma v>$ limits obtained for the LSB galaxies are unable to
put any stringent constraints on the theoretical WIMP models. But in the future, next-generation optical surveys such as the Large Synoptic Survey Telescope (LSST) are expected to
discover many new LSB galaxies. Thus, the constraints on theoretical DM models obtained from LSB galaxies might
improve significantly.
\subsection{Possible Radio Constraint Obtained from LSB Galaxies}
\begin{figure}
\subfigure[UGC 3371]
{ \includegraphics[width=0.48\linewidth]{figures/3371_sed.pdf}}
\subfigure[ UGC 11707]
{ \includegraphics[width=0.48\linewidth]{figures/11707_sed.pdf}}
\subfigure[ UGC 12632]
{ \includegraphics[width=0.48\linewidth]{figures/12632_sed.pdf}}
\subfigure[ UGC 12732]
{ \includegraphics[width=0.48\linewidth]{figures/12732_sed.pdf}}
\caption{The multiwavelength SED of the four LSB galaxies for three DM
annihilation final states: $b\overline{b}$ (solid),
$\tau^{+}\tau^{-}$ (dashed) and $\mu^{+}\mu^{-}$ (dotted).
We have considered $m_{DM}$=100 GeV, $B_{0}$ = 1$\mu$G and
$D_{0}$=$3\times10^{28}~(cm^{2}s^{-1})$.}
\end{figure}
\begin{figure}
\centering
\subfigure[Variation with $B_{0}$]
{ \includegraphics[width=0.49\linewidth]{figures/12632_B_variation.pdf}}
\subfigure[Variation with $D_{0}$]
{ \includegraphics[width=0.49\linewidth]{figures/12632_D_variation.pdf}}
\subfigure[Variation with $\gamma_{D}$]
{ \includegraphics[width=0.5\linewidth]{figures/12632_gamma_variation.pdf}}
\caption{The variation of the multiwavelength SED of UGC 12632 for (a) four
values of
$B_{0}$, (b) three values of $D_{0}$ and (c) four values of
$\gamma_{D}$. We have considered $m_{DM}$=100 GeV, $B_{0}$ = 1$\mu$G,
$D_{0}$=$3\times10^{28}~(cm^{2}s^{-1})$ and have fixed the thermally
averaged $<\sigma v>$ to $3
\times 10^{-26}~cm^{3}s^{-1}$.}
\end{figure}
\noindent In the earlier sections, we found that with $\gamma$-ray data the LSB
galaxies could not impose strong limits on DM models. Thus, in this section, we
have investigated the radio emission that might come from WIMP
annihilation \cite{Bhattacharjee:2019jce}.\\
\noindent In order to estimate the radio and X-ray emission resulting from DM
annihilation, we have solved the diffusion equation for the secondary electron
spectrum (Eq.~2.11). In Chapter~2, we have already defined the formulation of
the radio and X-ray emission through DM annihilation \cite{Colafrancesco:2005ji,
Colafrancesco:2006he}.\\
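\noindent For reference, the steady-state diffusion--loss equation solved for
the equilibrium electron spectrum takes the standard form (this is the content
of Eq.~2.11):
\begin{equation}
\nabla\cdot\left[D(E,\mathbf{r})\,\nabla\frac{\partial n_{e}}{\partial E}\right]
+\frac{\partial}{\partial E}\left[b(E,\mathbf{r})\,\frac{\partial n_{e}}{\partial E}\right]
+Q_{e}(E,\mathbf{r})=0,
\end{equation}
where $D$ is the diffusion coefficient, $b$ is the total energy-loss rate
(synchrotron, inverse-Compton, Coulomb and bremsstrahlung) and $Q_{e}$ is the
electron/positron source term from DM annihilation.\\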
\noindent Here, we have used a publicly accessible code,
RX-DMFIT \cite{McDaniel:2017ppt}.
This code is an extension of the DMFit tool \cite{Jeltema:2008hf, Gondolo:2004sc} that we have earlier used to investigate the DM signal
in $\gamma$-ray data. With the RX-DMFIT code, it is possible to predict the
flux limits for the radio and X-ray emission produced by the secondary charged
particles that are assumed to be generated in DM annihilation. For the radio
analysis, we have modelled the DM density distribution of the LSB galaxies with
the NFW profile \cite{Bhattacharjee:2019jce}. In order to calculate
the source term of the DM signal, i.e., $Q_{e}$ (see Eq.~2.11), the RX-DMFIT
tool uses a set of Fortran packages from DarkSUSY
v5.1.2, which is designed to estimate the $e^{+}/e^{-}$ injection
spectrum per DM
annihilation event (i.e., $\sum_{f} \frac{dN^{e}_{f}}{dE}B_{f}$) for any allowed
range of DM masses and DM annihilation final states \cite{Bhattacharjee:2019jce}. \\
\noindent RX-DMFIT allows us to customise a wide range of parameter sets
for the astrophysical and particle components \cite{McDaniel:2017ppt}. With this code, we can check
how the diffusion mechanism, the magnetic field, the DM distribution, etc. can
possibly affect the radio and X-ray emission from LSB galaxies.\\
\noindent As we already mentioned in Sections 7.1 and 7.2, there are not many
observational studies of LSB galaxies, and thus it is difficult to have
precise information on their magnetic fields and diffusion mechanisms. But
fortunately, the systematics of the dSphs are not very different from those of
the LSB galaxies, so for our calculation we have used the values of the
diffusion constant ($D_{0}$) and magnetic field (B) that are generally favoured
for dSphs \cite{Bhattacharjee:2019jce}. We have defined
the diffusion coefficients of the LSB galaxies by the Kolmogorov form (i.e.,
$\rm{D(E) = D_{0}
E^{\gamma}}$), where the diffusion zone, $r_{h}$, is assumed to be equal to
$2~\times~R_{last}$ (see Table~7.1). We have fixed the values of $D_{0}$ and $\gamma_{D}$ at
$3~\times~10^{28}~cm^{2}~s^{-1}$ and 0.3, respectively \cite{Bhattacharjee:2019jce}. For LSB galaxies,
there is no detailed study of the distribution of the magnetic field, and
thus we do not have any knowledge of the spatial extension of their magnetic fields \cite{Bhattacharjee:2019jce}.
We have therefore used an exponential form for the magnetic field of the LSBs,
$\rm{B(r) = B_{0}~e^{\frac{-r}{r_{c}}}}$,
where we have fixed $B_{0}$ at 1$\mu G$\cite{Fitt:1993dfr} and $r_{c}$
is the core radius of the LSB galaxy, taken equal to $r_{d}$ (see
Table~7.1) \cite{Bhattacharjee:2019jce}. Here, we have also fixed $<\sigma v>$ at
$3\times10^{-26} cm^{3} s^{-1}$. In Table~7.6, we have listed all the parameter
values that we have used for our radio analysis \cite{Bhattacharjee:2019jce}.\\
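\noindent For concreteness, the two parameterizations entering the RX-DMFIT
inputs read as in the short sketch below (parameters for UGC 12632 from
Table~7.6):
\begin{verbatim}
# The diffusion-coefficient and magnetic-field forms used as
# RX-DMFIT inputs, evaluated for UGC 12632 (Table 7.6).
import math

D0, gamma_D = 3.0e28, 0.3   # [cm^2/s], Kolmogorov index
B0, r_c     = 1.0, 2.57     # [muG], core radius [kpc]

def D(E_GeV):
    # Kolmogorov-type diffusion coefficient D(E) = D0 * E^gamma_D
    return D0 * E_GeV**gamma_D

def B(r_kpc):
    # exponentially declining magnetic field B(r) = B0 * exp(-r/r_c)
    return B0 * math.exp(-r_kpc / r_c)

print(f"D(10 GeV) = {D(10.0):.2e} cm^2/s, B(r_c) = {B(2.57):.2f} muG")
\end{verbatim}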
\noindent Using the parameter set mentioned in Table~7.6, we have predicted the
multiwavelength spectral energy distribution (SED) of all four LSB galaxies for
a DM mass of 100 GeV \cite{Bhattacharjee:2019jce}. In Fig.~7.8, we show the SED
plots for three DM annihilation channels, where the synchrotron emission is
labelled `Sync' and the IC emission due to starlight and CMB photons is
labelled `IC SL' and `IC CMB', respectively \cite{Bhattacharjee:2019jce}. The SED plots shown
in Fig.~7.8 depend on our choice of parameter set. So, next, we have examined
how the SED would be affected by changing the astrophysical
parameters \cite{Bhattacharjee:2019jce}. In Fig.~7.9, we show the variation of the SED with $B_{0}$,
$D_{0}$ and $\gamma_{D}$; for this purpose, we have only chosen UGC 12632 and
the $b\overline{b}$ final state \cite{Bhattacharjee:2019jce}. From Fig.~7.9(a), it is evident that the magnetic
field has a direct impact on the synchrotron emission, and a higher B field
would increase this emission, while the IC emission is not much affected by the
variation of the B field \cite{Bhattacharjee:2019jce}. Next, from Fig.~7.9(b), we find that both the
synchrotron and IC emission depend strongly on $D_{0}$ \cite{Bhattacharjee:2019jce}. Last, from Fig.~7.9(c),
we can check how the SED varies with $\gamma_{D}$; here we would like
to mention that $\gamma_{D}$ is the index of the Kolmogorov form of the
diffusion coefficient \cite{Bhattacharjee:2019jce}.
\begin{table}
\begin{center}
\caption{The parameter set used as the input of RX-DMFIT tool.}
\begin{tabular}{|p{1.5cm}|p{1cm}|p{1cm}|p{2cm}|p{1cm}|p{1cm}|p{1cm}|p{1.5cm}|p{1cm}|}
\hline
\hline
Galaxy & d & $r_{h}$ & $D_{0}$ & $\gamma_{D}$ & $B_{0}$ & $r_{c}$ & $\rho_{s}$ & $r_{s}$ \\
$ $ & Mpc & kpc & $cm^{2}s^{-1}$ & & $\mu G$ & kpc & $\frac{GeV}{cm^{3}}$ & kpc\\
\hline
UGC 3371 & 13.1 & 20.4 & $3\times10^{28}$ & 0.3 & 1 & 3.09 & 0.5725 & 6.5151 \\
\hline
UGC 11707 & 15.4 & 30.0 & $3\times10^{28}$ & 0.3 & 1 & 4.30 & 0.5875 & 6.2529 \\
\hline
UGC 12632 & 8.59 & 17.06 & $3\times10^{28}$ & 0.3 & 1 & 2.57 & 0.6825 & 4.5223 \\
\hline
UGC 12732 & 12.72 & 30.8 & $3\times10^{28}$ & 0.3 & 1 & 2.21 & 0.5676 & 6.5556 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\noindent With the RX-DMFIT tool, it is possible to estimate, from an observed
X-ray or radio flux density, the corresponding $<\sigma v>$ as a function of
$m_{DM}$ and WIMP annihilation channel \cite{Bhattacharjee:2019jce}.
The star formation rate in LSB galaxies is extremely low, which makes them an
ideal laboratory for examining radio emission that might dominantly come from
DM annihilation/decay \cite{Bhattacharjee:2019jce}.
For our purpose, we have taken the observed radio flux densities of all the
LSB galaxies from the NVSS survey
\cite{Condon:1998iy}. The NVSS is the `NRAO VLA Sky Survey', a sky survey
performed at the frequency $\nu$= 1.4 GHz. The Very Large Array (VLA) is
located in southwestern New Mexico. It is an interferometric array of 27
elements which generates radio images of the sky for a very
broad range of resolutions and frequencies.
The spatial
sizes of the NVSS images\footnote{\tiny{https://www.cv.nrao.edu/nvss/}} around
UGC 3371, UGC 11707, UGC 12632 and UGC 12732
are 185.40", 300.70", 66.00" and 307.70", respectively. Except for UGC 11707,
the other three LSB galaxies only provide upper limits on the radio flux
density. The flux limits from the NVSS survey are shown in Table~7.7 \cite{Bhattacharjee:2019jce}.
\begin{table}
\begin{center}
\caption{The radio flux density limits obtained from the NVSS at frequency 1.4 GHz.}
\begin{tabular}{|p{5cm}|p{5cm}|}
\hline
\hline
Galaxy & Observed flux density (mJy) \\
\hline
UGC~3371 & $<$ 0.45 \\
\hline
UGC~11707 & $1.17$ \\
\hline
UGC~12632 & $<$ 0.45 \\
\hline
UGC~12732 & $<$ 0.45 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/vla_comparison.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/vla_fermi_bb.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/vla_fermi_tt.pdf}}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/vla_fermi_mm.pdf}}
\caption{(a) The limits on $<\sigma v>$ obtained by using the radio flux
densities from the NVSS images for three annihilation channels. The solid,
dashed and dot-dashed linestyles denote the $b\overline{b}$,
$\tau^{+}\tau^{-}$ and $\mu^{+}\mu^{-}$ channels, respectively.
(b,c,d) Comparison of the radio $<\sigma v>$ limits obtained from the NVSS data
with the $\gamma$-ray $<\sigma v>$ limits obtained from the individual and the
stacked analyses for (b) the $b\overline{b}$, (c) the
$\tau^{+}\tau^{-}$ and (d) the $\mu^{+}\mu^{-}$ annihilation channels. We have
chosen the NFW profile and have considered $m_{DM}$=100 GeV, $B_{0}$ = 1$\mu$G,
$D_{0}$=$3\times10^{28}~(cm^{2}s^{-1})$, with the thermally averaged
$<\sigma v>$ fixed to $3 \times 10^{-26}~cm^{3}s^{-1}$. The same linestyles as
in (a) have been used for (b), (c) and (d).}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[$b\overline{b}$]
{ \includegraphics[width=0.9\linewidth]{figures/radio_uncertain_bb.pdf}}
\subfigure[$\tau^{+}\tau^{-}$]
{ \includegraphics[width=0.9\linewidth]{figures/radio_uncertain_tt.pdf}}
\subfigure[$\mu^{+}\mu^{-}$]
{ \includegraphics[width=0.9\linewidth]{figures/radio_uncertain_mm.pdf}}
\caption{The uncertainties associated with the $<\sigma v>$ limits obtained
from the
NVSS images for (a) the $b\overline{b}$, (b) the $\tau^{+}\tau^{-}$ and (c)
the $\mu^{+}\mu^{-}$ final states. The radio limits for each
annihilation channel are compared with the uncertainty band associated with the
$\gamma$-ray stacking limits for $b\overline{b}$. The shaded region between
dashed lines displays the uncertainty band of the radio limits, while the shaded
region between solid lines shows the uncertainty band of the $\gamma$-ray stacking
limits.}
\end{center}
\end{figure}
\begin{figure}
\subfigure[UGC 3371]
{ \includegraphics[width=0.48\linewidth]{figures/ska_3371.pdf}}
\subfigure[ UGC 11707]
{ \includegraphics[width=0.48\linewidth]{figures/ska_11707.pdf}}
\subfigure[ UGC 12632]
{ \includegraphics[width=0.48\linewidth]{figures/ska_12632.pdf}}
\subfigure[ UGC 12732]
{ \includegraphics[width=0.48\linewidth]{figures/ska_12732.pdf}}
\caption{The flux density predicted for our LSB galaxies annihilating into
the $b\overline{b}$, $\mu^{+}\mu^{-}$ and $\tau^{+}\tau^{-}$ channels. We have
chosen the NFW profile and have considered $m_{DM}$=100 GeV, $B_{0}$ = 1$\mu$G,
$D_{0}$=$3\times10^{28}~(cm^{2}s^{-1})$, with the thermally averaged
$<\sigma v>$ fixed to $3 \times 10^{-26}~cm^{3}s^{-1}$. We have overplotted the
SKA sensitivity curves for 10,
100 and 1000 hours of observation time with the dashed, dotted and
dot-dashed black
curves, respectively.}
\end{figure}
\noindent Before proceeding to our next analysis, we would like to note that the signal observed from the location of
UGC 11707 has a significance of less than 3$\sigma$, and in data analysis such faint emission
is generally assumed to originate mostly from fluctuations in unknown astrophysical sources. Thus a more sensitive survey at $\nu$=1.4 GHz
is needed to examine the real nature of the signal coming from the location of UGC 11707.
Hence, for our analysis, even though UGC 11707 provides a measured flux density, we have applied the same method to all our targets.\\
\noindent By using the VLA data (mentioned in Table~7.7), we have estimated the
$<\sigma v>$ limits as a function of $m_{DM}$ for three annihilation channels;
the relevant plots are shown in Fig.~7.10 \cite{Bhattacharjee:2019jce}. From Fig.~7.10(a), we find that
for the radio data the $\tau^{+}\tau^{-}$ and $\mu^{+}\mu^{-}$ final states provide
more stringent limits than $b\overline{b}$ \cite{Bhattacharjee:2019jce}, whereas for the $\gamma$-ray data (see
Fig.~7.4(d)) the $b\overline{b}$ final state puts the most stringent limits.
Theoretically, the $b\overline{b}$ final state mostly hadronises to
$\pi^{\circ}$s which decay to $\gamma$-ray photons, while the $\tau^{+}\tau^{-}$
and $\mu^{+}\mu^{-}$ final states (i.e., the leptonic channels) mostly decay to
$e^{+}$/$e^{-}$ \cite{Bhattacharjee:2019jce}. Hence, for the gamma-ray analysis, the $b\overline{b}$ annihilation
channel is expected to produce stronger limits than the leptonic channels,
while for the radio analysis we obtain the reverse result \cite{Bhattacharjee:2019jce}. The
comparison between the radio $<\sigma v>$ limits and the limits obtained from
the $\gamma$-ray analysis (from Sections~7.4.2 and 7.4.3) for the three annihilation
final states is shown in Figs.~7.10 (b,c,d) \cite{Bhattacharjee:2019jce}; the other necessary
parameter values are taken from Table 7.6. In Fig.~7.10, we have not
considered the uncertainties associated with the radio and gamma-ray limits, and
we can observe that for a 100 GeV DM mass the radio data might provide
stronger limits than the gamma-ray data \cite{Bhattacharjee:2019jce}. For the $\mu^{+}\mu^{-}$ channel, the radio
limits are even nearly 2 orders of magnitude more stringent than the
stacking $<\sigma v>$ limits from the Fermi-LAT data \cite{Bhattacharjee:2019jce}. \\
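\noindent The step from Table~7.7 to Fig.~7.10 is straightforward because the
predicted synchrotron flux density scales linearly with $<\sigma v>$ at fixed
DM mass and channel: the NVSS upper limit is divided by the RX-DMFIT prediction
at a reference cross-section. The sketch below shows this rescaling with a
placeholder predicted flux (not our actual RX-DMFIT output):
\begin{verbatim}
# Sketch of rescaling an NVSS flux-density upper limit into a
# <sigma v> limit; the synchrotron prediction is linear in
# <sigma v>. S_pred is a placeholder, not the RX-DMFIT output.
sigmav_ref = 3.0e-26  # reference cross-section [cm^3 s^-1]
S_pred     = 45.0     # predicted S(1.4 GHz) at sigmav_ref [mJy] (assumed)
S_obs_ul   = 0.45     # NVSS upper limit for UGC 12632 [mJy] (Table 7.7)

sigmav_ul = sigmav_ref * S_obs_ul / S_pred
print(f"radio UL: <sigma v> < {sigmav_ul:.1e} cm^3 s^-1")
\end{verbatim}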
\noindent Next, we have checked the uncertainty associated with the radio
$<\sigma v>$ limits for the LSB galaxies \cite{Bhattacharjee:2019jce}. As already mentioned, there are no
detailed studies of our selected LSB galaxies, and the inadequate
kinematic data for the LSB galaxies produce large uncertainty bands \cite{Bhattacharjee:2019jce}.
For the radio data, we have estimated the uncertainty band at 2$\sigma$ C.L. and
then compared the radio limits with the stacked limits obtained from the
gamma-ray data for $b\overline{b}$ (Fig.~7.11) \cite{Bhattacharjee:2019jce}. For the gamma-ray data, we have
chosen the $b\overline{b}$ final state as this channel produces the strongest
limits, while for the radio data we have shown the uncertainty band associated with
UGC 12632 for the $b\overline{b}$ (Fig.~7.11 (a)), $\tau^{+}\tau^{-}$ (Fig.~7.11
(b)) and $\mu^{+}\mu^{-}$ (Fig.~7.11 (c)) final states \cite{Bhattacharjee:2019jce}.
From Fig.~7.11, we observe that for the LSB galaxies the uncertainty band
associated with the radio $<\sigma v>$ spans about two orders of magnitude, and
for each annihilation channel the uncertainty bands corresponding to the radio
and gamma-ray data overlap with each other \cite{Bhattacharjee:2019jce}. Unlike in Fig.~7.10, once we
consider the uncertainty bands it is not possible to strongly
favour the radio analysis over the gamma-ray one. Our result, at best, shows
that the radio and gamma-ray limits are competitive with each other \cite{Bhattacharjee:2019jce}.\\
\noindent We have next explored whether, with the Square Kilometre Array
(SKA), it would be possible to detect any radio emission from the LSB galaxies \cite{Bhattacharjee:2019jce}.
SKA is the next-generation radio telescope and, because of its wide field of
view and high resolution \cite{Proceedings:2015yra}, we expect that from the
next decade SKA will be able to address many unresolved problems in cosmology.
Searching for the DM signal would be one of the most intriguing parts of
it \cite{braun:2015sd}.\\
\noindent We have predicted the possible flux density $S(\nu)$ of each LSB
galaxy in the form of synchrotron emission with the RX-DMFIT tool \cite{Bhattacharjee:2019jce}. In Fig.~7.12, we
show the variation of $S(\nu)$ with frequency ($\nu$) for three WIMP
annihilation channels and compare it with the sensitivity curves of SKA for
10, 100 and 1000 hours of observation \cite{Bhattacharjee:2019jce}.
From Figs.~7.12 (a,b,c,d), we find that it might be possible for SKA to detect
the radio emission from the LSB galaxies, especially with its 1000-hour
sensitivity curve \cite{Bhattacharjee:2019jce}. In Fig.~7.9, we have already presented how the `Sync' SED
depends on the astrophysical parameters, especially on B and D \cite{Bhattacharjee:2019jce}. Hence, accurate
knowledge of B, D and the DM density distribution is very necessary; otherwise,
we cannot strongly state whether SKA would be able to detect any positive signal
from the LSBs. Thus our study, at best, hints that SKA could play a very
major part in investigating the radio emission (most possibly from DM
annihilation) \cite{Bhattacharjee:2019jce}.
\subsection{Comparison between the NFW, Burkert and Pseudo Isothermal Density Profiles}
\begin{table}
\centering
\caption{J-factors derived for three DM density profiles at $h_{0}=0.75$.}
\begin{tabular}{|p{2cm}|p{3cm}|p{5cm}|}
\hline \hline
Galaxy name & Density Profile & J-factor ($\rm{GeV^{2}/cm^{5}}$)\\
\hline \hline
UGC & NFW & $0.739^{+2.87}_{-0.63}\times10^{16}$ \\
3371 & ISO & $0.188^{+0.775}_{-0.169}\times10^{16}$ \\
& BURKERT & $0.385^{+1.594}_{-0.346}\times10^{16}$ \\
\hline \hline
UGC & NFW & $0.485^{+1.85}_{-0.42}\times10^{16}$ \\
11707 & ISO & $0.123^{+0.501}_{-0.110}\times10^{16}$ \\
& BURKERT & $0.253^{+1.03}_{-0.227}\times10^{16}$ \\
\hline \hline
UGC & NFW & $0.795^{+3.08}_{-0.68}\times10^{16}$ \\
12632 & ISO & $0.202^{+0.835}_{-0.182}\times10^{16}$ \\
& BURKERT & $0.414^{+1.717}_{-0.373}\times10^{16}$ \\
\hline \hline
UGC & NFW & $0.880^{+3.40}_{-0.75}\times10^{16}$ \\
12732 & ISO & $0.223^{+0.919}_{-0.1997}\times10^{16}$ \\
& BURKERT & $0.459^{+1.888}_{-0.411}\times10^{16}$ \\
\hline \hline
\end{tabular}
\end{table}
\begin{figure}
\subfigure[]
{ \includegraphics[width=0.48\linewidth]{figures/12632_flux_profiles_another_one.pdf}}
\subfigure[]
{ \includegraphics[width=0.53\linewidth]{figures/profile_error.pdf}}
\caption{(a) The upper limit on the $\gamma$-ray flux for three different
density profiles. (b) The comparison between the upper limits on the $<\sigma
v>$ for three density profiles estimated for the median value of J-factor along
with the uncertainty. The shaded region refers to the uncertainty in the DM
density for LSB galaxies. In both figures, we have chosen UGC 12632 that
annihilates into the $b\overline{b}$ channel.}
\end{figure}
\noindent In this section, we have performed a comparative study between three
popular density profiles: the NFW\cite{Navarro:1996gj}, Pseudo-Isothermal
(ISO)\cite{Gunn:1972sv} and Burkert (BURK)\cite{Burkert:1995yz, Salucci:2011ee}
profiles. For examining the distribution of DM, two types of profiles are
widely used in the literature: cuspy (e.g., NFW) and cored (e.g., BURK, ISO)
profiles. N-body simulation results strongly support a cuspy DM distribution,
while observational studies, i.e., the rotation curves of several irregular and
dwarf galaxies, favour the cored profile \cite{Bhattacharjee:2019jce}. This discrepancy is known as the
``cusp-core'' problem. Before presenting the comparison between the three density
profiles, we would like to mention that for our sources we have preferred to use
the NFW profile \cite{Bhattacharjee:2019jce}, because the available rotation curves of our LSB galaxies
showed that the NFW profile produces an acceptable fit
\cite{vandenBosch:2000rza, vandenBosch:2001bp, Swaters:2002rx}, and these
studies were not able to differentiate between $1/r$ cusps and constant cores.\\
\noindent The mathematical forms of these three DM density profiles are described
in Chapter~2. Using Eqs.~2.1, 2.2, 2.3 and 2.7, we have calculated the
J-factors of UGC 12632 for all three profiles; the values are listed in Table
7.8. We notice that, among the three density profiles, NFW produces the largest
J-factor \cite{Bhattacharjee:2019jce}.\\
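\noindent For reference, the three profiles can be written down in a few lines
in their standard forms (cf. Eqs.~2.1--2.3); in the sketch below, the NFW
normalisation is taken from Table~7.6 for UGC 12632, while the ISO and BURK
normalisations are left generic since their best-fit parameters differ:
\begin{verbatim}
# Minimal sketch of the three DM density profiles in their standard
# forms (cf. Eqs. 2.1-2.3). The (rho_s, r_s) pair is profile
# dependent; the NFW values below are for UGC 12632 (Table 7.6).
def rho_nfw(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)    # cuspy: rho ~ 1/r inside r_s

def rho_iso(r, rho_0, r_c):
    return rho_0 / (1.0 + (r / r_c)**2)  # cored pseudo-isothermal

def rho_burk(r, rho_0, r_0):
    return rho_0 * r_0**3 / ((r + r_0) * (r**2 + r_0**2))  # cored

print(rho_nfw(1.0, 0.6825, 4.5223))      # [GeV/cm^3] at r = 1 kpc
\end{verbatim}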
\noindent Next, we have estimated the $\gamma$-ray flux upper limits and the
corresponding $<\sigma v>$ limits for the three density profiles. The J-factors of
UGC 12632 have been taken from Table~7.8 and, for our purpose, in Fig.~7.13 we
show the results for the $b\overline{b}$ final state \cite{Bhattacharjee:2019jce}. In Fig.~7.13 (a), we
show the gamma-ray flux upper limits for the three density profiles. The flux upper
limits have no direct dependence on the J-factor, so for all three
profiles we have obtained the same order of flux limits \cite{Bhattacharjee:2019jce}. In Fig.~7.13 (b), we
display the $<\sigma v>$ upper limits along with their 2$\sigma$
uncertainty bands for the three DM density profiles \cite{Bhattacharjee:2019jce}. From this figure, we
notice that the uncertainty bands of the profiles overlap, and thus,
without reducing these uncertainties, we cannot conclude from Fig.~7.13 (b)
which density profile produces the most stringent limit in the
($m_{DM}$, $<\sigma v>$) plane \cite{Bhattacharjee:2019jce}.
\section{The Future of LSB Galaxies for Dark Matter Searches and the Impact of the CTA}
\noindent It is widely expected that in the next decade the Cherenkov Telescope
Array (CTA) will become the most advanced and sensitive $\gamma$-ray
telescope at high energies. CTA will study $\gamma$ rays in the energy range
between 20 GeV and 300 TeV and, because of its fine angular resolution (around 2
arcminutes) and improved energy resolution (much better than $\sim$10$\%$), it
might be possible for CTA to detect $\gamma$ rays even from very weak and
distant targets.
CTA has a very wide field of view (for the small- and medium-sized telescopes it
is around $\sim$ 8 degrees) and, encouragingly, its
effective area increases with energy. All of these qualities make
CTA the most sensitive instrument compared to all currently operating
space-based and ground-based telescopes, and they give us hope that in the
future it might be possible for CTA to identify DM signals. \\
\noindent For our work, we have checked whether in the future CTA could
detect any emission from the LSB galaxies; for that purpose, we have compared
the differential fluxes of the LSB galaxies obtained from the Fermi-LAT with the
sensitivity curve of CTA \cite{Bhattacharjee:2019jce}.
Our adopted CTA sensitivity curve (\cite{Maier:2017cjy}) is derived for
point-like sources modelled with a power-law function at 5$\sigma$ significance
for 50 hours of CTA observation. For the Fermi-LAT, we have used the sensitivity
curve for 10 years of LAT observation of high-Galactic-latitude sources
\footnote{\tiny{http://www.slac.stanford.edu/exp/glast/groups/canda/lat{\_}Performance.html}}.
The Fermi-LAT sensitivity curve is likewise estimated for point-like
sources modelled with a power law at 5$\sigma$ detection
significance\footnote{\tiny{http://www.slac.stanford.edu/exp/glast/groups/canda/lat{\_}Performance.html}}.\\
\noindent The comparison between the differential fluxes of all the LSBs and the
sensitivity curves of the CTA and Fermi-LAT instruments is shown in Fig.~7.14 \cite{Bhattacharjee:2019jce}. In
order to estimate the differential fluxes of the LSB galaxies, they were modelled
with a power-law spectrum with $\Gamma$=2 (see Section~7.3.1) \cite{Bhattacharjee:2019jce}. From Fig.~7.14,
it is quite evident that in the energy range between 100 GeV and 1 TeV, CTA might
be able to observe the emission from the LSB galaxies. This is a very
intriguing part of this study, but we should also keep in mind that
Fig.~7.14 only hints that, above 100 GeV and with 50 hours of observation,
there is a chance that CTA would detect emission from the LSB galaxies \cite{Bhattacharjee:2019jce}; that
emission could come either from astrophysical sources or from DM
annihilation.
A detailed simulation study is needed to check whether such emission
results from DM annihilation, but that is currently beyond the scope
of this analysis \cite{Bhattacharjee:2019jce}.
Hence, from our study, we can at best comment that in the next decade CTA
will be a very important tool for gamma-ray analysis and will be especially
well suited to indirect DM searches \cite{Bhattacharjee:2019jce}.
\begin{figure}
\begin{center}
{ \includegraphics[width=0.5\linewidth]{figures/lsb_differential_flux.pdf}}
\caption{The comparison of the differential $\gamma$-ray flux obtained from our
LSB galaxies with the detection-sensitivity curves for the Fermi-LAT and CTA.}
\end{center}
\end{figure}
\section{Conclusions \& Discussions}
\noindent For this work, we have analysed nearly nine years of LAT data but
have not detected any emission from the locations of the LSB galaxies. With the
DMFit tool, we have estimated the $\gamma$-ray flux and $<\sigma v>$ upper
limits for four annihilation final states. But, because of their low J-factors,
the individual limits obtained from the LSB galaxies could not put any
stringent constraints on theoretical DM models.
With the hope of increasing the LAT sensitivity, we have then performed a
joint likelihood analysis on the set of four LSB galaxies. As expected, the
stacking method improves the $<\sigma v>$ limits by a factor of 4 over the
individual limits obtained from the LSB galaxies. But the combined
$<\sigma v>$ limits are still around two orders of magnitude weaker than the
$<\sigma v>$ limits obtained by
Ackermann et al.\cite{Ackermann:2015zua} and Steigman et al.\cite{Steigman:2012nb}. \\
\noindent The observational data for our chosen LSB galaxies could not
particularly favour the cored profile over the cuspy profile. The rotation curves
of the LSBs are in agreement with the prediction from $\Lambda$CDM, and some
studies have also indicated that the cuspy profile can provide a reasonable fit
to the DM distribution in the internal core. Thus, motivated by all these
observational indications, we have modelled the DM distribution of the LSB
galaxies with the NFW profile. We have also performed a comparative study
between the NFW, ISO and BURK DM density profiles (see Fig.~7.13) and find that
the $<\sigma v>$ limits for the density profiles overlap with each other. Thus,
from our study, we could not favour one profile among the three, but for the
median value of the J-factor the most stringent limits come from the NFW profile.\\
\noindent For this study, we have used the multiwavelength approach, which is
considered complementary to the $\gamma$-ray detection method and is nowadays
very popular for indirect searches for the DM signal. For our
analysis, we have preferred to focus on the radio signal and, for that purpose,
we have used the code RX-DMFIT. RX-DMFIT is an extension of the DMFit package
and is specially designed to investigate the possible radio and X-ray emission
from DM annihilation. LSB galaxies have very low nuclear activity and poor star
formation rates, which makes them suitable targets for examining diffuse
radio emission most possibly coming from DM annihilation/decay. We have
estimated the multiwavelength SED plots of the LSB galaxies and have also checked
how the nature of the SED varies with the parameter set (see Figs.~7.8
$\&$ 7.9). We have searched for the radio flux limits of all the LSB galaxies in
the NVSS sky survey data, but only the location of UGC 11707 gives a detected
flux density value; the other three LSBs only provide upper limits on the flux
density. With the VLA flux densities, we have predicted the radio $<\sigma
v>$ limits in the ($<\sigma v>$, $m_{DM}$) parameter space (see Fig.~7.10). If
we consider the 2$\sigma$ uncertainty bands associated with the radio limits, we
notice that the radio limits overlap with the limits obtained from the
stacking analysis of the LAT data (see Fig.~7.11), and all three annihilation
channels show the same behaviour. Hence, from our analysis, we can, at
best, comment that the radio data are competitive with the gamma-ray data. With
more detailed observational data and precise analysis, it might be possible in
the future for LSB galaxies to impose strong limits on DM models.\\
\noindent We have checked whether, with the next-generation radio (SKA) and
gamma-ray (CTA) telescopes, it would be possible to detect any emission from the
locations of the LSB galaxies. We have noticed (see Fig.~7.12) that SKA might be
able to detect emission from the locations of the LSB galaxies, with its 1000
hours of observation having the highest possibility of detecting the emission
from the LSBs. But we would also like to mention that, in order to claim that
SKA would detect the emission from DM annihilation, we first need to perform a
simulation study. Besides, the estimated radio emission also depends on various
astrophysical scenarios; we need well-defined knowledge of the diffusion zone,
the magnetic field distribution, the DM density profile, etc.
Hence, from our analysis, we can, at best, hint at the possibility of observing
the radio signal from LSB galaxies with SKA. We have also found (Fig.~7.14) that
for energies between 100 GeV and 1 TeV, it might be possible for CTA to
observe the $\gamma$-ray emission with its 50-hour sensitivity curve. But, like
for SKA, the same caveat also holds for CTA: a simulation study is needed
to examine whether it would be possible for CTA to detect the emission resulting
from DM annihilation/decay.\\
\noindent Hence, from our work, we can conclude that the $\gamma$-ray data
obtained from the Fermi-LAT could not impose strong $<\sigma v>$ limits on
the WIMP models. We find that the radio signal possibly originating from WIMP
annihilation is quite competitive with the $\gamma$-ray emission observed by the
Fermi-LAT. Our analysis, at best, indicates that, to study the $\gamma$-ray and
radio signals from the LSB galaxies, SKA and CTA will play a very significant
role in the future.
\section{Source Details}\label{section:source_details}
\noindent In this chapter, we have investigated the gamma-ray and radio emission
possibly resulting from DM annihilation \cite{Bhattacharjee:2020phk}. For this purpose, we have chosen
several UFDs based on their very high mass-to-light ratios, the large velocity
dispersions of their stars, etc., which make them very likely to be rich in DM
\cite{Baumgardt:2008zt}. The observed spectroscopic and photometric properties
of our selected UFDs are described in Table~8.1 \cite{Bhattacharjee:2020phk}, where $M/L$, $\sigma$, $d$,
$r_{1/2}$ and $\theta_{max}^o$ refer to the mass-to-light
ratio, velocity dispersion, heliocentric distance, half-light radius and maximum
galactocentric distance of each UFD, respectively\cite{Pace:2018tin}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{||p{2.3cm}|p{1.8cm}|p{1.9cm}|p{1.5cm}|p{2cm}|p{2cm}||}
\hline
\hline
Galaxy & M/L $(M_{\odot}/L_{\odot})$ & d (kpc) & $r_{1/2}~(pc)$ & $\sigma~(km~s^{-1})$ & $\theta_{max}^o$ \\
\hline
Aquarius~II & $1330^{+3242}_{-227}$ & $107.9^{+3.3}_{-3.3}$ & $123^{+22}_{-21}$ & $6.2^{+2.6}_{-1.7}$ & 0.11134 \\
\hline
Carina~II & $369^{+309}_{-161}$ & $37.4^{+0.4}_{-0.4}$ & $77^{+8}_{-8}$ & $3.4^{+1.2}_{-0.8}$ & 0.23\\
\hline
Draco~II & $501^{+1083}_{-421}$ & $20.0^{+3.0}_{-3.0}$ & $12^{+5}_{-5}$ & $3.4^{+2.5}_{-1.9}$ & 0.1\\
\hline
Eridanus~II & $420^{+210}_{-140}$ & $366.0^{+17.0}_{-17.0}$ & $176^{+14}_{-14}$ & $7.1^{+1.2}_{-0.9}$ & 0.062 \\
\hline
Grus~I & $<~2645$ & $120.2^{+11.1}_{-11.0}$ & $52^{+26}_{-26}$ & $4.5^{+5.0}_{-2.8}$ & 0.093\\
\hline
Horologium~I & $570^{+1154}_{-112}$ & $79.0^{+7.0}_{-7.0}$ & $32^{+5}_{-5}$ & $5.9^{+3.3}_{-1.8}$ & 0.0619 \\
\hline
Hydra~II & $<~315$ & $151.0^{+8.0}_{-8.0}$ & $71^{+11}_{-11}$ & $<6.82$ & 0.08509 \\
\hline
Leo~V & $264^{+326}_{-264}$ & $173.0^{+5.0}_{-5.0}$ & $30^{+17}_{-17}$ & $4.9^{+3.0}_{-1.9}$ & 0.077 \\
\hline
Pegasus~III & $1470^{+5660}_{-1240}$ & $215.0^{+12}_{-12}$ & $37^{+14}_{-14}$ & $7.9^{+4.4}_{-3.1}$ & 0.03049\\
\hline
Pisces~II & $370^{+310}_{-240}$ & $183.0^{+15}_{-15}$ & $48^{+10}_{-10}$ & $4.8^{+3.3}_{-2.0}$ & 0.06861\\
\hline
Reticulum~II & $467^{+286}_{-168}$ & $30^{+2}_{-2}$ & $32^{+3}_{-3}$ & $3.4^{+0.7}_{-0.6}$ & 0.24\\
\hline
Tucana~II & $1913^{+2234}_{-950}$ & $57.5^{+5.3}_{-5.3}$ & $115^{+32}_{-32}$ & $7.3^{+2.6}_{-1.7}$ & 0.225\\
\hline
Tucana~III & $<~240$ & $25.0^{+2}_{-2}$ & $43^{+6}_{-6}$ & $<2.18$ & 0.2\\
\hline
Triangulum~II & $<~2510$ & $30^{+2}_{-2}$ & $28^{+8}_{-8}$ & $<6.36$ & 0.15\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Properties of the UFDs.}
\label{table:astro_fundamental_param_dwarfs}
\end{table}
\subsection{Dependence of $J$ on the Density Profiles}
\label{sec:DM_profile}
\noindent As we have already discussed, the NFW density profile is the benchmark
choice for the DM distribution and is mainly favoured by $N$-body
simulations~\cite{Navarro:2008kc, Wagner:2020opz}, while some observational
studies~\cite{de_Blok_2001} prefer the cored profile. Thus, for this work, we
have performed a comparative study between the NFW~\cite{Navarro:1996gj},
Burkert (BURK)~\cite{Burkert:1995yz, Salucci:2011ee} and Pseudo-Isothermal
(ISO)~\cite{Gunn:1972sv} profiles \cite{Bhattacharjee:2020phk}. We have estimated the J-factor of each UFD
for the three density profiles \cite{Bhattacharjee:2020phk}. From Table 8.2, we find that the Burkert
profile provides larger J-factors than NFW, while ISO yields the smallest \cite{Bhattacharjee:2020phk}. In Table 8.2, we
have also compared our estimated J-values for the NFW profile with the J-values
derived by Pace {\em et al}\cite{Pace:2018tin}.
\begin{table}[!t]
\centering
\begin{tabular}{|p{2.5cm}|c|c|c|c|}
\hline \hline
Galaxy & \multicolumn{4}{c|}{$\log_{10}(J(0.5^{\circ})/{\rm GeV}^2\, {\rm cm}^{-5})$}\\
\cline{2-5}
& Pace {\em et al}\cite{Pace:2018tin} & \multicolumn{3}{c|}{Direct Integration}\\
\cline{3-5}
& (NFW) & NFW & Burkert & ISO \\
\hline \hline
Aquarius II & $18.27^{+0.65}_{-0.59}$ & $18.11^{+0.68}_{-0.63}$ & $18.53^{+0.72}_{-0.66}$ & $18.01^{+0.73}_{-0.66}$ \\
\hline \hline
Carina II & $18.24^{+0.53}_{-0.53}$ & $18.16^{+0.55}_{-0.53}$ & $18.45^{+0.60}_{-0.56}$ & $18.05^{+0.58}_{-0.54}$ \\
\hline \hline
Draco II & $18.97^{+1.29}_{-1.69}$ & $19.07^{+1.33}_{-1.69}$ & $19.54^{+1.35}_{-1.70}$ & $18.90^{+1.34}_{-1.70}$ \\
\hline \hline
Eridanus II & $17.29^{+0.35}_{-0.26}$ & $17.14^{+0.35}_{-0.30}$ & $17.68^{+0.35}_{-0.31}$ & $17.06^{+0.35}_{-0.31}$ \\
\hline \hline
Grus-I & $16.87^{+1.52}_{-1.68}$ & $16.94^{+1.57}_{-1.74}$ & $17.48^{+1.60}_{-1.75}$ & $16.76^{+1.54}_{-1.67}$ \\
\hline \hline
Horologium I & $19.25^{+0.79}_{-0.70}$ & $19.01^{+0.83}_{-0.73}$ & $19.37^{+0.85}_{-0.75}$ & $18.73^{+0.85}_{-0.75}$ \\
\hline \hline
Hydra II & $<~17.71$ & $<~17.92$ & $<~18.46$ & $<~17.84$ \\
\hline \hline
Leo V & $17.69^{+0.93}_{-0.99}$ & $17.91^{+1.03}_{-1.06}$ & $18.51^{+1.02}_{-1.08}$ & $17.84^{+1.01}_{-1.07}$ \\
\hline \hline
Pegasus III & $18.41^{+0.89}_{-1.07}$ & $18.46^{+0.94}_{-1.05}$ & $19.06^{+1.02}_{-1.07}$ & $18.39^{+1.03}_{-1.05}$ \\
\hline \hline
Pisces II & $17.31^{+0.97}_{-1.07}$ & $17.53^{+1.02}_{-1.09}$ & $18.10^{+1.04}_{-1.09}$ & $17.45^{+1.03}_{-1.09}$ \\
\hline \hline
Reticulum II & $18.95^{+0.57}_{-0.52}$ & $18.76^{+0.53}_{-0.48}$ & $19.21^{+0.53}_{-0.54}$ & $18.66^{+0.53}_{-0.53}$ \\
\hline \hline
Triangulum II & $<~19.72$ & $<~19.74$ &$<~20.18$ & $<~19.64$ \\
\hline \hline
Tucana II & $19.02^{+0.57}_{-0.52}$ & $18.93^{+0.62}_{-0.58}$ & $19.22^{+0.64}_{-0.61}$ & $18.83^{+0.66}_{-0.62}$ \\
\hline \hline
Tucana III & $<~17.68$ & $<~17.87$ & $<~18.20$ & $<~17.76$ \\
\hline \hline
Draco & $18.83^{+0.10}_{-0.10}$ & $18.85^{+0.12}_{-0.12}$ & $19.08^{+0.13}_{-0.13}$ & $18.75^{+0.13}_{-0.13}$ \\
\hline \hline
\end{tabular}
\caption{The astrophysical factors (J-factors) of our selected UFDs derived from
Eq.~2.7 for the NFW, Burkert and ISO DM density profiles at
$\theta_{max}=0.5^{\circ}$. Also listed are the J-factors for the NFW profile
estimated from the scaling relation of Pace \textit{et al.}, 2019.}
\label{table:table-1}
\end{table}
\section{Analysis of $\gamma$-ray Fluxes from UFDs}
\label{sec:analysis}
\noindent Over the last decade, several dSphs/UFDs have been studied in order
to investigate the DM signal, but no strong emission has been detected from their
locations. Even such null detections, however, can provide intriguing knowledge
of the DM signature \cite {Ackermann:2011wa,GeringerSameth:2011iw,Ackermann:2013yva,
Ackermann:2015zua, Fermi-LAT:2016uux}. With all this in mind, we have
chosen 14 recently discovered UFDs and have analyzed nearly eleven years
(2008-09-01 to 2019-02-04) of Fermi-LAT data \cite{Bhattacharjee:2020phk}. For our analysis, we have used the
Fermi ScienceTools version v1.2.1 and have accessed the source-class IRF,
$\rm{P8R3\_SOURCE\_V2}$, processed data \cite{Bhattacharjee:2020phk}. We have considered the energy range
$E\in [0.1, 300]$~GeV and have extracted data within a $15^{\circ}$
ROI around the location of each UFD \cite{Bhattacharjee:2020phk}. We have then generated the source model
file, in which we have included our `source of interest' along with all the sources
within a $20^{\circ}$ ROI from the 4FGL catalog \cite{Fermi-LAT:2019yla}. In
addition, we have added the galactic ($\rm{gll\_iem\_v07.fits}$) and
isotropic ($\rm{iso\_P8R3\_SOURCE\_V2\_v1.txt}$) diffuse models to our source
model \cite{Bhattacharjee:2020phk}. Next, we have performed a binned likelihood analysis \cite{Cash:1979vz,
Mattox:1996zz} on our extracted dataset and, during the process, the spectral
parameters of all the sources within the $15^{\circ}~\times~15^{\circ}$ ROI and the
normalization parameters of the two diffuse background models have been left free.
The necessary information for the Fermi-LAT analysis is listed in Table~8.3 \cite{Bhattacharjee:2020phk}.
\begin{table}
\caption{Parameters used for the analysis of \textit{Fermi}-LAT data.}
\begin{tabular}{||p{7 cm}p{8 cm}||}
\hline \hline
{\bf Parameter for data extraction} &\\
\hline\hline
Parameter & Value\\
\hline \hline
Radius of interest (ROI) & $15^{\circ}$\\
TSTART (MET) & 241976960 (2008-09-01 15:49:19.000 UTC)\\
TSTOP (MET) & 570987500 (2019-02-04 15:38:15.000 UTC)\\
Energy Range & 100 MeV - 300 GeV\\
\textit{Fermitools} version & \texttt{1.2.1}\\
\hline \hline
\texttt{gtselect} for event selection &\\
\hline \hline
Event class & Source type (128)\\
Event type & Front+Back (3)\\
Maximum zenith angle cut & $90^{\circ}$\\
\hline \hline
\texttt{gtmktime} for time selection &\\
\hline \hline
Filter applied & $\textit{(DATA\_QUAL>0)\&\&(LAT\_CONFIG==1)}$\\
ROI-based zenith angle cut & No\\
\hline \hline
\texttt{gtltcube} for livetime cube &\\
\hline \hline
Maximum zenith angle cut ($z_{cut}$) & $90^{\circ}$\\
Step size in $cos(\theta)$ & 0.025\\
Pixel size (degrees) & 1\\
\hline \hline
\texttt{gtbin} for 3-D counts map &\\
\hline \hline
Size of the X $\&$ Y axis (pixels) & 140\\
Image scale (degrees/pixel) & 0.1\\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
\texttt{gtexpcube2} for exposure map &\\
\hline \hline
Instrument Response Function (IRF) & $\rm{P8R3\_SOURCE\_V2}$\\
Size of the X and Y axis (pixels) & 400\\
Image scale (degrees/pixel) & 0.1 \\
Coordinate system & Celestial (CEL)\\
Projection method & AIT\\
Number of logarithmically uniform energy bins & 24\\
\hline \hline
diffuse models and Source model XML file &\\
\hline \hline
Galactic diffuse emission model & $\rm{gll\_iem\_v07.fits}$\\
Extragalactic isotropic diffuse emission model & $\rm{iso\_P8R3\_SOURCE\_V2\_v1.txt}$\\
Source catalog & 4FGL\\
Extra radius of interest & $5^{\circ}$\\
Spectral model & DMFit Function\cite{Jeltema:2008hf}\\
\hline \hline
\end{tabular}
\label{table:fermi_lat_parameters}
\end{table}
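\noindent The event selections of Table~8.3 can be scripted through the
\texttt{gt\_apps} Python interface shipped with the Fermitools; the schematic
below covers the first two steps (\texttt{gtselect} and \texttt{gtmktime}) with
placeholder file names and assumed target coordinates:
\begin{verbatim}
# Schematic of the gtselect/gtmktime steps of Table 8.3 via the
# gt_apps interface; file names and coordinates are placeholders.
from gt_apps import filter, maketime

filter['evclass'] = 128                       # source class
filter['evtype']  = 3                         # Front+Back
filter['ra'], filter['dec'] = 343.0, -58.6    # e.g. Tucana II (assumed)
filter['rad']     = 15                        # ROI radius [deg]
filter['emin'], filter['emax'] = 100, 300000  # [MeV]
filter['zmax']    = 90                        # zenith-angle cut
filter['tmin'], filter['tmax'] = 241976960, 570987500
filter['infile']  = '@events.txt'
filter['outfile'] = 'ufd_filtered.fits'
filter.run()

maketime['scfile']  = 'spacecraft.fits'
maketime['filter']  = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut']  = 'no'
maketime['evfile']  = 'ufd_filtered.fits'
maketime['outfile'] = 'ufd_gti.fits'
maketime.run()
\end{verbatim}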
\subsection{Constraints on DM Annihilation with Eleven Years of Fermi-LAT Data}\label{section:gamaray_sigmav_constraint}
\noindent In order to investigate the $\gamma$-ray signal from the location of
each `source of interest', we have modelled our targets with a
power-law spectrum (i.e., $dN/dE \propto E^{-\Gamma}$) with spectral index
$\Gamma$ = 2 \cite{Ackermann:2013yva, Ackermann:2015zua, Fermi-LAT:2016uux,
Bhattacharjee:2018xem}. Unfortunately, we have not observed any strong emission
from the locations of the UFDs.
\begin{figure}[h!]
\begin{minipage}[c]{0.5\textwidth}
\input{chapters/table_ts_values.tex}
\end{minipage}
\hfill
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth,clip,angle=0]{figures/ts_compare.pdf}
\end{minipage}
\caption{The maximum (peak) TS values detected from the locations of our
selected UFDs for the $b\bar{b}$ and $\tau^{+}\tau^{-}$ final states with eleven
years of LAT data (left). The peak TS value observed from the location of Tucana
II with three, six, nine and eleven years of LAT data (right).}
\label{figure:ts_tucana}
\end{figure}
\noindent We would like to point out that, except for Tucana II, we have not
observed even faint emission from the locations of the other UFDs (i.e., for them TS
$\le$ 5). In Fig.~8.1(a), we have listed the peak TS values of the UFDs for the
$b\bar{b}$ and $\tau^{+}\tau^{-}$ annihilation channels. An intriguing hint of
faint emission from the direction of Tucana-II had been reported in a recent
publication (ref.~\cite{Bhattacharjee:2018xem}), where the significance of this
faint emission was shown to increase with time. In Fig.~8.1(b), we show the peak
TS value as a function of time for Tucana-II. As was seen in
ref.~Bhattacharjee et al. \cite{Bhattacharjee:2018xem}, the significance continues to grow even with
eleven years of LAT data. But the significance observed with eleven years of
Fermi-LAT data is still too faint (i.e., TS $<$ 25) to make any strong claim of the
existence of a signal.\\
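\noindent For one additional free parameter, the detection significance scales
roughly as $\sqrt{\rm TS}$, which is why TS = 25 is the conventional
$\sim$5$\sigma$ detection threshold; the one-liner below makes this rule of
thumb explicit:
\begin{verbatim}
# Rough TS-to-significance conversion (significance ~ sqrt(TS) for
# one extra degree of freedom): TS = 25 corresponds to ~5 sigma.
import math
for ts in (5, 9, 16, 25):
    print(f"TS = {ts:2d}  ->  ~{math.sqrt(ts):.1f} sigma")
\end{verbatim}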
\noindent As we have not detected any strong emission from the directions of
the UFDs, we have derived the 95$\%$ C.L. gamma-ray flux upper limits from the
regions of these objects \cite{Bhattacharjee:2020phk}. For this purpose, we have used the Bayesian approach
(\cite{Helene:1990yi}), which is well suited~\cite{Rolke:2004mj, Barbieri:1982eh}
to low-statistics analyses. The approach was developed by Helene
\cite{Helene:1990yi} and is implemented in the Fermi-\texttt{ScienceTools}.\\
\begin{figure}[h!]
\centering
\includegraphics[width=.49\linewidth]{figures/bbbar_gamma_flux.pdf}
\includegraphics[width=.49\linewidth]{figures/tautau_gamma_flux.pdf}
\includegraphics[width=.5\linewidth]{figures/mumu_gamma_flux.pdf}
\hskip 10pt
\includegraphics[width=1.0\linewidth]{figures/flux_legends.pdf}
\caption{$95\%$ C.L. $\gamma$-ray flux upper limits of our selected UFDs for
$b\bar{b}$, $\tau^+\tau^-$ and $\mu^+\mu^-$ pair-annihilation channels.}
\label{figure:fermi_flux}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\linewidth]{figures/bbbar_gamma_cross.pdf}
\includegraphics[width=0.49\linewidth]{figures/tautau_gamma_cross.pdf}
\includegraphics[width=0.5\linewidth]{figures/mumu_gamma_cross.pdf}
\hskip 10pt
\includegraphics[width=1.0\linewidth]{figures/cross_legends.pdf}
\label{fig:cross_legends}
\caption{$95\%$ C.L. $\langle \sigma v \rangle$ upper limit of our selected
UFDs for $b\bar{b}$, $\tau^+\tau^-$ and $\mu^+\mu^-$ pair-annihilation channels.
We have not included the limits from Triangulum II, Hydra II and Tucana III
as they only have the upper limits of $J$-factor.}
\label{figure:fermi_cross}
\end{figure}
\noindent The aforesaid $\gamma$-ray flux upper limits obtained from the locations of our targets can be translated
into limits on the WIMP pair-annihilation cross-section, $\langle \sigma v\rangle$, as a
function of DM mass and WIMP annihilation channel \cite{Bhattacharjee:2020phk}. We have adopted three
pair-annihilation final states: $b \bar b$, $\tau^+\tau^-$ and $\mu^+\mu^-$.
For estimating the 95$\%$ C.L. limits on $\langle \sigma v \rangle $, we have
modelled the
$\gamma$-ray flux upper limits with the DMFitFunction
\cite{{Jeltema:2008hf}}\footnote{\tiny{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source{\_}models.html}}.\\
\noindent The consequent upper limits on the $\gamma$-ray flux and on $\langle \sigma v
\rangle$ for all three annihilation channels are shown in Figs.~8.2 and 8.3 \cite{Bhattacharjee:2020phk}.
From Fig.~8.2, we observe that over most of the DM mass range Draco provides the
strongest limits for all three channels. In Fig.~8.3, we show the LAT
sensitivity in the ${\rm m_{DM}}- \langle \sigma v \rangle$ plane for all 15 sources \cite{Bhattacharjee:2020phk}.
The limits in Fig.~8.3 depend on the $J$-factor and the DM density
profiles. Among all our considered UFDs, Horologium I, due to its largest
$J$-factor, imposes the most stringent limit in the ${\rm m_{DM}}- \langle
\sigma v \rangle$ plane for
all three annihilation final states \cite{Bhattacharjee:2020phk}. But we should also not ignore the large
uncertainties associated with the J-factor of Horologium I; thus, the limit
obtained from Horologium I might not be as robust as the one we can expect from Draco.
In Fig.~8.3, we have not shown the $\langle \sigma v \rangle $ limits for Triangulum II, Hydra II and
Tucana III, because they can only produce limiting values of $\langle
\sigma v \rangle$ owing to the upper
limits on their J-factors \cite{Bhattacharjee:2020phk}.
\section{Synchrotron Radiation from UFDs}
\label{sec:synchr}
\noindent As we have seen above, the limits obtained from the $\gamma$-ray data
depend directly on the $J$-factor, but this is not the case for synchrotron
emission. The radio signal generated via synchrotron emission strongly
depends on the diffusion coefficient ($D_{0}$), the magnetic field ($B$), the energy
loss mechanisms, etc. The magnetic fields of dSphs are not well constrained, but
several studies suggest adopting $B \approx 1$~$\mu$G for dSphs
\cite{Colafrancesco:2006he, McDaniel:2017ppt, Spekkens:2013ik}. For our
analysis, we have assumed the same \cite{Colafrancesco:2006he,
Jeltema:2008hf}. For the diffusion coefficient, we have adopted the simplified
power-law form $D(E) = D_{0} \left(\frac{E}{1 \rm GeV}\right)^{\gamma_D}$, where
$D_{0}$ is the diffusion constant. For galaxy clusters, $D_0$ lies in the
range of $10^{28}$--$10^{30}\, {\rm cm}^2/{\rm s}$~\cite{Natarajan:2015hma,
Jeltema:2008ax}, while for the Milky Way it lies between $10^{27}$--$10^{29}\,
{\rm cm}^2/{\rm s}$~\cite{Webber:1992bn, Baltz:1998xv, Maurin:2001sj}.
Similarly, $\gamma_D$ is expected to lie in the interval $0\leq \gamma_D \leq 1$
\cite{Jeltema:2008ax}. For our analysis, we have fixed $D_0$ and $\gamma_{D}$ at
$3 \times 10^{28}\, {\rm cm}^2/{\rm s}$ and $0.3$~\cite{McDaniel:2017ppt},
respectively. \\
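\noindent As a quick numerical illustration of this parametrization (a minimal Python sketch; the values simply reproduce the fiducial choices quoted above):
\begin{verbatim}
# Minimal sketch of the adopted power-law diffusion coefficient,
# D(E) = D0 * (E / 1 GeV)^gamma_D, with the fiducial values used here.
D0 = 3e28        # diffusion constant in cm^2/s
gamma_D = 0.3    # dimensionless exponent

def D(E_GeV):
    """Diffusion coefficient in cm^2/s for electron energy E in GeV."""
    return D0 * E_GeV**gamma_D

for E in (1.0, 10.0, 100.0):
    print(f"D({E:5.1f} GeV) = {D(E):.2e} cm^2/s")
\end{verbatim}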
\noindent For a specific DM mass, the synchrotron emission also depends on
the WIMP pair-annihilation channel and its cascades. As in our
$\gamma$-ray analysis, we have considered three annihilation final
states: $b\bar{b}$, $\tau^+ \tau^-$ and $\mu^+ \mu^-$. Next, in order to
predict the possible synchrotron emission resulting from DM annihilation, we
have used a publicly accessible code, RX-DMFIT \cite{McDaniel:2017ppt}, which is
an extension of the DMFit tool \cite{Jeltema:2008hf, Gondolo:2004sc}. As a
default, we have used the NFW density profile and have fixed the pair
annihilation cross-section, $\langle \sigma v \rangle$, at $10^{-26} \, {\rm
cm}^3/{\rm s}$ \cite{Bhattacharjee:2020phk}. In addition, we have used a thermal electron density
$n_{e} \approx 10^{-6}$ cm$^{-3}$ \cite{Colafrancesco:2006he,McDaniel:2017ppt}
for all our selected UFDs.\\
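\noindent For reference, the NFW profile adopted as our default reads
\[
\rho_{\rm NFW}(r) = \frac{\rho_{s}}{(r/r_{s})\,(1+r/r_{s})^{2}},
\]
with $\rho_{s}$ and $r_{s}$ the characteristic density and scale radius listed below.\\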
\noindent Using the parameters, $d$, $r_{1/2}$, and $\sigma$ listed in Table
8.1, we have calculated characteristic density ($\rho_{s}$), scale radius
($r_{s}$) and diffusion zone ($r_{h}$). The parameter values mentioned in Table
8.4 are derived from the `central values' of $d$, $r_{1/2}$, and $\sigma$ \cite{Bhattacharjee:2020phk}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{c c c c c}
dSphs & $d$ (kpc) & $r_h$ (kpc) & $\rho_s$ (GeV/cm$^3$) & $r_s$ (kpc) \\
\hline
Aquarius II & 107.9& 0.42 & 2.27 & 0.615 \\
Carina II & 37.4 & 0.3 & 1.78 & 0.38 \\
Draco II & 20 & 0.07 & 71.73 & 0.06 \\
Eridanus II & 366 & 0.792 & 1.454 & 0.88 \\
Grus I & 120.2 & 0.39 & 6.7 & 0.26 \\
Horologium I & 79 & 0.188 & 30.55 & 0.16 \\
Hydra II & 151 & 0.448 & < 8.24 & 0.335 \\
Leo V & 173 & 0.465 & 23.83 & 0.15 \\
Pegasus III & 215 & 0.228 & 40.73 & 0.185 \\
Pisces II & 183 & 0.438 & 8.93 & 0.24 \\
Reticulum II & 30 & 0.251 & 10.08 & 0.16 \\
Tucana II & 57.5 & 0.452 & 3.6 & 0.575 \\
Tucana III & 25 & 0.174 & < 2.29 & 0.215 \\
Triangulum II & 30 & 0.157 & < 46.1 & 0.14 \\
\hline \hline
\textbf{Draco} & 76 & 2.5 & 1.4 & 1.0 \\
\hline
\end{tabular}
\caption{The astrophysical parameters for our selected UFDs along with the
classical dSph Draco. The values of $r_h$, $\rho_s$ and
$r_s$ have been derived from the `central values' of the astrophysical parameters
listed in Table 8.1.}
\label{table:astro_param_dwarfs}
\end{center}
\end{table}
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/dnde_200GeV_r_1e-1kpc_Tucana_II.pdf}}
\label{fig:dnde_200GeV}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/dnde_2TeV_r_1e-1kpc_Tucana_II.pdf}}
\label{fig:dnde_2TeV}
\caption{The equilibrium $e^{\pm}$ distribution spectrum of Tucana II at
radial distance $r=0.1$~kpc for three pair-annihilation channels:
$b\bar{b}$ (red), $\mu^+ \mu^-$ (blue) and $\tau^+ \tau^-$ (green). We have
considered the NFW profile and fixed the parameters at $\langle \sigma v\rangle \, =
10^{-26}$ cm$^3$/s, $B \, = \, 1\, \mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s,
$\gamma_D = 0.3$. The spectra for DM masses of 200 GeV and 2 TeV are shown
in the left and right panels, respectively.}
\label{figure:dnde}
\end{figure}
\noindent In Fig.~8.4, we have shown the $e^{\pm}$ distribution spectrum of Tucana II
at a radial distance of 0.1 kpc for DM masses of 2 TeV and 200 GeV \cite{Bhattacharjee:2020phk}. The cascade
channels resulting from $b\bar{b}$ annihilation produce a larger amount
of $e^\pm$ than we can expect from the $\tau^{+}\tau^{-}$ or the
$\mu^{+}\mu^{-}$ annihilation channel. Thus, the integrated spectrum obtained
from the $b\bar{b}$ channel is larger than those from the $\tau^{+}\tau^{-}$ and
$\mu^{+}\mu^{-}$ channels \cite{Bhattacharjee:2020phk}. Fig.~8.4 also illustrates the relative softness
of the spectra among the three annihilation channels.
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.50\linewidth]{figures/sync_power_vs_E.pdf}}
\label{fig:synpower}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_tautau_mumu_200GeV_2TeV_flux.pdf}}
\label{fig:bbbartautau_flux_comp}
\caption{(a) The power-spectrum at five different frequency values for $B \, =
\, 1\, \mu$G. (b) The synchrotron flux densities for $b\bar{b}$ (red), $\mu^+
\mu^-$ (blue) and $\tau^+ \tau^-$ (green) annihilation channels. The fluxes for
DM masses 200 GeV and 2 TeV have been denoted with solid and dashed lines,
respectively. We have considered NFW profile and fixed the parameters at
$\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s, $B \, = \, 1\, \mu$G, $D_0 = 3
\times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\end{figure}
\noindent In Fig.~8.5(a), we have shown the power-spectrum ($P_{\rm
synch}(\nu,E,B)$) of Tucana II at $B= 1$ $\mu$G for frequencies between 5
MHz and 50 GHz. We find that $P_{\rm synch}$ for higher frequency values peaks at
comparatively higher energies \cite{Bhattacharjee:2020phk}.
For a specific frequency, the annihilation channel which produces a large
number of $e^\pm$ at high energies generates a larger amount of
synchrotron flux. Thus, at higher frequencies, the leptonic
annihilation channels dominate over the hadronic final states. We can
observe this feature in Fig.~8.5(b) \cite{Bhattacharjee:2020phk}. In that figure, for $m_{\rm DM} = 200$~GeV
and high frequencies, the $\tau^+ \tau^-$ annihilation channel dominates over the $b
\bar{b}$ final state, while at low frequencies, the $b \bar{b}$
annihilation channel dominates over the $\tau^+ \tau^-$ final state. The $e^\pm$
produced as the end products of the WIMP annihilation final states can
possess a maximum energy of $\sim M_{DM}$, and thus for higher DM masses we obtain
a harder $e^\pm$ spectrum (as already shown in Fig.~8.4). Therefore,
in Fig.~8.5(b) we observe the crossover between $b\bar b$ dominance and
$\tau^+ \tau^-$ dominance with changing frequency \cite{Bhattacharjee:2020phk}.
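\noindent A useful rule of thumb behind this crossover (standard synchrotron theory, not specific to our targets) is that an electron of energy $E$ in a magnetic field $B$ radiates most of its power near
\[
\nu_{\rm peak} \approx 4.7~{\rm MHz}\, \left(\frac{E}{1~{\rm GeV}}\right)^{2} \left(\frac{B}{1~\mu {\rm G}}\right),
\]
so probing higher frequencies at fixed $B$ selects progressively more energetic $e^{\pm}$, favouring the harder leptonic channels.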
\subsection{Results Pertaining to the UFDs}
\label{sec:synchro_ufd}
\noindent In this section, we have considered the radio data observed by two
radio surveys:
\begin{itemize}
\item The sky-survey data observed by the Giant Metrewave Radio Telescope
(GMRT) \cite{Intema:2016jhx}. It covers the sky between $-53^{\circ}$ and
$+90^{\circ}$ declination at $\nu =0.1475~{\rm GHz}$.
\item The NVSS survey data from the Very Large Array
(VLA) telescope \cite{condon1998}. It covers the sky between $-40^{\circ}$ and
$+90^{\circ}$ declination at $\nu = 1.4~{\rm GHz}$.
\end{itemize}
\noindent Unfortunately, no excess emission has been detected from the locations
of the UFDs by either telescope \cite{Bhattacharjee:2020phk}. Thus, the noise obtained from the direction of
the UFDs is translated into 95$\%$ C.L. upper limits on the flux density,
as listed in Table~8.5 \cite{Bhattacharjee:2020phk}. Here we would like to mention that radio images are
generally produced in units of flux per beam, where the beam is convolved with
the PSF of the respective telescope; for the final processed radio images,
the flux density is quoted in Jy. As neither of our considered telescopes
covers the full sky, we do not have information for some UFDs, e.g.
Tucana II \cite{Bhattacharjee:2020phk}. The observed upper limits on the flux density are then translated into
$\langle \sigma v \rangle$ upper limits for the three annihilation final states. In Fig.~8.6,
we have shown our results \cite{Bhattacharjee:2020phk}.
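\noindent Schematically, the conversion from map noise to the tabulated limits proceeds as in the sketch below (the rms values are illustrative placeholders, not the measured ones):
\begin{verbatim}
# Hedged sketch: converting image rms noise (per beam) into 2-sigma
# point-source flux density upper limits, as compiled in Table 8.5.
# The rms values below are illustrative placeholders only.
rms_per_beam_mJy = {"GMRT (147.5 MHz)": 3.5, "VLA/NVSS (1.4 GHz)": 0.45}

for survey, rms in rms_per_beam_mJy.items():
    print(f"{survey}: 2-sigma upper limit = {2.0 * rms:.2f} mJy")
\end{verbatim}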
\begin{table}[!h]
\centering
\begin{tabular}{|p{2.5cm}|p{4cm}|p{4cm}|}
\hline \hline
Galaxy & GMRT ($\nu = 147.5$ MHz) [mJy] & VLA ($\nu = 1.4$ GHz) [mJy] \\
\hline \hline
Aquarius II & $6.8 $ & $0.86$ \\
\hline \hline
Draco II & $9 $ & $1.1$ \\
\hline \hline
Eridanus II & $7.8 $ & No Data \\
\hline \hline
Grus I & $4.1$ & No Data \\
\hline \hline
Hydra II & $8.8 $ & $1.1 $ \\
\hline \hline
Leo V & $6 $ & $0.98$ \\
\hline \hline
Pegasus III & $10$ & $0.96$ \\
\hline \hline
Pisces II & $3.5$ & $0.88$ \\
\hline \hline
Triangulum II & $6 $ & $1$ \\
\hline \hline
Draco & $7.2$ & $9.2$ \\
\hline \hline
\end{tabular}
\caption{2$\sigma$ upper limits on the radio flux densities (in mJy) from the
sky-surveys performed with GMRT and VLA.
The locations of Carina II, Reticulum II, Horologium I and Tucana II \& III are
not covered by either survey.}
\label{table:radio_flux_upper_limits}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=.49\linewidth]{figures/bbbar_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\includegraphics[width=.49\linewidth]{figures/mumu_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\includegraphics[width=.5\linewidth]{figures/tautau_Exclusion_Curve_GMRT_VLA_comparison_sigmav_versus_M.pdf}
\hskip 10pt
\includegraphics[width=1.0\linewidth]{figures/legends_Exclusion.pdf}
\caption{95$\%$ C.L. $\langle \sigma v \rangle$ limits for the UFDs derived from the
upper limits on the flux densities observed by GMRT and VLA for the $b\bar{b}$,
$\tau^+\tau^-$ and $\mu^+\mu^-$ pair-annihilation channels. We
have considered the NFW profile and fixed the parameters at $B \, = \, 1\, \mu$G,
$D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\label{figure:sigmav_Exclusion_curve}
\end{figure}
\noindent Compared to GMRT, the VLA telescope has a larger effective area and
operates at an order of magnitude higher frequency, which reduces the
contribution from the galactic background. From Fig.~8.6, we find that for large
DM masses the $\langle\sigma v\rangle$ limits obtained from the NVSS images are
stronger than those obtained from the GMRT data, while for low DM masses the
GMRT data impose the strongest limits \cite{Bhattacharjee:2020phk}. This result is the outcome of the
relative efficiencies of the two telescopes and of the dependence of the
$e^\pm$ spectrum on the DM mass.
\subsection{Future Projections}
\noindent SKA is expected to operate over a wide range of radio frequencies,
between 50 MHz and 50 GHz. This enables SKA to observe the synchrotron emission
from DM annihilation in dSphs/UFDs \cite{Bull:2018lat, Colafrancesco:2015ola}.
We have calculated the
synchrotron flux from our considered UFDs and have examined the possibility of
observing these signals with SKA \cite{Bhattacharjee:2020phk}. Fig.~8.7 shows the estimated synchrotron
fluxes, for the UFDs listed in Table 8.4, for the $b\bar{b}$, $\tau^+ \tau^-$ and
$\mu^+ \mu^-$ annihilation channels \cite{Bhattacharjee:2020phk}. In Fig.~8.7, we have also shown the
sensitivity of SKA~\cite{braun2017ska,braun2019anticipated}
for 10, 100 and 1000 hours of observation time. Here we would like to
mention that SKA would possess a very wide effective area, and thus we can expect
it to cover all our selected UFDs \cite{Bhattacharjee:2020phk}.
\begin{figure}[h!]
\centering
\includegraphics[width=.3\linewidth]{figures/bbbar_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_new_udfs_ska_radio_sensitivity_200GeV.pdf}
\includegraphics[width=.3\linewidth]{figures/bbbar_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/tautau_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=.3\linewidth]{figures/mumu_new_udfs_ska_radio_sensitivity_2TeV.pdf}
\hskip 10pt
\includegraphics[width=0.9\linewidth]{figures/legends.pdf}
\caption{The synchrotron flux densities of our considered UFDs and the classical
dSph Draco for three annihilation channels,
$b\bar{b}$ (left), $\tau^+\tau^-$ (center) and $\mu^+\mu^-$ (right), and for two
DM masses, 200 GeV (top) and 2 TeV (bottom). For each
panel, we have considered the NFW profile and fixed the parameters at $\langle
\sigma v\rangle \, = 10^{-26}$ cm$^3$/s, $B \, = \, 1\, \mu$G, $D_0 = 3 \times
10^{28}$ cm$^2$/s, $\gamma_D = 0.3$. The values of $\rho_s$, $r_s$, $d$ and
$r_h$ have been taken from Table 8.4. For Hydra II, Triangulum II and Tucana
III, we only have upper limits on $\rho_s$ (Table 8.4); thus they can only
provide upper limits on the synchrotron flux densities.}
\label{figure:synflux_newgalaxies}
\end{figure}
\noindent For high DM masses, the $e^\pm$ spectrum becomes harder while the flux
normalization drops (at fixed $\langle \sigma v \rangle$, the annihilation rate
scales as $m_{\rm DM}^{-2}$), so the resulting synchrotron flux can fall outside
the detection range of SKA, which consequently reduces the detection feasibility.
We can observe this in Fig.~8.7 \cite{Bhattacharjee:2020phk}. In Fig.~8.7, we find that for a 200 GeV DM mass, the
radio emission of 12 UFDs originating from the three annihilation channels can be
detected at the 100-hour SKA sensitivity, while for a 2~TeV DM mass, the
synchrotron emission can be observed only for the $b\bar{b}$ annihilation channel,
and only at the 1000-hour sensitivity curve \cite{Bhattacharjee:2020phk}. Interestingly, from Fig.~8.7,
we also notice that for both 200 GeV and 2 TeV DM masses, only Draco can be
detected by SKA (even with the $\sim 10$-hour sensitivity curve) for all
three annihilation channels \cite{Bhattacharjee:2020phk}.
\section{Astrophysical Uncertainties and the Constraints}
\label{sec:uncertainty}
\noindent The limits that we have derived in the earlier sections are based on
the central values of the parameters listed in Tables~8.1 and 8.2. However, detailed
spectroscopic studies are not yet available for the newly discovered UFDs, and due to
the inadequate observations, the astrophysical parameters associated with them
may possess very large uncertainties. Thus, in order to draw any strong conclusion
from the analysis of UFDs, we need to address the possible uncertainties in
the constraints that we obtained from the gamma-ray and radio data.
\subsection{Uncertainties in the $\gamma$-ray Bounds}
\label{section:uncertainties_horo_tuc}
\noindent For the gamma-ray analysis, our insufficient knowledge of the shape of the DM
distribution is the main source of large uncertainties. Especially for the
newly discovered UFDs, only a few member stars have been detected, which is
the prime obstacle to constraining the DM distribution in them \cite{Funk:2013gxa}. As
we have already mentioned, $N$-body
simulation results favour the cuspy NFW profile, but the observational data for
some particular galaxies favour cored profiles for the DM distribution (e.g.
pseudo-isothermal and Burkert \cite{de_Blok_2001}). Hence, we would like to
investigate the role of the DM profile for UFDs, and for that purpose we have
chosen Horologium I, as it provides the strongest $\gamma$-ray limits for the
$b\bar{b}$ annihilation channel \cite{Bhattacharjee:2020phk}.
\noindent We have used the median value of the $J$-factor from Table~8.2 and have
derived the $\langle \sigma v \rangle$ upper limits for three DM density
profiles (Fig.~8.8). From Fig.~8.8, we observe that the Burkert profile imposes the
strongest limits, while the pseudo-isothermal profile provides the weakest
constraints \cite{Bhattacharjee:2020phk}. Though we have only shown the result for Horologium I, all our
selected UFDs show the same qualitative behaviour.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth,clip,angle=0]{figures/density_profile_comparison_horo.pdf}
\end{center}
\caption{Comparison between the upper limits on $\langle \sigma v \rangle $
obtained from the Fermi-LAT data for three density profiles for the $b\bar{b}$
final state.}
\label{figure:profile_comparison}
\end{figure}
\noindent Next, we have checked how the uncertainties associated with the
$J$-factor for the NFW profile influence our results. We have considered the
$1\sigma$ uncertainty band associated with the NFW profile (Table~8.2), and in
Fig.~8.9 we have shown the corresponding limits. Here, we again consider only
the $b\bar{b}$ annihilation channel, as for the $\gamma$-ray data this channel
provides the most stringent limits \cite{Bhattacharjee:2020phk}. From Fig.~8.9, we find that the UFDs
possess a large uncertainty band in the ($m_{DM}$, $\langle
\sigma v \rangle$) parameter space \cite{Bhattacharjee:2020phk}. From Eq.~2.5, the $\gamma$-ray flux
resulting from WIMP annihilation is proportional to the $J$-factor, and thus a
large uncertainty in the $J$-factor always translates into large uncertainties
in the $\langle \sigma v\rangle$ upper limits. In the future, with more detailed
spectroscopic studies, it might be possible to reduce the uncertainty band for the
newly discovered UFDs.
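\noindent The propagation of this uncertainty is straightforward: since the flux scales as $\Phi \propto J\,\langle \sigma v \rangle$, the derived limit scales as $\langle \sigma v \rangle^{\rm UL} \propto 1/J$; for instance, a $\pm 0.4$~dex uncertainty in $\log_{10} J$ stretches the exclusion curve by a factor of $\simeq 2.5$ in either direction.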
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.49\linewidth]{figures/horologium_uncertainty.pdf}
\includegraphics[width=.49\linewidth]{figures/tucana_uncertainty.pdf}
\end{center}
\caption{95$\%$ C.L. upper limits on $\langle \sigma v \rangle $ as a function
of the DM mass, $m_{\rm DM}$, for the `central value' of the $J$-factor derived by Pace \textit{et al.},
2019 and its relative uncertainties (Table~8.2). The
$\langle \sigma v \rangle $ limits for Horologium I and Tucana II are
shown in the left and right panels, respectively.}
\label{figure:crossuncertainty}
\end{figure}
\subsection{Uncertainties in the Synchrotron Fluxes}
\label{sec:synchrotron_uncertainty}
\noindent As for the gamma-ray fluxes, the uncertainties in the astrophysical
parameters (e.g. $d$, $r_{1/2}$ and $\sigma$) also affect the synchrotron
fluxes. Thus, in this subsection, we check the possible
uncertainties associated with the radio limits, and for that purpose we have used
the $1\sigma$ uncertainties of the parameters listed in Table 8.1. In
Fig.~8.10, we have shown the uncertainties in the synchrotron flux of
Tucana II for a DM mass of 200 GeV and the $b\bar{b}$ annihilation channel. We have
chosen Tucana II here because it shows the highest synchrotron emission \cite{Bhattacharjee:2020phk}.
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_d_fundamental_uncertainty.pdf}}
\label{fig:d_uncertainty}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_rhalf_fundamental_uncertainty.pdf}}
\label{fig:rhalf_uncertainty}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_sigma_fundamental_uncertainty.pdf}}
\label{fig:sigma_uncertainty}
\caption{The variation of synchrotron flux densities in Tucana II, for 200 GeV
$m_{DM}$ and $b\bar b$ final state, with 1$\sigma$ uncertainties in (a) $d$, (b)
$r_{\frac{1}{2}}$, (c) $\sigma$. We have considered NFW profile and fixed the
parameters at $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s , $B \, = \, 1\,
\mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\label{figure:uncertainty}
\end{figure}
\noindent The range of uncertainties shown in Fig.~8.10 is the
combination of the errors associated with $d$, $r_{1/2}$ and $\sigma$ \cite{Bhattacharjee:2020phk}. From
Table~8.1, we can notice that, compared to $r_{1/2}$ and $\sigma$, the error in
$d$ is relatively small, and thus $d$ does not impose a large uncertainty on the
synchrotron flux. However, both $r_{1/2}$ and $\sigma$ contribute a significant
amount to the uncertainties in the flux \cite{Bhattacharjee:2020phk}. Hence, in order to reduce the uncertainty
level, accurate measurements of these two parameters will play a
very crucial role, as Fig.~8.10 confirms \cite{Bhattacharjee:2020phk}.\\
\noindent A
further source of uncertainty is the assumed density distribution of the DM.
Until now, we have only used the NFW density profile to predict the
synchrotron flux, while in Fig.~8.11 we show the synchrotron flux from
Tucana II predicted for the NFW, ISO and Burkert DM profiles. Here, we would like to
mention that, unlike for the $\gamma$-ray limits, for the synchrotron flux the NFW profile
provides the highest flux, while Burkert gives the lowest \cite{Bhattacharjee:2020phk}. \\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.5\linewidth,height=3in]{figures/bbbar_NFW_Burkert_ISO_Tucana_II_200GeV.pdf}
\end{center}
\caption{The synchrotron flux densities in Tucana II predicted from NFW,
Burkert and ISO density profiles. We have considered 200 GeV $m_{DM}$ and $b\bar
b$ final state. Besides, we have fixed the parameters at $\langle \sigma
v\rangle \, = 10^{-26}$ cm$^3$/s , $B \, = \, 1\, \mu$G, $D_0 = 3 \times
10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\label{figure:sync_profile_dependence}
\end{figure}
\noindent Besides, the synchrotron fluxes also depend strongly on the magnetic field
($B$), the diffusion constant ($D_{0}$) and its exponent ($\gamma_D$). Unfortunately,
for UFDs, we do not have any precise knowledge of these parameters.
In section 8.3, we discussed the plausible values of $B$, $D_{0}$ and
$\gamma_{D}$, but to predict the synchrotron flux limits, we only
used their central values. Hence, to check the effect of these parameters
on the predicted amount of synchrotron flux, in Fig. 8.12 we show the
synchrotron flux for different values of $B$, $D_0$ and $\gamma_D$ within their
plausible ranges \cite{Bhattacharjee:2020phk}. We have taken $B$ in the range $0.5$--$10$
$\mu$G, $D_0$ in the range $3\times 10^{26}$--$10^{30}$ ${\rm cm^2/s}$ and $\gamma_D$
in the range $0.1$--$1$ \cite{Bhattacharjee:2020phk}. Since the magnetic field is the cause of the synchrotron
radiation, the flux increases as we go to higher magnetic fields (Fig.~8.12
(a)), while the diffusion constant shows the reverse effect (Fig.~8.12 (b)). For
a large value of $D_0$, the synchrotron flux decreases, as most of the
relativistic charged particles then leave the diffusion region without radiating
their complete energy.
For $\gamma_D$, a large value suppresses or enhances $D(E)$, depending on the energy. Since the synchrotron emission is driven by high-energy $e^\pm$
accelerating in the magnetic field (see Fig.~8.5 (a)), a large value of $\gamma_D$
strongly suppresses the flux at high frequencies, while below $\sim
1$~MHz the synchrotron flux is enhanced \cite{Bhattacharjee:2020phk}. Between 1 MHz and 5 MHz, the flux
rises to its peak value at $E(e^\pm) = 1$~GeV (Fig.~8.5 (a)). However, the
effect of $\gamma_D$ is relatively less crucial than that of $B$ and $D_{0}$ (Fig.~8.12
(c)) \cite{Bhattacharjee:2020phk}.
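\noindent The sign of the $\gamma_D$ effect can be read off directly from the power law (a minimal numerical sketch, using the same fiducial $D_0$ as above):
\begin{verbatim}
# Minimal sketch: a larger gamma_D enhances D(E) above the 1 GeV pivot
# (faster escape, hence less high-frequency flux) and suppresses it
# below the pivot (slower escape, hence more low-frequency flux).
def D(E_GeV, D0=3e28, gamma_D=0.3):
    return D0 * E_GeV**gamma_D

for E in (0.1, 1.0, 10.0):
    print(f"E = {E:4.1f} GeV: "
          f"D(gamma_D=0.1) = {D(E, gamma_D=0.1):.2e}, "
          f"D(gamma_D=1.0) = {D(E, gamma_D=1.0):.2e} cm^2/s")
\end{verbatim}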
\begin{figure}[!h]
\centering
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_different_B.pdf}}
\label{fig:flux_different_B}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_different_D0.pdf}}
\label{fig:flux_different_D0}
\subfigure[]
{\includegraphics[width=0.49\linewidth]{figures/Tucana_II_bbbar_200GeV_different_gamma.pdf}}
\label{fig:flux_different_gamma}
\caption{Variation of the synchrotron flux densities in Tucana II with (a) $B$,
(b) $D_0$ and (c) $\gamma_D$. We have considered a 200 GeV $m_{DM}$, the $b\bar b$
final state and the NFW density profile. In each panel, the parameters that are not
varied are fixed at $\langle \sigma v\rangle \, = 10^{-26}$ cm$^3$/s, $B \, = \, 1\,
\mu$G, $D_0 = 3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\label{fig:flux_diff_BD0gamma}
\end{figure}
\section{Conclusions $\&$ Discussions}
\label{section:conclusion}
\noindent The UFDs, dominated by their DM content, can also possess moderately
large magnetic fields, which makes them ideal targets for the
indirect detection of DM signals through a multiwavelength approach. In recent
times, several studies have derived strong limits on the
annihilation $\langle \sigma v \rangle$ from gamma-ray and radio data
\cite{Hoof:2018hyn, DiMauro:2019frs, Fermi-LAT:2016uux, Beck:2019ukt,
Regis:2017oet, Regis:2014tga}. For our work, we have considered the newly
discovered UFDs detected by Pan-STARRS, DES and
other spectroscopic surveys \cite{Bhattacharjee:2020phk}.
Using both the gamma-ray (detected by Fermi-LAT) and the radio data (detected by
VLA and GMRT), we have searched for the WIMP annihilation signal in 15 UFDs. We
have also predicted the possible spectra associated with the radio emission and
have checked whether it would be possible for SKA to detect any emission from
them \cite{Bhattacharjee:2020phk}.
\noindent With eleven years of Fermi-LAT data, we have not detected any
significant emission from the locations of the UFDs. Thus, we have derived
upper limits on $\langle\sigma v\rangle$ as a function of the DM mass for our chosen
DM annihilation channels. We have estimated the limits for 12 UFDs; for
Triangulum II, Hydra II and Tucana III, we only have upper limits on the
$J$-factor, so they cannot provide robust limits in the
($m_{DM}$, $\langle\sigma v\rangle$) parameter space \cite{Bhattacharjee:2020phk}. For the gamma-ray data, Horologium I
provides the most stringent constraints, but the obtained limits depend strongly
on the assumed distribution of DM. Most of our results have been derived using
the NFW profile. Besides, we have also performed a comparative study between the NFW,
Burkert and ISO profiles. For the gamma-ray analysis, the Burkert profile
imposes the strongest limits on $\langle \sigma v\rangle$, while the ISO profile gives
the weakest limits \cite{Bhattacharjee:2020phk}.
\noindent Regarding the synchrotron emission, we have considered the radio-flux
limits observed by GMRT and VLA and have derived the corresponding $\langle\sigma
v\rangle$ upper limits for the $b\bar{b}$, $\tau^+ \tau^-$ and $\mu^+ \mu^-$ final
states. We have compared the obtained radio limits with the limits from the
gamma-ray data and found that the VLA telescope has the potential to impose more
stringent limits than Fermi-LAT.
\noindent We have derived the possible synchrotron fluxes in the UFDs for a wide
range of frequencies, from 10 MHz to 100 GHz, and compared these with the
sensitivity curves of SKA. We find that for a 200 GeV DM mass and the $b \bar b$ final
state, it might be possible for SKA to detect the radio emission from our
considered UFDs even with its 10-hour sensitivity curve. For the $\tau^+\tau^-$
and $\mu^+ \mu^-$ final states, the emission could be detected with the 100-hour
exposure curve of SKA. On the other hand, for comparatively heavy DM
masses (say $\sim$ 2 TeV), the synchrotron spectrum becomes harder, and
thus a longer observation time would be necessary to detect the radio signal.
\noindent We also need to remember that the synchrotron fluxes depend strongly
on several astrophysical inputs, such as the magnetic field, the
diffusion coefficient, the distance, etc., whose values, due to insufficient
observations, are not very precise. Thus, in order to predict the synchrotron fluxes in
UFDs, we need the most accurate available information on the astrophysical
parameters, especially the magnetic field and the diffusion coefficient. We have
checked how the synchrotron flux in Tucana II varies with $B$, $D_0$ and
$\gamma_D$ for a DM mass of 200 GeV and the $b\bar{b}$ annihilation channel, and
noticed that the synchrotron emission depends strongly on these. Besides, the
emission is also controlled by the choice of the DM density distribution in the UFDs. We
have found that for Tucana II, the NFW density profile produces the maximum
amount of radio flux among the three density profiles. Our considered UFDs
possess large uncertainties in $r_{1/2}$, $d$ and $\sigma$. The uncertainties
in these astrophysical parameters can also affect the synchrotron emission
arising from the UFDs. We have performed the respective checks and have found that
the largest contribution comes from the uncertainties in $\sigma$.
\noindent Despite these uncertainties, we can safely conclude
that our study addresses a very intriguing aspect of the indirect search for DM
signals from UFDs. In Fig.~8.13, we have compared the most stringent limits
obtained from the VLA sky-survey with the best limits obtained from the
Fermi-LAT data for the three final states. From Fig.~8.13, we notice that for the
$\mu^+ \mu^-$ and $\tau^+ \tau^-$ final states, VLA imposes better limits
than Fermi-LAT, while for the $b\bar{b}$ final state, Fermi-LAT provides stronger
limits than VLA \cite{Bhattacharjee:2020phk}.
\begin{figure}[h!]
\centering
\includegraphics[width=.49\linewidth]{figures/bbbar_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\includegraphics[width=.49\linewidth]{figures/mumu_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\includegraphics[width=.49\linewidth]{figures/tautau_Exclusion_Curve_sigmav_versus_M_Fermilat_VLA_Comparison.pdf}
\caption{Comparison between the 95$\%$ C.L. $\langle \sigma v \rangle$ limits
obtained from the VLA and the Fermi-LAT data for three annihilation channels:
$b\bar{b}$, $\tau^+\tau^-$ and $\mu^+\mu^-$.
For the comparison, we have considered the strongest radio and gamma-ray limits,
obtained from Draco and Horologium I, respectively. We have considered the NFW profile and fixed the parameters at
$B \, = \, 1\, \mu$G, $D_0 =
3 \times 10^{28}$ cm$^2$/s, $\gamma_D = 0.3$.}
\label{figure:Exclusion_curve_VLA_Fermilat_comparison}
\end{figure}
\noindent In the context of indirect DM searches, we expect that the next-generation
$\gamma$-ray telescope CTA would play a very crucial role. CTA would have the
deepest sensitivity over a very wide range of
energies \cite{CTAConsortium:2018tzg} and would be able to probe the
thermal $\langle \sigma v \rangle$ rate for several DM-rich targets. Along
with CTA, in the radio sky, SKA is expected to become the most sensitive radio
telescope in the future. Besides, the Low-Frequency Array (LOFAR) and the SKA
precursors MeerKAT and ASKAP would also be complementary to CTA and SKA. We can
expect that all of these next-generation telescopes would be able to resolve
several crucial aspects of dark matter physics.
\chapter{List of publications}\label{publications}
\vspace{0.5cm}
\noindent{\large{\bf Publications relevant to the Thesis:}}
\begin{itemize}
{\item[1.] \emph{Constraints on dark matter models from
the observation of Triangulum-II with the Fermi Large Area Telescope}, Sayan
Biswas, {\bf Pooja Bhattacharjee}, Pratik Majumdar, Mousumi Das, Subinoy Das,
and Partha Sarathi Joarder, {\color{black} Journal of Cosmology and
Astroparticle Physics {\bf 11}, 003 (2017); arXiv:1705.00426 [astro-ph.HE] (2017)}.}
\vspace{2mm}
{\item[2.] \emph{Analysis of Fermi-LAT data from
Tucana-II: possible constraints on the
Dark Matter models with an intriguing hint of a signal}, {\bf Pooja
Bhattacharjee}, Sayan Biswas, Pratik Majumdar, and Partha Sarathi Joarder,
{\color{black} Journal of Cosmology and Astroparticle Physics {\bf 08}, 028
(2019); arXiv:1804.07542 [astro-ph.HE] (2018)}.}
\vspace{2mm}
{\item[3.] \emph{Multiwavelength analysis of low surface
brightness galaxies to study
possible dark matter signature}, {\bf Pooja Bhattacharjee}, Pratik Majumdar,
Mousumi Das, Subinoy Das, Partha Sarathi Joarder, and Sayan Biswas,
{\color{black} Monthly Notices of the Royal
Astronomical Society {\bf 501}, 4238
(2021); arXiv:1911.00369 [astro-ph.HE] (2019)}.}\vspace{2mm}
{\item[4.] \emph{Gamma-ray and Synchrotron Radiation
from Dark Matter annihilations in Ultra-faint Dwarf Galaxies}, {\bf Pooja
Bhattacharjee}, Debajyoti Choudhury, Kasinath Das, Dilip Kumar Ghosh, and Pratik
Majumdar, {\color{black} Journal of Cosmology and Astroparticle Physics {\bf 06}, 041
(2021); arXiv:2011.08917 [hep-ph] (2020)}.}
\end{itemize}
\vspace{0.5cm}
\noindent{\large{\bf Additional publications during the Ph.D. thesis but not forming part of it:}}
\begin{itemize}
{\item[1.] \emph{Investigating the region of 3C 397 in
High Energy Gamma rays}, {\bf Pooja Bhattacharjee}, Pratik Majumdar, Tulun
Ergin, Lab Saha, Partha Sarathi Joarder, {\color{black} Proceedings of the
International Astronomical Union {\bf 12}, 316 (2017)}; arXiv:1801.05961 [astro-ph.HE] (2018). }
\vspace{2mm}
{\item[2.] \emph{Probing the star formation origin of
gamma rays from 3FHL J1907.0+0713}, Tulun Ergin, Lab Saha, {\bf Pooja
Bhattacharjee}, Hidetoshi Sano, Shuta Tanaka, Pratik Majumdar, Ryo Yamazaki,
Yasuo Fukui, {\color{black} Monthly Notices of the Royal
Astronomical Society {\bf 501}, 4226
(2021)}; arXiv:2012.07357 [astro-ph.HE] (2020).}
\end{itemize}
\section{Introduction}\label{sec:int}
Inelastic collisions of charged particles with matter probe the response of many-electron systems ranging from linear response in the perturbative limit to the strong-field non-linear response in the non-perturbative regime at low projectile velocities. The characteristic energy loss, stopping power, and energy straggling (the second moment of the energy loss distribution) are among the most important variables quantifying this response. Their investigation dates back to the early work by Bohr \cite{bohr13,bohr15} more than one hundred years ago and continues to the present day \cite{bohr48,bethe1930,bailey,bloch1933,landau44,lindhard54,bonderup,andersen1978,besenbacher1980,ahlen80,sigmund96,sigmundbook,sigmund2001}. Present interest in the energy loss distribution derives from both fundamental aspects of inelastic many-body physics and a host of technological and radiation physics applications. The most prominent examples of the latter include hadron-therapy protocols in oncology, sub-surface layer deposition in semi-conductors, and material protection against long-term radiation exposure for space exploration.
Only recently have progress in methods for the exact numerical solution of the time-dependent many-electron problem and the increased availability of computational power opened up opportunities for fully ab-initio simulations of the many-electron response to charged-particle penetration. The prototypical case in point, for which an - within the numerical accuracy - exact solution is nowadays possible, is the inelastic scattering of antiprotons at helium \cite{bailey,borbely14}. This system constitutes the benchmark for the inelastic many-body response and for the energy loss distribution in inelastic collisions for several reasons: helium is the simplest atomic system where correlation effects play a prominent role. Antiprotons are the simplest case of a hadronic projectile that provides a time-dependent Coulomb field driving excitation and ionization without adding complications associated with the charge-transfer channel. Moreover, comparison between proton and antiproton projectile scattering allows for the exploration of the Barkas effect \cite{barkas63}, the variation of the many-electron response under charge conjugation. Pioneering computational studies of correlated two-electron charged-particle induced processes in He, including the Barkas effect in double ionization~\cite{reading1987a,reading1987b,ford94} and correlation effects in ionization of helium~\cite{reading96,reading97}, were performed by Reading and Ford using the forced impulse approximation~\cite{ford1985}. Nowadays, for $\rm{\bar p + He}$ collisions the time-dependent Schr\"odinger equation (TDSE) for the two-electron problem can be solved in its full dimensionality without any approximation.
On the experimental side, the low-energy antiproton ring (LEAR) at CERN has allowed the study of fundamental scattering and recombination processes involving antiprotons \cite{sigmund2001,borbely14,barkas63}. The extra-low energy antiproton (ELENA) ring is expected to significantly increase the flux of antiprotons usable in scattering experiments in the near future \cite{schiwietz96}. First full quantum calculations for $\rm{\bar p + He}$ beyond perturbation theory were performed within the single-active electron (SAE) model by Schiwietz et al.~\cite{schiwietz96} using an atomic-orbital (AO) expansion and by L\"uhr and Saenz \cite{luhr} employing a semiclassical close-coupling approach to the effective one-electron TDSE for $\rm{\bar p + He}$ using a B-spline basis for the radial wave functions. They found sizeable disagreement with the first stopping power measurement by Agnello et al.~\cite{agnello,lodi04} for helium both below and above the stopping power maximum and attributed the discrepancies with the experiment at lower energies to multi-electron or correlation effects neglected within the SAE model. A step towards partially including those was very recently taken by Bailey et al.~\cite{bailey} using a multi-configuration expansion of the He target wave function within the convergent close-coupling (CCC) approach. True two-electron processes such as double ionization and excitation-ionization were, however, still approximated by sequential one-electron excitation and ionization of He and He$^+$.
For straggling, i.e.~the second moment of the energy loss distribution, available experimental data as well as theoretical results are still remarkably scarce despite its importance for applications. For gas-phase targets only very few measurements are available \cite{bonderup,andersen1978,besenbacher1980,Vockenhuber}. Theoretical treatments, to date, rely on perturbation theory converging to the high-energy limit $T_B=4\pi Z_p^2Z_Te^4$ for electronic straggling derived by Bohr from classical binary encounter scattering \cite{bohr15} of the projectile on $Z_T$ independent free electrons of the target atom. Remarkably, at non-asymptotic energies ab-initio simulations appear to be missing to date.
In the present communication we present first fully ab-initio simulations of the electronic energy loss distribution for antiproton scattering at helium atoms. The two-electron response is treated - within the limits of numerical convergence - exactly and allows us, for the first time, to clearly identify the influence of electronic correlations on the energy loss distribution. Most notably, multi-electron shake-up processes yield energy loss fluctuations in excess of the celebrated Bohr straggling limit $T_B$. Atomic units are used unless stated otherwise.
\section{Theoretical methods}\label{sec:theoback}
\subsection{Background}
The passage of charged particles through matter with atom number density $N$ and thickness $\Delta x$ is accompanied by an energy loss, resulting, for an initially mono-energetic beam with energy $E_p=\frac{1}{2}m_pv_p^2$, in an energy loss distribution $P(\varepsilon)$, with $\varepsilon=E_p-E$ the energy transferred to the target atoms ($E$ denoting the projectile energy after the collision). For dilute matter such as gas targets, where non-linear density effects can be safely neglected, $P(\varepsilon)$ is related to the differential energy transfer (DET) cross section, $d\sigma(\varepsilon)/d\varepsilon$, as
\begin{equation}\label{eq:P(E)}
P(\varepsilon) = N\Delta x \frac{d\sigma(\varepsilon)}{d\varepsilon}.
\end{equation}
The mean energy loss, the first moment of $P(\varepsilon)$, is, accordingly, given by
\begin{equation}\label{eq:mean_E}
\langle \Delta E\rangle = N\Delta x S
\end{equation}
with
\begin{equation}\label{eq:S}
S=\int \varepsilon\frac{d\sigma(\varepsilon)}{d\varepsilon}d\varepsilon
\end{equation}
the energy loss cross section $S$, the mean loss per target atom. The so-called stopping power or stopping force, ($-\frac{dE}{dx}$) follows from Eqs. (\ref{eq:mean_E}) and (\ref{eq:S}) as
\begin{equation}\label{eq:stopping_p}
-\frac{\langle \Delta E \rangle}{\Delta x} = N(-S),
\end{equation}
where the minus sign indicates energy lost by the projectile and transferred to the electronic degrees of freedom of the target atom. Likewise, the straggling parameter $\Omega^2$ related to the second moment of the DET follows as
\begin{equation}\label{eq:straggling}
\Omega^2 = N\Delta xT,
\end{equation}
with
\begin{equation}\label{eq:T}
T=\int\varepsilon^2\frac{d\sigma(\varepsilon)}{d\varepsilon}d\varepsilon
\end{equation}
referred to as the atomic straggling cross section. $T$ is a measure for fluctuations in the energy loss distribution.
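In practice, once $d\sigma/d\varepsilon$ is tabulated on an energy grid, both moments follow from simple quadratures. The sketch below uses a toy spectrum as a stand-in for actual simulation output:
\begin{verbatim}
import numpy as np

# Hedged sketch: moments of a tabulated DET cross section.
# eps: energy-transfer grid (a.u.); dsde: d(sigma)/d(eps) (a.u.),
# here a toy exponential spectrum in place of real TDCC output.
eps = np.linspace(0.0, 50.0, 2001)
dsde = np.exp(-eps)

S = np.trapz(eps * dsde, eps)       # stopping cross section S
T = np.trapz(eps**2 * dsde, eps)    # straggling cross section T
print(f"S = {S:.4f} a.u., T = {T:.4f} a.u.")
\end{verbatim}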
We will focus in the following on the energy transfer to the electronic degrees of freedom. Energy transfer to the He nucleus (``nuclear stopping'') is negligible at high collision energies~\cite{schiwietz96} and provides only a small correction to the stopping cross section of $\leq 10\%$ even at the lowest energies ($E_p=3$~keV) considered here. Also for higher moments of the energy loss distribution, the nuclear scattering channel may contribute only a small tail extending to high energies due to rare ``hard'' binary collisions at a (screened) Coulomb potential. Nuclear contributions can be readily accounted for by elastic binary collisions at a screened Coulomb potential and will be, for completeness, included when we compare with experiments. We also note that for transmission through dense gas targets, the energy loss distribution $d\sigma(\epsilon)/d\epsilon$ resulting from the individual atomic collisions should be self-convoluted in a multiple scattering setting. Our focus in the following is on single collisions at a multi-electron atom in a dilute gas target.
Early theories on the stopping power ($-\frac{dE}{dx}$), or stopping force, based on either classical binary collision approximations \cite{bohr13,bohr15} or first-order quantum approximations \cite{bethe1930,bloch1933}, can be written in terms of the dimensionless so-called stopping number $L(E)$ as
\begin{equation}
\label{eq:dedx}
-\frac{dE}{dx} = N \frac{4\pi e^4Z_p^2Z_T}{m_ev_p^2} L(E),
\end{equation}
with $Z_p$ ($Z_T$) the nuclear charge of the projectile (target), $v_p$ the speed of the incident projectile, $m_e$ the mass of the electron and $N$ the number density of the target atoms. Well-known approximations to the stopping number include the classical Bohr logarithm
\begin{align}\label{eq:L_Bohr}
L_{\rm Bohr}(E) = \ln{\frac{1.123 m_ev_p^3}{Z_pe^2\omega}},
\end{align}
with $\omega$ the classical oscillator (or mean transition) frequency and the Bethe logarithm derived from the first Born approximation
\begin{align}\label{eq:L_Bethe}
L_{\rm Bethe}(E) = \ln{\frac{2 m_ev_p^2}{\hbar\omega}}.
\end{align}
A multitude of more sophisticated approximations has been developed over the years, approximately including
corrections for the Barkas effect, shell corrections for binding, so-called ``bunching'' effects accounting for deviations from the independent-electron response in atoms and solids, as well as interpolations, covering the stopping maximum, between the low-energy regime and the high-energy regime where the Bohr approximation applies \cite{sigmundbook,sigmund2001}.
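For a rough numerical feel for Eqs.~(\ref{eq:L_Bohr}) and (\ref{eq:L_Bethe}), the sketch below evaluates both stopping numbers in atomic units ($e=m_e=\hbar=1$); the mean transition frequency $\omega$ is an assumed representative value for He, of the order of the first ionization potential:
\begin{verbatim}
import numpy as np

# Hedged sketch of the Bohr and Bethe stopping numbers in atomic units.
Zp = 1.0       # |projectile charge| (antiproton)
omega = 0.9    # assumed mean transition frequency for He (a.u.)

def L_bohr(v):
    return np.log(1.123 * v**3 / (Zp * omega))

def L_bethe(v):
    return np.log(2.0 * v**2 / omega)

for v in (2.0, 6.32):   # roughly 100 keV and 1 MeV antiprotons
    print(f"v = {v:4.2f} a.u.: L_Bohr = {L_bohr(v):.2f}, "
          f"L_Bethe = {L_bethe(v):.2f}")
\end{verbatim}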
\subsection{Semiclassical impact-parameter approach}
The numerical solution of the time-dependent Schr\"odinger equation (TDSE) for a non-perturbative treatment of the electronic DET cross section involves, generally, the (semiclassical) impact parameter (IP) approach. Accordingly, the projectile is treated as a classical charged particle moving on a straight-line trajectory $\vec{R}(t)=\vec{b}+\vec{v}_pt$. Here $\vec{b}$ is the impact parameter vector, and $\vec{v_p}$ is the projectile's velocity. In turn, the electronic dynamics driven by the time-dependent Hamiltonian $H(t)$ is treated fully quantum mechanically by solving the TDSE. The IP approximation is well justified and leads to negligible errors for antiproton energies above a few keV, i.e., for the projectile energies considered in the following. The impact parameter dependent transfer probability density $P_{i\rightarrow f}(\varepsilon;b,v_p)$ from the initial state $i$ to the final state $f$, representing excitation or ionization, is determined by the projection of the numerically evolved state at time $t_t$, $\left |\Psi(b,v_p,t_t)\right\rangle$, parametrically dependent on impact parameter and projectile velocity, onto the corresponding exit-channel state $|\psi_f(E_f)\rangle$,
\begin{equation}\label{eq:prob}
P_{i\rightarrow f}(\varepsilon; b,v_p) =|\langle\psi_f(E_f)|\Psi(b,v_p,t_t)\rangle|^2,
\end{equation}
where $\varepsilon=E_f-E_i$ with $E_i$ the energy of the initial state (i.e.~the ground state of the target), and $E_f$ the energy of the final state (excited, singly, and doubly ionized states) at the termination point $t_t$ of the time propagation. As $t_t$ is finite in a realistic numerical simulation, $P_{i\rightarrow f}$ must be tested for convergence as a function of $t_t$. In Eq. (\ref{eq:prob}), the rotational symmetry of the He initial state was used: $P_{i\rightarrow f}$ depends only on the magnitude $b$ of the impact parameter vector. From Eq. (\ref{eq:prob}) the differential energy transfer cross section follows as
\begin{equation}
\frac{d\sigma}{d\varepsilon}(\varepsilon)=2\pi\int db b\sum\limits_{f} P_{i\rightarrow f}(\varepsilon; b, v_p),
\label{eq:det}
\end{equation}
where the sum extends over those degenerate final states $f$ that contribute to the fixed energy transfer $\varepsilon$.
The total energy loss or stopping cross section can be expressed as
\begin{equation}
S(v_p)=2\pi\int b S(b;v_p) db,
\label{eq:sint}
\end{equation}
where $S(b;v_p)$ is the impact parameter dependent mean energy loss given in terms of the loss distribution (Eq.~\ref{eq:prob}) by
\begin{equation}
S(b;v_p)=\sumint_f \varepsilon P_{i\rightarrow f}(\varepsilon,b;v_p) d\varepsilon
\label{eq:lossindirect}
\end{equation}
Analogously, the straggling cross section reads
\begin{equation}
T(v_p)=2\pi\int bT(b;v_p)db,
\label{eq:tint}
\end{equation}
where $T(b;v_p)$ is the impact parameter dependent straggling which can be calculated from the energy transfer probability density
\begin{equation}
T(b;v_p)= \sumint_f P_{i\rightarrow f}(\varepsilon,b;v_p) \left[\varepsilon - S(b;v_p)\right]^2d\varepsilon.
\label{eq:straggindirect}
\end{equation}
As an alternative to the explicitly channel-resolved expressions Eqs.~(\ref{eq:lossindirect}) and (\ref{eq:straggindirect}), $S$ and $T$ can be expressed directly in terms of expectation values of the unperturbed electronic Hamiltonian $H_0$, calculated with the initial state (of energy $E_0$) as well as with the evolved state $\left |\Psi(b,v_p,t) \right\rangle$,
\begin{equation}
S(b;v_p)=\left\langle E \right\rangle -E_0
\label{eq:stopping_direct}
\end{equation}
with
\begin{equation}
\left\langle E \right\rangle = \left\langle \Psi(b,v_p,t) | H_0 | \Psi(b,v_p,t) \right\rangle,
\label{eq:enexp}
\end{equation}
and
\begin{equation}
T(b;v_p)= \left\langle E^2 \right\rangle -2\left\langle E \right\rangle\left[E_0+S(b;v_p)\right]+\left[E_0+S(b;v_p)\right]^2
\label{eq:straggling_direct}
\end{equation}
with
\begin{equation}
\left\langle E^2 \right\rangle = \left\langle \Psi(b,v_p,t) | H_0^2 | \Psi(b,v_p,t) \right\rangle.
\label{eq:en2exp}
\end{equation}
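Note that inserting $S(b;v_p)=\langle E\rangle - E_0$ into Eq.~(\ref{eq:straggling_direct}) collapses it to the familiar variance form $T(b;v_p)=\langle E^2\rangle - \langle E\rangle^2$, as the following one-line check illustrates (a sketch with placeholder expectation values):
\begin{verbatim}
# Sketch: with E0 + S(b) = <E>, the direct straggling expression
# reduces to the variance of the final-state energy distribution.
def stopping_and_straggling(E_mean, E2_mean, E0):
    S = E_mean - E0             # mean energy transfer
    T = E2_mean - E_mean**2     # variance of the energy transfer
    return S, T
\end{verbatim}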
Within a fully converged calculation and in the limit $t\rightarrow \infty$, Eqs. (\ref{eq:stopping_direct},\ref{eq:straggling_direct}) would be equivalent to Eqs. (\ref{eq:lossindirect},\ref{eq:straggindirect}). However, since the numerical propagation must be terminated at a finite time $t_t$ when both the departing antiproton and the ionized electron are still at a moderately large distance from the He target as well as from each other, the projections Eq. (\ref{eq:prob}) as well as the expectation values Eqs. (\ref{eq:enexp},\ref{eq:en2exp}) may be affected by, in general different, termination errors. We estimate the size of such errors by comparing $S$ and $T$ calculated by the two alternative methods.
\subsection{Time-dependent close coupling method}
For accurate energy loss values and energy loss distributions a high-precision description of the collision between the projectile and one target atom is required. In order to achieve this goal, we numerically solve the time-dependent Schr\"odinger equation describing the quantum dynamics of the two active electrons of the He target in the presence of the passing-by antiproton \cite{borbely14}.
The time-dependent Hamiltonian is given by
\begin{equation}
H(t) = H_0 +\sum\limits_{i=1}^2\frac{1}{|\vec r_i-\vec R(t)|},
\label{eq:ham}
\end{equation}
with $H_0$ the unperturbed electronic Hamiltonian of the helium atom
\begin{equation}
H_0=\sum\limits_{i=1}^2\left ( -\frac{\nabla_i^2}{2} -\frac{2}{r_i} \right)+\frac{1}{|\vec r_1-\vec r_2|}.
\label{eq:ham0}
\end{equation}
We solve the TDSE
\begin{equation}
i\frac{\partial \Psi(t)}{\partial t}=H(t)\Psi(t)
\end{equation}
by the time-dependent close-coupling (TDCC) method~\cite{borbely14,foster08,feist08}. Briefly, the fully correlated two-electron wave function is represented in the basis of symmetrized coupled spherical harmonics~\cite{borbely14}, while the radial partial wave functions are represented using the finite element discrete variable representation (FEDVR) method \cite{schneider05,rescigno00}, where each radial coordinate is divided into segments with variable length (i.e. finite elements - FEs). Then, inside each FE the radial wave function is represented on a local polynomial basis (i.e. discrete variable representation - DVR) built on top of a Gauss-Lobatto quadrature to ensure the continuity at the FE boundaries.
For the temporal propagation of the wave function the short iterative Lanczos (SIL) method with adaptive time steps is applied \cite{park86,schneider2011}. The time-evolution of our system is started with the projectile located at $R_z=-40$~a.u., and with the ground state He target located at the center of our coordinate system. The ground state of helium was obtained by propagating an initial trial wave function in negative imaginary time ($t \rightarrow -i\tau$). The time-propagation is continued up to the termination time $t_t$, at which the antiproton reaches the position $R_z=80$~a.u. (its distance from the He atom at zero impact parameter). For channel-resolved energy transfer densities [Eq.(\ref{eq:prob})] we project onto asymptotic channel wave functions, which are constructed as a symmetrized product of single-electron wave functions; thus, they neglect the electron-electron and electron-projectile interactions in the continuum. These are correct only in the limit $R\rightarrow \infty$ and, if ionization is involved, $r_i\rightarrow\infty$. Therefore, errors due to the finite propagation time need to be checked.
\subsection{Mean-field approximation}
\label{sec:MFA}
In order to quantify the role of correlations in the DET distribution and to compare with previous non-perturbative calculations for stopping \cite{bailey,schiwietz96,cabrera05,luhr} we perform in parallel mean-field simulations. For the calculation of the DET distribution, they involve two separate approximations to be kept track of. The first one is the approximation of the exact Hamiltonian by the sum of two effective single-electron Hamiltonians
\begin{equation}
H(t)=\sum\limits_{j=1}^2H_{j}^\mathrm{eff}(t),
\end{equation}
with
\begin{equation}
H_{j}^\mathrm{eff}(t)=-\frac{\nabla_j^2}{2}+V_\mathrm{eff}(r_j)+\frac{1}{|\vec r_j-\vec R(t)|}
\end{equation}
where the effective mean-field potential $V_\mathrm{eff}$ accounts for the nuclear Coulomb field and the mean screening field provided by the other electron. Using a static screening potential as in the following,
\begin{equation}
V_\mathrm{eff}(r)=-\frac{Z_c+a_1e^{-a_2r}+a_3re^{-a_4r}+a_5e^{-a_6r}}{r},
\label{eq:modpot}
\end{equation}
where $Z_c$ is the charge of the residual ion and the model parameters are taken from \cite{tong05} in the TDSE
\begin{equation}
i\frac{\partial \Psi_j(t)}{\partial t}=H_j^\mathrm{eff}(t)\Psi_j(t)
\label{eq:tdseSAE}
\end{equation}
leads to the single-active electron (SAE) approximation~\cite{tong01,tong00,yao93,tong02}.
Alternatively, within time-dependent density functional theory (TDDFT), $V_{\mathrm{eff}}$ contains dynamical screening due to the self-consistent coupling of the evolution to the time-dependent electronic density $\rho = \sum\limits_j|\Psi_j|^2$ \cite{gross84,hohenberg64,bauer97,tong98}. Within the TDDFT approach, correlation effects can be taken into account on the mean-field level.
Final state probabilities for excitation (EX) and ionization (I) follow from the projection amplitudes $P_f^{(1)}=\left|\langle \Psi_f| \Psi_j \rangle\right|^2$. Unlike for the projection of the fully correlated two-electron wave function, these $P_f^{(1)}$ are one-electron probabilities on a mean-field level. Therefore, to account for multi-electron processes (specifically in the case of He, two-electron processes), a second approximation is invoked, the independent event model (IEM). This applies to both the SAE and TDDFT approaches. Accordingly the joint probability for ionizing, e.g., one and exciting the other electron is approximated by $P_{EX-I}=2P_{EX}^{(1)}P_{I}^{(1)}$.
Analogously, double ionization (DI) is approximated by $P_{DI}=P^{(1)}_I P^{(1)}_I$. Such an IEM for multi-electron processes can be modified to account for an assumed sequentiality of these processes. E.g., sequential double ionization is expressed as $P_{DI}^{\mathrm{Seq.}}={P_{I}^{(1)}}^+P_{I}^{(1)}$, where $P_{I}^{(1)}$ is the one-electron ionization probability for neutral helium, while ${P_{I}^{(1)}}^+$ is the ionization probability of $\mathrm{He}^+$ calculated from Eq. (\ref{eq:tdseSAE}) with the effective potential [Eq. (\ref{eq:modpot})] reduced to the bare Coulomb potential ($-2/r$).
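The IEM bookkeeping used here can be summarized by the following sketch (the function name is ours, for illustration only):
\begin{verbatim}
# Hedged sketch of the independent event model (IEM) composition.
def iem_probabilities(P_ex, P_i, P_i_plus):
    """P_ex, P_i: one-electron excitation/ionization probabilities
    of He; P_i_plus: one-electron ionization probability of He+."""
    P_ex_i = 2.0 * P_ex * P_i     # excitation-ionization
    P_di = P_i * P_i              # double ionization (uncorrelated)
    P_di_seq = P_i * P_i_plus     # sequential double ionization
    return P_ex_i, P_di, P_di_seq
\end{verbatim}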
\subsection{The Bohr model}
The pioneering study of the energy transfer process between an incident charged particle and atomic targets performed by Bohr \cite{bohr13,bohr15} dates back more than a century, predating even his quantum atomic model. Apart from historic interest, it still serves as a useful guide for the processes underlying contemporary models for stopping and straggling. Bohr's model for energy loss involves two contributions: the close-collision regime for small impact parameters $b<b_0$, approximated by binary Coulomb scattering between the incident charged particle and a classical electron of the target, and the distant-collision regime for large impact parameters, for which the projectile supplies the time-dependent electric field that excites the electron with oscillator (i.e. transition) frequency $\omega$ (reminiscent of Thomson's atom model of harmonically bound electrons).
A smooth transition between the two regimes is expected at an intermediate impact parameter $b_0$ which must simultaneously fulfill two requirements: $b_0$ should be large compared to the so-called collision diameter $b_c=Z_pe^2/(m_ev_p^2)$ \cite{bohr13,ahlen80} but small compared to the characteristic impact parameter for resonant excitation of the target electrons, $b_r\approx v_p/\omega$ \cite{bohr13,ahlen80}, by the time-dependent Coulomb field of the passing-by projectile. These two requirements can be combined ($b_c<b_r$) into the criterion for the validity of the Bohr model
\begin{equation}
v_p > \left(\frac{2Z_pe^2\omega}{m_e}\right)^{1/3},
\label{eq:bohrlimit}
\end{equation}
where the classical oscillator frequency $\omega$ should be replaced by a typical quantum excitation frequency the order of magnitude of which is given by the first ionization potential $I_{p_1}$, $\hbar\omega\simeq I_{p_1}$.
Within the framework of this classical model, Bohr also derived the (non-relativistic) high-energy limit for the straggling cross section, i.e. the second moment of the DET which is given by the projectile-energy independent constant
\begin{equation}\label{eq:TB}
T_B = 4\pi Z_p^2Z_Te^4.
\end{equation}
For later reference we emphasize that Eq. (\ref{eq:TB}) describes the response of $Z_T$ independent (classical) electrons implicitly invoking the IEM. Remarkably, to this date Eq. (\ref{eq:TB}) has remained the benchmark with which current experimental and theoretical results for straggling are to be compared.
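For the present case of antiprotons on helium ($|Z_p|=1$, $Z_T=2$), Eq.~(\ref{eq:TB}) evaluates to $T_B = 8\pi \simeq 25.1$ in atomic units, i.e.~$\simeq 5.2\times 10^{-13}\,{\rm eV^2\,cm^2}$.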
\section{Angular momentum basis convergence}
Since both the TDCC approach for solving the full two-electron dynamics and mean-field models such as the SAE are based on the numerical solution of the time-dependent Schr\"odinger equation, rigorous numerical checks are required. While convergence with respect to size and density of the radial grid, time-propagation parameters, or length of the projectile trajectory has been tested previously \cite{borbely14,tong01}, a critical issue of particular relevance for the present study is the convergence with respect to the number of partial waves or angular momenta included. During energetic binary collisions there is a large energy and momentum transfer from the projectile to the electron, which also implies a large angular momentum transfer. If the truncated angular momentum basis is not large enough to accommodate such angular momentum transfers, then the probability for generating high-energy continuum electrons will be significantly suppressed. The importance of including high-angular-momentum partial waves is not specific to the numerical solution of the TDSE or to the energy loss, but has been previously observed in a first Born approximation calculation of the angular distribution of high-energy electrons emitted in $\mathrm{p+He}$ collisions~\cite{madison73,manson75}. Since this effect is more pronounced at high projectile velocities, we have performed the angular momentum basis convergence tests at 1 MeV antiproton energy, the highest energy considered in this work.
We compare the impact parameter resolved DET, $d\sigma (b,\varepsilon)/d\varepsilon\equiv \sum\limits_{f} P_{i\rightarrow f}(\varepsilon; b) $, from the TDCC for the single ionization channel with the second electron remaining in the ground state, denoted in the following by SI0, with the corresponding SAE results (Fig. \ref{fig:angdep}a).
\begin{figure}
\includegraphics{fig1.pdf}
\caption{ The single ionization (SI0) impact parameter resolved DET $d\sigma(b,\varepsilon)/d\varepsilon$ (a) and the energy transfer square ($\varepsilon^2$) rescaled DET (b) as a function of electron ejection energy for the 1~MeV antiproton projectile at fixed $b=1$~a.u. impact parameter. This impact parameter value was chosen to coincide with the maximum of the impact parameter dependent DET. TDCC and SAE results with different angular momentum basis sizes ($L_{max}$) are compared for the SI0 single ionization channel. In (b) the binary collision energy loss ($2v_p^2$) is also indicated with a vertical dotted line. \label{fig:angdep}}
\end{figure}
We also checked for $\varepsilon^2 d\sigma (b,\varepsilon)/d\varepsilon$, the DET weighted with the squared energy transfer, which places enhanced weight on the large energy and angular momentum transfers entering straggling (Fig.~\ref{fig:angdep}b). Obviously, convergence is reached when the maximum classically allowed binary encounter momentum transfer $\Delta p\sim 2 v_p\simeq 13$~a.u.~at an impact parameter of the order of the atomic radius $b\simeq 1$~a.u., corresponding to an angular momentum transfer of $L_{max}\simeq b\Delta p\simeq 13$~a.u., can be accurately represented.
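The convergence criterion can be checked by elementary kinematics; the short sketch below (our illustration; it only encodes the classical estimates $\Delta p\simeq 2v_p$ and $L_{max}\simeq b\,\Delta p$ quoted above) reproduces the numbers for 1~MeV antiprotons:
\begin{verbatim}
# Kinematic estimate of the required angular momentum basis size
# (atomic units); numbers match those quoted in the text for 1 MeV.
from math import sqrt

HARTREE_EV, M_PBAR = 27.211, 1836.15
E_keV = 1000.0                     # antiproton energy
v_p = sqrt(2.0 * (E_keV * 1e3 / HARTREE_EV) / M_PBAR)  # ~6.3 a.u.

b = 1.0                            # impact parameter ~ atomic radius
dp = 2.0 * v_p                     # max binary-encounter momentum transfer
L_max = b * dp                     # required angular momenta ~13
print(round(v_p, 2), round(dp, 1), round(L_max))
\end{verbatim}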
\begin{figure}
\includegraphics{fig2.pdf}
\caption{The truncation error for stopping and straggling as a function of the angular momentum basis size $L_{max}$ at $E=1$~MeV ($v_p=6.32$~a.u.) and $b=1$~a.u. \label{fig:angconv}}
\end{figure}
In the case of the full TDCC simulation we choose a highly asymmetric partial wave basis with $0\le l_1\le L_{max}\le 20$ for the ionized electron while $l_2$ is constrained to low angular momentum $l_2=0,1,\dots,l_{2,max}$, where we find convergence already for $l_{2,max}=1$.
The truncation error as a function of the maximum of total (coupled) angular momentum $L_{max}$ included (Fig.\ref{fig:angconv}) shows that previously used small angular momentum basis sizes ($L_{max}\le 6$) for the calculation of ionization cross sections \cite{borbely14,bailey} are insufficient to accurately account for the stopping and straggling at high energies. We estimate the truncation error by comparison with the reference calculations $S_{ref}$ and $T_{ref}$ in which the contributions of very high $L\gg L_{max}\simeq 15$, taken from corresponding SAE calculations, are included, as for asymptotically high $L$ the influence of correlation effects can be safely excluded.
\section{Differential energy loss distributions}
\subsection{Differential energy transfer}
The differential energy transfer (DET) cross section, $d\sigma (\varepsilon) / d\varepsilon$, integrated over the impact parameter [Eq.(\ref{eq:det})] is the key input quantity of interest determining the stopping and straggling.
While $d\sigma(\varepsilon)/d\varepsilon$ is a continuous function above the first ionization threshold, $I_{p_1}=24.6$~eV, it is discrete below $I_{p_1}$. In order to display the continuity across the threshold we analytically continue $d\sigma (\varepsilon)/d\varepsilon$ for $0\le\varepsilon\le I_{p_1}$ as
\begin{align}
\frac{d\sigma}{d\varepsilon} = \sum\limits_{n,l,m}\sigma_{nlm}D(n,l)
\approx\sum\limits_{n,l,m}\sigma_{nlm}(E_{n+1,l}-E_{n,l})^{-1}
\end{align}
with $D(n,l)$ the spectral density of bound states of a given $n,l$ and $E_{n,l}$ the energy of the excited bound state. Both above and below the threshold the multiple (quasi) degeneracies are included. As expected for Coulomb interactions, $d\sigma(\varepsilon)/d\varepsilon$ is continuous and finite across the first ionization threshold (see inset of Fig. \ref{fig:etcs}). At all collision energies, ionization dominates over (exclusive) bound-state excitations (Fig. \ref{fig:etcs}).
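To make the analytic continuation below threshold concrete, the following sketch (our illustration; the actual computed He energies $E_{n,l}$ are replaced by a hydrogenic Rydberg series relative to the He$^+$ threshold, which is only a stand-in) shows how the spectral density $D(n,l)\approx(E_{n+1,l}-E_{n,l})^{-1}$ grows as the ionization threshold is approached, allowing a smooth match to the continuum:
\begin{verbatim}
# Illustration only: hydrogenic stand-in for the He excited-state
# energies, E(n) measured as energy transfer from the ground state.
I_P1 = 0.904   # first ionization potential of He in a.u. (~24.6 eV)

def E(n):
    # hydrogenic Rydberg series (effective charge 1 seen by the
    # excited electron) converging to the He+ threshold
    return I_P1 - 1.0 / (2.0 * n**2)

for n in range(2, 10):
    D = 1.0 / (E(n + 1) - E(n))   # spectral density of bound states
    print(n, round(E(n), 4), round(D, 1))
# D(n) ~ n^3 grows without bound as E(n) -> I_P1, so sigma_n * D(n)
# can remain finite and match the continuum at threshold.
\end{verbatim}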
\begin{figure}
\includegraphics{fig3.pdf}%
\caption{The energy transfer cross section $d\sigma(\varepsilon)/d\varepsilon $ as a function of $\varepsilon$ for different antiproton impact energies $E$. The inset shows the finite cross section at and the continuous transition across the first ionization threshold. \label{fig:etcs}}
\end{figure}
This implies that a typical ``mean'' energy transfer, i.e. the mean value of this distribution, is somewhat larger than $I_{p_1}$, suggesting also a suitable value for $\omega$ in Bohr's model [Eq. (\ref{eq:bohrlimit})]. We observe a power-law behavior of the high-energy tail of $d\sigma (\varepsilon)/d\varepsilon\sim\varepsilon^{-\alpha}$ ($\alpha\simeq 2.2$ for energies below the binary encounter limit) as expected. The DET also contains some fluctuations (most visible for 10~keV), which are the trace of the unresolved Fano resonances \cite{Kaldun738,Gruson734}. The discontinuities of the DET in the $0.9~\mathrm{a.u.}<\varepsilon < 2.9$~a.u.~energy transfer interval signify the opening of the ionization-excitation channels. As expected, for each collision energy the high-energy tail of the transfer distribution extends to the binary encounter limit $\varepsilon=(2v_p)^2/2$ above which electron emission is strongly suppressed and not resolved in our simulation.
\subsection{Comparison with the Bohr model}
Another energy loss distribution, differential in impact parameter, but integrated over all energy transfers $S(b;v_p)$ is of considerable conceptual interest. This distribution allows a direct comparison of the TDCC simulation with the original Bohr model for energy loss (Fig.~\ref{fig:bohr}).
\begin{figure*}
\includegraphics{fig4.pdf}
\caption{The mean energy loss $S(b;v_p)$ [Eq. (\ref{eq:lossindirect})] as a function of the impact parameter for different antiproton energies: 16 keV (a) and 1000~keV (b). The present TDCC results contain only the inelastic one-electron channels: single ionization (SI0) and single excitation (EX0) with the second electron remaining in the ground state, and are compared to the predictions of the Bohr model for close and distant collisions~\cite{bohr13,ahlen80}. The oscillator frequencies as given in the figures are obtained by fitting the distant collision model to the TDCC results at large impact parameters.\label{fig:bohr}}
\end{figure*}
Since (correlated) multi-electron processes are not included in the Bohr model, we restrict for this comparison the TDCC energy loss to one-electron processes by projecting the evolved state exclusively onto one-electron inelastic channels, i.e. pure single ionization (SI0) and pure single excitation with the second electron remaining in the ground state (EX0) [Eq.(\ref{eq:prob})]. It should be noted that in Bohr's close collision model the impact parameter of the projectile refers to the quasi-free electron while in the quantum calculation it denotes the distance to the ionic core of the target. Only upon averaging over the ensemble of classical electrons representing the initial bound state do the two agree. Moreover, in the close collision model the classical electron is assumed to be at rest in the target frame, thereby neglecting the initial momentum-space distribution, i.e. the Compton profile of the initial state.
While at lower projectile energies ($v_p<1$, Fig.~\ref{fig:bohr}a) neither the close-collision contribution expected to be applicable for $b<b_0$ nor the distant-collision contribution for $b>b_0$ approximates the TDCC results well, in the perturbative regime ($E=1$~MeV, $v_p=6.32$, Fig.~\ref{fig:bohr}b) the distant-collision contribution, overall, yields reasonable agreement. The latter is, obviously, related to the fact that, to some extent, it successfully mimics the dipole transitions by virtual photon absorption
\cite{[{}][{ p. 414}]heitler,[{}][{ ch. 15}]jackson}
closely related to first-order quantum perturbation theory. For the comparison we have fitted the oscillator frequency $\omega$ in Bohr's distant-collision model to TDCC results for large impact parameters ($b>b_0=v_p/\omega$). The resulting $\omega$ (Fig.~\ref{fig:bohr}) closely matches the expectation of a mean excitation energy slightly above $I_{p_1}=0.9$~a.u. (24.6 eV) as suggested by the DET cross section (Fig.~\ref{fig:etcs}).
As expected, for low antiproton energies ($E_p\sim 16$~keV, Fig.~\ref{fig:bohr}a) and outside the validity of the Bohr model (Eq.~\ref{eq:bohrlimit}), the transition between the close and distant collision regimes is not smooth. With increasing antiproton energies this transition smoothens and at high antiproton energies ($1000$~keV) there is a large impact parameter region where the close and distant collision energy loss predictions overlap. In view of the simplicity of the Bohr model, the agreement between Bohr's distant collision model (with fitted $\omega$) and the high precision TDCC calculations for energies above $\approx100$~keV, when restricted to one-electron processes, is remarkably good. The discrepancies to the close collision model are generally larger, in part due to the neglect of the atomic Compton profile. The latter deficiency can be corrected within the framework of more advanced classical models, in particular the classical-trajectory Monte Carlo method~\cite{abrines66,percival76,reinhold93}.
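For reference, the two Bohr-model branches shown in Fig.~\ref{fig:bohr} can be generated from their standard textbook forms; the sketch below (our reconstruction, assuming the close-collision branch is the classical Rutherford energy transfer to a free electron at rest and the distant-collision branch is the non-relativistic dipole excitation of a harmonically bound electron, with $\omega$ a placeholder fit parameter) illustrates the construction:
\begin{verbatim}
# Sketch of the Bohr-model energy-loss branches (atomic units); our
# reconstruction from textbook formulas, not the authors' actual code.
import numpy as np
from scipy.special import k0, k1

Z_p, v = 1.0, 6.32      # 1 MeV antiproton
omega = 1.0             # placeholder of order I_p1; fitted in the paper

def close_collision(b):
    # Rutherford energy transfer to a free electron initially at rest
    b_c = Z_p / v**2                    # collision diameter
    return 2.0 * Z_p**2 / (v**2 * (b**2 + b_c**2))

def distant_collision(b):
    # dipole excitation of a harmonically bound electron (non-rel.)
    xi = omega * b / v
    return 2.0 * Z_p**2 / (v**2 * b**2) * xi**2 * (k1(xi)**2 + k0(xi)**2)

b = np.logspace(-1, 1.5, 200)           # impact parameters in a.u.
S_close, S_dist = close_collision(b), distant_collision(b)
# The two branches overlap near b_0 ~ v/omega; the distant branch is
# cut off exponentially for b >> v/omega (adiabatic limit).
\end{verbatim}
The exponential cutoff of the distant-collision branch for $b\gg v_p/\omega$ is the behavior fitted to the TDCC at large impact parameters.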
\subsection{Multi-electron energy loss channels}
By comparing the present TDCC \emph{ab initio} approach with the IEM using SAE calculations (for details and definitions see Section \ref{sec:MFA}) as input, the importance of different single- and multi-electron energy loss channels and the influence of correlations in each of these can be assessed. To this end we group the final states of the energy-transfer probabilities $P_{i\rightarrow f}(\varepsilon,b,v_p)$ into four different exit channels: single ionization with the second electron remaining in the ground state (SI0), single excitation with the second electron in the ground state as well (EX0), simultaneous single ionization and shake-up excitation of the second electron (SI-EX), and double ionization (DI). We note that the contributions from double excitations leading to the formation of autoionizing resonances are implicitly included in the SI0 and SI-EX channels as we do not explicitly project onto them; their contribution to total stopping and straggling is negligibly small, in particular at high collision energies. The one-electron channels SI0 and EX0 allow for a direct comparison between the TDCC and mean-field models such as the present or previously employed SAE approximations and for probing for electron correlation effects in one-electron transitions. These are to be distinguished from true multi-electron transitions (SI-EX and DI) for which models \cite{bailey,schiwietz96,luhr} based on SAE approximations, TDDFT, or convergent close-coupling calculations have been invoked, in addition to the independent event model (IEM), thereby neglecting explicitly correlated transitions. Such dynamical correlations are fully accounted for by the present TDCC simulation.
\begin{figure*}
\includegraphics{fig5.pdf}
\caption{Exit-channel decomposition of the energy loss $S(b;v_p)$ as a function of the impact parameter for different antiproton energies (16, 100, 1000 keV) obtained within the framework of the present TDCC and SAE-IEM models. Single ionization (SI0), single excitation (EX0), correlated excitation-ionization (SI-EX), and double ionization (DI). In the SAE approximation the IEM is invoked to approximate multi-electron transitions.\label{fig:Svsb}}
\end{figure*}
Overall, the relative importance of different loss channels varies only weakly over a wide range of collision energies (Fig.~\ref{fig:Svsb}). The one-electron channels dominate, SI0 at small impact parameters and EX0 at large impact parameters. This explains the success of mean-field models for stopping. However, correlated multi-electron processes, in particular the SI-EX process, provide a significant contribution throughout and become nearly as large as SI0 at large impact parameters and collision energies. It is these processes for which the SAE and similar mean-field models with their uncorrelated IEM extension completely fail (Fig.~\ref{fig:Svsb}).
The SAE-IEM does not account for the correlated ``shake-up'' of the second electron during the ionization process. The importance of such shake-up has recently been also demonstrated in the timing of photoionization by attosecond pulses
\cite{feist14,Isinger893,ossiander16}.
Also for DI, the SAE-IEM mostly fails, with the remarkable exception of the perturbative regime at high collision energies. Here the SAE-IEM reproduces the TDCC quite well, indicating that direct uncorrelated double ionization dominates over shake-off.
By contrast, for true one-electron transitions the present SAE yields excellent agreement for SI0 at all energies and impact parameters while for EX0 the agreement is still good with minor deviations observable. The latter can be easily explained by the fact that the final excited state in neutral helium carries the signatures of electron correlations and screening missing in the SAE model.
\begin{figure*}
\includegraphics{fig6.pdf}%
\caption{ Exit channel decomposition of the straggling probability $T(b;v_p)$ as a function of the impact parameter for different antiproton energies (16, 100, 1000 keV) obtained in the framework of the present TDCC and SAE-IEM models. Single ionization (SI0), single excitation (EX0), excitation-ionization (SI-EX) and double ionization (DI). Within the SAE approximation, the IEM is used to approximate multi-electron transitions.\label{fig:Tvsb}}
\end{figure*}
The relative importance of the different loss channels changes when higher moments of the energy loss distribution are considered. Specifically, for the impact-parameter dependent straggling $T(b,v_p)$ (Fig. \ref{fig:Tvsb}) the SI0 channel still provides the largest contribution at small impact parameters, while at large impact parameter values the contributions from the SI-EX and EX0 channels dominate. Most notably, the contribution from shake-up ionization (SI-EX) to energy loss fluctuations is large enough to leave its mark on the integrated straggling cross section.
\section{The stopping and straggling cross sections}
The total stopping and straggling cross sections are calculated by integration over all impact parameters [Eqs. (\ref{eq:sint}) and (\ref{eq:tint})]. For the stopping cross section we can compare our present TDCC results for $\rm{\bar p}$ in He with available $\mathrm{\bar p}$ experimental data of Agnello {\it et al.} as reevaluated by Rizzini {\it et al.} \cite{agnello,lodi04} and with experimental data of Kottmann for stopping of negatively charged muons ($\mu^-$) in He~\cite{kottmann}. Since the mass of $\mu^-$ ($=207$~a.u.) is large compared to that of the electron ($m_\mu \gg m_e$), inelastic electronic processes induced by isotachic (equal velocity) $\mu^-$ and $\mathrm{\bar p}$ projectiles should closely resemble each other and allow for a direct comparison of their stopping cross section. We also compare with other available theoretical results (Fig. \ref{fig:compSexp}).
\begin{figure}
\includegraphics{fig7.pdf}%
\caption{The stopping cross section $S(v_p)$ as a function of antiproton impact energy. The present TDCC results are compared to the experimental data for $\mathrm{\bar p}$ \cite{agnello,lodi04} (experimental uncertainty as shaded blue area), and $\mathrm{\mu^-}$ colliding with He~\cite{kottmann}, and to other theoretical calculations: convergent close-coupling (CCC) of Bailey {\it et al.} \cite{bailey}; atomic-orbital close coupling of Schiwietz {\it et al.} \cite{schiwietz96}; semiclassical B-spline close-coupling calculations of L\"uhr {\it et al.} \cite{luhr}; and the binary collision theory of Sigmund {\it et al.} \cite{sigmund2001}. The TDCC results also include the contribution from nuclear stopping.
\label{fig:compSexp}}
\end{figure}
Among those, the most advanced available approach is that of Bailey {\it et al.} \cite{bailey} based on the convergent close-coupling (CCC) method in which the two-electron wave function is represented in a basis of target pseudostates and propagated in time numerically. Single ionization (SI0) and single excitation (EX0) are numerically accurately represented while double ionization and excitation ionization are approximated by a sequential independent event model.
With the exception of high projectile energies ($> 100$~keV), the agreement between the CCC and TDCC calculations is quite good, primarily because the dominant SI0 channel is treated in both approaches equivalently. The discrepancies observed for projectile energies above 100~keV can be attributed to the angular momentum basis truncation errors in the CCC calculations, in which the maximum angular momentum value was $L_{max}=6$. This suppresses the formation of the high-energy part of the ionization spectrum (see Fig.\ref{fig:angdep}) and leads to the underestimation of the stopping cross section.
Remarkably, both state-of-the-art calculations disagree with the $\mathrm{\bar p}$ experimental data by Agnello {\it et al.} \cite{agnello} as reevaluated in \cite{lodi04}. At low antiproton energies both the TDCC and CCC results lie outside the error bars. Below $10$~keV, the contribution from nuclear stopping sets in. We have therefore also included these corrections. However, the result still lies outside the quoted error interval of the experiment (Fig.\ref{fig:compSexp}). Most significantly, the stopping power maximum appears to be displaced in the experiment to higher collision energies (close to $150$~keV). As discussed in \cite{bailey} these discrepancies may result
in part from the complex processing of the experimental data, which gives only indirect access to $S(v_p)$. Closer agreement is found with the experimental $\mu^-$ data; in particular, the projectile velocity (or equivalent energy) at which the stopping cross section reaches its maximum coincides with that in the simulation. Yet, noticeable discrepancies in magnitude appear as well, whose significance is difficult to assess in view of the unknown experimental uncertainties.
Earlier calculations have been performed within the framework of one-electron models. They include the atomic-orbital close-coupling model of Schiwietz {\it et al.}~\cite{schiwietz96}, the electron-nuclear dynamics model by Cabrera-Trujillo {\it et al.}~\cite{cabrera05}, and the pseudostate close-coupling approach by L\"uhr {\it et al.} ~\cite{luhr}. Contributions of two-electron processes to the stopping cross section are approximately included in these calculations employing an IEM. The agreement between these one-electron models \cite{schiwietz96,luhr} and the present two-electron calculations is good at high antiproton energies ($>200$~keV), while at lower antiproton energies the one-electron calculations overestimate the stopping cross section. It is of conceptual interest to identify the origin of this discrepancy. To this end, we have decomposed the TDCC results for $S(v_p)$ into the contributions due to the one-electron processes SI0 and EX0 and the two-electron processes DI and SI-EX. At intermediate energies 50~keV~$< E <$~200~keV the SI0 and EX0 contributions agree very well with the present SAE model and also with that of L\"uhr {\it et al.} \cite{luhr}. In this energy regime the discrepancy is thus due to the overestimation of uncorrelated multi-electron transitions within the IEM.
At even lower energies ($< 50$ keV) additional discrepancies appear already in the SI0 and EX0 contributions to stopping indicating that in this strongly non-perturbative regime electron correlation effects play an important role already in one-electron transitions.
The binary collision theory for stopping by Sigmund et al.~\cite{sigmund2001}, originally designed for swift heavy ions, is based on an interpolation of the stopping numbers $L$ between the classical one-electron binary collision model at low collision speeds [Eq.(\ref{eq:L_Bohr})] and the Bethe limit at high speeds [Eq.(\ref{eq:L_Bethe})]. Multi-electron effects are indirectly included via shell corrections and screening. Its application to $\rm{\bar p + He}$ yields qualitative agreement with the \emph{ab initio} calculations while systematically underestimating the stopping cross section below and around the stopping maximum. At high collision energies, the binary collision theory converges, by construction, to the Bethe limit and agrees quite well with the present TDCC results.

For a critical comparison with the experiment it is worthwhile recalling the limitations of the present TDCC approach to stopping. Deviations from a classical straight-line trajectory or diffractive scattering are neglected from the outset. Such effects are, however, expected to be negligible for equivalent projectile energies above a few keV (for $\mu^-$ slightly higher than for $\mathrm{\bar p}$). Furthermore, the present asymmetric angular momentum basis, limiting the accessible angular momenta of the spectator electron of the primary collision event with the projectile to $l\le 1$, cannot reliably account for secondary violent electron-electron scattering events that have been identified in the equal energy sharing region of double ionization by high-energy photons \cite{amusia,schoffler} and in the electronic Thomas scattering \cite{thomas,mergel97,fischer,Gudmundsson} mediating simultaneous charge transfer and ionization in charged particle collisions. Such processes have, however, negligible cross section compared to single ionization or excitation-ionization and are not expected to significantly influence the stopping cross section. Short-ranged non-electromagnetic interactions, including weak interactions, are negligible as well since stopping is strongly dominated by distant collisions and long-range interactions. The main limitation is thus the non-relativistic treatment. While for the numerical results presented, with $\mathrm{\bar p}$ energies up to 1~MeV corresponding to $\gamma\le 1.005$, relativistic corrections are still very small, at higher energies they may become significant. The high-energy behavior of stopping and straggling discussed here refers to the non-relativistic limit only.
Despite its importance for characterizing the DET distributions, experimental results on straggling cross sections for gas targets are still remarkably sparse \cite{bonderup,andersen1978,besenbacher1980,hvelplund75} as most of the measurements are performed for solid targets \cite{sigmundbook}. In particular, for $\rm \bar p$ on He neither experimental data nor numerical simulations appear to be available. The only available measurements somewhat related to the present calculations are those of Bonderup \emph{et al.}~\cite{bonderup} and of Besenbacher \emph{et al.}~\cite{besen1981} performed for proton projectiles on a He gas target.
\begin{figure}
\includegraphics{fig8_updated.pdf}%
\caption{Comparison between straggling cross sections for protons and antiprotons colliding with helium, normalized to the Bohr straggling number $T_B=4\pi Z_p^2Z_Te^4$. Shown are the present TDCC and SAE results for antiprotons, and the experimental data by Bonderup {\it et al.} \cite{bonderup} and by Besenbacher \emph{et al.}~\cite{besen1981} for protons. The analytic predictions by Sigmund for antiprotons and for protons are also shown, the latter both with~\cite{sigmund2003,sigmund2010} and without~\cite{sigmundbook} corrections (see text). The smaller frame is a zoom-in on the high antiproton energy region showing the (non)convergence of the presented results towards the classical Bohr limit.
\label{fig:stragglingbohr}}
\end{figure}
We present here the first \emph{ab initio} straggling simulations for antiprotons using the fully correlated TDCC approach as well as the SAE model (Fig. \ref{fig:stragglingbohr}). The energy-independent Bohr straggling cross section $T_B$ [Eq.~(\ref{eq:TB})], based on the energy transfer in classical binary collisions with quasi-free electrons, sets the natural scale for straggling and provides a useful order-of-magnitude estimate. We therefore display the experimental results for $\mathrm{p}$ on He and the theoretical predictions for $\mathrm{ \bar p}$ on He in units of $T_B$. Of particular interest is the convergence behavior of $T(v_p)$ towards $T_B$ at large collision energies, as frequently assumed or implied. Indeed, the present SAE simulations as well as the TDCC restricted to one-electron processes, i.e. the sum of the SI0 and EX0 contributions, agree very well with each other over the entire range of energies investigated (3~keV~$\le E \le$~1~MeV) and monotonically approach the Bohr limit $T_B$.
The analytic theory by Sigmund~\cite{sigmundbook} also predicts a monotonic increase towards $T_B$ for $\mathrm{\bar p}$ while for the charge conjugate projectile $\mathrm{p}$ this limit is approached from above and displays a peak around 200~keV. The peak is significantly reduced and moved to higher projectile energies when including the effect of multiple Bohr oscillators, and shell and screening corrections in the binary theory formalism~\cite{sigmund2003,sigmund2010}. The enhanced straggling for $\mathrm{p}$ originates from the combined effects of the Barkas contribution ($\sim Z_p^3$) \cite{barkas63} and the charge transfer channel absent for $\mathrm{\bar p}$.
The full TDCC, however, which includes the many-electron transitions, does not appear to converge to $T_B$ (see the zoom-in in Fig.\ref{fig:stragglingbohr}); $T(v_p)$ lies about 10\% above $T_B$. The slight decrease by about 1\% of the TDCC straggling between 500~keV and 1~MeV antiproton energies (see zoom-in of Fig.\ref{fig:stragglingbohr}) might suggest a possible delayed convergence towards $T_B$, however, from above rather than from below. Based on both numerical evidence and analytic results for the non-relativistic high-energy limit this can be excluded since correlated two-electron processes, most importantly single ionization accompanied by shake-up to excited states of He$^+$ (SI-EX), provide a contribution to $T(v_p)$ that remains finite even as $E\rightarrow\infty$. It should be noted that a direct converged numerical calculation of $T(v_p)$ in the limit $v_p\rightarrow\infty$ is computationally not feasible within a given large, but finite-size angular momentum basis and FEDVR. Instead, we explore the asymptotic behavior of the two-electron contributions to the total straggling cross section by fitting and extrapolating the ratios $R$ of the channels (SI-EX)/(SI0+EX0) and (SI-EX+DI)/(SI0+EX0) to the asymptotic expansion in powers of $E^{-1}$,
\begin{equation}\label{eq:ratio}
R(E^{-1})=R_0+\frac{a}{E}+\frac{b}{E^2}+\frac{c}{E^3}.
\end{equation}
For both ratios the extrapolation yields nearly identical asymptotic limits of $R_0 \simeq 0.1$ (Fig. \ref{fig:straggratio}). While the DI channel provides a significant contribution at intermediate energies, the asymptotic behavior is dominated by the correlated shake-up.
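The extrapolation itself is a plain polynomial fit in the variable $E^{-1}$; a minimal sketch (the ratio values below are placeholders, \emph{not} the TDCC data of Fig.~\ref{fig:straggratio}):
\begin{verbatim}
# Minimal sketch of the asymptotic extrapolation, Eq. (3); the data
# below are placeholders, NOT the actual TDCC ratios of Fig. 9.
import numpy as np

E = np.array([0.125, 0.25, 0.5, 1.0])       # pbar energies in MeV
R = np.array([0.142, 0.121, 0.106, 0.100])  # hypothetical channel ratios

x = 1.0 / E                                 # fit variable E^{-1}
c3, c2, a, R0 = np.polyfit(x, R, 3)         # R = R0 + a x + c2 x^2 + c3 x^3
print(f"extrapolated high-energy limit R0 = {R0:.3f}")
\end{verbatim}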
\begin{figure}
\includegraphics{fig9.pdf}%
\caption{ Straggling cross section ratios as a function of the inverse projectile energy. The TDCC data for the ratios $R$ are fitted to the following function: $R(E^{-1}) = R_0+a/E+b/E^2+c/E^3$. The $R_0$ asymptotic values of the ratios are 9.4\% [SI-EX/(SI0+EX0)] and 9.7\% [(SI-EX+DI)/(SI0+EX0)]. \label{fig:straggratio}}
\end{figure}
It is the rapid decrease of the DI contribution between 500~keV and 1~MeV (or between 2 and 1~MeV$^{-1}$, Fig.~\ref{fig:straggratio}) which results in the slight decrease of $T(v_p)$ mentioned above (Fig.~\ref{fig:stragglingbohr}, inset).
The finite additional contribution of SI-EX and, to a lesser extent, of DI to the asymptotic straggling cross section beyond its one-electron limit is consistent with earlier analytic and numerical results of shake-up and shake-off processes in photoionization \cite{anderson93,tang97,aberg,dalgarno,byron,kabir}
which are closely intertwined with the analogous processes in charged-particle scattering \cite{burgdorfer97,mcquire}. Also in photoionization shake-up (i.e. SI-EX) and shake-off (DI) converge to a finite fraction of the SI0 cross section in the limit $E\rightarrow\infty$ with shake-up dominating over shake-off. The present findings are also consistent with earlier theoretical~\cite{lnagy99} and experimental~\cite{bailey95} data which show the SI-EX/SI0 ionization cross section ratio to converge towards a constant nonzero value for large projectile velocities.
From the asymptotic behavior of $R(E^{-1})$ [Eq.~(\ref{eq:ratio}) and Fig.~\ref{fig:straggratio}] we estimate that the true (non-relativistic) high energy limit of straggling is $T\simeq 1.09T_B$ rather than $T_B$. Straggling is thus shown, for the prototypical case of helium, to be sensitive to multi-electron processes not accounted for by the Bohr model. This effect is expected to be more pronounced for heavier multi-electron atoms with a plethora of available shake-up as well as correlated multiple shake-up-shake-off channels.
\section{Concluding Remarks}
We have presented a fully \emph{ab initio} simulation of the electronic energy loss distribution for antiproton scattering at He for antiproton energies ranging from $3$~keV to $1$~MeV using the TDCC method \cite{borbely14}. The first moment of this distribution, referred to as the stopping cross section, and the second moment, the straggling cross section, are compared with other theoretical predictions and with experiment when available. We have addressed the well-known discrepancy between several theoretical predictions~\cite{bailey,schiwietz96,luhr} and experimental data for the stopping cross section for $\mathrm{\bar p}$~\cite{agnello,lodi04} and $\mathrm{\mu^-}$~\cite{kottmann}. While we find slightly improved agreement with the $\mathrm{\bar p}$ experiment at high energies well above the stopping power maximum, the discrepancies to the data persist at lower energies, while our TDCC results are in good accord with the recent CCC calculation \cite{bailey}, both of which explicitly include electron correlation effects. While all numerical simulations employing either an effective one-electron or the full two-electron time-dependent Schr\"odinger equation agree with each other on the projectile velocity of the stopping maximum, the stopping power maximum of the $\mathrm{\bar p}$ experimental data differs from these predictions.
Compared to the $\mathrm{\bar p}$ data, better agreement is found between the $\mathrm{\mu^-}$ experimental data and the theoretical predictions, in particular on the position of the stopping maximum; however, the magnitude of the $\mu^-$ stopping cross section is somewhat lower than the theoretical prediction for all equivalent energies. The considerable spread and uncertainties in the
available experimental data suggest that further experimental tests are desirable.
Both the stopping cross section and the straggling cross section are shown to be influenced by electron correlation effects. In particular, the first \emph{ab initio} simulation for straggling reveals the importance of correlated multi-electron transitions. Ionization accompanied by excitation of the second electron provides a non-vanishing contribution even at high collision energies. This shake-up process is the reason why the Bohr straggling number is not approached at high energies but surpassed. The present results provide the first benchmark data for the role of correlations in stopping and straggling for the simplest multi-electron system, helium, for which a full \emph{ab initio} description is still feasible. We expect such multi-electron transitions in heavier atoms and more complex targets to be of even greater importance.
\section{Acknowledgments}
The present work was supported by FWF-SFB049 (Nextlite), FWF-SFB041 (VICOM), doctoral college FWF-W1243(Solids4Function), WWTF MA14-002, by the National Research, Development and Innovation Office (NKFIH) Grant No. KH 126886, and by the high performance computing resources of the Babe\c{s}-Bolyai University. JF acknowledges funding from the European Research Council under grant ERC-2016-STG-714870 and by the Ministerio de Econom{\'\i}a y Competitividad (Spain) through a Ram\'on y Cajal grant. XMT was supported by a Grants-in-Aid for Scientific Research (JP16K05495) from the Japan Society for the Promotion of Science. Part of the calculation was performed using COMA and Oakforest-Pacs supercomputers at the Center for Computational Sciences, University of Tsukuba.
In~\cite{Han_Kobayashi}, the capacity region of interference channel
is studied for both discrete and Gaussian cases. In this paper we
study the discrete interference channels $W_{Z|X,Y}$ and $\tilde
W_{\tilde Z|X,Y}$ with two pairs of encoders and decoders as shown
in Figure~\ref{fig.interference_channel}. The two channel inputs are
$x^n\in {\cal X}^n$ and $y^n\in {\cal Y}^n$, outputs are
$z^n\in \mathcal Z^n$ and $\tilde z^n\in \tilde{\cal Z}^n$
respectively, where $\cal X$, $\cal Y$, $\cal Z$ and $\tilde{\cal
Z}$ are finite sets. We study the basic interference channel where
each encoder only has a private message to the correspondent
decoder.
\begin{figure*}[htpb]
\setlength{\unitlength}{3247sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(8255,3049)(-400,-4098)
\thinlines {\put(1501,-1936){\vector( 1, 0){375}}
}%
{\put(3751,-3361){\vector( 1, 0){1275}}
}%
{\put(1501,-3286){\line( 0, 1){ 0}} \put(1501,-3286){\vector( 1,
0){375}}
}%
{\put(3751,-2011){\vector( 1,-1){1200}}
}%
{\put(3751,-1936){\vector( 1, 0){1275}}
}%
{\put(1876,-2311){\framebox(975,750){}}
}%
{\put(1876,-3586){\framebox(975,750){}}
}%
{\put(3751,-3286){\vector( 1, 1){1200}}
}%
{\put(2890,-3286){\vector( 1, 0){150}}
}%
{\put(2890,-1936){\vector( 1, 0){150}}
}%
{\put(5026,-2311){\framebox(1425,750){}}
}%
{\put(5026,-3586){\framebox(1425,750){}}
}%
{\put(6451,-1936){\vector( 1, 0){375}}
}%
{\put(6451,-3211){\vector( 1, 0){375}}
}%
{\put(6826,-3586){\framebox(975,750){}}
}%
{\put(6826,-2311){\framebox(975,750){}}
}%
{\put(7801,-1936){\vector( 1, 0){375}}
}%
{\put(7801,-3211){\vector( 1, 0){375}}
}%
\put(5201,-2011){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$W_{Z|XY}(z|x,y)$ }%
}}}}
\put(3076,-1986){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x^n (m_x)$}%
}}}}
\put(3076,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$y^n (m_y)$}%
}}}}
\put(-200,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$m_x \in \{1,2,...2^{nR_x}\}$}%
}}}}
\put(5201,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$\tilde W_{\tilde Z|XY}(\tilde z|x,y)$ }%
}}}}
\put(1951,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{Encoder Y}%
}}}}
\put(1951,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{Encoder X}%
}}}}
\put(6901,-2011){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{Decoder X}%
}}}}
\put(6901,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{Decoder Y}%
}}}}
\put(-200,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$m_y \in \{1,2,...2^{nR_y}\}$}%
}}}}
\put(8251,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$\widehat{m}_x(z^n)$}%
}}}}
\put(8251,-3286){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$\widehat{m}_y(\tilde z^n)$}%
}}}}
\put(6551,-1836){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z^n$}%
}}}}
\put(6551,-3086){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{$ \tilde z^n $}%
}}}}
\end{picture}%
\caption[ ]{A discrete memoryless interference channel of two users}
\label{fig.interference_channel}
\end{figure*}
Some recent progress on the capacity region for Gaussian
interference channels is reported in~\cite{Raul1bit}; however, the
capacity regions for general interference channels are unknown. We
focus our investigation on the capacity region for a specific coding
scheme: randomized fixed-composition codes, where the error
probability is defined as the average error over all code books with
a certain composition (type). Fixed-composition coding is a useful
coding scheme in the investigation of both
upper~\cite{Gallager_sphere} and lower bounds of channel coding
error exponents~\cite{Csiszar_graph} for point to point channels
and~\cite{Pokorny_MAC,Liu_huges} for multiple-access (MAC) channels.
Recently in~\cite{Raul_ISIT} and~\cite{Raul_journal}, randomized
fixed-composition codes are used to derive a lower bound on the
error exponent for discrete interference channels. A lower bound on
the maximum-likelihood decoding error exponent is derived; this is
a new attempt at investigating the error exponents for
interference channels. The unanswered question is the capacity
region of such coding schemes.
In this paper, we give a complete characterization of the
interference channel capacity region for randomized
fixed-composition codes. To prove the achievability of the capacity
region, we prove the positivity everywhere in the capacity region of
a universal decoding error exponent. This error exponent is derived
by the method of types~\cite{Csiszar:98}, in particular the
universal decoding scheme used for multiple-access
channels~\cite{Pokorny_MAC}. A better error exponent can be achieved
by using the more complicated universal decoding rules developed
in~\cite{Liu_huges}. But since they both have the same achievable
capacity region, we use the simpler scheme in~\cite{Pokorny_MAC}. To
prove the the converse, that the achievable region matches the outer
bound, we extend the technique in~\cite{Dueck_RC} for point to point
channels to interference channels by using the known capacity region
results for multiple-access channels. The result reveals the
intimate relations between interference channels and multiple-access
channels. With the capacity region for fixed-composition code
established, it is evident that this capacity region is a subset of
the Han-Kobayashi region~\cite{Han_Kobayashi}.
The technical proof of this paper is focused on the average behavior
of fixed-composition code books. However this fundamental setup can
be generalized in the following three directions.
\begin{itemize}
\item
It is obvious that there exists a code book whose decoding error
is no bigger than the average decoding error over all code books.
Hence the achievability results in this paper guarantee the
existence of a deterministic coding scheme with at least the same
error exponents and capacity region. More discussions are in
Section~\ref{sec.average_random}.
\item The focus of this paper is on the fixed-composition codes
with a composition $P$, where $P$ is a distribution on the input alphabet.
This code book
generation is different from the non-fixed-composition random
coding~\cite{Gallager} according to distribution $P$. It is well
known in the literature that the fixed-composition code gives
better error exponent result in low rate regime for point to point
channels~\cite{Csiszar_graph} and multiple-access
channels~\cite{Pokorny_MAC,Liu_huges}. It is the same case for
interference channels and hence the capacity region result in this
paper applies to the non-fixed-composition random codes.
\item Time-sharing is a key element in achieving capacity regions for
multi-terminal channels~\cite{Cover}. For instance, for
multiple-access channels, simple time-sharing among operational rate
pairs gives the entire capacity
region. We show that our fixed-composition codes can be
used to build a time-sharing
capacity region for interference channel. More interestingly, we
show that the simple time-sharing technique that gives the entire
capacity region for multiple-access channels is not enough to obtain
the largest capacity region; a more sophisticated time-sharing
scheme is needed. Detailed discussions are in
Section~\ref{sec.timeshare}.
\end{itemize}
The outline of the paper is as follows. In
Section~\ref{sec.setupmainresult} we first formally define
randomized fixed-composition codes and its capacity region and then
in Section~\ref{sec.mainresult} we present the main result of this
paper: the interference channel capacity region for randomized
fixed-composition code in Theorem~\ref{Thm.Int_capacity}. The proof
is later shown in Section~\ref{sec.proof} with more details in the
appendix. Finally in Section~\ref{sec.timeshare}, we argue that due
to the non-convexity of the randomized fixed-composition coding, a
more sophisticated time-sharing scheme is needed. This shows the
necessity of studying the geometry of the code-books for
interference channels.
\section{Randomized fixed-composition code and its capacity
region}\label{sec.setupmainresult}
We first review the definition of randomized fixed-composition code
that is studied intensively in previous works. Then the definition
of the interference channel capacity region for such codes is
introduced. Then we give the main result of this paper: the complete
characterization of the capacity region for randomized
fixed-composition codes.
\subsection{Randomized fixed-composition codes}
A randomized fixed-composition code is a uniform distribution on the
code books in which every codeword is from the type set with the
fixed composition (type).
First we introduce the notion of type set~\cite{Cover}. A type set
$\mathcal T^n(P)$ is a set of all the strings $x^n\in \mathcal X^n$
with the same type $P$ where $P$ is a probability
distribution~\cite{Cover}. A sequence of type sets $\mathcal T^n
\subseteq \mathcal X^n$ has composition $P_X$ if the types of
$\mathcal T^n$ converges to $P_X$, i.e. $\lim\limits_{n\rightarrow
\infty} \frac{N(a|\mathcal T^n)}{n}= P_X(a)$ for all $a\in \cal X$
that $P_X(a)>0$ and $N(a|\mathcal T^n)=0$ for all $a\in \cal X$ that
$P_X(a)=0$, where $N(a|\mathcal T^n)$ is the number of occurrences of
$a$ in type $\mathcal T^n$. We ignore the nuisance of the integer
effect and assume that $n P_X(a)$ is an integer for all $a\in \cal
X$ and $nR_x$ and $nR_y$ are also integers. This is indeed a
reasonable assumption since we study long block length $n$ and all
the information theoretic quantities studied in this paper are
continuous on the code compositions and rates. We simply denote by
$\mathcal T^n(P_X)$ the length-$n$ type set which has ``asymptotic''
type $P_X$, later in the appendix we abuse the notations by simply
writing $x^n\in P_X$ instead of $x^n\in \mathcal T^n(P_X)$.
Obviously, there are $|\mathcal T^n(P_X)|^{2^{nR_x}}$ many code
books with fixed-composition $P_X$ and rate $R_x$.
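As a concrete illustration (ours, not taken from the cited works), the type of a string and the size of its type class follow directly from the definitions; for a binary alphabet with $P_X=(1/2,1/2)$ and $n=8$ one finds $|\mathcal T^n(P_X)|=\binom{8}{4}=70$:
\begin{verbatim}
# Illustration: type of a sequence and size of its type class.
from collections import Counter
from math import factorial, prod

def type_of(xn):
    """Empirical distribution (type) of the string xn."""
    n = len(xn)
    cnt = Counter(xn)
    return {a: cnt[a] / n for a in cnt}

def type_class_size(xn):
    """|T^n(P)| = n! / prod_a (n P(a))!  (multinomial coefficient)."""
    cnt = Counter(xn)
    return factorial(len(xn)) // prod(factorial(c) for c in cnt.values())

xn = "00001111"
print(type_of(xn))          # {'0': 0.5, '1': 0.5}
print(type_class_size(xn))  # 70 = C(8,4)
\end{verbatim}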
In this paper, we study the randomized fixed-composition codes,
where each code book with all codewords from the fixed composition
is chosen with the same probability. Equivalently, over all
these code books, a code word for message $i$ is uniformly i.i.d
distributed on the type set $\mathcal T^n(P_X)$. A formal definition
is as follows. \vspace{0.1in}
\begin{definition}{Randomized fixed-composition codes}\label{def:randomized_coding}:
for a probability distribution $P_X$ on $\cal X$, a rate $R_x$
randomized fixed-composition-$P_X$ encoder picks a code book with the
following probability, for any fixed-composition-$P_X$ code book
$\theta^n =(\theta^n(1),\theta^n(2),..., \theta(2^{nR_x}))$, where
$\theta^n(i)\in \mathcal T^n(P_X)$, $i=1,2,..., 2^{nR_x}$, and $\theta^n(i)$
and $\theta^n(j)$ need not be distinct for $i\neq j$,
the code book $\theta^n$ is chosen, i.e. $x^n(i)=\theta^n(i), \ \ i=1,2,...,2^{nR_x}$, with probability
\begin{eqnarray*}
\left (\frac{1}{|\mathcal T^n(P_X)|}\right)^{2^{nR_x}}
\end{eqnarray*}
In other words, the choice of the code book is a random variable
$c_X$ uniformly distributed on the index set of all the possible
code books with fixed-composition $P_X$: $\{1,2,3,...,|\mathcal
T^n(P_X)|^{2^{nR_x}}\}$, while $c_X$ is shared between the encoder
$X$ and the decoders $X$ and $Y$. \vspace{0.1in}
\end{definition}
The key property of the randomized fixed-composition code is that
for any message subset $\{i_1, i_2,...i_l\}\subseteq \{1,2,...,
2^{nR_x}\}$, the code words for these messages are identically and
independently distributed on the type set $\mathcal T^n(P_X)$.
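Sampling from this ensemble is straightforward: each codeword is an independent uniform draw from $\mathcal T^n(P_X)$, which can be realized by shuffling a fixed template of composition $P_X$. A minimal sketch (our illustration; alphabet and parameters are arbitrary):
\begin{verbatim}
# Sample a randomized fixed-composition code book: each codeword is
# an independent, uniformly distributed element of T^n(P_X).
# (A uniform random shuffle of a fixed multiset is uniform on T^n(P_X).)
import random

def sample_codebook(P_X, n, num_messages, rng=random.Random(0)):
    template = [a for a, p in P_X.items() for _ in range(round(n * p))]
    assert len(template) == n, "n*P_X(a) must be integers"
    book = []
    for _ in range(num_messages):
        cw = template[:]          # copy the fixed-composition template
        rng.shuffle(cw)           # uniform over all arrangements
        book.append(tuple(cw))
    return book

book = sample_codebook({0: 0.5, 1: 0.5}, n=8, num_messages=4)
\end{verbatim}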
For randomized fixed-composition codes, the average error
probability $P_{e(x)}^n(R_x, R_y, P_X, P_Y)$ for $X$ is the
expectation of the decoding error over all messages, code books and
channel behaviors.
\begin{eqnarray}
P_{e(x)}^n(R_x, R_y, P_X, P_Y) &=& \left (\frac{1}{|\mathcal
T^n(P_X)|}\right)^{2^{nR_x}}
\left (\frac{1}{|\mathcal T^n(P_Y)|}\right)^{2^{nR_y}}\label{eqn.inter_error_avg_X}\\
&&\times\sum_{c_X}\sum_{c_Y}\frac{1}{2^{nR_x}}\sum_{m_x}\frac{1}{2^{nR_y}}\sum_{m_y}\sum_{z^n}
W_{Z|XY}(z^n|x^n(m_x),y^n(m_y))\,
1(\widehat m_x(z^n)\neq m_x)\nonumber
\end{eqnarray}
where $x^n(m_x)$ is the code word of message $m_x$ in code book
$c_X$, similarly for $y^n(m_y)$, $\widehat m_x(z^n)$ is the decision
made by the decoder knowing the code books $c_X$ and $c_Y$.
\subsection{Randomized fixed-composition coding capacity for interference channels}
Given the definitions of randomized fixed-composition coding and
the average error probability in (\ref{eqn.inter_error_avg_X}) for
such codes, we can formally define the capacity region for such
codes. \vspace{0.1in}
\begin{definition}{Capacity region for randomized fixed-composition codes}: for
a fixed-composition $P_X$ and $P_Y$, a rate pair $(R_x,R_y)$ is said to be achievable for $X$,
if for all $\delta>0$, there exists $N_\delta<\infty$, s.t. for all $n>N_\delta$,
\begin{eqnarray}
P_{e(x)}^n(R_x, R_y, P_X, P_Y)< \delta
\end{eqnarray}
We denote by $\mathcal R_x(P_X,P_Y)$ the closure of the union of the
all achievable rate pairs. Similarly we denote by $\mathcal
R_y(P_X,P_Y)$ the achievable region for $Y$, and $\mathcal
R_{xy}(P_X,P_Y)$ for $(X,Y)$ where both decoding errors are small.
Obviously
\begin{eqnarray}
\mathcal R_{xy}(P_X,P_Y)= \mathcal R_{x}(P_X,P_Y)\bigcap\mathcal
R_{y}(P_X,P_Y) .\label{eqn.union_xy}
\end{eqnarray}
\end{definition}
\vspace{0.1in} We only need to focus our investigation on $\mathcal
R_{x}(P_X,P_Y)$, then by the obvious symmetry, both $\mathcal
R_{y}(P_X,P_Y)$ and $\mathcal R_{xy}(P_X,P_Y)$ follow.
\subsection{Capacity region of the fixed-composition code, $\mathcal
R_{x}(P_X,P_Y)$, for $X$ }\label{sec.mainresult}
The main result of this paper is the complete characterization of
the randomized fixed-composition capacity region $\mathcal
R_{x}(P_X,P_Y)$ for $X$, as illustrated in~(\ref{eqn.union_xy}), by
symmetry, $\mathcal R_{xy}(P_X,P_Y)$ follows.
\begin{figure*}
\setlength{\unitlength}{3247sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5562,4416)(-1701,-4240)
\thinlines { \put(1951,-3961){\vector( 1, 0){4800}}
}%
{ \put(1951,-3961){\vector( 0, 1){3825}}
}%
\thicklines { \put(3601,-211){\line( 0,-1){1350}}
\put(3601,-1561){\line( 1,-1){1500}} \put(5101,-3061){\line(
0,-1){900}}
}%
{
\multiput(5101,-3061)(0.00000,89.9054){32}{\makebox(6.6667,10.0000){\SetFigFont{10}{12}{\rmdefault}{\mddefault}{\updefault}.}}
}%
{
\multiput(1951,-1561)(90.00000,0.00000){35}{\makebox(6.6667,10.0000){\SetFigFont{10}{12}{\rmdefault}{\mddefault}{\updefault}.}}
}%
\put(1626,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $R_y$}%
}}}}
\put(6301,-3836){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $R_x$}%
}}}}
\put(2701,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I$}%
}}}}
\put(2701,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $II$}%
}}}}
\put(4576,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $III$}%
}}}}
\put(4576,-1861){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $IV$}%
}}}}
\put(6001,-1861){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $V$}%
}}}}
\put(3301,-4186){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(X;Z)$}%
}}}}
\put(1076,-3111){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(Y;Z)$}%
}}}}
\put(901,-1661){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(Y;Z|X)$}%
}}}}
\put(4626,-4186){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(X;Z|Y)$}%
}}}}
\end{picture}%
\caption[ ]{Randomized fixed-composition capacity region $\mathcal
R_x(P_X, P_Y)$ for $X$, the achievable region is the union of Region
$I$ and $II$.}
\label{fig.inter_region}
\end{figure*}
\vspace{0.1in}
\begin{theorem}{Interference channel capacity region $\mathcal R_{x}(P_X,P_Y)$ for
randomized fixed-composition codes with compositions $P_X$ and $P_Y$:}
\label{Thm.Int_capacity}
\begin{eqnarray}\label{eqn.int_region}
\mathcal R_x(P_X, P_Y)&=&\{(R_x,R_y): 0\leq R_x< I(X;Z), 0\leq R_y\}\ \ \ \bigcup\nonumber\\
&& \{(R_x,R_y): 0\leq R_x< I(X;Z|Y), R_x+R_y< I(X,Y;Z)\}
\end{eqnarray}
where the random variables in (\ref{eqn.int_region}) are distributed as $(X, Y, Z)\sim
P_X P_Y W_{Z|X,Y}$. The region $\mathcal R_{x}(P_X,P_Y)$ is
illustrated in Figure~\ref{fig.inter_region}.
\end{theorem}
\vspace{0.1in}
The achievable part of the theorem states
that for a rate pair $(R_x, R_y)\in \mathcal R_x(P_X, P_Y)$, the
union of Region $I$ and $II$ in Figure~\ref{fig.inter_region}, for
all $\delta
>0$, there exists $N_\delta<\infty$, s.t. for all $n>N_\delta$, the
average error probability (\ref{eqn.inter_error_avg_X}) for the
randomized code from compositions $P_X $ and $P_Y$ is smaller than
$\delta$ for $X$:
$$P_{e(x)}^n(R_x, R_y, P_X, P_Y)< \delta $$ for some decoding rule.
Region $II$ is also the multiple-access capacity region for
fixed-composition codes $(P_X, P_Y)$ for channel $W_{Z|XY}$.
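The boundary of $\mathcal R_x(P_X, P_Y)$ is determined by the mutual informations evaluated at $(X,Y,Z)\sim P_XP_YW_{Z|X,Y}$; the following sketch (our illustration, with an arbitrary toy channel $W$) computes the corner points of Figure~\ref{fig.inter_region} numerically:
\begin{verbatim}
# Corner points of R_x(P_X,P_Y): I(X;Z), I(X;Z|Y), I(X,Y;Z) evaluated
# at (X,Y,Z) ~ P_X P_Y W(z|x,y). Toy binary channel, illustration only.
import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P_X = np.array([0.5, 0.5])
P_Y = np.array([0.5, 0.5])
W = np.zeros((2, 2, 2))            # W[x, y, z], arbitrary example
W[:, :, 0] = [[0.9, 0.5], [0.5, 0.1]]
W[:, :, 1] = 1.0 - W[:, :, 0]

P_XYZ = P_X[:, None, None] * P_Y[None, :, None] * W
P_Z  = P_XYZ.sum(axis=(0, 1))
P_XZ = P_XYZ.sum(axis=1)
P_YZ = P_XYZ.sum(axis=0)

I_XZ  = H(P_X) + H(P_Z) - H(P_XZ.ravel())
I_XYZ = H((P_X[:, None] * P_Y[None, :]).ravel()) + H(P_Z) - H(P_XYZ.ravel())
I_YZ  = H(P_Y) + H(P_Z) - H(P_YZ.ravel())
I_XZ_Y = I_XYZ - I_YZ              # chain rule: I(X;Z|Y)=I(X,Y;Z)-I(Y;Z)
print(I_XZ, I_XZ_Y, I_XYZ)
\end{verbatim}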
\vspace{0.1in}
The converse of the theorem states that for any rate pair $(R_x,
R_y)$ outside of $\mathcal R_x(P_X, P_Y)$, that is region $III$,
$IV$ and $V$ in Figure~\ref{fig.inter_region}, there exists
$\delta>0$, such that for all $n$,
$$P_{e(x)}^n(R_x, R_y, P_X, P_Y)> \delta $$ no matter what decoding rule is
used. Note that the error probability
$P_{e(x)}^n(R_x, R_y, P_X, P_Y)$, defined
in~(\ref{eqn.inter_error_avg_X}), is an average over all code books of the given composition.
The proof of Theorem~\ref{Thm.Int_capacity} is in
Section~\ref{sec.proof}.
\begin{figure}
\begin{center}
\includegraphics[width=90mm]{Inter_XY.eps}
\end{center}
\caption[ ]{A typical randomized fixed-composition capacity region
$\mathcal R_{xy}(P_X, P_Y)= \mathcal R_x(P_X, P_Y) \cap \mathcal
R_y(P_X, P_Y)$ is the intersection of the dotted line and the solid
lines; this capacity region is not necessarily convex. }
\label{fig.inter_region_XY}
\end{figure}
\subsection{Necessity of more sophisticated time-sharing
schemes}\label{sec.time-sharing-firstrun}
In the achievability part of Theorem~\ref{Thm.Int_capacity}, we
prove that the average error probability for $X$ is arbitrarily
small for a randomized fixed-composition code if the rate pair
$(R_x,R_y)$ is inside the capacity region $\mathcal R_x(P_X, P_Y)$.
For interference channels, it is obvious that the rate region for
both $X$ and $Y$ is:
\begin{eqnarray}\mathcal R_{xy}(P_X, P_Y)=
\mathcal R_x(P_X, P_Y) \cap \mathcal R_y(P_X,
P_Y),\label{eqn.XYREGION}
\end{eqnarray}
where $\mathcal R_y(P_X, P_Y)$ is defined in the same manner as
$\mathcal R_x(P_X, P_Y)$ but the channel is $ \tilde W_{\tilde
Z|XY}$ instead of $W_{Z|XY}$ as shown in
Figure~\ref{fig.interference_channel}. A typical capacity region
$\mathcal R_{xy}(P_X, P_Y)$ is shown in
Figure~\ref{fig.inter_region_XY}. It is not necessarily convex.
However, by a simple time-sharing between different rate pairs for
the same composition, we can convexify the capacity region. Then the
convex hull of the union of all such capacity regions of different
compositions gives a bigger convex achievable capacity region.
This capacity region of the interference channel is
\begin{eqnarray}
CONVEX\left(\bigcup_{P_X, P_Y} \mathcal
R_{xy}(P_X,P_Y)\right).\nonumber
\end{eqnarray}
It is tempting to claim that the above convex capacity region is the
largest one can get by time-sharing the ``basic'' fixed-composition
codes, as for multiple-access channels shown in~\cite{Cover}. However, as
will be discussed later in Section~\ref{sec.timeshare}, this is not
the case. A more sophisticated time-sharing scheme gives a bigger capacity
region.
This is an important difference between interference channel coding
and multiple-access channel coding because the fixed-composition
capacity region is convex for the latter and hence the simple
time-sharing gives the biggest capacity region~\cite{Cover}.
Time-sharing capacity is detailed in Section~\ref{sec.timeshare}.
\subsection{Existence of a good code for an interference
channel}\label{sec.average_random}
In this paper we focus our study on the average (over all messages)
error probability over all code books with the same composition. For
a rate pair $(R_x,R_y)$, if the average error probability for $X$ is
smaller than $\delta$, then obviously there exists a code book such
that the error probability is smaller than $\delta$ for $X$. This
should be clear from the definition of error probability
$P_{e(x)}^n(R_x, R_y, P_X, P_Y)$ in~(\ref{eqn.inter_error_avg_X}).
In the following example, we illustrate that this is also the case
for decoding error for both $X$ and $Y$. We claim without proof that
this is also true for ``uniform'' time-sharing coding schemes later
discussed in Section~\ref{sec.timeshare}. The existence of a code
book that achieves the error exponents in the achievability
part of the proof of Theorem~\ref{Thm.Int_capacity} can
also be shown. The proof is similar to that in~\cite{Gallager} and
Exercise 30~(b) on page 198~\cite{Csiszar}.
Similar to the
error probability for $X$ defined in~(\ref{eqn.inter_error_avg_X}),
we define the average joint error probability for $X$ and $Y$ as
\begin{eqnarray}
P_{e(xy)}^n(R_x, R_y, P_X, P_Y) &=& \left (\frac{1}{|\mathcal
T^n(P_X)|}\right)^{2^{nR_x}}\left (\frac{1}{|\mathcal
T^n(P_Y)|}\right)^{2^{nR_y}}
\sum_{c_X}\sum_{c_Y}\frac{1}{2^{nR_x}}\sum_{m_x}\frac{1}{2^{nR_y}}\sum_{m_y} \label{eqn.inter_error_avg_XY}\\
&&\ \ \ \big\{ \sum_{z^n} W_{Z|XY}(z^n|x^n(m_x),y^n(m_y)) 1(\widehat
m_x(z^n)\neq m_x)
\nonumber\\
&& \ \ \ \ \ \ + \sum_{\tilde z^n} \tilde W_{\tilde Z|XY}(\tilde
z^n|x^n(m_x),y^n(m_y)) 1(\widehat m_y(\tilde z^n)\neq
m_y)\big\}\nonumber
\end{eqnarray}
Consider a rate pair $(R_x,R_y)\in \mathcal R_{xy}(P_X, P_Y)= \mathcal
R_x(P_X, P_Y)\bigcap \mathcal R_y(P_X, P_Y) $. We know that for all
$\delta
>0$, there exists $N_\delta<\infty$, s.t. for all $n>N_\delta$, the
average error probability is smaller than $\delta$ for user $X$
and user $Y$:\\
$P_{e(x)}^n(R_x, R_y, P_X, P_Y)< \delta $ and
$P_{e(y)}^n(R_x, R_y, P_X, P_Y)< \delta
$. It is easy to see that the average joint error probability for
user
$X$ and $Y$ can be bounded by:
\begin{eqnarray}
P_{e(xy)}^n(R_x, R_y, P_X, P_Y) &=& P_{e(x)}^n(R_x, R_y, P_X, P_Y)+
P_{e(y)}^n(R_x, R_y, P_X,
P_Y)\nonumber\\
&\leq& 2\delta\label{eqn.union_bound_onxy}
\end{eqnarray}
From (\ref{eqn.inter_error_avg_XY}), we know that $P_{e(xy)}^n(R_x,
R_y, P_X, P_Y)$ is the average error probability of \textit{all}
$(P_X, P_Y)$-fixed-composition codes. Together with
(\ref{eqn.union_bound_onxy}), we know that there exists at least
\textit{one} code book such that the error probability is no bigger
than $2 \delta$.
Note that the converse for randomized coding does not guarantee that
there is no single good fixed-composition code book. The
converse claims that, the average (over all code books with the
composition) decoding error probability does not converge to zero if
the rate pair is outside the capacity region in
Theorem~\ref{Thm.Int_capacity}.
\section{Proof of Theorem~\ref{Thm.Int_capacity}}\label{sec.proof}
There are two parts of the theorem, achievability and converse.
The achievability part is proved by applying the classical method of
types in point to point channel coding and MAC channel coding for
randomized fixed-composition code. The converse is proved by
extending the technique first developed in~\cite{Dueck_RC} for point
to point channels to interference channels.
\subsection{Achievability}
We show that in the interior of the capacity region, i.e. the union
of Region $I$ and $II$ in Figure~\ref{fig.inter_region}, a positive
error exponent is achieved by applying the randomized
fixed-composition coding defined in
Definition~\ref{def:randomized_coding}. In
Sections~\ref{section:regionII} and~\ref{section:regionI}, we
describe the universal decoding rules for Region $II$ and $I$
respectively. We then present the error exponent results in
Lemma~\ref{lemma:regionII} in Section~\ref{section:regionIIEE} and
Lemma~\ref{lemma:regionI} in Section~\ref{section:regionIEE} that
cover Regions $II$ and $I$ respectively. Then in
Lemma~\ref{lemma:positiveness} in Section~\ref{sec.positivityEE}, we
show that these error exponents are positive in the interior of the
capacity region $\mathcal R_x(P_X, P_Y)$ and hence conclude the
proof of the achievability part in Theorem~\ref{Thm.Int_capacity}.
\vspace{0.1in}
\subsubsection{Decoding rule in Region $II$} \label{section:regionII} In Region $II$, we show that
decoder $X$ can decode both messages $m_x$ and $m_y$ with small
error probabilities. This is essentially a multiple-access channel
coding problem. We use the technique developed in~\cite{Csiszar}
to derive positive error exponents that parallel those in~\cite{Pokorny_MAC}.
The decoder is a simple maximum mutual
information\footnote{A more sophisticated decoding rule based on
minimum conditional entropy decoding for multiple-access channel is
developed in~\cite{Liu_huges}, it is shown that this decoding rule
achieves a bigger error exponent in low rate regime. The goal of
this paper is, however, not to derive the tightest lower bound on
the error exponent. We only need a coding scheme to achieve positive
error exponent in the capacity region in
Theorem~\ref{Thm.Int_capacity}. Hence we use the simpler decoding
rule here. } decoder~\cite{Csiszar}. This decoding rule is universal
in the sense that the decoder does not need to know the multiple
access channel $W_{Z|XY}$. We describe the decoding rule here: the
estimated message pair is the pair such that the
inputs to the channel $W_{Z|XY}$ and the output of the channel have
the maximal empirical mutual information, i.e.:
\begin{eqnarray}
(\widehat m_x(z^n), \widehat m_y(z^n))=\argmax_{i\in\{1,2,...,
2^{nR_x}\}, j\in\{1,2,..., 2^{nR_y}\}} I(z^n; x^n(i),
y^n(j))\label{eqn.decoder1}
\end{eqnarray}
where $z^n$ is the channel output and $x^n(i)$ and $y^n(j)$ are the
channel inputs for messages $i$ and $j$ respectively. $I(z^n; x^n,
y^n )$ is the empirical mutual information between $z^n$ and $(x^n,
y^n)$; the point to point maximal mutual information decoding
rule is studied in~\cite{Csiszar}.
If there is a tie, the decoder can choose an arbitrary winner
or simply declare error. In Lemma~\ref{lemma:regionII}, we show that
by using the randomized fixed-composition encoding and the maximal
mutual information decoding, a non-negative error exponent is
achieved in Region $II$.
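For illustration, the decoding rule~(\ref{eqn.decoder1}) can be
implemented by the following Python sketch (the function names and the
exhaustive search over message pairs are our own; this is not part of
the formal development):
\begin{verbatim}
import numpy as np
from collections import Counter

def empirical_mi(zn, wn):
    """Empirical mutual information I(z^n; w^n) from the joint type."""
    n = len(zn)
    joint = Counter(zip(zn, wn))
    pz, pw = Counter(zn), Counter(wn)
    return sum((c / n) * np.log2(c * n / (pz[z] * pw[w]))
               for (z, w), c in joint.items())

def mmi_decode(zn, xbook, ybook):
    """Maximum mutual information decoder: search over all (i, j)."""
    pairs = ((i, j) for i in range(len(xbook))
                    for j in range(len(ybook)))
    return max(pairs, key=lambda ij: empirical_mi(
        zn, list(zip(xbook[ij[0]], ybook[ij[1]]))))
\end{verbatim}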
\vspace{0.1in}
\subsubsection{Decoding rule in Region $I$} \label{section:regionI}In Region $I$, decoder
$X$ only estimates $m_x$ by treating the input of encoder $Y$ as
a source of random noise. This is essentially a point to point
channel coding problem. The channel itself has memory since the
input of encoder $Y$ is not memoryless. Similar to the multiple
access channel coding problem studied in Region $II$, we use a
maximal mutual information decoding rule:
\begin{eqnarray}
\widehat m_x (z^n)=\argmax_{i\in\{1,2,..., 2^{nR_x}\} } I(z^n;
x^n(i))\label{eqn.decoder2}
\end{eqnarray}
In Lemma~\ref{lemma:regionI}, we show that
by using the randomized fixed-composition encoding and the
maximal mutual information decoding, a non-negative error exponent
is achieved in Region $I$.
\vspace{0.1in}
\subsubsection{Lower bound on the error exponent in Region $II$}\label{section:regionIIEE}
\begin{lemma}{(Region $II$) Multiple-access channel error exponents (joint error probability).}\label{lemma:regionII}
For the randomized coding scheme described in
Definition~\ref{def:randomized_coding}, and the decoding rule
described in (\ref{eqn.decoder1}),
the decoding error probability averaged over all messages, code books and channel behaviors is
upper bounded by an exponential term:
\begin{eqnarray}
&&\Pr((m_x,m_y)\neq (\widehat m_x, \widehat m_y ))\nonumber\\
&=&\left (\frac{1}{|\mathcal T^n(P_X)|}\right)^{2^{nR_x}}\left (\frac{1}{|\mathcal T^n(P_Y)|}\right)^{2^{nR_y}}\label{eqn.inter_error_mac} \\
&&\ \
\sum_{c_X}\sum_{c_Y}\frac{1}{2^{nR_x}}\sum_{m_x}\frac{1}{2^{nR_y}}\sum_{m_y}\sum_{z^n}
W_{Z|XY}(z^n|x^n(m_x),y^n(m_y))
1\left((\widehat m_x(z^n), \widehat m_y(z^n))\neq (m_x, m_y)\right)\nonumber\\
&\leq& 2^{-n (E-\epsilon_n)}.
\end{eqnarray}
$\epsilon_n$ converges to zero as $n$ goes to infinity, and $E=\min \{E_{xy}, E_{x|y}, E_{y|x}\},\mbox{
where}$
\begin{eqnarray} E_{xy}&=&\min_{Q_{XYZ}:Q_X=P_X,
Q_Y=P_Y} D(Q_{Z|XY}\|W| Q_{XY})
+D(Q_{XY}\|P_X\times P_Y) +|I_Q(X,Y;Z)-R_x-R_y|^+\nonumber\\
E_{x|y}&=&\min_{Q_{XYZ}:Q_X=P_X, Q_Y=P_Y} D(Q_{Z|XY}\|W| Q_{XY})
+ D(Q_{XY}\|P_X\times P_Y) +|I_Q(X;Z|Y)-R_x|^+\nonumber\\
E_{y|x}&=&\min_{Q_{XYZ}:Q_X=P_X, Q_Y=P_Y} D(Q_{Z|XY}\|W| Q_{XY}) +
D(Q_{XY}\|P_X\times P_Y)
+|I_Q(Y;Z|X)-R_y|^+\nonumber
\end{eqnarray}
where $|t|^+ =\max\{0,t \}$ and the random variables $(X,Y,Z)\sim
Q_{XYZ}$ in $I_Q(X;Z|Y), I_Q(Y;Z|X)$ and $I_Q(X, Y;Z)$.
\end{lemma}
\vspace{0.1in}
{\em Remark 1: It is easy to verify that
$D(Q_{Z|XY}\|W| Q_{XY})+D(Q_{XY}\|P_X\times P_Y) = D(Q_{XYZ}\|
P_X\times P_Y\times W)$, so the expressions for the error exponents
can be further simplified. We use expressions similar to those
in~\cite{Pokorny_MAC} because they are more intuitive.}
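For completeness, the identity in Remark 1 is just the chain rule for
divergence:
\begin{eqnarray}
D(Q_{XYZ}\| P_X\times P_Y\times W)
&=& \sum_{x,y,z} Q_{XY}(x,y)Q_{Z|XY}(z|x,y)
\log\frac{Q_{XY}(x,y)\,Q_{Z|XY}(z|x,y)}{P_X(x)P_Y(y)\,W(z|x,y)}\nonumber\\
&=& \sum_{x,y} Q_{XY}(x,y)\log\frac{Q_{XY}(x,y)}{P_X(x)P_Y(y)}
+ \sum_{x,y}Q_{XY}(x,y)\sum_z Q_{Z|XY}(z|x,y)\log\frac{Q_{Z|XY}(z|x,y)}{W(z|x,y)}\nonumber\\
&=& D(Q_{XY}\|P_X\times P_Y) + D(Q_{Z|XY}\|W| Q_{XY}).\nonumber
\end{eqnarray}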
{\em Remark 2: The proof parallels that
in~\cite{Pokorny_MAC}, which is in turn an extension of the point to
point channel coding problem studied in~\cite{Csiszar}. The method
of types is the main tool for the proofs. The difference is that we
need to upper bound the average error probability over all code books
instead of showing the existence of \textit{a} good code book as
in~\cite{Pokorny_MAC}. Without giving details, we follow Gallager's
proof in~\cite{Gallager} and claim the existence of \textit{a} good
code with the same error exponent as that in~\cite{Pokorny_MAC} as a
simple corollary of Lemma~\ref{lemma:regionII}.}
\vspace{0.1in}
\proof First we have an obvious upper bound on the error probability
\begin{eqnarray}
&&\Pr((m_x,m_y)\neq (\widehat m_x, \widehat m_y ))\nonumber\\
&=& \Pr( m_x\neq \widehat m_x, m_y\neq \widehat m_y ) + \Pr(
m_x\neq \widehat m_x, m_y= \widehat m_y )
+ \Pr( m_x = \widehat m_x, m_y\neq \widehat m_y )\nonumber\\
&\leq&\Pr( m_x\neq \widehat m_x, m_y\neq \widehat m_y ) + \Pr(
m_x\neq \widehat m_x| m_y= \widehat m_y)
+ \Pr( m_y\neq \widehat m_y |m_x = \widehat m_x
)\label{eqn:unionbound}
\end{eqnarray}
The inequality~(\ref{eqn:unionbound}) follows from the fact that
$P(A,B)=P(A|B)P(B)\leq P(A|B)$. Now we upper bound each individual
error probability in (\ref{eqn:unionbound}) by a term
exponentially small in $n$. We only need to show that
\begin{eqnarray}
&&\Pr( m_x\neq \widehat m_x, m_y\neq \widehat m_y )\leq 2^{-n(E_{xy}-\epsilon_n)}, \label{eqn.proofpart1}\\
\mbox{ }&& \Pr( m_x \neq \widehat m_x| m_y = \widehat m_y )\leq
2^{-n(E_{x|y}-\epsilon_n)},\label{eqn.proofpart2}\\
\mbox{and }&& \Pr( m_y \neq \widehat m_y| m_x = \widehat m_x )\leq
2^{-n(E_{y|x}-\epsilon_n)}.\label{eqn.proofpart2.aa}
\end{eqnarray}
We prove (\ref{eqn.proofpart1}) and (\ref{eqn.proofpart2});
(\ref{eqn.proofpart2.aa}) follows from (\ref{eqn.proofpart2}) by
symmetry. The proofs are in Appendix~\ref{section.appendix1}, where
a standard method-of-types argument is used. \hfill$\square$
\vspace{0.1in}
\subsubsection{Lower bound on the error exponent in Region $I$}\label{section:regionIEE}
\begin{lemma}{(Region $I$) point to point channel coding error exponent (decoding $X$ only).}\label{lemma:regionI} For the randomized coding scheme described in
Definition~\ref{def:randomized_coding}, and the decoding rule
described in (\ref{eqn.decoder2}),
the decoding error probability averaged over all messages, code books and channel behaviors is
upper bounded by an exponential term:
\begin{eqnarray}
\Pr(m_x\neq \widehat m_x)&=&\left (\frac{1}{|\mathcal T^n(P_X)|}\right)^{2^{nR_x}}\left (\frac{1}{|\mathcal T^n(P_Y)|}\right)^{2^{nR_y}}\nonumber\\
&&\ \
\sum_{c_X}\sum_{c_Y}\frac{1}{2^{nR_x}}\sum_{m_x}\frac{1}{2^{nR_y}}\sum_{m_y}\sum_{z^n}
W_{Z|XY}(z^n|x^n(m_x),y^n(m_y))
1\left(\widehat m_x(z^n)\neq m_x\right)\nonumber\\
&\leq& 2^{-n (E_x-\epsilon_n)}.\label{eqn.inter_error_avg}
\end{eqnarray}
$\epsilon_n$ converges to zero as $n$ goes to infinity, and
\begin{eqnarray}
E_{x}&=&\min_{Q_{XYZ}:Q_X=P_X, Q_Y=P_Y} D(Q_{Z|XY}\|W| Q_{XY}) +
D(Q_{XY}\|P_X\times P_Y) +|I_Q(X;Z)-R_x|^+\nonumber
\end{eqnarray}
\end{lemma}
\vspace{0.1in}
\proof We give a unified proof for (\ref{eqn.proofpart1}),
(\ref{eqn.proofpart2}) and (\ref{eqn.inter_error_avg}) in
Appendix~\ref{section.appendix1}. \hfill$\square$ \vspace{0.1in}
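To get a feel for the magnitude of $E_x$, the following Python sketch
evaluates it for binary alphabets by a coarse brute-force grid search
(our own illustrative code, not part of the proof; it assumes
$W(z|x,y)>0$ everywhere, uses the identity from Remark 1, and the
parameterization of $Q_{XY}$ by $q_{11}$ and the grid resolution are
our own choices):
\begin{verbatim}
import itertools
import numpy as np

def kl(p, q):
    """KL divergence D(p||q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 1e-12
    return float(np.sum(p[m] * np.log2(p[m] / q[m])))

def E_x(P_X, P_Y, W, R_x, grid=9):
    """Grid search for the Region-I exponent E_x with binary X, Y, Z.
    P_X, P_Y: length-2 arrays; W[x, y, z] = W(z|x, y)."""
    best = np.inf
    ref = P_X[:, None, None] * P_Y[None, :, None] * W  # P_X x P_Y x W
    # Q_XY with marginals (P_X, P_Y) has one free parameter Q_XY(1,1)
    lo, hi = max(0.0, P_X[1] + P_Y[1] - 1.0), min(P_X[1], P_Y[1])
    for q11 in np.linspace(lo, hi, grid):
        Q_XY = np.array([[1 - P_X[1] - P_Y[1] + q11, P_Y[1] - q11],
                         [P_X[1] - q11, q11]])
        for a, b, c, d in itertools.product(np.linspace(0, 1, grid),
                                            repeat=4):
            Qz = np.array([[[1 - a, a], [1 - b, b]],
                           [[1 - c, c], [1 - d, d]]])  # Q_{Z|XY}
            Q = Q_XY[:, :, None] * Qz                  # joint Q_{XYZ}
            Q_XZ = Q.sum(axis=1)
            mi = kl(Q_XZ.ravel(),                      # I_Q(X;Z)
                    np.outer(Q_XZ.sum(1), Q_XZ.sum(0)).ravel())
            best = min(best, kl(Q.ravel(), ref.ravel())
                       + max(0.0, mi - R_x))
    return best
\end{verbatim}
On a toy channel, the returned value is positive whenever $R_x$ is
safely below $I(X;Z)$ and drops to (numerically) zero as $R_x$
approaches it, consistent with Lemma~\ref{lemma:positiveness}.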
With Lemma~\ref{lemma:regionII} and Lemma~\ref{lemma:regionI}, we
know that some non-negative error exponents can be achieved for the
randomized $(P_X,P_Y)$ fixed-composition code if the rate pair
$(R_x,R_y)\in \mathcal R_x(P_X,P_Y)$. This is because both
Kullback-Leibler divergence and $|\cdot|^+$ are always non-negative.
Now we only need to show the positiveness of those error exponents
when the rate pair is in the interior of $ \mathcal R_x(P_X,P_Y)$.
\vspace{0.1in}
\subsubsection{Positiveness of the error exponents}\label{sec.positivityEE}
\begin{lemma}{}\label{lemma:positiveness}
For rate pairs $(R_x,R_y)$ in the interior of $ \mathcal
R_x(P_X,P_Y)$ defined in
Theorem~\ref{Thm.Int_capacity}:
$$\max\{\min\{E_{xy}, E_{x|y}, E_{y|x}\} , E_{x}\} >0.$$
More specifically, we show two things. First, if $R_x< I(X;Z)$,
where $(X,Z)\sim P_{X}\times P_{Y}\times W_{Z|XY}$, then $E_x>0$.
This covers Region $I$.
Secondly, if $R_x< I(X;Z|Y)$, $R_y< I(Y;Z|X)$ and $R_x+R_y< I(X,Y;Z)$,
where $(X,Y,Z)\sim P_{X}\times P_{Y}\times W_{Z|XY}$, then $\min\{E_{xy}, E_{x|y},
E_{y|x}\}>0$; this covers Region $II$.\\
\proof First, suppose that for some $R_x< I(X;Z)$, $E_x\leq 0$.
Since both Kullback-Leibler divergence and $|\cdot|^+$ are
non-negative functions, we must have $E_x=0$ and hence there exists
a distribution $Q_{XYZ}$, s.t. $Q_X=P_X$, $Q_Y=P_Y$ and all the
individual non-negative functions are zero:
\begin{eqnarray}
D(Q_{XY}\|P_X\times P_Y)&=&0 \nonumber\\
D(Q_{Z|XY}\|W| Q_{XY})&=&0\nonumber\\
|I_Q(X;Z)-R_x|^+&=&0\nonumber
\end{eqnarray}
The first equation tells us that $Q_{XY}= P_{X}\times P_Y$. Then the
second equation becomes $ D(Q_{Z|XY}\|W| P_{X}\times P_{Y})=0$,
this means that $Q_{Z|XY}\times P_{X}\times P_{Y}= W\times
P_{X}\times P_{Y}$, so $I_Q(X;Z)= I(X;Z)$ where the random variables
$(X,Y,Z)\sim P_{X}\times P_{Y}\times W_{Z|XY} $ in $I(X;Z)$. Now
the third equation becomes $|I(X;Z)-R_x|^+=0$, which is equivalent to
$I(X;Z)\leq R_x$; this contradicts the fact that $R_x<
I(X;Z)$.
Secondly, suppose that for some rate pair $(R_x, R_y)$ in Region
$II$, i.e. $R_x< I(X;Z|Y)$, $R_y< I(Y;Z|X)$ and $R_x+R_y< I(X,Y;Z)$,
we have $\min\{E_{xy}, E_{x|y},
E_{y|x}\}\leq 0$; then $E_{xy}=0$, $E_{x|y}=0$ or
$E_{y|x}=0$. Following exactly the same argument as that
in the first part of the proof of Lemma~\ref{lemma:positiveness},
we can get contradictions with the fact that the rate pair $(R_x,
R_y)$ is in the interior of Region $II$.\hfill $\square$
\end{lemma}
\vspace{0.1in} From the above three lemmas, we conclude that the
error probability for decoding message $X$ is upper bounded by
$2^{-n(E-\epsilon_n)}$ for all $(R_x,R_y)\in \mathcal R_x(P_X,P_Y)$,
where $E>0$ and $\lim\limits_{n\rightarrow \infty}\epsilon_n=0$.
Hence the error probability converges to zero exponentially fast for
large $n$. This concludes the achievability part of the proof for
Theorem~\ref{Thm.Int_capacity}.
\subsection{Converse}
We show that the average decoding error of Decoder $X$ does not
converge to zero with increasing $n$ if the rate pair $(R_x, R_y)$
is outside the capacity region $\mathcal R_x(P_X, P_Y)$ shown in
Figure~\ref{fig.inter_region}. There are three parts of the proof
for Regions $V$, $IV$ and $III$ respectively.
\subsubsection{Region $V$} First, we show that in Region $V$ the average error
probability does not converge to zero as block length goes to
infinity. This is proved by using a modified version of the
reliability function for rate higher than the channel
capacity~\cite{Dueck_RC}. \vspace{0.1in}
\begin{lemma}{(Region $V$)} The average error probability for $X$ does not
converge to $0$ with block length $n$ if $R_x> I(X;Z|Y)$, where
$(X,Y,Z)\sim P_X\times P_Y\times W_{Z|XY}$.
\end{lemma}
\vspace{0.1in} \proof It is enough to show the case where there is
only one message for $Y$ and encoder $Y$ sends a code word $y^n$
with composition $P_Y$. The code book for encoder $X$ is still
uniformly generated among all the fixed-composition-$P_X$ code
books. In the rest of the proof, we investigate the typical behavior
of the codewords $x^n$
and modify Lemmas 3 and 5 from~\cite{Dueck_RC} to show
that
\begin{eqnarray}\Pr(\widehat m_x\neq m_x)=P_{e(x)}^n(R_x, R_y, P_X,
P_Y)>\frac{1}{2}\label{eqn.contra1}
\end{eqnarray} for large $n$.
The details of the proof are in Appendix~\ref{section.appendix2}. \hfill $\square$
\vspace{0.1in}
\subsubsection{Region $IV$}
The more complicated case is in Region $IV$. We show that the
decoding error probability for user $X$ does not converge to zero
with block length $n$. The proof is by contradiction. The idea is to construct
a decoder that decodes both message $m_x$ and message $m_y$ correctly with high
probability, if the decoding error for $m_x$ converges to zero.
Then again by using a modified proof used in proving
the reliability function for rate higher than
channel capacity in~\cite{Dueck_RC}, we get a contradiction.
\vspace{0.1in}
\begin{lemma}{(Region $IV$)} The average error probability for $X$ does not
converge to $0$ with block length $n$ if $R_x< I(X;Z|Y)$, $R_y<
I(Y;Z|X)$ and $R_x+R_y> I(X,Y;Z)$, where $(X,Y,Z)\sim P_X\times
P_Y\times W_{Z|XY}$.\label{lemma.regionIV}
\end{lemma}
\vspace{0.1in}
\proof Suppose that
\begin{eqnarray}
\Pr(\widehat m_x\neq
m_x)=P_{e(x)}^n(R_x, R_y, P_X, P_Y)\leq
\delta_n\label{eqn.decoder3.0}
\end{eqnarray} where $\delta_n$ goes to zero with $n$. Let decoder
$X$ decode $m_y$ by the same decoding rule devised
in~(\ref{eqn.decoder1}):
\begin{eqnarray}
\widehat m_y(z^n)=\argmax_{j\in\{1,2,..., 2^{nR_y}\}} I(z^n;
x^n(\widehat m_x(z^n)), y^n(j)).\label{eqn.decoder3}
\end{eqnarray}
The decoding error for either message at decoder $X$ is now:
\begin{eqnarray}
\Pr((\widehat m_x, \widehat m_y)\neq (m_x,m_y))&=& \Pr( \widehat m_x \neq m_x )+\Pr( \widehat m_x=m_x , \widehat m_y \neq m_y) \nonumber\\
&\leq &\Pr( \widehat m_x \neq m_x )+\Pr( \widehat m_y \neq
m_y|\widehat m_x=m_x)\label{eqn.conditional_error}
\end{eqnarray}
Given $\widehat m_x=m_x$, (\ref{eqn.decoder3}) becomes
\begin{eqnarray}
\widehat m_y(z^n)=\argmax_{j\in\{1,2,..., 2^{nR_y}\} } I(z^n;
x^n(m_x ), y^n(j)).\label{eqn.decoder3a}
\end{eqnarray}
So the second term in the RHS of~(\ref{eqn.conditional_error}),
$\Pr( \widehat m_y \neq m_y|\widehat m_x=m_x)$, can be upper
bounded as shown in~(\ref{eqn.proofpart2.aa}). Substituting the upper
bounds~(\ref{eqn.proofpart2.aa}) and~(\ref{eqn.decoder3.0}) into
(\ref{eqn.conditional_error}), we have:
\begin{eqnarray}
\Pr((\widehat m_x, \widehat m_y)\neq (m_x,m_y)) \leq
\delta_n+2^{-n(E_{y|x}-\epsilon_n)}\label{eqn.contra2a}
\end{eqnarray}
This upper bound~(\ref{eqn.contra2a}) converges to $0$ as $n$ goes
to infinity. However, in Appendix~\ref{section.appendix2}, we show
that
\begin{eqnarray}P_{e(xy)}^n(R_x, R_y, P_X,
P_Y)=\Pr((\widehat m_x, \widehat m_y)\neq (m_x,m_y))>\frac{1}{2}\label{eqn.contra2}
\end{eqnarray}
This contradicts~(\ref{eqn.contra2a}).
\hfill $\square$
\vspace{0.1 in}
\subsubsection{Region $III$} This is a corollary of Lemma~\ref{lemma.regionIV}.
This is intuitively clear since for each rate pair $(R_x, R_y)$
in Region $III$, we can find a rate pair $(R_x, R'_y)$ in Region
$IV$ such that $R_y> R'_y$. We construct a contradiction as follows.
From a decoder for $(R_x,R_y)$, we can construct a new decoder for
$(R_x,R'_y)$, where $R'_y< R_y$, by revealing to the $(R_x,R_y)$
decoder a random $(R_x,R_y)$ code book that is a superset of the
$(R_x,R'_y)$ code book, and accepting the estimate of the
$(R_x,R_y)$ decoder as the estimate for the $(R_x,R'_y)$ decoder. If
the average error probability is small for the $(R_x,R_y)$ code
books, then the average error probability is small for this particular
$(R_x,R'_y)$ decoder as well; this contradicts
Lemma~\ref{lemma.regionIV}. Hence the decoding error for encoder $X$
does not converge to $0$ with $n$ if the rate pair $(R_x, R_y)$ is
in Region $III$. \hfill $\square$\vspace{0.1in}
This concludes the converse part of the
proof for Theorem~\ref{Thm.Int_capacity}.
\section{Discussions on Time-sharing }\label{sec.timeshare}
The main result of this paper is the randomized fixed-composition
coding capacity region for $X$, $\mathcal R_x(P_X, P_Y)$,
shown in Figure~\ref{fig.inter_region}. Obviously, the
interference channel capacity region, where decoding errors for
both $X$ and $Y$ are small, is the intersection of $\mathcal
R_x(P_X, P_Y)$ and $\mathcal R_y(P_X, P_Y)$, where $\mathcal R_y(P_X,
P_Y)$ is defined in a similar way but with channel $\tilde
W_{\tilde Z|XY}$ instead of $W_{Z|XY}$. The intersected region
defined in~(\ref{eqn.XYREGION}), $\mathcal R_{xy}(P_X, P_Y)$, is in
general non-convex, as shown in Figure~\ref{fig.inter_region_XY}.
Similar to the multiple-access channel capacity region studied in
Chapter~15.3 of~\cite{Cover}, we use this capacity region $\mathcal
R_{xy}(P_X, P_Y)$ as a building block to generate larger capacity
regions.
\subsection{A digression to MAC channel capacity region}
Before giving the time-sharing results for interference channels and
showing why the simple time-sharing idea works for MAC channels but not
for interference channels, we first look at $\mathcal R_x(P_X, P_Y)$
in Figure~\ref{fig.inter_region}. Region $II$ is obviously the
multiple access channel $W_{Z|XY}$ region achieved by input
composition $(P_X, P_Y)$ at the two encoders, denoted by $\mathcal
R_{xy}^{mac}(P_X \times P_Y)$. In~\cite{Cover}, the full description
of the MAC channel capacity region is given in two different
manners:
\begin{eqnarray}
CONVEX\left(\bigcup_{P_X, P_Y }\mathcal R_{xy}^{mac}(P_X \times P_Y)\right)\nonumber
= CLOSURE \left(\bigcup_{P_U, P_{X|U}, P_{Y|U}}\mathcal
R_{xy}^{mac}(P_{X|U} \times P_{Y|U}\times
P_U)\right)\label{eqn.mac_equi}
\end{eqnarray}
where $R_{xy}^{mac}(P_{X|U} \times P_{Y|U}\times P_U)=\{(R_x, R_y):
R_x\leq I(X;Z|Y,U), R_y\leq I(Y;Z|X,U),
R_x+R_y\leq I(X,Y;Z|U)\} $ and $U$ is the time-sharing auxiliary random variable and $|U|=4$.
The LHS of~(\ref{eqn.mac_equi}) is the convex hull of all the
fixed-composition MAC channel capacity regions. The RHS
of~(\ref{eqn.mac_equi}) is the closure (without convexification) of
all the time-sharing MAC capacity regions.%
The
equivalence in~(\ref{eqn.mac_equi}) is non-trivial; it is not a
consequence of the tightness of the achievable region. It hinges on
the convexity of the ``basic'' capacity regions $\mathcal
R_{xy}^{mac}(P_X \times P_Y)$. As will be shown in
Section~\ref{sec.beyond}, this is not the case for interference
channels, i.e.~(\ref{eqn.mac_equi}) no longer holds.
\subsection{Simple time-sharing capacity region and error exponent}
The simple idea of time-sharing is well studied for multi-user
channel coding and broadcast channel coding. Whenever there are two
operational points $(R^1_x, R^1_y), (R^2_x, R^2_y)$ for which there
exist two coding schemes achieving small error probability at each
operational point, one can use $\lambda n$ channel uses at
$(R^1_x, R^1_y)$ with coding scheme $1$ and $(1-\lambda) n$ channel
uses at $(R^2_x, R^2_y)$ with coding scheme $2$. The rate
of this combined coding scheme is $(\lambda R^1_x+(1-\lambda) R^2_x, \lambda
R^1_y+(1-\lambda) R^2_y)$ and the error probability is still
small\footnote{The error exponent is, however, at most half of the
individual error exponent. } (no bigger than the sum of the two small
error probabilities). This idea is easily generalized to more than
$2$ operational points.
This simple time sharing idea works perfectly for MAC channel coding
as shown in~(\ref{eqn.mac_equi}). The whole capacity region can be
described as time sharing among fixed-composition codes where the
fixed-composition codes are building blocks. If we extend this idea
to interference channel, we have the following simple time sharing
region as discussed in Section~\ref{sec.time-sharing-firstrun}:
\begin{eqnarray}
CONVEX\left(\bigcup_{P_X, P_Y} \mathcal R_{xy}(P_X,P_Y)\right)=
CONVEX\left(\bigcup_{P_X, P_Y} \mathcal R_{x}(P_X,P_Y) \bigcap
R_{y}(P_X,P_Y)\right).\label{eqn.interference_region}
\end{eqnarray}
We shall soon see in the next section that this result can be
improved.
\subsection{Beyond simple time-sharing: ``Uniform'' time-sharing}\label{sec.beyond}
In this section we give a time-sharing coding scheme that was first
developed by Gallager~\cite{Gallager_MAC} and later further studied
for universal decoding by Pokorny and
Wallmeier~\cite{Pokorny_MAC} to get better error exponents for MAC
channels. Such ``uniform'' time-sharing schemes not only
achieve better error exponents; more importantly, we show that they
achieve a \textbf{bigger} capacity region than the simple
time-sharing scheme does for interference channels! Unlike
multiple-access channels, where simple time-sharing achieves the
whole capacity region, this phenomenon is unique to interference
channels, due to the fact that the capacity region is the convex hull
of intersections of pairs of non-convex regions (convexity or not is
not the issue here; the real difference is the intersection operation).
The organization of this section parallels that for the
fixed-composition codes. We first introduce the ``uniform'' time-sharing
coding scheme, then give the achievable error exponents and lastly
derive the achievable rate region for such coding schemes. The proofs
are omitted since they are similar to those for the
randomized fixed-composition codes.
\vspace{0.1in}
\begin{definition}{``Uniform'' time-sharing codes}\label{def:randomized_coding_TS}:
For a probability distribution $P_U$ on $\mathcal U =\{u_1, u_2,..., u_K\}$ with $\sum_{i=1}^K P_U(u_i)=1$,
and a pair of conditionally independent distributions $P_{X|U},
P_{Y|U}$, we define the two codeword sets\footnote{Again, we ignore the nuisance
of the non-integers here.} as
$$X_c(n)=\{x^n: x_1^{n P_U(u_1)}\in P_{X|u_1},\ x_{n P_U(u_1)+1}^{n (P_U(u_1)+P_U(u_2))}\in P_{X|u_2},\ ...,\ x_{n (1-P_U(u_K))+1}^{n}\in P_{X|u_K}
\}$$
i.e. the $i$'th chunk of the codeword $x^n $ with length $nP_U(u_i)$
has composition $P_{X|u_i}$, and similarly
$$Y_c(n)=\{y^n: y_1^{n
P_U(u_1)}\in P_{Y|u_1},\ y_{n P_U(u_1)+1}^{n (P_U(u_1)+P_U(u_2))}\in
P_{Y|u_2},\ ...,\ y_{n (1-P_U(u_K))+1}^{n}\in P_{Y|u_K} \}.$$ A
``uniform'' time-sharing code $(R_x, R_y, P_U P_{X|U} P_{Y|U})$
encoder picks a code book with the following probability: for any
message $m_x\in \{1,2,...,2^{nR_x}\}$, the code word $x^n(m_x)$ is
uniformly distributed in $X_c(n)$, similarly for encoder Y.
\end{definition}
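As a concrete illustration, the following Python sketch (with our own
function names; the rounding of chunk lengths mirrors the footnote's
remark on non-integers) draws one codeword from $X_c(n)$:
\begin{verbatim}
import random

def uniform_ts_codeword(n, P_U, P_X_given_U, rng=None):
    """Draw one codeword from X_c(n): the chunk for u_i has length
    n*P_U(u_i) and exact per-chunk composition P_{X|u_i}."""
    rng = rng or random.Random(0)
    codeword = []
    for u, pu in P_U.items():
        chunk = []
        for sym, px in P_X_given_U[u].items():
            chunk += [sym] * round(n * pu * px)
        rng.shuffle(chunk)   # uniform over the chunk's type class
        codeword += chunk
    return codeword

# e.g. uniform_ts_codeword(8, {0: 0.5, 1: 0.5},
#                          {0: {'a': 1.0}, 1: {'a': 0.5, 'b': 0.5}})
\end{verbatim}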
\vspace{0.1in}
After the code book is randomly generated and revealed to the
decoder, the decoder uses a maximum mutual information decoding
rule. Similar to the fixed-composition coding, the decoder needs to
either decode both messages $X$ and $Y$ jointly or simply treat $Y$
as noise and decode $X$ only, depending on whether the rate pair is
in Region $II$ or $I$, as shown in
Figure~\ref{fig.inter_region_TIMESHARE}. The error probability we
investigate is again the average error probability over all messages
and code books.
\begin{figure*}
\setlength{\unitlength}{3247sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5562,4416)(-1701,-4240)
\thinlines { \put(1951,-3961){\vector( 1, 0){4800}}
}%
{ \put(1951,-3961){\vector( 0, 1){3825}}
}%
\thicklines { \put(3601,-211){\line( 0,-1){1350}}
\put(3601,-1561){\line( 1,-1){1500}} \put(5101,-3061){\line(
0,-1){900}}
}%
{
\multiput(5101,-3061)(0.00000,89.9054){32}{\makebox(6.6667,10.0000){\SetFigFont{10}{12}{\rmdefault}{\mddefault}{\updefault}.}}
}%
{
\multiput(1951,-1561)(90.00000,0.00000){35}{\makebox(6.6667,10.0000){\SetFigFont{10}{12}{\rmdefault}{\mddefault}{\updefault}.}}
}%
\put(1626,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $R_y$}%
}}}}
\put(6301,-3836){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $R_x$}%
}}}}
\put(2701,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I$}%
}}}}
\put(2701,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $II$}%
}}}}
\put(4576,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $III$}%
}}}}
\put(4576,-1861){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $IV$}%
}}}}
\put(6001,-1861){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $V$}%
}}}}
\put(3301,-4186){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(X;Z|U)$}%
}}}}
\put(1076,-3111){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(Y;Z|U)$}%
}}}}
\put(801,-1661){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(Y;Z|X,U)$}%
}}}}
\put(4626,-4186){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{14.4}{\rmdefault}{\mddefault}{\updefault}{ $I(X;Z|Y,U)$}%
}}}}
\end{picture}%
\caption[ ]{``Uniform'' time-sharing capacity region $\mathcal
R_x(P_U P_{X|U} P_{Y|U})$ for $X$; the achievable region is the
union of Regions $I$ and $II$. This region is very similar to that
for fixed-composition coding shown in Figure~\ref{fig.inter_region};
the only difference is that there is now an auxiliary time-sharing
random variable $U$.}
\label{fig.inter_region_TIMESHARE}
\end{figure*}
\vspace{0.1in}
\begin{theorem}{Interference channel capacity region $\mathcal R_{x}(P_U P_{X|U}P_{Y|U})$ for
``uniform'' time-sharing codes with composition $P_U P_{X|U}P_{Y|U}$:}
\label{Thm.Int_capacity_time_share}
\begin{eqnarray}\label{eqn.int_region_timesharing}
\mathcal R_x(P_U
P_{X|U}P_{Y|U}) &=&\{(R_x,R_y): 0\leq R_x< I(X;Z|U), 0\leq R_y\}\ \ \ \bigcup\nonumber\\
&& \{(R_x,R_y): 0\leq R_x< I(X;Z|Y,U), R_x+R_y< I(X,Y;Z|U)\}
\end{eqnarray}
where the random variables in (\ref{eqn.int_region_timesharing}),
$(U, X, Y, Z)\sim P_U P_{X|U} P_{Y|U} W_{Z|X,Y}$. And the
interference capacity region for $P_U P_{X|U}P_{Y|U}$ is
\begin{eqnarray}
\mathcal R_{xy}(P_U P_{X|U}P_{Y|U})=\mathcal R_{x}(P_U
P_{X|U}P_{Y|U})\bigcap \mathcal R_{y}(P_U
P_{X|U}P_{Y|U})\label{eqn.simple-sharing-region-fina}
\end{eqnarray}
\end{theorem}
\vspace{0.1in}
The rate region defined in
(\ref{eqn.int_region_timesharing}) itself does not give any new
capacity region for $X$, since both Regions $I$ and $II$ in
Figure~\ref{fig.inter_region_TIMESHARE} can be achieved by simple
time-sharing of Regions $I$ and $II$ respectively
in~(\ref{eqn.int_region}). But for the interference channel
capacity, we argue in the next section that this coding scheme gives
a strictly bigger capacity region than that given by simple
time-sharing of fixed-composition codes
in~(\ref{eqn.interference_region}).
The proof of Theorem~\ref{Thm.Int_capacity_time_share} is
similar to that of Theorem~\ref{Thm.Int_capacity}. We omit the
details here. We only point out that the achievability part is
proved by deriving a positive error exponent for rate pairs in the
interior of the capacity region defined in
Theorem~\ref{Thm.Int_capacity_time_share}. As shown
in~\cite{Pokorny_MAC} and also detailed in this paper for the randomized coding,
the error exponent in Region $II$ of
Figure~\ref{fig.inter_region_TIMESHARE} is:
$$E=\min \{E_{xy}, E_{x|y}, E_{y|x}\},\mbox{ where}$$
\begin{eqnarray} E_{xy}&=&\min_{Q_{XYZ|U}:Q_{X|U}=P_{X|U}, Q_{Y|U}=P_{Y|U}}\nonumber\\
&& D(Q_{Z|XY}\|W|Q_{XYU})
+D(Q_{XY|U}\|P_{X|U}\times P_{Y|U}|U) +|I_Q(X,Y;Z|U)-R_x-R_y|^+\nonumber\\
E_{x|y}&=&\min_{Q_{XYZ|U}:Q_{X|U}=P_{X|U},
Q_{Y|U}=P_{Y|U}}\nonumber\\&& D(Q_{Z|XY}\|W| Q_{XYU})
+ D(Q_{XY|U}\|P_{X|U}\times P_{Y|U}|U) +|I_Q(X;Z|Y,U)-R_x|^+\nonumber\\
E_{y|x}&=&\min_{Q_{XYZ|U}:Q_{X|U}=P_{X|U},
Q_{Y|U}=P_{Y|U}}\nonumber\\&& D(Q_{Z|XY}\|W| Q_{XYU}) +
D(Q_{XY|U}\|P_{X|U}\times P_{Y|U}|U)
+|I_Q(Y;Z|X,U)-R_y|^+\nonumber
\end{eqnarray}
These are the error exponents of Lemma~\ref{lemma:regionII} with a
conditioning auxiliary random variable $U$.
The error exponent in Region $I$ is
\begin{eqnarray}
E_{x}= && \min_{Q_{XYZ|U}:Q_{X|U}=P_{X|U}, Q_{Y|U}=P_{Y|U}}\nonumber\\
&& D(Q_{Z|XY}\|W| Q_{XYU}) + D(Q_{XY|U}\|P_{X|U}\times P_{Y|U}|U)
+|I_Q(X;Z|U)-R_x|^+\nonumber
\end{eqnarray}
\subsection{Why is ``uniform'' time-sharing needed?}
It is obvious that ``uniform'' time-sharing fixed-composition
coding gives a bigger error exponent than simple time-sharing
coding does. More interestingly, we argue that it also gives a bigger
interference channel capacity region. First we write down the
interference channel capacity region generated from the basic
``uniform'' time-sharing fixed-composition codes:
\begin{eqnarray}
CONVEX && \left(\bigcup_{P_{X|U} P_{Y|U} P_U} \mathcal R_{xy}(P_U
P_{X|U}P_{Y|U})\right).\label{eqn.interference_region_TIMESHARING}
\end{eqnarray}
where $ \mathcal R_{xy}(P_U P_{X|U}P_{Y|U})$ is defined
in~(\ref{eqn.simple-sharing-region-fina}) and $CONVEX(A)$ is the
convex hull (simple time sharing) of set $A$.
$U$ is a time-sharing auxiliary random variable. Unlike the MAC
coding problem, where simple time-sharing of fixed-composition codes
achieves the full capacity region, this is not guaranteed for
interference channels. The reason is the intersection operator in
the basic building blocks in~(\ref{eqn.XYREGION})
and~(\ref{eqn.simple-sharing-region-fina}) respectively, i.e. the
interference nature of the problem\footnote{ To understand why the
intersection, rather than non-convexity, makes the difference, we
consider four convex sets $A_1, A_2, B_1, B_2$. We show that
$CONVEX(A_1\bigcap B_1,A_2\bigcap B_2)$ can be strictly smaller than
$CONVEX(A_1, A_2)\bigcap CONVEX (B_1,B_2)$. Let $ A_1=B_2\subset
B_1= A_2$; then $CONVEX(A_1\bigcap B_1,A_2\bigcap B_2)=A_1$ is
strictly smaller than $CONVEX(A_1, A_2)\bigcap CONVEX
(B_1,B_2)=A_2$. This shows why ``uniform'' time-sharing gives a bigger
capacity region. }.
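As a concrete instance of the footnote's construction, take intervals
on the real line: $A_1=B_2=[0,1]$ and $A_2=B_1=[0,2]$. Then
$CONVEX(A_1\bigcap B_1, A_2\bigcap B_2)=CONVEX([0,1],[0,1])=[0,1]$,
whereas $CONVEX(A_1,A_2)\bigcap CONVEX(B_1,B_2)=[0,2]\bigcap[0,2]=[0,2]$,
which is strictly larger.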
Obviously the rate region given by simple time-sharing of
fixed-composition codes in~(\ref{eqn.interference_region}) is a subset
of the simple time-sharing of the ``uniform'' time-sharing capacity
regions in~(\ref{eqn.interference_region_TIMESHARING}). In the
following example, we illustrate
why~(\ref{eqn.interference_region_TIMESHARING}) is
bigger than~(\ref{eqn.interference_region}).\\
\textbf{\textit{Example: }} Suppose we have a symmetric interference channel, i.e. $\mathcal
R_{x}(P_{X} ,P_{Y})= \mathcal R^T_{y}(P_{Y}, P_{X})$ for all $P_X ,
P_Y$, where $^T$ is the transpose operation. The comparison of the simple
time-sharing capacity region and the more sophisticated time-sharing
fixed-composition capacity region is illustrated by a toy example
in Figure~\ref{fig.TIMESHARE}.
For a distribution $(P_X, P_Y)$, the
achievable regions for the fixed-composition code, $\mathcal R_x (P_X, P_Y)$ and $\mathcal
R_y (P_X, P_Y)$, are illustrated in
Figure~\ref{fig.TIMESHARE}; they are bounded by the red dotted
lines and red dash-dotted lines respectively, so the interference
capacity region $\mathcal R_{xy}(P_X, P_Y)$ is bounded by the
pentagon $ABEFO$. By symmetry, $\mathcal R_x (P_Y, P_X)$ and
$\mathcal R_y (P_Y, P_X)$ are bounded by the blue dotted lines and
blue dash-dotted lines respectively, and the capacity region $\mathcal
R_{xy}(P_Y, P_X)$ is bounded by the pentagon $HGCDO$. So the convex
hull of these two regions is $ABCDO$.
Now consider the following time-sharing fixed-composition coding
$P_{X|U}P_{Y|U}P_U$, where $\mathcal U=\{0,1\}$, $P_U(0)=P_U(1)=0.5$,
$P_{X|0}=P_{Y|1}=P_X$ and $P_{X|1}=P_{Y|0}=P_Y$. The interference
capacity region is obviously bounded by the black pentagon
in~Figure~\ref{fig.TIMESHARE}. This toy example shows
why~(\ref{eqn.interference_region_TIMESHARING}) is bigger
than~(\ref{eqn.interference_region}).
\begin{figure}
\begin{center}
\includegraphics[width=90mm]{Inter_TS.eps}
\end{center}
\caption[ ]{Simple time-sharing of fixed-composition capacity regions
($ABCDO$) vs. ``uniform'' time-sharing fixed-composition capacity
region with $P_U(0)=P_U(1)=0.5$ (the black pentagon)}
\label{fig.TIMESHARE}
\end{figure}
\section{Future directions}
The most interesting question about interference channels is the
geometry of the two code books. For point to point channel coding,
the code words in the optimal code book are uniformly distributed on
a sphere of the optimal composition, and the optimal composition
achieves the capacity. For MAC channels, simple time-sharing among
different fixed-composition codes is necessary and sufficient to
achieve the whole capacity region, and within each
fixed-composition code the codewords are uniformly distributed.
However, as illustrated in Section~\ref{sec.timeshare}, the more
interesting ``uniform'' time-sharing is needed for interference
channels. So what is time
sharing? Both simple time-sharing and ``uniform'' time-sharing
change the shape of the code books, but in different ways.
Simple time-sharing ``glues'' segments of code words together, due to
the independence of the coding in different segments of the channel
uses, while under ``uniform'' time-sharing, code words still have
equal distances between one another. A better understanding of the
shape of code books may help us understand interference
channels. Also in this paper, we make a first attempt at
an outer bound on the interference channel capacity region. We only
manage to give a tight outer bound for time-sharing
fixed-composition codes. An important future direction is to
categorize the coding schemes for interference channels, from which
more outer bound results may follow. This is in contrast to the
traditional outer bound derivations~\cite{Carleial}, where a genie is
used.
\section*{Acknowledgments}
The author thanks Raul Etkin, Neri Merhav and Erik Ordentlich for
introducing the problem and helpful discussions along the way.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
Online social networks (OSN) have grown significantly over the last ten years with billions of active users using a variety of social network services. OSNs have revolutionized the way people interact. People join social networking sites to connect and communicate with their friends in real-time. They share interests and activities across political, economic, and geographic borders.
As social network sites continue to develop both in number and size, the service providers accumulate an unprecedented amount of information about OSN users. As a result, social networks are a valuable data source for research on information societies. In particular, underlying social graphs play a key role in understanding how people form communities, how the OSNs suggest friendship to two users who do not know each other but have many common friends, etc. However, social graphs are not published in clear form due to serious privacy concerns. Instead, they are anonymized in various forms and published to third-party consumers such as sociologists, epidemiologists, advertisers and criminologists. Alternatively, social networking sites provide APIs \footnote{https://developers.facebook.com/docs/graph-api} for data crawlers at limited rates and within privacy constraints (e.g. only public friend lists are available). Using this method, the data crawlers can collect friendship information and build a partial (local) view of the target social graph.
To overcome the constraints set by the service providers, we can start from the user perspective, i.e. from the contributors of OSNs. More precisely, if users cautiously collaborate with one another, they can exchange \textit{noisy} friend lists (containing fake friendships) with their neighbors in several rounds to get better views of the true social graph. Our ideas are based on the fact that user IDs are public (e.g. Facebook profiles are searchable \cite{fb-dir}) but the friendships are not, except when a user leaves his friend list in public mode. Using public user IDs, any user can claim fake links from himself to the users not in his friend list.
The aggregation problem in this paper is unique in the sense that the disseminated data over the links are the links themselves. However, there exist fundamental questions about the feasibility of this model. The first question is how to define simple and effective privacy concepts for the link exchange processes. The second question comes from the high volume of link lists in exchange, which may increase exponentially round after round. While storage and computation complexity may not be big problems, communication costs are non-trivial. We address both questions by a simple $(\alpha,\beta)$-exchange protocol with or without Bloom filters. To protect true links from inference attacks, we add fake links amounting to a $\beta$-fraction of the true links. Furthermore, we realize the attenuated propagation of links via the parameter $\alpha \leq 1$.
Basically, we assume that users are \textit{honest-but-curious} (HbC), i.e. they follow the protocol but try to figure out true friendships among noisy friend lists. To preserve link privacy, each node obfuscates its friend list by adding fake links originating from itself to a number of nodes not in its friend list. Then in the exchange stage, nodes share with their friends only a fraction of the noisy links they possess.
Our contributions are summarized as follows:
\begin{itemize}
\item We introduce a novel private link exchange problem as an alternative to social graph crawling and centralized anonymization of data. The problem is distributed and provides a privacy/utility trade-off for all nodes.
\item We present two schemes for $(\alpha,\beta)$-exchange protocol: Baseline and Bloom filter based. We protect the true links by adding fake links and requiring the propagation probability of links to be attenuated by distance. We analyze the advantages and disadvantages of each scheme.
\item We evaluate our proposed schemes on various synthetic graph models and draw a number of critical findings.
\end{itemize}
The paper is organized as follows. We review the related work for information dissemination in social graphs, distributed anonymization, social trust models and Bloom filter in Section \ref{sec:related}. Section \ref{sec:pre} briefly introduces key concepts of Bloom filter and our link exchange model. In Section \ref{sec:link-exchange}, we present Baseline $(\alpha,\beta)$-exchange that realizes the exchange model by sending noisy link lists in clear form. Section \ref{sec:bf} describes Bloom filter version of $(\alpha,\beta)$-exchange with constant complexities and better privacy. We validate the proposed schemes in Section \ref{sec:eval}. Finally, we present our remarks and suggest future work in Section \ref{sec:conclusion}.
Table \ref{tab:notation} summarizes notations used in this paper.
\begin{table}[htb]
\small
\centering
\caption{List of notations} \label{tab:notation}
\begin{tabular}{|c|l|}
\hline
\textbf{Symbol} &\textbf{Definition} \\
\hline
$G=(V,E)$ & social graph with $N=|V|$ and $M=|E|$\\
\hline
$A(G)$ & adjacency matrix of $G$\\
\hline
$D$ & degree sequence of $G$ (column vector)\\
\hline
$Diam(G)$ & diameter of $G$\\
\hline
$N(u)$ & neighbors of node $u$ in $G$, $d_u = |N(u)|$ \\
\hline
$T$ & number of exchange rounds \\
\hline
$(v,w)$ & true link between $v$ and $w$ \\
\hline
$(v\rightarrow w)$ & fake link generated by $v$ \\
\hline
$L_u(t)$ & set of links possessed by $u$ at round $t$\\
\hline
$L_{uv}(t)$ & set of links $u$ sends to $v$ at time $t$\\
\hline
$\propto$ & uniformly at random sampling without replacement\\
\hline
$\alpha$ & fraction of links shared between a pair of nodes\\
\hline
$\beta$ & fraction of fake links generated at $t = 0$ \\
\hline
$m$ & number of bits in Bloom filter \\
\hline
$k$ & number of hash functions used in Bloom filter \\
\hline
$n$ & number of elements in Bloom filter \\
\hline
$Bf_u(t)$ & Bloom filter possessed by $u$ at round $t$\\
\hline
$Bf_{uv}(t)$ & Bloom filter $u$ sends to $v$ at time $t$\\
\hline
\end{tabular}
\end{table}
\section{Related Work}
\label{sec:related}
Epidemic spreading \cite{pastor2001epidemic,moreno2002epidemic} is the work most related to ours. In \cite{pastor2001epidemic}, Pastor-Satorras et al. study the spreading of infections on scale-free (power-law) networks via the susceptible-infected-susceptible (SIS) model \cite{bailey1975mathematical}. Using a mean-field approximation, they find the absence of an epidemic threshold ($\lambda_c = 0$) and of its associated critical behavior when the number of nodes goes to infinity. Moreno et al. \cite{moreno2002epidemic} provide a detailed analytical and numerical study of the susceptible-infected-removed (SIR) model on the Watts-Strogatz (WS) small-world model and the Barab\'{a}si-Albert (BA) scale-free model. WS graphs with exponentially distributed degrees can be considered a \textit{homogeneous} model in which each node has roughly the same number of links. WS graphs have finite epidemic thresholds. On the contrary, BA graphs with power-law distributed degrees are \textit{heterogeneous}, and they expose weaker resistance to epidemics starting at highly connected nodes.
Giakkoupis et al. \cite{giakkoupis2015privacy} introduce a distributed algorithm RIPOSTE for disseminating information in a social network that preserves privacy of nodes. Whenever the information reaches a node, the node decides to either forward the information to his neighbors or drop it. RIPOSTE uses two global parameters $\delta$ and $\lambda$ and satisfies differential privacy by applying Randomized Response Technique (RRT) \cite{dwork2014algorithmic}. Our work is also a form of information dissemination over graphs but it spreads a large number of links, not a single item.
Gossip-based protocols \cite{ganesh2003peer,voulgaris2005cyclon} aim at providing alternatives to network-level multicast with good scalability and reliability properties. In these protocols, message redundancy for high reliability is ensured by the fact that each member forwards each message to a set of other, randomly chosen, group members. Ganesh et al. \cite{ganesh2003peer} propose a fully decentralized and self-configuring protocol SCAMP that provides each member with a partial view of the group membership. As the number of participating nodes changes, the size of partial views automatically adapts to the value required to support a gossip algorithm reliably. CYCLON \cite{voulgaris2005cyclon} is a protocol for the construction of reliable overlay networks. It is targeted at overlays that have low diameter, low clustering, highly symmetric node degrees and high resilience to massive node failures. These properties belong to random graphs. CYCLON employs an enhanced shuffling operation in which nodes select neighbors for cache exchange based on their age.
By exchanging noisy link lists, our schemes are related to distributed graph anonymization \cite{campan2008clustering,tassa2013anonymization}. However, rather than producing a single global anonymized graph as in \cite{tassa2013anonymization}, link exchange protocols result in multiple local outputs. In addition, link exchange operates at finest-grained level (node-level) whereas previous works consider a small number of data holders who manage disjoint sets of nodes.
The idea of adding fake links to hide true links appears in a number of earlier studies, e.g. \cite{shokri2009preserving,nguyen2015anonymizing}. Shokri et al. \cite{shokri2009preserving} propose a method for privacy preservation in collaborative filtering recommendation systems. They develop a model where each user stores locally an offline profile on his own side, hidden from the server, and an online profile on the server from which the server generates the recommendations. Each user arbitrarily contacts other users over time, and modifies his own offline profile through aggregating ratings from other users. The more ratings a user aggregates, the higher his privacy but the lower the recommendation accuracy. Nguyen et al. \cite{nguyen2015anonymizing} present a centralized graph anonymization scheme based on edge uncertainty semantics. Fake links are added to probabilistically hide true links. They consider distance-2 fake links to keep higher utility.
\section{Preliminaries}
\label{sec:pre}
In this section, we present the exchange model and attack model. Then we review key concepts about Bloom filter.
\subsection{Exchange Model}
\label{subsec:exchange-model}
We consider a distributed exchange model in which each node possesses his friend list and all nodes participate in the exchange protocol. We work under the following assumptions:
\begin{itemize}
\item \textbf{Assumption 1} The space of node IDs is public. A node can generate fake links to any node. All friend lists (true links) are private, i.e. the existence of true link $(u,v)$ is surely known to $u$ and $v$ only.
\item \textbf{Assumption 2} A node exchanges messages with its neighbors only. Interacting with neighbors is based on an intuition of trusted relationships: we trust our friends more than any stranger.
\item \textbf{Assumption 3} A synchronous model is guaranteed by \textit{round-tagged} messages. It means a node prepares the message for round $t+1$ if and only if it has received all $t$-th round messages from his friends.
\item \textbf{Assumption 4} All nodes are honest-but-curious. They follow the protocol but try to infer true links among noisy links.
\end{itemize}
\begin{figure}
\centering
\includegraphics[height=2.3in]{link-exchange}
\setlength{\abovecaptionskip}{-10pt}
\caption{Link exchange with $\alpha = 1$, $\beta = 1/3$}
\vspace{-1.0em}
\label{fig:link-exchange}
\end{figure}
Fig. \ref{fig:link-exchange} illustrates the exchange model. At round $t = 0$ (initial round), each node $u$ prepares a noisy friend list by adding some fake links $(u \rightarrow v)$ (i.e. links from $u$ to some people not in his friend list). This is feasible because all user IDs are public (e.g. \cite{fb-dir}). For example, node 0 adds a fake link $(0 \rightarrow 3)$ and his noisy friend list \{(0,1), (0,2), (0,3)\} is ready to be exchanged. Similarly, the other nodes prepare their noisy friend lists as in Fig. \ref{fig:link-exchange}. At round $t = 1$, all nodes send and receive noisy friend lists from their neighbors. The local views of nodes 0 and 1 at $t = 1$ are shown in Fig. \ref{fig:link-exchange}, where the solid lines (resp. the dashed arrows) are the true links (resp. fake links) known by the node and the dashed lines represent noisy links received at the node.
\subsection{Attack Model}
\label{subsec:attack-model}
We consider honest-but-curious users (nodes) who follow the protocol but try to infer true links among noisy links. We propose a simple inference attack based on the frequencies of links arriving at a node. Given a link $(v,w)$ (a true link or a fake link) arriving at node $u$, if $(v,w)$ does not exist in $u$'s local view, it will be added. Otherwise, its frequency is increased by 1. At the end of the protocol, each node sorts all links in its local view by frequency and selects the top $K$ links as true links. How to select the value of $K$ will be discussed later.
By splitting the noisy links into two sets as above, the inference capability of each node is evaluated by the common measures \cite{fawcett2006introduction}: True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). As we will see in Section \ref{sec:link-exchange}, the parameter $\alpha$ introduces an \textit{attenuation effect} on link propagation when $\alpha < 1$. Given a link $e$, nodes farther from $e$ have a lower chance of receiving it. This effect adds another dimension to our privacy model.
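A minimal sketch of this frequency-based attack (illustrative Python
with our own names; the ground-truth set is used only to evaluate the
attack, not by the attacker):
\begin{verbatim}
from collections import Counter

def top_k_inference(received_links, K):
    """Rank links received (with multiplicity over all rounds) by
    frequency; flag the top K as predicted true links."""
    freq = Counter(received_links)
    return {link for link, _ in freq.most_common(K)}

def evaluate(predicted, true_links):
    """True/false positive counts against the ground truth."""
    tp = len(predicted & true_links)
    return {"TP": tp, "FP": len(predicted) - tp}
\end{verbatim}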
\subsection{Bloom Filter}
The Bloom filter is a space-efficient probabilistic data structure that supports set membership queries. It was first conceived by Burton Howard Bloom in 1970 \cite{bloom1970space}. It is used to test whether an element is a member of a set and can result in false positives (claiming an element to belong to the set when it was not inserted), but never in false negatives (reporting an inserted element as not in the set).
An empty Bloom filter is an array of $m$ bits, all set to zero. There must also be $k$ different hash functions defined, each of which maps or hashes some set element $x$ to one of the $m$ array positions with a \textit{uniform} random distribution. The number of elements inserted into the Bloom filter is $n$. Fig. \ref{fig:bloom-filter} gives an example of a Bloom filter with $m=18$, $k=2$ and $n=3$. The MD5 hash algorithm is a popular choice for the hash functions. When an element $w$ that is not in the set is looked up, it will be hashed by the $k$ hash functions into bit positions. If one of the positions is zero, we conclude that $w$ is not in the set. It may happen that all the bit positions of an element have been set. When this occurs, the Bloom filter will erroneously report that the element is a member of the set, also known as a false positive. Fig. \ref{fig:bloom-filter} shows $w$ as a false positive.
\begin{figure}
\centering
\includegraphics[height=1.2in]{bloom-filter}
\setlength{\abovecaptionskip}{-20pt}
\caption{Bloom filter}
\vspace{-1.0em}
\label{fig:bloom-filter}
\end{figure}
Given the three parameters $m$, $k$ and $n$, the false positive probability is (see \cite{broder2004network}).
\begin{equation}
p = \left( 1 - (1-\frac{1}{m})^{kn}\right)^{k} \approx (1 - e^{-kn/m})^{k}
\end{equation}
The false positive probability decreases as $m$ increases or $n$ decreases. Given $m$ and $n$, the probability of false positives $(1 - e^{-kn/m})^{k}$ is minimized at $k = k_{opt} = \frac{m}{n} \ln 2$ (see \cite{broder2004network}). In this case, the false positive rate is $p = (1/2)^k$, or equivalently
\begin{equation}
k = -\log_2{p} \label{eqn:k}
\end{equation}
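A minimal Bloom filter sketch sized by the formulas above (the salted
MD5 hashing below is one common construction, chosen here only for
illustration):
\begin{verbatim}
import hashlib
import math

class BloomFilter:
    """Bloom filter with k = -log2(p) hash functions and
    m = k*n/ln(2) bits, i.e. sized at the optimum k_opt."""
    def __init__(self, n, p):
        self.k = max(1, round(-math.log2(p)))
        self.m = max(1, round(self.k * n / math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):  # k salted hashes of the item
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

# e.g. bf = BloomFilter(n=1000, p=0.01); bf.add((3, 7)); (3, 7) in bf
\end{verbatim}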
\section{Baseline $(\alpha,\beta)$-exchange}
\label{sec:link-exchange}
In this section, we present the main ideas of our proposed $(\alpha,\beta)$-exchange and the improvements using Bloom filters.
\subsection{Overview}
As shown in Section \ref{subsec:exchange-model}, the link exchange protocol is straightforward. At the beginning of the protocol, all nodes agree on the number of rounds $T$ and the two parameters $\alpha \in [0,1]$, $\beta \geq 0$. Then, each node $u$ prepares his own noisy friend list $L_u(0)$ by setting $L_u(0) = \{(u,v)| v \in N(u)\}$ and adding $\beta |N(u)|$ fake links of the form $(u \rightarrow w)$ where $w \notin N(u)$. At $t = 1$, each node $u$ makes a noisy list $L_{uv}(1)$ for every neighbor $v$ so that $L_{uv}(1)$ contains $\alpha |L_u(0)|$ links sampled from $L_u(0)$. Similarly, node $v$ prepares a noisy list $L_{vu}(1)$ for $u$. All nodes send and receive noisy link lists. Next, each node aggregates the received link lists, removing duplicate links (if any), and obtains his local view of the graph, $L_u(1)$. Round $t = 1$ finishes.
At $t = 2$, the process repeats: each node $u$ makes a noisy list $L_{uv}(2)$ for every neighbor $v$ that contains $\alpha |L_u(1)|$ links sampled from $L_u(1)$. The nodes exchange noisy link lists, and after receiving all lists $L_{vu}(2)$ from his friends, node $u$ updates his local view and gets $L_u(2)$. When $t = T$, the protocol terminates.
\subsection{Baseline Scheme}
The idea in the previous section is called Baseline $(\alpha,\beta)$-exchange as all noisy link lists are in clear form. Algorithm \ref{algo-baseline} shows steps for Baseline $(\alpha,\beta)$-exchange.
\begin{algorithm}
\caption{Baseline $(\alpha,\beta)$-exchange}
\label{algo-baseline}
\begin{algorithmic}[1]
\Require undirected graph $G=(V,E)$, parameters $\alpha \in [0,1]$, $\beta \geq 0$, number of rounds $T$
\Ensure noisy local views of graph $L_u(T), u \in V$
\State // initialization stage
\For {$u \in V$}
\State $Fa(u) = \{(u \rightarrow w) | w \notin N(u) \}$ s.t. $|Fa(u)| = \beta |N(u)|$
\State $L_u(0) = \{(u,v)| v \in N(u)\} \cup Fa(u)$
\EndFor
\State // exchange stage
\For {$t = 1..T$}
\For {$(u,v) \in E$}
\State $u$ : $L_{uv}(t) \propto L_u(t-1)$ s.t. $|L_{uv}(t)| = \alpha |L_u(t-1)|$
\State $v$ : $L_{vu}(t) \propto L_v(t-1)$ s.t. $|L_{vu}(t)| = \alpha |L_v(t-1)|$
\State $u$ sends $L_{uv}(t)$ to $v$
\State $v$ sends $L_{vu}(t)$ to $u$
\EndFor
\For {$u \in V$}
\State $L_u(t) = L_u(t-1) \cup \bigcup\limits_{v \in N(u)} L_{vu}(t)$
\EndFor
\EndFor
\Return $L_u(T), u \in V$
\end{algorithmic}
\end{algorithm}
Given the graph structure $G=(V,E)$, two parameters $\alpha \in [0,1]$, $\beta \geq 0$ and the number of rounds $T$, the protocol takes place in two stages. In the initialization stage, each node $u$ prepares his own noisy friend list $L_u(0)$ by adding $\beta |N(u)|$ fake links of the form $(u \rightarrow w)$ where $w \notin N(u)$ (Lines 3 and 4). In the exchange stage (Lines 6-13), at round $t$, each node $u$ makes a noisy list $L_{uv}(t)$ for every neighbor $v$ that contains $\alpha |L_u(t-1)|$ links sampled from $L_u(t-1)$. The exchange happens on every relationship (true link). Each node takes the union of all noisy links he receives before starting the next round.
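The following Python sketch simulates Algorithm~\ref{algo-baseline} on
a small graph (our own illustrative implementation; links are
represented as unordered pairs, so a fake link is indistinguishable
from a true link to its receivers, as in the protocol):
\begin{verbatim}
import random

def canon(a, b):          # links travel as unordered pairs
    return (a, b) if a <= b else (b, a)

def baseline_exchange(adj, alpha, beta, T, seed=0):
    """adj: dict node -> set of neighbours. Returns local views L_u(T)."""
    rng = random.Random(seed)
    nodes = sorted(adj)
    L = {}
    for u in nodes:       # initialization: add beta*d_u fake links
        strangers = [w for w in nodes if w != u and w not in adj[u]]
        fakes = rng.sample(strangers,
                           min(len(strangers), int(beta * len(adj[u]))))
        L[u] = ({canon(u, v) for v in adj[u]}
                | {canon(u, w) for w in fakes})
    for _ in range(T):    # exchange stage
        inbox = {u: set() for u in nodes}
        for u in nodes:
            for v in adj[u]:   # u sends an alpha-sample of L_u(t-1) to v
                k = int(alpha * len(L[u]))
                inbox[v].update(rng.sample(sorted(L[u]), k))
        for u in nodes:
            L[u] |= inbox[u]
    return L

# e.g. baseline_exchange({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
#                         3: {2}}, alpha=1.0, beta=0.5, T=2)
\end{verbatim}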
\subsubsection{Faster Simulation on a Single PC}
For simulation on a single PC, storing all link lists of all nodes in clear form is costly. Moreover, the union operation on lists is time-consuming. We present here a technique to reduce the memory footprint and processing time using bit sets.
Fig. \ref{fig:bit-set} outlines the idea. We have $M' = (1+2\beta)|E|$ distinct links, consisting of $|E|$ true links and $2\beta|E|$ fake links. By indexing the $M'$ links from 0 to $M'-1$, the noisy link list at each node is stored in a bit set of size $M'$. The union of link lists (Line 13 of Algorithm \ref{algo-baseline}) is equivalent to an OR operation between bit sets. To prepare $L_{uv}(t)$ for link exchange in round $t$, node $u$ must recover the link IDs from its bit set.
We emphasize that indexing links and storing link IDs in bit sets are only for simulation. In reality, the number of links is unknown to the nodes, so they must run the Baseline or Bloom filter (Section \ref{sec:bf}) protocol.
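A sketch of this trick using Python integers as bit sets (illustrative
only):
\begin{verbatim}
def links_to_bitset(link_ids):
    """Pack link indices (0..M'-1) into a Python int used as a bit set."""
    bs = 0
    for i in link_ids:
        bs |= 1 << i
    return bs

def union(bs1, bs2):
    """Line 13 of Algorithm 1 collapses to a single OR."""
    return bs1 | bs2

def recover_ids(bs):
    """Unpack the bit set into link indices before sampling L_uv(t)."""
    return [i for i in range(bs.bit_length()) if bs >> i & 1]
\end{verbatim}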
\begin{figure}
\centering
\includegraphics[height=2.0in]{bit-set}
\setlength{\abovecaptionskip}{-10pt}
\caption{Fast simulation using bit sets (column vectors)}
\vspace{-1.0em}
\label{fig:bit-set}
\end{figure}
For the case $\alpha = 1$, the exchange volume is reduced further if each node $u$ sends only ``new'' links, i.e. the links that do not exist in $u$'s list in the previous round. Fig. \ref{fig:incremental} visualizes this idea in which ``new'' links are in shaded region and old links are in white region. Note that the incremental volume is valid only for $\alpha = 1$. When $\alpha < 1$, the phenomenon of multipath propagation (Fig. \ref{fig:edge-propagation}) requires both new and old links to be sampled with probability $\alpha$.
\begin{figure}
\centering
\includegraphics[height=1.4in]{incremental}
\setlength{\abovecaptionskip}{-10pt}
\caption{Incremental volume for $\alpha = 1$}
\vspace{-1.0em}
\label{fig:incremental}
\end{figure}
\subsubsection{Utility-Oriented Initialization}
\label{subsec:util-oriented}
The Baseline scheme in the previous section lets a node $u$ generate fake links by connecting $u$ to a certain number of nodes not in $u$'s friend list. This initialization may make local subgraphs at the final round have distorted path distributions due to many fake links connecting faraway nodes. Distorted path distributions reduce the ``utility'' perceived at each node.
Based on the observation that fake links connecting nearby nodes preserve utility better \cite{nguyen2015anonymizing}, we suggest a utility-oriented improvement with a two-round initialization. We call a fake link $(u\rightarrow v)$ a \textit{distance-2 link} if $d(u,v) = 2$. For example, $(0 \rightarrow 3)$ is a distance-2 fake link while $(2\rightarrow 10)$ is not. Correspondingly, $v$ is called a \textit{distance-2 node} w.r.t. $u$.
We introduce a new parameter $\gamma \in [0,1]$ which stipulates that each node $u$ create $\gamma\beta d_u$ fake links at $t = 0$ and exchange $\alpha(1+\gamma\beta)d_u$ randomly chosen links with each of its neighbors. Node $u$ collects node IDs and saves them in the set $ID_u$. At $t=1$, node $u$ uses the node IDs in $ID_u$ to create $(1-\gamma)\beta d_u$ fake links. Algorithm \ref{algo-init-util} implements this idea.
The number of distance-2 nodes that $u$ collects in Line 7 of Algorithm \ref{algo-init-util} is $\alpha (\sum_{v\in N(u)} d_v - d_u - 2 Tri(u))$, where $Tri(u)$ is the number of triangles with $u$ as a vertex. We assume that the set $Fa_0(u)$ contains no distance-2 links (Line 3, Algorithm \ref{algo-init-util}). The number of non-distance-2 nodes that $u$ collects is $\sum_{v\in N(u)} \alpha\gamma\beta d_v$. The expected number of distance-2 links that $u$ can create is
\begin{equation}
L2(u) = \frac{(1-\gamma)\beta d_u (\sum_{v\in N(u)} d_v - d_u - 2 Tri(u))}{[\sum_{v\in N(u)} d_v - d_u - 2 Tri(u)] + \sum_{v\in N(u)} \gamma\beta d_v} \nonumber
\end{equation}
$L2(u)$ is a decreasing function of $\gamma$. All nodes have the highest (resp. lowest) number of distance-2 fake links at $\gamma = 0$ (resp. $\gamma = 1$). The case of $\gamma = 1$ reduces to standard initialization (Lines 2-4 Algorithm \ref{algo-baseline}).
\begin{algorithm}
\caption{Two-round Initialization}
\label{algo-init-util}
\begin{algorithmic}[1]
\Require undirected graph $G=(V,E)$, parameters $\alpha,\gamma \in [0,1]$, $\beta \geq 0$
\Ensure each node $u$ issues $\beta d_u$ fake links
\State // t = 0
\For {$u \in V$}
\State $Fa_0(u) = \{(u \rightarrow w) | w \notin N(u) \}$ s.t. $|Fa_0(u)| = \gamma\beta |N(u)|$
\State $L_u(0) = \{(u,v)| v \in N(u)\} \cup Fa_0(u)$
\EndFor
\State // t = 1
\For {$(u,v) \in E$}
\State $u$ and $v$ exchange $\alpha$-fraction of their links
\EndFor
\For {$u \in V$}
\State $u$ aggregates all links it knows into $L_u(1)$
\State $ID_u = \{w \mid w=v_1 \vee w=v_2, (v_1,v_2) \in L_u(1) \} \setminus (\{u\} \cup N(u))$
\State $Fa_1(u) = \{(u \rightarrow w) | w \in ID_u \}$
\State $\;\;\;\;$ s.t. $|Fa_1(u)| = (1-\gamma)\beta |N(u)|$
\State $L_u(1) = L_u(1) \cup Fa_1(u)$
\EndFor
\end{algorithmic}
\end{algorithm}
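As an illustration of the ID-collection step (Line 12 of Algorithm \ref{algo-init-util}), the following Java sketch extracts the candidate set $ID_u$ from the links $u$ has learned; the method and type names are illustrative and not part of our implementation.
\begin{verbatim}
import java.util.HashSet;
import java.util.Set;

// Sketch of the ID-collection step; names are illustrative.
public class IdCollection {
    // linksKnown: links (v1,v2) in L_u(1), each as a two-element array.
    static Set<Integer> collectIds(Set<int[]> linksKnown, int u,
                                   Set<Integer> neighborsOfU) {
        Set<Integer> ids = new HashSet<>();
        for (int[] link : linksKnown) {   // take both endpoints of every link
            ids.add(link[0]);
            ids.add(link[1]);
        }
        ids.remove(u);                    // exclude u itself
        ids.removeAll(neighborsOfU);      // exclude direct neighbors N(u)
        return ids;                       // ID_u: fake-link targets for t = 1
    }
}
\end{verbatim}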
\subsection{Complexity Analysis}
Let $A$ be the adjacency matrix of $G$ and $D$ the column vector of the node degree sequence. The number of links at all nodes is upper bounded by the following vector, where $I_N$ is the identity matrix of size $N$.
\begin{equation}
LU(t) = (I_N + \alpha A)^t (1+\beta) D \label{eqn:upperbound}
\end{equation}
We say $LU(t)$ is an ``upper bound'' because $LU(t)$ counts duplicate links. More precisely, let $LU_u(t)$ and $LU_{uv}(t)$ be the noisy link lists at node $u$ and for exchange, respectively, without removing duplicate links as in Line 13 of Algorithm \ref{algo-baseline}. We have $LU_u(t) = LU_u(t-1) + \sum\limits_{v \in N(u)} LU_{vu}(t)$, where ``+'' denotes \textit{multiset} semantics. Clearly, $|L_u(t)| \leq |LU_u(t)|$.
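The bound in equation (\ref{eqn:upperbound}) can be evaluated by repeated matrix-vector products, as in the following Java sketch on a toy 3-node path graph; all concrete values are illustrative only.
\begin{verbatim}
// Sketch: evaluate LU(T) = (I + alpha*A)^T (1+beta) D by repeated
// matrix-vector products on a toy 3-node path graph (toy values).
public class UpperBound {
    public static void main(String[] args) {
        double alpha = 0.5, beta = 0.5;
        double[][] A = {{0,1,0},{1,0,1},{0,1,0}};   // adjacency matrix
        double[] lu = {1+beta, 2*(1+beta), 1+beta}; // (1+beta) * D
        int T = 3;
        for (int t = 0; t < T; t++) {
            double[] next = new double[lu.length];
            for (int i = 0; i < lu.length; i++) {
                next[i] = lu[i];                        // I * lu
                for (int j = 0; j < lu.length; j++)
                    next[i] += alpha * A[i][j] * lu[j]; // + alpha*A*lu
            }
            lu = next;
        }
        System.out.println(java.util.Arrays.toString(lu)); // LU(T)
    }
}
\end{verbatim}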
\begin{figure}
\centering
\includegraphics[height=1.4in]{edge-propagation}
\setlength{\abovecaptionskip}{-10pt}
\caption{Multipath link propagation}
\vspace{-1.0em}
\label{fig:edge-propagation}
\end{figure}
Note that the number of rounds $T$ can be small, as the following analysis shows. We have four simple facts (see Fig. \ref{fig:edge-propagation}):
\begin{enumerate}
\item a true link $(v,w)$ is propagated to node $u$ at round $t$ if and only if $\min \{d(u,v), d(u,w)\} = t$ for $\alpha = 1$.
\item a fake link $(v\rightarrow w)$ is propagated to node $u$ at round $t$ if and only if $d(u,v) = t$ for $\alpha = 1$.
\item a true link $(v,w)$ is propagated to node $u$ at round $t$ with probability $\sum_{p_l \in P(u,v) \cup P(u,w)} \alpha^l$ for $\alpha < 1$. Here $p_l$ is a path of length $l$ from $u$ to $v$ or $w$.
\item a fake link $(v\rightarrow w)$ is propagated to node $u$ at round $t$ with probability $\sum_{p_l \in P(u,v)} \alpha^l$ for $\alpha < 1$.
\end{enumerate}
We consider three cases.
\textbf{Case 1: $\alpha = 1, \beta = 0$} In this case, there are no fake links. Using Fact 1, we have $|L_u(Diam(G)-1)| = m$, i.e., every node $u$ receives all true links in $G$ after $Diam(G)-1$ rounds.
\textbf{Case 2: $\alpha = 1, \beta > 0$} In this case, there are $2\beta m$ fake links. Using Facts 1 and 2, we have $|L_u(Diam(G))| = (1+2\beta)m$, i.e. every node $u$ receives all true links and fake links in $G$ after $Diam(G)$ rounds.
\textbf{Case 3: $\alpha < 1, \beta \geq 0$} In this case, there are $2\beta m$ fake links. Using Facts 3 and 4, every node $u$ receives all true links $(v,w)$ in $G$ after $T$ rounds if
\begin{equation}
\sum_{t=1}^T [(\alpha A)^t]_{vu} + [(\alpha A)^t]_{wu} \geq 1
\end{equation}
and all fake links $(v\rightarrow w)$ if
\begin{equation}
\sum_{t=1}^T [(\alpha A)^t]_{vu} \geq 1
\end{equation}
The protocol's complexity is measured in storage, computation and communication. Because all links are stored in clear form, all complexities increase round by round (except in the trivial case $\alpha = 0$). They are also upper bounded by the total number of links in the graph, which is $(1+2\beta)|E_G|$. Intuitively, low-degree nodes incur lower complexities than high-degree nodes; however, as $t$ increases, the gap narrows. In Section \ref{sec:bf}, we achieve constant complexities by using Bloom filters.
\subsection{Privacy Analysis}
\label{subsec:baseline-priv}
In this section, we discuss the link inference attacks that can be mounted by nodes. Each node knows the true links connecting itself to its neighbors, the fake links it creates before the first round, and the fake links pointing to it. The remaining links stored at node $u$ (denoted $B_u$) are subject to an inference attack by $u$. As discussed in Section \ref{subsec:attack-model}, $u$ may mount an inference attack by sorting the links in $B_u$ by weight and picking the top-$K$ links as true links.
In Baseline $(\alpha,\beta)$-exchange, the ratio of true links to fake links is $\frac{1}{\beta}$. Each user can therefore set $K = \frac{|B_u|}{1+\beta}$ and divide $B_u$ into two sets: $T_u$ (predicted true links) and $F_u$ (predicted fake links). The numbers of true positives, false positives, false negatives and true negatives are (see Fig. \ref{fig:attack-measure} for an illustration)
\begin{align}
TP_u &= |E_G \cap T_u| \;, FP_u = |T_u \setminus E_G| \\
FN_u &= |E_G \cap F_u| \;, TN_u = |F_u \setminus E_G|
\end{align}
The precision, recall and F1 score are defined as $Prec=TP_u/(TP_u + FP_u)$, $Recall=TP_u/(TP_u + FN_u)$ and $F1 = 2 \cdot Prec \cdot Recall/(Prec + Recall)$.
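The following Java sketch computes these measures on toy prediction sets (the string encodings of links and all values are illustrative, not taken from our experiments); it mirrors the definitions above.
\begin{verbatim}
import java.util.HashSet;
import java.util.Set;

// Sketch of the attack measures on toy link sets.
public class AttackMetrics {
    static <T> int intersect(Set<T> a, Set<T> b) {
        Set<T> c = new HashSet<>(a); c.retainAll(b); return c.size();
    }
    public static void main(String[] args) {
        Set<String> eG = Set.of("0-1", "1-2", "2-3");  // true links E_G
        Set<String> tU = Set.of("0-1", "1-2", "5-7");  // predicted true T_u
        Set<String> fU = Set.of("2-3", "4-9");         // predicted fake F_u
        int tp = intersect(eG, tU), fp = tU.size() - tp;
        int fn = intersect(eG, fU);
        double prec = (double) tp / (tp + fp);
        double recall = (double) tp / (tp + fn);
        double f1 = 2 * prec * recall / (prec + recall);
        System.out.printf("Prec=%.2f Recall=%.2f F1=%.2f%n",
                prec, recall, f1);
    }
}
\end{verbatim}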
\begin{figure}
\centering
\includegraphics[height=1.4in]{attack-measure}
\setlength{\abovecaptionskip}{-10pt}
\caption{Inference attack measures}
\vspace{-1.0em}
\label{fig:attack-measure}
\end{figure}
\section{Bloom Filter Based Scheme}
\label{sec:bf}
\subsection{Motivation}
\label{subsec:bf-motivation}
Baseline $(\alpha,\beta)$-exchange has several drawbacks that motivate a better approach. First, all link lists are in clear form, allowing nodes to track link frequencies for the inference attack (Section \ref{subsec:baseline-priv}). If we obfuscate the link lists, this kind of attack may be mitigated; hashing could be a solution. Second, sending link lists in clear form may incur a high communication cost, especially at high-degree nodes. Assuming that all node IDs are in the range $\{0...2^{32}-1\}$, i.e., each ID needs 4 bytes, each link is encoded in 8 bytes. Given a link list, a better way to encode it is to store all links $(u,v_i)$ incident to $u$ as $\{u| \{v_i\}\}$. In this way, the message length for a link list can be reduced by up to 50\%. On average, each link costs between 32 and 64 bits. Using Bloom filters, the number of bits per link can be reduced further. For example, with $k = 4$, the number of bits per link is $k/\ln 2 \approx 5.8$.
This section introduces a Bloom filter based approach. Compared to the Baseline approach, it has several advantages as well as some limitations. By encoding links in compact form, Bloom filters reduce the storage and communication costs. The computation at each node is also much simpler, thanks to the logical OR operation replacing the set unions in Baseline.
\subsection{Bloom Filter Based Scheme}
Algorithm \ref{algo-bf} describes the Bloom filter version of $(\alpha,\beta)$-exchange. As inputs, we add a global false positive probability $p$ and the number of links $|E_G|$. As analyzed in \cite{broder2004network}, the number of hash functions $k$ is set to $\lceil -\log_2{p}\rceil$ (Line 2). The number of bits per link is $c = k/\ln 2$ (Line 3). The length of every Bloom filter is $m = c \cdot |E_G|$ (Line 4). Then, each node $u$ initializes its Bloom filter $Bf_u(0)$ by hashing all links in $L_u(0)$ with the $k$ hash functions. At the same time, all nodes send their noisy links $L_u(0)$ to the coordinator, who gathers all links into the list $L$. This list is used in the recovery stage.
In the exchange stage, each pair of nodes $(u,v)$ prepares and exchanges noisy link lists in encoded form, $Bf_{uv}(t)$ and $Bf_{vu}(t)$ (Lines 14-18). Before the next round, each node aggregates all Bloom filters sent to it by taking the OR operation (Lines 19 and 20). Finally, the recovery stage helps each node obtain its noisy local view $L_u(T)$. In this stage, the coordinator sends $L$ to all nodes. If we omit the role of the coordinator (Lines 5, 11 and 23), each node $u$ has to hash each of the $\frac{N(N-1)}{2}$ possible links against its final Bloom filter $Bf_u(T)$.
\begin{algorithm}
\caption{Bloom filter $(\alpha,\beta)$-exchange}
\label{algo-bf}
\begin{algorithmic}[1]
\Require undirected graph $G=(V,E)$, parameters $\alpha \in [0,1]$, $\beta \geq 0$, number of rounds $T$, false positive probability $p$
\Ensure noisy local views of graph $L_u(T), u \in V$
\State // initialization stage
\State $k = \lceil -\log_2{p}\rceil$ (see equation (\ref{eqn:k}))
\State $c = k/\ln 2$
\State $m = c \cdot |E_G|$
\State $L = \emptyset$
\For {$u \in V$}
\State $Bf_u(0) = \text{BloomFilter(k,m,c)}$
\State $Fa(u) = \{(u \rightarrow w) | w \notin N(u) \}$ s.t. $|Fa(u)| = \beta |N(u)|$
\State $L_u(0) = \{(u,v)| v \in N(u)\} \cup Fa(u)$
\State Hash all $e \in L_u(0)$ into $Bf_u(0)$
\State $L = L \cup L_u(0)$
\EndFor
\State // exchange stage
\For {$t = 1..T$}
\For {$(u,v) \in E$}
\State $u$ prepares $Bf_{uv}(t) = \text{BitErasure}(Bf_u(t-1), \alpha)$
\State $v$ prepares $Bf_{vu}(t) = \text{BitErasure}(Bf_v(t-1), \alpha)$
\State $u$ sends $Bf_{uv}(t)$ to $v$
\State $v$ sends $Bf_{vu}(t)$ to $u$
\EndFor
\For {$u \in V$}
\State $Bf_u(t) = Bf_u(t-1) \vee \bigvee\limits_{v \in N(u)} Bf_{vu}(t)$
\EndFor
\EndFor
\State // link recovery stage
\For {$u \in V$}
\State $L_u(T) = \text{Hash}(L, Bf_u(T))$
\EndFor
\Return $L_u(T), u \in V$
\end{algorithmic}
\end{algorithm}
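For illustration, a minimal Java sketch of the per-node Bloom filter follows. The double-hashing trick $h_i(x) = h_1(x) + i \cdot h_2(x) \bmod m$ that simulates $k$ hash functions is a standard implementation choice, not a requirement of the protocol; a link $(u,v)$ with $u<v$ can be packed into a \texttt{long} as \texttt{((long)u << 32) | v}. All names are illustrative.
\begin{verbatim}
import java.util.BitSet;

// Minimal per-node Bloom filter sketch (illustrative names).
public class LinkBloomFilter {
    private final BitSet bits;
    private final int m, k;

    LinkBloomFilter(int m, int k) {
        this.m = m; this.k = k; this.bits = new BitSet(m);
    }
    private int index(long link, int i) {   // i-th hash of a packed link
        int h1 = Long.hashCode(link);
        int h2 = Long.hashCode(link * 0x9E3779B97F4A7C15L);
        return Math.floorMod(h1 + i * h2, m);
    }
    void add(long link) {                   // hash a link into the filter
        for (int i = 0; i < k; i++) bits.set(index(link, i));
    }
    boolean mightContain(long link) {       // used in the recovery stage
        for (int i = 0; i < k; i++)
            if (!bits.get(index(link, i))) return false;
        return true;
    }
    void or(LinkBloomFilter other) {        // aggregation by OR
        bits.or(other.bits);
    }
}
\end{verbatim}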
\subsubsection{Bit Erasure}
Because Bloom filters store link information in encoded form, we have to simulate the $\alpha$-sampling steps (Lines 8 and 9, Algorithm \ref{algo-baseline}).
$\alpha$-sampling is equivalent to the ``deletion'' of $(1-\alpha)|Bf_u(t-1)|$ elements from $Bf_u(t-1)$. We could perform this operation by recovering the elements in $Bf_u(t-1)$, explicitly keeping an $\alpha$-fraction of them, and hashing these elements into an empty Bloom filter. This approach, however, is costly because the node must try $\frac{N(N-1)}{2}$ possible links. As a result, an implicit removal of a $(1-\alpha)$-fraction of elements is needed.
Resetting one bit causes one or several misses (false negatives) and possibly reduces false positives. For example, resetting the second bit of the Bloom filter in Fig. \ref{fig:bloom-filter} makes $x$ a false negative, whereas resetting the 12th bit makes both $y$ and $z$ disappear. Moreover, if the 8th bit is reset, $x$ becomes a false negative and $w$ is no longer a false positive.
Let $m_1$ be the number of 1-bits in the Bloom filter $Bf_u(t-1)$ and $s$ be the number of randomly reset bits ($s < m_1$); the probability that a true positive remains in the Bloom filter is
\begin{equation}
(1-\frac{s}{m_1})^k \label{eqn:remove-bit}
\end{equation}
Neglecting the effect of false positives (which, as illustrated above, is reduced), formula (\ref{eqn:remove-bit}) equals exactly the sampling fraction $\alpha$. In other words,
\begin{equation}
\alpha = (1-\frac{s}{m_1})^k \Rightarrow s = m_1 (1-\alpha^{1/k})
\end{equation}
We can see that $s$ is a decreasing function of $\alpha$ and $k$. An illustration of this fact is shown in Fig. \ref{fig:bit-erasure}.
\begin{figure}
\centering
\includegraphics[height=1.5in]{bit-erasure}
\caption{Fraction of erased bits as a function of $\alpha$ and $k$}
\vspace{-1.0em}
\label{fig:bit-erasure}
\end{figure}
Algorithm \ref{algo-bit-erasure} realizes $\alpha$-sampling implicitly via bit erasure.
\begin{algorithm}
\caption{Bit Erasure}
\label{algo-bit-erasure}
\begin{algorithmic}[1]
\Require Bloom filter $B$, parameter $\alpha \in [0,1]$, number of hashes $k$
\Ensure Bloom filter $B'$ that contains approximately $\alpha$ fraction of elements in $B$
\State $B' = B$
\State $M_1 = \{i | B(i) = 1\}$
\State $m_1 = |M_1|$
\State $s = \lfloor m_1 (1-\alpha^{1/k}) \rfloor$
\State randomly reset $s$ bits among the $m_1$ one-bit positions of $B'$ \\
\Return $B'$
\end{algorithmic}
\end{algorithm}
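A direct Java sketch of Algorithm \ref{algo-bit-erasure} follows (names are illustrative); it collects the one-bit positions, computes $s = \lfloor m_1(1-\alpha^{1/k}) \rfloor$, and clears $s$ of them at random.
\begin{verbatim}
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collections;
import java.util.List;

// Direct sketch of Bit Erasure (illustrative names).
public class BitErasure {
    static BitSet erase(BitSet b, double alpha, int k) {
        BitSet bPrime = (BitSet) b.clone();
        List<Integer> ones = new ArrayList<>();  // M_1: 1-bit positions
        for (int i = bPrime.nextSetBit(0); i >= 0;
                 i = bPrime.nextSetBit(i + 1))
            ones.add(i);
        int m1 = ones.size();
        int s = (int) Math.floor(m1 * (1 - Math.pow(alpha, 1.0 / k)));
        Collections.shuffle(ones);               // pick s random 1-bits
        for (int i = 0; i < s; i++) bPrime.clear(ones.get(i));
        return bPrime;                           // keeps ~alpha of elements
    }
}
\end{verbatim}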
\subsubsection{Bloom Filter Compression}
\label{subsec:bf-compress}
In Algorithm \ref{algo-bf}, all Bloom filters stored at nodes and transmitted between nodes have length $m$ bits, where $m = |E_G|k/\ln 2$. For $p = 0.1$, we have $k = 4$ and $m \approx 5.8 |E_G|$. For $p = 0.01$, we have $k = 7$ and $m \approx 10.1|E_G|$. For million-scale graphs with hundreds of millions of links, the length of the Bloom filters would be hundreds of megabytes. This is undesirable for message transmission, although storage and computation are not big problems. However, we observe that, as in Baseline $(\alpha,\beta)$-exchange, not all messages have length $\Theta(|E_G|)$. Thus, lossless data compression is a useful tool for Bloom filter exchange.
Arithmetic coding \cite{moffat1998arithmetic} is such a lossless compression scheme. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding \cite{huffman1952method}: Huffman coding separates the input into component symbols, with symbol probabilities approximated by negative powers of two, and replaces each with a code, whereas arithmetic coding encodes the entire message into a single number, a fraction $f$ with $0.0 \leq f < 1.0$.
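To illustrate why compressing sparse bit arrays pays off, the following Java sketch uses \texttt{java.util.zip.Deflater} as a stand-in lossless compressor; the scheme above uses arithmetic coding, and \texttt{Deflater} is chosen here only because it ships with the JDK. The toy filter is mostly zeros, as Bloom filters are in early rounds.
\begin{verbatim}
import java.util.zip.Deflater;

// Sketch: compressing a mostly-zero filter. Deflater is a stand-in
// JDK compressor; the protocol itself uses arithmetic coding.
public class FilterCompression {
    public static void main(String[] args) {
        byte[] filter = new byte[1 << 20];  // 1 MiB array, mostly zeros
        for (int i = 0; i < filter.length; i += 997) filter[i] = 1;

        Deflater d = new Deflater(Deflater.BEST_COMPRESSION);
        d.setInput(filter);
        d.finish();
        byte[] out = new byte[filter.length];
        int len = d.deflate(out);  // one call suffices: data shrinks a lot
        d.end();
        System.out.println("raw = " + filter.length
                + " B, compressed = " + len + " B");
    }
}
\end{verbatim}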
\subsection{Complexity and Privacy Analysis}
Thanks to the constant size of the bit arrays and the constant time of OR operations, the per-round communication cost of the Bloom Filter scheme is constant, and so is the aggregation of noisy link lists. However, the Bloom Filter scheme incurs an extra recovery step at all nodes: each node needs to download the full noisy link set $L$ from the coordinator. As we confirm in Section \ref{subsec:eval-bf}, the exchange time of the Bloom Filter scheme is much lower than that of Baseline, but the recovery step costs more time.
As mentioned in Section \ref{subsec:bf-motivation}, because all link lists are obfuscated in Bloom filters, frequency-based inference attacks may be mitigated if the set of all links $L$ is revealed to the nodes only after the final round. The ratio of true links to fake links in the Bloom Filter scheme is almost identical to that of Baseline. The reason lies in the independence of the links in the exchange protocols: all links have the same probability of being sampled and sent to a node's neighbors. Interestingly, Bloom Filter reduces the true/fake link ratio faster than Baseline for small $\alpha$ (Section \ref{subsec:eval-vol-inference}), thanks to its inherent false positives as well as the false negatives caused by bit erasure.
\section{Evaluation}
\label{sec:eval}
In this section, we empirically evaluate the performance of our proposed schemes on synthetic graphs. All algorithms are implemented in Java and run on a desktop PC with an $Intel^{\circledR}$ Core i7-4770 @ 3.4GHz and 16GB of memory.
Two kinds of synthetic graphs are generated: Barab\'{a}si-Albert power-law (PL) graphs and Erd\H{o}s-R\'{e}nyi (ER) random graphs \cite{newman2003structure}. Table \ref{tab:dataset} lists the six synthetic graphs used in our experiments. Each test case is run 10 times. We abbreviate the two schemes as Baseline (BS) and BloomFilter-based (BF).
We choose $\alpha \in \{0.25, 0.5, 0.75, 1.0\}$ and $\beta \in \{0.5, 1.0\}$. The default number of hash functions $k$ is 4.
\begin{table}[htb]
\small
\centering
\caption{Synthetic graphs} \label{tab:dataset}
\begin{tabular}{|c|r|r|r|}
\hline
\textbf{Graph} &\textbf{\#Nodes} & \textbf{\#Links} & \textbf{Diameter}\\
\hline
PL1 & 10,000 & 29,990 & 7 \\
\hline
\textbf{PL2} & \textbf{10,000} & \textbf{49,970} & \textbf{6} \\
\hline
PL3 & 10,000 & 99,872 & 5 \\
\hline
ER1 & 10,000 & 30,076 & 10 \\
\hline
\textbf{ER2} & \textbf{10,000} & \textbf{50,424} & \textbf{7} \\
\hline
ER3 & 10,000 & 99,615 & 5 \\
\hline
\end{tabular}
\end{table}
\subsection{Message Volume and Inference Attacks}
\label{subsec:eval-vol-inference}
We investigate the message volume via the total number of true/fake links at all nodes after each round $t=1..Diam(G)$. These values are normalized by dividing them by $N \cdot M \cdot (1+2\beta)$. We also estimate the inference attacks by the ratio between the number of true links and the number of fake links. Figs. \ref{fig:er2} and \ref{fig:pl2} show two-y-axis charts: the left y-axis is for the normalized number of links; the right y-axis is for the ratios.
Several observations can be made from Figures \ref{fig:er2} and \ref{fig:pl2}. First, the number of true/fake links increases exponentially and converges quickly as all nodes reach round $Diam(G)$. For $\alpha = 0.25$, Baseline does not converge because not all links are propagated to all nodes. The Bloom filter scheme produces a higher number of true/fake links, especially at $\alpha=0.25, 0.5$; for larger values of $\alpha$, the two schemes almost coincide. Second, the ratio of true links to fake links decreases round by round and converges to $\frac{1}{2\beta}$. In early rounds, the ratios are lower than $\frac{1}{\beta}$. The higher the ratio, the higher the inference risk on true links. Clearly, the Bloom Filter scheme reduces this risk better than Baseline for $\alpha=0.25, 0.5$ in later rounds.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$}
\label{fig:er2-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$}
\label{fig:er2-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$}
\label{fig:er2-3-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$}
\label{fig:er2-4-1}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=1.0$}
\label{fig:er2-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=1.0$}
\label{fig:er2-2-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=1.0$}
\label{fig:er2-3-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=1.0$}
\label{fig:er2-4-2}
\end{subfigure}
\caption{Normalized number of true/fake links and link ratios on ER2}
\label{fig:er2}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$}
\label{fig:pl2-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$}
\label{fig:pl2-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$}
\label{fig:pl2-3-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$}
\label{fig:pl2-4-1}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=1.0$}
\label{fig:pl2-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=1.0$}
\label{fig:pl2-2-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=1.0$}
\label{fig:pl2-3-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_1.00.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=1.0$}
\label{fig:pl2-4-2}
\end{subfigure}
\caption{Normalized number of true/fake links and link ratios on PL2}
\label{fig:pl2}
\end{figure*}
Fig. \ref{fig:deg-volume} displays the distribution of link volume collected at sample nodes. We sort $V$ by degree and take 100 sample nodes. ER graphs, commonly called \textit{homogeneous} graphs, show nearly uniform distributions for various values of $(\alpha, \beta)$. By contrast, PL graphs are \textit{heterogeneous}, and their sample nodes exhibit much more varied distributions.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, ER2}
\label{fig:deg-er-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, ER2}
\label{fig:deg-er-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, ER2}
\label{fig:deg-er-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, ER2}
\label{fig:deg-er-2-2}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, PL2}
\label{fig:deg-pl-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, PL2}
\label{fig:deg-pl-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, PL2}
\label{fig:deg-pl-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50_deg.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, PL2}
\label{fig:deg-pl-2-2}
\end{subfigure}
\caption{Number of links at sampled nodes ($t=1(.), t=2(+), t=3(\circ), t=4(\square), t=5(\diamond), t=6(\triangle), t=7(*)$)}
\label{fig:deg-volume}
\end{figure*}
The inference attack on the Baseline scheme (Section \ref{subsec:baseline-priv}) is shown in Fig. \ref{fig:attack}. The average F1 scores for two values of $\beta$ are plotted at different rounds of the Baseline protocol. We observe that the scores are quite close to the theoretical values $1/(1+\beta)$ (see the dashed lines). On the ER2 graph, the inference attack is more effective in later rounds and for larger $\alpha$, while this trend is not clear on PL2.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_attack.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{ER2, $\alpha=0.5$}
\label{fig:attack-er2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_attack.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{ER2, $\alpha=1.0$}
\label{fig:attack-er2-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_attack.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{PL2, $\alpha=0.5$}
\label{fig:attack-pl2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_attack.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{PL2, $\alpha=1.0$}
\label{fig:attack-pl2-2}
\end{subfigure}
\caption{Inference attacks}
\label{fig:attack}
\end{figure*}
\subsection{Bloom Filter Scheme}
\label{subsec:eval-bf}
In this section, we examine the performance of the Bloom Filter scheme. We set the false positive rate of the Bloom filter to 0.1, 0.01 and 0.001 (the number of hash functions $k$ is 4, 7 and 10, respectively). Fig. \ref{fig:fp-volume} displays the normalized number of true/fake links for Baseline and Bloom Filter with different false positive rates. We find that lower false positive rates bring no improvement for $\alpha=0.25, 0.5$. Bit Erasure (Algorithm \ref{algo-bit-erasure}) causes this effect: lower $\alpha$ means more bits to be erased in the Bloom filters. Consequently, the number of false positive and false negative links is amplified round by round for small $\alpha$.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_0.50_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, ER2}
\label{fig:fp-er-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_1.00_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=1.0$, ER2}
\label{fig:fp-er-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_0.50_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, ER2}
\label{fig:fp-er-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_1.00_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=1.0$, ER2}
\label{fig:fp-er-2-2}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_0.50_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, PL2}
\label{fig:fp-pl-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_1.00_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=1.0$, PL2}
\label{fig:fp-pl-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, PL2}
\label{fig:fp-pl-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_1.00_fp.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=1.0$, PL2}
\label{fig:fp-pl-2-2}
\end{subfigure}
\caption{Normalized number of true/fake links by different false positive rates}
\label{fig:fp-volume}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_0.50_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, ER2}
\label{fig:comp-er-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_1.00_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=1.0$, ER2}
\label{fig:comp-er-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_0.50_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, ER2}
\label{fig:comp-er-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_1.00_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=1.0$, ER2}
\label{fig:comp-er-2-2}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_0.50_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, PL2}
\label{fig:comp-pl-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_1.00_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=1.0$, PL2}
\label{fig:comp-pl-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_0.50_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, PL2}
\label{fig:comp-pl-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_1.00_compress-log-cumul.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=1.0$, PL2}
\label{fig:comp-pl-2-2}
\end{subfigure}
\caption{Communication complexity. Y-axis is the total number of bytes transmitted among nodes (log-scale)}
\label{fig:comp-volume}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.25_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, ER2}
\label{fig:runtime-er-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.50_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, ER2}
\label{fig:runtime-er-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_0.75_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, ER2}
\label{fig:runtime-er-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=er_10000_0001_1.00_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, ER2}
\label{fig:runtime-er-2-2}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.25_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.25, \beta=0.5$, PL2}
\label{fig:runtime-pl-1-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.50_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.5, \beta=0.5$, PL2}
\label{fig:runtime-pl-1-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_0.75_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=0.75, \beta=0.5$, PL2}
\label{fig:runtime-pl-2-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50_runtime.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{$\alpha=1.0, \beta=0.5$, PL2}
\label{fig:runtime-pl-2-2}
\end{subfigure}
\caption{Total simulation runtime of all nodes (in millisecond)}
\label{fig:runtime}
\end{figure*}
We compare the communication complexity of the Baseline and Bloom Filter schemes. Fig. \ref{fig:comp-volume} reports the number of bytes transmitted among nodes after each round in Baseline and Bloom Filter (with or without compression).
The Baseline scheme stores links in clear form, so it incurs exponential communication complexity. As discussed in Section \ref{subsec:bf-motivation}, we assume that each node ID costs 4 bytes and a link list of length $l$ may be stored compactly in $4l$ bytes.
Bloom Filter uses constant-sized bit arrays, so its communication cost is constant too. However, each node running the Bloom Filter scheme has to download the full noisy list of $(1+2\beta)M$ links after the final round to find which links are contained in its bit array. The download step costs $4N(1+2\beta)M$ bytes over all $N$ nodes, and its communication cost dominates that of the bit array exchange. Using bit array compression (Section \ref{subsec:bf-compress}), the Bloom Filter scheme slightly reduces the message size, especially in early rounds when a large part of the bit arrays consists of zero bits. For $\alpha = 0.75$, the Bloom Filter scheme saves communication cost in the last three rounds on both ER2 and PL2. For $\alpha = 0.25$, it is worse than Baseline in all rounds (except the final round) on ER2 and in the first four rounds on PL2.
In Fig. \ref{fig:runtime}, we compare the runtime of the Baseline and Bloom Filter simulations on a single PC. In each round, each node updates its link set (the \texttt{count} operation) by aggregating the noisy link lists from its neighbors. Then, each node prepares (the \texttt{exchange} operation) the new noisy link lists, sampled from its link set, for the next round.
For $\alpha < 1$, the exchange operations take increasing time as more rounds are considered, and higher $\alpha$ makes the link sampling slower; only at $\alpha = 1$ do we have fast exchange operations. In particular, the exchange runtime of the Bloom Filter scheme is constant for $\alpha = 1$ and an increasing function of the round for $\alpha < 1$, due to the bit erasure operations. The count operation of Bloom Filter dominates that of Baseline because each node has to hash the full link set to recover its noisy link set at each round.
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-1_PL.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-PL-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-1_CC.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-CC-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-1_APD.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-APD-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-1_Dist.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-Dist-1}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-2_PL.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-PL-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-2_CC.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-CC-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-2_APD.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-APD-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-2_Dist.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-Dist-2}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-3_PL.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-PL-3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-3_CC.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-CC-3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-3_APD.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-APD-3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-3_Dist.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-Dist-3}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-6_PL.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-PL-6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-6_CC.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-CC-6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-6_APD.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-APD-6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.22\textwidth}
\centering
\epsfig{file=pl_10000_5_01_1.00_0.50-6_Dist.eps, height=1.2in}
\setlength{\abovecaptionskip}{0pt}
\caption{}
\label{fig:util-Dist-6}
\end{subfigure}
\caption{Utility relative errors on PL2 ($\alpha = 1.0, \beta = 0.5$)}
\label{fig:utility}
\end{figure*}
\subsection{Utility-Oriented Initialization}
In this section, we illustrate the benefit of two-round initialization (Algorithm \ref{algo-init-util}). We set $\gamma = 0.0, 0.5$ and denote the enhanced scheme as \textit{D2}. Several utility metrics are chosen, as listed below; a toy computation of the $CC$ metric is sketched after the list.
\begin{itemize}
\item Power-law exponent of the degree sequence: $PL$ is the estimate of $\eta$, assuming the degree sequence follows a power law $n_d \sim d^{-\eta}$ where $n_d$ is the number of $d$-degree nodes.
\item Clustering coefficient: $CC = \frac{3N_{\Delta}}{N_3}$ where $N_{\Delta}$ is the number of triangles and $N_3$ is the number of connected triples.
\item Average distance: $APD$ is the average distance among all pairs of vertices that are path-connected.
\item Distance distribution: $Distance$ is the normalized node-pair shortest-path histogram.
\end{itemize}
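As a toy illustration of the $CC$ metric, the following Java sketch counts triangles and connected triples directly from an adjacency matrix; the 4-node example is purely illustrative (it yields $CC = 3 \cdot 1 / 5 = 0.6$).
\begin{verbatim}
// Toy computation of CC = 3*N_triangle / N_3 from an adjacency matrix.
public class ClusteringCoefficient {
    public static void main(String[] args) {
        int[][] adj = {{0,1,1,0},{1,0,1,1},{1,1,0,0},{0,1,0,0}};
        int n = adj.length, triangles = 0, triples = 0;
        for (int i = 0; i < n; i++) {
            int deg = 0;
            for (int j = 0; j < n; j++) deg += adj[i][j];
            triples += deg * (deg - 1) / 2;  // triples centered at i
            for (int j = i + 1; j < n; j++)
                for (int k = j + 1; k < n; k++)
                    if (adj[i][j] + adj[j][k] + adj[i][k] == 3)
                        triangles++;
        }
        System.out.println("CC = " + (3.0 * triangles / triples));
    }
}
\end{verbatim}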
We take 100 sample nodes by degree and compare the local aggregated graphs to the ground truth. The ground truth is computed by setting $\beta = 0$ in the Baseline scheme. Fig. \ref{fig:utility} shows the benefit of two-round initialization (D2-0.0 and D2-0.5) on the PL2 graph in early rounds. The D2-0.0 and D2-0.5 schemes yield lower relative errors than Baseline and Bloom Filter in the first and second rounds, especially on the $CC$ and $PL$ metrics. All schemes are comparable at $t=3$, except on the $CC$ metric. Finally, Baseline and Bloom Filter are almost equivalent in terms of utility, and they perform better than the D2 schemes at $t = Diam(G)$ on the $PL$, $APD$ and $Distance$ metrics.
\section{Conclusion}
\label{sec:conclusion}
We motivate the private link exchange problem as an alternative to social graph crawling and centralized anonymization of data. The problem is distributed and provides a privacy/utility trade-off for all nodes. Our proposed problem is unique in the sense that the disseminated data over the links are the links themselves. We present two schemes for $(\alpha,\beta)$-exchange protocol: Baseline and Bloom filter based.
Experiments on synthetic graphs clarify the advantages and drawbacks of both schemes. The Baseline scheme keeps link lists in clear form, so its communication cost increases quickly. The Bloom Filter scheme incurs lower communication complexity but needs an extra recovery step in the final round. Both schemes guarantee link privacy in the range $[\frac{1}{2\beta}, \frac{1}{\beta}]$. In Baseline, the inference attack based on link counting is not much better than the random attack.
For future work, we plan to investigate asynchronous models and node/links failures. We also consider community-based link exchange models in which nodes are gathered in super nodes and the link exchange takes place among super nodes only.
\bibliographystyle{abbrv}
One of the greatest mysteries of neural networks (NNs) is their ability to generalize well without any explicit regularization, even when they are heavily overparameterized \citep{breiman1995reflections, CZhang}. For conventional machine learning algorithms, without a regularization term, heavily overparameterized models easily overfit the data. For NNs, however, it has been empirically observed that, with proper initialization, the training trajectories are implicitly biased towards well-generalizing solutions. Such a training-induced regularization effect is commonly referred to as implicit regularization and is a central issue in deep learning theory.
Currently, our theoretical understanding of implicit regularization is very limited. In this work, we make a further step to explore the following basic theoretical questions about implicit regularization with an aim of helping us understand NNs better: (i) How to define implicit regularization mathematically; (ii) What is the relation between implicit regularization and conventional explicit regularization; (iii) How to characterize implicit regularization.
Questions (ii) and (iii) are closely related in the sense that, if implicit and explicit regularizations are equivalent, then we can always hope to find an explicit regularization function that fully characterizes any implicit regularization. In our work, we specifically address the relation between implicit regularization and a widely considered class of explicit regularization---regularization by a data-independent function. In our study, this problem reduces to whether there always exists a data-independent function $G$ over the parameter space whose value exactly quantifies the preference of a certain training process. For overparameterized linear models, specific nonlinear models, and also NNs in the NTK regime, such a data-independent function $G$ can be exactly derived, as detailed in Related Works. On the other hand, it has been proved that, for specific problems like matrix factorization, stochastic convex optimization or one-neuron ReLU NNs, implicit regularization cannot be explained by norms, strongly convex functions or data-independent functions \cite{NRazinNorm,ADauber,GVardi}.
In our work, we make a further step by proposing two types of global nonlinear dynamical mechanisms beyond the description of data-independent functions (see Section \ref{overlapping mechanisms and examples}). Importantly, we provide two general recipes, i.e., the Two-point and One-point Overlapping Recipes, for producing families of one-hidden-neuron NNs that realize these two dynamical mechanisms, respectively, and whose implicit regularizations provably cannot be fully characterized by any data-independent function. Based on these results, we believe these two mechanisms commonly exist in the training dynamics of general NNs; that is to say, the implicit regularization of NNs is in general data-dependent.
Our contribution in this work is summarized as follows.
\begin{itemize}
\item [(a)] We give a mathematical definition of regularization, and define implicit and explicit regularization accordingly (Section \ref{revisiting regularization}).
\item [(b)] We propose two general dynamical mechanisms, i.e., Two-point and One-point Overlapping mechanisms, which put stringent constraints or even make it impossible to fully characterize implicit regularization by data-independent functions (Section \ref{overlapping mechanisms and examples}).
\item [(c)] We provide specific one-hidden-neuron NNs with sigmoid and Softplus activations which realize the proposed mechanisms.
\item [(d)] We provide Two-point and One-point Overlapping Recipes to produce rich classes of one-hidden-neuron NNs which can realize each of these two mechanisms (Section \ref{overlapping recipes}).
\item[(e)] Our results suggest the profound data-dependency of implicit regularization in general, which should be carefully studied for NNs in the future.
\end{itemize}
\section{Related Works}
In recent years, many works have studied implicit regularization \citep{kukavcka2017regularization} for various problems. Progress has been achieved on many of them, e.g., matrix/tensor factorization, deep linear neural networks, NNs in the NTK regime, linear and nonlinear models, and general nonlinear deep NNs. We recapitulate some of these works as follows.
For general non-linear NNs, empirical studies suggest that NNs are implicitly regularized towards low-complexity functions during training \citep{arpit2017closer,kalimeris2019sgd,goldt2020modeling,jin2019quantifying}. For example, the frequency principle \citep{xu_training_2018,xu2019frequency,rahaman2018spectral,zhang2021linear,xu2022overview} quantifies the implicit regularization towards ``simple solutions'' by showing that NNs learn the data from low to high frequency, i.e., an implicit low-frequency regularization. The deep frequency principle qualitatively explains why deep learning can be faster by empirically showing that the effective target function for a deeper hidden layer biases towards lower frequency during training \citep{xu2020deep}. However, such low-complexity/low-frequency regularization of general deep non-linear models is hard to characterize by an exact function. Several special cases are therefore often studied, for example, models linear w.r.t. the trainable parameters, models linear w.r.t. both the trainable parameters and the input, and models with the homogeneity property.
For the first case, i.e., NNs that are linear w.r.t. the trainable parameters but still non-linear w.r.t. the input, such as NNs in the linear regime studied in \citet{luo2020phase} and the NTK regime studied in \citet{jacot2018neural}, the low-frequency regularization of non-linear NNs can be exactly formulated by a data-independent function \citep{zhang2021linear,luo2020theory}, which explicitly shows that NNs pick a low-frequency function from multiple solutions. Meanwhile, the implicit regularization of NNs in the linear regime can also be explicitly characterized by a norm of the difference between the initial and learned parameters, or a norm of the difference between the initial and learned NN outputs \citep{YZhang,mei2019mean}. \citet{chizat2020implicit} shows that infinitely wide two-layer neural networks in the linear regime with homogeneous activations can be fully characterized as max-margin classifiers in certain situations.
For the second case, such as deep linear NNs, which are linear w.r.t. both the trainable parameters and the input, a series of results have been obtained.
The implicit bias/regularization of depth in deep linear NNs has been studied more quantitatively, such as biasing towards simple functions to improve generalization
\cite{gissin2019implicit} and accelerating training by providing a regularization that can be approximated by momentum with adaptive learning rates in gradient descent training \citep{arora2018optimization}. Gradient descent training takes linear fully-connected networks to solutions with an implicit max-margin regularization \cite{soudry2018implicit}, while it takes linear convolutional networks to linear solutions with another penalty in the frequency domain \citep{gunasekar2018implicitcnn}. Deep matrix factorization by deep linear networks with gradient descent induces nuclear norm minimization of the learned matrix, leading to an implicit low-rank regularization \citep{gunasekar2018implicit,arora2019implicit,chou2020gradient}.
For the third case, specific models are studied. For example, \citet{woodworth2020kernel} study simple homogeneous models for which the implicit bias of training with gradient descent can be exactly derived as a function of the scale of the initialization.
Attempts to find explicit characterizations for the implicit regularization of non-linear NNs in general encounter much difficulty. Therefore, another line of work makes an effort to construct counter-examples that provably cannot be characterized explicitly by specific types of functions like norms, strongly convex functions or more general data-independent functions \citep{NRazinNorm,ADauber,GVardi}.
\citet{NRazinNorm} prove that, under some conditions, a deep linear NN, utilized to perform a matrix completion task by gradient descent with the mean squared error, can converge to a solution all of whose norms are infinite; that is, the implicit bias drives all norms of the solution towards infinity. This example implies that the implicit bias of the deep linear NN in such a situation cannot be described by any norm.
The example given by \citet{ADauber} is based on stochastic convex optimization, while our work does not require convexity.
\citet{GVardi} makes a step closer to general nonlinear NNs by providing examples of one-neuron NNs; their results rely on zero initialization and the manually assigned derivative of ReLU at $0$.
Our work makes a further step along this line, with an aim of better understanding the nature of the implicit regularization of general NNs. Our work provides not only examples (Sections \ref{example for one-point} and \ref{example for two-point}), but also general mechanisms (Section \ref{overlapping mechanism}) as well as corresponding recipes (Section \ref{overlapping recipes}) that can produce classes of one-hidden-neuron NNs whose implicit regularization cannot be fully characterized by certain types of, or even all, data-independent functions. Our results suggest the profound data-dependency of general implicit regularization, which is a valuable insight for advancing the study of NN implicit regularization.
\section{Revisiting Regularization} \label{revisiting regularization}
In conventional machine learning problems, regularization is often realized by adding a specific term to the loss function, namely explicit regularization, to help solve otherwise ill-posed problems. However, implicit regularization, which is key to the magic of NNs, is imposed by a specific training process. To discuss explicit and implicit regularization in a unified way, we revisit the notion of regularization in this section. We first present a list of notations used in this paper. Then we provide a general mathematical formulation of regularization, and define implicit and explicit regularization accordingly. Finally, we discuss the two types of characterization of implicit regularization that we focus on in this paper.
\subsection{Notations}
We will use the following notations throughout the paper, unless we specify their meanings explicitly. $\sigma$ denotes a real-valued activation function on $\sR$ and $\tsigma$ is its reciprocal, namely, $\tsigma(x) := 1 / \sigma(x)$. In the Two-point Overlapping Recipe (Sections \ref{CP1 partA} and \ref{CP1 partB}), we have no requirement on the smoothness of $\sigma$ (or $\tsigma$), while in the One-point Overlapping Recipe (Section \ref{CP2}), we require that $\sigma \in C^1(\sR)$. $\vtheta_i, \vtheta_i^*, \vp$ are parameters in $\sR^M$, where $i$ is an index. Usually, $\vtheta_i := (\vw_i, a_i)$ and $\vtheta_i^* := (\vw_i^*, a_i^*)$. After Section \ref{revisiting regularization}, we consider a training dataset of only one sample, denoted by $S := (\vx, y)$ or $S_i := (\vx_i, y_i)$, where $i$ is an index, $\vx_i \in \sR^d$ and $y_i \in \sR$. After Section \ref{overlapping mechanisms and examples}, $d = M - 1$. We consider the $\ell_2$ loss function, denoted by $L$ or $L(\cdot, S)$, with $L(\vtheta) := L(\vtheta, S) := |f(\vtheta, \vx) - y|^2$ for any given $f(\vtheta, \cdot): \sR^d \to \sR$. The set of global minima of $L$ with respect to $S$ is denoted by $\fM_S$. If $\min L(\cdot, S) = 0$, then $\fM_S = L(\cdot, S)^{-1}(\{0\})$. $\gamma$ denotes a curve and its parametrization from an interval to $\sR^M$. Finally, $A_{\text{min}}$ denotes the method that finds $\fM_S$ for $L(\cdot, S)$ (the definition of a \textit{method} is given in the subsection below).
\subsection{Regularization}
We begin by defining the regularization in a general sense as a mapping between collections of algorithms in the following.
We say $A$ is a \textit{method} if it maps an arbitrary dataset $S$ to a subset $A(S)$ of the parameter space $\sR^M$ of a model. We call $A(S)$ the \textit{solution set of $A$}.
\begin{definition}[Regularization]\label{Regularization}
Let $\fF := \{ f(\vtheta, \cdot): \sR^d \to \sR \}_{\vtheta \in \sR^M}$ be a family of functions parametrized by $\vtheta \in \sR^M$ and $\fA, \fA'$ be two collections of methods that find solutions to the parameters of $f(\vtheta, \cdot)$. The mapping
\begin{equation}
\fR: \fA \ni A \mapsto \fR(A) \in \fA'
\end{equation}
is called \textit{regularization}.
\end{definition}
By definition, regularization $\fR$ assigns each method $A \in \fA$ to another method. The effect of this assignment is that $\fR$ implicitly relates the solution set of $A$ to that of $\fR(A)$ (if both exist), in the sense that this relation is hidden behind the formula of $\fR$.
Remark that the above definition emphasizes the mathematical essence of regularization in general, which is a mapping between collections of algorithms. In practice, the study of regularization mostly focuses on understanding the properties of specific families of such mappings, e.g., explicit regularization and implicit regularization, defined as follows.
\subsection{Implicit and Explicit Regularization}
Based on definition \ref{Regularization} of regularization, we further define implicit and explicit regularization, whose relation is the main focus of this work.
In the two definitions below, $\fF := \{ f(\vtheta, \cdot): \sR^d \to \sR \}_{\vtheta \in \sR^M}$ is a family of functions parametrized by $\vtheta \in \sR^M$. Let $A_{\text{min}}$ be the method that finds the global minima of $L(\cdot, S)$ given dataset $S$, namely, $A_{\text{min}}(S)=\{\vtheta: L(\vtheta,S)=\min_{\vtheta' \in \sR^M} L(\vtheta', S)\}$, and let $\fA = \{ A_{\text{min}} \}$.
\begin{definition}[Implicit regularization of GF]\label{Implicit regularization}
Let $A_{\text{GF}, \vtheta_0}$ be the gradient flow (GF) of $L$ starting at $\vtheta_0$. For any given dataset $S$, $A_{\text{GF}, \vtheta_0}$ is defined by
\begin{equation}
\left \{ \begin{aligned}
& \dot{\vtheta}(t) = - \nabla_{\vtheta} L(\vtheta, S) \\
& \vtheta(0) = \vtheta_0.
\end{aligned} \right.
\end{equation}
Let $\fA' = \{ A_{\text{GF}, \vtheta_0} \}_{\vtheta_0 \in \sR^M}$. The regularization $\fR_{\vtheta_0}: \fA \to \fA'$, $\fR_{\vtheta_0} (A_{\text{min}}) = A_{\text{GF}, \vtheta_0}$ is called an \textit{implicit regularization of GF for $L$}, or simply, an \textit{implicit regularization for $L$}.
\end{definition}
\begin{remark}
For any $\vtheta_0$, we can find a regularization $\fR_{\vtheta_0}$ from $\fA$ to $\fA'$. This means we obtain a collection of regularizations indexed by the initial value of $A_{\text{GF}, \vtheta_0}$.
\end{remark}
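To make Definition \ref{Implicit regularization} concrete, the following Java sketch integrates the GF of the $\ell_2$ loss for a one-hidden-neuron sigmoid NN $f(\vtheta, x) = a\,\sigma(w x)$ on a single sample by the forward Euler method; all concrete values ($S$, $\vtheta_0$, the step size, the number of steps) are illustrative choices of ours, not taken from the analysis below.
\begin{verbatim}
// Sketch: forward-Euler integration of the GF of the l2 loss for a
// one-hidden-neuron sigmoid NN f(theta, x) = a * sigmoid(w * x) on a
// single sample S = (x, y). All concrete values are illustrative.
public class GradientFlow {
    static double sig(double z) { return 1.0 / (1.0 + Math.exp(-z)); }
    public static void main(String[] args) {
        double x = 1.0, y = 0.3;   // dataset S = (x, y)
        double w = 0.5, a = 0.2;   // initial value theta_0 = (w_0, a_0)
        double dt = 1e-2;          // Euler step for dtheta = -grad L
        for (int step = 0; step < 100000; step++) {
            double s = sig(w * x);
            double r = a * s - y;                     // f(theta,x) - y
            double gw = 2 * r * a * s * (1 - s) * x;  // dL/dw
            double ga = 2 * r * s;                    // dL/da
            w -= dt * gw; a -= dt * ga;
        }
        // Numerical approximation of the long-term limit theta_0^*:
        System.out.printf("theta* = (%.4f, %.4f)%n", w, a);
    }
}
\end{verbatim}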
To motivate the study of implicit regularization, we give the following definition of explicit regularization.
\begin{definition}[Explicit regularization]
Let $\sR^M$ be the parameter space for $L$. Let $\fA'$ be the collection of methods $A'$ such that, given dataset $S$, any $\vtheta^* \in A'(S)$ satisfies
\begin{equation}
J_S(\vtheta^*, A') = \min_{\vtheta \in \sR^M} J_S(\vtheta, A'),
\end{equation}
where $J_S: \sR^M \times \fA' \to \sR$ is a given explicit function encoding the information of $L$ (see remark below). The regularization $\fR_{A'}: \fA \to \fA'$ that assigns $A_{\text{min}}$ to $A'$ is called an \textit{explicit regularization for $L$}.
\end{definition}
\begin{remark}
We provide above general definition of explicit regularization to unify the following two common forms of explicit regularization for $L$.
(i) $\fA' = \{ A_{\text{GF}, \vtheta_0} \}_{\vtheta_0 \in \sR^M}$ and $J_S(\vtheta^*, A_{\text{GF}, \vtheta_0}) = H(\vtheta^*, A_{\text{GF}, \vtheta_0}) I_{\fM_S}(\vtheta^*)$, where $H: \sR^M \times \fA' \to \sR$ is given, $I_{\fM_S} = 1$ on the set of global minima $\fM_S$ of $L(\cdot, S)$ and $I_{\fM_S} = +\infty$ otherwise. Then any $\vtheta^* \in A_{\text{GF}, \vtheta_0}(S)$ satisfies
\begin{equation}
H(\vtheta^*, A_{\text{GF}, \vtheta_0}) = \min_{\vtheta \in \fM_S} H(\vtheta, A_{\text{GF}, \vtheta_0}).
\end{equation}
Moreover, since each $A_{\text{GF}, \vtheta_0}$ is determined by $\vtheta_0$, this is equivalent to saying that there is some $G: \sR^M \times \sR^M \to \sR$ such that for any $\vtheta^* \in A_{\text{GF}, \vtheta_0}(S)$,
\begin{equation}
G(\vtheta^*, \vtheta_0) = \min_{\vtheta \in \fM_S} G(\vtheta, \vtheta_0).
\end{equation}
In this case, $\fR_{A'}$ is said to be \textit{characterized by data-independent function $H$ (or $G$)}.
(ii) $J_S(\vtheta, A') = L(\vtheta, S) + H(\vtheta, A')$ for some $H: \sR^M \times \fA' \to \sR$. In this case, $\fR_{A'}$ is the (additive) explicit regularization that is commonly used in machine learning. For example, $H(\vtheta, A') := \| \vtheta \|_r$ for $r \geq 1$.
\end{remark}
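As a concrete illustration of form (ii), the following Python sketch (ours; the linear model, the choice $r=2$ and the penalty weight are illustrative assumptions, not prescribed by the definition) minimizes an additively penalized objective by gradient descent:
\begin{verbatim}
import numpy as np

def ridge_gd(X, y, lam=0.1, lr=0.005, steps=20000):
    # Gradient descent on J(theta) = ||X theta - y||^2 + lam * ||theta||^2,
    # i.e., an additive explicit regularization of the l2 loss.
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) + 2.0 * lam * theta
        theta -= lr * grad
    return theta

# Usage: the penalized minimizer is pulled toward the origin relative
# to the unregularized least-squares solution.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])
print(ridge_gd(X, y))
\end{verbatim}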
\subsection{Characterization of Implicit Regularization}
A direct approach to understanding the implicit regularization $\fR$ is to look at the values of a certain data-independent function $G$ over $\fM_S$ to determine the element chosen (or preferred) by $\fR$. Depending on the amount of information about $\fR$ provided by $G$, we classify the following two types of characterization of $\fR$ by $G$.
\begin{definition}\label{explicit definition}
We say that the implicit regularization for $L$ is \textit{characterized by a data-independent function} $G: \sR^M \times \sR^M \to \sR$ if for any $S$ and any $\vtheta_0$,
\begin{equation}
\text{argmin}_{\vtheta \in \fM_S} G(\vtheta, \vtheta_0)=\vtheta_0^*,
\end{equation}
where $\vtheta_0, \vtheta_0^*$ are the initial value and long-term limit of the GF for $L$, respectively.
\end{definition}
\begin{remark}
To guarantee that the argmin operation is valid, we require that $\vtheta_0^*$ is the \textit{unique} point at which the restriction of $G(\cdot, \vtheta_0)$ on $\fM_S$, $G(\cdot, \vtheta_0)|_{\fM_S}$, attains its minimum.
\end{remark}
\begin{definition}\label{weak explicit definition}
We say that the implicit regularization for $L$ is \textit{characterized by a data-independent function} $G: \sR^M \times \sR^M \to \sR$ \textit{in the weak sense} if for any $S$ and any $\vtheta_0$,
\begin{equation}
\min_{\vtheta \in \fM_S} G(\vtheta, \vtheta_0) = G(\vtheta_0^*, \vtheta_0),
\end{equation}
where $\vtheta_0, \vtheta_0^*$ are the initial value and long-term limit of the GF for $L$, respectively.
\end{definition}
\begin{remark}
Let us give two immediate examples.
(i) It is not difficult to see that if an implicit regularization can be characterized by a data-independent function, then it can be characterized by a data-independent function in the weak sense.
(ii) The constant function $G(\cdot,\cdot)\equiv\mathrm{Const}$ always characterizes the implicit regularization for $L$ in the weak sense.
\end{remark}
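For contrast with the negative results below, recall the classical positive example: for the linear family $f(\vtheta, \vx) = \vtheta^{\rm T}\vx$ with under-determined $\ell_2$ loss, the implicit regularization of GF is characterized by the data-independent function $G(\vtheta, \vtheta_0) = \|\vtheta - \vtheta_0\|_2^2$, since the GF limit is the Euclidean projection of $\vtheta_0$ onto $\fM_S$. The following Python sketch (ours; step size and iteration count are illustrative) checks this numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))   # under-determined: 3 samples, 5 params
y = rng.standard_normal(3)
theta0 = rng.standard_normal(5)

# Gradient descent (a small-step proxy for GF) on ||X theta - y||^2.
theta = theta0.copy()
for _ in range(200000):
    theta -= 1e-3 * 2.0 * X.T @ (X @ theta - y)

# Euclidean projection of theta0 onto M_S = {theta : X theta = y},
# i.e., the argmin of G(theta, theta0) = ||theta - theta0||^2 over M_S.
proj = theta0 + X.T @ np.linalg.solve(X @ X.T, y - X @ theta0)
print(np.max(np.abs(theta - proj)))  # ~0: the GF limit is the projection
\end{verbatim}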
\section{Overlapping Mechanisms and Examples} \label{overlapping mechanisms and examples}
By the above definitions, the study of implicit regularization is in essence to characterize the mapping from $A_{\min}(S)$ to $\fR(A_{\min})(S)$ obtained by tracing families of training trajectories of specific training dynamics like GF. Focusing on the approach of finding data-independent functions for characterization, it is important to study the potential dynamical mechanisms that put stringent constraints on $G$ or even make data-independent characterization impossible. In the following, we propose two such mechanisms, i.e., the Two-point Overlapping Mechanism and the One-point Overlapping Mechanism. These two mechanisms can be realized by one-hidden-neuron NNs with common activation functions like sigmoid and Softplus, as shown by two concrete examples in Sections \ref{example for two-point} and \ref{example for one-point}.
\subsection{Overlapping Mechanisms} \label{overlapping mechanism}
\begin{lemma}[Two-point Overlapping Mechanism]\label{two-point lemma}
Fix $\vtheta_0 \in \sR^M$. Let $I$ be an index set and $\left \{ S_i \right \}_{i \in I}$ be a collection of sample sets. For each $i \in I$, let $\vtheta_i^*$ denote the long-term limit of the GF for $L(\cdot, S_i)$ starting at $\vtheta_0$. Suppose that each $\vtheta_i^* \in \fM_{S_i}$, and that for any $i$, there is some $j \in I \backslash \{i\}$ such that $\vtheta_j^* \in \fM_{S_i} \backslash \{ \vtheta_i^* \}$ and $\vtheta_i^* \in \fM_{S_j} \backslash \{ \vtheta_j^* \}$ (see Figure \ref{implicit CP1} for an example).
\begin{itemize}
\item [(a)] The implicit regularization for $L$ cannot be characterized by any data-independent function $G: \sR^M \times \sR^M \to \sR$.
\item [(b)] Any data-independent function $G$ that characterizes the implicit regularization for $L$ in the weak sense is constant on $\{ \vtheta_i^* \}_{i \in I}$.
\item [(c)] Any data-independent function $G \in C(\sR^M \times \sR^M)$ that characterizes the implicit regularization for $L$ in the weak sense is constant on the closure of $\{ \vtheta_i^* \}_{i \in I}$.
\end{itemize}
\end{lemma}
\begin{proof}
See the proof of Lemma \ref{app two-point lemma} in Appendix.
\end{proof}
\begin{lemma}[One-point Overlapping Mechanism]\label{G-derivative lemma}
Fix $\vtheta_0, \vtheta_0^* \in \sR^M$. Let $\{ \gamma_i \}_{i=1}^M$ be $M$ trajectories of GF for $L$ from $\vtheta_0$ to $\vtheta_0^*$, such that $\lim_{t \to \infty} \frac{\dot{\gamma_i}(t)}{|\dot{\gamma_i} (t)|}$ exist for all $i$ and the limits are linearly independent. If the implicit regularization for $L$ is characterized by a data-independent function $G \in C^1 (\sR^M \times \sR^M)$ in the weak sense, then $\nabla G(\cdot, \vtheta_0)|_{\vtheta_0^*} = 0$, where the derivative is taken with respect to the first entry of $G$. (see Figure \ref{implicit CP2} for an example)
\end{lemma}
\begin{proof}
See the proof of Lemma \ref{appendix G-derivative lemma} in Appendix.
\end{proof}
\begin{remark}
The One-point Overlapping Mechanism puts a stringent constraint on $G$. If this mechanism is further strengthened such that trajectories starting from $\vtheta_0$ with different data $S$ can overlap at any point in a neighbourhood of $\vtheta_0^*$, then the corresponding implicit regularization cannot be characterized by any data-independent function. This strengthened mechanism can be realized for special cases in experiment, and we will try to provide a general recipe for it in our future works.
\end{remark}
The Two-point Overlapping Mechanism (Lemma \ref{two-point lemma}), which works for an arbitrary function $G: \sR^M \times \sR^M \to \sR$, is the heart of the Two-point Overlapping Recipes. It will be used to prove Theorem \ref{theorem 1}. The One-point Overlapping Mechanism (Lemma \ref{G-derivative lemma}), on the other hand, requires that $G \in C^1(\sR^M \times \sR^M)$. It is the heart of the One-point Overlapping Recipe and will be used to prove Theorem \ref{theorem 2}.
In the following subsections, we provide concrete examples of one-hidden-neuron NNs with common activation functions that can realize each of the above mechanisms. These two specific examples further inspire our general recipes in Section \ref{overlapping recipes} for producing rich classes of one-hidden-neuron NNs.
\subsection{Example for Two-point Overlapping Mechanism} \label{example for two-point}
In this example, we consider the one-hidden-neuron network with sigmoid activation, i.e.,
\begin{equation}
f(\vtheta, x) = f(w,a, x) = \frac{a}{1 + e^{-wx}},
\end{equation}
and one-sample $\ell_2$ loss. As illustrated in Figure \ref{implicit CP1}, by properly choosing two one-sample datasets $S_1=(x_1,y_1)$ and $S_2=(x_2,y_2)$, we can obtain two global-minima curves $\fM_{S_1}$ and $\fM_{S_2}$ intersecting at two points. Then, assigning each of these two points as a long-term limit (for a trajectory of GF), denoted by $\vtheta_1^*$ and $\vtheta_2^*$ respectively, we ``trace back'' the trajectories to find two lines of potential initial points intersecting at a point denoted by $\vtheta_0$.
By this procedure, we find a $\vtheta_0$, two samples $S_1$ and $S_2$, and two gradient trajectories $\gamma_1, \gamma_2$ converging to two points in $\fM_{S_1} \cap \fM_{S_2}$, as required by the Two-point Overlapping Mechanism (Figure \ref{implicit CP1}).
Thus, the implicit regularization for $L$ can only be characterized by a data-independent function $G: \sR^2 \times \sR^2 \to \sR$ in the weak sense, because we must have $G(\vtheta_1^*, \vtheta_0) = G(\vtheta_2^*, \vtheta_0) = \min_{\vtheta \in \fM_{S_1}} G(\vtheta, \vtheta_0)$. Clearly, $\vtheta_1^*$ and $\vtheta_2^*$ cannot be distinguished by any data-independent function $G$ without information from the data. Therefore, as the GF trajectories do distinguish $\vtheta_1^*$ and $\vtheta_2^*$, the corresponding implicit regularization must be data-dependent.
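This construction can be checked numerically. The following Python sketch (ours; it reuses the concrete values of Figure \ref{implicit CP1}, and the Euler step size and horizon are illustrative) integrates the two gradient flows from the common $\vtheta_0$ and verifies that, up to numerical error, each limit interpolates both samples, i.e., lies in $\fM_{S_1} \cap \fM_{S_2}$:
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gf(theta0, sample, dt=1e-3, steps=2 * 10**6):
    # Euler-integrated gradient flow of L = (a * sigmoid(w x) - y)^2.
    w, a = theta0
    x, y = sample
    for _ in range(steps):
        s = sigmoid(w * x)
        r = a * s - y
        w -= dt * 2.0 * r * a * s * (1.0 - s) * x
        a -= dt * 2.0 * r * s
    return w, a

theta0 = (0.922, 2.868)
S1, S2 = (1.0, 1.0), (12.307, 1.400)
for S in (S1, S2):
    w, a = gf(theta0, S)
    # Residuals on BOTH samples; both should be small at either limit.
    print([a * sigmoid(w * x) - y for (x, y) in (S1, S2)])
\end{verbatim}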
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.3\textwidth]{Implicit_sigmoid_CP1_2_w_limit.png}
\caption{Two-point Overlapping Mechanism realized by a one-hidden-neuron NN with sigmoid activation. Tracing back the gradient trajectories $\gamma_1, \gamma_2$ finds $\vtheta_0$. This procedure inspires our Two-point Overlapping Recipe. We choose the initial point $\vtheta_0 = (0.922, 2.868)$. The sample for (i) the blue lines is $(x_1, y_1) = (1,1)$; (ii) the orange lines is $(x_2,y_2) = (12.307, 1.400)$.}
\label{implicit CP1}
\end{figure}
\subsection{Example for One-point Overlapping Mechanism} \label{example for one-point}
In this example, we consider another one-hidden-neuron NN with Softplus activation, i.e.,
\begin{equation}
f(\vtheta, x) = f(w,a, x) = a \log(1 + e^{wx})
\end{equation}
and the one-sample $\ell_2$ loss. As illustrated in Figure \ref{implicit CP2}, we first choose an initial point $\vtheta_0 = (w_0, a_0)$. Then we use the one-sample dataset $S=(x, -a_0 \sigma(x w_0))$ with various $x$, by which we obtain distinct trajectories of GF from $\vtheta_0$ to $\vtheta^*=(w_0, -a_0)$, which converge to $\vtheta^*$ from different directions. In Figure \ref{implicit CP2}, we show both the trajectories (dashed line) and $\fM_S$'s (solid line), i.e., the sets of global minima of $L$, which clearly exhibits the One-point Overlapping Mechanism.
Thus, if the implicit regularization for $L$ is characterized by a data-independent function $G \in C^1(\sR^2 \times \sR^2)$ in the weak sense, then $\nabla G(\cdot, \vtheta_0)|_{\vtheta^*} = 0$, where the derivatives are taken with respect to the first entry of $G$.
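The following Python sketch (ours; the Euler step size and horizon are illustrative) reproduces this construction with the values of Figure \ref{implicit CP2}: all four gradient flows start at $\vtheta_0 = (0.3, 1)$ and approach the common limit $\vtheta^* = (0.3, -1)$ from different directions:
\begin{verbatim}
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w0, a0 = 0.3, 1.0
for x in (0.6, 1.0, 1.4, 1.8):
    y = -a0 * softplus(x * w0)   # sample chosen so (w0, -a0) lies in M_S
    w, a = w0, a0
    for _ in range(10**6):
        r = a * softplus(w * x) - y
        # d(softplus)/dz = sigmoid, used in the chain rule below
        w -= 1e-3 * 2.0 * r * a * sigmoid(w * x) * x
        a -= 1e-3 * 2.0 * r * softplus(w * x)
    print(x, (w, a))             # each limit should be close to (0.3, -1.0)
\end{verbatim}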
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.25\textwidth]{Implicit_softReLU_CP2_1.png}
\caption{One-point Overlapping Mechanism realized by a one-hidden-neuron NN with Softplus activation. The dashed lines are gradient trajectories and the solid lines are null sets of $L(\cdot, S)$, i.e., $\fM_S$'s, for different one-sample datasets $S=(x,y)$. We choose the initial value $\vtheta_0 = (w_0, a_0) = (0.3,1)$. Then $\vtheta^* = (w_0, -a_0) = (0.3,-1)$. The sample for (i) the blue lines is $(x,y) = (0.6, -a_0 \sigma(0.6 w_0))$; (ii) the orange lines is $(x,y) = (1.0, -a_0 \sigma(w_0))$; (iii) the brown lines is $(x,y) = (1.4, -a_0 \sigma(1.4 w_0))$; (iv) the grey lines is $(x,y) = (1.8, -a_0 \sigma(1.8 w_0))$.}
\label{implicit CP2}
\end{figure}
\section{Overlapping Recipes} \label{overlapping recipes}
Inspired by the above concrete examples, in this section we provide two general recipes that produce rich classes of one-hidden-neuron NNs whose GF trajectories realize the corresponding mechanisms and thus cannot be fully characterized by certain types of, or even any, data-independent functions.
\subsection{Two-point Overlapping Recipe (Part A)}\label{CP1 partA}
We first provide a recipe that produces one-hidden-neuron networks for which the implicit regularization for $L$ cannot be characterized by any data-independent function $G: \sR^M \times \sR^M \to \sR$. It works by selecting a common initial value $\vtheta_0$ for two gradient trajectories with respect to $S_1, S_2$, which converge to two points $\vtheta_1^*, \vtheta_2^* \in \fM_{S_2} \cap \fM_{S_1}$, respectively. In this procedure, the choice of $\vtheta_0$ and $\vtheta_1^*$ is (almost) arbitrary, and one of the samples ($S_1$ or $S_2$) can be chosen (almost) arbitrarily. Moreover, using this construction procedure, it is easy to construct a $\sigma$ of any degree of smoothness and with nice properties (monotonicity, periodicity, etc.).
The construction of $\sigma$, $\vtheta_0$, $\vtheta_1^*, \vtheta_2^*$ and samples $S_1, S_2$ is described as follows.
Let $P_h: \sR^{M-1} \to \span \{h\}$ be the orthogonal projection of $\sR^{M-1}$ onto $\span \{h\}$.
\begin{itemize}
\item [(a)] Find $\vw_0, a_0$ with $\vw_0 \neq 0$. Find $\vtheta_1^* = (\vw_1^*, a_1^*)$ with $\vw_1^* \neq \vw_0$, $S_1 = (\vx_1, y_1)$ with $\vx_1 \neq 0$, and $\sigma_1: \sR \to \sR$ such that the trajectory of GF for $L(\vtheta, S_1) = |a \sigma_1(\vx_1^\text{T} \vw) - y_1|^2$ starting at $\vtheta_0$ converges to $\vtheta_1^*$ as $t \to \infty$.
\item [(b)] Let $E_1 = \overline{\{\vx_1^\text{T} \vw(t): t \geq 0\}}$.
\item [(c)] Find some $\vw_2^*$ such that $\vx_1^\text{T}\vw_2^* \notin E_1 \cup \{0\}$, and if $l$ is the line segment connecting $\vw_0$ and $\vw_2^*$, then $0 \neq P_h(\vw_1^*) \notin P_h(l)$, where $h = \vw_2^* - \vw_0$.
\item [(d)] Find some $a_2^*$ and re-define $\sigma_1$ (if necessary) at $\{\vx_1^\text{T} \vw_2^*\}$ such that $a_2^* a_0 \geq 0$ and $a_2^* \sigma_1(\vx_1^\text{T} \vw_2^*) = y_1$. Let $\tilde{E}_1 = E_1 \cup \{\vx_1^\text{T} \vw_2^*\}$.
\item [(e)] Find $\vx_2 \in \span \{\vw_2^* - \vw_0\} \backslash \{0\}$ with $\sup \{|\vx_2|^{-1}|z|: z \in \tilde{E}_1\} < \min \{|P_h(\vw_1^*)|, |P_h(\vw_0)|, |P_h(\vw_2^*)|\}$.
\item [(f)] Define $\sigma_2: \sR \to \sR$ and $y_2$, such that i) $\sigma_2(\vx_2^\text{T} \vw) = \sigma_1(\vx_2^\text{T} \vw)$ whenever $\vx_2^\text{T} \vw \in \tilde{E}_1$, ii) $a_1^* \sigma_2(\vx_2^\text{T} \vw_1^*) = y_2$, and iii) the trajectory $\gamma := (\gamma_{\vw}, \gamma_a)$ of GF for $L(\cdot, S_2)$ starting at $\vtheta_0$ converges to $\vtheta_2^*$ as $t \to \infty$. Let $\sigma := \sigma_2$.
\end{itemize}
In Corollary \ref{expoential proposition corollary}, we show that steps (a) and (f) can indeed be carried out.
\subsection{Two-point Overlapping Recipe (Part B)}\label{CP1 partB}
The Two-point Overlapping Recipe (Part A) gives one-hidden-neuron networks that make it impossible to characterize the implicit regularization for $L$ by any data-independent function $G$. In fact, we can repeat the construction steps in Section \ref{CP1 partA} to obtain countably many samples $\{ S_n \}_{n=1}^\infty$ and countably many long-term limits of gradient trajectories $\{ \vtheta_n^* \}_{n=1}^\infty$ such that if the implicit regularization for $L$ is characterized by a data-independent function $G$ in the weak sense, then $G$ must be constant on $\{ \vtheta: \vtheta = \vtheta_n^*, n \in \sN \}$. The detailed procedure is given below. As in Section \ref{CP1 partA}, this procedure can also give a $\sigma$ of any degree of smoothness and with nice properties (monotonicity, periodicity, etc.).
The construction is described as follows. For $n = 1$, do steps (a) and (b) of Section \ref{CP1 partA} to obtain $\vtheta_0$, $\vtheta_1^*$, $\sigma_1$ and $E_1$. For $n \geq 2$, do the following steps.
\begin{itemize}
\item [(a)] Find some $k \in \{1, ..., n-1\}$ and $\vw_n^* \in \sR^{M-1}$ such that $\vx_k^\text{T} \vw_n^* \notin E_{n-1} \cup \{0\}$, and if $l$ is the line segment connecting $\vw_0$ and $\vw_n^*$ then $0 \neq P_h(\vw_k^*) \notin P_h(l)$, where $h = \vw_n^* - \vw_0$.
\item [(b)] Find some $a_n^*$ and re-define $\sigma_{n-1}$ (if necessary) at $\{\vx_k^\text{T} \vw_n^*\}$ such that $a_n^* a_0 \geq 0$ and $a_n^* \sigma_{n-1}(\vx_k^\text{T} \vw_n^*) = y_k$. Let $\tilde{E}_{n-1} = E_{n-1} \cup \{\vx_k^\text{T} \vw_n^*\}$.
\item [(c)] Find $\vx_n \in \span \{\vw_n^* - \vw_0\} \backslash \{0\}$ with $\sup \{|\vx_n|^{-1}|z|: z \in \tilde{E}_{n-1}\} < \min\{|P_h(\vw_k^*)|, |P_h(\vw_0)|, |P_h(\vw_n^*)|\}$.
\item [(d)] Define $\sigma_n: \sR \to \sR$ and $y_n$, such that i) $\sigma_n(\vx_n^\text{T} \vw) = \sigma_{n-1}(\vx_n^\text{T} \vw)$ whenever $\vx_n^\text{T} \vw \in \tilde{E}_{n-1}$, ii) $a_k^*\sigma_n(\vx_n^\text{T}\vw_k^*) = y_n$, iii) the trajectory $\gamma := (\gamma_w, \gamma_a)$ of GF for $L(\cdot, S_n)$ starting at $\vtheta_0$ converges to $\vtheta_n^*$ as $t \to \infty$. Let $E_n := \tilde{E}_{n-1} \cup \vx_n^\text{T} \gamma_w$.
\end{itemize}
Finally, after doing this countably many times, we have defined a function $\sigma_\infty$ on part of the real line. Now extend $\sigma_\infty$ to the whole real line, and let the extension be our activation function $\sigma$.
To see that $G$ must be constant on $\{ \vtheta: \vtheta = \vtheta_n^*, n \in \sN \}$, suppose we have proved that
\begin{equation}
G(\vtheta_1^*, \vtheta_0) = G(\vtheta_2^*, \vtheta_0) = ... = G(\vtheta_n^*, \vtheta_0).
\end{equation}
By our construction procedure, $\vtheta_{n+1}^* \in \fM_{S_k}$ for some $1 \leq k \leq n$, whence $G(\vtheta_k^*, \vtheta_0) \leq G(\vtheta_{n+1}^*, \vtheta_0)$. Similarly, $\vtheta_k^* \in \fM_{S_{n+1}}$, whence $G(\vtheta_{n+1}^*, \vtheta_0) \leq G(\vtheta_k^*, \vtheta_0)$. It follows that $G(\vtheta_k^*, \vtheta_0) = G(\vtheta_{n+1}^*, \vtheta_0)$, completing the induction step.
In the Two-point Overlapping Recipe, we only choose (or find) countably many points on which $G$ is constant. When $M = 2$, this is not a coincidence. In fact, for most $(w, a) \in \sR^2$ and most $(x_0, y_0) \in \sR^2$, there is a neighborhood $U$ around $(x_0, y_0)$ such that for any $(x,y) \in U$, the zero sets $L^{-1}(\cdot, (x_0, y_0)) \{0\}$ and $L^{-1}(\cdot, (x, y)) \{0\}$ cannot intersect at two points. Since each such $U$ contains a rational point and since $\sQ^2$ is countable and dense in $\sR^2$, it follows that we can find at most countably many points on which $G$ is constant. A detailed explanation is given in the following proposition. Note that when $L((a,w), (x,y)) = 0$, $a = y /\sigma(xw) = y \tsigma(xw)$.
\begin{proposition}[Two-point Overlapping Recipe is global when $M = 2$]\label{CP1 is global}
Let $w \in \sR$. Fix a sample $(x_0, y_0) \in \sR^2$ with $y_0 \neq 0$. Let $F:\sR^2 \to \sR$, $F(p,x) = \tsigma(xw)\tsigma(x_0 p) - \tsigma(xp)\tsigma(x_0 w)$. We have
\begin{itemize}
\item [(a)] Suppose that $|F(p,x)| \geq C |p - w|^k |x - x_0|^r$ for some $C > 0$ and $r,k \in \sN$ near $(w,x_0)$. Then for sufficiently small $\delta > 0$, if $0 < |x - x_0| < \delta$, $y \neq 0$ and $y \tsigma(xw) = y_0 \tsigma(x_0w)$, there is no $p \in \sR$ such that $0 < |p - w| < \delta$ and $y \tsigma(xp) = y_0 \tsigma(x_0p)$.
\item [(b)] Suppose that $\tsigma \in C^2$ and $\tsigma (x_0 w), \tsigma'(x_0 w) \neq 0$. Also suppose
\begin{equation}
\frac{1}{w} - x_0 \left [ \frac{\tsigma'(x_0 w)}{\tsigma(x_0 w)} - \frac{\tsigma''(x_0 w)}{\tsigma'(x_0 w)} \right ] \neq 0.
\end{equation}
Then for sufficiently small $\delta > 0$, if $0 < |x - x_0| < \delta$, $y \neq 0$ and $y \tsigma(xw) = y_0 \tsigma(x_0w)$, there is no $p \in \sR$ such that $0 < |p - w| < \delta$ and $y \tsigma(xp) = y_0 \tsigma(x_0p)$. If, however, $DF(p,x_0) \equiv 0$ for $p$ near $w$ or $DF(w,x) \equiv 0$ for $x$ near $x_0$, then $\sigma(x_0 p)$ is a power function near $x_0 w$.
\end{itemize}
\end{proposition}
\begin{proof}
See the proof of Proposition \ref{CP1 is global appendix} in Appendix.
\end{proof}
\begin{remark}
We do not prove the case in which $M > 2$, but we believe that the result also holds. Namely, for most $(\vx_0, y_0) \in \sR^M$, there is a neighborhood $U$ around $(\vx_0, y_0)$ such that for any $(\vx,y) \in U$, the zero sets $L^{-1}(\cdot, (\vx_0, y_0)) \{0\}$ and $L^{-1}(\cdot, (\vx, y)) \{0\}$ cannot intersect at two points.
\end{remark}
A direct application of Proposition \ref{CP1 is global} is given in the Corollary below.
\begin{corollary}\label{CP1 global corollary}
Let $w > 0$. Following the notations in Proposition \ref{CP1 is global}, all the results below hold.
\begin{itemize}
\item [(a)] Any $\sigma$ and $x_0 \in \sR$ such that $\sigma(x_0 p)$ is a power function near $w$ (this includes ReLU, PReLU, and Heaviside) satisfy $F = 0$ near $(w, x_0)$.
\item [(b)] If $\sigma = e^x$, for any $x_0 \in \sR$, we can find a sufficiently small $\delta > 0$ such that if $0 < |x - x_0| < \delta$, $y \neq 0$ and $y \tsigma(xw) = y_0 \tsigma(x_0w)$, there is no $p \in \sR$ with $0 < |p - w| < \delta$ and $y \tsigma(xp) = y_0 \tsigma(x_0p)$.
\item [(c)] If $\sigma = \frac{1}{1 + e^{-x}}$, for any $x_0 \in (-\infty, w^{-1})\cup(2w^{-1},\infty)$, we can find a sufficiently small $\delta > 0$ such that if $0 < |x - x_0| < \delta$, $y \neq 0$ and $y \tsigma(xw) = y_0 \tsigma(x_0w)$, there is no $p \in \sR$ with $0 < |p - w| < \delta$ and $y \tsigma(xp) = y_0 \tsigma(x_0 p)$.
\item [(d)] If $\sigma = e^{-x^2}$, for any $x_0 \in \sR$, $w \neq 0$, we can find a sufficiently small $\delta > 0$ such that if $0 < |x - x_0| < \delta$, $y \neq 0$ and $y \tsigma(xw) = y_0 \tsigma(x_0w)$, there is no $p \in \sR$ with $0 < |p - w| < \delta$ and $y \tsigma(xp) = y_0 \tsigma(x_0p)$.
\end{itemize}
\end{corollary}
\begin{proof}
See the proof of Corollary \ref{CP1 global corollary appendix} in Appendix.
\end{proof}
\subsection{One-point Overlapping Recipe}\label{CP2}
Clearly, Sections \ref{CP1 partA} and \ref{CP1 partB} are not the only ways to rule out the possibility that the implicit regularization of a one-hidden-neuron network is characterized by a data-independent function $G$. We present another way below, considering $C^1$ functions satisfying $G(\vp,\vq) = G(\vp-\vq, 0)$ for $\vp, \vq \in \sR^M$. It is called the \textit{One-point Overlapping Recipe}. In this recipe, the choice of samples is (almost) arbitrary, and we only require that $\sigma$ is differentiable, non-negative and strictly increasing on $\sR$.
The recipe is described as follows; a minimal numerical sketch of step (b) is given after the list.
\begin{itemize}
\item [(a)] Find any $\sigma: \sR \to \sR^+$ such that $\sigma' > 0$.
\item [(b)] For each $n \in \sN$, find any $\vtheta_n = (\vw_n, a_n)$ and any $\vx_n$. Set $S_n = (\vx_n, -a_n \sigma(\vx_n^\text{T} \vw_n))$.
\item [(c)] Repeat step (b) until we find enough $\vtheta_n$'s with different values of $a_n$ ($\vw_n$ can be arbitrary).
\end{itemize}
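The following Python sketch (ours; the specific $\sigma$, dimension and random draws are illustrative assumptions) spells out step (b) of the recipe:
\begin{verbatim}
import numpy as np

def softplus(z):             # an admissible sigma: positive, increasing
    return np.log1p(np.exp(z))

rng = np.random.default_rng(1)
samples = []
for n in range(4):
    w_n = rng.standard_normal(3)         # step (b): arbitrary theta_n ...
    a_n = float(rng.standard_normal())   # step (c): vary a_n across repeats
    x_n = rng.standard_normal(3)         # ... and arbitrary x_n
    y_n = -a_n * softplus(x_n @ w_n)     # S_n = (x_n, -a_n sigma(x_n^T w_n))
    samples.append((x_n, y_n))
\end{verbatim}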
This recipe works because of the following two lemmas.
\begin{lemma}\label{limit lemma}
Suppose that $\sigma > 0$ and $\sigma' > 0$ on $\sR$. For any sample $S = (x, y) \in \sR\backslash\{0\} \times \sR$ and any $\vtheta_0 = (w_0, a_0)$, the trajectory of GF for $L(\cdot, S)$ has a long-term limit $\vtheta_0^* \in \fM_S$.
\end{lemma}
\begin{remark}
This lemma is also used to construct concrete examples using Construction Procedure \ref{CP1 partB}. Moreover, the same result holds for $\sigma < 0$ and $\sigma' < 0$, because $L(\vtheta) = |a \sigma(wx) - y|^2 = |a (-\sigma)(wx) - (-y)|^2$.
\end{remark}
\begin{proof}
See the proof of Lemma \ref{limit lemma appendix} in Appendix.
\end{proof}
Using Lemma \ref{limit lemma}, we can derive the following result.
\begin{lemma}\label{vanishing nabla G}
Let $\sigma: \sR \to \sR^+$ be differentiable and strictly increasing. Then
\begin{itemize}
\item [(a)] Suppose that $M = 2$. If the implicit regularization for $L$ is characterized by a data-independent function $G \in C^1$ in the weak sense, then there are some $\vtheta_0, \vtheta_0^* \in \sR^2$ such that $\nabla G(\cdot, \vtheta_0) |_{\vtheta_0^*} = 0$, where the derivatives are taken with respect to the first entry of $G$.
\item [(b)] The result in (a) also holds for general $M \geq 2$.
\end{itemize}
\end{lemma}
\begin{proof}
See the proof of Lemma \ref{vanishing nabla G appendix} in Appendix.
\end{proof}
\subsection{Main Theorems} \label{main T}
In this subsection, we summarize the results based on the Overlapping Recipes above. Complete proofs of the results are given in the appendix. Let
\begin{equation}
\fG_M = \{ G \in C^1 (\sR^M \times \sR^M): G(\vp,\vq) = G(\vp-\vq, 0) \}.
\end{equation}
\begin{theorem}\label{theorem 1}
Based on the Two-point Overlapping Recipe, we have
\begin{itemize}
\item [(a)] For any $k \in \sN$, we can construct a $\sigma \in C^k$ following Section \ref{CP1 partA}, such that the implicit regularization for $L$ cannot be characterized by any data-independent function $G: \sR^M \times \sR^M \to \sR$.
\item [(b)] Following Section \ref{CP1 partB}, for any $k \in \sN$ we can find a $\sigma \in C^k$ such that if the implicit regularization for $L$ is characterized by a data-independent function $G \in C^1(\sR^M \times \sR^M)$ in the weak sense, then $G(\cdot, \vtheta_0)$ is constant on an open set of $\sR^M$ for some $\vtheta_0 \in \sR^M$.
\item [(c)] Following Section \ref{CP1 partB}, for any $k \in \sN$ we can find a $\sigma \in C^k$ having the property that if the implicit regularization for $L$ is characterized by a data-independent function $G \in \fG_M$ in the weak sense, then $G$ is constant.
\end{itemize}
\end{theorem}
\begin{proof}
See the proofs of Theorems \ref{theorem 1 (a) appendix}, \ref{theorem 1 (b) appendix} and \ref{theorem 1 (c) appendix} in Appendix.
\end{proof}
\begin{theorem}\label{theorem 2}
Let $\sigma: \sR \to \sR^+$ be differentiable and strictly increasing. Then
\begin{itemize}
\item [(a)] The implicit regularization for $L$ cannot be characterized by any strongly convex data-independent function $G \in C^1(\sR^M \times \sR^M)$.
\item [(b)] If the implicit regularization for $L$ is characterized by a data-independent function $G \in \fG_M$ in the weak sense, then $G(\cdot, \vtheta)$ is constant on a line in $\sR^M$ for any given $\vtheta$.
\end{itemize}
\end{theorem}
\begin{proof}
See the proof of Theorem \ref{theorem 2 appendix} in Appendix.
\end{proof}
\section{Conclusions and Discussion} \label{conclusion and discussion}
In this work, we provide mathematical definitions of regularization, implicit regularization and explicit regularization. We specify two levels of characterization of implicit regularization using a data-independent function $G$, i.e., (full) characterization and characterization in the weak sense. We propose two general dynamical mechanisms, i.e., the Two-point and One-point Overlapping Mechanisms, by which the implicit regularization is difficult or even impossible to characterize by data-independent functions. Specifically, we give concrete one-hidden-neuron NNs with sigmoid or Softplus activation which realize these mechanisms for illustration. These examples further inspire us to provide general overlapping recipes that produce rich classes of one-hidden-neuron networks realizing these two mechanisms, respectively. We show that our Two-point Overlapping Recipe depends on the global properties of activation functions.
One advantage of our recipes is that they serve as general guidelines or construction principles, following which rich classes of common one-hidden-neuron NNs can be obtained. In comparison, each existing example considers a more specific case. The generality of our recipes suggests that implicit regularization that can hardly, or even cannot, be characterized by data-independent functions in a certain sense is common.
Overall, we give complete definitions of regularization and provide insightful overlapping recipes and examples for the further study of the implicit regularization for $L$ subject to neural networks. By our recipes, we believe that, in general, we can only obtain partial information about an implicit regularization by looking at the values of a data-independent function defined on parameter space. Further studies should be conducted to mathematically determine the details of such partial information. Besides, one may alternatively look for meaningful\footnote{One trivial data-dependent characterization is to define $G(\vtheta_0,\vtheta,S) := \|\vtheta-\vtheta^*(\vtheta_0,S)\|_2^2$, where $S$ is the training data and $\vtheta^*(\vtheta_0,S)$ is the long-term limit of the trajectory of GF for $L(\cdot, S)$ starting at $\vtheta_0$, provided that it exists. See also \citet{GVardi} for other trivial forms.} data-dependent functions to characterize an implicit regularization. Since the non-equivalence between implicit and explicit regularization seems to depend on the global properties of an activation function, one may also consider characterizing the gradient trajectory by looking at the values of more than one function.
\section*{Acknowledgments}
This work is sponsored by the National Key R\&D Program of China Grant No. 2019YFA0709503 (Z. X.), the Shanghai Sailing Program, the Natural Science Foundation of Shanghai Grant No. 20ZR1429000 (Z. X.), the National Natural Science Foundation of China Grant No. 62002221 (Z. X.), the National Natural Science Foundation of China Grant No. 12101401 (T. L.), the National Natural Science Foundation of China Grant No. 12101402 (Y. Z.), Shanghai Municipal of Science and Technology Project Grant No. 20JC1419500 (Y.Z.), Shanghai Municipal of Science and Technology Major Project No. 2021SHZDZX0102, and the HPC of School of Mathematical Sciences and the Student Innovation Center at Shanghai Jiao Tong University.
Signalized intersections are major sources of traffic delay and collision within modern transportation systems. The measures to improve the operational efficiency of a signalized intersection can be grouped into four categories:
\begin{enumerate*}[label=\arabic*)]
\item Optimization of Signal Timing and Phase,
\item Conversion to a grade-separated interchange,
\item Reconfiguration to alternative intersection designs (AIDs), and
\item Adaptation of connected and automated vehicle (CAV) technology.
\end{enumerate*}
The traditional approach via signal optimization is no longer able to considerably alleviate congestion at signalized intersections in saturated conditions \cite{dhatrak2010performance}. Grade separation tends to incur a significant amount of infrastructure investment, which is difficult to justify economically under most circumstances. AIDs have the potential to improve the efficiency and safety of an intersection by strategically eliminating or changing the nature of the intersection conflict points.
While the adoption of AIDs exhibits an increasing trend in the U.S. as displayed in Fig. \ref{fig:AIDLocation}, additional research for AID is still needed. The most common AIDs include the diverging diamond interchange (DDI), the median U-turn intersection (MUT), the displaced left-turn intersection (DLT), and roundabout (RDT).
\begin{figure} [h]
\centering
\frame{\includegraphics[width=0.95\columnwidth]{aidLocation.eps}}
\caption{AID locations in contiguous U.S. (data source \cite{InstitueforTransportationResearchandEducation})}
\label{fig:AIDLocation}
\end{figure}
\textcolor{black}{The evolutionary role of the CAV technology in mobility, safety, and driver convenience has been discussed extensively in the past decades. At the same time, the adaptation of AIDs has been growing steadily and their benefits have gained recognition. However, the joint benefits of implementing CAV and AID have been seldom discussed. The Volpe National Transportation Systems Center estimated that it may take 25-30 years for CAVs to reach a 95\% market penetration rate (MPR), even with a federal mandatory installation of DSRC devices on new light vehicles in the U.S. \cite{volpe2008vehicle}. }
\textcolor{black}{
In light of the aforementioned lead time, hybrid solutions may be a logical step for solving the pressing transportation issues.}
In this paper, we evaluate the potential benefits brought by CAV, AID, and the combination of both. We also quantify the influence of the driver's confusion on a restricted crossing U-turn intersection (RCUT). Such driver's confusion, caused by the unconventional geometric design, is expected to be eliminated by CAV technology.
\section{Related Work}
\label{sect:Literature}
\subsubsection{Effectiveness of AIDs}
The majority of the research demonstrated the superior performance of AIDs over their conventional counterparts under various volume scenarios, for instance, heavy left-turn traffic, unbalanced split among intersection approaches, high overall volume, etc. Such scenarios can reveal the inadequacy of a conventional intersection. A diverging diamond interchange (DDI) outperforms a conventional diamond interchange (CDI) under high traffic volume with left-turn demand exceeding 50\% of the total demand \cite{dhatrak2010performance}. When designed properly, the DDI can reduce total intersection delay by 60\% and the total number of stops by 50\% \cite{Chlewicki2011}. A signal optimization model for DDI was developed in \cite{yang2014development}, in which the common cycle length and green split for the two upstream crossover intersections were determined by taking into account the adjacent conventional intersections.
The displaced left-turn (DLT) intersection can potentially reduce average intersection delay in most traffic demand scenarios. A before-and-after study for the DLT at Baton Rouge, LA showed that the reductions in total crashes and fatalities were 24\% and 19\%, respectively. The simulation also demonstrated a 20\% to 50\% increase in throughput compared to a conventional intersection \cite{hughes2010alternative}. The reduction in total crashes for a median U-turn (MUT) intersection ranges from 20\% to 50\%, as shown in the studies conducted in \cite{scheuer1996evaluation,castronovo1995operational}.
\subsubsection{Effectiveness of CAV}
A CAV-based application on a real-world signalized intersection was studied using Vissim in \cite{Zhong2017a}. The start-up lost time was assumed to be zero owing to V2X communication. Additionally, all the CAVs within a platoon operated synchronously upon the commencement of a green phase. Without changing the existing signal plan, the average stop delay was reduced by 17\% when the market penetration rate (MPR) of CAV reached 70\%. Le Vine et al. \cite{le2016automated} studied the queue discharging operation of CAVs with the assured-clear-distance-ahead principle by using a deterministic simulation model. In contrast to \cite{Zhong2017a}, they observed only marginal improvement in intersection throughput due to the synchronous start-up movement. However, they found that the processing time for a 10-vehicle queue was reduced by 25\% with full CAVs, compared to that for the same number of human-driven vehicles (HVs).
Realizing the potentially long path to full vehicle automation, researchers also emphasized possible cooperative schemes between CAVs and HVs by strategically considering the following HVs in intersection management \cite{le2016automated}.
A bi-level optimal intersection control algorithm was proposed in \cite{yang2016isolated}. The algorithm performed trajectory design for CAVs as well as the prediction for HVs based on real-time CAV data. The prediction of the trajectory of HVs was based on Newell’s car following model and the positional information of CAVs. The baseline used for comparison was an actuated signal control algorithm under a range of traffic demand between 1,000 and 2,000 vehicles per hour (vph).
\subsubsection{Driver's Confusion}
Unfamiliar urban intersections pose high cognitive demand on drivers, who are prone to make unexpected maneuvers, including hesitation, abrupt stops, deviation from the planned path, and sudden aggressive maneuvers \cite{autey2013operational, sayed2006upstream}. The driver's confusion was mentioned in most of the AID studies as a potential drawback. As observed in practice, the off-ramp right-turning movements from the freeway in DDIs are often signalized due to the safety concern for unfamiliar drivers, who may misidentify traffic on the opposite side of the roadway passing through a DDI interchange \cite{chilukuri2011diverging}. Some believe that the reduction in delay and travel time would be discounted after accounting for the driver's confusion \cite{Reid2001}.
A driving simulator provides a safe virtual environment for human subjects to experience a wide variety of scenarios, including those for investigating the driver's confusion at AIDs. In \cite{Bared2007}, 74 drivers within the Washington D.C. area were recruited for an experiment that aimed to investigate wrong-way violations, navigation errors, red-light violations, and driving speed through the DDI. In \cite{Claros2017}, it was found that wrong-way crashes inside the crossroad between ramp terminals accounted for 4.8\% of the fatal and injury crashes occurring at the DDI.
The CAV technology could be an excellent complement to AIDs. V2X connectivity is able to provide geometry information to help unfamiliar drivers navigate through AIDs. Moreover, automated driver assistance systems could, when necessary, intervene in the erroneous movements resulting from the driver's confusion. Hence, the potential aid gained from CAV technology could improve the performance of AIDs by abating or even eliminating the concerns about the driver's confusion.
\section{Experiment}
\label{sect:framework}
The primary benefits of the introduction of CAV to AIDs are the enhanced driving performance due to automation and the connectivity with the signal controller. In other words, CAVs can closely follow their predecessors and have neither driver's confusion at AIDs nor start-up lost time. We first demonstrate the improvement of AIDs with various penetration levels of CAVs for a diverging diamond interchange (DDI) and a restricted crossing U-turn intersection (RCUT). Then a proof-of-concept simulation for the impact of driver's confusion is conducted.
\textcolor{black}{Each CAV is assumed to have SAE Level 3 automation. The Enhanced Intelligent Driver Model (EIDM), developed by Kesting et al. \cite{Kesting2010} and expressed in (\ref{eq:eidm}), (\ref{eq: minDistCal}), and (\ref{eq: cahCal}), is adapted for longitudinal control, whereas the human drivers are responsible for the lateral control, which is based on the Wiedemann model \cite{Wiedemann1974, wiedemann1991modelling}. }
\begin{equation}
\ddot{x}=\begin{cases}
a\left[1-\left(\frac{\dot{x}}{\dot{x}_{des}}\right)^{\delta }- \left(\frac{s^{*}(\dot{x}, \dot{x}_{lead})}{s}\right)^{2}\right] & \text{if } \ddot{x}_{IDM} \geq \ddot{x}_{CAH} \\
(1-c)\ddot{x}_{IDM} + c\left[\ddot{x}_{CAH} + b \tanh \left( \frac{\ddot{x}_{IDM} - \ddot{x}_{CAH}}{b}\right)\right] & \text{otherwise}
\end{cases}
\label{eq:eidm}
\end{equation}
\begin{equation}
s^{*}(\dot{x}, \dot{x}_{lead}) = s_{0} + \dot{x}T + \frac{\dot{x}(\dot{x} - \dot{x}_{lead})}{2\sqrt{ab}}
\label{eq: minDistCal}
\end{equation}
\begin{equation}
\ddot{x}_{CAH}=
\begin{cases}
\frac{\dot{x}^{2} \min(\ddot{x}_{lead}, \ddot{x})}{\dot{x}_{lead}^{2}-2s \min(\ddot{x}_{lead}, \ddot{x})} & \text{if } \dot{x}_{lead} (\dot{x} - \dot{x}_{lead}) \leq -2s \min(\ddot{x}_{lead}, \ddot{x}) \\
\min(\ddot{x}_{lead}, \ddot{x}) - \frac{(\dot{x}-\dot{x}_{lead})^{2} \Theta (\dot{x}- \dot{x}_{lead})}{2s} & \text{otherwise}
\end{cases}
\label{eq: cahCal}
\end{equation}
\textcolor{black}{where $a$ is the maximum acceleration; $b$ is the desired deceleration; $c$ is the coolness factor; $\delta$ is the free acceleration exponent; $\dot{x}$ is the current speed of the subject vehicle; $\dot{x}_{des}$ is the desired speed; $\dot{x}_{lead}$ is the speed of the lead vehicle; $s$ is the bumper-to-bumper gap to the lead vehicle; $s_{0}$ is the minimal distance; $\ddot{x}$ is the acceleration of the subject vehicle; $\ddot{x}_{lead}$ is the acceleration of the lead vehicle; $\ddot{x}_{IDM}$ is the acceleration calculated by the original IDM model \cite{Treiber2000}; $T$ is the desired time gap; $\ddot{x}_{CAH}$ is the acceleration calculated by the CAH component; and $\Theta$ is the Heaviside step function. The model parameters used are listed in TABLE \ref{table: parameters}}.
\begin{table}[!ht]
\centering
\caption{\textcolor{black}{CAV Longitudinal Control Parameters}}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{cccccccccc}
\hline
Parameter & $T$ & $s_{0}$ & $a$ & $b$ & $c$ & $\delta$ & $\dot{x}_{des}$ \\ \hline
value & 0.9 s & 1 $m$ & 2 $m/s^{2}$ & 2$m/s^{2}$ & 0.99 & 4 & 105 $km/h$ \\
\hline
\end{tabular}
}
\label{table: parameters}
\end{table}
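For concreteness, the following Python sketch (ours, not code from Vissim or its API) evaluates the EIDM acceleration with the parameters of TABLE \ref{table: parameters}; following the standard EIDM convention, the leader's acceleration entering the CAH term is capped at the maximum acceleration $a$, and the gap $s$ is passed in explicitly:
\begin{verbatim}
import math

# EIDM parameters from TABLE I (desired speed converted to m/s).
A, B, C = 2.0, 2.0, 0.99      # max accel, desired decel, coolness factor
DELTA, T, S0 = 4.0, 0.9, 1.0  # accel exponent, time gap (s), min dist (m)
V_DES = 105.0 / 3.6           # desired speed (m/s)

def s_star(v, v_lead):
    # Desired minimum gap s* of the IDM.
    return S0 + v * T + v * (v - v_lead) / (2.0 * math.sqrt(A * B))

def a_idm(v, v_lead, s):
    # Original IDM acceleration; s is the gap to the leader.
    return A * (1.0 - (v / V_DES) ** DELTA - (s_star(v, v_lead) / s) ** 2)

def a_cah(v, v_lead, a_lead, s):
    # Constant-acceleration heuristic (CAH) term.
    a_eff = min(a_lead, A)
    if v_lead * (v - v_lead) <= -2.0 * s * a_eff:
        return v * v * a_eff / (v_lead * v_lead - 2.0 * s * a_eff)
    heaviside = 1.0 if v > v_lead else 0.0
    return a_eff - (v - v_lead) ** 2 * heaviside / (2.0 * s)

def a_eidm(v, v_lead, a_lead, s):
    # Enhanced IDM: fall back on the CAH blend when IDM over-brakes.
    ai, ac = a_idm(v, v_lead, s), a_cah(v, v_lead, a_lead, s)
    if ai >= ac:
        return ai
    return (1.0 - C) * ai + C * (ac + B * math.tanh((ai - ac) / B))

# Example: a CAV at 20 m/s, 30 m behind a 15 m/s leader braking at -1 m/s^2.
print(a_eidm(20.0, 15.0, -1.0, 30.0))
\end{verbatim}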
\textcolor{black}{
The benefits of AIDs and CAV are of a complementary nature, as exhibited in TABLE \ref{table:beneComp}. The primary benefit of CAV is the short following headway, which plays a crucial role in improving roadway capacity. Additionally, the elimination of start-up lost time (the time drivers take to react and accelerate when a signal turns green from red) is also feasible owing to vehicle-to-infrastructure (V2I) communication. The start-up lost time for HVs is set as 2 s. The effectiveness of the synchronized start has not been substantiated by previous research: some research found significant benefits \cite{Zhong2017a}, while other studies did not \cite{le2016automated}.
Therefore, the first two benefits of CAV (close following headway and no start-up lost time) are implemented in the simulation. The simulation is conducted in two settings. First, we evaluate the overall intersection performance. Then we shift the study focus to the region where the driver's confusion could occur in order to assess its impact.
}
\begin{table}[h]
\centering
\caption{\textcolor{black}{Benefits of CAV and AID}}
\begin{tabular}{l|ll}
\hline
Benefit & AID & CAV \\ \hline
Intersection conflict point reduction & \checkmark & \\
Signal phase reduction & \checkmark & \\
Traffic movement streamlining & \checkmark &\\
Close following headway & & \checkmark \\
Start-up lost time elimination & &\checkmark \\
Synchronously discharge & & \checkmark \\
Driver's confusion intervention & & \checkmark \\
\hline
\end{tabular}
\label{table:beneComp}
\end{table}
The PTV Vissim, a microscopic traffic simulation, and its external driver model application programming interface (API) are used to develop the simulation network. We have constructed two AIDs: a real-world DDI (Fig. \ref{fig:geoUAID}(a)) and a 1.61-mile, three-lane RCUT intersection Fig. \ref{fig:geoUAID}(b).
\begin{figure}[h]
\begin{minipage}[h]{0.8\columnwidth}
\centering
\subfloat[DDI Network]{\includegraphics[scale=0.26]{geoDDI.eps}}
\end{minipage}\\
\begin{minipage}[h]{0.8\columnwidth}
\centering
\subfloat[RCUT Netowrk]{\includegraphics[scale=0.212]{geoRCUT.eps}}
\end{minipage}
\caption{Configurations of selected DDI and RCUT }
\label{fig:geoUAID}
\end{figure}
The DDI is located at the intersection of State Highway 72 (DE-72) and US Highway 13 (US-13). It was converted from a conventional diamond interchange in early 2016 and opened to traffic in late 2016 \cite{deDDI}. Four settings for DDI are simulated, as shown in TABLE \ref{table:simCase}.
\begin{table}[h]
\centering
\caption{Simulation Cases for DDI}
\begin{tabular}{l|llll}
\hline
Case & CDI & DDI & AV & MPR \\ \hline
Base-CDI & \checkmark & & & 0\% \\
Base-DDI & & \checkmark & & 0\%\\
CAV-CDI & \checkmark & & \checkmark & $10\text{-}100$\% \\
CAV-DDI & & \checkmark & \checkmark & $10\text{-}100$\% \\ \hline
\end{tabular}
\label{table:simCase}
\end{table}
The arterial demand is assumed to be 3,000 vph for both the westbound and eastbound directions. The traffic volume for either of the on-ramps is 400 vph. A CDI network is built for the comparison between a CDI and a DDI. Signalization is only implemented at the two crossover locations in the DDI. Each through movement has a 55-s green phase in each 120-s signal cycle. For the CDI, the phase timings are set as 73 s, 17 s, and 18 s for the through, left-turn to the on-ramp, and left-turn from the off-ramp movements, respectively. The speed limit is 50 mph for both networks.
For the RCUT, only the westbound direction is analyzed. The distance between the minor street and the diverging point of the median U-turn is approximately 1,300 ft., larger than the 600-ft. minimal design requirement set forth by AASHTO \cite{hancock2013policy} for RCUT. The mainline demand from the westbound direction is 5,000 vph and the demand from the southbound minor street is 400 vph.
For each level of MPR, ten replications of the simulation are conducted to factor in the variability of the simulation. Each replication runs for 3,900 s, with 300 s as the warm-up time to load the network with traffic. The simulation resolution is set as 10 Hz. For studying the driver's confusion, 30 replications for each level of confused drivers are conducted to obtain additional samples for the ANOVA test. The data collection is performed every 5 min.
\section{Results \& Discussion}
\label{sect:result}
\subsection{Impact of CAV}
The network throughput of both the DDI and the CDI is shown in Fig. \ref{fig: netTPDDI}. The vertical bar associated with each marker represents the size of the 90\% confidence interval obtained with bootstrapping \cite{haukoos2005advanced}, a statistical resampling technique. The throughput of the network increased to 5,350 vph in the DDI case from the 4,400 vph observed in the CDI case. The standard deviation of the throughput in the CDI case is greater than that of the DDI.
With CAVs in the network, the overall trend in throughput for either the DDI or the CDI is increasing, although there are cases of slight decreases (i.e., at 50\% and 60\% MPR in the CDI case). Furthermore, at the same level of MPR, the observations in the DDI exhibit a narrower 90\% confidence interval, an indication of smaller standard deviation.
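The interval estimation can be sketched as follows (Python; the percentile-bootstrap variant shown here and the sample values are illustrative assumptions, not output of the simulation tooling):
\begin{verbatim}
import numpy as np

def bootstrap_ci(samples, level=0.90, n_boot=10000, seed=0):
    # Percentile-bootstrap confidence interval for the mean of the
    # per-replication observations.
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    means = np.array([rng.choice(samples, samples.size).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [100 * (1 - level) / 2,
                                   100 * (1 + level) / 2])
    return lo, hi

# Hypothetical per-replication throughput values (vph), illustration only.
throughput = [5310, 5355, 5342, 5368, 5329, 5351, 5337, 5360, 5346, 5352]
print(bootstrap_ci(throughput))
\end{verbatim}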
\begin{figure} [h]
\centering
\frame{\includegraphics[width=0.9\columnwidth]{TP.eps}}
\caption{Network throughput}
\label{fig: netTPDDI}
\end{figure}
The average delay for each vehicle is plotted in Fig. \ref{fig: netAvgDelay}. Similar to the throughput, the geometric configuration of the interchange greatly contributed to the reduction of the average delay. There is a clear separation (i.e., about 40 s of delay per vehicle) between the observations of the DDI and those of the CDI. Again, the delay observed in the DDI not only has a lower mean value, but also a smaller standard deviation, compared to the CDI case. However, the delay only marginally decreases as the MPR increases.
Both Fig. \ref{fig: netTPDDI} and Fig. \ref{fig: netAvgDelay} jointly indicate that the short following distance and the zero start-up lost time alone do not significantly increase the performance of the signalized interchange.
The start-up lost time saving is dictated by the likelihood of a CAV being the first vehicle at the stop line during a red phase. Even when zero start-up lost time can be taken advantage of, the benefits from it would still be limited. For example, for a 120-s signal cycle, such an advantage is possible at most 30 times per lane within an hour.
On the other hand, by reducing the number of signal phases and separating conflict points, the network performance can be improved significantly.
Therefore, AIDs could instead play more significant roles in improving the efficiency of a signalized intersection than CAV in terms of mobility.
\begin{figure} [h]
\centering
\frame{\includegraphics[width=0.9\columnwidth]{avgDelay.eps}}
\caption{Average delay}
\label{fig: netAvgDelay}
\end{figure}
When it comes to RCUT, the flow-speed observations in three locations (diverging, upstream, and downstream) are shown in Fig. \ref{fig: fsRCUT}. In all three locations, the flow-speed curve of CAV systematically shifts to the higher flow rate region at the right side of the chart. The carrying capacity for the CAV case reaches 2,100 vph per lane.
\begin{figure} [h]
\centering
\includegraphics[width=\columnwidth]{speedFlowRCUT.eps}
\caption{Flow-speed curve observed at the diverging area for RCUT with full CAV penetration}
\label{fig: fsRCUT}
\end{figure}
\subsection{Impact of Driver's Confusion}
The corridor-level impact of the driver's confusion has not yet been taken into account in previous studies. We consider the following confusion-induced behaviors: 1) a sudden slowdown prior to the AID ramp, and 2) an abrupt lane change at the last minute. The area of each AID that could most likely create confusion for drivers is identified in red in Fig. \ref{fig:geoUAID} based on the geometric design of the networks.
In the RCUT, it is the U-turn pocket lane in the diverging area, which accommodates U- and left-turn traffic. The route decision point is set closer to the U-turn pocket lane to induce the aggressive lane changes likely to be made by unfamiliar drivers trying to reach the U-turn lane.
For the DDI, it is the signalized crossover intersections on the arterial. A reduction in desired speed is set for the unfamiliar drivers to mimic the slowdown behavior due to confusion.
The percentage of unfamiliar drivers is set from 0\% to 20\% with a 5\% increment. For each scenario, 30 replications are run. Point (road section) and network-wide performance data are collected every 5 min. The shockwave created by the driver's confusion is illustrated in Fig. \ref{fig: impactEvaluationMethod}, where each line represents the trajectory of one vehicle from the simulation with 10\% unfamiliar drivers for the RCUT. Red trajectory lines are unfamiliar drivers, whereas the cyan lines represent commuter drivers who are familiar with the RCUT. As seen, the sudden slowdown due to the driver's confusion creates a shockwave that propagates upstream, affecting the following vehicles. On the right side of Fig. \ref{fig: impactEvaluationMethod}, the traffic trajectories indicate a free-flow condition in the absence of slowdowns or abrupt lane changes induced by the driver's confusion. As demonstrated, too much driver's confusion could easily disrupt the progression of the traffic, not to mention the safety hazard it may create.
\begin{figure} [h]
\centering
\includegraphics[width=\columnwidth]{trajConfusion.eps}
\caption{Impact of driver's confusion}
\label{fig: impactEvaluationMethod}
\end{figure}
The speed-flow diagram of the diverging area of the RCUT network is shown in Fig. \ref{fig: fsConfusion}. The overall speed of the traffic flow with confused drivers is lower than in the base case. This is due to the temporary traffic obstruction caused by the unexpected behaviors of the confused drivers. The impacted vehicles at the end of the diverging area, where the data are collected, have not regained the prevailing speed of the roadway. As a result, the data sample points shift downward to the range between 30 mph and 40 mph in the presence of confused drivers.
\begin{figure} [h]
\centering
\includegraphics[width=\columnwidth]{speedFlowConfusion.eps}
\caption{Flow-speed curve observed at the diverging area for RCUT for driver’s confusion}
\label{fig: fsConfusion}
\end{figure}
The average vehicle delay for the entire network is collected. An ANOVA test with the post-hoc Tukey method \cite{pairwiseAnova} is conducted to assess the statistical difference among the five tested scenarios at the 95\% confidence level. The ANOVA test result (TABLE \ref{table:TurkeyTestRCUT}) shows that the pairwise differences among the five levels of confused drivers are statistically significant.
Similarly, the ANOVA test for average vehicle delay in the DDI exhibits an increasing pattern, and the delays are statistically different at the 95\% confidence level, as shown in TABLE \ref{table:TurkeyTestDDI}.
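A sketch of this analysis pipeline is given below (Python, using SciPy and statsmodels; the per-observation noise level is a made-up assumption, since only the group means and sample sizes are reported here):
\begin{verbatim}
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in data: group means from TABLE II, N = 360 per group,
# and an assumed noise level of 3.0 s/veh.
rng = np.random.default_rng(0)
rates = ["0%", "5%", "10%", "15%", "20%"]
group_means = [12.2, 28.65, 39.36, 43.45, 48.79]
groups = [rng.normal(m, 3.0, size=360) for m in group_means]

print(f_oneway(*groups))                  # one-way ANOVA F-test

delays = np.concatenate(groups)           # observations, group by group
labels = np.repeat(rates, 360)            # matching group labels
print(pairwise_tukeyhsd(delays, labels, alpha=0.05))  # post-hoc Tukey HSD
\end{verbatim}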
\begin{table}[!]
\centering
\caption{ANOVA Test for Average Vehicle Delay in RCUT}
\begin{tabular}{llclllll}
\hline
\textbf{\begin{tabular}[c]{@{}l@{}}Confused \\ Driver Rate\end{tabular}} & \textbf{N} & \textbf{Delay, s/veh} & \multicolumn{5}{l}{\textbf{Grouping}} \\
\hline
\textbf{0\%} & 360 & 12.2 & A & & & & \\
\textbf{5\%} & 360 & 28.65 & & B & & & \\
\textbf{10\%} & 360 & 39.36 & & & C & & \\
\textbf{15\%} & 360 & 43.45 & & & & D & \\
\textbf{20\%} & 360 & 48.79 & & & & & E \\ \hline
\end{tabular}
\label{table:TurkeyTestRCUT}
\end{table}
\begin{table}[!]
\centering
\caption{ANOVA Test for Average Vehicle Delay in DDI}
\begin{tabular}{llclllll}
\hline
\textbf{\begin{tabular}[c]{@{}l@{}}Confused \\ Driver Rate\end{tabular}} & \textbf{N} & \textbf{Delay, s/veh} & \multicolumn{5}{l}{\textbf{Grouping}} \\
\hline
\textbf{0\%} & 360 & 81.42 & A & & & & \\
\textbf{5\%} & 360 & 82.44 & & B & & & \\
\textbf{10\%} & 360 & 83.54 & & & C & & \\
\textbf{15\%} & 360 & 84.41 & & & & D & \\
\textbf{20\%} & 360 & 85.78 & & & & & E \\ \hline
\end{tabular}
\label{table:TurkeyTestDDI}
\end{table}
\section{Conclusion}
\label{sect:conclusion}
Alternative intersection designs have attracted an increasing amount of attention as a promising measure to improve the performance of an intersection, as evidenced by field deployments and simulation studies. The joint deployment of alternative intersection designs and CAV is studied in this paper via microscopic traffic simulation. According to the results on mobility, only a 7\% increase in throughput is observed under full CAV market penetration, compared to the 20\% gain in throughput with only the conversion from a conventional diamond interchange to a diverging diamond interchange. Note that the benefits of CAV could be further optimized in operation, such as with eco-driving approach control, adaptive signal control, or ultimately signal-free autonomous intersection management. These will be part of future study.
The impact of potential driver's confusion is quantified by analyzing the traffic flow and vehicle trajectory data. It is found that the influence is localized; hence, a limited impact on performance at the network level is observed. Future study should focus on the safety aspect at a more granular level (e.g., the individual vehicle level). Additionally, explicit consideration of the increased safety brought by CAV should be integrated into subsequent studies.
Lastly, more sophisticated scenarios, including signal plans, demand composition, CAV applications, etc., should be included to expand the comparison.
\appendices
\section{\textcolor{black}{List of Abbreviations}}
\begin{tabular}{p{0.8in}|p{2.2in}}
\hline
\textbf{Abbreviation} & \textbf{Definition} \\ \hline
AID & alternative intersection design \\ \hline
ANOVA & analysis of variance \\ \hline
API & application programming interface \\ \hline
DDI & diverging diamond interchange\\ \hline
CDI & conventional diamond interchange \\ \hline
RDT & roundabout \\ \hline
CAV & connected and automated vehicle \\ \hline
MUT & median U-turn intersection\\ \hline
MPR & market penetration rate \\ \hline
DLT & displaced left-turn intersection\\ \hline
RCUT & restricted crossing U-turn intersection \\ \hline
V2X & vehicle-to-anything\\ \hline
V2I & vehicle-to-infrastructure \\ \hline
SAE & Society of Automotive Engineers \\ \hline
HV & human-driven vehicle\\ \hline
AASHTO & American Association of State Highway and Transportation Officials\\ \hline
\end{tabular}
\bibliographystyle{IEEEtran}
The possible existence of buckminsterfullerene (C$_{60}$) in astrophysical
environments has long been suggested \citep{kroto}, but only recently has
observational evidence for emission from C$_{60}$ in the gas phase been
forthcoming. Gas phase C$_{60}$ has now been detected in
the environments of young planetary nebulae \citep{cami,zhang}, RCB stars
\citep*{garcia} and in the reflection nebulae NGC\,2023 and NGC\,7023 that are
illuminated by B~stars \citep{sellgren}. In the case of the low-excitation
planetary nebula Tc~1, \cite{cami} argue that the C$_{60}$ (and
C$_{70}$) molecules are attached to the surfaces of cooler carbonaceous grains.
Many of the objects displaying C$_{60}$
also have strong ``Unidentified Infrared'' (UIR) features.
The formation of C$_{60}$ and other fullerenes in terrestrial laboratories
usually requires a hydrogen-deficient environment, and this seems to be
consistent with their presence in the environments of (evolved) H-deficient
carbon stars \citep{cami,garcia}. However the detection of C$_{60}$ in the
reflection nebulae NGC\,2023 and NGC\,7023 \citep{sellgren} indicates that
fullerene formation is possible in young (H-rich) environments. UIR features, as
well as ``Extended Red Emission'' attributed, among other hypotheses, to
small -- possibly ionised -- hydrocarbon molecules, are seen in the environment
of NGC\,7023 \citep{berne,sellgren}.
We report here the possible detection of solid-phase C$_{60}$ in the
environment of the peculiar binary \XX, observed with the \sirtf\
\citep{wernera,gehrz}.
\section{The \XX\ binary}
\label{binary}
\XX\ is a binary consisting of a late (M7III)
giant and an early (B0V?) star (see e.g. \citealt{dewinter, evansetal93} and
references therein; \citealt{cool} give M6-8II). It is sometimes classed as a Be
star \citep[e.g.][]{dewinter} and sometimes as a symbiotic \citep{GCVS}.
However it shows few of the common symptoms of symbiosis, such as the presence
of high excitation emission lines.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,6.)
\put(0.0,4.0){\special{psfile=SED.eps
hoffset=-60 voffset=60 hscale=32 vscale=32 angle=-90}}
\end{picture}
\caption[]{Photometry of \XX, dereddened for $E(B-V)=0.51$ \citep{evansetal93}.
Filled black squares, $BV\!RI\,J\!H\!K\!L$ \citep{evansetal93};
open blue squares, WISE \citep{WISE};
inverted green triangles, AKARI \citep{AKARI};
open blue circles, IRAS PSC;
filled green squares, ISO PHOT-P;
red triangles, \sirtf\ MIPS.
In all cases errors are smaller than the plotted points.
Curve is DUSTY fit with parameters given in text; see text for details.
\label{SED}}
\end{center}
\end{figure}
While there is photometric and spectroscopic evidence that a cool
component in the \XX\ system dominates in the red
\citep{dewinter,evansetal93,cool}, understanding the nature of the hot component has proven to be problematic. The evidence is circumstantial: there is
spectroscopic evidence for a `hot companion' in the blue, in the form of H (and
other) emission lines.
\cite*{lockwood} argued that \XX\ is heavily reddened and estimated the
extinction, $A_{\rm v}$, to be $\sim4$~mag. Spectrophotometry of \XX\ was
presented by \cite{blair}, who deduced $E(B-V)\simeq 1.08$~mag on the basis of the
H$\alpha$/H$\beta$ ratio. They noted that this is significantly less than the
value given by \citeauthor{lockwood}, unless the ratio of total-to-selective
extinction is $R\simeq3.7$, but the polarization of \XX\ is inconsistent with a
high value of $R$ \citep{evansetal93}.
Although a B0V classification is assigned to the hot component, the presence of
a massive ($\sim20$\Msun) star in the \XX\ system seems unlikely on kinematic
grounds. For a distance of $\sim2$~kpc \citep{evansetal93} it lies $\sim400$~pc
above the Galactic plane and its proper motion \citep{hipparcos} takes it
towards the plane -- highly unlikely for a B0V star.
Furthermore, the spectral
energy distribution (SED) -- from 4400\AA\ to 100\mic\ -- can be fit (bearing in
mind the variability) by a two component DUSTY \citep{dusty} model. The hot component is a B {\em subdwarf} at the
centre of a dust shell having 0.01\mic\ amorphous carbon
grains with a temperature of 800~K at the inner boundary and an optical depth of
$\sim0.001$ in the visual. The cool component is a M7III star that effectively plays no part in heating the dust (see Fig.~\ref{SED}).
The DUSTY fit assumes that the dust shell is spherically symmetric with the
B~star located at its centre, so clearly the fit has its limitations (e.g. a
disc is more likely in a binary). However, the inner boundary of the dust shell
is $\sim7.2\times10^{11}$~m from the B~star. The size of the Str\"omgren
sphere associated with the B~star exceeds this if the gas density in its
vicinity $\ltsimeq10^{13}$~m$^{-3}$.
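The scale of this requirement can be checked with a back-of-envelope
calculation. The following Python sketch (an illustration, not part of the
original analysis) inverts the Str\"omgren relation
$R_S = (3Q/4\pi n^2 \alpha_B)^{1/3}$ for the ionizing photon rate $Q$ needed
to reach the inner dust boundary; the case-B recombination coefficient
$\alpha_B \simeq 2.6\times10^{-19}$~m$^3$\,s$^{-1}$ (appropriate to
$\sim10^4$~K gas) is an assumed value.
\begin{verbatim}
# Stroemgren-sphere check (illustrative; alpha_B is an assumed value)
import math
alpha_B = 2.6e-19            # case-B recombination coefficient, m^3 s^-1
R = 7.2e11                   # inner boundary of the dust shell, m
for n_gas in (1e12, 1e13):   # gas number density, m^-3
    Q = (4.0/3.0) * math.pi * R**3 * n_gas**2 * alpha_B
    print(f"n = {n_gas:.0e} m^-3 -> Q >= {Q:.1e} photons/s")
# ~4e41 and ~4e43 s^-1: plausibly within reach of a hot B subdwarf,
# so the Stroemgren radius can exceed R for n <~ 1e13 m^-3
\end{verbatim}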
The reclassification of the hot component as a subdwarf removes the need for the
large reddening assigned by \citeauthor{lockwood} and others, and is consistent
with an interstellar reddening $E(B-V)=0.51$~mag \citep{evansetal93}. It also
has implications for the nature and evolution of the binary.
Although the cool component in \XX\ seems to be oxygen-rich -- as evidenced by
the presence of TiO and VO bands -- the 8.6\mic\ and 11.2\mic\ UIR features
reported in the IR spectrum of \XX\ by \citet{evans} are typical of carbon-rich
environments. However the usual 3.28\mic\ and 3.4\mic\ UIR features are weak.
Fig.~\ref{iso-sws} shows a spectrum of \XX\ obtained with the Short Wavelength
Spectrometer \citep[SWS;][]{degraauw} on the {\it Infrared Space Observatory}
\citep{kessler} that confirms the UIR features at 8.6\mic\ and 11.2\mic\
reported by \citet{evans}. The UIR feature at 6.25\mic\ is detected
and the non-detection of the 3.28\mic\ and 3.4\mic\ UIR features is confirmed.
The `8\mic' feature reported by \citeauthor{evans} is the long wavelength wing
of the well-known `7.7\mic' feature, affected by inadequate cancellation of the
atmosphere near the edge of the $8-13$\mic\ window.
In most stars the flux in the 3.28\mic\ UIR feature is typically comparable to
that of the `7.7' feature \citep[e.g.][]{tielens}. On this basis we would expect
the 3.28\mic\ feature in \XX\ to have a peak flux $\sim3$\,Jy. While there is
indeed evidence for a feature at $\sim3.3$\mic\ (see Fig.~\ref{iso-sws}), the
spectrum in this region is dominated by molecular absorption (e.g. CO, OH) in
the M~giant, which has a flux of $\sim 32$\,Jy at 3\mic. The apparent
absence of the 3.28\mic\ feature can presumably be attributed to the fact that
it is swamped by the emission from the M~star.
The variability of \XX\ is irregular, although the Hipparcos catalogue
\citep{hipparcos} lists it
as having a possible period of 3.52~days, and as displaying sudden dips in
luminosity. \cite{sobotka} reported that \XX\ went into a deep
(eclipse-like) minimum in 2005, the first for 37~years (see Fig.~\ref{LC}).
\cite{cool} found that the equivalent width of H$\alpha$ increased during the
minimum, indicating that the continuum around 656~nm had faded.
We note that, with a B subdwarf, most of the $V$-band light from the \XX\ system
comes from the M star, so that the eclipse in Fig.~\ref{LC} must be of the
giant, presumably by material in the vicinity of the B star. The optical depth
at $V$ at eclipse minimum is $\tau_V\simeq1.0$, far greater than that required
for the IR excess in Fig.~\ref{SED}, underlining the fact that the DUSTY
fit should not be taken too literally.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,5.)
\put(0.0,4.0){\special{psfile=iso-sws.eps
hoffset=-55 voffset=-190 hscale=40 vscale=50 angle=0}}
\end{picture}
\caption[]{ISO SWS spectrum of \XX; the 6.25\mic, 7.7\mic, 8.6\mic\ and
11.2\mic\ UIR features are identified. The apparent peak at 3.3\mic\ may be due
to the 3.3\mic\ UIR feature but the spectrum in this region is dominated by
molecular absorption in the M~giant.
\label{iso-sws}}
\end{center}
\end{figure}
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,5.)
\put(0.0,4.0){\special{psfile=LC.eps
hoffset=-60 voffset=50 hscale=30 vscale=30 angle=-90}}
\end{picture}
\caption[]{The $V$ band lightcurve of \XX\ from the {\bf All Sky Automated
Survey} (ASAS) database \citep{ASAS}. The times of the \sirtf\ IRS observations
are indicated.}
\label{LC}
\end{center}
\end{figure}
\section{Observations}
\XX\ was observed with the {\it Spitzer} Infrared Spectrograph
\citep[IRS;][]{houck} in staring mode on two occasions as \XX\ was emerging from
a deep minimum and some two years thereafter. The blue
peak-up array was used to centre the object in the IRS slits.
Observations were also obtained with the Multi-band Imaging Photometer for
{\it Spitzer} \citep[MIPS;][]{MIPS}. Spectra were obtained with both low- and
high-resolution IRS modes, covering the spectral range of 5--38\mic.
For the high-resolution modes we also obtained observations of the background;
however as we are comparing data from two epochs, the background measurement is
not critical. The spectrum was extracted from the version 12.3 processed
pipeline data product using SPICE version 2.2 \citep{spice}.
The spectra for the two epochs are shown in Fig.~\ref{IRS1}. There may be some
evidence for the 18\mic\ silicate feature, but the corresponding 9.7\mic\
feature is very weak. However the UIR features are clearly
present, as is an excess longward of $\sim15$\mic\ due to emission by
circumstellar dust (cf. Fig.~\ref{SED}). Such ``chemical dichotomy'' (i.e.
environments with a mix of C-rich and O-rich dust) is of course not uncommon
\citep[e.g.][and references therein]{clayton}.
There has clearly been a change in the infrared (IR) spectrum between 2005 and
2007. In particular H recombination lines are present in 2005 (as \XX\ was
emerging from eclipse) but were apparently weak in 2007; for example, the flux
in Hu\,$\alpha$\,12.371\mic\ was $1.61[\pm0.05]\times10^{-15}$~W~m$^{-2}$ in
2005, compared with $5.5[\pm1]\times10^{-16}$~W~m$^{-2}$ in 2007.
However, both H\,$\alpha$ and H\,$\beta$ are present in an optical spectrum of
\XX\ obtained on 2007 May 8 (within days of the 2007 IRS spectrum) by one of us
(LAH), as are a number of ``Diffuse Interstellar Bands'' (DIBs), which most
likely are of interstellar origin. These data will be presented in a future
paper (Helton et al., in preparation).
We have extracted a continuum from both spectra to highlight the UIR features;
the result is shown in Fig.~\ref{UIR1}. There was little change between 2005 and
2007, except that the 11.2\mic\ and 8.6\mic\ UIR features were significantly
stronger in 2007 (when the IR hydrogen emission lines were weak).
The central wavelengths of the UIR features in \XX\ (e.g.
$7.48\pm0.01$\mic\ in 2005, $7.79\pm0.01$\mic\ in 2007 for the `7.7' feature)
are consistent with excitation by a source with an effective temperature in
excess of $\sim10^4$~K \citep[e.g.][]{sloan,acke}, and therefore with
excitation by the B~star. The changes we see in the strengths and possibly
central wavelengths of the UIR features may be associated with changes in the
ionisation of the PAH \citep[e.g.][]{draine}, possibly as a result of changes in
the extinction in the dust shell around the B~star.
\begin{figure*}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,9.)
\put(0.0,4.0){\special{psfile=IRS1.eps
hoffset=-110 voffset=-190 hscale=60 vscale=48 angle=0}}
\end{picture}
\caption[]{\sirtf\ IRS spectrum of \XX\ in 2005 and 2007; the 2005 data have
been displaced upwards by 2~Jy for clarity. The 6.25\mic, 7.7\mic, 8.6\mic\ and
11.2\mic\ UIR and H recombination lines are identified; note the absence of H
recombination lines in 2007. The point at 24\mic\ is the MOPEX photometry. The
possible C$_{60}$ features at 7.1\mic, 17.25\mic\ and 19\mic\ in the 2007
data are indicated by triangles; see also Fig.~\ref{C60} below. \label{IRS1}}
\end{center}
\end{figure*}
Fig.~\ref{IRS1} also shows clear evidence for two broad features that are
present in 2007 but not in 2005. We have subtracted the continuum to highlight
these features (see Fig.~\ref{C60}). Features at these wavelengths are included
in the PAHFIT package \citep{pahfit} but we consider it unlikely that the two
features in \XX\ are due to emission by PAH molecules, for the following
reasons: (a)~their absence in 2005, when other UIR features were present;
(b)~the strength of the 19\mic\ feature compared with that of the 17.4\mic\ feature;
(c)~the 17.4\mic/7.7\mic\ flux ratio. The 17.4\mic\ feature was reported in
NGC\,7023 by \cite{wernerb}, who assigned it to ``aromatic hydrocarbons or
nanoparticles of unknown mineralogy''; it has subsequently been attributed to
C$_{60}$ \citep{sellgren}.
Possible identifications for these features are 17.4\mic\ and 18.9\mic\
C$_{60}$, which in the gas phase has four active vibrational
modes, at $\sim7.0\mic, 8.5\mic, 17.5\mic$ and 18.9\mic\ \citep[e.g.][]{C60b}.
Fig.~\ref{IRS1} also shows evidence for a feature at $\sim7$\mic, and
subtraction of the 2005 spectrum from the 2007 spectrum leaves a feature with
central wavelength $7.01\pm0.01$\mic\ (see Table~\ref{fluxes});
there is no evidence for the ``8.5\mic'' feature.
\begin{table*}
\begin{center}
\caption{Properties of C$_{60}$ features in \XX; wavelengths of gaseous and
solid C$_{60}$ features from Frum et al. (1991) and Kr\"atschmer et al. (1990)
respectively. Einstein coefficients $A$ from Mitzner \& Campbell (1995).}
\begin{tabular}{cccccc} \hline
$\lambda$ ($\!$\mic) & FWHM ($\!$\mic) & Flux ($10^{-15}$~W~m$^{-2}$) & Gas $\lambda$
($\!$\mic) & Solid $\lambda$ ($\!$\mic) & $A$ (s$^{-1}$) \\ \hline
$7.01\pm0.01$ & $0.25\pm0.01$ & $4.1\pm0.2$ & 7.11 & 7.00 & 151.6\\
8.5 & & $<0.2$ & 8.55 & 8.45 & 74.8 \\
$17.25\pm0.02$ & $0.72\pm0.04$ & $2.00\pm0.10$ & 17.53 & 17.33 & 14.6 \\
$18.99\pm0.01$ & $0.34\pm0.01$ & $2.00\pm0.10$ & 18.97 & 18.94 & 36.8 \\ \hline\hline
\end{tabular}
\label{fluxes}
\end{center}
\end{table*}
The ``17.4\mic'' feature we observe in \XX\ is actually at 17.25\mic,
quite different from the expected value of 17.53\mic\ for gas phase C$_{60}$
\citep{C60b}. However, {\it solid} C$_{60}$ has a feature at 17.3\mic\
\citep*{C60a,C60c}, closer to the 17.25\mic\ feature in \XX. We should
therefore consider whether the features in Fig.~\ref{C60} arise in gaseous or
solid C$_{60}$.
The flux ratios of the putative C$_{60}$ features enable an estimate of the
vibrational temperature $T_{\rm vib}$ if the C$_{60}$ is in gaseous form. Using
Einstein coefficients from \citeauthor{mitzner} (1995; included in
Table~\ref{fluxes}), values of $T_{\rm vib}\sim520\pm50$~K are obtained; however
the ``18.9\mic'' flux seems underestimated by a factor $\sim2$. A similar value
($\sim670$~K) is obtained assuming that the energy of a $\sim10$~eV photon
absorbed by a C$_{60}$ molecule is equally distributed amongst the available
vibrational modes. However, at this temperature the ``8.5\mic'' feature would
have a flux $\sim1.5\times10^{-15}$~W~m$^{-2}$, far greater than observed. We
also note that laboratory measurements on solid C$_{60}$ \citep{C60a} suggest
that the ``8.5\mic'' feature is rather weaker than the other three. Therefore on
the basis of (i)~the wavelength of the ``17.4\mic'' feature and (ii)~the
weakness of the ``8.5\mic'' feature, we conclude that the C$_{60}$ in \XX\ is
most likely in solid form; if so this is the first astrophysical detection of
solid C$_{60}$.
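The $T_{\rm vib}$ estimate quoted above can be reproduced with a minimal
numerical sketch, assuming optically thin emission from thermally populated
modes of equal degeneracy, so that the band fluxes scale as
$F_i \propto A_i (hc/\lambda_i) \exp(-hc/\lambda_i k T)$; solving the
7.01\mic/17.25\mic\ flux ratio for $T$ gives a value consistent with the one
obtained above.
\begin{verbatim}
# vibrational temperature from the 7.01/17.25 micron flux ratio
# (sketch; assumes thermal populations and equal mode degeneracies)
import math
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
lam1, lam2 = 7.01e-6, 17.25e-6     # feature centres, m (from the Table)
A1, A2 = 151.6, 14.6               # Einstein coefficients, s^-1
F1, F2 = 4.1e-15, 2.0e-15          # observed fluxes, W m^-2
prefac = (A1 / A2) * (lam2 / lam1) # flux ratio at infinite temperature
dE_over_k = (h * c / kB) * (1.0/lam1 - 1.0/lam2)
T_vib = dE_over_k / math.log(prefac * F2 / F1)
print(f"T_vib ~ {T_vib:.0f} K")    # ~480 K, cf. 520 +/- 50 K above
\end{verbatim}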
The absorption cross-section of C$_{60}$ has been measured by \cite{yagi}, from
which we estimate the Planck mean absorption cross-section per C$_{60}$ molecule
(averaged over the emission of the B~star) to be $\sim7\times10^{-21}$~m$^2$.
If (cf. Section~\ref{binary}) the B~star is situated at
$\sim7.2\times10^{11}$~m from the inner boundary of the dust shell,
the temperature of a C$_{60}$ grain of radius $a$ is
$\sim200\,(a/0.03\mic)^{1/4}$~K.
While the apparent absence of C$_{60}$ in the IRS spectrum immediately after
eclipse in 2005, and its presence in 2007, is suggestive, it is difficult to
argue that the eclipse is in any way connected with the presence of C$_{60}$ in
the spectrum, especially as it is the giant that is eclipsed: it is likely
therefore that the appearance of C$_{60}$ in 2007 is unconnected with the
eclipse of Fig.~\ref{LC}.
\section{C$_{60}$ in \XX}
We can make an estimate of the mass of C$_{60}$ using the combined flux in the
C$_{60}$ features. Assuming (cf. Section~\ref{binary}) the B~star is
situated at $\sim7.2\times10^{11}$~m from the inner boundary of the dust shell,
and using the Planck mean absorption cross-section above,
the absorbed power per C$_{60}$ particle is $\sim8.1\times10^{-18}$~W. The
emitted power \citep[assuming a distance of 2~kpc for \XX;][]{evansetal93} is
$\sim3.9\times10^{26}$~W, so $\sim4.8\times10^{43}$ C$_{60}$ particles
(i.e. $\sim2.9\times10^{-11}$\Msun), in solid form, are required.
This suggests that the number of C$_{60}$ molecules is $\sim0.03$ times the
number of PAH molecules.
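The arithmetic behind this estimate is summarized below (adopted constants
only; no quantities beyond those already quoted enter).
\begin{verbatim}
# C60 particle number and mass from the summed feature fluxes (sketch)
import math
pc = 3.086e16                          # m
d = 2.0e3 * pc                         # adopted distance, 2 kpc
F_tot = (4.1 + 2.0 + 2.0) * 1e-15      # summed C60 fluxes, W m^-2
P_emit = 4.0 * math.pi * d**2 * F_tot  # total emitted power
P_abs = 8.1e-18                        # absorbed power per particle, W
N = P_emit / P_abs
m_C60 = 720.0 * 1.661e-27              # kg (60 x 12 u)
print(f"P = {P_emit:.1e} W, N = {N:.1e}, M = {N*m_C60/1.989e30:.1e} Msun")
# -> ~3.9e26 W, ~4.8e43 particles, ~2.9e-11 Msun, as in the text
\end{verbatim}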
The detection of C$_{60}$ in a range of environments
\citep{cami,garcia,sellgren}, including both H-poor and H-rich environments,
indicates that C$_{60}$ can form in a variety of astrophysical conditions.
\citeauthor{garcia} suggest that both the UIR carrier and C$_{60}$ may form as a
result of the disintegration of hydrogenous amorphous carbon (HAC) grains.
However the fact that HAC is seen in environments \citep[e.g.
novae; cf.][]{evans2} in which C$_{60}$ is {\em not} seen indicates that there are
other factors that determine whether or not C$_{60}$ is detected.
Most of the objects in which C$_{60}$ has been reported are associated with
stars having effective temperature $T_{\rm eff}$ in the range
$\sim15\,000-30\,000$\,K, the exception being the RCB star V854~Cen ($T_{\rm
eff}\simeq6\,750$\,K). \XX\ is in the former category, while classical novae
have $T_{\rm eff}\gtsimeq50\,000$\,K at the time of dust formation.
Notwithstanding the small number of objects in which C$_{60}$ has been detected,
the data thus far may point to the fact that it is the effective temperature of
the central star that is the common factor in the detection of C$_{60}$, the
critical range being $\sim10\,000-30\,000$\,K.
In 2007 the C$_{60}$ in \XX\ seems to be present when the IR H recombination
lines are weak, and the 8.6\mic\ and 11.2\mic\ UIR features are strong. This
suggests either (a)~ that the C$_{60}$ is not a permanent feature of the \XX\
environment but is formed when conditions are favourable (either by
fragmentation of larger particles, or by chemical routes from smaller
molecules), or (b)~that C$_{60}$ is a permanent feature and that its excitation
is intermittent.
One possible scenario is that the C$_{60}$-bearing material is, as already
discussed, confined to the vicinity of the B star. The H lines arise from a
shell, also associated with the B star and possibly accreted from the giant wind;
the relative sizes of the ionised and dusty regions
depend on the gas density. The formation of C-rich dust would require the
photodissociation of wind CO by UV radiation from the B star to release C for
carbon chemistry \citep{evans}. Enhanced formation of C$_{60}$ (coincidentally
after the 2005 eclipse) would be consistent with the appearance of C$_{60}$ in
2007. Quenching of the UV radiation from the B star by the C$_{60}$-containing
dust would lead to reduced excitation of H in the shell. If this is correct then
it is likely that C$_{60}$ is formed ``bottom up'' rather than ``top down''.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,6.)
\put(0.0,4.0){\special{psfile=UIR1.eps
hoffset=-40 voffset=-160 hscale=35 vscale=30 angle=0}}
\end{picture}
\caption[]{UIR features in \XX; the expected wavelengths of the C$_{60}$
features are indicated by the triangles. The apparent ``excess'' at
$\sim7.5$\mic\ in the `7.7' UIR feature in 2005 is due to the presence of
\pion{H}{i} 6--5 and 8--6. The flux uncertainties in this wavelength range are
typically $\pm0.03$~Jy. \label{UIR1}}
\end{center}
\end{figure}
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\leavevmode
\begin{picture}(5.0,6.)
\put(0.0,4.0){\special{psfile=C60.eps
hoffset=-30 voffset=-160 hscale=32 vscale=32 angle=0}}
\end{picture}
\caption[]{Possible C$_{60}$ features in \XX. The upper horizontal lines
indicate the wavelengths of the ``17.3\mic'' and ``18.9\mic'' features in
C$_{60}$ smoke \citep{C60a}, the lower lines of the corresponding features in
gaseous C$_{60}$ \citep{C60b}. \label{C60}}
\end{center}
\end{figure}
\section{Conclusions}
We have reported the possible detection of solid-phase C$_{60}$ in the
environment of the peculiar binary \XX. Contrary to previous work we conclude
that the hot star is a B subdwarf that is surrounded by an ionised shell and a
C$_{60}$-bearing shell, most likely in the form of a disc. Variations in the
optical depth of the latter result in variations in the excitation of H lines.
We will present a detailed discussion of the \XX\ system and its environment in
a forthcoming paper.
\section*{Acknowledgments}
We thank Dr L. d'Hendecourt for helpful comments on an earlier version.
This work is based on observations made with the \sirtf, which is operated by
the Jet Propulsion Laboratory, California Institute of Technology under a
contract with NASA.
This publication makes use of data products from the Wide-field Infrared Survey
Explorer, which is a joint project of the University of California, Los Angeles,
and the Jet Propulsion Laboratory/California Institute of Technology, funded by
the National Aeronautics and Space Administration.
Based on observations with AKARI, a JAXA project with the participation of ESA.
RDG, CEW and LAH were supported by various NASA {\it Spitzer}/JPL contracts and
the United States Air Force.
SS was supported by NASA and the NSF.
\section{Introduction}
We consider the 3-D Navier-Stokes (NS) initial value problem
\begin{equation}\label{nseq0}
v_t - \nu \Delta v = -\mathcal{P} [ v \cdot \nabla v ] + f(x),\ \ v(x, 0)
= v_0 (x), \ \ x\in\mathbb{T}^3 [0,2\pi] ,\ \ t\in\mathbb{R}^+
\end{equation}
where $v$ is the fluid velocity and $\mathcal{P} = I -\nabla \Delta^{-1}
(\nabla \cdot )$ is the Hodge projection operator to the space of
divergence-free vector fields. For simplicity we assume that the forcing
$f$ is time-independent.
In Fourier space, (\ref{nseq0}) can be written as
\begin{equation}\label{nseq}
{\hat v}_t + \nu |k|^2 {\hat v} = - i k_j P_k \left [ {\hat v}_j {\hat *}
{\hat v} \right ] + {\hat f} ~~,~~{\hat v} (k, 0)= {\hat v}_0,
\end{equation}
where ${\hat v} (k, t) = \mathcal{F} \left [ v (\cdot, t) \right ] (k)$ is
the Fourier transform of the velocity, ${\hat *}$ denotes Fourier
convolution, a repeated index $j$ indicates summation over $j =1,2,3$ and
$P_k=\mathcal{F}(\mathcal{P})$ is the Fourier space representation of the
Hodge projection operator on the space of divergence-free vector fields,
given explicitly by
\begin{equation}\label{8.0}
P_k \equiv 1 - \frac{k ( k \cdot )}{|k|^2}.
\end{equation}
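The projection acts mode-by-mode and is elementary to apply numerically; the
following Python/NumPy sketch (an illustration only) projects a Fourier-space
velocity field onto divergence-free fields and verifies that
$k \cdot {\hat v} = 0$ afterwards.
\begin{verbatim}
# Fourier-space Hodge (Leray) projection P_k = I - k (k . )/|k|^2 (sketch)
import numpy as np

def leray_project(vhat, K):
    """vhat: (3,N,N,N) Fourier coefficients; K: (3,N,N,N) wavenumbers."""
    k2 = np.sum(K**2, axis=0)
    k2[k2 == 0] = 1.0              # leave the k = 0 mode untouched
    return vhat - K * (np.sum(K * vhat, axis=0) / k2)

N = 16
k1 = np.fft.fftfreq(N, d=1.0/N)    # integer wavenumbers for [0, 2*pi]^3
K = np.array(np.meshgrid(k1, k1, k1, indexing='ij'))
vhat = np.fft.fftn(np.random.randn(3, N, N, N), axes=(1, 2, 3))
w = leray_project(vhat, K)
assert np.allclose(np.sum(K * w, axis=0), 0.0, atol=1e-8)  # k . w = 0
\end{verbatim}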
We assume that $ {\hat v}_0$ and $ {\hat f} \in l^1 (\mathbb{Z}^3)$ and,
without loss of generality, that the average velocity and force in the
periodic box are zero, and hence ${\hat v} (0, t) =0={\hat f} (0)$.
Global existence of smooth solutions to the 3-D Navier-Stokes problem
remains a formidable open mathematical problem, even for zero forcing,
despite extensive research in this area. The problem is important not
only in mathematics but it has wider impact, particularly if singular
solutions exist. It is known \cite{Beale} that the singularities can only
occur if $\nabla v$ blows up. This means that near a potential blow-up
time, the relevance of NS to model fluid flow becomes questionable, since
the linear approximation in the constitutive stress-strain relationship,
the assumption of incompressibility and even the continuum hypothesis
implicit in derivation of NS become doubtful. In some physical problems
(such as the inviscid Burgers equation) the singularity of an idealized
approximation is mollified by inclusion of regularizing effects. It may be
expected that if 3-D NS solutions exhibited blow up, then actual fluid
flow, on very small time and space scales, has to involve parameters other
than those considered in NS. This could profoundly affect our
understanding of small scale in turbulence. In fact, some 75 years back,
Leray \cite{Leray1}, \cite{Leray2}, \cite{Leray3} was motivated to study
weak solutions of 3-D NS, conjecturing that turbulence was related to
blow-up of smooth solutions.
The typical method used in the mathematical analysis of NS, and of more
general PDEs, is the so-called energy method. For NS, the energy method
involves {\it a priori} estimates on the Sobolev $\mathbb{H}^m$ norms of
$v$. It is known that if $ \| v (\cdot, t) \|_{\mathbb{H}^1}$ is bounded,
then so are all the higher order energy norms $\| v (\cdot, t)
\|_{\mathbb{H}^m}$ if they are bounded initially. The condition on $v$
has been further weakened \cite{Beale} to $\int_0^{t}\| \nabla \times v
(\cdot, s) \|_{L^\infty}\,ds <\infty$. Prodi \cite{Prodi} and Serrin
\cite{Serrin} have found a family of other controlling norms for classical
solutions \cite{Lady}. In particular, no singularity is possible if $ \| v
(\cdot, t) \|_{L^\infty}$ is bounded. The ${L^{3}} $ norm is also
controlling, as has been recently shown in \cite{Sverak}. For classical
solutions, global existence proofs exist only for small initial data and
forcing or for large viscosity ({\it i.e.} when the non-dimensional
Reynolds number is small). On a sufficiently small initial interval the
solution is classical and unique. Global weak solutions (possibly
non-unique) are only known to exist \cite{Leray1}, \cite{Leray2},
\cite{Leray3} in a space of functions for which $\nabla v$ can blow-up on
a small set in space-time\footnote{The 1-D Hausdorff measure of the set of
blow-up points in space-time is known to be zero \cite{Caffarelli}}.
However, when $f=0$ (no forcing), a time $T_c$ may be estimated in terms
of $\| v_0 \|_{H^1}$ beyond which any weak Leray solution becomes
smooth again. Such an estimate, which also follows directly from Leray's
observation on the cumulative dissipation being bounded, is worked out in
the Appendix. \footnote{We are grateful to Alexey Cheskidov for pointing
out the fact that classical estimates are easily obtainable.}
Classical energy methods have so far failed to give global existence
because of failure to obtain conservation laws involving any of the
controlling norms \cite{Tao}.
Numerical solutions to (\ref{nseq0}) are physically revealing but do not
shed enough light into the existence issue. Indeed, the numerical errors
in Galerkin/finite-difference/finite-element approximations depend on
derivatives of $v$ that are not known to exist {\it a priori} beyond an
initial time interval.
This paper introduces a new method in approaching these issues. In our
formulation, the velocity $v (x, t)$ is obtained as a Laplace transform:
\begin{equation}\label{intro.1}
v (x, t) = v_0 (x) + \int_0^\infty U (x, q) e^{-q/t^n} dq ,\qquad n\ge 1
\end{equation}
where $U$ satisfies an integral equation (IE) which always has a unique
acceptable smooth solution. Looking for $v$ in this form is motivated by
our earlier work \cite{NS16} showing that small $t$ formal series
solutions, which exist for analytic initial conditions and forcing, are
Borel summable to actual solutions. In that case, the actual solution is
indeed in the form (\ref{intro.1}), with $n=1$. However, the
representation (\ref{intro.1}), the IE for ${\hat U} (k, q) \equiv
\mathcal{F} \left [ U (\cdot, q) \right ] (k)$,(\ref{IntUeqn}), and its
properties important to (\ref{intro.1}) are valid even when $f$, $v_0 $
and the corresponding solutions are not analytic in $x$. An overview of
our approach and nature of results is given in \cite{JTS}.
\begin{Note}\label{fn1}
For general initial data and forcing, $U$ is in $L_1 \left (\mathbb{R}^+,
e^{-\alpha q} dq \right ) $, as defined in (\ref{eq:eqL}). If $n>1$, then
$U$ is analytic in $q$ in an open sector. For $n = 1$, the solution is
$q$-analytic in a neighborhood of $\mathbb{R}^+ \cup \{0 \}$
\footnote{This together with the $L^1$ estimate proves Borel summability
of the small $t$ series.} iff ${\hat v} (x,0)$ and ${\hat f} (x)$ are
analytic in $x$.
\end{Note}
As it will be seen later, using $n>1$ is advantageous for some initial
data.
In Fourier space, (\ref{intro.1}) implies
\begin{equation}\label{intro.1.1}
{\hat v} (k,t) = {\hat v}_0 (k) + \int_0^\infty {\hat U} (k, q) e^{-q/t^n}
dq.
\end{equation}
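Given samples of ${\hat U}(k, \cdot)$ on a $q$-grid, (\ref{intro.1.1}) is
evaluated by direct quadrature. A minimal sketch (the grid must of course
extend well past the scale $t^n$ at which the exponential cuts the integrand
off):
\begin{verbatim}
# evaluate v_hat(k,t) = v0_hat(k) + int_0^inf U_hat(k,q) exp(-q/t^n) dq
import numpy as np
from scipy.integrate import trapezoid

def v_hat_of_t(v0_hat, U_hat, q, t, n=1):
    """U_hat: samples of U(k, .) on the grid q; complex values allowed."""
    return v0_hat + trapezoid(U_hat * np.exp(-q / t**n), q)
\end{verbatim}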
{\bf Notation.} Variables in the Fourier domain are marked with a hat
$\hat{}$, Laplace convolution is denoted by $*$, Fourier-convolution by
${\hat *}$, while $\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}}$ denotes Fourier followed by Laplace convolution
(their order is unimportant).
As seen in \S4, $\hat{U}$ satisfies the following IE:
\begin{equation}\label{IntUeqn}
{\hat U} (k, q) = -i k_j \int_0^q \mathcal{G} (q, q'; k) {\hat H}_j (k,
q') dq' + {\hat U}^{(0)} (k, q) =:\mathcal{N} \left [ {\hat U} \right ]
(k, q),
\end{equation}
where
\begin{equation}\label{7.2}
{\hat H}_j (k, q) = P_k \left [ {\hat v}_{0,j} {\hat *} {\hat U} + {\hat
U}_j {\hat *} {\hat v}_0 + {\hat U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U} \right ] (k, q).
\end{equation}
The kernel $\mathcal{G}$, the inhomogeneous term ${\hat U}^{(0)} (k, q)$
and their essential properties are given in (\ref{kernelG}) and
(\ref{eqU0}) in \S4.
\begin{Note}{
\rm The solutions of (\ref{IntUeqn}), needed on $\mathbb{R}^+$, are very regular,
see Note \ref{fn1}. The existence time of $\hat{v}$ is determined by the
behavior of $\hat{U}$ for large $q$. In this formulation, global existence
of $\hat{v}$ is equivalent to subexponential behavior of $\hat{U}$.}
\end{Note}
The IE formulation was first introduced in \cite{NS16} in a narrower
context, and provides a new approach towards solving IVPs.
\section{Main results}
We define
\begin{equation}\label{eq:eqL}
L_{1}(\mathbb{R}^+,e^{-\alpha q}dq)=\left\{g:\mathbb{R}^+\mapsto \mathbb{C} ~\Big|
\int_0^{\infty} e^{-\alpha q} |g(q)| dq < \infty\right\}.
\end{equation}
\begin{Assumptions}\label{A11}
In the following, unless otherwise specified, we assume that ${\hat v}_0$
and ${\hat f}$ are in $l^1 \left (\mathbb{Z}^3 \right )$, ${\hat v}_0 (0)
= 0 = {\hat f} (0)$, $n\ge 1$, $\nu>0$ and $\alpha$ in (\ref{eq:eqL}) is
large enough (see Proposition \ref{propthm01}).
\end{Assumptions}
\smallskip
\begin{Theorem}
\label{Thm01}
(i) Eq. (\ref{IntUeqn}) has a unique solution ${\hat U} (\cdot, q)\in
L_{1}(\mathbb{R}^+,e^{-\alpha q}dq)$. For $n > 1$ this solution is analytic in an
open sector, cf. Note \ref{fn1}. We let $U(x,q)=\mathcal{F}^{-1} \left [
{\hat U} (\cdot, q) \right ] (x)$.
(ii) With this ${\hat U} $, $\hat{v}$ in (\ref{intro.1.1}) ($v(x,t)$ in
(\ref{intro.1}) respectively) is a classical solution of (\ref{nseq}) (
(\ref{nseq0}), resp.) for $t \in \left ( 0, \alpha^{-1/n} \right )$.
(iii) Conversely, any classical solution of (\ref{nseq0}), $v (x, t)$,
$t\in (0,T_0)$ has a Laplace representation of the form (\ref{intro.1})
with $ U$ as in (i) and with
$$
{\hat U} (k, q) := \mathcal{L}^{-1} \left [ \mathcal{F} [ v (\cdot,
\tau^{-1/n}) ] (k) - \mathcal{F} [v_0 ] (k) \right ] (q)
$$
a solution of (\ref{IntUeqn}) in $L_{1}(\mathbb{R}^+,e^{-\alpha q}dq)$, $\alpha>
T_0^{-n}$.
\end{Theorem}
\z The proof is given at the end of \S5.
\begin{Remark}
{\rm Proposition \ref{propthm01} below provides (relatively rough)
estimates on $\alpha$. Theorem \ref{Thm03} gives sharper bounds in terms
of the values of ${\hat U}$ on a finite interval $[0, q_0]$. Smaller
bounds on $\alpha$ entail smooth solutions of (\ref{nseq0}) over a longer
time.}
\end{Remark}
\smallskip
We have the following result which, in a sense, is a converse of Theorem
\ref{Thm01}.
\begin{Theorem}\label{Thm02}
For $f = 0$, if (\ref{nseq0}) has a global classical solution, then for
all sufficiently large $n$, $ U (x, q)=O(e^{-C_nq^{1/(n+1)}})$ as $q \to
+\infty$, for some $C_n>0$.
\end{Theorem}
\z The proof is given in \S7.
\begin{Corollary}
Theorems \ref{Thm01} and \ref{Thm02} imply that global existence is
equivalent to an asymptotic problem: $\hat{v}$ exists for all time iff
$\hat{U}$ decays in $q$ for some $n \in \mathbb{Z}^+$.
\end{Corollary}
The existence interval $\left ( 0, \alpha^{-1/n} \right )$ guaranteed by
Theorem~\ref{Thm01} is suboptimal. It does not take into account the fact
that the initial data $v_0 $ and forcing $f$ are real valued. (Blow up of
Navier-Stokes solution for complex initial conditions is known to occur
\cite{Sinai}). Also, the estimate ignores possible cancellations in the
integrals.
In the following we address the issue of sharpening the estimates, in
principle arbitrarily well, based on more detailed knowledge of the
solution of the IE on an interval $[0,q_0]$. This knowledge may come, for
instance, from computer assisted estimates or from rigorous bounds based
on optimal truncation of asymptotic series. If this information shows that
the solution is sufficiently small for $q$ near the right end of the
interval, then $\alpha$ can be shown to be small. This in turn results in
longer times of guaranteed existence, possibly global existence for $f=0$
if this time exceeds $T_c$, the time after which it is known that a weak
solution becomes classical.
\section{Sharpening the estimates; rigorous numerical analysis}
Let ${\hat U} (k, q)$ be the solution of (\ref{IntUeqn}), provided by
Theorem~\ref{Thm01}. Define
\begin{equation}\label{eq:eq456}
\hat {U}^{(a)} (k, q) =
\begin{cases}
\hat{U} (k, q) & \mbox{for $q \in (0, q_0] \subset \mathbb{R}^+$} \\
0 & \mbox{otherwise}
\end{cases},
\end{equation}
\begin{multline*}
\hat{U}^{(s)}(k,q) = -ik_{j} \int_{0}^{\min \{ q, 2 q_0 \}}
\mathcal{G}(q,q';k) \hat{H}_{j}^{(a)}(k,q')\,dq' + \hat{U}^{(0)}(k,q), \\
\hat{H}_{j}^{(a)}(k,q) = P_{k} \Bigl[ \hat{v}_{0,j} \hat{*} \hat{U}^{(a)}
+ \hat{U}_{j}^{(a)} \hat{*} \hat{v}_{0} + \hat{U}_{j}^{(a)} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}}
\hat{U}^{(a)} \Bigr](k,q).
\end{multline*}
Using (\ref{eq:eq456}) we introduce the following functionals of ${\hat
U}^{(a)} (k, q)$, ${\hat v}_0$ and ${\hat f}$:
\begin{align}\label{ext3.1.0}
& b := \alpha^{1/2+1/(2n)} \int_{q_{0}}^{\infty} e^{-\alpha q}
\|\hat{U}^{(s)}(\cdot,q)\|_{l^1}\,dq, \\
& \epsilon_1 = \Gamma \biggl( \frac{1}{2}+\frac{1}{2n} \biggr) \biggl[
B_{1} + \int_{0}^{q_{0}} e^{-\alpha q'} B_{2}(q')\,dq' \biggr], \\
& \epsilon = \Gamma \biggl( \frac{1}{2}+\frac{1}{2n} \biggr)\, B_{3},
\end{align}
where
\begin{multline*}
B_{1} = 4\sup_{k \in \mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\}
\|\hat{v}_{0}\|_{l^1}, \qquad B_{0}(k) = \sup_{q_{0} \leq q' \leq q}
\Big\{ (q-q')^{1/2-1/(2n)} |\mathcal{G}(q,q';k)| \Big\}, \\
B_{2}(q) = 4\sup_{k \in \mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\}
\|\hat{U}^{(a)}(\cdot,q)\|_{l^1}, \qquad B_{3} = 2\sup_{k \in
\mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\}.
\end{multline*}
\bigskip
\begin{Theorem}\label{Thm03}
The exponential growth rate $\alpha$ of $\hat{U}$ is estimated in terms of
the restriction of $\hat{U}$ to $[0,q_0]$ as follows.
\begin{equation}\label{ext9}
\text{\rm If \ \ \ }\alpha^{1/2+1/(2n)} >\epsilon_{1} + 2 \sqrt{\epsilon
b}\ \ \text{\rm then\ \ }\int_0^{\infty} \| \hat{U} (\cdot,q)\|_{l^1}
e^{-\alpha q}dq <\infty.
\end{equation}
\end{Theorem}
\z The proof of Theorem \ref{Thm03} is given in \S\ref{S6}.
\begin{Remark}{
\rm In \S \ref{furtherest}, it is shown that for a given {\em global
classical} solution to (\ref{nseq0}), in adapted variables, the quantity
$\epsilon_{1} + 2 \sqrt{\epsilon b}$ is small for large $q_0$.}
\end{Remark}
\bigskip
\begin{Remark}{
\rm In the proof it is also seen that if $\| {\hat U}^{(a)} (\cdot, q)
\|_{l^1}$ is small enough in a sufficiently large subinterval $[q_d,
q_0]$, then the right side of (\ref{ext9}) is small, implying a large
existence time $\left (0, \alpha^{-1/n} \right )$ of a classical solution
$v$. The guaranteed existence time is larger if $q_0$ is larger. If for
$f=0$, the estimated $ \alpha^{-1/n}$ exceeds $T_c$, the time for Leray's
weak solution to become classical again (see Appendix), then global
existence of a classical solution $v$ follows.}
\end{Remark}
Since the improved estimates in Theorem \ref{Thm03} rely on the values of
$\hat{U}$ on a sufficiently large initial interval, we analyze the
properties of a discretized scheme for numerical computation of $\hat{U}$
with {\em controlled errors}.
\begin{Definition}\label{defDNorm}
We introduce the following norm on functions defined on a $\delta $-grid
in $q$
$$
\| {\hat W} \|^{(\alpha, \delta)} = \sup_{m_s \le m \in \mathbb{Z}^{+}}
m^{1-1/n} \delta^{1-1/n} (1+m^2 \delta^2) e^{-\alpha m \delta} \| {\hat W
} (\cdot, m\delta) \|_{l^1}.
$$
\end{Definition}
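For concreteness, this norm is evaluated from gridded $l^1$ norms as in the
following sketch (the hypothetical array \texttt{W\_l1} holds
$\| {\hat W}(\cdot, m\delta) \|_{l^1}$ for $m = 0, \dots, M$):
\begin{verbatim}
# the discrete norm of the preceding definition (sketch)
import numpy as np

def disc_norm(W_l1, delta, alpha, n, m_s=1):
    """W_l1[m] = l^1 norm of W(., m*delta), m = 0..M."""
    m = np.arange(len(W_l1), dtype=float)
    w = (m*delta)**(1.0 - 1.0/n) * (1.0 + (m*delta)**2) * np.exp(-alpha*m*delta)
    return float(np.max(w[m_s:] * np.asarray(W_l1)[m_s:]))
\end{verbatim}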
\begin{Theorem}\label{Thm04}
Consider a discretized integral equation consistent with (\ref{IntUeqn}) (
cf. definition \ref{defconsis}) based on Galerkin truncation to $[-N,N]^3$
Fourier modes and uniform discretization in $q$,
$$
{\hat U}_{\delta}^{(N)} = \mathcal{N}_{\delta}^{(N)} \left [ {\hat
U}_\delta^{(N)} \right ],
$$
see (\ref{discret2}) below. Then, the error ${\hat U} - {\hat
U}^{(N)}_\delta$ at the points $q = m \delta$, satisfies
\begin{multline}\label{eq:eu1}
\| {\hat U} (\cdot, m \delta) - {\hat U}^{(N)}_\delta (\cdot, m\delta )
\|_{l^1} \\
\le \left [ 2 \| T_{E,N} \|^{(\alpha, \delta)} + 2 \| T_{E,\delta}^{(N)}
\|^{(\alpha, \delta)} + \| (I-\mathcal{P}_N) {\hat U} \|^{(\alpha,
\delta)} \right ] \frac{ e^{\alpha m \delta} }{ m^{1-1/n} \delta^{1-1/n}
(1+m^2 \delta^2)}
\end{multline}
for $m \ge m_s \in \mathbb{Z}^+$, where $m_s \delta=:q_m > 0$ is
independent of $\delta$. In (\ref{eq:eu1}), $T_{E,N}$ is the truncation
error due to Galerkin projection $\mathcal{P}_N$ and $T_{E,\delta}^{(N)}$
is the truncation error due to the $\delta$-discretization in $q$ for a
given $N$. We have $\| T_{E,N} \|^{(\alpha, \delta)}, \| (I-\mathcal{P}_N)
{\hat U} \|^{(\alpha, \delta)} \to 0$ as $N \to \infty$ for any $\delta$
and $ \| T^{(N)}_{E,\delta} \|^{(\alpha, \delta)} \to 0 $ as $\delta \to
0$, uniformly in $N$.
\end{Theorem}
\begin{Remark}{
\rm For small $q$, independent of $\delta$, an asymptotic expansion of
${\hat U}$ exists, and solving the equation numerically for $q \in [0,
q_m]$ can be avoided. For this reason we start with $q=q_m$.}
\end{Remark}
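To illustrate the structure of such a scheme, the sketch below solves a
scalar caricature of (\ref{IntUeqn}) for $n=1$ at a single wavenumber
magnitude, with the Fourier convolutions collapsed to products and the
prefactor $-ik_j$ dropped; it uses the explicit $n=1$ Bessel kernel
(\ref{kernelG1})--(\ref{eqU01}) written out in the next section. It is
emphatically a toy (the scheme of Theorem \ref{Thm04} is a Galerkin
truncation of the full 3-D system), but it shows why
$\mathcal{G}(q,q;k)=0$ makes uniform-$\delta$ marching in $q$ explicit.
\begin{verbatim}
# explicit q-marching for a scalar caricature of the integral equation
import numpy as np
from scipy.special import j1, y1

def kernel_G(q, qp, k, nu):
    z = 2.0 * k * np.sqrt(nu * q)
    if qp == 0.0:
        return -2.0 * j1(z) / z        # z' -> 0 limit of the bracket
    zp = 2.0 * k * np.sqrt(nu * qp)
    return np.pi * (zp / z) * (j1(z)*y1(zp) - j1(zp)*y1(z))

def march(M, delta, k=1.0, nu=1.0, a=0.1, v1=1.0):
    """U(q) = int_0^q G(q,q') [2 a U + U*U](q') dq' + 2 v1 J1(z)/z,
    trapezoid in q' and in the Laplace convolution."""
    q = delta * np.arange(M + 1)
    z = 2.0 * k * np.sqrt(nu * q)
    U0 = np.empty(M + 1); U0[0] = v1   # 2 J1(z)/z -> 1 as z -> 0
    U0[1:] = 2.0 * v1 * j1(z[1:]) / z[1:]
    U = np.zeros(M + 1); H = np.zeros(M + 1)
    U[0] = U0[0]; H[0] = 2.0 * a * U[0]      # (U*U)(0) = 0
    for m in range(1, M + 1):
        w = np.full(m + 1, delta); w[0] = w[-1] = 0.5 * delta
        g = np.array([kernel_G(q[m], qp, k, nu) for qp in q[:m+1]])
        U[m] = np.dot(w * g, H[:m+1]) + U0[m]  # g[-1] = 0: explicit step
        H[m] = 2.0 * a * U[m] + np.dot(w, U[:m+1] * U[m::-1])
    return q, U
\end{verbatim}
Monitoring the empirical growth of $|U(q_m)|$ on such grids is precisely the
kind of information that feeds the sharpened bound of Theorem \ref{Thm03}.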
\section{Integral Equation (\ref{IntUeqn}) and its properties}
We define ${\hat u}$ through the decomposition
\begin{equation}\label{hatudef}
{\hat v} (k, t) = {\hat v}_0 (k) + {\hat u} (k, t).
\end{equation}
Then, (\ref{nseq}) implies
\begin{multline}\label{hatueq}
{\hat u}_t + \nu |k|^2 {\hat u} = - i k_j P_k \left [ {\hat v}_{0,j} {\hat
*} {\hat u} + {\hat u}_j {\hat *} {\hat v}_0 + {\hat u}_j {\hat *} {\hat
u} \right ] + {\hat v}_1 (k) =:-i k_j {\hat h}_j (k, t) + {\hat v}_1 (k),
\end{multline}
where ${\hat v}_1$ is given by (\ref{4}). Using ${\hat v} (k, 0) = {\hat
v}_0$, we have ${\hat u} (k, 0) = 0$ and we obtain from (\ref{hatueq}),
\begin{equation}\label{5}
{\hat u} (k, t) = -i k_j \int_0^t e^{-\nu |k|^2 (t-s)} {\hat h}_j (k, s)
ds + {\hat v}_1 (k) \left ( \frac{1-e^{-\nu |k|^2 t} }{\nu |k|^2} \right
).
\end{equation}
We look for $\hat{u}$ in the form of a Laplace transform
\begin{equation}\label{65}
{\hat u} (k, t) = \int_0^\infty {\hat U} (k, q) e^{-q/t^n} dq;\ \ n\ge 1
\end{equation}
We apply the inverse Laplace transform of (\ref{5}) with respect to $\tau
= 1/t^n$ (justified at the end of the proof of Lemma \ref{connection},
with more details in the Appendix) to obtain (\ref{IntUeqn}). The inverse
Laplace transform of $f$ is given, as usual, by
\begin{equation}\label{eq:deflaplace}
\left [ \mathcal{L}^{-1}f \right ] (p) = \frac{1}{2\pi
i}\int_{c-i\infty}^{c+i\infty} f(s)e^{ps}ds,
\end{equation}
where $c$ is chosen so that $f$ is analytic and has suitable asymptotic
decay for $\mathrm{Re} ~s \ge c$.
For $n=1$ the kernel $\mathcal{G}$ is given by, see \cite{NS16},
\begin{multline}\label{kernelG1}
\mathcal{G} (q, q'; k) = \frac{\pi z'}{z} \left ( J_1 (z) Y_1 (z') - J_1
(z') Y_1 (z) \right )\\{\rm where} ~z = 2 |k| \sqrt{\nu q} ~~,~~z'= 2 |k|
\sqrt{\nu q'}, \ \ (n=1)
\end{multline}
$J_1$ and $Y_1$ are Bessel functions of order 1, and
\begin{equation}\label{eqU01}
\hat{U}^{(0)} (k,q) = 2 {\hat{v}_{1}(k)} \frac{J_1 (z)}{z} ~~,~~{\rm
where} ~z= 2 |k| \sqrt{\nu q}
\end{equation}
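Both objects are elementary to evaluate with standard Bessel routines; the
sketch below also checks numerically that $\nu^{1/2}|k|q^{1/2}|\mathcal{G}|$
remains bounded, as asserted in Lemma \ref{L2.4} below.
\begin{verbatim}
# n = 1 kernel and inhomogeneous term via Bessel functions (sketch)
import numpy as np
from scipy.special import j1, y1

def G(q, qp, kmag, nu=1.0):
    z, zp = 2*kmag*np.sqrt(nu*q), 2*kmag*np.sqrt(nu*qp)
    return np.pi * (zp/z) * (j1(z)*y1(zp) - j1(zp)*y1(z))

def U0(v1hat, q, kmag, nu=1.0):
    z = 2*kmag*np.sqrt(nu*q)
    return 2.0 * v1hat * j1(z) / z

q = np.linspace(1e-3, 50.0, 5000)
for kmag in (1.0, 2.0, 5.0):
    vals = np.abs([G(qi, 0.5*qi, kmag) for qi in q])
    print(kmag, np.max(np.sqrt(q) * kmag * vals))  # stays O(1)
\end{verbatim}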
For $n \ge 2$ the kernel has the form (derived in the Appendix, see
(\ref{Gdefineac}))
\begin{multline}\label{kernelG}
\mathcal{G}(q,q';k) = \int_{(q'/q)^{1/n}}^1 \left \{ \frac{1}{2 \pi i}
\int_{c-i \infty}^{c+i \infty} \tau^{-1/n} \exp \left [ - \nu |k|^2
\tau^{-1/n} (1-s) + (q-q' s^{-n} ) \tau \right ] d\tau \right \} ds \\
= \frac{\gamma^{1/n}}{\nu^{1/2} |k| q^{1-1/(2n)}} \int_{1}^{\gamma^{-1/n}}
(1-s^{-n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2} \mu^{1/2} F(\mu)\,ds,
\end{multline}
where
$$
\gamma = \frac{q'}{q}, \qquad \mu = \nu |k|^{2} q^{1/n} (1-s\gamma^{1/n})
(1-s^{-n})^{1/n},
$$
\begin{equation}\label{eqF}
F(\mu) = \frac{1}{2\pi i} \int_{C} \zeta^{-1/n} e^{\zeta-\mu
\zeta^{-1/n}}\,d\zeta,
\end{equation}
and $C$ is a contour starting at $\infty e^{-i \pi}$ and ending at $\infty
e^{i \pi}$ turning around the origin counterclockwise. The function
$\hat{U}^{(0)}(k,q)$ in (\ref{IntUeqn}) is defined by
\begin{equation}\label{eqU0}
\hat{U}^{(0)}(k,q) = \frac{{\hat v}_1 (k)}{\nu |k|^2} \mathcal{L}^{-1}
\left \{ 1 - \exp \left [ - \nu |k|^2 \tau^{-1/n} \right ] \right \} (q) =
\frac{\hat{v}_{1}(k)}{\nu |k|^{2} q} G(\nu |k|^{2} q^{1/n}),
\end{equation}
where
\begin{equation}\label{intG}
G(\tilde{\mu}) = -\frac{1}{2\pi i} \int_{C} e^{\zeta-{\tilde \mu}
\zeta^{-1/n}}\,d\zeta.
\end{equation}
\subsection{Properties of $F$, $G$, $\mathcal{G}$, ${\hat U}^{(0)} $ and
the relation between the IE and NS}
\begin{Lemma}\label{lemG}
The functions $F$, $G$ in (\ref{eqF}) and (\ref{intG}) are entire and
$G^\prime (\mu) = F(\mu)$. Furthermore $F(0) = \frac{1}{\Gamma (1/n)}$,
$G(0) = 0$ and, for $n\ge 2$, their asymptotic behavior for large positive
$\mu$ is given by
\begin{equation}\label{ar1}
F(\mu) \sim
\begin{cases}
\displaystyle \sqrt{\frac{2}{\pi (n+1)}} n^{\frac{3}{2 (n+1)}}
\mu^{\frac{n-2}{2 (n+1)} } \mathrm{Im} \left \{ \exp \left [ \frac{3 i \pi}{2
(n+1)} \right ] e^{-z} \right \} & {\rm if } \arg\mu=0 \\
\displaystyle -i \sqrt{\frac{1}{2 \pi (n+1)} } n^{\frac{3}{2 (n+1)}}
\mu^{\frac{n-2}{2 (n+1)} } \exp \left [ \frac{3 i \pi}{2 (n+1)} \right ]
e^{-z} & {\rm if } \arg \mu \in (0, \frac{n+3}{2 n} \pi ) \\
\displaystyle i \sqrt{\frac{1}{2 \pi (n+1)} } n^{\frac{3}{2 (n+1)}}
\mu^{\frac{n-2}{2 (n+1)} } \exp \left [ \frac{-3 i \pi}{2 (n+1)} \right ]
e^{-{\hat z}} & {\rm if } \arg\mu \in (-\frac{n+3}{2 n} \pi, 0 )
\end{cases}
\end{equation}
\begin{equation}\label{ar2}
G(\mu) \sim
\begin{cases}
\displaystyle -\sqrt{\frac{2}{\pi (n+1)} } n^{\frac{1}{2 (n+1)}}
\mu^{\frac{n}{2 (n+1)} } \mathrm{Im} \left \{ \exp \left [ \frac{i \pi}{2 (n+1)}
\right ] e^{-z} \right \} & {\rm if } \arg\mu=0 \\
\displaystyle i \sqrt{\frac{1}{2 \pi (n+1)} } n^{\frac{1}{2 (n+1)}}
\mu^{\frac{n}{2 (n+1)} } \exp \left [ \frac{i \pi}{2 (n+1)} \right ]
e^{-z} & {\rm if } \arg \mu \in (0, \frac{n+3}{2 n} \pi ) \\
\displaystyle -i \sqrt{\frac{1}{2 \pi (n+1)} } n^{\frac{1}{2 (n+1)}}
\mu^{\frac{n}{2 (n+1)} } \exp \left [ \frac{-i \pi}{2 (n+1)} \right ]
e^{-{\hat z}} & {\rm if } \arg\mu \in (-\frac{n+3}{2 n} \pi, 0 )
\end{cases}
\end{equation}
where
\begin{equation}\label{eq:z}
z =\xi_0 \mu^{n/(n+1)} e^{i \pi/(n+1)} ~,~ ~~\xi_0 = n^{-n/(n+1)} ~(n+1);\
{\hat z} = \xi_0 \mu^{n/(n+1)} e^{-i \pi/(n+1)}.
\end{equation}
\end{Lemma}
\begin{proof}
These results follow from standard steepest descent analysis and from
the ordinary differential equation that $F$ and $G$ satisfy, see \S
\ref{A12}.
\end{proof}
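For numerical work with the $n \ge 2$ kernel one needs point values of $F$
(and $G$); quadrature on a truncated Hankel contour suffices. A sketch for
real $\mu$, checked against Hankel's formula $F(0) = 1/\Gamma(1/n)$:
\begin{verbatim}
# F(mu) of (eqF) on a truncated Hankel contour (sketch, real mu)
import numpy as np
from math import pi, gamma
from scipy.integrate import quad

def F(mu, n, r=1.0, R=60.0):
    def ray(x):      # zeta = x e^{i pi}; the lower ray is its conjugate
        z = x * np.exp(1j * pi)   # tiny +imag part keeps the principal branch
        return (z**(-1.0/n) * np.exp(z - mu * z**(-1.0/n)) * np.exp(1j*pi)).imag
    def circle(th):  # zeta = r e^{i theta}, theta in (-pi, pi)
        z = r * np.exp(1j * th)
        return (z**(1.0 - 1.0/n) * np.exp(z - mu * z**(-1.0/n))).real
    return quad(ray, r, R)[0] / pi + quad(circle, -pi, pi)[0] / (2.0*pi)

assert abs(F(0.0, 3) - 1.0/gamma(1.0/3)) < 1e-6
\end{verbatim}
The truncation at $R$ is harmless since the integrand decays like $e^{-x}$
along the rays; the exponential decay of $F$ for large positive $\mu$ in
(\ref{ar1}) provides a further check.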
\begin{Remark}
We see that $F(\mu)$ and $G(\mu)$ are exponentially small for large $\mu$
when $\arg \mu \in \left (-\frac{(n-1)\pi}{2n} , \frac{(n-1) \pi}{2n}
\right ) $, that is, when $\arg q \in \left (-\frac{(n-1) \pi}{2},
\frac{(n-1) \pi}{2} \right )$.
\end{Remark}
\begin{Definition}
For $\delta > 0$ and $n \ge 2$ we define the sector
$$
\mathcal{S}_\delta := \left \{ q: \arg q \in \left ( - \frac{(n-1) \pi}{2}
+\delta, \frac{(n-1) \pi}{2} -\delta \right ) \right \}.
$$
\end{Definition}
\begin{Lemma}\label{L2.4}
For $n \ge 2$, $q, q' \in e^{i \phi} \mathbb{R}^+ \subset
\mathcal{S}_\delta $, with $0 < |q'| \le |q| < \infty$ and $k \in
\mathbb{Z}^3 $ we have
$$
|\mathcal{G}(q,q';k)| \leq \frac{C_{2}
|q-q'|^{\frac{1}{2n}-\frac{1}{2}}}{\nu^{1/2} |k| |q|^{1/2}},
$$
where $C_{2}$ only depends on $\delta$. For $n=1$, the same inequality
holds for $q, q' \in \mathbb{R}^+$ with $0 < q' \le q$.
\end{Lemma}
\begin{proof}
The case $n=1$ follows from the behavior of $J_1$ and $Y_1$, see
\cite{NS16}\footnote{In that paper the viscosity $\nu$ was scaled to 1.}.
For $n \ge 2$, it follows from Lemma \ref{lemG} that $|\mu^{1/2} F(\mu)|$
is bounded, with a bound dependent on $\delta$. Below, $C$ is a generic
constant, possibly $\delta$ and $n$ dependent. {F}rom (\ref{kernelG}) we
get
\begin{multline}
|\mathcal{G}(q,q';k)| \leq \frac{C \gamma^{1/n}}{\nu^{1/2} |k|
|q|^{1-1/(2n)}} \Biggl[ \int_{1}^{\frac{1}{2}(1+\gamma^{-1/n})} +
\int_{\frac{1}{2}(1+\gamma^{-1/n})}^{\gamma^{-1/n}} \Biggr] \\
\times (1-s^{-n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2}\,ds =: \frac{C
\gamma^{1/n}}{\nu^{1/2} |k| |q|^{1-1/(2n)}} \Bigl( I_{1} + I_{2} \Bigr) ;\
\ \text{where } \gamma = \frac{q'}{q}
\end{multline}
For $s \in \left (1,\frac{1}{2} (1+\gamma^{-1/n}) \right ]$ we have
\begin{multline}
(1-s^{-n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2} \leq \biggl(
\frac{s-1}{1-s^{-n}} \biggr)^{1-1/(2n)} (s-1)^{1/(2n)-1} \biggl(
\frac{1}{2} - \frac{1}{2} \gamma^{1/n} \biggr)^{-1/2} \\
\leq C \Bigl[ 1 + (s-1)^{1-1/(2n)} \Bigr] (s-1)^{1/(2n)-1} \biggl(
\frac{1}{2} - \frac{1}{2} \gamma^{1/n} \biggr)^{-1/2} \\
\leq C (1-\gamma^{1/n})^{-1/2} \Bigl[ 1 + (s-1)^{1/(2n)-1} \Bigr],
\end{multline}
and for $s \in [\frac{1}{2} (1+\gamma^{-1/n}),\gamma^{-1/n})$,
\begin{multline*}
(1-s^{-n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2} \leq C \Bigl[ 1 +
(s-1)^{1/(2n)-1} \Bigr] (1-s\gamma^{1/n})^{-1/2} \\
\leq C (1-\gamma^{1/n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2}.
\end{multline*}
Thus
\begin{multline*}
I_{1} \leq C (1-\gamma^{1/n})^{-1/2}
\int_{1}^{\frac{1}{2}(1+\gamma^{-1/n})} \Bigl[ 1 + (s-1)^{1/(2n)-1}
\Bigr]\,ds \\
\leq C \gamma^{-1/n} (1-\gamma^{1/n})^{-1/2} \Bigl[ (1-\gamma^{1/n}) +
(1-\gamma^{1/n})^{1/(2n)} \Bigr] \\
\leq C \gamma^{-1/n} (1-\gamma^{1/n})^{1/(2n)-1/2}, \\
I_{2} \leq C (1-\gamma^{1/n})^{1/(2n)-1} \int_{\frac{1}{2}(1
+\gamma^{-1/n})}^{\gamma^{-1/n}} (1-s\gamma^{1/n})^{-1/2}\,ds \\
\leq C \gamma^{-1/n} (1-\gamma^{1/n})^{1/(2n)-1/2}.
\end{multline*}
\end{proof}
\begin{Lemma}\label{lemU0}
(i) For $n \ge 2$ and $0\ne q \in \mathcal{S}_\delta$, we have for $\alpha
\ge 1$,
$$
\| {\hat U}^{(0)} (\cdot, q) \|_{l^1} \le c_1 \| {\hat v}_1 \|_{l^1}
|q|^{-1+1/n} \exp \left [ - c_2 \nu^{n/(n+1)} |q|^{1/(n+1)} \right ],
$$
$$
\| k {\hat U}^{(0)} (\cdot, q) \|_{l^1} \le c_1 \| |k| {\hat v}_1 \|_{l^1}
|q|^{-1+1/n} \exp \left [ - c_2 \nu^{n/(n+1)} |q|^{1/(n+1)} \right ],
$$
where $c_1$ and $c_2$ depend on $\delta$ and $n$. Thus, we have
\begin{equation}\label{eqlemU01}
\int_0^{\infty} e^{-\alpha |q|} \| {\hat U}^{(0)} (\cdot, q) \|_{l^1} d|q|
\le c_1 \| {\hat v}_1 \|_{l^1} \alpha^{-1/n} \Gamma \biggl( \frac{1}{n}
\biggr).
\end{equation}
With $c_1 = 1$ and $q\in\mathbb{R}^+$, the bound in (\ref{eqlemU01}) holds for $n
=1$ as well. For $n \ge 2$, noting that ${\hat v}_1 (0) =0={\hat f} (0)$,
we have
\begin{equation}\label{eqCGU0}
\int_0^\infty \| {\hat U}^{(0)} (\cdot, q) \|_{l^1} d|q| \le C_G \biggl\|
\frac{{\hat v}_1}{\nu |k|^2} \biggr\|_{l^1} \le C_G \left \{ \| {\hat v}_0
\|_{l^1} \biggl( 1 + \frac{2}{\nu} \| {\hat v}_0 \|_{l^1} \biggr) +
\frac{1}{\nu} \biggl\| \frac{{\hat f}}{|k|^2} \biggr\|_{l^1} \right \},
\end{equation}
where
$$
C_G = \sup_{\phi \in \left [-\frac{n-1}{2 n} \pi + \frac{\delta}{n},
\frac{n-1}{2n} \pi - \frac{\delta}{n} \right ]} n \int_0^\infty s^{-1}|G(s
e^{i \phi} ) | ds.
$$
(ii) If moreover $|k|^{j+2} {\hat v}_0, |k|^{j}{\hat f} \in l^1$ ($j=
0,1$), then
\begin{multline*}
\sup |q|^{1-1/n} (1+|q|^2) e^{-\alpha |q|} \| k^j {\hat U}^{(0)} (\cdot,
q) \|_{l^1} \\
\le 2 c_1 \| |k|^{j} {\hat v}_1 \|_{l^1} \le 2 c_1 \left [ \nu \|
|k|^{j+2} {\hat v}_0 \|_{l^1} + 2 \|{|k|^j \hat v}_0 \|_{l^1} \| |k| {\hat
v}_0 \|_{l^{1}}+ \| |k|^{j}{\hat f} \|_{l^1} \right ]
\end{multline*}
where the sup is taken over $\mathbb{R}^+$ if $n=1$ and over
$\mathcal{S}_\delta$ if $n>1$.
\end{Lemma}
\begin{proof}
The result follows from (\ref{eqU0}) and (\ref{4}) using the asymptotics
of $G$, cf. (\ref{ar2}) and the behavior $G({\tilde \mu}) \sim C {\tilde
\mu}$ near ${\tilde \mu} =0$. For $n=1$, the bound (\ref{eqlemU01})
follows from the fact that $ \left |2 z^{-1}{J_1(z)} \right | \le 1$.
\end{proof}
The following lemma proves that a suitable solution to the integral
equation (\ref{IntUeqn}) gives rise to a solution of NS.
\begin{Lemma}\label{connection}
For any solution ${\hat U} $ of (\ref{IntUeqn}) such that $ \| {\hat U}
(\cdot, q) \|_{l^1} \in L_{1}(\mathbb{R}^+,e^{-\alpha q}dq)$, the Laplace
transform
$$
{\hat v} (k, t) = {\hat v}_0 (k) + \int_0^\infty {\hat U} (k, q)
e^{-q/t^n} dq
$$
solves (\ref{nseq}) for $t \in \left ( 0, \alpha^{-1/n} \right )$. For
$n=1$, ${\hat v} (k, t)$ is analytic in $t$ for $\mathrm{Re} \frac{1}{t}>
\alpha$.
It will turn out, cf. Lemma \ref{instsmooth} in the appendix, that $|k|^2
{\hat v} (\cdot, t) \in l^1 $ for $t \in \left ( 0, \alpha^{-1/n} \right
)$. Therefore, $v (x, t) = \mathcal{F}^{-1} \left [ {\hat v} (\cdot, t)
\right ] (x)$ is the classical solution of (\ref{nseq0}).
\end{Lemma}
\begin{proof}
{F}rom (\ref{eqU0}), we obtain
\begin{multline*}
\int_0^\infty e^{-q t^{-n} } {\hat U}^{(0)} (k, q) dq = {\hat v}_1 (k)
\int_0^\infty e^{-q t^{-n}} \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty}
\frac{1-e^{-\nu |k|^2 \tau^{-1/n} } }{\nu |k|^2}e^{q \tau} d\tau dq \\
= {\hat v}_1 (k) \left ( \frac{1-e^{-\nu |k|^2 t}}{\nu |k|^2} \right ).
\end{multline*}
Furthermore, we may rewrite (\ref{kernelG}) as
\begin{multline}\label{eq124}
\mathcal{G} (q, q'; k) \\
= \frac{1}{2 \pi i} \int_{0}^1 \int_{c-i \infty}^{c+i \infty} \tau^{-1/n}
\left \{ \exp \left [ -\nu |k|^2 \tau^{-1/n} (1-s) + (q-q'/s^n) \tau
\right ] d\tau \right \} ~ds
\end{multline}
Since the integral with respect to $\tau$ is identically zero when $s \in
\left (0, (q'/q)^{1/n} \right )$ (the $\tau$ contour can be pushed to
$+\infty$), we can replace the lower limit in the outer integral in
(\ref{eq124}) by $(q'/q)^{1/n}$. Note that $\| \hat{H}_j (\cdot ,q)
\|_{l^{1}} \in {L}_1 \left ( e^{-\alpha |q|} d|q| \right )$, since
\begin{equation}\label{eq:conv2}
\|F*G\|_{\alpha}\le \|F\|_{\alpha}\|G\|_{\alpha}
\end{equation}
(see \cite{Duke} and also Lemma~\ref{lemBanach} below). Changing variable
$q'/s^n \to q'$ and applying Fubini's theorem we get
\begin{equation}\label{eqHG}
-i k_j \int_0^q {\hat H}_j (k, q') \mathcal{G} (q, q'; k) dq' = \int_0^1
s^n \left \{ \int_0^q \left [-i k_j {\hat H}_j \right ] (k, q' s^n)
\mathcal{Q} (q-q', s; k) dq' \right \} ds
\end{equation}
where for $q > 0$ we have
\begin{equation}
\mathcal{Q} (q,s;k) = \frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty}
\exp \left [ -\nu |k|^2 \tau^{-1/n} (1-s) + q \tau \right ] \tau^{-1/n}
d\tau.
\end{equation}
Laplace transforming (\ref{eqHG}) with respect to $q$, again by Fubini we
have
\begin{multline}
\int_0^\infty e^{-q t^{-n}} \left \{ \int_0^1 \int_0^q \left \{ -ik_j
{\hat H}_j \right \} (k, q's^n) \mathcal{Q} (q-q'; s, k) s^n dq' ds \right
\}~ dq \\
= -i k_j \int_0^1 ~ds ~g(t, s; k) {\hat h}_j (k, st ),
\end{multline}
where ${\hat h}_j (k, t) = \mathcal{L} \left [ {\hat H}_j (k, \cdot)
\right ] (t^{-n})$, ~~$g(t, s; k) = \mathcal{L} \left [ \mathcal{Q}
(\cdot, s; k) \right ] (t^{-n}) $. By assumption, $\| {\hat U} (\cdot, q)
\|_{l^1} \in L_{1} \left (\mathbb{R}^+, e^{-\alpha q} dq \right ) $ and
${\hat v}_0 (k) \in l^1$. {F}rom (\ref{7.2}) and (\ref{eq:conv2}) it
follows that ${\hat H}_j$ is Laplace transformable in $q$ and
$$
{\hat h}_j (k, t) = P_k \left \{ {\hat v}_{0,j} {\hat *} {\hat u} + {\hat
u}_j {\hat *} {\hat v}_0 + {\hat u}_j {\hat *} {\hat u} \right \} (k, t),
$$
while
$$
g(t, s; k) = t \exp \left [ -\nu |k|^2 t (1-s) \right ].
$$
This leads to
\begin{multline*}
{\hat u} (k, t) = t \int_0^1 e^{-\nu |k|^2 t (1-s)} \left [ -i k_j {\hat
h}_j \right ] (k, s t) ds + {\hat v}_1 (k) \left ( \frac{1-e^{-\nu |k|^2
t}}{\nu |k|^2} \right ) \\
= \int_0^t e^{-\nu |k|^2 (t -\tau) } \left [ -i k_j {\hat h}_j \right ]
(k, \tau) d\tau + {\hat v}_1 (k) \left ( \frac{1-e^{-\nu |k|^2 t}}{\nu
|k|^2} \right )
\end{multline*}
and thus
$$
{\hat u}_t + \nu |k|^2 {\hat u} = -i k_j {\hat h}_j (k, t) ~+~{\hat
v}_1~~, ~~{\rm with} ~{\hat u} (k, 0) = 0.
$$
Therefore, using expression (\ref{hatueq}) for ${\hat h}_j$, we see that
${\hat v} (k, t) = {\hat u} (k, t) + {\hat v}_0 (k)$ satisfies (\ref{nseq}), with
${\hat v} (k, 0) = {\hat v}_0 (k) $. Analyticity in $t$ of this solution
in region $\mathrm{Re} \frac{1}{t} > \alpha$ follows from the representation
(\ref{intro.1.1}). It is clear that $|k|^2 {\hat v} (\cdot, t), {\hat f}
\in l^1 $ ensures that $\mathcal{F}^{-1} \left [ {\hat v} (\cdot, t)
\right ] (x) $ is a classical solution to (\ref{nseq0}).
\end{proof}
\section{Existence of a solution to (\ref{IntUeqn})}
\z First, we prove some preliminary lemmas.
\begin{Lemma}\label{lem0.1}
By standard Fourier theory, if ${\hat v}, {\hat w} \in l^1 \left (
\mathbb{Z}^3 \right )$, then so is ${\hat v}{\hat *} {\hat w}$, and $ \|
{\hat v} {\hat *} {\hat w} \|_{l^1} \le \| {\hat v} \|_{l^1} \| {\hat w}
\|_{l^1}$. \quad\blackslug\lower 8.5pt\null\par
\end{Lemma}
\begin{Lemma}\label{lem0.2}
$$
\| P_k \left [ {\hat w}_j {\hat *} {\hat v} \right ] \|_{l^1} \le 2 \|
{\hat w}_j \|_{l^1} \|{\hat v} \|_{l^1}.
$$
\end{Lemma}
\begin{proof}
It is easily seen from the representation of $P_k$ in (\ref{8.0}) that
\begin{equation}\label{Pbound}
| P_k {\hat g} (k) | \le 2 |{\hat g} (k)|.
\end{equation}
The rest follows from Lemma \ref{lem0.1}.
\end{proof}
\begin{Lemma}\label{lemC2}
Let $C_2=C_2(\delta,n)$ be given by
\begin{multline*}
C_2 = 2 \sup_{\substack{q, q' \in e^{i \phi} \mathbb{R}^+ \subset
\mathcal{S}_\delta,\, 0 \le |q'| \le |q| \\ k \in \mathbb{Z}^3}} \nu^{1/2}
|k| |q|^{1/2} |q-q'|^{1/2-1/(2 n)} | \mathcal{G} (q, q'; k) |\quad
\mbox{{\rm for} $n \ge 2$}, \\
C_2 = 2 \sup_{\substack{q, q' \in \mathbb{R}^+,\, 0 \le q' \le q \\ k \in
\mathbb{Z}^3}} \nu^{1/2} |k| q^{1/2} | \mathcal{G} (q, q'; k) |\quad
\mbox{{\rm for} $n = 1$}.
\end{multline*}
Then, for $n \ge 2$, we have
\begin{multline}\label{N1}
\| \mathcal{N} [{\hat U}] (\cdot , q) \|_{l^1} \le \frac{C_2}{\nu^{1/2}
|q|^{1/2}} \int_0^{|q|} (|q|-s)^{-1/2+1/(2n)} \left \{ \| {\hat U} (\cdot
, s e^{i \phi} ) \|_{l^1} \right. \\
\left. * \| {\hat U} (\cdot , s e^{i \phi}) \|_{l^1} + 2 \| {\hat v}_0
\|_{l^1} \| {\hat U}(\cdot , s e^{i \phi}) \|_{l^{1}} \right \} ds + \|
{\hat U}^{(0)} (\cdot, q) \|_{l^1},
\end{multline}
\begin{multline}\label{N2}
\| \mathcal{N} [{\hat U}^{[1]}] (\cdot , q) - \mathcal{N} [{\hat U}^{[2]}]
(\cdot , q) \|_{l^1} \\
\le \frac{C_2}{\nu^{1/2} |q|^{1/2}} \int_0^{|q|} (|q|-s)^{-1/2+1/(2n)}
\left \{ \left ( \| {\hat U}^{[1]} (\cdot , s e^{i \phi}) \|_{l^1} + \|
{\hat U}^{[2]} (\cdot , s e^{i \phi}) \|_{l^1} \right ) \right. \\
\left. * \| {\hat U}^{[1]} (\cdot , s e^{i \phi}) - {\hat U}^{[2]} (\cdot
, s e^{i \phi} )\|_{l^1} + 2 \| {\hat v}_0 \|_{l^1} \| {\hat U}^{[1]}
(\cdot , s e^{i \phi}) -{\hat U}^{[2]} ( \cdot , s e^{i \phi}) \|_{l^1}
\right \} ds.
\end{multline}
For $n=1$, (\ref{N1}) and (\ref{N2}) hold for $q \in \mathbb{R}^+$, {\it
i.e.} when $\phi=0$.
\end{Lemma}
\begin{proof}
{F}rom Lemma \ref{lem0.2}, we have, for any $q$
$$
\| P_k \left \{ {\hat U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U} \right \} (k,q ) \|_{l^{1}} \le 2
\| {\hat U} (\cdot, q) \|_{l^1} * \| {\hat U} (\cdot, q)\|_{l^{1}},
$$
and similarly
$$
\| P_k \left \{ {\hat v}_{0,j}{\hat *}{\hat U} (\cdot, q) + {\hat U}_j
(\cdot, q) {\hat *} {\hat v}_{0} \right \} \|_{l^{1}} \le 4 \| {\hat v}_0
\|_{l^1} \| {\hat U} (\cdot, q)\|_{l^1},
$$
and (\ref{N1}) follows.
The second part of the lemma follows by noting that
\begin{equation}\label{Udiff}
{\hat U}_j^{[1]} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}^{[1]} -{\hat U}_j^{[2]} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}^{[2]}
= {\hat U}^{[1]}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \left ({\hat U}^{[1]} - {\hat U}^{[2]} \right ) +
\left( {\hat U}^{[1]}_j - {\hat U}^{[2]}_j \right ) \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}^{[2]}.
\end{equation}
Applying Lemma \ref{lem0.2} to (\ref{Udiff}), we obtain
\begin{multline*}
\| P_k \left \{ {\hat U}_j^{[1]} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}^{[1]} (\cdot, q) -{\hat
U}_j^{[2]} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}^{[2]} (\cdot, q) \right \} \|_{l^{1}} \le 2 \|
{\hat U}^{[1]} (\cdot, q) \|_{l^{1}} * \| {\hat U}^{[1]} (\cdot,
q) - {\hat U}^{[2]} (\cdot, q) \|_{l^1} \\
+ 2 \| {\hat U}^{[2]} (\cdot, q) \|_{l^{1}} * \| {\hat U}^{[1]} (\cdot, q)
- {\hat U}^{[2]} (\cdot, q) \|_{l^{1}},
\end{multline*}
from which (\ref{N2}) follows easily.
\end{proof}
It is convenient to define a number of different $q$-norms, $q \in e^{i
\phi} \mathbb{R}^+ \cup \{0\} \subset \mathcal{S}_\delta$.
\begin{Definition}\label{DefA}
(i) For $\alpha > 0$, $n\ge 2$, we let $\mathcal{A}^{(\alpha)}$ be the set
of analytic functions in $\mathcal{S}_\delta$ with the norm
\begin{equation}\label{8.0.1}
\| {\hat f} \|^{(\alpha)} = \sup_{q \in \mathcal{S}_\delta} |q|^{1-1/n}
(1+|q|^2) e^{-\alpha |q|} \|{\hat f} (\cdot, q) \|_{l^{1}} < \infty,
\end{equation}
while for $n=1$, $\mathcal{A}^{(\alpha)}$ will denote the set of
continuous functions on $[0, \infty)$ with norm $\| \cdot \|^{(\alpha)}$.
(ii) Let $\alpha > 0$, $n \ge 2$, $\delta > 0$. We define a Banach space
$\mathcal{A}_1^{\alpha;\phi} $ of functions along the ray
$|q|e^{i\phi}\in\mathcal{S}_\delta$ with the norm
\begin{equation}\label{8.0.0}
\| {\hat f} \|_1^{\alpha;\phi} = \int_0^\infty e^{-\alpha |q|} \| {\hat f}
(\cdot, |q|e^{i\phi}) \|_{l^{1}} d|q| < \infty.
\end{equation}
We agree to omit the superscript $\phi$ when $\phi=0$ (which is always the
case if $n=1$).
\end{Definition}
\begin{Lemma}\label{lemBanach}
We have the following Banach algebra properties:
\begin{equation}\label{eq:norm1}
\| {\hat f}~ \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} ~ {\hat g} \|_1^{\alpha; \phi} \le \| {\hat f}
\|_1^{\alpha; \phi} \| {\hat g} \|_1^{\alpha; \phi},
\end{equation}
\begin{equation}\label{eq:norm2}
\| {\hat f} ~\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}}~ {\hat g} \|^{(\alpha)} \le M_0 \| {\hat f} \|^{(\alpha)}
\| {\hat g} \|^{(\alpha)},
\end{equation}
where
$$
M_0 = 2^{4-1/n} \int_0^\infty \frac{ds}{s^{1-1/n} (1+s^2)}.
$$
\end{Lemma}
\begin{proof}
In the following, we take $u(s) = \| {\hat f}(\cdot, s e^{i \phi})
\|_{l^1}$ and $v (s) = \| {\hat g} (\cdot, s e^{i\phi}) \|_{l^1} $. For
(\ref{eq:norm1}) we note that for any $L > 0$,
\begin{multline}
\int_0^L e^{-\alpha |q|} \int_0^{|q|} u(s) v (|q|-s) ds ~d|q| \\
= \int_0^L \int_0^{|q|} e^{-\alpha s} e^{-\alpha (|q|-s)} u(s) v (|q|-s)
ds ~d|q| \le \int_0^L e^{-\alpha s} u(s) ds \int_0^L e^{-\alpha \tau}
v(\tau) d\tau.
\end{multline}
{F}rom (\ref{8.0.1}), we note that
$$
\int_0^{|q|} u(s) v(|q|-s) ds \le \| {\hat f} \|^{(\alpha)} \| {\hat g}
\|^{(\alpha)} e^{\alpha |q|} \int_0^{|q|} \frac{ds}{s^{1-1/n}
(|q|-s)^{1-1/n} [1+s^2] [1+(|q|-s)^2]}.
$$
Finally,
\begin{multline*}
\int_0^{|q|} \frac{ds}{s^{1-1/n} (|q|-s)^{1-1/n} [1+s^2] [1+(|q|-s)^2]} \\
= 2 \int_0^{|q|/2} \frac{ds}{s^{1-1/n} (|q|-s)^{1-1/n} [1+s^2]
[1+(|q|-s)^2]} \\
\le \frac{2^{2-1/n}}{|q|^{1-1/n} (1+|q|^2/4)} \int_0^{|q|/2}
\frac{ds}{s^{1-1/n} [1+s^2]} \le \frac{2^{4-1/n}}{|q|^{1-1/n} (1+|q|^2)}
\int_0^\infty \frac{ds}{s^{1-1/n} [1+s^2]},
\end{multline*}
where we used $\sup \frac{1+|q|^2}{1+|q|^2/4} = 4$.
\end{proof}
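For reference, $M_0$ is straightforward to evaluate. The minimal sketch
below (ours, not part of the argument) computes it numerically for a given
$n \ge 2$ and checks it against the closed form
$2^{4-1/n}\, \pi / (2 \sin(\pi/(2n)))$, which follows from the standard
Beta-integral identity $\int_0^\infty s^{a-1} (1+s^2)^{-1} ds =
\pi/(2\sin(\pi a/2))$, $0 < a < 2$, with $a = 1/n$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def M0(n):
    # M_0 = 2^(4-1/n) * int_0^inf ds / (s^(1-1/n) (1+s^2)); converges for n >= 2
    f = lambda s: s ** (1.0 / n - 1.0) / (1.0 + s ** 2)
    val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    return 2 ** (4 - 1.0 / n) * val

def M0_closed(n):
    return 2 ** (4 - 1.0 / n) * np.pi / (2 * np.sin(np.pi / (2 * n)))

print(M0(2), M0_closed(2))   # both approximately 25.13
\end{verbatim}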
\begin{Lemma}\label{NUnormbound}
Let $C_2$ be as in Lemma \ref{lemC2} and $\alpha \ge 1$. The operator
$\mathcal{N} $ in (\ref{IntUeqn}) is well defined on:
(i) $\mathcal{A}_1^{\alpha; \phi}$, where it satisfies the following
inequalities
\begin{equation}
\| \mathcal{N} [{\hat U}] \|_1^{\alpha; \phi} \le C_2 \nu^{-1/2} \Gamma
\biggl( \frac{1}{2n} \biggr) \alpha^{-1/(2n)} \left \{ \left ( \| {\hat U}
\|_1^{\alpha; \phi} \right )^2 + 2 \| {\hat v}_0 \|_{l^1} \| {\hat U}
\|_1^{\alpha; \phi} \right \} + \| {\hat U}^{(0)} \|_1^{\alpha; \phi},
\end{equation}
\begin{multline}
\| \mathcal{N} [{\hat U}^{[1]} ] - \mathcal{N} [ {\hat U}^{[2]}]
\|_1^{\alpha; \phi} \le C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr)
\alpha^{-1/(2n)} \\
\times \left \{ \left ( \| {\hat U}^{[1]} \|_1^{\alpha; \phi} + \| {\hat
U}^{[2]} \|_1^{\alpha; \phi} \right ) \| {\hat U}^{[1]} - {\hat U}^{[2]}
\|_1^{\alpha; \phi} + 2 \| {\hat v}_0 \|_{l^{1}} \| {\hat U}^{[1]}-{\hat
U}^{[2]} \|_1^{\alpha; \phi} \right \}.
\end{multline}
(ii) $\mathcal{A}^{(\alpha)}$, where it satisfies the inequalities:
\begin{equation}\label{NUnormsup}
\| \mathcal{N} [{\hat U}] \|^{(\alpha)} \le C_2 C_3 \nu^{-1/2}
\alpha^{-1/(2n)} \left \{ M_0 \left ( \| {\hat U} \|^{(\alpha)} \right )^2
+ 2 \| {\hat v}_0 \|_{l^{1}} \| {\hat U} \|^{(\alpha)} \right \} + \|
{\hat U}^{(0)} \|^{(\alpha)},
\end{equation}
\begin{multline}\label{3.50}
\| \mathcal{N} [{\hat U}^{[1]} ] - \mathcal{N} [ {\hat U}^{[2]}]
\|^{(\alpha)} \le C_2 C_3 \nu^{-1/2} \alpha^{-1/(2n)} \\
\times \left \{ M_0 \left ( \| {\hat U}^{[1]} \|^{(\alpha)} + \| {\hat
U}^{[2]} \|^{(\alpha)} \right ) \| {\hat U}^{[1]} - {\hat U}^{[2]}
\|^{(\alpha)} + 2 \| {\hat v}_0 \|_{l^{1}} \| {\hat U}^{[1]}-{\hat
U}^{[2]} \|^{(\alpha)} \right \},
\end{multline}
where $C_3$ is defined in (\ref{C3def}) and depends on $n$ alone.
\end{Lemma}
\begin{proof}
(i) For any $0 < L \le \infty$ and $u \ge 0$ we have
\begin{multline*}
\int_0^L e^{-\alpha |q|} |q|^{-1/2} \left ( \int_0^{|q|}
(|q|-s)^{-1/2+1/(2n)} u(s e^{i\phi} ) ds \right ) d|q| \\
= \int_0^L u(s e^{i \phi} ) e^{-\alpha s} \left ( \int_{s}^L |q|^{-1/2}
(|q|-s)^{-1/2+1/(2n)} e^{-\alpha (|q|-s)} d|q| \right ) ds \\
\le \int_0^L e^{-\alpha s} u(s e^{i\phi}) \left \{ \int_0^L
{s'}^{-1/2+1/(2n)} (s'+s)^{-1/2} e^{-\alpha s'} ds' \right \} ds.
\end{multline*}
Using (\ref{N1}) it follows that
\begin{multline*}
\int_0^\infty e^{-\alpha |q|} \| \mathcal{N} [{\hat U} ] (\cdot, |q| e^{i
\phi}) \|_{l^1} d|q| \\
\le C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr) \alpha^{-1/(2n)}
\left ( \left [ \| {\hat U} \|_1^{\alpha; \phi} \right ]^2 + 2 \| v_0
\|_{l^{1}} \| {\hat U} \|_{1}^{\alpha; \phi} \right ) + \| {\hat U}^{(0)}
\|_1^{\alpha; \phi}.
\end{multline*}
{F}rom (\ref{N2}), it now follows that
\begin{multline*}
\int_0^\infty \| \mathcal{N} [{\hat U}^{[1]} ] - \mathcal{N} [ {\hat
U}^{[2]}] \|_{l^1} e^{-\alpha |q|} d|q| \\
\le C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr) \alpha^{-1/(2n)}
\left \{ \left ( \| {\hat U}^{[1]} \|_1^{\alpha; \phi} + \| {\hat U}^{[2]}
\|_1^{\alpha; \phi} \right ) \| {\hat U}^{[1]} - {\hat U}^{[2]}
\|_1^{\alpha; \phi} \right . \\
\left. + 2 \| {\hat v}_0 \|_{l^{1}} \| {\hat U}^{[1]}-{\hat U}^{[2]}
\|_1^{\alpha; \phi} \right \}.
\end{multline*}
(ii) We first note that
\begin{multline*}
|q|^{1/2-1/n} \int_0^{|q|} e^{-\alpha (|q|-s)} (|q|-s)^{-1/2+1/(2n)}
s^{-1+1/n} (1+s^2)^{-1} ds \\
= |q|^{1/(2n)} \int_0^1 e^{-\alpha |q| (1-t)} t^{-1+1/n}
(1-t)^{-1/2+1/(2n)} (1+t^2 |q|^2 )^{-1} dt \\
= |q|^{1/(2n)} \left \{ \int_0^{1/2} e^{-\alpha |q| (1-t)}
\frac{t^{-1+1/n} (1-t)^{-1/2+1/(2n)}}{ (1+t^2 |q|^2)} dt + \right. \\
\left . \int_{1/2}^1 e^{-\alpha |q| (1-t)} \frac{t^{-1+1/n}
(1-t)^{-1/2+1/(2n)}}{ (1+t^2 |q|^2) } dt \right \}
\end{multline*}
\begin{multline}\label{r1}
\le |q|^{1/(2n)} e^{-\alpha |q|/2} \int_0^{1/2} t^{-1+1/n}
(1-t)^{-1/2+1/(2n)} dt \\
+ \frac{2^{1-1/n} |q|^{1/(2n)}}{1+|q|^2/4} \int_{1/2}^1 e^{-\alpha |q|
(1-t)} (1-t)^{-1/2+1/(2n)} dt.
\end{multline}
The first term on the right of (\ref{r1}) is bounded by $n 2^{1/2-3/(2n)}
|q|^{1/(2n)} e^{-\alpha |q|/2}$. For the second term we separate two
cases. Let first $\alpha |q| \le 1$. It is then clear that
\begin{multline*}
|q|^{1/(2n)} \int_{1/2}^1 e^{-\alpha |q| (1-t)} (1-t)^{-1/2+1/(2n)} dt \\
\le |q|^{1/(2n)} \int_{1/2}^1 (1-t)^{-1/2+1/(2n)} dt \le
\frac{2n}{(n+1)\alpha^{1/(2n)}}.
\end{multline*}
Now, if $\alpha |q| > 1$, we have
\begin{multline*}
|q|^{1/(2n)} \int_{1/2}^1 e^{-\alpha |q| (1-t)} (1-t)^{-1/2+1/(2n)} dt =
|q|^{1/(2n)} \int_{0}^{1/2} e^{-\alpha |q| t} t^{-1/2+1/(2n)} dt \\
\le |q|^{1/(2n)} \Gamma \biggl( \frac{1}{2}+\frac{1}{2n} \biggr) \left
[\alpha |q| \right ]^{-1/2-1/(2n)} \le \alpha^{-1/(2n)} \Gamma \biggl(
\frac{1}{2}+\frac{1}{2n} \biggr).
\end{multline*}
Combining these results we get
$$
|q|^{1/(2n)} \int_{1/2}^1 e^{-\alpha |q| (1-t)} (1-t)^{-1/2+1/(2n)} dt \le
\alpha^{-1/(2n)} C_1,
$$
where
$$
C_1 = \max \left \{ \Gamma \biggl( \frac{1}{2}+\frac{1}{2n} \biggr),
\frac{2n}{n+1} \right \}.
$$
Therefore,
\begin{multline}\label{C3def}
\sup_{|q| > 0} \left \{ |q|^{1-1/n} (1+|q|^2) e^{-\alpha |q|} |q|^{-1/2}
\int_0^{|q|} e^{\alpha s} (|q|-s)^{-1/2+1/(2n)} s^{-1+1/n} (1+s^2)^{-1} ds
\right \} \\
\le (C_0 + 2^{3-1/n} C_1) \alpha^{-1/(2n)} \equiv C_3 \alpha^{-1/(2n)},
\end{multline}
where
$$
C_0 = n 2^{1/2-1/n} \left [ \sup_{\gamma > 0} \gamma^{1/(2n)} e^{-\gamma}
+ 4 \sup_{\gamma > 0 } \gamma^{2+1/(2n)} e^{-\gamma} \right ].
$$
From (\ref{N1}) and the definition of $\| \cdot \|^{(\alpha)}$, it follows
that
$$
\| \mathcal{N} [{\hat U} ] \|^{(\alpha)} \le C_2 C_3 \nu^{-1/2}
\alpha^{-1/(2n)} \left [ M_0 \left ( \| {\hat U} \|^{(\alpha)} \right )^2
+ 2 \| {\hat v}_0 \|_{l^{1}} \| {\hat U} \|^{(\alpha)} \right ] + \| {\hat
U}^{(0)} \|^{(\alpha)}.
$$
Inequality (\ref{3.50}) follows similarly.
\end{proof}
\begin{Lemma}\label{inteqn}
The integral equation (\ref{IntUeqn}) has a unique solution in:
(i) the ball of radius $2 \| {\hat U}^{(0)} \|_1^{\alpha; \phi}$ in
$\mathcal{A}_1^{\alpha; \phi}$, if $\alpha$ is large enough so that
\begin{equation}\label{ensure2}
C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr) \alpha^{-1/(2n)} \left
( 4 \| {\hat v}_0 \|_{l^{1}} + 4 \| {\hat U}^{(0)} \|_1^{\alpha; \phi}
\right ) < 1.
\end{equation}
Here $C_2$ is the same as in Lemma \ref{lemC2} and depends on $\delta$ and
$n$ for $n \ge 2$. For $n=1$ we have $\phi=0$.
(ii) the ball of radius $2 \| {\hat U}^{(0)} \|^{(\alpha)}$ in
$\mathcal{A}^{(\alpha)}$ if $\alpha$ is large enough so that
\begin{equation}\label{ensure3}
C_2 C_3 \nu^{-1/2} {\alpha}^{-1/(2n)} \left ( 4 \| {\hat v}_0 \|_{l^{1}} +
4 M_0 \| {\hat U}^{(0)} \|^{(\alpha)} \right ) < 1,
\end{equation}
where $C_2$ (defined in Lemma \ref{lemC2}) and $C_3$ (defined in
(\ref{C3def})) depend on $\delta$ and $n$ for $n \ge 2$.
\end{Lemma}
\begin{proof}
The estimates in Lemma \ref{NUnormbound} imply that $\mathcal{N}$ maps a
ball of size $ 2 \| {\hat U}^{(0)} \|_1^{\alpha; \phi} $ in
$\mathcal{A}_1^{\alpha; \phi}$ back to itself and that $\mathcal{N}$ is
contractive in that ball when $\alpha$ satisfies (\ref{ensure2}). In
$\mathcal{A}^{(\alpha)}$, the estimates of Lemma \ref{NUnormbound} imply
that $\mathcal{N}$ maps a ball of size $ 2 \| {\hat U}^{(0)} \|^{(\alpha)}
$ to itself and that $\mathcal{N}$ is contractive in that ball when
$\alpha$ satisfies (\ref{ensure3}).
\end{proof}
\begin{Remark}
If $\alpha$ satisfies both (\ref{ensure2}) and (\ref{ensure3}), then it
follows from Lemma \ref{connection} and the uniqueness of the classical
solution of (\ref{nseq}) that the solutions ${\hat U}$ in
$\mathcal{A}^{\alpha; \phi}_1 $ and $\mathcal{A}^{(\alpha)} $ are one and
the same.
\end{Remark}
\begin{Lemma}\label{lemqder}
The $q$-derivatives of ${\hat U} (k, q)$ in $\mathcal{A}^{(\alpha)}$ for
$q > 0$ are estimated by:
\begin{multline}\label{Uderbound}
\left \| \frac{\partial^m}{\partial q^m} {\hat U} (\cdot, q) \right
\|_{l^{1}} \le C_m \| {\hat v}_1 \|_{l^1} \frac{q^{-1+1/n}
\omega^{-m}}{1+q^2} e^{\alpha q +\omega \alpha}, \\
{\rm where}~\omega = q/2~{\rm for}~q \le 2,\ \omega = 1~{\rm for}~q > 2.
\end{multline}
\end{Lemma}
\begin{proof}
For $q \le 2$, we use Cauchy's integral formula on a circle of radius
$q/2$ around $q$ and Lemma \ref{lemU0} to bound ${\hat U} $ for $|q| > 0$,
$\arg q \in \left [ -(n-1) \frac{\pi}{2} + \delta, (n-1) \frac{\pi}{2} -
\delta \right ]$ (we may pick for instance $\delta = \frac{\pi}{4}$ to
obtain specific values of constants here). For $q > 2$, the argument is
similar, now on a circle of radius 1.
\end{proof}
In the following we need bounds on $\| k {\hat U} \|^{(\alpha)}$. We
rewrite (\ref{IntUeqn}) using the divergence-free condition (note that $k
{\hat U}$ is a tensor of rank 2) as
\begin{multline}\label{IntUeqn1}
k {\hat U} (k, q) = -i k \int_0^q \mathcal{G} (q, q'; k) P_k \left \{
{\hat U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} [ k_j {\hat U} ] + {\hat v}_{0,j} {\hat *} [ k_j {\hat U} ]
\right \} (k, q') dq' + {\hat U}^{(0,1)} (k, q) \\
:= \mathcal{\tilde N} \left [ k {\hat U} \right ] \\
{\rm where }~~{\hat U}^{(0,1)} (k, q) := -i k \int_0^q \mathcal{G} (q, q';
k) P_k \left [ {\hat U}_j {\hat *} [k_j {\hat v}_0] \right ] (k, q') dq' +
k {\hat U}^{(0)} (k, q).
\end{multline}
We now think of ${\hat U}$ in (\ref{IntUeqn1}) as known; then
$\mathcal{\tilde N}$ becomes linear in $k {\hat U}$.
\begin{Lemma}\label{lemkU}
If $|k|^3 {\hat v}_0 \in l^1 $ and $\alpha$ is large enough so that
(\ref{ensure3}) is satisfied, then
$$
\| |k| {\hat U} \|^{(\alpha)} \le 4 c_1 \left ( \nu \| |k|^3 {\hat v}_0
\|_{l^{1}} + 2 \| |k| {\hat v}_0 \|_{l^1}^2 + \| |k| {\hat f} \|_{l^1}
\right ) + \| |k| {\hat v}_0 \|_{l^1}.
$$
\end{Lemma}
\begin{proof}
{F}rom (\ref{IntUeqn1}), we obtain
\begin{multline*}
\| |k| {\hat U} \|^{(\alpha)} = \| \mathcal{\tilde N} [ |k| {\hat U} ]
\|^{(\alpha)} \\
\le C_2 C_3 \nu^{-1/2} \alpha^{-1/(2n)} \left \{ M_0 \| {\hat U}
\|^{(\alpha)} \| |k| {\hat U} \|^{(\alpha)} + \| {\hat v}_0 \|_{l^{1}} \|
|k| {\hat U} \|^{(\alpha)} \right \} + \| {\hat U}^{(0,1)} \|^{(\alpha)}.
\end{multline*}
Lemma \ref{inteqn}, which applies when $\alpha$ satisfies (\ref{ensure3}),
implies that $\| {\hat U} \|^{(\alpha)} \le 2 \| {\hat U}^{(0)}
\|^{(\alpha)}$ and thus
\begin{multline*}
\| |k| {\hat U} \|^{(\alpha)} \le C_2 C_3 \nu^{-1/2} \alpha^{-1/(2n)} \|
|k| {\hat U} \|^{(\alpha)} \left \{ 2 M_0 \| {\hat U}^{(0)} \|^{(\alpha)}
+ \| {\hat v}_0 \|_{l^{1}} \right \} + \| {\hat U}^{(0,1)} \|^{(\alpha)}
\\
\le \frac{1}{2} \| |k| {\hat U} \|^{(\alpha)} + \| {\hat U}^{(0,1)}
\|^{(\alpha)}.
\end{multline*}
Thus,
\begin{multline*}
\| |k| {\hat U} \|^{(\alpha)} \\
\le 2 \| {\hat U}^{(0,1)} \|^{(\alpha)} \le 2 \| |k| {\hat U}^{(0)}
\|^{(\alpha) } + 4 M_0 C_2 C_3 \nu^{-1/2} \alpha^{-1/(2n)} \| |k| {\hat
v}_0 \|_{l^1} \| {\hat U}^{(0)} \|^{(\alpha)}.
\end{multline*}
The lemma follows from (\ref{ensure3}) and the bounds on ${\hat U}^{(0)}$
given in Lemma \ref{lemU0}.
\end{proof}
\begin{Proposition}\label{propthm01}
Assume ${\hat f} (0) = 0 = {\hat v}_0 (0)$ and define $\bigl\|
\frac{\hat f}{|k|^2} \bigr\|_{l^1} = \sum_{k \in \mathbb{Z}^3 \setminus
\{0\}} \frac{ |{\hat f} (k) |}{|k|^2} $. If for $n \ge 2$, $\alpha$
satisfies the condition:
\begin{equation}\label{ensure4}
C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr) \alpha^{-1/(2n)}
\biggl\{ 4 \| {\hat v}_0 \|_{l^{1}} + 4 C_G \biggl[ \| {\hat v}_0 \|_{l^1}
\Bigl( 1 + \frac{2}{\nu} \|{\hat v}_0 \|_{l^1} \Bigr) + \frac{1}{\nu}
\biggl\| \frac{{\hat f}}{|k|^2} \biggr\|_{l^1} \biggr] \biggr\} < 1,
\end{equation}
with constants $C_2$ and $C_G$ defined in Lemmas \ref{lemC2} and
\ref{lemU0}, then the integral equation (\ref{IntUeqn}) has a unique
solution in a ball of size $2 \| {\hat U}^{(0)} \|_1^{\alpha; \phi}$ in
$\mathcal{A}_1^{\alpha; \phi}$. If in addition $ |k|^2 {\hat v}_0 \in l^1
$, $n \ge 1$, and $\alpha=\alpha_1$ is such that
\begin{equation}\label{ensure5}
C_2 \nu^{-1/2} \Gamma \biggl( \frac{1}{2n} \biggr) \alpha_1^{-1/(2n)}
\biggl\{ 4 \| {\hat v}_0 \|_{l^{1}} + 4 c_1 \Gamma \biggl( \frac{1}{n}
\biggr) \alpha_1^{-1/n} \| {\hat v}_1 \|_{l^1} \biggr\} < 1,
\end{equation}
where
\begin{equation}\label{4}
{\hat v}_1 (k) = \left ( -\nu |k|^2 {\hat v}_0 - i k_j P_k \left [ {\hat
v}_{0,j}{\hat*}{\hat v}_0 \right ] \right ) + {\hat f} (k)
\end{equation}
with $c_1$ defined in Lemma \ref{lemU0}, then the integral equation
(\ref{IntUeqn}) has a unique solution in a ball of size $2 \| {\hat
U}^{(0)} \|_1^{\alpha_1; \phi}$ in $\mathcal{A}_1^{\alpha_1; \phi}$.
\end{Proposition}
\begin{proof}
The proof follows from Lemma \ref{lemU0} since (\ref{ensure4}) and
(\ref{ensure5}) imply (\ref{ensure2}), and thus Lemma \ref{inteqn}
applies.
\end{proof}
\noindent{\bf Proof of Theorem \ref{Thm01}}
Proposition \ref{propthm01} gives a unique solution to (\ref{IntUeqn}) in
some small ball in the Banach space $\mathcal{A}_1^{\alpha; \phi}$ for
sufficiently large $\alpha$. {F}rom Lemma \ref{connection}, we see that
${\hat U}$ generates via (\ref{intro.1.1}) a solution ${\hat v}$ to
(\ref{nseq}) for $t \in \left [ 0, \alpha^{-1/n} \right )$. Classical
arguments (presented for completeness in Lemma \ref{instsmooth} in the
Appendix), show that $|k|^2 {\hat v} (\cdot, t) \in l^1$ and hence
$\mathcal{F}^{-1} \left [ {\hat v} (\cdot, t) \right ] (x)$ is a smooth
solution to (\ref{nseq0}) for $t \in \left (0, \alpha^{-1/n} \right )$.
Analyticity in $t$ for $\mathrm{Re} \frac{1}{t^n} > \alpha$ follows from the
Laplace representation. For optimal analyticity region in $t$, we choose
$n=1$.
It is well known that (\ref{nseq0}) has locally a unique classical
solution \cite{Temam}, \cite{Doering}, \cite{ConstFoias}. Thus, given
${\hat v}_0, {\hat f} \in l^1$, all solutions obtained via the integral
equation coincide. Furthermore, ${\hat v} (k, t) - {\hat v}_0$ is
inverse-Laplace transformable in $1/t^n$ and the inverse Laplace transform
satisfies (\ref{IntUeqn}). Therefore, no restriction on the size of ball
in spaces $\mathcal{A}^{\alpha; \phi}_1$, $\mathcal{A}^{(\alpha)}$ is
necessary for uniqueness of the solution of (\ref{IntUeqn}).
\begin{Remark}{
\rm The arguments in the proof of Theorem~\ref{Thm01} show that $\| {\hat
v} (\cdot, t) \|_{l^1} < \infty$ over an interval of time implies that the
solution is classical. This is not a new result. Standard Fourier
arguments show that, in this case, we have $ \| v (\cdot, t)
\|_{{L}^\infty} < \infty$, {\em i.e.} one of the Prodi-Serrin criteria for
existence of classical solutions \cite{Prodi}, \cite{Serrin} is
satisfied.}
\end{Remark}
\section{Error bounds in a Galerkin approximation involving $[-N, N]^3$ Fourier modes}
\begin{Definition}
We define the operator $\mathcal{N}^{(N)} $ (associated to $\mathcal{N}$)
by
\begin{multline}
\mathcal{N}^{(N)} \left [ {\hat U} \right ] (k, q) \\
= -ik_j \int_0^q \mathcal{G} (q, q'; k) \mathcal{P}_N P_k \left [ {\hat
U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U} + {\hat v}_{0,j} {\hat *} {\hat U} + {\hat U}_j {\hat *}
{\hat v}_0 \right ] (k, q') dq' + \mathcal{P}_N {\hat U}^{(0)} (k, q),
\end{multline}
where $\mathcal{P}_N$, the Galerkin projection to $[-N, N]^3$ Fourier modes, is given by
$$
\left [ \mathcal{P}_N {\hat U} \right ] (k, q) = {\hat U} (k, q) ~~{\rm
for}~~k \in [-N, N]^3~~,~~~\left [\mathcal{P}_N {\hat U}\right] (k,
q)=~~0~~ {\rm otherwise}.
$$
\end{Definition}
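In a concrete implementation, $\mathcal{P}_N$ simply zeroes all Fourier
coefficients outside $[-N, N]^3$. A minimal sketch follows (the array
layout is our own convention, not prescribed by the text):
\begin{verbatim}
import numpy as np

def project_N(U, N):
    # U[..., k1+K, k2+K, k3+K] stores mode (k1,k2,k3), |k_i| <= K, with K >= N
    K = (U.shape[-1] - 1) // 2
    mask = np.zeros(U.shape[-3:], dtype=bool)
    mask[K - N:K + N + 1, K - N:K + N + 1, K - N:K + N + 1] = True
    return np.where(mask, U, 0)
\end{verbatim}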
\begin{Lemma}\label{lemNUN}
The integral equation
$$
{\hat U}^{(N)} = \mathcal{N}^{(N)} \left [ {\hat U}^{(N)} \right ]
$$
has a unique solution in $ \mathcal{A}^{\alpha}_1$\footnote{Recall this
means $\mathcal{A}^{\alpha;\phi}_1$ with $\phi=0$} as well as in
$\mathcal{A}^{(\alpha)}$, if $\alpha$ satisfies the conditions in Theorem
\ref{Thm01}.
\end{Lemma}
\begin{proof}
The proof is very similar to that of Theorem \ref{Thm01} part 1, noting
that the Galerkin projection $\mathcal{P}_N$ does not increase $l^1$
norms and $\mathcal{N}^{(N)}$ and $\mathcal{N}$ have similar properties.
\end{proof}
\begin{Lemma}\label{lemTEN}
Assume that $\alpha$ is large enough so that
\begin{equation}\label{ensure3.2}
C_2 C_3 \nu^{-1/2} {\alpha}^{-1/(2n)} \left ( 4 \| {\hat v}_0 \|_{l^{1}} +
4 M_0 \| {\hat U}^{(0)} \|^{(\alpha)} \right ) \leq \frac{1}{2},
\end{equation}
and that $|k|^3 {\hat v}_0,\ |k| {\hat f} \in l^1$. Define the Galerkin
truncation error:
\begin{multline}\label{eqTEN}
T_{E, N} = \mathcal{P}_N {\hat U} - \mathcal{N}^{(N)} \left [
\mathcal{P}_N {\hat U} \right ] = \mathcal{P}_N \mathcal{N} \left [ {\hat
U} \right ] - \mathcal{N}^{(N)} \left [ \mathcal{P}_N {\hat U} \right ] \\
= -i k_j \int_0^q \mathcal{G} (q, q'; k) \mathcal{P}_N P_k \Bigl[ {\hat
v}_{0,j} {\hat *} (I-\mathcal{P}_N) {\hat U} + (I-\mathcal{P}_N) {\hat
U}_j {\hat *} {\hat v}_0 \\
+ (I-\mathcal{P}_N) {\hat U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \mathcal{P}_N {\hat U} + \mathcal{P}_N
{\hat U}_j \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} (I-\mathcal{P}_N) {\hat U} + (I-\mathcal{P}_N ) {\hat U}_j
\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} (I-\mathcal{P}_N) {\hat U} \Bigr] (k, q') dq'.
\end{multline}
Then,
\begin{equation*}
\| {\hat U} - {\hat U}^{(N)} \|^{(\alpha)} \le \| (I-\mathcal{P}_N ) {\hat
U} \|^{(\alpha)} + 2 \| T_{E,N} \|^{(\alpha)},
\end{equation*}
where
\begin{multline*}
\| (I-\mathcal{P}_N) {\hat U} \|^{(\alpha)} + 2 \| T_{E,N} \|^{(\alpha)}
\\
\le \frac{1}{N} \left [ 2 c_1 \left ( \nu \| |k|^3 {\hat v}_0 \|_{l^1} + 2
\| |k| {\hat v}_0 \|^2_{l^1} + \| |k| {\hat f} \|_{l^1} \right ) + \| |k|
{\hat v}_0 \|_{l^1} \right ] \\
\times \left \{ 1 + 4 c \| {\hat v}_0 \|_{l^1} + 12 c \left ( \nu \| |k|^2
{\hat v}_0 \|_{l^1} + 2 \| |k| {\hat v}_0 \|_{l^1} \| {\hat v}_0 \|_{l^1}
+ \| {\hat f} \|_{l^1} \right ) \right \}.
\end{multline*}
\end{Lemma}
\begin{proof}
Clearly,
$$
\| {\hat U} - {\hat U}^{(N)} \|^{(\alpha)} \le \| (I-\mathcal{P}_N) {\hat
U} \|^{(\alpha)} + \| \mathcal{P}_N {\hat U} - {\hat U}^{(N)}
\|^{(\alpha)}.
$$
By (\ref{ensure3.2}), (\ref{eqTEN}) and contractivity of
$\mathcal{N}^{(N)}$,
\begin{multline*}
\| \mathcal{P}_N {\hat U} - {\hat U}^{(N)} \|^{(\alpha)} \le \|
\mathcal{N}^{(N)} [ \mathcal{P}_N {\hat U} ] - \mathcal{N}^{(N)} [ {\hat
U}^{(N)} ] \|^{(\alpha)} + \| T_{E,N} \|^{(\alpha)} \\
\le \frac{1}{2} \| \mathcal{P}_N {\hat U} - {\hat U}^{(N)} \|^{(\alpha)} +
\| T_{E,N} \|^{(\alpha)},
\end{multline*}
so
$$
\| {\hat U} - {\hat U}^{(N)} \|^{(\alpha)} \le \| (I-\mathcal{P}_N) {\hat
U} \|^{(\alpha)} + 2 \| T_{E,N} \|^{(\alpha)}.
$$
Now estimates similar to (\ref{NUnormsup}) imply that
\begin{multline*}
\| T_{E,N} \|^{(\alpha)} \le c \| (I-\mathcal{P}_N) {\hat U} \|^{(\alpha)}
\left [ 2 \| {\hat v}_0 \|_{l^1} + 2 \| \mathcal{P}_N {\hat U}
\|^{(\alpha)} + \| (I - \mathcal{P}_N ) {\hat U} \|^{(\alpha)} \right ] \\
\le c \| (I - \mathcal{P}_N ) {\hat U} \|^{(\alpha)} \left [ 2 \| {\hat
v}_0 \|_{l^1} + 6 \| {\hat U}^{(0)} \|^{(\alpha)} \right ],
\end{multline*}
and Lemma \ref{lemkU} implies that
\begin{multline*}
\| (I - \mathcal{P}_N ) {\hat U} \|^{(\alpha)} \le \frac{1}{N} \| k {\hat
U} \|^{(\alpha)} \\
\le \frac{1}{N} \left [ 2 c_1 \left ( \nu \| |k|^3 {\hat v}_0 \|_{l^{1}} +
2 \| |k| {\hat v}_0 \|_{l^1}^2 + \| |k| {\hat f} \|_{l^1} \right ) + \|
|k| {\hat v}_0 \|_{l^1} \right ].
\end{multline*}
Hence the lemma follows.
\end{proof}
\section{The exponential rate $\alpha$ and the singularities of $v$}
We have already established that at most subexponential growth of $\|
{\hat U} (\cdot, q) \|_{l^1} $ implies global existence of a classical
solution to (\ref{nseq0}).
We now look for a converse: if (\ref{nseq0}) has a global solution, is it
true that ${\hat U} (\cdot, q)$ is always subexponential in $q$?
The answer is no. For $n=1$, any complex singularity $t_s$ in the
right-half complex $t$-plane of $v(x, t)$ produces exponential growth of
$\hat{U}$ with rate $\mathrm{Re}(1/t_s)$ (oscillatory with a frequency
$\mathrm{Im}(1/t_s)$).
However, if $f = 0$, we will see that for any {\em given global classical}
solution of (\ref{nseq0}), there is a $c>0$ so that for any $t_s$ we have
$|\arg t_s|>c$. This means that for sufficiently large $n$, the function
$v(x,\tau^{-1/n})$ has no singularity in the right-half $\tau$ plane. Then
the inverse Laplace transform
$$
U(x, q) = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i \infty} \left \{ v (x,
\tau^{-1/n} ) - v_0 (x) \right \} e^{q \tau} d\tau
$$
can be shown to decay for $q$ near $\mathbb{R}^+$.
We now seek to find conditions for which there are no singularities of $v
(x, \tau^{-1/n})$ in $\left \{ \tau: \mathrm{Re}\, \tau \ge 0, ~\tau \not\in
\mathbb{R}^+ \cup \{0\} \right \}$.
\begin{Lemma}{(Special case of \cite{FoiasTem})}\label{LL1}
If $f=0$ and $v (\cdot, t_0) \in {H}^1 \left( \mathbb{T}^3 [0, 2\pi]
\right)$, then $v(x,t)$ is analytic in $x$ and $t$ in the domain $ |\mathrm{Im} ~x_j |
< c \nu |t-t_0| $, $ 0 < |t-t_0| < C $ for $\arg (t-t_0) \in \left [
-\frac{\pi}{4}, \frac{\pi}{4} \right ]$, where $c$ and $C$ are positive
constants ($C$ depends on $\| v_0 \|_{{H}^1}$ and $\nu$, and is bounded away
from 0 when $\| v_0 \|_{{H}^1}$ is bounded).
\end{Lemma}
See page 71 of \cite{Foias}.
\begin{Lemma}\label{LL2}
(i) Assume $k {\hat v}_0 , {\hat f} \in l^1$ and $\alpha$ is large enough
so that (\ref{ensure2}) holds for $n=1$. The classical solution of
(\ref{nseq0}) has no singularity in $\mathrm{Re} \frac{1}{t} > \alpha$, $x \in
\mathbb{T}^3$.
(ii) Furthermore, for $f=0$ (no forcing), no singularity can exist for
$\arg (t - T_{c,a}) \in (-{\tilde \delta}, {\tilde \delta})$ for any $0 <
{\tilde \delta} < \frac{\pi}{2}$ and any $x \in \mathbb{T}^3$. ($T_{c,a}$
is estimated in terms of $\| v_0 \|_{H^1},\ \nu$, and ${\tilde \delta}$ in
Theorem \ref{Testimate} in the Appendix using standard arguments.)
\end{Lemma}
\begin{proof}
(i) The assumption implies $v_0 \in {H}^1 (\mathbb{T}^3)$. Since it is
well known (see for instance \cite{Foias}, \cite{Temam},
\cite{ConstFoias}, \cite{Doering}) that a classical solution to
(\ref{nseq0}) is unique, it follows that this solution equals the one
given in Theorem~\ref{Thm01} in the form (\ref{intro.1}). {F}rom standard
properties of Laplace transforms this solution is analytic for $\mathrm{Re}
\frac{1}{t} > \alpha$, where $\alpha$ is given in Theorem \ref{Thm01}.
(ii) We know that under these assumptions $\| v (\cdot, t)\|_{H^1}\to 0$
as $t\to \infty$. There is then a critical time $T_{c,a}$ so that standard
contraction mapping arguments show that $v (\cdot, t)$ is analytic for $t
- T_{c,a} \in {\tilde S}_{\tilde \delta}$ as seen in Theorem
\ref{Testimate} in the Appendix.
\end{proof}
\begin{Corollary}\label{Cr1}
If $f = 0$, for any $v_0$ there exists a $c>0$ so that any singularity
$t_s$ of the solution $v$ of (\ref{nseq0}) is either a positive real time
singularity, or else $|\arg t_s|>c$.
\end{Corollary}
\begin{proof}
If there exists a classical solution on $\mathbb{R}^+$ then $\| v(\cdot, t)
\|_{H^1}$ is uniformly bounded and by the proof of Lemma~\ref{LL2} (ii)
there is a $T_{c,a}$ (as given in Theorem \ref{Testimate} in the Appendix)
such that $v(\cdot,t)$ is analytic for $\arg (t-T_{c,a}) \in \left [ -
\frac{\pi}{4}, \frac{\pi}{4} \right ]$. Let now $M_1 = \max_{t \in
[0,T_{c,a}+\epsilon]} \| v(\cdot, t) \|_{H^1}$. Then by Lemma \ref{LL1},
for any $t'\in [0,T_{c,a}+\epsilon]$ there exists a $c_2=c_2 (M_1)$ such
that $v$ is analytic in the region $|t-t'|<c_2,\ |\arg(t-t')|\le\pi/4$.
Thus $v$ is analytic in (see Fig.\ref{fig.ns.analytic})
$$
\Bigl\{ t: |t-t'|<c_2, |\arg(t-t')|\le\frac{\pi}{4}, 0 \le t' \le
T_{c,a}+\epsilon \Bigr\} \bigcup\, \Bigl\{ t: |\arg (t -
T_{c,a})|\le\frac{\pi}{4} \Bigr\}.
$$
\begin{figure}[h]
\centering
\psfrag{Tca}{\tiny $T_{c,a}$}
\psfrag{c2}{\tiny $c_2$}
\includegraphics[scale=0.5]{ns.analytic.eps}
\caption{The region of analyticity of $v$.}
\label{fig.ns.analytic}
\end{figure}%
Thus, if $t_s$ is a singular point of $v$, then $\tan |\arg(t_s)|>c$ where
$$
c = \frac{c_2/\sqrt{2}}{T_{c,a}+c_2/\sqrt{2}} =
\frac{c_2}{\sqrt{2}\,T_{c,a}+c_2}.
$$
\end{proof}
\noindent{\bf Proof of Theorem \ref{Thm02}}
By definition,
$$
\hat{U}(k,q) = \frac{1}{2\pi i} \int_{C} \hat{u}(k,\tau^{-1/n})
e^{q\tau}\,d\tau = \frac{1}{2\pi i} \int_{C} \Bigl[ \hat{v}(k,\tau^{-1/n})
- \hat{v}_0(k) \Bigr] e^{q\tau}\,d\tau,
$$
where the Bromwich contour $C$ lies to the right of all singularities of
$\hat{u}(k,\tau^{-1/n})$ in the complex $\tau$-plane. By
Corollary~\ref{Cr1}, $\hat{u}(k,t)$ has no singularities in the sector
$$
S_{t,\phi} := \Bigl\{ t: |\arg t\,| \le \phi := \tan^{-1} c \Bigr\},
$$
so $\hat{u}(k,\tau^{-1/n})$ has no singularities in the sector
$$
S_{\tau,\phi} := \Bigl\{ \tau: |\arg \tau| < n\phi \Bigr\}.
$$
Clearly, if $n\phi \in (\frac{\pi}{2},\pi)$, then $\hat{u}(k,\tau^{-1/n})$
is analytic in a sector of width between $\pi$ and $2\pi$, and in
particular the Bromwich contour $C$ can be chosen to be the imaginary
axis. Since $\hat{u}(k,\tau^{-1/n})$ decays suitably at $\tau = \infty$,
namely
\begin{multline*}
\hat{u}(k,t) = \hat{v}(k,t) - \hat{v}_0(k) = O(t)\qquad \mbox{as } t \to
0,\qquad \mbox{which means that} \\
\hat{u}(k,\tau^{-1/n}) = O(\tau^{-1/n})\qquad \mbox{as } \tau \to \infty,
\end{multline*}
Jordan's lemma applies and $C$ can be deformed to the edges of
the sector $S_{\tau,\phi}$, i.e.
\begin{multline*}
\hat{U}(k,q) = \frac{1}{2\pi i} \biggl\{ \int_{\infty e^{-i n\phi}}^{0} +
\int_{0}^{\infty e^{i n\phi}} \biggr\} \Bigl[ \hat{v}(k,\tau^{-1/n}) -
\hat{v}_0(k) \Bigr] e^{q\tau}\,d\tau \\
= \frac{1}{2\pi i} \biggl\{ \int_{\infty e^{-i n\phi}}^{0} +
\int_{0}^{\infty e^{i n\phi}} \biggr\} \hat{v}(k,\tau^{-1/n})
e^{q\tau}\,d\tau
\end{multline*}
(note that the integral of $\hat{v}_0(k)$ over the contour is
0). Further, as shown in Theorem \ref{Testimate} in the Appendix, there is
a sector ${\tilde S}_{\tilde \delta}$ in the right-half $t$-plane (with
$\phi < {\tilde \delta} < \frac{\pi}{2}$) so that
$$
\|\hat{v}(\cdot,t)\|_{l^1} \le C e^{-\frac{3}{4} \nu \mathrm{Re}\,t}\qquad
\mbox{as $t \to \infty$ in ${\tilde S}_{\tilde \delta}$}.
$$
So
$$
\|\hat{v}(\cdot,\tau^{-1/n})\|_{l^1} \le C e^{-\frac{3}{4} \nu
\mathrm{Re}(\tau^{-1/n})}\qquad \mbox{as $\tau \to 0$ along $e^{\pm i
n\phi}(0,\infty)$},
$$
and the boundedness of $\|\hat{v}(\cdot,\tau^{-1/n})\|_{l^1}$ for large
$|\tau|$ implies that
$$
\|\hat{v}(\cdot,\tau^{-1/n})\|_{l^1} \le C e^{-\frac{3}{4} \nu
\mathrm{Re}(\tau^{-1/n})}\qquad \mbox{for all $\tau \in e^{\pm i
n\phi}(0,\infty)$}.
$$
It follows that
\begin{multline*}
\|\hat{U}(\cdot,q)\|_{l^1} \le C \int_{0}^{\infty} e^{-\frac{3}{4} \nu
\mathrm{Re}(\tau^{-1/n}) + q \mathrm{Re}\,\tau}\,d|\tau| \le C \int_{0}^{\infty}
e^{-\frac{3}{4} \nu r^{-1/n} \cos\phi + q r\cos n\phi}\,dr,
\end{multline*}
and a standard application of the Laplace method (with the change of
variable $r = q^{-n/(n+1)} s$) shows that
$$
\|\hat{U}(\cdot,q)\|_{l^1} \le C_1 e^{-C_2 q^{1/(n+1)}}\qquad \mbox{as } q
\to +\infty.
$$
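The stretched-exponential scaling is easy to check numerically. The sketch
below (our own sanity check, with illustrative values of $\nu$ and $\phi$;
it is not part of the proof) evaluates the bounding integral for $n = 2$
and confirms that its logarithm decreases linearly in $q^{1/(n+1)}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

n, nu, phi = 2, 0.1, 1.0     # illustrative; cos(n*phi) < 0 is required
def bound(q):
    f = lambda r: np.exp(-0.75 * nu * r ** (-1.0 / n) * np.cos(phi)
                         + q * r * np.cos(n * phi))
    return quad(f, 0, np.inf)[0]

qs = np.array([50.0, 100.0, 200.0, 400.0])
vals = np.array([bound(q) for q in qs])
# roughly constant negative slopes: bound ~ C1 * exp(-C2 * q^(1/(n+1)))
print(np.diff(np.log(vals)) / np.diff(qs ** (1.0 / (n + 1))))
\end{verbatim}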
\section{Estimates of $\alpha$ based on the solution of (\ref{IntUeqn}) in
$[0, q_0]$}\label{S6}
Define $\hat{U}^{(a)}$ as in (\ref{eq:eq456}) and $\hat{U}^{(b)} =
\hat{U}-\hat{U}^{(a)}$. Using (\ref{IntUeqn}), it is convenient to write
an integral equation for $\hat{U}^{(b)}$ for $q > q_0$:
\begin{equation}\label{e.1}
\hat{U}^{(b)}(k,q) = -ik_{j} \int_{q_{0}}^{q} \mathcal{G}(q,q';k)
\hat{H}_{j}^{(b)}(k,q')\,dq' + \hat{U}^{(s)}(k,q),
\end{equation}
where
\begin{equation}\label{e.2}
\hat{U}^{(s)}(k,q) = -ik_{j} \int_{0}^{\min\{q,2q_{0}\}}
\mathcal{G}(q,q';k) \hat{H}_{j}^{(a)}(k,q')\,dq' + \hat{U}^{(0)}(k,q),
\end{equation}
and
\begin{gather}
\label{e.3} \hat{H}_{j}^{(a)}(k,q) = P_{k} \Bigl[ \hat{v}_{0,j} \hat{*}
\hat{U}^{(a)} + \hat{U}_{j}^{(a)} \hat{*} \hat{v}_{0} + \hat{U}_{j}^{(a)}
\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \hat{U}^{(a)} \Bigr](k,q), \\
\label{e.4} \hat{H}_{j}^{(b)}(k,q) = P_{k} \Bigl[ \hat{v}_{0,j} \hat{*}
\hat{U}^{(b)} + \hat{U}_{j}^{(b)} \hat{*} \hat{v}_{0} + \hat{U}_{j}^{(a)}
\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \hat{U}^{(b)} + \hat{U}_{j}^{(b)} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \hat{U}^{(a)} +
\hat{U}_{j}^{(b)} \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} \hat{U}^{(b)} \Bigr](k,q).
\end{gather}
Also, we define ${\hat R}^{(b)} (k, q) = -i k_j {\hat H}_j^{(b)} (k,q)$.
Note that the support of $\hat{H}^{(a)}$ is $[0, 2q_0]$.
Thus, if $\hat{U}^{(a)}$ is known (computationally or otherwise), then
$\hat{H}^{(a)}$ and therefore $\hat{U}^{(s)} $ are known for all $q$.
\noindent{\bf Proof of Theorem \ref{Thm03}:}
Note that
$$
|\hat{R}^{(b)}(k,q)| \leq 2|k| \Bigl[ 2|\hat{v}_{0}| \hat{*}
|\hat{U}^{(b)}| + 2|\hat{U}^{(a)}| \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} |\hat{U}^{(b)}| + |\hat{U}^{(b)}|
\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} |\hat{U}^{(b)}| \Bigr](k,q),
$$
where $|\cdot|$ is the usual Euclidean norm in $\mathbb{R}^{3}$. By Lemma
\ref{L2.4} we can define a best constant
\begin{equation}\label{sizeG}
B_{0}(k) = \sup_{q_{0} \leq q' \leq q} \Big\{ (q-q')^{1/2-1/(2n)}
|\mathcal{G}(q,q';k)| \Big\}
\end{equation}
and conclude that
\begin{multline*}
|\mathcal{G}(q,q';k) \hat{R}^{(b)}(k,q')| \leq 2|k|B_{0}(k)
(q-q')^{1/(2n)-1/2} \Bigl[ 2|\hat{v}_{0}| \hat{*} |\hat{U}^{(b)}| +
2|\hat{U}^{(a)}| \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} |\hat{U}^{(b)}| \\
+ |\hat{U}^{(b)}| \text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} |\hat{U}^{(b)}| \Bigr](k,q').
\end{multline*}
It follows from Lemma \ref{lem0.1} that
$$
\|\mathcal{G}(q,q';\cdot) \hat{R}^{(b)}(\cdot,q')\|_{l^1} \leq \psi(q-q')
\Bigl[ B_{1} u + B_{2}*u + B_{3} u*u \Bigr](q'),
$$
where $\psi(q) = q^{1/(2n)-1/2}$ and
\begin{multline*}
u(q) = \|\hat{U}^{(b)}(\cdot,q)\|_{l^1},\qquad B_{1} = 4\sup_{k \in
\mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\} \|\hat{v}_{0}\|_{l^1}, \\
B_{2}(q) = 4\sup_{k \in \mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\}
\|\hat{U}^{(a)}(\cdot,q)\|_{l^1},\qquad B_{3} = 2\sup_{k \in
\mathbb{Z}^{3}} \big\{ |k|B_{0}(k) \big\}.
\end{multline*}
Taking the $l^1$-norm in $k$ on both sides of \eqref{e.1}, multiplying the
equation by $e^{-\alpha q}$ for some $\alpha \geq \alpha_{0} \geq 0$ and
integrating over the interval $[q_{0},M]$, we obtain
\begin{multline*}
L_{q_{0},M} \leq \int_{q_{0}}^{M} e^{-\alpha q} \int_{q_{0}}^{q}
\psi(q-q') \Bigl[ B_{1} u + B_{2}*u + B_{3} u*u \Bigr](q')\,dq'\,dq +
\int_{q_{0}}^{M} e^{-\alpha q} u^{(s)}(q)\,dq \\
\leq \int_{q_{0}}^{M} \Bigl[ B_{1} u + B_{2}*u + B_{3} u*u \Bigr](q')
\int_{q'}^{M} e^{-\alpha q} \psi(q-q') \,dq\,dq' + \int_{q_{0}}^{M}
e^{-\alpha q} u^{(s)}(q)\,dq \\
\leq \int_{0}^{\infty} e^{-\alpha q} \psi(q) \,dq \int_{q_{0}}^{M}
e^{-\alpha q'} \Bigl[ B_{1} u + B_{2}*u + B_{3} u*u \Bigr](q')\,dq' +
\int_{q_{0}}^{M} e^{-\alpha q} u^{(s)}(q)\,dq,
\end{multline*}
where
\begin{equation}\label{defus}
L_{q_{0},M} := \int_{q_{0}}^{M} e^{-\alpha q} u(q)\,dq,\qquad u^{(s)}(q) =
\|\hat{U}^{(s)}(\cdot,q)\|_{l^1}.
\end{equation}
If we use the fact that
\begin{multline*}
\int_{q_{0}}^{M} e^{-\alpha q'} u*v(q')\,dq' = \int_{q_{0}}^{M}
e^{-\alpha q'} \int_{q_{0}}^{q'} u(s) v(q'-s)\,ds\,dq' \\
= \int_{q_{0}}^{M} u(s) \int_{s}^{M} e^{-\alpha q'} v(q'-s)\,dq'\,ds
\end{multline*}
for any function $v$ on $[0,M]$ (recall that $u = 0$ on $[0,q_{0}]$), then
\begin{multline}\label{ext3}
L_{q_{0},M} \leq \int_{0}^{\infty} e^{-\alpha q} \psi(q) \,dq \Bigg\{
\biggl[ B_{1} + \int_{0}^{q_{0}} e^{-\alpha q'} B_{2}(q')\,dq' \biggr]
L_{q_{0},M} + B_{3} L_{q_{0},M}^{2} \Bigg\} \\ + b\alpha^{-1/2-1/(2n)}
\leq \alpha^{-1/2-1/(2n)} \Bigl[ \epsilon_{1} L_{q_{0},M} + \epsilon
L_{q_{0},M}^{2} \Bigr] + b\alpha^{-1/2-1/(2n)},
\end{multline}
where
\begin{gather}
\label{ext3.1} b = \alpha^{1/2+1/(2n)} \int_{q_{0}}^{\infty} e^{-\alpha q}
u^{(s)}(q)\,dq, \\
\label{ext3.2} \epsilon_{1} = \Gamma \biggl( \frac{1}{2}+\frac{1}{2n}
\biggr) \biggl[ B_{1} + \int_{0}^{q_{0}} e^{-\alpha_{0} q'} B_{2}(q')\,dq'
\biggr],\qquad \epsilon = \Gamma \biggl( \frac{1}{2}+\frac{1}{2n}
\biggr)\, B_{3}.
\end{gather}
For
$$
\epsilon_{1} < \alpha^{1/2+1/(2n)}\qquad \mbox{and}\qquad \bigl(
\epsilon_{1}-\alpha^{1/2+1/(2n)} \bigr)^{2} > 4\epsilon b,
$$
this leads to an estimate for $L_{q_{0},M}$ independent of $M$:
\begin{equation}\label{ext11}
L_{q_{0},M} \leq \frac{1}{2\epsilon} \biggl[ \alpha^{1/2+1/(2n)} -
\epsilon_{1} - \sqrt{\bigl( \epsilon_{1}-\alpha^{1/2+1/(2n)} \bigr)^{2} -
4\epsilon b} \biggr].
\end{equation}
So $\|\hat{U}(\cdot,q)\|_{l^1} \in {L}^{1}(e^{-\alpha q}\,dq)$ and the
solution to (\ref{nseq0}) exists for $t \in (0,\alpha^{-1/n})$, if
$\alpha$ is sufficiently large so that
$$
\alpha \geq \alpha_{0},\qquad \alpha^{1/2+1/(2n)} > \epsilon_{1} +
2\sqrt{\epsilon b}.
$$
Alternatively, we may choose $\alpha_0 = \alpha$, in which case $\alpha$
has to be large enough to satisfy:
$$
\alpha^{1/2+1/(2n)} > \epsilon_{1} + 2\sqrt{\epsilon b}.
$$
This completes the proof of Theorem \ref{Thm03}.
\subsection{Further estimates on $\epsilon_1$, $b$ and $\epsilon$}
\label{furtherest}
By Lemma \ref{L2.4},
\begin{equation}\label{ext9.0}
c_{g} = \sup_{\substack{k \in \mathbb{Z}^3 \\ q_0 \leq q' \leq q}} \Big\{
|k|\, q^{1/2} (q-q')^{1/2-1/(2n)} |\mathcal{G} (q,q';k)| \Big\} < \infty,
\end{equation}
and by (\ref{defus}), (\ref{e.2}), Lemma \ref{L2.4}, Lemma \ref{lemU0} and
the compact support of $\hat{H}^{(a)}$,
\begin{equation}\label{ext9.1}
c_{s} = \sup_{q_{0} \leq q} \Big\{ q^{1/2-1/(2n)} u^{(s)}(q) \Big\} <
\infty.
\end{equation}
It follows that
\begin{equation}\label{ext9.2}
b \leq c_{s} \Gamma \biggl( \frac{1}{2}+\frac{1}{2n}, \alpha_{0} q_{0}
\biggr),\qquad \epsilon \leq 2 \Gamma \biggl( \frac{1}{2}+\frac{1}{2n}
\biggr) c_{g} q_{0}^{-1/2},
\end{equation}
where
$$
\Gamma(a,x) = \int_{x}^{\infty} t^{a-1} e^{-t}\,dt
$$
is the incomplete Gamma function, and condition (\ref{ext9}) is satisfied if
\begin{equation}\label{ext9.4}
\alpha > \alpha_{0},\qquad \alpha^{1/2+1/(2n)} > \epsilon_{1} + 2 \biggl[
2\Gamma \biggl( \frac{1}{2}+\frac{1}{2n} \biggr) \Gamma \biggl(
\frac{1}{2}+\frac{1}{2n}, \alpha_{0} q_{0} \biggr) c_{g} c_{s}
\biggr]^{1/2} q_{0}^{-1/4}.
\end{equation}
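When evaluating these bounds in practice, $\Gamma(a,x)$ is available
through the regularized upper incomplete gamma function of standard
libraries. A short sketch follows (the values of $c_g$ and $c_s$ below are
placeholders; in applications they come from (\ref{ext9.0}) and
(\ref{ext9.1})):
\begin{verbatim}
from scipy.special import gamma, gammaincc

def Gamma_upper(a, x):
    # Gamma(a, x) = gammaincc(a, x) * Gamma(a)
    return gammaincc(a, x) * gamma(a)

n, q0, alpha0 = 2, 10.0, 30.0
cg, cs = 1.0, 1.0            # placeholders, not computed values
a = 0.5 + 1.0 / (2 * n)
b_bound = cs * Gamma_upper(a, alpha0 * q0)          # bound on b
eps_bound = 2 * gamma(a) * cg * q0 ** (-0.5)        # bound on epsilon
print(b_bound, eps_bound)
\end{verbatim}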
If ${\hat U}^{(a)} (\cdot, q)$, and therefore ${\hat H}^{(a)}$, decays on a
large subinterval $[q_{d}, q_{0}]$ (cf. the exponential decay in Theorem
\ref{Thm02}), then the estimate for $c_s$ is small. Also, $\epsilon_1$
in (\ref{ext3.2}) is small for large $q_0$, ultimately because $B_0(k)$ in
(\ref{sizeG}) is small. It is then clear that $\alpha$ in (\ref{ext9.4})
can be chosen small as well.
\section{Control of numerical errors in $[0, q_0]$ in a discretized
scheme}
The errors in a numerical discretization scheme for 3-D Navier-Stokes
cannot be readily controlled since they depend on derivatives of the
classical solution, which are not known to exist beyond some initial time
interval. In contrast to physical-space approaches, the $q$-derivatives of
the solution ${\hat U}$ to (\ref{IntUeqn}) are {\it a priori} bounded on
any interval $[q_m, q_0] \subset \mathbb{R}^+$ with $q_m > 0$, by Lemma
\ref{lemqder}. Further, if $q_m$ is chosen appropriately small, then the
small-$t$ expansion of the NS solution provides a representation of ${\hat
U}$, and therefore of ${\hat H}_j$, on $[0, q_m]$ to any desired accuracy.
Calculating the numerical solution to (\ref{IntUeqn}) with rigorous error
control is relevant in more than one way.
In \S\ref{S6}, we have shown that control of ${\hat U}$ on a finite
$q$-interval provides sharper estimates on the exponent $\alpha$, and
therefore an improved classical existence time estimate for $v$. If this
estimate exceeds $T_c$, the time beyond which Leray's weak solution
becomes classical again (see the Appendix for a bound on $T_c$) then, of
course, global existence of $v$ follows.
Furthermore, a numerical scheme to calculate (\ref{IntUeqn}), which is
analyzed in this section, is interesting in its own right. It provides,
through the Laplace transform, an alternative calculation method for
Navier-Stokes dynamics. Evidently, this method is not numerically
efficient for determining $v (x, t)$ at a fixed time $t$; nonetheless it
may be advantageous in finding long time averages involving $v$ and
$\nabla v$ needed for turbulent flow. These can sometimes be expressed as
functionals of ${\hat U}$.
\begin{Definition}\label{defalphadelta}
We introduce a discrete operator $\mathcal{N}_\delta^{(N)}$ by
\begin{multline}\label{discret2}
\left \{ \mathcal{N}_{\delta}^{(N)} [ \hat{V} ] \right \} \left (k,
m\delta \right ) = - i k_j \sum_{m'=m_s}^{m-1} w^{(1)} (m, m'; k, \delta)
\mathcal{P}_N {\hat H}_{j,\delta}^{(N)} (k, m'\delta)
\\
+ {\hat U}^{(0,N)} (k, m \delta) - i k_j w^{(1,1)} (m, k, \delta)
\mathcal{P}_N {\hat H}_{j, \delta}^{(N)} (k, m\delta),
\end{multline}
where $k \in [-N, N]^3 \setminus \{0 \}$, $\mathbb{N} \ni m \ge m_s$, $q_m = m_s
\delta$ ($q_m$ is independent of $\delta$) and
\begin{equation}\label{eqU0N}
{\hat U}^{(0,N)} (k, m \delta) = {\hat U}^{(0)} (k, m\delta) - i k_j
\int_0^{q_m} \mathcal{G} (m\delta, q'; k) \mathcal{P}_N \hat{H}_j^{(N)}
(k, q') dq'
\end{equation}
is considered known, while for $m' \ge m_s$,
\begin{multline}\label{discret2.H}
{\hat H}_{j,\delta}^{(N)} (k, m'\delta) = \sum_{k' \in [-N, N]^3 \setminus
\{ 0, k \}} P_k \left [ {\hat v}_{0,j} (k')\hat{V}(k-k', m'\delta) + {\hat
v}_{0} (k') \hat{V}_j (k-k', m'\delta) \right ] \\
+ \sum_{\substack{k' \in [-N, N]^3 \setminus \{ 0, k \} \\
m^{\prime\prime}=m_s,\dots,m'-m_s}} P_k \left [ {\hat V}_{j} (k',
m^{\prime\prime} \delta) {\hat V} (k-k', (m'-m^{\prime\prime}) \delta)
\right ] w^{(2)} (m', m^{\prime\prime}; k, \delta ) \\
+ 2\sum_{l=0}^{m_s-1} w^{(2,l)} (m', k, \delta) P_k \left [ {\hat E}^{(l)}
(k) {\hat *} \hat{V} (k, (m'-l) \delta) \right ].
\end{multline}
\end{Definition}
\z In (\ref{discret2.H}), ${\hat E}^{(l)} (k)$ involves ${\hat v}_0
(k)$; this representation encapsulates the singular contribution of ${\hat
U} (\cdot, q')$ and ${\hat U} (\cdot, q-q')$ when $q'$ and $q-q'$,
respectively, are small. The precise form of these functions and of the weights
$w^{(1)} (m, m'; k, \delta)$, $w^{(1,1)} (m, k, \delta)$, $w^{(2)} (m',
m''; k, \delta)$ and $w^{(2,l)} (m', k, \delta)$ generally depend on the
particular discretization scheme employed to calculate
$\mathcal{N}_\delta^{(N)} [ {\hat U} ]$. Also, note that in
(\ref{discret2.H}), the nonlinear terms in the summation are absent when
$m_s \le m' < 2 m_s$. To simplify the discussion, we do not specify the
weights, but only require that they ensure consistency, namely that in the
formal limit $\delta \to 0$, the discrete operator ${\mathcal
N}_\delta^{(N)}$ becomes $\mathcal{N}^{(N)}$. Based on behavior of the
kernel $\mathcal{G}$, consistency implies that
\begin{multline}\label{wbounds}
|k| |w^{(1)} (m, m'; k, \delta) | \le \frac{C_1 \delta^{1/(2n)}}{m^{1/2}
(m-m')^{1/2-1/(2n)}}, \\
|k| |w^{(1,1)} | \le C_{1,1} \delta^{1/2+1/(2n)} (m \delta)^{-1/2}~,~
|w^{(2)} | \le C_2 \delta ~,~ |w^{(2,l)}| \le C_3 \delta^{1/n}
(l+1)^{-1+1/n}.
\end{multline}
Consider the solution
\begin{equation}\label{discret3}
{\hat U}^{(N)}_\delta (k, m\delta) = \left \{ \mathcal{N}_{\delta}^{(N)}
\left [\hat {U}^{(N)}_\delta \right ] \right \} \left (k, m\delta \right )
~~{\rm for} ~~m_s \le m , ~~k \in [-N, N]^3,
\end{equation}
where, as noted before, $q_m = m_s \delta$ is small enough so that the
known asymptotic series of $\hat{U}$ at $q=0$ can be used to accurately
calculate ${\hat U}^{(N)}$ and ${\hat H}_j^{(N)}$ for $q < q_m$, and thus
${\hat U}^{(0, N)}$ and ${\hat E}^{(l)}$ in (\ref{eqU0N}) and
(\ref{discret2.H}).
\begin{Definition}\label{defconsis}
We let
$$
T_{E, \delta}^{(N)} = \mathcal{N}^{(N)} {\hat U}^{(N)} -
\mathcal{N}^{(N)}_\delta {\hat U}^{(N)}
$$
be the truncation error due to $q$-discretization for a fixed number of
Fourier modes, $[-N, N]^3$. The discretization is consistent (in the
numerical analysis sense) if $T_{E, \delta}^{(N)}$ scales with some
positive power of $\delta$ and involves a finite number of derivatives of
${\hat U}$.
\end{Definition}
\begin{Definition}
We define $\| \cdot \|^{(\alpha,\delta)}$, the discrete analog of $ \|
\cdot \|^{(\alpha)}$, as follows:
$$
\| {\hat f} \|^{(\alpha,\delta)} = \sup_{m \ge m_s} m^{1-1/n}
\delta^{1-1/n} (1+m^2 \delta^2) e^{-\alpha m\delta} \|{\hat f} (\cdot,
m\delta) \|_{l^{1}}.
$$
\end{Definition}
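In code this norm is a single weighted reduction. A minimal sketch (ours),
where \texttt{l1\_vals[i]} stands for
$\|{\hat f} (\cdot, (m_s+i)\delta) \|_{l^1}$:
\begin{verbatim}
import numpy as np

def discrete_norm(l1_vals, m_s, delta, alpha, n):
    m = np.arange(m_s, m_s + len(l1_vals))
    w = (m * delta) ** (1.0 - 1.0 / n) * (1 + (m * delta) ** 2) \
        * np.exp(-alpha * m * delta)
    return np.max(w * np.asarray(l1_vals))
\end{verbatim}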
\begin{Remark}{\label{remweight}
\rm More specific bounds on the truncation error depend on the specific
numerical scheme. It is however standard for numerical quadratures to
choose the weights $w^{(j)}$ so that $q$-integration is exact on $q \in
[q_m, q_0]$ for a polynomial of some order $l$. For a general ${\hat V}
(\cdot, q)$, the interpolation errors involve $l+1$ $q$-derivatives. Lemma
\ref{lemqder} guarantees that the derivatives of $\hat{U}$ are
exponentially bounded for large $q$. It follows that $\|
T^{(N)}_{E,\delta} \|^{(\alpha, \delta)} \to 0$ as $\delta \to 0$.}
\end{Remark}
\begin{Remark}
In the rest of this section, with slight abuse of notation, we write $*$
for the discrete summation convolution in $q$-space ({\it i.e.} sum over
$m'$) and $\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}}$ for the discrete double, Fourier-Laplace, convolution.
Since the rest of the paper deals with discrete systems, this should not
cause confusion.
\end{Remark}
\begin{Lemma}\label{discHj}
For $m \ge m_s$, ${\hat H}_{j,\delta}^{(N)}(\cdot,m\delta)$ satisfies the
following estimate:
\begin{multline}
\| {\hat H}_{j,\delta}^{(N)} (\cdot, m\delta ) \|_{l^{1}} \\
\le C \frac{ e^{\alpha m \delta}}{ (1+m^2 \delta^2) m^{1-1/n}
\delta^{1-1/n}} \|\hat{U}_{\delta}^{(N)}\|^{(\alpha,\delta)} \left \{
\|\hat{U}_{\delta}^{(N)}\|^{(\alpha,\delta)} + \| {\hat v}_0 \|_{l^1} +
C_E \right \}
\end{multline}
\z for some known constant $C_E$.
\end{Lemma}
\begin{proof}
Using the properties of discrete convolution we see that
\begin{multline*}
\| P_k \left \{ {\hat v}_{0,j} {\hat *} {\hat U}_\delta ^{(N)} + {\hat
U}_{\delta, j}^{(N)} {\hat *} {\hat v}_{0} + {\hat U}_{\delta, j}^{(N)}
\text{\raisebox{-2pt}{$\stackrel{_{\displaystyle *}}{*}$}} {\hat U}_{\delta}^{(N)} \right \} \|_{l^{1}} \\
\le C \biggl \{ \| {\hat v}_0 \|_{l^1} \|{\hat U}_{\delta}^{(N)} (\cdot,
m\delta ) \|_{l^1} + \delta \sum_{m'=m_s}^{m-m_s} \|{\hat
U}_{\delta}^{(N)} (\cdot, m'\delta ) \|_{l^1} \|{\hat U}_{\delta}^{(N)}
(\cdot, (m-m')\delta ) \|_{l^1} \\
+ \delta^{1/n} \sum_{m'=0}^{m_s-1} (m'+1)^{-1+1/n} \|{\hat
E}^{(m')}\|_{l^1} \|{\hat U}_{\delta}^{(N)} (\cdot, (m-m')\delta )
\|_{l^1} \biggr \} \\
\le C \frac{e^{\alpha m \delta} }{m^{1-1/n} \delta^{1-1/n} (1+m^2
\delta^2)} \left \{ \left ( C_{E}+ \| {\hat v}_0 \|_{l^1} \right ) \|{\hat
U}_{\delta}^{(N)} \|^{(\alpha, \delta)} + \left ( \| {\hat
U}_{\delta}^{(N)} \|^{(\alpha,\delta)} \right )^2 \right \},
\end{multline*}
where, by a standard integral estimate,
\begin{multline*}
\delta^{1-1/n} m^{1-1/n} (1+m^2 \delta^2) \sum_{m'=1}^{m-1} \frac{\delta}{
[\delta m' \delta (m-m')]^{1-1/n} (1+\delta^2 {m'}^2) (1+ \delta^2
{(m-m')}^2 ) } < C, \\
\delta^{1-1/n} m^{1-1/n} (1+m^2 \delta^2) \sum_{m'=0}^{m_s-1}
\frac{\delta}{ [\delta (m'+1) \delta (m-m')]^{1-1/n} (1+\delta^2 (m-m')^2)
} < C,
\end{multline*}
for $C$ independent of $m$, $m'$ and $\delta$. In the above estimates we
have used
$$
\|{\hat E}^{(m')}\|_{l^1} \le C_E e^{\alpha_0 m'\delta}\qquad (\alpha_0
\le \alpha)
$$
which can be obtained from the definition of ${\hat E}^{(m')}$.
\end{proof}
\z Define ${\hat H}_{j,\delta}^{(N,1)}$ and ${\hat H}_{j,\delta}^{(N,2)} $
by substituting ${\hat U}^{(N)}_\delta = {\hat U}^{(N,1)}_\delta $ and
${\hat U}_\delta^{(N,2)}$, respectively, in ${\hat H}_{j,\delta}^{(N)}$.
\begin{Lemma}\label{discHjdiff}
For $m \ge m_s$, we have
\begin{multline*}
\| {\hat H}_{j,\delta}^{(N,1)} (\cdot, m\delta ) - {\hat H}_{j,
\delta}^{(N,2)} (\cdot, m\delta) \|_{l^{1}} \\
\le C \frac{ e^{\alpha m \delta}}{ (1+m^2 \delta^2) m^{1-1/n}
\delta^{1-1/n}} \| {\hat U}^{(N,1)}_\delta - {\hat U}^{(N,2)}_\delta
\|^{(\alpha,\delta)} \\
\times \left \{ \| {\hat U}^{(N,1)}_\delta \|^{(\alpha,\delta)} + \| {\hat
U}^{(N,2)}_\delta \|^{(\alpha,\delta)} + \| {\hat v}_0 \|_{l^1} + C_E
\right \}.
\end{multline*}
\end{Lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{discHj}.
\end{proof}
\begin{Lemma}\label{lemTENd}
(i) For $C_4$ defined in (\ref{C4def}), assume $\alpha$ is large enough so
that
\begin{equation}\label{ensure6}
2 C_4 \alpha^{-1/2-1/(2n)} \left ( \left [C_E + \| {\hat v}_0 \|_{l^{1}}
\right ] + 2 \| {\hat U}^{(0,N)} \|^{(\alpha, \delta)} \right ) < 1.
\end{equation}
Then, for any $\alpha^{-1} \ge \delta_0 \ge \delta > 0$,
$\mathcal{N}_{\delta}^{(N)}$ is contractive and there is a unique solution
to ${\hat U}^{(N)}_\delta = \mathcal{N}_{\delta}^{(N)} \left [ {\hat
U}^{(N)}_\delta \right ]$, which satisfies the bounds
$$
\| {\hat U}^{(N)}_\delta (\cdot, m \delta) \|_{l^1} \le \frac{2 e^{\alpha
m \delta}}{m^{1-1/n} \delta^{1-1/n} (1+m^2 \delta^2) } \| {\hat U}^{(0,N)}
\|^{(\alpha, \delta)}.
$$
(ii) If $\alpha$ is such that
\begin{equation}\label{ensure6.2}
2 C_4 \alpha^{-1/2-1/(2n)} \left ( \left [C_E + \| {\hat v}_0 \|_{l^{1}}
\right ] + 2 \| {\hat U}^{(0,N)} \|^{(\alpha, \delta)} \right ) \leq
\frac{1}{2},
\end{equation}
then
$$
\| {\hat U}^{(N)}_\delta (\cdot, m \delta) - {\hat U}^{(N)} (\cdot,
m\delta) \|_{l^1} \le \frac{2 e^{\alpha m \delta}}{m^{1-1/n}
\delta^{1-1/n} (1+m^2 \delta^2) } \| T_{E,\delta}^{(N)} \|^{(\alpha,
\delta)}.
$$
\end{Lemma}
\begin{proof}
(i) We have
\begin{multline}\label{C4def}
\| \mathcal{N}_{\delta}^{(N)} [ {\hat U}^{(N)}_\delta ] (\cdot,m\delta)
\|_{l^1} \le \| {\hat U}^{(0,N)}(\cdot,m\delta) \|_{l^1}
\\
+ C \sum_{m'=m_s}^{m-1} \frac{\delta^{1/(2n)}}{m^{1/2}
(m-m')^{1/2-1/(2n)}} \|{\hat H}_{\delta}^{(N)} (\cdot, m'\delta)\|_{l^1} +
C \delta^{1/(2n)} m^{-1/2} \|{\hat H}_{\delta}^{(N)} (\cdot,
m\delta)\|_{l^1} \\
\le \frac{ e^{\alpha m \delta}}{ (1+m^2 \delta^2) m^{1-1/n}
\delta^{1-1/n}} \biggl\{ \|{\hat U}^{(0,N)}\|^{(\alpha,\delta)} \\
+ C_4 \alpha^{-1/2-1/(2n)} \| {\hat U}^{(N)}_\delta \|^{(\alpha,\delta)}
\Bigl( \| {\hat U}^{(N)}_\delta \|^{(\alpha,\delta)} + \| {\hat v}_0
\|_{l^1} + C_E \Bigr) \biggr\},
\end{multline}
where, by a standard integral estimate,
\begin{multline*}
\delta^{1-1/n} m^{1-1/n} (1+m^2 \delta^2) \sum_{m'=m_s}^{m-1} \frac{\delta
e^{\alpha (m'-m) \delta}}{[\delta m]^{1/2} [\delta (m-m')]^{1/2-1/(2n)}
[\delta m']^{1-1/n} (1+m'^2 \delta^2)} \\
\le C \alpha^{-1/2-1/(2n)},
\end{multline*}
and
$$
\frac{\delta^{1/2+1/(2n)}}{[\delta m]^{1/2}} \le C \delta^{1/2+1/(2n)} \le
C \alpha^{-1/2-1/(2n)}.
$$
Thus ${\hat U}_{\delta}^{(N)} = \mathcal{N}_{\delta}^{(N)} \left [ {\hat
U}^{(N)}_\delta \right ]$ has a unique solution such that
$$
\| {\hat U}_{\delta}^{(N)} \|^{(\alpha,\delta)} \le 2 \| {\hat U}^{(0,N)}
\|^{(\alpha,\delta)}.
$$
Hence the first part of the lemma follows.
(ii) Under the assumption,
\begin{multline*}
\| {\hat U}^{(N)} - {\hat U}_\delta^{(N)} \|^{(\alpha, \delta)} \le \|
\mathcal{N}_\delta^{(N)} [ {\hat U}^{(N)} ] - \mathcal{N}_\delta^{(N)} [
{\hat U}_\delta^{(N)} ] \|^{(\alpha, \delta)} + \| T_{E,\delta}^{(N)}
\|^{(\alpha, \delta)} \\
\le \frac{1}{2} \| {\hat U}^{(N)} - {\hat U}_\delta^{(N)} \|^{(\alpha,
\delta)} + \| T_{E,\delta}^{(N)} \|^{(\alpha, \delta)}.
\end{multline*}
So
$$
\| {\hat U}^{(N)} - {\hat U}_\delta^{(N)} \|^{(\alpha, \delta)} \le 2 \|
T_{E,\delta}^{(N)} \|^{(\alpha, \delta)}
$$
and the second part of the lemma follows.
\end{proof}
\noindent{\bf Proof of Theorem \ref{Thm04}:}
Note that
$$
{\hat U}_\delta^{(N)} - {\hat U} = {\hat U}_\delta^{(N)} - {\hat U}^{(N)}
+ {\hat U}^{(N)} - {\hat U}.
$$
From Lemmas \ref{lemTEN} and \ref{lemTENd}, it follows that
$$
\| {\hat U}_\delta^{(N)} - {\hat U} \|^{(\alpha, \delta)} \le 2 \| T_{E,
N} \|^{(\alpha,\delta)} + 2 \| T_{E,\delta}^{(N)} \|^{(\alpha, \delta)} +
\| (I-\mathcal{P}_N) {\hat U} \|^{(\alpha,\delta)},
$$
which tends to zero as $N \to \infty$ and $\delta \to 0$, by Lemma
\ref{lemTEN} and Remark \ref{remweight}.
\section{Numerical Method}
In this section we describe a numerical scheme for calculating the
solution $\hat{U}_\delta^{(N)}$ over a fixed interval. The procedure can
be further optimized in a number of ways, such as adapting the quadrature
scheme to the features of the kernel.
\subsection{Outline of the Algorithm}
The main algorithm is summarized as follows:
\begin{verbatim}
initialization;
startup routine;
for each time step
advance the solution using second order Runge-Kutta integration;
end
estimate the error and output the results.
\end{verbatim}
\subsection{Startup Routine}
One difficulty in numerically solving \eqref{IntUeqn} is that the equation
is singular at $q = 0$. To overcome it, we first compute $\hat{u}$ for
small $t$ by solving \eqref{hatueq} using a Taylor expansion:
$$
\hat{u}(k,t) = \sum_{m=1}^{\infty} \hat{c}_{m}(k)\, t^{m},
$$
where
\begin{multline*}
\hat{c}_{1} = \hat{v}_{1}, \\
\hat{c}_{m+1} = \frac{1}{m+1} \Biggl[ -\nu |k|^{2} \hat{c}_{m} -
ik_{j}P_{k}\biggl( \hat{v}_{0,j} \hat{*} \hat{c}_{m} + \hat{c}_{m,j}
\hat{*} \hat{v}_{0} + \sum_{\ell=1}^{m-1} \hat{c}_{\ell,j} \hat{*}
\hat{c}_{m-\ell} \biggr) \Biggr],\ m \geq 1.
\end{multline*}
Then $\hat{U}$ is computed for small $q$ by
$$
\hat{U}(k,q) = \sum_{m=1}^{m_{0}} \hat{d}_{m}(k)\, q^{m/n-1},
$$
where\footnote{Note that
$$
\int_{0}^{\infty} \hat{d}_{m} q^{m/n-1} e^{-q/t^{n}}\,dq = \hat{d}_{m}
t^{m} \int_{0}^{\infty} q^{m/n-1} e^{-q}\,dq = \Gamma \biggl( \frac{m}{n}
\biggr) \cdot \hat{d}_{m} t^{m},
$$
so $\Gamma(m/n) \cdot \hat{d}_{m} = \hat{c}_{m}$.}
$$
\hat{d}_{m} = \frac{\hat{c}_{m}}{\Gamma(m/n)}.
$$
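A minimal sketch of this startup computation is given below (our own
illustration: the mode layout, the FFT-based truncated convolution and all
names are implementation choices, not part of the scheme itself). It
evaluates the recursion for $\hat{c}_m$ and then $\hat{d}_m = \hat{c}_m /
\Gamma(m/n)$:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import gamma

N, nu, n, M = 4, 0.1, 2, 8   # modes in [-N,N]^3, viscosity, power n, Taylor terms
ks = np.arange(-N, N + 1)
K = np.stack(np.meshgrid(ks, ks, ks, indexing="ij"))  # shape (3, 2N+1, 2N+1, 2N+1)
K2 = (K ** 2).sum(axis=0)
K2safe = np.where(K2 == 0, 1, K2)

def leray(w):                # P_k w = w - k (k . w) / |k|^2
    return w - K * (K * w).sum(axis=0) / K2safe

def conv(a, b):              # truncated convolution, restricted back to [-N,N]^3
    full = fftconvolve(a, b) # exact discrete convolution, wavenumbers -2N..2N
    s = slice(N, 3 * N + 1)
    return full[s, s, s]

def nonlin(a, b):            # -i k_j P_k [ a_j hat* b ], a and b of shape (3,...)
    w = np.zeros(a.shape, dtype=complex)
    for l in range(3):
        for j in range(3):
            w[l] += -1j * K[j] * conv(a[j], b[l])
    return leray(w)

def taylor_coeffs(v0hat, v1hat):   # c[m], d[m] for m = 1..M (index 0 unused)
    c = [None, v1hat]
    for m in range(1, M):
        quad_sum = sum((nonlin(c[l], c[m - l]) for l in range(1, m)),
                       np.zeros(v0hat.shape, dtype=complex))
        c.append((-nu * K2 * c[m] + nonlin(v0hat, c[m]) + nonlin(c[m], v0hat)
                  + quad_sum) / (m + 1))
    d = [None] + [c[m] / gamma(m / n) for m in range(1, M + 1)]
    return c, d
\end{verbatim}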
\subsection{Second Order Runge-Kutta Integration}
After computing the solution on $[0,q_{m}]$ for some $q_{m} > 0$ by using
Taylor expansions, we solve the integral equation \eqref{IntUeqn} on
$[q_{m},q_{0}]$ using a second-order Runge-Kutta (predictor-corrector)
method. Since this numerical scheme is preliminary and far from being
optimized, we do not include the details here.
One point worth mentioning is the evaluation of the functions $F(\mu)$ and
$G(\mu)$. As shown in earlier sections, both $F$ and $G$ are entire
functions and have power series expansions at $\mu = 0$. For small $\mu$,
these expansions converge very rapidly (super-factorially) and provide an
efficient way to evaluate $F$ and $G$. For large $\mu$, however, the
alternating nature of the expansions raises the issue of catastrophic
cancellation, and it is no longer appropriate to use them for numerical
computation. In this regime we use the asymptotic expansions of $F$ and
$G$, which we derive below.
While the complete asymptotics of $F$ and $G$ can be derived using
Laplace's method, a faster and easier way is to use the differential
equations they satisfy. For example, recall that for $n = 2$,
$$
F(\mu) = \frac{1}{2\pi i} (I_{1} - \bar{I}_{1}) = \frac{1}{\pi} \mathrm{Im} I_{1},
$$
where
\begin{displaymath}
I_{1} = i \int_{0}^{\infty} r^{-1/2} e^{-r-i\mu r^{-1/2}}\,dr.
\end{displaymath}
It is easy to check that $I_{1}$ satisfies the third-order ODE (the same
equation satisfied by $F$)
$$
\mu I_{1}''' + I_{1}'' - 2I_{1} = 0,
$$
and it has the leading order asymptotics
$$
I_{1} \sim 2\sqrt{\frac{\pi}{3}}\, e^{-z},
$$
where
$$
z = 3 \cdot 2^{-2/3} \mu^{2/3} e^{i\pi/3}.
$$
If we make the change of dependent variable
$$
I_{1} = 2\sqrt{\frac{\pi}{3}}\, e^{-z} J_{1}(z),
$$
then $J_{1}$ must have the form
$$
J_{1}(z) = 1 + \sum_{m=1}^{\infty} a_{m} z^{-m},
$$
and it solves the ODE
$$
J_{1}''' - 3J_{1}'' + \Bigl( 3 + \frac{1}{4z^{2}} \Bigr) J_{1}' -
\frac{1}{4z^{2}}\, J_{1} = 0.
$$
It follows that
$$
F(\mu) \sim \frac{2}{\sqrt{3\pi}} \mathrm{Im} \Bigg\{ e^{-z} \biggl( 1 +
\sum_{m=1}^{\infty} a_{m} z^{-m} \biggr) \Bigg\},
$$
where $a_{1},\ a_{2}$, etc. are determined by the recurrence
\begin{multline*}
a_{0} = 1,\qquad a_{1} = -\frac{1}{12}, \\
a_{m} = -\frac{1}{12m} \biggl[ \Bigl( 12m^{2} - 12m + 1 \Bigr) a_{m-1} +
\Bigl( 4m^{3} - 12m^{2} + 9m - 2 \Bigr) a_{m-2} \biggr],\qquad m \geq 2.
\end{multline*}
Similarly,
$$
G(\mu) \sim -\frac{(4\mu)^{1/3}}{\sqrt{3\pi}} \mathrm{Im} \Bigg\{ e^{-z+i\pi/6}
\biggl( 1 + \sum_{m=1}^{\infty} c_{m} z^{-m} \biggr) \Bigg\},
$$
where
\begin{multline*}
c_{0} = 1,\qquad c_{1} = \frac{5}{12},\qquad c_{2} = -\frac{35}{288}, \\
c_{m} = \frac{1}{24m} \biggl[ \Bigl( -48m^{2} + 60m - 2 \Bigr) c_{m-1} +
\Bigl( -32m^{3} + 108m^{2} - 80m + 9 \Bigr) c_{m-2} \\
+ \Bigl( -8m^{4} + 52m^{3} - 102m^{2} + 67m - 14 \Bigr) c_{m-3}
\biggr],\qquad m \geq 3.
\end{multline*}
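For reference, a direct transcription of these expansions into code is
given below (a sketch; the truncation index is left to the caller and, as
with any asymptotic series, should be chosen before the terms start
growing):
\begin{verbatim}
import numpy as np

def F_asym(mu, terms=10):    # asymptotics of F(mu), large mu > 0, n = 2
    z = 3 * 2 ** (-2.0 / 3) * mu ** (2.0 / 3) * np.exp(1j * np.pi / 3)
    a = [1.0, -1.0 / 12]
    for m in range(2, terms + 1):
        a.append(-((12 * m**2 - 12 * m + 1) * a[m - 1]
                   + (4 * m**3 - 12 * m**2 + 9 * m - 2) * a[m - 2]) / (12 * m))
    S = sum(a[m] * z ** (-m) for m in range(terms + 1))
    return 2 / np.sqrt(3 * np.pi) * np.imag(np.exp(-z) * S)

def G_asym(mu, terms=10):    # asymptotics of G(mu), large mu > 0, n = 2
    z = 3 * 2 ** (-2.0 / 3) * mu ** (2.0 / 3) * np.exp(1j * np.pi / 3)
    c = [1.0, 5.0 / 12, -35.0 / 288]
    for m in range(3, terms + 1):
        c.append(((-48 * m**2 + 60 * m - 2) * c[m - 1]
                  + (-32 * m**3 + 108 * m**2 - 80 * m + 9) * c[m - 2]
                  + (-8 * m**4 + 52 * m**3 - 102 * m**2 + 67 * m - 14) * c[m - 3])
                 / (24 * m))
    S = sum(c[m] * z ** (-m) for m in range(terms + 1))
    return -(4 * mu) ** (1.0 / 3) / np.sqrt(3 * np.pi) \
           * np.imag(np.exp(-z + 1j * np.pi / 6) * S)
\end{verbatim}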
\section{Preliminary Numerical Results}
For all computations in this section we take $n = 2$. The numerical
results and computation scheme are preliminary. The algorithm has not been
optimized for efficiency, and not all estimates have been rigorously
analyzed yet; these will be published elsewhere. Nonetheless, the
partial results show some important features of the integral equation
approach.
\subsection{Test Case}
We first tested our code with the following test function:
\begin{multline*}
\mbox{(Kida flow)}: v = (v^{(1)},v^{(2)},v^{(3)}), \\
v^{(1)}(x_{1},x_{2},x_{3},t) = \frac{\sin x_{1}}{1+t} (\cos 3x_{2} \cos
x_{3} - \cos x_{2} \cos 3x_{3}), \\
v^{(1)}(x_{1},x_{2},x_{3},t) = v^{(2)}(x_{3},x_{1},x_{2},t) =
v^{(3)}(x_{2},x_{3},x_{1},t).
\end{multline*}
The forcing $f$ corresponding to $v$ was generated with $\nu = 1$ and
equation \eqref{IntUeqn} was solved without the knowledge of $v$. The
computed solution was then compared to $v$.
For this test case, the startup routine computed the solution on $[0,q_m]
= [0,0.2]$ using $m_{0} = 8$ terms, and the Runge-Kutta solver advanced the
solution to $q_0 = 1$. We used $2N = 16$ points (i.e.\ 8 Fourier modes)
in each dimension (excluding the extra points for anti-aliasing).
We computed the solution for different step sizes $\delta$ and the errors
at $q_0$
$$
e_{\delta} = \max_{x \in \mathbb{T}^{3}} |U_{\delta}^{(N)}(x,q_0) -
U(x,q_0)|
$$
are listed in Table \ref{table.err.acc.test}. To verify that the error
decays at the expected order $O(\delta^{2})$, we also include in the table
the numerical order of convergence:
$$
\beta_{\delta} = \log_{2} \frac{e_{2\delta}}{e_{\delta}}.
$$
\begin{table}[h]
\centering \caption{Test case: errors at $q_0$.}
\label{table.err.acc.test}
\begin{tabular}{c||c|c}
\hline\hline
$\delta$ & $e_{\delta}$ & $\beta_{\delta}$ \\
\hline
$1/20$ & 1.3399e-04 & $-$ \\
$1/40$ & 3.1987e-05 & 2.07 \\
$1/80$ & 7.1462e-06 & 2.16 \\
$1/160$ & 1.3620e-06 & 2.39 \\
\hline\hline
\end{tabular}
\end{table}
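The $\beta_{\delta}$ column can be reproduced directly from the error
column, e.g.
\begin{verbatim}
import numpy as np
e = np.array([1.3399e-04, 3.1987e-05, 7.1462e-06, 1.3620e-06])
print(np.log2(e[:-1] / e[1:]))   # -> approximately [2.07, 2.16, 2.39]
\end{verbatim}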
\subsection{Kida Flow}
Now we consider the Kida flow with the initial condition
$$
v_{0}^{(1)}(x_{1},x_{2},x_{3},0) = \sin x_{1} (\cos 3x_{2} \cos x_{3} -
\cos x_{2} \cos 3x_{3}).
$$
We computed the solution for $\nu = 0.1$ with zero forcing to $q_0 = 10$
using $2N = 128$ points in each dimension, and step size $\delta = 0.05$.
The parameters for the startup procedure are the same as before: $q_m =
0.2$ and $m_{0} = 8$. To investigate the growth of the solution
$\hat{U}_{\delta}^{(N)}$ with $q$, we computed the $l^{1}$-norm
$$
\| \hat{U}_{\delta}^{(N)}(\cdot,q) \|_{l^{1}} = \sum_{k \in [-N,N]^{3}}
|\hat{U}_{\delta}^{(N)}(k,q)|
$$
and plotted $\| \hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$ vs. $q$ in
Fig.\ref{fig.kn.kida.acc.0f}. For comparison we also included in
Fig.\ref{fig.kn.kida.acc.0f} a plot of the solution to the original
(unaccelerated) equation.
\begin{figure}[h]
\centering \subfigure[]{
\psfrag{p}{\tiny $p$}
\psfrag{kn1}{\tiny $\| \hat{U}_{\delta}^{(64)}(\cdot,p) \|_{l^{1}}$}
\psfrag{Zero forcing}{\tiny Zero forcing, $\nu = 0.1$}
\includegraphics[scale=0.7]{kn.1.kida.0f.nu.1.0e-01.eps}
} \subfigure[]{
\psfrag{q}{\tiny $q$}
\psfrag{kn1}{\tiny $\| \hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$}
\psfrag{Zero forcing}{\tiny Zero forcing, $\nu = 0.1$}
\includegraphics[scale=0.7]{kn.1.kida.acc.0f.nu.1.0e-01.eps}
} \caption{For zero forcing and $\nu = 0.1$: (a). The original
(unaccelerated) equation, $\| \hat{U}_{\delta}^{(64)}(\cdot,p) \|_{l^{1}}$
vs. $p$. (b). Accelerated equation with $n = 2$, $\|
\hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$ vs. $q$.}
\label{fig.kn.kida.acc.0f}
\end{figure}
Fig.~\ref{fig.log.kn.kida.acc.0f} shows the plot of $\log \|
\hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$ vs. $q^{1/3}$. Note that $\|
\hat{U}(\cdot,q) \|_{l^{1}} \sim c_{1} e^{-c_{2} q^{1/3}}$ for large $q$,
where $c_{2} = (0.3)^{2/3} 2^{-5/3} 3 \approx 0.42$.
\begin{figure}[h]
\centering \subfigure[]{
\psfrag{q13}{\tiny $q^{1/3}$}
\psfrag{log(kn1)}{\tiny $\log \| \hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$}
\psfrag{Zero forcing}{\tiny Zero forcing, $\nu = 0.1$}
\includegraphics[scale=0.7]{log.kn.1.kida.acc.0f.nu.1.0e-01.eps}
} \subfigure[]{
\psfrag{q13}{\tiny $q^{1/3}$}
\psfrag{dlog(kn1)}{\tiny $\Delta_{-} \Bigl[ \log \| \hat{U}_{\delta}^{(64)}
(\cdot,s^{3}) \|_{l^{1}} \Bigr] / \Delta s$}
\psfrag{Zero forcing}{\tiny Zero forcing, $\nu = 0.1$}
\includegraphics[scale=0.7]{dlog.kn.1.kida.acc.0f.nu.1.0e-01.eps}
} \caption{Asymptotic behavior of $\| \hat{U}_{\delta}^{(64)}(\cdot,q)
\|_{l^{1}}$. (a). $\log \| \hat{U}_{\delta}^{(64)}(\cdot,q) \|_{l^{1}}$
vs. $q^{1/3}$. (b) $\Delta_{-} \Bigl[ \log \| \hat{U}_{\delta}^{(64)}
(\cdot,s^{3}) \|_{l^{1}} \Bigr] / \Delta s$ vs. $s$, where $s = q^{1/3}$
and $\Delta_{-}$ is the backward difference operator in $s$.}
\label{fig.log.kn.kida.acc.0f}
\end{figure}
\subsection{Longer Time Existence}
We next computed the constants in estimate \eqref{ext3}. By taking $q_{0}
= 10$ and $\alpha_{0} = 30$, we obtained
$$
b \approx 0,\qquad \epsilon \approx 1.1403,\qquad \epsilon_{1} \approx
13.6921.
$$
This implies the existence of the solution for $\alpha \geq 32.7564$,
which corresponds to an interval of existence $(0,\alpha^{-1/2}) =
(0,0.1747)$.
We compare with a classical estimate of the existence time. The formula
$$
T_{cl} = \frac{1}{c_m\| D^m v_0 \|_{L^2}}
$$
(where $c_m$ is known) was optimized over the range $m>5/2$, giving a
maximal value $T_{cl} \approx 0.01$ at $m \approx 3.2$, about 17 times
shorter than the time obtained from the integral equation.
Furthermore, considerable optimization of the code is expected to allow
numerical calculation over a much larger $q$-interval.
\section{Appendix}
\subsection{Derivation of the integral equation and of its properties}
\label{A1}
\subsubsection*{The integral equation}
We start with the Fourier transformed equation \eqref{eqn.ns.acc.fu}:
\begin{multline}\label{eqn.ns.acc.fu}
\hat{u}_{t} + \nu |k|^{2} \hat{u} = -ik_{j} P_{k}[\hat{v}_{0,j} \hat{*}
\hat{u} + \hat{u}_{j} \hat{*} \hat{v}_{0} + \hat{u}_{j} \hat{*} \hat{u}] +
\hat{v}_{1}(k) \\
=: -ik_{j} \hat{h}_{j} + \hat{v}_{1}(k) =: \hat{r} + \hat{v}_{1}(k), \\
\hat{u}(k,0) = 0,
\end{multline}
where
$$
\hat{v}_{1}(k) = \hat{f} (k) - \nu |k|^{2} \hat{v}_{0} - ik_{j}
P_{k}[\hat{v}_{0,j} \hat{*} \hat{v}_{0}].
$$
For $n > 1$, look for a solution in the form
\begin{equation}\label{eqn.ns.acc.int.fu}
\hat{u}(k,t) = \int_{0}^{\infty} \hat{U}(k,q) e^{-q/t^{n}}\,dq
\end{equation}
where
\begin{multline}\label{eqn.ns.acc.int.fr}
\hat{r}(k,t) = -ik_{j} \hat{h}_{j}(k,t) = -ik_{j} \int_{0}^{\infty}
\hat{H}_{j}(k,q) e^{-q/t^{n}}\,dq \\
=: \int_{0}^{\infty} \hat{R}(k,q) e^{-q/t^{n}}\,dq.
\end{multline}
Inversion of the left side of (\ref{eqn.ns.acc.fu}) and the change of
variable $\tau = t^{-n}$ yield
\begin{multline}\label{A10.88.1}
\hat{u}(k,t) = \int_{0}^{t} e^{-\nu |k|^{2} (t-s)} \hat{r}(k,s)\,ds +
\int_{0}^{t} e^{-\nu |k|^{2} (t-s)} \hat{v}_{1}(k)\,ds \\
= \int_{0}^{1} t e^{-\nu |k|^{2} t(1-s)} \hat{r}(k,ts)\,ds +
\frac{\hat{v}_{1}(k)}{\nu |k|^{2}} \Bigl( 1-e^{-\nu |k|^{2} t} \Bigr) \\
= \int_{0}^{1} \tau^{-1/n} e^{-\nu |k|^{2} \tau^{-1/n}(1-s)}
\int_{0}^{\infty} \hat{R}(k,q') e^{-q' s^{-n} \tau}\,dq'\,ds +
\frac{\hat{v}_{1}(k)}{\nu |k|^{2}} \Bigl( 1-e^{-\nu |k|^{2} \tau^{-1/n}}
\Bigr) \\
=: I(k,\tau) + J(k,\tau).
\end{multline}
Inverse Laplace transform (formal for now) of $I$ and $J$ yield:
\begin{multline}\label{Gdefine}
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} I(k,\tau) e^{q\tau}\,d\tau
\\
= \int_{0}^{\infty} \hat{R}(k,q') \int_{0}^{1} \bigg\{ \frac{1}{2\pi i}
\int_{c-i\infty}^{c+i\infty} \tau^{-1/n} e^{-\nu |k|^{2}
\tau^{-1/n}(1-s)+(q-q' s^{-n}) \tau}\,d\tau \bigg\} \,ds\,dq' \\
= \int_{0}^{\infty} \hat{R}(k,q') \int_{0}^{1} (q-q's^{-n})^{1/n-1}
\bigg\{ \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \zeta^{-1/n}
e^{\zeta-\mu\zeta^{-1/n}}\, d\zeta \bigg\}\,ds\,dq' \\
=: \int_0^\infty \hat{R} (k, q') \mathcal{G} (q, q'; k) dq',
\end{multline}
where
$$
\zeta = (q-q's^{-n}) \tau, \qquad \mu = \nu |k|^{2} (1-s)
(q-q's^{-n})^{1/n},
$$
while
\begin{multline}\label{J123}
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} J(k,\tau) e^{q\tau}\,d\tau =
\frac{\hat{v}_{1}(k)}{\nu |k|^{2}} \bigg\{ \frac{1}{2\pi i}
\int_{c-i\infty}^{c+i\infty} e^{q\tau} \Bigl( 1-e^{-\nu |k|^{2}
\tau^{-1/n}} \Bigr)\,d\tau \bigg\} \\
= \frac{\hat{v}_{1}(k)}{\nu |k|^{2} q} \bigg\{ \frac{1}{2\pi i}
\int_{c-i\infty}^{c+i\infty} \Bigl( e^{\tilde{\zeta}} -
e^{\tilde{\zeta}-\tilde{\mu}\tilde{\zeta}^{-1/n}} \Bigr)\,d\tilde{\zeta}
\bigg\} =: \hat{U}^{(0)}(k,q),
\end{multline}
where
$$
\tilde{\zeta} = q\tau,\qquad \tilde{\mu} = \nu |k|^{2} q^{1/n}.
$$
The Bromwich contour is homotopic to a contour $C$ starting at $\infty
e^{-i\pi}$ to the left of the origin, encircling the origin, and ending at
$\infty e^{i\pi}$, and we finally obtain the integral equation:
$$
\hat{U}(k,q) = \int_{0}^{q} \mathcal{G}(q,q';k) \Bigl[\! -\!ik_{j}
\hat{H}_{j}(k,q') \Bigr]\,dq' + \hat{U}^{(0)}(k,q),
$$
where
$$
\hat{H}_{j}(k,q) = P_{k} \biggl[ \hat{v}_{0,j} \hat{*} \hat{U} +
\hat{U}_{j} \hat{*} \hat{v}_{0} + \hat{U}_{j} \substack{* \\ *} \hat{U}
\biggr](k,q).
$$
Rescaling the integration variable, $s \to s \gamma^{1/n}$, the kernel in
(\ref{Gdefine}) becomes
\begin{multline}\label{Gdefineac}
\mathcal{G}(q,q';k) = q^{1/n-1} \gamma^{1/n} \int_{1}^{\gamma^{-1/n}}
(1-s^{-n})^{1/n-1} F(\mu)\,ds \\
= \frac{\gamma^{1/n}}{\nu^{1/2} |k| q^{1-1/(2n)}} \int_{1}^{\gamma^{-1/n}}
(1-s^{-n})^{1/(2n)-1} (1-s\gamma^{1/n})^{-1/2} \mu^{1/2} F(\mu)\,ds,
\end{multline}
where
$$
\gamma = \frac{q'}{q},\qquad \mu = \nu |k|^{2} q^{1/n} (1-s\gamma^{1/n})
(1-s^{-n})^{1/n},
$$
and
$$
F(\mu) = \frac{1}{2\pi i} \int_{C} \zeta^{-1/n} e^{\zeta-\mu
\zeta^{-1/n}}\,d\zeta.
$$
Furthermore, from (\ref{J123}) we have
\begin{equation}\label{defU}
\hat{U}^{(0)}(k,q) = \frac{\hat{v}_{1}(k)}{\nu |k|^{2} q} G(\nu |k|^{2}
q^{1/n}),
\end{equation}
where
$$
G(\mu) = -\frac{1}{2\pi i} \int_{C} e^{\zeta-\mu
\zeta^{-1/n}}\,d\zeta,\qquad G(0) = 0.
$$
\subsubsection*{Power series representations of $F$ and $G$}
To show that $F$ is entire, we start with the definition
\begin{equation}\label{defF}
F(\mu) = \frac{1}{2\pi i} \int_{C} \zeta^{-1/n} e^{\zeta} e^{-\mu
\zeta^{-1/n}}\,d\zeta
\end{equation}
and expand $e^{-\mu\zeta^{-1/n}}$ into power series of $\zeta^{-1/n}$ to
obtain
$$
F(\mu) = \frac{1}{2\pi i} \sum_{j=0}^{\infty} \frac{(-1)^{j}}{j!} \mu^{j}
\int_{C} \zeta^{-(j+1)/n} e^{\zeta} \,d\zeta,
$$
where the interchange of order of summation and integration is justified
by the absolute convergence of the series. From the integral
representation of the Gamma function (see \cite{Abramowitz}) we get
$$
\int_{C} \zeta^{-(j+1)/n} e^{\zeta}\,d\zeta = 2i \sin \biggl(
\frac{j+1}{n}\, \pi \biggr)\, \Gamma \biggl( 1-\frac{j+1}{n} \biggr) =
\frac{2\pi i}{\Gamma((j+1)/n)},
$$
(where in the last step we have used the identity ${\sin(\pi z)}
\Gamma(1-z)\Gamma(z) = {\pi}$) and thus $F$ has the power series
representation
$$
F(\mu) = \sum_{j=0}^{\infty} F_{j} \mu^{j},\qquad \mbox{where } F_{j} =
\frac{(-1)^{j}}{j!\, \Gamma((j+1)/n)}.
$$
Similarly, $G$ is an entire function and has the power series
representation
$$
G(\mu) = \sum_{j=1}^{\infty} G_{j} \mu^{j},\qquad \mbox{where } G_{j} =
-\frac{(-1)^{j}}{j!\, \Gamma(j/n)}.
$$
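For instance, for the case $n = 2$ used in our computations, these series
specialize to
$$
F(\mu) = \frac{1}{\sqrt{\pi}} - \mu + \frac{\mu^{2}}{\sqrt{\pi}} -
\frac{\mu^{3}}{6} + \cdots,\qquad
G(\mu) = \frac{\mu}{\sqrt{\pi}} - \frac{\mu^{2}}{2} +
\frac{\mu^{3}}{3\sqrt{\pi}} - \cdots,
$$
consistent with the relation $G' = F$ used below.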
\subsubsection{The Asymptotics of $F$ and $G$ for $n \ge 2$ and large
$\mu>0$}\label{A12} Elementary contour deformation and estimates at $0$
show that
$$
F(\mu) = \frac{1}{2\pi i} (I_{1} - \bar{I}_{1}) = \frac{1}{\pi} \mathrm{Im} I_{1},
$$
where
\begin{multline}\label{defI1}
I_1 (\mu) = \int_{0}^{\infty} r^{-1/n} e^{i\pi/n} \exp\Bigl[ -r-\mu
r^{-1/n} e^{i\pi/n} \Bigr]\,dr \\
= n \mu^{1-2/(n+1)} e^{i\pi/n} \int_{0}^{\infty} s^{n-2} \exp\Bigl[
-\mu^{n/(n+1)} (s^{n} + e^{i\pi/n} s^{-1}) \Bigr]\,ds \\
= n \mu^{1-2/(n+1)} e^{2i\pi/(n+1)} \int_{0}^{\infty} x^{n-2} \exp\Bigl[
-w \varphi(x) \Bigr]\,dx,
\end{multline}
where
$$
w = \mu^{n/(n+1)} e^{i\pi/(n+1)},\qquad \varphi(x) = x^{n} + \frac{1}{x}.
$$
Similarly,
$$
\bar{I}_1 = n \mu^{1-2/(n+1)} e^{-2i\pi/(n+1)} \int_{0}^{\infty} x^{n-2}
\exp\Bigl[ -{\bar{w}} \varphi(x) \Bigr]\,dx.
$$
We now use the Laplace method to obtain the complete asymptotic expansion
of $I_{1}$ for large $w$ with $\arg w \in \left( -\frac{\pi}{2},
\frac{\pi}{2} \right)$ or $\arg \mu \in \left( -\frac{(n+3)\pi}{2n},
\frac{(n-1)\pi}{2n} \right)$. We then show that $I_1$ solves a linear
differential equation. It will follow, from standard results on
asymptotics in ODEs, that the expansion is valid in a wider complex
sector. First, it is easily seen that the only solution to the equation
$$
\varphi'(x) = nx^{n-1} - \frac{1}{x^{2}} = 0
$$
on the positive real axis is $x=x_{0} = n^{-1/(n+1)}$. If we introduce a
new variable
\begin{equation}\label{eqn.xi.x}
\xi = \varphi(x),
\end{equation}
then clearly $\xi$ decreases monotonically as $x$ increases from $0^{+}$ to $x =
x_{0}$, where it attains the minimum value
$$
\xi_{0} = \varphi(x_{0}) = n^{-n/(n+1)} (n+1).
$$
We denote this branch of $\varphi^{-1}$ by $x_{1}(\xi)$. Further, as $x$
increases beyond $x = x_{0}$ up to $\infty$, $\xi$ increases from
$\xi_{0}$ to $\infty$. We denote this branch of $\varphi^{-1}$ by
$x_{2}(\xi)$. It follows that
$$
I_{1} = n \mu^{1-2/(n+1)} e^{2i\pi/(n+1)} \Biggl[ -\int_{\xi_{0}}^{\infty}
\frac{x_{1}^{n-2}(\xi)\, e^{-w\xi}}{nx_{1}^{n-1}(\xi) -
x_{1}^{-2}(\xi)}\,d\xi + \int_{\xi_{0}}^{\infty} \frac{x_{2}^{n-2}(\xi)\,
e^{-w\xi}}{nx_{2}^{n-1}(\xi) - x_{2}^{-2}(\xi)}\,d\xi \Biggr].
$$
To find an expansion of $x_{i}(\xi),\ i=1,2$, we note that
$$
\xi - \xi_{0} = \varphi(x) - \varphi(x_{0}) = \sum_{j=2}^{\infty}
\varphi^{(j)}(x_{0}) \frac{(x-x_{0})^{j}}{j!},
$$
and thus
$$
(\xi-\xi_{0}) - \sum_{j=3}^{\infty} \varphi^{(j)}(x_{0})
\frac{(x-x_{0})^{j}}{j!} = \frac{1}{2} \varphi''(x_{0}) (x-x_{0})^{2},
$$
or
\begin{equation}\label{eqnx}
x_{\pm} = x_{0} \pm \sqrt{\frac{2}{\varphi''(x_{0})} \biggl[ (\xi-\xi_{0})
- \sum_{j=3}^{\infty} \varphi^{(j)}(x_{0}) \frac{(x-x_{0})^{j}}{j!}
\biggr]},
\end{equation}
where $x_{-} = x_{1}$ and $x_{+} = x_{2}$. By (\ref{eqnx}) we have
$$
\frac{x_{i}^{n-2}(\xi)}{nx_{i}^{n-1}(\xi)-x_{i}^{-2}(\xi)} =
\sum_{j=-1}^{\infty} b_{j}^{[i]} (\xi-\xi_{0})^{j/2}.
$$
Watson's lemma then implies that
\begin{multline}\label{asympI1}
I_{1} \sim n \mu^{1-2/(n+1)} e^{2i\pi/(n+1)} e^{-\xi_{0} w}
\sum_{j=-1}^{\infty} \int_{0}^{\infty} \Bigl( b_{j}^{[2]} - b_{j}^{[1]}
\Bigr) \eta^{j/2} e^{-w\eta}\,d\eta\qquad (\eta = \xi-\xi_{0}) \\
\sim n \mu^{1-2/(n+1)} e^{2i\pi/(n+1)} e^{-\xi_{0} w} \sum_{j=-1}^{\infty}
\Bigl( b_{j}^{[2]} - b_{j}^{[1]} \Bigr)\, \Gamma \biggl( 1+\frac{j}{2}
\biggr)\, w^{-1-j/2}.
\end{multline}
We see that
$$
b_{j}^{[2]} - b_{j}^{[1]} =
\begin{cases}
0 & j \mbox{ even} \\
2b_{j}^{[2]} & j \mbox{ odd}
\end{cases}.
$$
Similar analysis for $\bar{I}_1$ gives
\begin{multline}\label{asymphatI1}
\bar{I}_{1} \sim n \mu^{1-2/(n+1)} e^{-2i\pi/(n+1)} e^{-\xi_{0} {\bar w}}
\sum_{j=-1}^{\infty} \int_{0}^{\infty} \Bigl( b_{j}^{[2]} - b_{j}^{[1]}
\Bigr) \eta^{j/2} e^{-\bar{w} \eta}\,d\eta \\
\sim n \mu^{1-2/(n+1)} e^{-2i\pi/(n+1)} e^{-\xi_{0} {\bar w}}
\sum_{j=-1}^{\infty} \Bigl( b_{j}^{[2]} - b_{j}^{[1]} \Bigr)\, \Gamma
\biggl( 1+\frac{j}{2} \biggr)\, {\bar w}^{-1-j/2}.
\end{multline}
With $\xi_{0} w$ replaced by $z$, we finally obtain for $\mu$ large and
positive
\begin{multline}\label{eqFasymp}
F(\mu) = \frac{1}{\pi} \mathrm{Im} I_{1} \\
\sim \frac{n}{\pi} \mathrm{Im} \Bigg\{ \mu^{(n-2)/[2(n+1)]} e^{3i\pi/[2(n+1)]}
e^{-z} \sum_{m=0}^{\infty} 2b_{2m-1}^{[2]} \Gamma \biggl( m+\frac{1}{2}
\biggr)\, \xi_{0}^{m} z^{-m} \Bigg\},
\end{multline}
where
\begin{equation}\label{eqn.xi0.z}
\xi_{0} = n^{-n/(n+1)} (n+1),\qquad z = \xi_{0} \mu^{n/(n+1)}
e^{i\pi/(n+1)}.
\end{equation}
A similar analysis shows that
$$
G(\mu) \sim -\frac{n}{\pi} \mathrm{Im} \Bigg\{ \mu^{n/[2(n+1)]} e^{i\pi/[2(n+1)]}
e^{-z} \sum_{m=0}^{\infty} 2d_{2m-1}^{[2]} \Gamma \biggl( m+\frac{1}{2}
\biggr)\, \xi_{0}^{m} z^{-m} \Bigg\},
$$
where $z,\ \xi_{0}$ are given by \eqref{eqn.xi0.z} and $d_{j}^{[i]}$ are
coefficients of the expansion
$$
\frac{x_{i}^{n-1}(\xi)}{nx_{i}^{n-1}(\xi)-x_{i}^{-2}(\xi)} =
\sum_{j=-1}^{\infty} d_{j}^{[i]} (\xi-\xi_{0})^{j/2}.
$$
To obtain the leading asymptotics of $F$ and $G$, we note that
$$
\frac{x_{2}^{n-2}}{nx_{2}^{n-1}-x_{2}^{-2}} =
\frac{x_{2}^{n-2}}{\varphi'(x_{2})} =
\frac{x_{0}^{n-2}}{\varphi''(x_{0})(x_{2}-x_{0})} + O(1) =
\frac{x_{0}^{n-2}}{\sqrt{2 \varphi''(x_{0})}} (\xi-\xi_{0})^{-1/2} + O(1).
$$
It follows that
$$
b_{-1}^{[2]} = \frac{x_{0}^{n-2}}{\sqrt{2 \varphi''(x_{0})}} =
\frac{1}{\sqrt{2}} n^{3/[2(n+1)]-1} (n+1)^{-1/2}\qquad (\mbox{where }
\varphi''(x_{0}) = n^{3/(n+1)} (n+1)).
$$
Similarly
$$
d_{-1}^{[2]} = \frac{x_{0}^{n-1}}{\sqrt{2 \varphi''(x_{0})}} =
\frac{1}{\sqrt{2}} n^{1/[2(n+1)]-1} (n+1)^{-1/2}.
$$
As a result, we have to the leading order,
\begin{multline*}
F(\mu) \sim \sqrt{\frac{2}{\pi}}\, n^{3/[2(n+1)]} (n+1)^{-1/2} \mathrm{Im} \Big\{
\mu^{(n-2)/[2(n+1)]} e^{3i\pi/[2(n+1)]} e^{-z} \Big\}, \\
G(\mu) \sim -\sqrt{\frac{2}{\pi}}\, n^{1/[2(n+1)]} (n+1)^{-1/2} \mathrm{Im} \Big\{
\mu^{n/[2(n+1)]} e^{i\pi/[2(n+1)]} e^{-z} \Big\}.
\end{multline*}
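For instance, for $n = 2$ (the case used in our computations), taking the
imaginary part with $z = \xi_{0} \mu^{2/3} e^{i\pi/3}$ reduces the formula
for $F$ to
$$
F(\mu) \sim \frac{2}{\sqrt{3\pi}}\, e^{-\xi_{0} \mu^{2/3}/2} \cos \Bigl(
\frac{\sqrt{3}}{2}\, \xi_{0} \mu^{2/3} \Bigr),\qquad \xi_{0} = 3 \cdot
2^{-2/3},
$$
an oscillation with an exponentially decaying envelope in $\mu^{2/3}$.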
\subsubsection*{Differential equations for $F$ and $G$ for $n\in\mathbb{N}$ and
extended asymptotics} To derive a differential equation satisfied by $F$,
we differentiate (\ref{defF}) $n$ times in $\mu$ (justified by dominated
convergence)
$$
F^{(n)}(\mu) = \frac{(-1)^{n}}{2\pi i} \int_{C} \zeta^{-1/n-1}
e^{\zeta-\mu \zeta^{-1/n}}\,d\zeta.
$$
Integrating by parts once, we get
\begin{multline*}
F^{(n)}(\mu) = \frac{n}{2\pi i} (-1)^{n} \int_{C} \zeta^{-1/n}
e^{\zeta-\mu \zeta^{-1/n}} \biggl( 1 + \frac{\mu}{n} \zeta^{-1/n-1}
\biggr)\,d\zeta \\
= (-1)^{n} n F(\mu) - \mu F^{(n+1)}(\mu),
\end{multline*}
so the differential equation satisfied by $F$ is
$$
\mu F^{(n+1)} + F^{(n)} - (-1)^{n} n F = 0.
$$
Since $G' = F$, the differential equation satisfied by $G$ is
$$
\mu G^{(n+2)} + G^{(n+1)} - (-1)^{n} n G' = 0.
$$
Integrating once and using $G(0) =0$, we obtain
\begin{equation}\label{Gdiff}
\mu G^{(n+1)} - (-1)^{n} n G = 0.
\end{equation}
We can make the same argument for
\begin{equation}\label{defI2}
I_2 (\mu) \equiv \int_{\infty e^{-i\pi}}^{0^+} e^{\zeta - \mu
\zeta^{-1/n}} d\zeta -1
\end{equation}
or for
\begin{equation}\label{defhatI2}
\bar{I}_2 (\mu) \equiv \int_{\infty e^{i\pi}}^{0^+} e^{\zeta - \mu
\zeta^{-1/n}} d\zeta -1.
\end{equation}
It is to be noted that $G(\mu) = \frac{1}{2\pi i} \left [ I_2 (\mu) -
{\bar I}_2 (\mu) \right ]$, while $I_2^\prime (\mu) = I_1 (\mu)$ and
${\bar I}_2^\prime (\mu) = {\bar I}_1 (\mu)$.
Equation (\ref{Gdiff}) has $(n+1)$ independent solutions with the
following asymptotic behavior for large $\mu$ (see \cite{Wasow}):
\begin{equation}\label{genWKB0}
\mu^{n/[2(n+1)]} \exp \left [ - z e^{-i 2 \pi j/(n+1)} \right ] ;\ \ z:=
\xi_0 e^{i \pi/(n+1)} \mu^{n/(n+1)},\ j=0,1,\dotsc,n.
\end{equation}
Thus, there is only one solution with the asymptotic behavior
$$
-\sqrt{\frac{2}{\pi}}\, n^{1/[2(n+1)]} (n+1)^{-1/2} \mu^{n/[2(n+1)]} \exp
\left [ - z \right ]~~{\rm for }~~\arg z = 0
$$
(all solutions linearly independent of it are larger). Since $I_2 (\mu)$ has this
asymptotics in particular for $\arg \mu = - \frac{\pi}{n}$, corresponding
to $\arg z = 0$ as discussed already, $I_2$ is the only solution of
(\ref{Gdiff}) satisfying
\begin{equation}\label{genWKB}
I_2 (\mu) \sim -\sqrt{\frac{2}{\pi}}\, n^{1/[2(n+1)]} (n+1)^{-1/2}
\mu^{n/[2(n+1)]} \exp \left [ -z \right ]~~{\rm for }~~\arg \mu = -
\frac{\pi}{n}.
\end{equation}
As we rotate around in the counter-clockwise direction starting from $\arg
z = 0$ in the complex $z$ (or complex $\mu$) plane, the classical
asymptotics of $I_2$ can only change at antistokes lines. The first
antistokes line is $\arg z = \frac{\pi}{2} + \frac{2\pi}{n+1}$,
corresponding to $\arg \mu = \frac{(n+3)\pi}{2n}$.
Similarly, in a clockwise direction, the first antistokes line is $\arg z
= -\frac{\pi}{2} - \frac{2\pi}{n+1}$, {\it i.e.} $\arg \mu =
-\frac{(n+7)\pi}{2n}$.
Therefore, for $\arg \mu$ $\in$ $\left ( -\frac{(n+7)\pi}{2n},
\frac{(n+3)\pi}{2n} \right )$ the asymptotic expansion of $I_2$ remains the same.
From the symmetry between ${\bar I}_2$ and $I_2$, it follows that
\begin{equation}\label{genWKBhat}
{\bar I}_2 (\mu) \sim -\sqrt{\frac{2}{\pi}}\, n^{1/[2(n+1)]} (n+1)^{-1/2}
\mu^{n/[2(n+1)]} \exp \left [ - \bar{z} \right ]
\end{equation}
for $\arg \mu \in \left ( -\frac{(n+3)\pi}{2n}, \frac{(n+7)\pi}{2n} \right
)$. Since $G(\mu) = \frac{1}{2\pi i} \left [ I_2 (\mu) - {\bar I}_2 (\mu)
\right ]$, noting that $I_2 (\mu)$ is dominant for $\arg \mu \in \left (
0, \frac{(n+3)\pi}{2n} \right ) $, it follows that in this range of $\arg
\mu$, $G(\mu) \sim -\frac{i}{2\pi} I_2 (\mu)$, while for $\arg \mu \in
\left ( -\frac{(n+3)\pi}{2n}, 0 \right ) $, since ${\bar I}_2$ is
dominant, $G(\mu) \sim \frac{i}{2\pi} {\bar I}_2 (\mu)$. Lemma~\ref{lemG}
follows.
\subsection{Instantaneous smoothing}
The following result shows that the solution ${\hat v} (k, t)$ obtained
from ${\hat U} (k, q)$ corresponds to a classical solution of
(\ref{nseq0}) for $t \in (0, T]$, {\it i.e.} there is instantaneous
smoothing due to viscous effects. This is a known result (see, for instance,
\cite{Bertozzi}), but we include it for completeness.
\begin{Lemma}\label{instsmooth}
Assume ${\hat v}_0, {\hat f} \in l^1 (\mathbb{Z}^3)$, where ${\hat v}_0
(0) =0 = {\hat f} (0)$. Assume further that (\ref{nseq0}) has a solution
${\hat v} (k, t) $ with $ \| {\hat v} ( \cdot, t) \|_{l^1} < \infty $ for
$t \in [0, T]$. Then $v (x, t) = \mathcal{F}^{-1} \left [ {\hat v} (\cdot,
t) \right ] (x)$ is a classical solution of (\ref{nseq0}) for $t \in
\left ( 0, T \right ]$.
\end{Lemma}
\begin{proof}
It suffices to show $|k|^2 {\hat v} (\cdot, t) \in l^1$ for $t \in (0, T]
$ since this implies $v \in {C}^2$ and usual arguments imply that $v$
satisfies (\ref{nseq0}).
Consider the time interval $[\epsilon, T]$ for $\epsilon \ge 0$, $T <
\alpha^{-1/n}$. Define
$$
{\hat w}_\epsilon (k) = \sup_{\epsilon \le t \le T} |{\hat v}| (k, t).
$$
Since $|{\hat v} (k, t)| \le \int_0^\infty | {\hat U} (k, q)| e^{-\alpha
q} dq$, ${\hat w}_0$ (or ${\hat w}_\epsilon$) satisfies
$$
\| {\hat w}_0 \|_{l^1} \le \int_0^\infty \| {\hat U} (\cdot, q) \|_{l^1}
e^{-\alpha q} dq.
$$
On $[\epsilon, T]$ for $\epsilon > 0$, (\ref{nseq}) implies
\begin{equation}\label{intvk}
{\hat v} (k, t) = -i k_j \int_0^t e^{-\nu |k|^2 (t-\tau)} P_k \left (
{\hat v}_j {\hat *} {\hat v} \right ) (k, \tau) d\tau + {\hat v}_0
e^{-\nu |k|^2 t} + \frac{{\hat f}}{\nu |k|^2} \left (1-e^{-\nu |k|^2 t}
\right ).
\end{equation}
Therefore,
$$
|k| |{\hat v}| (k, t) \le 2 \left \{ {\hat w}_0 {\hat *} {\hat w}_0 \right
\} \int_0^t |k|^2 e^{-\nu |k|^2 (t-\tau)} d\tau + |k| |{\hat v}_0| e^{-\nu
|k|^2 t} + \frac{|{\hat f}|}{\nu |k|} \left (1-e^{-\nu |k|^2 t} \right ).
$$
Since $\int_0^t \nu |k|^2 e^{-\nu |k|^2 (t-\tau)} d\tau \le 1$, it follows
that
\begin{equation}\label{khatw}
|k| {\hat w}_{\epsilon/2} \le \frac{2}{\nu} \left \{ {\hat w}_0 {\hat *}
{\hat w}_0 \right \} + \sqrt{\frac{2}{\nu \epsilon}} \left ( \sup_{\gamma
> 0} \gamma e^{-\gamma^2} \right ) | {\hat v}_0 | + \biggl| \frac{{\hat
f}}{\nu |k|} \biggr|.
\end{equation}
Using now the bounds on ${\hat w}_0$ we get
$$
\| |k| {\hat w}_{\epsilon/2} \|_{l^1} \le \frac{2}{\nu} \left \{ \| {\hat
U} (\cdot, q) \|^{\alpha}_1 \right \}^2 + \frac{C}{\epsilon^{1/2}
\nu^{1/2}} \| {\hat v}_0 \|_{l^1} + \nu^{-1} \biggl\| \frac{{\hat f}}{|k|}
\biggr\|_{l^1}.
$$
The evolution of ${\hat v}$ is autonomous in time, and thus, for $t \in
\left [ \frac{\epsilon}{2}, T \right ]$ we have
\begin{multline}\label{ml1}
{\hat v} (k, t) = -i \int_{\epsilon/2}^t e^{-\nu |k|^2 (t-\tau)} P_k \left
( {\hat v}_j {\hat *} [k_j {\hat v} ] \right ) (k, \tau) d\tau \\
+ {\hat v} (k, \epsilon/2) e^{-\nu |k|^2 (t-\epsilon/2)} + {\hat f} (k)
\frac{1-e^{-\nu |k|^2 (t-\epsilon/2 )} }{\nu |k|^2},
\end{multline}
where we used the divergence condition $k \cdot {\hat v} (k, t) =0$.
Multiplying (\ref{ml1}) by $|k|^2$ and using (\ref{khatw}), it follows
that for $t \in [ \epsilon, T]$ we have
\begin{multline*}
|k|^2 |{\hat v} (k, t)| \le 2 {\hat w}_{\epsilon/2} {\hat *} \left [ |k|
{\hat w}_{\epsilon/2} \right ] \int_{\epsilon/2}^t |k|^2 e^{-\nu |k|^2
(t-\tau)} d\tau \\
+ \frac{1}{\nu (t-\epsilon/2)} \left ( \sup_{\gamma > 0} \gamma
e^{-\gamma} \right ) | {\hat v} (k, \epsilon/2) | + \frac{| {\hat f}
|}{\nu},
\end{multline*}
implying that
$$
\| |k|^2 {\hat w}_\epsilon \|_{l^1} \le \frac{2}{\nu} \| {\hat
w}_{\epsilon/2} \|_{l^1} \| |k| {\hat w}_{\epsilon/2} \|_{l^1} +
\frac{C}{\epsilon \nu} \| {\hat w}_{\epsilon/2} \|_{l^1} + \frac{ \|
{\hat f} \|_{l^1} }{\nu}.
$$
Since $\epsilon > 0$ is arbitrary, it follows that $|k|^2 {\hat v} (\cdot,
t) \in l^1$ for $t \in (0, T]$.
\end{proof}
\subsection{Estimate of $T_c$ beyond which Leray's weak solution becomes
classical} It is known that (\ref{nseq}) is equivalent to the integral
equation
\begin{multline}\label{NS1}
{\hat v} (k, t) = \int_0^t e^{-\nu |k|^2 (t-\tau) } P_k \left [ - i k_j
{\hat v}_j {\hat *} {\hat v} \right ] (k, \tau)\,d\tau + e^{-\nu |k|^2 t}
{\hat v}_0
\\
\equiv \mathcal{F} \left \{ \mathcal{N} \left [ v \right ] (\cdot, t)
\right \} (k).
\end{multline}
Applying $\mathcal{F}^{-1}$ in $k$ to (\ref{NS1}), it follows that
\begin{equation}\label{NS1p}
v (x, t) = e^{\nu t \Delta} v_0 - \int_0^t e^{\nu (t-\tau) \Delta}
\mathcal{P} \left [ (v \cdot \nabla) v \right ]\, d\tau \equiv \mathcal{N} \left [
v \right ] (x, t).
\end{equation}
We first determine the value of $\epsilon$ such that, if $\| v_0 \|_{H^1}
\le \epsilon$, then classical solutions $v (\cdot, t)$ to Navier-Stokes
exist for all time. The argument holds for real $t$ as well as in
$$
{\tilde S}_{\tilde \delta} := \Bigl\{ t: \arg t \in ( - {\tilde \delta},
{\tilde \delta} ) \Bigr\},
$$
where $ 0 < {\tilde \delta} < \frac{\pi}{2} $. Sectorial existence of
an analytic solution in $t$ with exponential decay for large $|t|$ was useful
in proving Theorem \ref{Thm02}. We denote by $\mathcal{A}_t $ the class of
functions analytic in $t$ for $t \in {\tilde S}_{\tilde \delta}$ for $0 <
|t| < T$.
We consider the space of functions
$$
X \equiv \left \{ \mathcal{A}_{t} H^1_x \right \} \cap \left \{ L_{|t|}^2
H^2_x \right \}:= \Big ( \mathcal{A}_{t} \otimes H^1(\mathbb{T}^3 [0,
2\pi])\Bigr)\,\,\cap \Bigl( L^2 \left [ e^{i \phi} (0,T) \right ] \otimes
H^2(\mathbb{T}^3 [0, 2\pi])\Bigr),
$$
where $t = |t| e^{i \phi} $, $|\phi| < {\tilde \delta}$, and the weighted
norm
$$
\| v \|_{X} = \sup_{t \in {\tilde S}_{\tilde \delta}, 0 < |t| < T} \| e^{
\frac{3}{4} \nu t} v (\cdot, t)\|_{H^1_x} + \sup_{|\phi| < {\tilde
\delta}} \left \{ \int_0^T \| e^{\frac{3}{4} \nu t} v (\cdot, |t| e^{i
\phi}) \|_{H^2_x}^2 d|t| \right \}^{1/2}.
$$
Note that
$$
\| f \|_{H^1_x} = \left ( \sum_{k} (1+|k|^2) |{\hat f} (k)|^2 \right
)^{1/2},\qquad \| f \|_{H^2_x} = \left ( \sum_{k} (1+|k|^4) |{\hat f} (k)
|^2 \right )^{1/2},
$$
and ${\hat f}$ is the Fourier-Transform of $f$.
The arguments below are an adaptation of classical arguments, see
\cite{Tao}. We introduce exponential weights in time, allowing for
estimates independent of $T$, and extend the analysis to a complex sector.
\begin{Lemma}\label{lemu0}
For $v_0 \in H^1_x $, with zero average over $\mathbb{T}^3 [0, 2\pi]$ we
have
$$
\| e^{\nu t \Delta} v_0 \|_X \le c_1 \| v_0 \|_{H^1_x},
$$
where $c_1 = \left ( 1 + \sqrt{ \frac{2}{\nu \cos {\tilde \delta} } }
\right )$.
\end{Lemma}
\begin{proof}
First, take $f = v_0$ and $t \in [0, T]$. Note that zero average implies
${\hat f}(0) = 0$; so we only need to consider $|k| \ge 1$.
\begin{equation}
\lvert e^{\frac{3}{2} \nu t} \rvert \| e^{\nu t \Delta} f \|^2_{H^1_x} \le
\sum_{k \ne 0} (1+|k|^2) e^{-2 \nu (|k|^2-3/4) t} |{\hat f}_k |^2 \le
\sum_{k \ne 0} (1+|k|^2) |{\hat f}_k|^2 = \| f \|_{H^1_x}^2.
\end{equation}
Also, note that
\begin{multline}\label{bbound}
\int_0^T \| e^{\frac{3}{4} \nu t} e^{\nu t \Delta} f \|_{H^2_x}^2 dt \le
\sum_{k \ne 0 } (1+|k|^4) |{\hat f}_k |^2 \left ( \int_0^T e^{-\nu (2
|k|^2 -\frac{3}{2}) t} dt \right ) \\
\le \sum_{k\ne 0} \frac{1+|k|^4}{\nu (2 |k|^2 -\frac{3}{2})} |{\hat f}_k
|^2 \le \frac{2}{\nu} \| f \|^2_{H^1_x}.
\end{multline}
If $t \in {\tilde S}_{\tilde \delta}$, we integrate along the ray $|t|
e^{i \phi}$. It is clear all the steps go through when $\nu$ is replaced
by $\nu \cos \phi$. A bound, uniform in $ {\tilde S}_{\tilde \delta}$, is
obtained by replacing $\frac{2}{\nu}$ in (\ref{bbound}) by $\frac{2}{\nu
\cos {\tilde \delta}}$. The result follows.
\end{proof}
\begin{Lemma}\label{lemuf}
If $e^{\frac{3}{4} \nu t} F \in L_{|t|}^2 L_x^2 $ uniformly in $\phi \in
(-{\tilde \delta}, {\tilde \delta} )$, then
\begin{equation}
\biggl\| \int_0^t e^{\nu (t-\tau) \Delta} F (x, \tau) d\tau \biggr\|_{X}
\le c_2 \sup_{|\phi| < {\tilde \delta}} \| e^{\frac{3}{4} \nu t} F \|_{
L_{|t|}^2 L_x^2 },
\end{equation}
with
$$
c_2 = \left ( \frac{2 \sqrt{2}}{\sqrt{\nu \cos {\tilde \delta} }} +
\frac{4 \sqrt{2}}{\nu \cos {\tilde \delta} } \right ).
$$
\end{Lemma}
\begin{proof}
We first show this for $t \in [0, T]$. The function
$$
v (x, t) = \int_0^t e^{\nu (t-\tau) \Delta} F (x, \tau) d\tau
$$
satisfies
\begin{equation}\label{A.veq}
v_t - \nu \Delta v = F,\qquad v(x, 0)=0.
\end{equation}
Multiplying (\ref{A.veq}) by $v^*$, the conjugate of $v$, integrating over
$x \in \mathbb{T}^3 [0, 2\pi]$ and combining with the equation for $v^*$
we obtain
\begin{equation}\label{A.veq1}
\frac{d}{dt} \| v (\cdot, t) \|^2_{L^2_x} + 2 \nu \| D v (\cdot, t)
\|^2_{L^2_x} \le \frac{4}{\nu} \| F (\cdot, t) \|^2_{L^2_x} +
\frac{\nu}{4} \| v (\cdot, t) \|^2_{L^2_x}.
\end{equation}
Similarly, taking the gradient in $x$ of (\ref{A.veq}), taking the dot
product with $\nabla v^*$ and combining with the equation satisfied by
$\nabla v^*$, we obtain
\begin{equation*}
\frac{d}{dt} \| D v (\cdot, t) \|^2_{L^2_x } + 2 \nu \| D^2 v (\cdot, t)
\|_{L^2_x}^2 = \int_{\mathbb{T}^3} (D F) \cdot (Dv^*) dx +
\int_{\mathbb{T}^3} (D F^*) \cdot (Dv ) dx.
\end{equation*}
Integration by parts and Cauchy's inequality give
\begin{equation}\label{A.veq2}
\frac{d}{dt} \| D v (\cdot, t) \|_{L^2_x }^2 + 2 \nu \| D^2 v (\cdot, t)
\|_{L^2_x}^2 \le \frac{4}{\nu} \| F (\cdot, t) \|^2_{L^2_x} +
\frac{\nu}{4} \| \Delta v (\cdot, t) \|^2_{L^2_x}.
\end{equation}
Combining (\ref{A.veq1}) and (\ref{A.veq2}) and using Poincar\'e's
inequality, we have
\begin{equation}\label{eq1UF}
\frac{d}{dt} \| v (\cdot, t) \|_{H^1_x}^2 + \frac{3}{2} \nu \| v (\cdot,
t) \|_{H^1_x}^2 + \frac{\nu}{4} \| D v (\cdot, t) \|^2_{H^1_x} \le
\frac{8}{\nu} \| F (\cdot, t) \|^2_{L^2_x}.
\end{equation}
Therefore, using (\ref{eq1UF}) and the fact that $v(x, 0)=0$,
$$
\| e^{\frac{3}{4} \nu t} v (\cdot, t) \|_{H^1_x}^2 \le \frac{8}{\nu}
\int_0^t \| e^{\frac{3}{4} \nu \tau} F (\cdot, \tau) \|^2_{L^2_x} d\tau.
$$
Hence,
\begin{equation}\label{A.veq3}
\sup_{t \in [0, T]} \| e^{\frac{3}{4} \nu t} v (\cdot, t) \|_{H^{1}_x} \le
\frac{2\sqrt{2}}{ \sqrt{\nu}} \| e^{\frac{3}{4} \nu t} F \|_{L^2_{|t|}
L^2_x}.
\end{equation}
Integration of (\ref{eq1UF}), using $v(x,0)=0$, gives
$$
\int_0^t \| e^{\frac{3}{4} \nu \tau} v (\cdot, \tau) \|^2_{H^2_x} d\tau
\le \frac{32}{\nu^2} \int_0^t \| e^{\frac{3}{4} \nu \tau} F (\cdot, \tau)
\|^2_{L^2_x} d\tau.
$$
Therefore, for $t \in [0, T]$, we obtain
\begin{equation}\label{A.veq4}
\left [ \int_0^t \| e^{\frac{3}{4} \nu \tau} v (\cdot, \tau) \|^2_{H^2_x}
d\tau \right ]^{1/2} \le \frac{4 \sqrt{2}}{\nu} \left [ \int_0^t \|
e^{\frac{3}{4} \nu \tau} F (\cdot, \tau) \|^2_{L^2_x} d\tau \right
]^{1/2}.
\end{equation}
Now (\ref{A.veq3}) and (\ref{A.veq4}) together imply
\begin{multline}\label{ineq1}
\sup_{t \in [0, T] } \biggl \{ \sum_{k\ne 0} (1+|k|^2) \biggl |
e^{\frac{3}{4} \nu t} t \int_0^1 e^{-\nu |k|^2 t (1-s)} {\hat F} (k, t s)
ds \biggr |^2 \biggr \}^{1/2} \\
+ \biggl \{ \int_0^T d|t| \sum_{k\ne 0} (1+|k|^4) \biggl | e^{\frac{3}{4}
\nu t} t \int_0^1 {\hat F} (k, t s) e^{-\nu |k|^2 t (1-s)} ds \biggr |^2
\biggr \}^{1/2} \\
\le \biggl ( \frac{2 \sqrt{2}}{\sqrt{\nu}} + \frac{4 \sqrt{2}}{\nu} \biggr
) \biggl \{ \int_0^T \sum_{k \ne 0} |e^{\frac{3}{4} \nu t} {\hat F}|^2 (k,
t) |dt| \biggr \}^{1/2},
\end{multline}
and replacing $t \in [0, T]$ by $t \in e^{i \phi} [0, T] \subset {\tilde
S}_{\tilde \delta}$ is equivalent to replacing $\nu $ by $\nu \cos {\tilde
\delta}$.
\end{proof}
\begin{Lemma}\label{lemFest}
If $F = - \mathcal{P} \left [ v \cdot \nabla v \right ]$, then for $v \in
X$, and $t \in e^{i \phi} [0, T] \subset {\tilde S}_{\tilde \delta}$,
$$
\sup_{|\phi| < {\tilde \delta}} \| e^{\frac{3}{4} \nu t} F \|_{L^2_{|t|}
L^2_x} \le c_3 \| v \|_{X}^2,
$$
where $c_3 = \frac{c_4^{3/2}}{ (3 \nu \cos {\tilde \delta} )^{1/4} }$ for
$t \in {\tilde S}_{\tilde \delta}$, and $c_4$ is the Sobolev constant
bounding $\| \cdot \|_{L^6}$ by $\| \cdot \|_{H^1} $ (see for instance
\cite{adam}, page 75).
\end{Lemma}
\begin{proof}
First consider $t \in [0, T]$. H\"{o}lder's inequality implies
$$
\| e^{\frac{3}{4} \nu t} F \|^2_{L^2_{|t|} L^2_x} \le \left [ \int_0^T |
e^{- 3 \nu \tau}| d|\tau| \right ]^{1/2} \left [ \int_0^T \|
e^{\frac{3}{2} \nu \tau} | F (\cdot, \tau) | \|^4_{L^2_x} d|\tau| \right
]^{1/2}.
$$
Hence,
$$
\| e^{\frac{3}{4} \nu t} F \|_{L^2_{|t|} L^2_x} \le \frac{1}{(3
\nu)^{1/4}} \| e^{\frac{3}{2} \nu t} F \|_{L^4_{|t|} L^2_x}.
$$
If we replace $t \in [0, T]$ by $t \in {\tilde S}_{\tilde \delta}$ in this
argument, the effect is simply that $\frac{1}{(3 \nu)^{1/4}}$ gets
replaced by $ \frac{1}{(3 \nu \cos {\tilde \delta} )^{1/4}}$.
For nonnegative $u$, $w$, repeated use of H\"{o}lder's inequality gives
\begin{multline*}
\int_{\mathbb{T}^3} w^2 u^2 dx \le \left ( \int_{\mathbb{T}^3} w^6 dx \right )^{1/3}
\left ( \int_{\mathbb{T}^3} u^3 dx \right )^{2/3} \\
\le \left \{ \int_{\mathbb{T}^3} w^6 dx \right \}^{1/3} \left \{ \int_{\mathbb{T}^3}
u^{2} dx \right \}^{1/2} \left \{ \int_{\mathbb{T}^3} u^{6} dx \right \}^{1/6}
\le \| w \|_{L_x^6}^2 \| u \|_{L_x^2} \| u \|_{L_x^6}.
\end{multline*}
Therefore, it follows that
$$
\|e^{\frac{3}{2} \nu t} F (\cdot, t) \|_{L^2_x} \le \| e^{\frac{3}{2} \nu
t} |v (\cdot, t)| |\nabla v (\cdot, t)| \|_{L^2_x} \le \| e^{\frac{3}{4}
\nu t} v \|_{L_x^6} \| e^{\frac{3}{4} \nu t} \nabla v \|_{L_x^2}^{1/2} \|
e^{\frac{3}{4} \nu t} \nabla v \|_{L_x^6}^{1/2},
$$
and
$$
\| e^{\frac{3}{2} \nu t} F \|_{L^4_{|t|} L^2_x} \le \| e^{\frac{3}{4} \nu
t} v \|_{L^\infty_{|t|} L_x^6 } \| e^{\frac{3}{4} \nu t} \nabla v
\|^{1/2}_{L^{\infty}_{|t|} L_x^2} \| e^{\frac{3}{4} \nu t} \nabla v
\|^{1/2}_{L_{|t|}^2 L_x^6}.
$$
Using Sobolev inequalities, we have
$$
\| v (\cdot, t) \|_{L_x^6} \le c_4 \| v (\cdot, t) \|_{H^1_x},
$$
$$
\| D v (\cdot, t) \|_{L_x^6} \le c_4 \| D v (\cdot, t) \|_{H^1_x}.
$$
Thus
$$
\| e^{\frac{3}{2} \nu t} F \|_{L^4_{|t|} L^2_x} \le c_4^{3/2} \|
e^{\frac{3}{4} \nu t} v \|^{3/2}_{L^\infty_{|t|} H^1_x} \| e^{\frac{3}{4}
\nu t} D v \|^{1/2}_{L_{|t|}^2 H^1_x} \le c_4^{3/2} \| v \|_X^2.
$$
Therefore,
$$
\| e^{\frac{3}{4} \nu t} F \|_{L^2_{|t|} L^2_x} \le \frac{c_4^{3/2} }{(3
\nu \cos {\tilde \delta} )^{1/4}} \| v \|_{X}^2.
$$
Since the right hand side is independent of $\phi$, taking the supremum of
the left side over $\phi$ for $|\phi| < {\tilde \delta}$, the Lemma
follows.
\end{proof}
\begin{Lemma}\label{lemN}
The operator $\mathcal{N}$ defined in (\ref{NS1p}) satisfies the following
estimate:
\begin{multline*}
\| \mathcal{N} [ v ] \|_X \le c_1 \| v_0 \|_{H^1_x} + c_2 c_3 \| v \|_X
^2, \\
\| \mathcal{N} [ v^{(1)} ] - \mathcal{N} [ v^{(2)} ] \|_X \le c_2 c_3
\left ( \| v^{(1)} \|_{X} + \| v^{(2)} \|_X \right ) \| v^{(1)} - v^{(2)}
\|_X.
\end{multline*}
\end{Lemma}
\begin{proof}
Note that
$$
\mathcal{N} \left [ v \right ] = e^{\nu t \Delta} v_0 + \int_0^t e^{\nu
(t-\tau)\Delta} F (\cdot, \tau) d\tau,
$$
where $F = -\mathcal{P} \left [ v \cdot \nabla v \right ]$. By Lemmas
\ref{lemu0}, \ref{lemuf} and \ref{lemFest} it follows that
$$
\| \mathcal{N} \left [ v \right ] \|_X \le c_1 \| v_0 \|_{H^1_x} + c_2
c_3 \| v \|_X^2.
$$
For the second part, we note that
$$
v^{(1)} \cdot \nabla v^{(1)} - v^{(2)} \cdot \nabla v^{(2)} = \Bigl (
v^{(1)} - v^{(2)} \Bigr ) \cdot \nabla v^{(1)} + v^{(2)} \cdot \Bigl (
\nabla v^{(1)} - \nabla v^{(2)} \Bigr ).
$$
Using Lemmas \ref{lemu0}, \ref{lemuf} and \ref{lemFest} again, we obtain
the desired estimate.
\end{proof}
\begin{Lemma}\label{lemepsilon}
If
$$
\| v_0 \|_{H^1_x} < {\hat \epsilon} \equiv \frac{1}{4 c_1 c_2 c_3} =
\frac{ 3^{1/4} \nu^{7/4} [ \cos {\tilde \delta} ]^{7/4} }{ 8 \sqrt{2}\,
c_4^{3/2} (\sqrt{\nu \cos {\tilde \delta}} + \sqrt{2} ) (2+ \sqrt{\nu \cos
{\tilde \delta}}) },
$$
then $v (x, t)$ exists in $X$ for any $T$. Moreover, $v (\cdot, t)$ is analytic in $t \in
{\tilde S}_{\tilde \delta}$ and decays exponentially in that sector as
$|t| \to \infty$, with
$$
\| v (\cdot, t) \|_{H^1_x} < 2 c_1 {\hat \epsilon} e^{-\frac{3}{4} \nu
\mathrm{Re} t}.
$$
Further, this solution is smooth in $x$. If
$$
\| v_0 \|_{H^1_x} < \epsilon_0 \equiv \frac{3^{1/4} \nu^{7/4}}{8
\sqrt{2}\, c_4^{3/2} (\sqrt{\nu} + \sqrt{2} ) (2 + \sqrt{\nu} ) },
$$
then $v (x,t)$ is a classical solution for all $t \in \mathbb{R}^+$.
\end{Lemma}
\begin{proof}
If $\| v_0 \|_{H^1_x} < {\hat \epsilon}$, Lemma \ref{lemN} implies that
the operator $\mathcal{N}$ (defined in Lemma \ref{lemN}) is contractive
and hence a solution to Navier-Stokes equation exists in $X$. Since the
estimates are uniform in $t$, it follows that this solution exists for all
$t \in {\tilde S}_{\tilde \delta}$. Known results (or Theorem \ref{Thm01}
above) imply that if the initial data is in $H^1_x$, then the solution
becomes smooth (in fact, analytic for periodic data, \cite{FoiasTem})
instantly, and thus it is a classical solution when $t > 0$. Analyticity
and decay in $t$ follows from the definition of $X$, the arbitrariness in
the choice of $T$ and the observation that $\mathcal{N}$ in Lemma
\ref{lemN} is contractive in a ball of radius $2 c_1 \|v_0 \|_{H^1_x}$.
Further, since $\lim_{\tilde \delta \to 0^+} {\hat \epsilon} =
\epsilon_0$, we obtain the less restrictive condition on $\| v_0
\|_{H^1_x}$ that ensures existence of a classical solution only for $t \in
\mathbb{R}^+$.
\end{proof}
\begin{Lemma}\label{lemH2}
If $\| v_0 \|_{H^2_x} \le \epsilon_2$ for sufficiently small $\epsilon_2$,
$$
\| v (\cdot, t) \|_{H^2_x} \le 2 c_1 \| v_0 \|_{H^2_x} e^{-\frac{3}{4} \nu
\mathrm{Re} t}
$$
for any $t \in {\tilde S}_{\tilde \delta}$.
\end{Lemma}
\begin{proof}
This is similar to the proof of Lemma \ref{lemepsilon} with $X$ replaced
by
$$
X \equiv \left \{ \mathcal{A}_{t} H^2_x \right \} \cap \left \{ L_{|t|}^2
H^3_x \right \}:= \Big ( \mathcal{A}_{t} \otimes H^2(\mathbb{T}^3 [0,
2\pi])\Bigr)\,\,\cap \Bigl( L^2 \left [ e^{i \phi} (0,T) \right ] \otimes
H^3(\mathbb{T}^3 [0, 2\pi]) \Bigr),
$$
for $|\phi| < {\tilde \delta}$.
\end{proof}
\begin{Theorem}\label{Testimate}
A weak solution to (\ref{nseq0}) becomes classical when $t > T_c$, where
$$
T_c = \frac{256 E c_4^3 (\sqrt{\nu} + \sqrt{2} )^2 (2+\sqrt{\nu})^2 }{
3^{1/2} \nu^{9/2}}.
$$
This solution is analytic in $t$ for $(t - T_{c,a}) \in {\tilde S}_{\tilde
\delta}$, where
$$
T_{c,a} = \frac{256 E c_4^3 (\sqrt{\nu \cos {\tilde \delta}} + \sqrt{2}
)^2 (2+\sqrt{\nu \cos {\tilde \delta} })^2}{ 3^{1/2} \nu [ \nu \cos
{\tilde \delta} ]^{7/2} }.
$$
Further, for any constant $C$, there exists $T_2$ so that for $ (t-T_2)
\in {\tilde S}_{\tilde \delta}$,
$$
\| {\hat v} (\cdot, t) \|_{l^1} < C \exp \left [ -\frac{3}{4} \nu \mathrm{Re} \{
t-T_{2} \} \right ].
$$
\end{Theorem}
\begin{proof}
Leray's energy estimate implies
$$
\| \nabla v \|_{L^2_{|t|} L^2_x} \le \sqrt{\frac{E}{\nu}},
$$
where $E = \frac{1}{2} \| v_0 \|^2_{L^2_x}$. From a standard pigeon-hole
argument, it follows that there exists $T_1 \in (0, T]$ so that
$$
\| \nabla v (\cdot, T_1) \|_{L^2_x} \le \sqrt{\frac{E}{\nu T}}.
$$
Therefore, Poincar\'e's inequality implies
$$
\| v (\cdot, T_1) \|_{H^1_x} \le \sqrt{\frac{2 E}{\nu T}}.
$$
This means there exists some $T_1 \in [0, T_c]$, where
$$
T_c = \frac{256 E c_4^3 (\sqrt{\nu} + \sqrt{2} )^2 (2+\sqrt{\nu})^2 }{
3^{1/2} \nu^{9/2}}
$$
for which
$$
\| v (\cdot, T_1) \|_{H^1_x} < \frac{ 3^{1/4} \nu^{7/4}}{8 \sqrt{2}
c_4^{3/2} (\sqrt{\nu} + \sqrt{2}) (2+\sqrt{\nu})}.
$$
Replacing $t$ by $t-T_1$ in Lemma \ref{lemepsilon}, we see that the
solution is classical and smooth for $t - T_1 \in \mathbb{R}^+$, therefore
necessarily for $t > T_c$.
Further, from these arguments, it is clear that there exists a $T_{1,a}
\in [ 0, T_{c,a} ]$ so that
$$
\| v (\cdot, T_{1,a}) \|_{H^1_x} \le \frac{ 3^{1/4} [\nu \cos {\tilde
\delta}]^{7/4}}{8 \sqrt{2} c_4^{3/2} (\sqrt{\nu \cos {\tilde \delta} } +
\sqrt{2} ) (2+\sqrt{\nu \cos {\tilde \delta}})}.
$$
Replacing $t$ by $t-T_{1,a}$ in Lemma \ref{lemepsilon}, we see that the
classical solution is analytic in $t - T_{1, a} \in {\tilde S}_{\tilde
\delta}$ (which includes the region $t - T_{c,a} \in {\tilde S}_{\tilde
\delta}$).
Further, since for $t > T_1$ we have
$$
\int_{T_1}^\infty \| v (\cdot, t) \|_{H^2_x}^2 dt \le \sup_{T > T_1} \| v
\|_X^2 \le (2 c_1 \epsilon_0)^{2},
$$
it follows from a pigeon-hole argument that given $\epsilon_2$, there
exists a $T_2 > T_1$ such that
$$
\| v (\cdot, T_2) \|_{H^2_x} < \epsilon_2.
$$
From Lemma \ref{lemH2}, it follows that $v$ exists for $t - T_2 \in
{\tilde S}_{\tilde \delta}$ and
$$
\| v (\cdot, t) \|_{H^2_x} < 2 c_1 \epsilon_2 e^{-\frac{3}{4} \nu \mathrm{Re}
(t-T_2)}.
$$
The last part of the theorem follows from (recall ${\hat v}(0) = 0$)
$$
\| {\hat v} (\cdot, t) \|_{l^1} \le c_5 \| |k|^2 {\hat v} (\cdot, t)
\|_{l^2} \le c_5 \| v (\cdot, t) \|_{H^2_x}.
$$
\end{proof}
\begin{Remark}{
\rm The decay rate $e^{-\frac{3}{4} \nu t}$ for $\| {\hat v} (\cdot, t)
\|_{l^1}$ is not sharp. A more refined argument can be given to estimate
away the nonlinear terms and obtain an $e^{-\nu t}$ decay.}
\end{Remark}
\section{Acknowledgments}
This work was supported in part by the National Science Foundation
(DMS-0406193, DMS-0601226, DMS-0600369 to OC and DMS-0405837, DMS-0733778
to S.T). We are grateful to P. Constantin for giving us useful references
and to Alexey Cheskidov for pointing out that estimates of the time beyond
which weak Leray solutions become classical are easy to obtain.
\section{Introduction}
Black-box optimization seeks to optimize a function based solely on input-output information. This problem is of particular interest in many scientific and engineering applications and is quite relevant to several machine learning tasks. In the former case, the optimized black box is often the result of a complex simulation code, software, or workflow where we can get only the output for a given input configuration. In the latter, many learning algorithms are sensitive to hyperparameters, which cannot be inferred during the training process and often need to be adapted by the user based on the training data~\cite{10.5555/3044805.3044891}.
Existing methods for solving black-box optimization can be grouped into model-based and model-free methods. In the former, a surrogate model for the black-box function is learned in an online fashion and used to speed up the search. Examples include trust-region methods ~\cite{Wild_2015}, sequential model-based optimization~\cite{hutter2011sequential,bergstra_algorithms_nodate}, and estimation of distribution methods~\cite{hauschild2011introduction}. In the latter, the search navigates the search space directly without any explicit model. Examples include Nelder-Mead methods~\cite{olsson1975nelder}, particle swarm optimization~\cite{poli2007particle}, cross-entropy methods~\cite{de2005tutorial}, variants of evolutionary algorithms~\cite{back1993overview}, random search, and simulated annealing~\cite{rutenbar1989simulated}. These two groups of methods have their own strengths and weaknesses. A key advantage of model-based over model-free methods is the search efficiency. Given an explicit model, the search can quickly identify promising regions of the search space and find high quality solutions faster than model-free methods can~\cite{shahriari_taking_2016}.
Bayesian optimization (BO) is a promising class of sequential-model-based optimization methods. It has been used in a wide range of black-box function optimization tasks~\cite{shahriari_taking_2016,bischl2017mlrmbo,bartz2016survey}. In BO, an incrementally updated surrogate model is used to learn the relationship between the inputs and outputs during the search. The surrogate model is then used to prune the search space and identify promising regions. BO navigates the search space by achieving a balance between exploration and exploitation to find high-performing configurations. While the exploration phase samples input configurations that can potentially improve the accuracy of the surrogate model, the exploitation phase samples input configurations that are predicted by the model to be high performing.
With the transition of high-performance computing (HPC) systems from petascale to exascale~\cite{10.1145/3372390}, massively parallel Bayesian optimizations that can take advantage of multiple computing units to perform simultaneous black-box evaluations are attractive. These methods will be particularly beneficial for many HPC use cases, such as simulator calibration, software tuning, automated search of machine learning pipelines, neural network hyperparameter tuning, and scientific simulation optimization. Nevertheless, the inherent sequential nature of BO~\cite{10.5555/2986459.2986743,hutter2011sequential,bergstra2013making,klein2017fast} presents challenges for scaling. Most parallel Bayesian optimization methods propose a multipoint acquisition strategy in a centralized architecture, where the manager performs BO and workers evaluate the black-box functions. Also, these methods often use a Gaussian process as the surrogate model~\cite{shahriari_taking_2016,frazier_tutorial_2018}. However, these two components, namely the centralized architecture and the GPR surrogate, are two major bottlenecks for scaling BO in an HPC setting.
To sum up, the problem we seek to solve is to {\bf improve both the quality of the final solution and the time to solution of black-box optimization problems}, by efficiently using a {\bf fixed number of workers computing in parallel}.
To that end, we develop a parallel BO method involving a {\bf decentralized architecture} without a single manager. Each worker runs its own BO and {\bf communicates asynchronously} its black-box evaluation results to the other workers. Additionally, each worker performs {\bf asynchronous surrogate model updates}, using the {\bf effective qUCB acquisition function} (previously proposed in a centralized context to navigate the search space effectively).
Our original contribution is to combine these three key ingredients (decentralization, asynchronous communication/updates, and the qUCB acquisition function) in a powerful parallel BO method that significantly outperforms the state of the art by finding better solutions in shorter computational time. As a side contribution, we gain additional improvements by adapting the computational effort of the surrogate model updates when they enter a more expensive regime, approaching the effort of computing a new point of the black-box function they are meant to approximate cheaply. This situation is unprecedented in BO because previous studies never evaluated enough points for it to become an issue.
Our principal findings support the effectiveness of our method:
\begin{itemize}
\item Faster convergence and better solutions than state-of-the-art distributed synchronous BO methods~\cite{hernandez-lobato_parallel_nodate,garcia-barcos_fully_2019} on standard continuous black-box benchmark functions.
\item Faster convergence and better solutions for neural network hyperparameter tuning on the Exascale Computing Project CANDLE benchmarks than asynchronous centralized BO methods based on Gaussian process regression and pruning~\cite{8638041,li_hyperband_2018}.
\item Scaling studies of parallel BO methods at HPC scale involving 4,096 workers, a scale that has not previously been reported in the BO literature.
\end{itemize}
\section{Related Work}
Most parallel BO methods follow a centralized single manager/multiple workers~\cite{shahriari_taking_2016} approach, where a manager runs the BO and generates configurations sequentially~\cite{gonzalez_batch_2015,snoek_practical_2012}, and the workers evaluate the configurations and send the results back to the manager. The manager generates configurations in a batch synchronous or asynchronous way~\cite{pmlr-v97-alvi19a}. Many methods make use of Gaussian process regression (GPR) because it can provide gradients for optimization methods used in selecting the most informative configurations for evaluation. Although these methods have achieved considerable success, they are often limited to small HPC resources with tens of parallel workers.
By design, centralized BO is not a scalable approach because a single manager handling requests from many workers can create congestion and, therefore, increase worker idle time, with workers waiting for the manager to suggest a configuration to evaluate. This issue will worsen with larger numbers of workers because GPR has a cubic time complexity~\cite{liu_when_2019} for model fitting, which will significantly affect the scalability of the BO methods.
In contrast to centralized search architectures, distributed BO was recently introduced based on stochastic policies such as Thompson sampling and the Boltzmann policy~\cite{hernandez-lobato_parallel_nodate,garcia-barcos_fully_2019} \edit{(denoted by bUCB when applied to UCB)}, but these studies were restricted to synchronous communication, a dozen parallel workers, and GPR. Instead of GPR, other surrogate models have also been investigated in the literature. These include Bayesian neural networks~\cite{snoek_scalable_2015} and random forests~\cite{hutter_algorithm_2013}, but they were used only in sequential BO and batch-synchronous single-manager/multiple-worker schemes, limited to tens of parallel workers. We show that our proposed distributed BO with asynchronous communication scales better than the widely used centralized single-manager/multiple-worker parallelization schemes with GPR models.
\section{Background}
\subsection{Bayesian Optimization}
\edit{Bayesian optimization is a well-established method to solve the global optimization problem of expensive and noisy black-box functions~\cite{mockus1978application}. For a detailed overview see~\cite{shahriari_taking_2016}. The problem we seek to solve is formally presented in Eq.~\ref{eq:genprob}}:
\begin{equation}
\max_x \left\{ f(x) : x=(x_\cI,x_\cR,x_\cC) \in \cX \right\},
\label{eq:genprob}
\end{equation}
where $x=(x_\cI,x_\cR,x_\cC)$ is a vector of \edit{$n_d$} parameters partitioned into three types of parameters $\cI$, $\cR$, and $\cC$, respectively denoting discrete parameters with a natural ordering, continuous parameters taking real values, and categorical parameters with no special ordering; $f(x)$ is a {\em computationally expensive} black-box objective function that needs to be maximized (minimization problems can also be carried out similarly by changing the sign of $f$). Typically, the feasible set $\cX$ is defined by a set of constraints on the parameters $x$. This includes bound constraints that specify the minimum and maximum values for the parameters and linear and non-linear constraints that express the feasibility of the given parameter configuration through algebraic equations. Consequently, given these algebraic constraints, the time required to verify the feasibility (that is, if $x \in^?\cX$) is negligible. Hidden constraints are unknown to the user and generally require an evaluation of the black-box function $f$ to be uncovered.
The objective function $f(x)$ can be deterministic (the same values $f(x)$ for the same $x$) or stochastic (different $f(x)$ values for the same $x$). Generally, finding the global optimal solution of the stated problem is computationally intractable~\cite{larson2019derivative,bartz2016survey,bischl2017mlrmbo}, except for the simplest cases. The presence of integer and categorical parameters, as well as algebraic and hidden constraints, results in a discontinuous parameter space, which makes the search process difficult. Several mathematical optimization algorithms take advantage of gradients that measure the change in the value of the objective function with respect to the change in the values of the parameters. This is not feasible for the general problem~\ref{eq:genprob}, however, because the black-box function cannot be differentiated. We focus on the optimization problem setting where the black-box function $f$ is computationally expensive to evaluate.
\subsection{Surrogate Model}
\label{surrogate-model}
\edit{BO uses a computationally cheaper surrogate model to provide new points for evaluation}. The choice of a good surrogate model plays a crucial role in the scalability and effectiveness of the BO search method. In most BO methods, GPR is employed because of its built-in uncertainty quantification capability~\edit{\cite{shahriari_taking_2016,frazier_tutorial_2018}}. Specifically, GPR implicitly adopts Bayesian modeling principles of estimating the posterior distribution of the output from the given input-output pairs and provides the predictive mean and variance for the unevaluated input configurations. GPR also has the advantage of being differentiable.
However, \edit{while GPR is superior for faster convergence when run sequentially, it remains} one of the key bottlenecks for computational scalability in an HPC setting \edit{where thousands of input-output pairs (samples) can be computed in one batch}. \edit{In fact,} the GPR model needs to be refitted with a rapidly growing set of \edit{samples} (past and new), but it has a cubic complexity $O(n_\text{sample}^3)$~\cite{liu_when_2019} with respect to the number of \edit{samples} $n_\text{sample}$. For a small \edit{$n_\text{sample}$}, this is not an issue. At scale, however, the cubic time complexity for model fitting will slow \edit{or even stop} the search's ability to generate new input configurations, thus increasing the idle time of the workers and eventually resulting in poor HPC resource utilization as well as fewer overall evaluations. \edit{Other surrogate models have been proposed in the literature, such as deep neural networks~\cite{snoek_scalable_2015} and the random-forest regressor (RFR)~\cite{breiman2001random,hutter_algorithm_2013}. We adopt RFR for its versatility with real, discrete, and categorical features, as well as its robustness}. RFR has a fitting time complexity of $O(n_\text{tree}\cdot n_\text{feature} \cdot n_\text{sample} \cdot \log(n_\text{sample}))$ with $n_\text{tree}$ the number of trees in the ensemble and $n_\text{feature}$ the number of features (\edit{$=n_d$}, the problem dimension) per sample, both of which are constant for each search setting. In addition to the log-linear time complexity, RFR provides simple and easy in-node parallelization opportunities, where each tree can be built independently of the other trees in the ensemble.
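As an illustration, consider the following minimal sketch. It is ours
rather than taken from any released implementation; it assumes a
scikit-learn \texttt{RandomForestRegressor} and uses the spread of the
per-tree predictions as the uncertainty estimate, one common choice for
obtaining $\mu(x)$ and $\sigma(x)$ from an RFR surrogate:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy history of evaluated (input, output) pairs standing in for H.
rng = np.random.default_rng(0)
X_hist = rng.uniform(size=(64, 3))
y_hist = -np.sum((X_hist - 0.5) ** 2, axis=1)

# Fit the RFR surrogate on the current history.
model = RandomForestRegressor(n_estimators=100).fit(X_hist, y_hist)

# Empirical mean/std across the trees play the roles of mu and sigma.
X_new = rng.uniform(size=(1000, 3))
per_tree = np.stack([t.predict(X_new) for t in model.estimators_])
mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0)
\end{verbatim}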
\subsection{Acquisition function}
\label{acq-func}
The way in which the input point $x$ is selected for evaluation is \edit{another} bottleneck for scalability. The selection method comprises an acquisition function that measures how good a point is and a selector that seeks to optimize the acquisition function over $\cX$. Typically, when all the parameters are continuous (or if they afford such encoding/transformation), specialized gradient-based optimizers are employed to select the next point. However, because of the lack of a closed-form integral for RFR, the mixed-integer nature of the parameter space, the presence of algebraic constraints, and the resulting discontinuity in the search space, such optimizers cannot be employed. While specialized mixed-integer nonlinear solvers for these types of problems do exist, they are computationally expensive and cannot be employed in the fast iterative context required for scaling. Therefore, we adopt a point selection scheme that applies an upper confidence bound (\ucb)~\cite{shahriari_taking_2016} acquisition function to a random sample from $\cX$. This scheme selects an input point $x$ for evaluation as follows. A large number of unevaluated configurations are sampled from the feasible set $\cX$. The BO uses a dynamically updated surrogate model $m$ to predict a point estimate (mean value) $\mu(x)$ and variance $\sigma(x)^2$ for each sampled configuration $x$. The sampled configurations are then ranked using the \ucb given by
\begin{equation} \label{eqn:ucb}
\ucb(x) = \mu(x) + \kappa \cdot \sigma(x),
\end{equation}
where $\kappa \geq 0$ is a parameter that controls the trade-off between exploration and exploitation. When $\kappa$ is set to zero, the search performs only exploitation (greedy); when $\kappa$ is set to a large value, the search performs stronger exploration. A balance between exploration and exploitation is achieved when $\kappa$ is set to an appropriate value, classically $\kappa = 1.96$, which translates into a 95\% confidence interval around the mean estimate when computing \ucb.
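Continuing the sketch above (again an illustrative simplification of
ours, not the exact implementation), the \ucb-based selector then amounts
to ranking a random candidate sample:
\begin{verbatim}
def ucb_select(model, candidates, kappa=1.96):
    # Rank candidates by UCB(x) = mu(x) + kappa * sigma(x).
    per_tree = np.stack([t.predict(candidates)
                         for t in model.estimators_])
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0)
    return candidates[np.argmax(mu + kappa * sigma)]

x_next = ucb_select(model, X_new)  # next input point to evaluate
\end{verbatim}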
\section{Proposed Method}
In this section we introduce our novel approach to BO in the context of HPC, exploiting a distributed architecture with asynchronous communication.
\edit{
In fact, multiple compute units can be used on an HPC platform, where each function evaluation requires a fraction of this platform. Let $n_\text{worker}$ be the number of available workers, where a worker represents a unit of computational resource available to evaluate the black-box function (e.g., CPU, GPU, fraction of a compute node, whole node or a group of nodes). Let $T_{\text{wall}}$ be the total wall-clock time for which these resources are available (i.e., job duration). Then the overall available compute time $T_{\text{avail}} = n_\text{worker} \cdot T_{\text{wall}}$ upper bounds the total time spent in black-box function evaluations $T_{\text{eff}} = \sum_{t_\text{eff} \in \mathcal{T}} t_\text{eff}$ used to perform the job, where $\mathcal{T}$ is the set of durations $t_\text{eff}$ for all evaluated black-box functions. We define ``effective utilization'' as $U_{\text{eff}} = T_\text{eff} / T_\text{avail} $. For the problem of ``parallel black-box optimization'' we seek to maximize the objective function $f$, as well as maximize the effective utilization $U_\text{eff}$. We maximize the latter by minimizing the computational overhead of the search algorithm.
In other words, in addition to the objective function maximization, the optimization method should effectively use parallel resources by maximizing their usage mainly with black-box evaluations.}
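As a simple illustration with hypothetical numbers, if $n_\text{worker} =
4{,}096$ workers are available for $T_\text{wall} = 1$ hour and the
black-box evaluations consume $T_\text{eff} = 3{,}481$ worker-hours in
total, then $U_\text{eff} = 3481/4096 \approx 0.85$; the remaining 15\%
is search overhead (model refitting, communication, and idle time).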
BO is a promising approach for tackling the class of black-box optimization problems described in Eq.~\ref{eq:genprob}. BO tries to leverage accumulated knowledge of $f$ throughout the search by modeling it as a probability distribution $p(y|x)$ that represents the relationship between the input $x$ and the output $y$. Typically, BO methods rely on dynamically updating a surrogate model that estimates $p(y|x)$. Often, this distribution is assumed to follow a normal distribution. Therefore, the surrogate model estimates both $\mu(x)$, the mean estimate of $y$, and $\sigma(x)^2$, the variance.
The latter is leveraged to assess how uncertain the surrogate model is in its prediction at $x$~\cite{shahriari_taking_2016}. The surrogate model is cheap to evaluate and can be used to prune the search space and identify promising regions, where the surrogate model is then iteratively refined by selecting new inputs that are predicted by the model to be high performing (exploitation) or that can potentially improve the quality of the surrogate model (exploration). BO navigates the search space by achieving a balance between exploration and exploitation to find high-performing input configurations.
\subsection{Asynchronous Distributed Bayesian Optimization}
\label{adbo}
\begin{figure*}[!t]
\vspace{-5mm}
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=.6\textwidth]{figures/centralized-search-model.jpg}
\vspace{-4mm}
\caption{}
\label{fig:centralized-search}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=.6\textwidth]{figures/distributed-search-model.jpg}
\vspace{-4mm}
\caption{}
\label{fig:distributed-search}
\end{subfigure}
\caption{\edit{Centralized (\ref{fig:centralized-search}) and} distributed (\ref{fig:distributed-search}) search models. A circle represents a process with $W$ for a worker \edit{and $M$ for a manager}, an arrow represents a communication, $O$ represents the optimizer, and $f$ represents the computation of the black-box function. \edit{$t_{\text{wait}}$ is the time for which a worker waits before being processed by the optimizer. $t_\text{resp}$ is the time taken by the optimizer to suggest a new configuration.}\vspace{-0.5cm}}
\end{figure*}
Figure \ref{fig:distributed-search} shows a high-level sketch of our proposed asynchronous distributed Bayesian optimization (ADBO) method. The key feature of our method is that each worker executes a sequential BO search; performs only one black-box evaluation at a time, which avoids the congestion that occurs in a centralized setting (see Fig.~\ref{fig:centralized-search}); and communicates the results with all other workers in an asynchronous manner. The BO of each worker differs from that of the other workers with respect to the value $\kappa$ used in the acquisition function \ucb. Each BO starts by sampling the value $\kappa_i$ from an exponential distribution $\text{Exp}(\frac{1}{\kappa})$, where $\kappa$ is the user-defined parameter. The BO that takes smaller $\kappa_i$ values will perform exploitation and sample points near the best found so far in the history $H$. On the other hand, the BO that receives large $\kappa_i$ values will perform exploration to reduce the predictive uncertainty of the RFR model.
Consequently, on average multiple BO searches will seek to achieve a good trade-off between exploration and exploitation; however, a number of BO searches will perform stronger exploitation or exploration. This effect will increase as we scale to a large number of workers and can benefit the overall search. Our approach is inspired by the \qucb acquisition function~\cite{wilson_maximizing_2018} for the single-manager/multiple-workers setting, where different $\kappa$ values are sampled from the exponential distribution for the \ucb acquisition function and different points are selected based on these values to find the balance between exploration and exploitation. The main reason for adopting \qucb is computational simplicity. Compared with other multipoint generation strategies used in single-manager/multiple-worker methods, such as the constant liar~\cite{ginsbourger2010kriging} \edit{(denoted by CL)}, it incurs no significant overhead in achieving a balance between exploration and exploitation.
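For example (assuming, as the notation $\text{Exp}(\frac{1}{\kappa})$ suggests, a rate of $1/\kappa$ so that the mean of the sampled values equals $\kappa$), each worker can draw its own $\kappa_i$ as follows:
\begin{verbatim}
import numpy as np

kappa = 1.96                            # user-defined parameter
rng = np.random.default_rng(42)         # per-worker random state
kappa_i = rng.exponential(scale=kappa)  # rate 1/kappa, E[kappa_i] = kappa
\end{verbatim}
Small draws of \texttt{kappa\_i} yield exploiting workers; large draws yield exploring ones.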
Algorithm \ref{alg:dmbs-process} shows the high-level pseudocode of our proposed ADBO method. The worker uses the history $H$, a data structure that keeps track of the input-output pairs seen by it and the other workers. The search proceeds by initializing the optimizer object with a $\kappa_i$ value randomly sampled from an exponential distribution; the search is implemented through ask-and-tell interfaces. The former implements the acquisition function and selector functionality and is used to generate an input $x$ for the evaluation of the black-box function. The latter is used to report the evaluation results and retrain the RFR surrogate model. As soon as the black-box evaluation is completed, the \texttt{send\_all} function is used to send the pair $(x,y)$ to all other workers. Next the \texttt{recv\_any} function is used to read the evaluation results from a local worker-specific buffer that stores the results from all other workers.
Both \texttt{send\_all} and \texttt{recv\_any} are asynchronous; the former does not wait for the acknowledgment of other workers, and the latter does not wait for all workers to send their most recent results. The history $H$ is then updated with other workers' input-output pair along with the current $(x,y)$ pair. The updated history is then used to refit the surrogate model through the tell interface.
\begin{algorithm2e}[!ht]
\small
\DontPrintSemicolon
\SetInd{0.25em}{0.5em}
\SetAlgoLined
\SetKwInOut{Input}{Inputs}\SetKwInOut{Output}{Output}
\SetKwFunction{Tell}{tell}
\SetKwFunction{Ask}{ask}
\SetKwFunction{History}{History}
\SetKwFunction{Optimizer}{Optimizer}
\SetKwFunction{Update}{update}
\SetKwFunction{sendall}{send\_all}
\SetKwFunction{recvany}{recv\_any}
\SetKwFor{For}{for}{do}{end}
\Input{$f$: black-box function, $\kappa$: $UCB$ hyperparameter, $comm$: communicator}
\Output{$H$ the history of evaluated configurations}
\tcc{Initialization}
$H \leftarrow$ \History{}\\
$\kappa_{i} \sim \operatorname{Exp}\left(\frac{1}{\kappa}\right)$ \\
$optimizer \leftarrow \Optimizer(\kappa_{i})$\\
\tcc{Main loop}
\While{not done}{
$x \leftarrow$ $optimizer.$\Ask{}\\
$y \leftarrow f(x)$\\
\sendall($x, y, comm$)\\
$H \leftarrow$ \recvany($H, comm$)\\
$optimizer.$\Tell{$H$}\\
}
\caption{Asynchronous Distributed Bayesian Optimization (Worker Process) }
\label{alg:dmbs-process}
\end{algorithm2e}
\subsection{Improved uncertainty quantification in random forest}
\label{surrogate-model-uq}
Although RFR provides computational advantages, its uncertainty quantification capabilities are not well known or documented in the literature. The most widely used RFR implementation is from the Scikit-Learn package~\cite{scikit-learn}. Our analysis of uncertainty quantification with the default implementation of this package showed that the predictive variance is not as good as that of GPR. The primary reason is the best-split strategy adopted in the usual random forest algorithm to minimize the variance of the estimator. Although this results in better predictive accuracy, the predictive variance is not informative in the context of Bayesian optimization because it is constant in unexplored areas. We tested the random split strategy for tree splitting, as documented in~\cite{hutter_algorithm_2013}, and found that the uncertainty estimates from RFR are improved and are comparable to those with GPR. However, because it does not follow the conventional RF algorithm, this split strategy is not exposed as a user-facing parameter. We believe that this might be one of the reasons that RFR, despite its computational advantages, was not thoroughly experimented with for the purpose of uncertainty quantification. The uncertainty of the RFR is computed by applying the law of total variance on the mean estimate $\mu_\text{tree}(x)$ and variance estimate $\sigma_\text{tree}(x)^2$ of the trees of the ensemble, which are learned by minimizing the squared error:
\begin{equation} \label{eq:law-to-variance}
\sigma(x)^2 = \mathbb{E}[\sigma_\text{tree}(x)^2] + \mathbb{V}[\mu_\text{tree}(x)],
\end{equation}
where $\mathbb{E}[.]$ and $\mathbb{V}[.]$ are respectively the empirical mean and variance. Our new implementation of RFR with its improved uncertainty estimate module will be made available as open source software for future research.
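A hedged sketch of the epistemic part of this estimate with the stock Scikit-Learn regressor is given below; it computes only the $\mathbb{V}[\mu_\text{tree}(x)]$ term, since the default trees do not expose the per-leaf variances needed for $\mathbb{E}[\sigma_\text{tree}(x)^2]$:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(100, 3)
y = np.sin(X).sum(axis=1)
rfr = RandomForestRegressor(n_estimators=100).fit(X, y)

X_new = np.random.rand(5, 3)
# Stack per-tree predictions: shape (n_tree, n_point)
per_tree = np.stack([t.predict(X_new) for t in rfr.estimators_])
mu = per_tree.mean(axis=0)            # mean estimate
var_of_means = per_tree.var(axis=0)   # V[mu_tree(x)] term of the
                                      # law of total variance
\end{verbatim}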
\subsection{Reducing refitting complexity of the surrogate model}
\label{surrogate-model-refit}
Despite the log-linear time complexity, RFR surrogate model training can become computationally expensive as the search progresses. The reason is that at each refit step, the RFR model needs to be retrained from scratch with the entire set of collected input-output pairs $(x,y)$ that the search has so far acquired. This becomes an issue at scale because the larger the number of workers, the faster the value of $n_\text{sample}$ will grow and increase the refitting time. Therefore, we reduce the refitting time complexity by using two methods. First, we upper-bound the number of samples $n_\text{sample}$ to a constant value $n_\text{max\_sample}$ by quantile-based undersampling, where we define six equally spaced quantiles on the set of collected objective values (yielding five intervals) and sample with replacement $n_\text{max\_sample}/5$ points from each interval.
The quantile-based undersampling provides a mechanism to reduce input points that have similar output values. While it is possible that a large number of points will be left out at a given refit step, all the points in the history have a positive probability of being re-sampled for the next fit. This also allows our BO method to look at different regions of the input space at different steps.
Second, instead of using the default $n_\text{feature}$ for the split in the tree construction, we use $\log_2(n_\text{feature})$; this is useful for problems with large dimensions. Consequently, the complexity of RFR refit will become bounded by $O(n_\text{tree}\cdot \log_2(n_\text{feature}) \cdot n_\text{max\_sample} \cdot \log(n_\text{max\_sample}))$.
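A sketch of both reductions is shown below (the helper names are ours; the constant $n_\text{max\_sample}=5{,}000$ matches the value used later in the paper):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def quantile_undersample(X, y, n_max_sample=5000, n_bins=5, rng=None):
    # Six equally spaced quantile edges -> five intervals on y;
    # draw n_max_sample/5 points with replacement from each interval.
    rng = rng or np.random.default_rng()
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))
    idx = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = np.where((y >= lo) & (y <= hi))[0]
        if len(in_bin) > 0:
            idx.append(rng.choice(in_bin, n_max_sample // n_bins))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# log2(n_feature) split candidates instead of the default
rfr = RandomForestRegressor(max_features="log2")
\end{verbatim}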
\subsection{Implementation-specific details}
\label{imp-details}
\begin{algorithm2e}[!ht]
\small
\DontPrintSemicolon
\SetInd{0.25em}{0.5em}
\SetAlgoLined
\SetKwInOut{Input}{Inputs}\SetKwInOut{Output}{Output}
\SetKwFunction{Push}{push}
\SetKwFunction{Size}{size}
\SetKwFunction{Rank}{rank}
\SetKwFunction{ISend}{isend}
\SetKwFunction{Waitall}{waitall}
\SetKwFor{For}{for}{do}{end}
\Input{$comm$: communicator, $x$: configuration, $y$: objective value}
$requests \leftarrow \left[\:\right]$\\
\For{$i\leftarrow 1$ \KwTo $comm.$\Size{}}{
\If{$i \ne comm.\text{\Rank{}}$}{
$requests.$\Push{$comm.$\ISend{$i$, $(x,y)$}}
}
}
\Waitall{$requests$}
\caption{Sending Function (\texttt{send\_all})}
\label{alg:send_all_function}
\end{algorithm2e}
\begin{algorithm2e}[!ht]
\small
\DontPrintSemicolon
\SetInd{0.25em}{0.5em}
\SetAlgoLined
\SetKwInOut{Input}{Inputs}\SetKwInOut{Output}{Output}
\SetKwFunction{Push}{push}
\SetKwFunction{Size}{size}
\SetKwFunction{IRecv}{irecv}
\SetKwFunction{Done}{done}
\SetKwFunction{Data}{data}
\SetKwFunction{Cancel}{cancel}
\SetKwFor{For}{for}{do}{end}
\Input{$comm$: communicator, $H$: history of evaluated configurations}
\Output{$H$: updated history}
$N \leftarrow 0$\\
$received\_any \leftarrow (comm.\text{\Size{}} > 1)$\\
\While{$received\_any$}{
\tcc{Emit requests}
$received\_any \leftarrow \text{False}$\\
$requests \leftarrow \left[\:\right]$\\
\For{$i\leftarrow 1$ \KwTo $comm.$\Size{}}{
\If{$i \ne comm.\text{\Rank{}}$}{
$requests.$\Push{$comm.$\IRecv{$i$}}
}
}
\tcc{Process requests}
\For{$i\leftarrow 1$ \KwTo $comm.$\Size{} - 1}{
\If{$requests[i].$\Done}{
$H.$\Push{$requests[i].$\Data}\\
$N \leftarrow N + 1$\\
$received\_any \leftarrow \text{True}$\\
}{
$requests[i].$\Cancel
}
}
}
\caption{Receive Function (\texttt{recv\_any})}
\label{alg:recv_any_function}
\end{algorithm2e}
The details of the \texttt{send\_all} and \texttt{recv\_any} functions are shown in \ref{alg:send_all_function} and \ref{alg:recv_any_function}, respectively. The algorithms use primitives from the Message Passing Interface~\cite{10.5555/898758} (MPI). The \texttt{send\_all} function uses the MPI \texttt{isend} asynchronous primitive to send the evaluation results (i.e., input-output pair) to other workers. The \texttt{recv\_any} function uses the MPI \texttt{irecv} asynchronous primitive to receive the data previously sent by other workers. The
\textit{comm} variable is in charge of connecting the different processes of the current execution. In \texttt{send\_all}~(Alg.~\ref{alg:send_all_function}) we emit all asynchronous, nonblocking sending requests (\edit{l}.1--6). The current process then waits for the completion of these requests (l.7) to make sure the data is sent. In \texttt{recv\_any}~(Alg.~\ref{alg:recv_any_function}) we start by setting a condition (l.2) to enter the following loop only if there are at least two workers. Subsequently, we enter the loop (l.3), which is repeated until no additional data is received, a condition managed with the \textit{received\_any} variable. Inside the loop, a first block (\edit{l}.4--10) emits asynchronous receive requests to all other workers at once (i.e., without blocking), and a second block (\edit{l}.11--18) processes the requests by updating the history $H$ with the received data (l.13) when a request has completed (l.12), or cancels the request otherwise (l.17) to avoid blocking and continue. This procedure should minimize the idle time of a worker and therefore increase utilization.
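A compact Python sketch of the two functions with mpi4py (using the pickle-based point-to-point primitives; this is an illustration rather than the exact implementation) could read:
\begin{verbatim}
from mpi4py import MPI

def send_all(x, y, comm):
    reqs = [comm.isend((x, y), dest=i)
            for i in range(comm.Get_size()) if i != comm.Get_rank()]
    MPI.Request.waitall(reqs)   # completion of the nonblocking sends

def recv_any(H, comm):
    received_any = comm.Get_size() > 1
    while received_any:
        received_any = False
        for i in range(comm.Get_size()):
            if i == comm.Get_rank():
                continue
            req = comm.irecv(source=i)
            done, data = req.test()  # nonblocking completion check
            if done:
                H.append(data)
                received_any = True
            else:
                req.Cancel()         # avoid blocking on silent workers
    return H
\end{verbatim}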
\section{Experiments}
First, we systematically evaluate our proposed ADBO method with \qucb (ADBO+qUCB) against the state-of-the-art distributed BO (SDBO+bUCB) on a number of commonly used benchmark functions found in the literature. Since two components differ between the methods (A vs. S and q vs. b), we conduct experiments to show the advantages of each component of the proposed ADBO+qUCB method, such as asynchronous communication (A vs. S), distributed search (D vs. C), and the acquisition function (qUCB vs. bUCB or CL). Finally, we apply our ADBO+qUCB method to the problem of tuning the hyperparameters of a deep neural network.
The first set of experiments is conducted on the Theta supercomputer at the Argonne Leadership Computing Facility (ALCF). Theta is a Cray XC40 that comprises 4,392-nodes, each equipped with a 64-core Intel Knights Landing processor with 16 GB of MCDRAM in-package memory and 192 GB of DDR4 memory. The compute nodes are interconnected by using an Aries Dragonfly network with a file system capacity of 10 PB. \edit{In this part, each worker is attributed 2 cores.}
The second set of experiments, with hyperparameter tuning, is conducted on ThetaGPU, another system at the ALCF. ThetaGPU comprises 24 NVIDIA DGX A100 nodes, each equipped with eight NVIDIA A100 Tensor Core GPUs, two AMD Rome CPUs of 64 cores, 320 GB of GPU memory, and 1 TB of DDR4 memory. \edit{In this part, each worker is attributed 1 GPU. All experiments, when run once, are performed with the same initial random state 42. The algorithm is implemented in Python 3.8, where the main packages used are mpi4py 4.0.0, scikit-learn 1.0.1, and a fork of Scikit-Optimize. The RFR is from scikit-learn with a modified split rule. Neural networks are implemented in Tensorflow 2.8.}
\subsection{Comparison with the state-of-the-art distributed synchronous BO}
In this section, to test our idea of using asynchronous communication with \qucb, we compare our proposed ADBO+qUCB with a recently developed {\em synchronous} distributed BO (SDBO) with {\em Boltzmann policy} on top of the \ucb acquisition function (bUCB) (fully denoted as SDBO+bUCB)~\cite{garcia-barcos_fully_2019} on a number of commonly used benchmark functions\footnote{\url{https://www.sfu.ca/~ssurjano/optimization.html}}. These include 10-dimensional Ackley, Griewank, Levy, and Schwefel functions and a 6-dimensional Hartmann6D function. All these are minimization problems. We emulate the computationally expensive function $f$ by artificially setting the compute time of the black-box evaluation by sampling a time from a normal distribution $\mathcal{N}(\mu=60s, \sigma=20s)$. This simulates the variation of the run time of a black-box function $f$ such as neural networks with different training times or simulators with input-dependent run times. Both ADBO+qUCB and SDBO+bUCB use the RFR in our experiments. Every BO runs with a given wall time of $T_\text{wall}=25$ minutes. We used 128 workers for both search methods.
Figure~\ref{fig:benchmark} shows the search trajectories of the ADBO+qUCB and SDBO+bUCB methods, where the best-found solutions over the search time are plotted. From the results we can observe that our proposed ADBO+qUCB outperforms the state-of-the-art SDBO+bUCB method with respect to both solution quality and search time. ADBO+qUCB finds high-quality solutions in shorter search time, and on all the benchmarks the final solution found by ADBO is superior to that of SDBO+bUCB. This improvement is attributed to two factors: first, the asynchronous nature of ADBO+qUCB results in 1.68x more function evaluations on average than that of SDBO+bUCB; and second, the use of Boltzmann policy for the acquisition function was not as effective as the \qucb used in ADBO+qUCB.
\begin{figure}[!b]
\vspace{-5mm}
\centering
\begin{subfigure}{0.8\linewidth}
\centering
\includegraphics[width=\textwidth,height=0.5\textwidth]{figures/benchmark/plot_objective_multi_ackley.pdf}
\vspace{-5mm}
\caption{Ackley}
\label{fig:best-objective-ackley}
\end{subfigure}
\begin{subfigure}{0.8\linewidth}
\centering
\includegraphics[width=\textwidth,height=0.5\textwidth]{figures/benchmark/plot_objective_multi_griewank.pdf}
\vspace{-5mm}
\caption{Griewank}
\label{fig:best-objective-griewank}
\end{subfigure}
\begin{subfigure}{0.8\linewidth}
\centering
\includegraphics[width=\textwidth,height=0.5\textwidth]{figures/benchmark/plot_objective_multi_hartmann6D.pdf}
\vspace{-5mm}
\caption{Hartmann6D}
\label{fig:best-objective-hartmann6D}
\end{subfigure}
\begin{subfigure}{0.8\linewidth}
\centering
\includegraphics[width=\textwidth,height=0.5\textwidth]{figures/benchmark/plot_objective_multi_levy}
\vspace{-5mm}
\caption{Levy}
\label{fig:best-objective-levy}
\end{subfigure}
\begin{subfigure}{0.8\linewidth}
\centering
\includegraphics[width=\textwidth,height=0.5\textwidth]{figures/benchmark/plot_objective_multi_schwefel.pdf}
\vspace{-5mm}
\caption{Schwefel}
\label{fig:best-objective-schwefel}
\end{subfigure}
\caption{Comparison of asynchronous distributed with qUCB (ADBO+qUCB) and synchronous distributed with Boltzmann policy (SDBO+bUCB) for different benchmark functions.}
\label{fig:benchmark}
\end{figure}
\subsection{Impact of scaling ADBO workers}
In this section, we scaled the ADBO+qUCB and SDBO+bUCB methods from 128 workers to 4,096 workers on the benchmark functions. As a default we ran ADBO+qUCB without the two surrogate model refitting complexity reduction strategies (feature reduction and quantile-based undersampling).
The results are shown in Fig.~\ref{fig:scaling-dmbs} for a 5-dimensional Ackley function. \edit{Each experiment is repeated 5 times with different random states, and we plot the average and standard error over these repetitions.} We observe a positive impact of scaling on the quality of the solution. Increasing the scale increases the ability of ADBO+qUCB to find high-quality solutions in short computation time, as shown in Fig.~\ref{fig:best-objective-scaling-dmbs}. With respect to the number of black-box evaluations performed by the ADBO+qUCB method, we can observe a linear scaling up to 2,048 workers; however, the scaling drops at 4,096 workers. Similarly, the effective utilization starts from \edit{$0.93$} for 128 workers and drops to \edit{$0.86$} and \edit{$0.77$} for 2,048 and 4,096 workers, respectively.
We found that this result is not due to communication overhead. In fact, the communication is still negligible at this scale, but the time required for RFR model retraining becomes a bottleneck. In particular, as the search progresses, with an increased scale the number of input-output pairs increases drastically, which increases the model retraining time. While the SDBO+bUCB method also achieves a linear scaling trend similar to ADBO+qUCB's, the number of function evaluations is much lower, which can be attributed to synchronization and the consequent low effective utilization of $0.5$ across different numbers of workers.
\begin{figure}[!t]
\vspace{-3mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/4-scaling-distributed/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{The search trajectory showing the best objective found over time}
\label{fig:best-objective-scaling-dmbs}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/4-scaling-distributed/scaling-distributed-qucb.pdf}
\vspace{-4mm}
\caption{The strong scaling with respect to the number of function evaluations}
\label{fig:scaling-plot-dmbs}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/4-scaling-distributed/scaling-distributed-qucb-utilization.pdf}
\vspace{-4mm}
\caption{Effective utilization of workers}
\label{fig:scaling-plot-dmbs-utilization}
\end{subfigure}
\caption{\edit{Comparison of ADBO+qUCB (blue) and SDBO+bUCB (purple) for a growing number of parallel workers.}\vspace{-5mm}}
\label{fig:scaling-dmbs}
\end{figure}
\subsection{Impact of scaling problem dimensions}
In this section, we analyze the effect of increasing the size of the input dimension on ADBO+qUCB's effective utilization. We increase the dimension of the Ackley function over 5, 10, 30, 50, and 100, with 4,096 parallel workers. We compare ADBO+qUCB, which uses all the input features and all collected samples, with its optimized variant ADBO+qUCB+frqs (feature reduction and quantile sampling), which reduces the refitting complexity of the RFR model. The upper bound on the number of samples $n_\text{max\_sample}$ is set to $5,000$.
The results are shown in Fig.~\ref{fig:scaling-dimensions}. For ADBO+qUCB, we already observed in the workers' scaling results (Fig.~\ref{fig:scaling-plot-dmbs}) that a loss of performance starts to appear at the 4,096-worker scale. In fact, the effective utilization is less stable in this setting because the quickly increasing quantity of samples progressively slows the refitting of the surrogate model. When scaling the number of dimensions, this phenomenon becomes even worse: utilization drops from about 75\% with 5 dimensions, to 49\% with 30 dimensions, and to 15\% with 100 dimensions. In fact, an increase of $n_\text{feature}$ quickly slows the refitting of the surrogate model. In contrast, if we look at the optimized version ADBO+qUCB+frqs, we observe that the drop in utilization is amortized, although not eliminated: it brings the utilization from 75\% up to 80\% with 5 dimensions, from 49\% to 60\% with 30 dimensions, and from 15\% to 23\% with 100 dimensions. In conclusion, feature reduction with quantile undersampling provides a way to amortize the drop in performance when scaling the dimensionality of the search space.
\begin{figure}[!t]
\centering
\includegraphics[width=.75\linewidth]{figures/5-scaling-dimensions/scaling-dimensions-sdbo-utilization.pdf}
\vspace{-4mm}
\caption{Comparison of ADBO+qUCB and ADBO+qUCB+frqs (feature reduction and quantile-based undersampling) effective utilization when scaling the dimensionality of the problem.\vspace{-0.5cm}}
\label{fig:scaling-dimensions}
\end{figure}
\subsection{Componentwise evaluation}
In this section we evaluate the impact of the various ingredients of our proposed method ADBO+qUCB on performance. We start with a simple BO baseline, then progressively introduce the algorithmic components of our method. We use the 5-dimensional Ackley benchmark function. This function is particularly interesting because it contains many local optima and a unique global optimum $f(0,..,0) = 0$. It is often used to test the effectiveness of optimizers in escaping a local minimum.
\subsubsection{Batch synchronous vs. asynchronous in centralized setting}
In this section we evaluate the advantage of asynchronicity. For this we start with a classic baseline, the centralized BO (CBO) method. In this case, a ``manager'' runs the search and generates configurations for evaluation, and workers evaluate them and return the results to the manager. A common method for generating multiple configurations is the constant liar (CL) method, a variant of the Kriging believer strategy, where after selecting a first configuration $x_0$, the model is refitted with the maximum objective known so far (the ``lie'') for $x_0$ before selecting the next configuration $x_1$. This process is repeated until the number of required points is selected.
Thus, to generate $p$ points, the BO needs to refit the model $p-1$ times sequentially. Within CBO, two schemes are widely used in the literature: batch synchronous and asynchronous. In the batch synchronous version, we need to wait for all parallel evaluations to complete before iterating, which is suitable when the run time of the black-box evaluations is almost constant. In the asynchronous version, we sample a new configuration as soon as a worker becomes free, which is suitable when the run time of the black-box function varies. We refer to these methods as SCBO+CL and ACBO+CL, respectively. As a baseline, we use the vanilla sequential BO, referred to as SEQ-1, which evaluates configurations one at a time and uses a single worker.
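A sketch of the constant liar step with the ask-and-tell interface of Section~\ref{adbo} (the \texttt{optimizer} object is assumed; the ``maximum objective known so far'' lie follows the description above) is:
\begin{verbatim}
def constant_liar_batch(optimizer, H, p):
    lie = max(y for _, y in H)  # "maximum objective known so far"
    batch = []
    for k in range(p):
        x = optimizer.ask()
        batch.append(x)
        if k < p - 1:           # hence the p-1 sequential refits
            H = H + [(x, lie)]
            optimizer.tell(H)
    return batch
\end{verbatim}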
The results are shown in Fig.~\ref{fig:cmbs-liar}. We can observe that ACBO+CL finds high-quality solutions in shorter computation time when compared with SCBO+CL and SEQ-1 (Fig.~\ref{fig:best-objective-cmbs-cliar}). A key reason is the better worker utilization, as seen in Fig.~\ref{fig:utilization-cmbs-cliar}, where we observe that the batch synchronous scheme can significantly reduce the overall worker utilization. The effective utilization $U_\text{eff}$ is $0.16$ for SCBO+CL and $0.22$ for ACBO+CL, which results in totals of $508$ and $686$ function evaluations, respectively. However, we notice that the expensive refit of the surrogate model in the constant liar strategy prevents the search from scaling well to a higher number of workers. In conclusion, asynchronicity improves performance, but the constant liar scheme remains a bottleneck.
\begin{figure}[!b]
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/1-centralized-liar-synchronous-vs-asynchronous/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-cmbs-cliar}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/1-centralized-liar-synchronous-vs-asynchronous/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-cmbs-cliar}
\end{subfigure}
\caption{Comparison of the sequential (SEQ-1, grey), synchronous (SCBO+CL, orange), and asynchronous (ACBO+CL, yellow) centralized search with the constant liar strategy. On the top, the search trajectory(\ref{fig:best-objective-cmbs-cliar}) and at the bottom the percentage of active workers (\ref{fig:utilization-cmbs-cliar}) are presented with respect to the time.\vspace{-0.5cm}}
\label{fig:cmbs-liar}
\end{figure}
\edit{In order to address the overhead issue of the constant liar and the model refitting schemes, several multipoint acquisition strategies have been proposed. We compared two low-overhead strategies, Boltzmann policy with \ucb (denoted as bUCB) and \qucb, with the default constant liar strategy (denoted as CL) within the asynchronous centralized search setting (ACBO). Boltzmann policy transforms the vector of scores from the acquisition function into a categorical distribution~\cite{garcia-barcos_fully_2019} from which we can sample configurations. The Boltzmann operator is a generalization of the softmax operator in which the entropy of the categorical distribution can be managed by a temperature parameter. The temperature can be chosen according to a specific schedule such that convergence is maintained while exploration is ensured asymptotically~\cite{garcia-barcos_fully_2019}. We observed that ACBO+qUCB and ACBO+bUCB outperformed ACBO+CL with respect to both worker utilization and solution quality. }
\begin{comment}
\subsubsection{Multipoint acquisition strategies in asynchronous single manager/ multiple workers setting}
\label{multipoint acquisition}
In order to address the overhead issue of the constant liar and the model refitting schemes, several multipoint acquisition strategies have been proposed. Here we compare two low-overhead strategies, Boltzmann policy with \ucb \edit{(denoted as bUCB)} and \qucb, with the default constant liar strategy (denoted as CL) within the asynchronous centralized search setting (ACBO). Boltzmann policy transforms the vector of scores from the acquisition function into a categorical distribution~\cite{garcia-barcos_fully_2019} from which we can sample configurations. The Boltzmann operator is a generalization to the Softmax operator where the entropy of the categorical distribution can be managed by a temperature parameter. The temperature can be chosen according to a specific schedule such that convergence is maintained while exploration is ensured asymptotically~\cite{garcia-barcos_fully_2019}.
The results are shown in Fig.~\ref{fig:cmbs-asynchronous}. We can observe that ACBO+qUCB and ACBO+bUCB outperform ACBO+CL with respect to both worker utilization and solution quality. Both ACBO+qUCB and ACBO+bUCB achieve a utilization of $0.92$, which is 3x higher than that of CL. This increase in utilization results in evaluations $2,921$ and $2,909$, respectively, for Boltzmann and qUCB, which is 4x more than CL and translates to increasing the chances of finding a better solution. We note that even if ACBO+bUCB utilization is comparable to that of ACBO+bUCB, the Boltzmann policy does not provide significant speedup in finding high-quality solutions. On the other hand, ACBO+qUCB achieves a solution close to the best discovered in less than 5 min, whereas ACBO+CL takes 15 min. corresponding to a 3x speedup for the same number of parallel workers. Therefore, we conclude that qUCB is a better choice than bUCB and CL for parallel evaluations because of its significant speed-up in finding better solutions faster.
\begin{figure}[!t]
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\textwidth]{figures/2-centralized-asynchronous/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-cmbs-asynchronous}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\textwidth]{figures/2-centralized-asynchronous/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-cmbs-asynchronous}
\end{subfigure}
\caption{Comparison of the constant liar (ACBO+CL, yellow), Boltzmann policy (ACBO+bUCB, cyan), and qUCB (ACBO+qUCB, magenta) multipoint acquisition strategies for the asynchronous centralized setting. On the top, the search trajectory(\ref{fig:best-objective-cmbs-asynchronous}); at the bottom is shown the percentage of active workers(\ref{fig:utilization-cmbs-asynchronous}) with respect to the time.\vspace{-0.5cm}}
\label{fig:cmbs-asynchronous}
\end{figure}
\end{comment}
\subsubsection{Asynchronous centralized vs distributed search}
Here we compare ACBO+qUCB (centralized) and our proposed ADBO+qUCB (distributed) method and show that the latter is a promising alternative to the widely used single-manager parallel-BO approach.
The results are shown in Fig.~\ref{fig:best-objective-cmbs-vs-dmbs}. While the effective utilization of the two methods is comparable for 128 workers (see Fig.~\ref{fig:utilization-cmbs-vs-dmbs}), ADBO+qUCB achieves solutions that are better than those of ACBO+qUCB. The reason can be attributed to the per-worker BO searches with different $\kappa$ values, which provide diversity in balancing the exploration and exploitation of the search space. Moreover, ADBO+qUCB takes advantage of an ensemble of RFR models as opposed to a single RFR model in ACBO+qUCB. The input-output samples collected through the ensemble of RFR models allow the search method to escape local solutions faster. As we scale, this feature will become more advantageous in finding high-quality solutions in a short computation time.
\begin{figure}[!t]
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/3-centralized-vs-distributed/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-cmbs-vs-dmbs}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/3-centralized-vs-distributed/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-cmbs-vs-dmbs}
\end{subfigure}
\caption{Comparison of asynchronous centralized (ACBO+qUCB) and distributed (ADBO+qUCB) search with qUCB. On the top, the search trajectory (\ref{fig:best-objective-cmbs-vs-dmbs}) is presented; at the bottom is shown the percentage of active workers (\ref{fig:utilization-cmbs-vs-dmbs}) with respect to time.\vspace{-4mm}}
\end{figure}
\subsubsection{Comparison with baselines}
We compare our BO method to two commonly used baselines to demonstrate the trade-off between the computational cost of the surrogate model and the number of black-box evaluations required for high solution quality.
These baselines are random search (RD+ACBO) and BO with a Gaussian process (GP+ACBO), where both use a centralized architecture. For GP+ACBO, the constant liar scheme is used for multipoint generation. The searches are performed with 4,096 workers.
The results are shown in Fig.~\ref{fig:comparison-with-baseline}. From the results we can observe that ADBO+qUCB outperforms the random and GP baselines with respect to both search time and solution quality. ADBO+qUCB reaches a better objective than the other two baselines within 3 min. We also observe that RD+ACBO obtains a better solution than GP+ACBO, and the latter gets stuck after 6 min. The poor performance of GP+ACBO can be attributed to the large refitting time complexity associated with GPR. This is evident from the worker utilization results, where the BO with GPR cannot generate points rapidly enough to keep the workers busy. The search in BO with GPR did not progress after 6 min because the rest of the time was spent fitting the model. The high computational cost of refitting the GP model and the associated bottleneck at scale were reported in previous works as well~\cite{snoek_scalable_2015,8638041}.
While RD+ACBO has a better utilization (0.96 with 3,048 evaluations) than ADBO+qUCB (0.87 with 2,778 evaluations), the solution quality of the latter is better. Even though random search is good at maximizing utilization, its probability of finding the optimal solution decreases exponentially with the number of dimensions. As an example, for the Ackley function we evaluate the probability of randomly finding a value of $\approx 3.29$ (the value found by ADBO+qUCB). There are 5 dimensions, and the bounds are $(-32.768, 32.768)$. Assuming a uniform prior, the probability of sampling at $\epsilon$-precision in 1 dimension is $p_\epsilon := P(x^*-\epsilon \le X \leq x^*+\epsilon) = \frac{2\epsilon}{b-a}$. Considering the sampling for each dimension to be independent, the probability of sampling at $\epsilon$-precision in each of the $d$ dimensions is $p_d := p_\epsilon^d$. Finally, assuming that draws are independent for each new evaluation, the probability of sampling at $\epsilon$-precision in each dimension after $n$ draws is $p_n := 1 - (1-p_d)^n$. In our example, $f(x+0.32) \approx 3.3$; therefore with $(a,b)=(-32.768, 32.768)$, $\epsilon = 0.32$, $d=5$, and $n=3048$ we have a probability of $p_n \approx 10^{-7}$ for random search to outperform ADBO+qUCB.
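The calculation can be reproduced in a few lines:
\begin{verbatim}
a, b, eps, d, n = -32.768, 32.768, 0.32, 5, 3048
p_eps = 2 * eps / (b - a)     # one dimension at eps-precision
p_d = p_eps ** d              # d independent dimensions
p_n = 1 - (1 - p_d) ** n      # at least one success in n draws
print(p_n)                    # ~2.7e-07
\end{verbatim}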
\begin{figure}
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/6-comparison-with-baseline/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-comparison-with-baseline}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/6-comparison-with-baseline/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-comparison-with-baseline}
\end{subfigure}
\caption{Comparison of asynchronous centralized random and Gaussian Process (GP) search with distributed (ADBO+qUCB) search with qUCB. On the top, the search trajectory (\ref{fig:best-objective-comparison-with-baseline}) is presented; at the bottom is shown the percentage of active workers (\ref{fig:utilization-comparison-with-baseline}) with respect to time.\vspace{-6mm}}
\label{fig:comparison-with-baseline}
\end{figure}
\subsection{Application to neural network hyperparameter tuning}
Here we demonstrate the effectiveness of ADBO+qUCB in tuning the hyperparameters of neural networks. For this comparison we used DeepHyper~\cite{deephyper2018} and Optuna~\cite{optuna_software}, two state-of-the-art open source packages for neural network hyperparameter optimization.
DeepHyper adopts an asynchronous BO method~\cite{8638041} with a single manager and multiple workers, with the \ucb acquisition function and the constant liar scheme for generating hyperparameter configurations for multiple workers simultaneously. It also provides common search methods such as random search and asynchronous BO with a Gaussian process (GP) surrogate for hyperparameter tuning. We refer to these two search methods as RD+ACBO and GP+ACBO because they both follow a centralized asynchronous architecture. DeepHyper has been used to improve the accuracy of neural networks in several scientific machine learning applications~\cite{LIU2022111716,10.1145/3458817.3476203,DBLP:journals/corr/abs-2110-13511,liu2021machine}. In addition, we implemented an asynchronous variant of Hyperband (ASHA)~\cite{li_hyperband_2018,li_system_2020} based on the pruner API of the Optuna Python package. We selected two benchmarks, namely, Attn and Combo, from the Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer (ECP-CANDLE) project~\cite{wozniak2018candle}.
For both benchmarks we run the search on 64 workers (GPUs) for 3 hours. Each neural network is trained for 10 epochs with RD+ACBO, GP+ACBO, and ADBO+qUCB, but a dynamic budget between 1 and 10 epochs is given to ASHA, which performs early pruning of poorly performing configurations.
\begin{comment}
\subsubsection{Message-passing neural networks}
We focus on tuning the hyperparameters of a message-passing neural network~\cite{gilmer_neural_2017} (MPNN) taken from the Keras~\footnote{\url{https://keras.io/examples/graph/mpnn-molecular-graphs/}}. The benchmark contains 2,050 molecules and their corresponding blood-brain barrier permeability under a binary classification setup. The tunable hyperparameters and their range of values are shown in Table~\ref{tab:hyperparameters-mpnn}. The default MPNN graph neural network reaches an accuracy of 0.9258 AUC on the validation set after 40 epochs of training. ACBO+CL takes a prior distribution definition for hyperparameter values; these distributions determine how the values are sampled from the range. We use the same prior definitions for both methods. This is shown in the third column of Table~\ref{tab:hyperparameters-mpnn}. We use 8 nodes with 8 GPUs per node and 2 processes per GPU (with NVIDIA Multi-Process Service mode), which results in 128 parallel workers for the search.
\begin{table}[!ht]
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{l|l|l}
\multicolumn{1}{c|}{\textbf{Hyperparameter}} & \multicolumn{1}{c|}{\textbf{Range}} & \multicolumn{1}{c}{\textbf{Prior}} \\ \hline
learning rate & $\left[10^{-4}, 10^{-2}\right]$ & log-uniform \\
batch size & $\left[16, 256\right]$ & uniform \\
message units & $\left[32, 64\right]$ & uniform \\
message steps & $\left[2, 10\right]$ & uniform \\
num attention heads & $\left[6,8,10\right]$ & uniform \\
dense units & $\left[256,512\right]$ & uniform \\
dense units output & $\left[32, 1024\right]$ & log-uniform \\
activation output & \begin{tabular}[c]{@{}l@{}}{[}"relu", "swish", "sigmoid", \\ "tanh", "selu", "elu"{]}\end{tabular} & uniform
\end{tabular}%
}
\caption{Description of MPNN hyperparameters}
\label{tab:hyperparameters-mpnn}
\end{table}
\begin{figure}[!h]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\textwidth]{figures/application-mpnn/plot_utilization_multi_iter.pdf}
\caption{Percentage of workers evaluating neural networks over time.}
\label{fig:utilization-app-mpnn}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.8\textwidth]{figures/application-mpnn/plot_count_better_than_best.pdf}
\caption{Number of models with accuracy that are better than the baseline.}
\label{fig:count-better-app-mpnn}
\end{subfigure}
\caption{Comparison of asynchronous distributed setting with qUCB with (ADBO+qUCB, blue) and DeepHyper's asynchronous centralized setting with constant liar (ACBO+CL, red).}
\label{mpnn-results}
\end{figure}
The results are shown in Fig.~\ref{mpnn-results}. We observe that our proposed ADBO+qUCB method completely outperforms ACBO+CL. From Fig.~\ref{fig:utilization-app-mpnn} we see that in ADBO+qUCB, workers spend most of the time training and evaluating the neural network models. ACBO+CL suffers poor worker utilization because of the single manager and constant liar scheme. As a consequence of improved worker utilization, ADBO+qUCB achieves an overall effective utilization of more than 95\%, whereas ACBO+CL's utilization is about 45\%. This utilization difference is directly reflected in the number of hyperparameter (neural network model) configurations evaluated by each method. We found that ADBO+qUCB performs 2x more evaluations than does ACBO+CL with 7,220 and 3,363 evaluations, respectively. In Fig.~\ref{fig:count-better-app-mpnn} we show the number of neural network models that are better than the baseline (0.9258 validation AUC) found by the two methods over time. ADBO+qUCB, thanks to its higher effective utilization, performs more evaluations and finds more models that are better than the baseline, where the best found model has a 0.939 validation AUC. While ACBO+CL can find only six models that are better than the baseline in 25 min, ADBO+qUCB finds more than that in just 5 min; at the end of 25 min, it finds 11 models that are better than the baseline model.
\end{comment}
\subsubsection{Attn benchmark}
The Attn~\cite{clyde2020systematic} benchmark dataset contains 271,915 training points and 33,989 validation/testing points. Each point is composed of 6,212 input features and a binary output, representing a binary classification task. The total dataset is about 7.9 GB. The data suffer from a strong imbalance between positive and negative targets, which is handled by re-weighting the loss. The baseline model is a feed-forward neural network with an attention layer. In the default architecture, the layers have [1000, 1000, 1000, 500, 250, 125, 60, 30] neurons, all equipped with ReLU except the attention layer, which uses softmax. Each of these layers is followed by batch normalization and dropout, which are not activated by default.
The validation AUC-ROC after 10 epochs is used as the metric to guide the hyperparameter optimization. The baseline reaches an AUC-ROC of 0.872.
The number of neurons and the activation function of each layer are exposed for hyperparameter tuning. The search space is defined as follows: number of neurons: [10, 1024] with a log-uniform prior; activation function: [elu, gelu, hard sigmoid, linear, relu, selu, sigmoid, softplus, softsign, swish, tanh]; optimizer: [sgd, rmsprop, adagrad, adadelta, adam]; global dropout-rate: [0, 0.5]; batch size [8, 512] with a log-uniform prior; and learning rate: [$10^{-5}$, $10^{-2}$] with a log-uniform prior. This corresponds to 17 hyperparameters.
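A possible flat encoding of this space is sketched below; this is purely illustrative (the names, value-encoding convention, and per-layer loop bound are ours and need not reproduce the paper's exact count of 17 hyperparameters):
\begin{verbatim}
ACTIVATIONS = ["elu", "gelu", "hard_sigmoid", "linear", "relu",
               "selu", "sigmoid", "softplus", "softsign", "swish",
               "tanh"]

search_space = {
    "optimizer": ("categorical",
                  ["sgd", "rmsprop", "adagrad", "adadelta", "adam"]),
    "dropout": ("uniform", 0.0, 0.5),
    "batch_size": ("log-uniform-int", 8, 512),
    "learning_rate": ("log-uniform", 1e-5, 1e-2),
}
for i in range(8):  # assumed: one (units, activation) pair per layer
    search_space[f"units_{i}"] = ("log-uniform-int", 10, 1024)
    search_space[f"activation_{i}"] = ("categorical", ACTIVATIONS)
\end{verbatim}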
The results are shown in Fig.~\ref{fig:application-attn}. From the search trajectory shown in Fig.~\ref{fig:best-objective-application-attn} we observe that our proposed ADBO+qUCB converges faster and to a better solution than RD+ACBO, GP+ACBO, and ASHA. The poor performance of GP+ACBO is attributed to the rapidly decreasing worker utilization shown in Fig.~\ref{fig:utilization-application-attn}. Fig.~\ref{fig:fitting-application-attn} shows the time required for refitting the model in the different methods. While the refitting time is negligible for ADBO+qUCB, RD+ACBO, and ASHA, even at a moderate scale of 64 workers GP+ACBO suffers from a high refit cost. Note that this refit time grows over the course of the search because the number of samples available for refitting increases over time. Since ASHA and RD+ACBO are based on random sampling, due to the curse of dimensionality they cannot outperform ADBO+qUCB, which uses more intelligent exploration and exploitation strategies.
\begin{figure}
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-attn/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-application-attn}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-attn/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-application-attn}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-attn/attn-fitting-time.pdf}
\vspace{-4mm}
\caption{}
\label{fig:fitting-application-attn}
\end{subfigure}
\caption{\edit{Comparison of hyperparameter search algorithms on the Attn Benchmark.}\vspace{-6mm}}
\label{fig:application-attn}
\end{figure}
\subsubsection{Combo benchmark}
The Combo benchmark dataset~\cite{xia_predicting_2018} is composed of 220,890 training data points and 55,222 testing data points. Each data point has three classes of input features: 942 (RNA sequence), 3,839 (drug-1 descriptors), and 3,893 (drug-2 descriptors). The dataset size is about 4.2 GB. It is a regression problem with the goal of predicting the growth percentage of cancer cells given a cell line's molecular features and the descriptors of two drugs. The validation data are composed of 20\% of the training data. The networks are trained for 10 epochs. The validation $R^2$ coefficient at the last epoch is used as the metric for the hyperparameter search.
The baseline model is composed of three inputs, each processed by a sub-network of 3 fully connected layers. The outputs of these sub-networks are then concatenated and fed into another sub-network of 3 layers before the final output. All the fully connected layers have 1,000 neurons and ReLU activation. The baseline reaches a validation $R^2$ of 0.816.
The number of neurons and the activation function of each layer are exposed for the hyperparameter search. The search space is defined as follows: the number of neurons, activation function, optimizer, global dropout rate, batch size, and learning rate take the same ranges as in the Attn hyperparameter search. A learning-rate warmup strategy is activated based on a boolean variable; accordingly, the base learning rate of this warmup strategy is searched in [$10^{-5}$, $10^{-2}$] with a log-uniform prior. Residual connections are created based on a boolean variable. A learning-rate scheduler is activated based on a boolean variable. Finally, batch normalization is activated based on a boolean variable. This corresponds to 16 hyperparameters.
The results for utilization (Fig.~\ref{fig:utilization-application-combo}) and for the update duration of the surrogate model (Fig.~\ref{fig:fitting-application-combo}) are similar to those for Attn (Figs.~\ref{fig:utilization-application-attn} and \ref{fig:fitting-application-attn}, respectively). The search trajectory shown in Fig.~\ref{fig:application-combo} shows that our proposed ADBO+qUCB outperforms RD+ACBO, GP+ACBO, and ASHA. GP+ACBO again suffered from poor utilization because of quickly increasing model-refitting overhead.
\begin{figure}
\vspace{-5mm}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-combo/plot_objective_multi.pdf}
\vspace{-4mm}
\caption{}
\label{fig:best-objective-application-combo}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-combo/plot_utilization_multi_iter.pdf}
\vspace{-4mm}
\caption{}
\label{fig:utilization-application-combo}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=.75\textwidth]{figures/application-combo/combo-fitting-time.pdf}
\vspace{-4mm}
\caption{}
\label{fig:fitting-application-combo}
\end{subfigure}
\caption{\edit{Comparison of hyperparameter search algorithms for the Combo benchmark.}\vspace{-6mm}}
\label{fig:application-combo}
\end{figure}
\section{Conclusion and Future Work}
We developed an asynchronous distributed Bayesian optimization (ADBO) method that is capable of leveraging thousands of parallel workers on HPC systems. In ADBO, each worker has its own sequential BO that performs only one black-box evaluation at a time and exchanges the evaluation results with all other workers in an asynchronous way using the nonblocking MPI operations \texttt{isend} and \texttt{irecv}. Each BO \edit{worker} differs from the others with respect to the exploration-exploitation trade-off and uses a random forest surrogate model with improved uncertainty quantification capabilities and two methods that seek to reduce the refitting time complexity.
The proposed asynchronous communication scheme is generic and can be applied to other distributed BO methods.
Our proposed ADBO method seeks to avoid the congestion issues of centralized BO methods. We showed that our proposed ADBO method outperforms a recently proposed state-of-the-art synchronous distributed BO method and the commonly used centralized BO with single manager and multiple workers. By scaling ADBO from 128 to 4,096 parallel workers we demonstrated an improved effective utilization of about 35\%, which translates to 1.68x more evaluated configurations on average compared with its synchronous version. We showed the advantage of scaling to improve the quality of the solution and to reduce the time to solution.
To address the problems of increasing dimensionality and of the unbounded growth of the collected input-output configurations, we proposed a method that combines feature reduction and quantile-based undersampling and demonstrated its benefits. Finally, we evaluated the efficacy of ADBO for tuning the hyperparameters of deep neural networks and showed that ADBO outperformed the centralized BO methods of state-of-the-art hyperparameter tuning packages.
We are making our software open source to benefit the community. We are expanding our study to other relevant areas such as simulation calibration, software tuning, and workflow tuning. Other future work will include 1) the use of accelerators for model refit and related low level system optimization of surrogate model refit, 2) low overhead zeroth-order optimization method to maximize the acquisition function, 3) online dimensional reduction and input domain decomposition, 4) continual/incremental learning surrogate models for ADBO, and 5) multi-objective optimization.
\section*{Acknowledgment}
This material is based upon work supported by the U.S.\ Department of Energy
(DOE), Office of Science, Office of Advanced Scientific Computing Research, under
Contract DE-AC02-06CH11357. This research used resources of the Argonne
Leadership Computing Facility, which is a DOE Office of Science User Facility.
\bibliographystyle{IEEEtran}
In the last years, increasing attention has been paid to modified theories of gravity
in order to understand several open cosmological questions such as the accelerated
expansion of the universe \cite{Carroll}
and the dark matter origin \cite{Cembranos:2008gj}.
Some of those theories modify
General Relativity by adding higher powers of the scalar curvature $R$, the
Riemann and Ricci tensors or their derivatives \cite{Maroto&Dobado_1993}.
Lovelock and $f(R)$
theories are some examples of these attempts. It is therefore quite natural to ask
about black holes (BH) features in those gravitational theories since, on the one
hand, some BH signatures may be peculiar to Einstein's gravity and others may be
robust features of all generally covariant theories of gravity. On the other hand,
the results obtained may allow one to rule out models that are in disagreement with
expected physical results. For those purposes, research on the thermodynamical quantities
of BH is of particular interest.
In this work we will restrict ourselves to the so called $f(R)$ gravity
theories (see \cite{Sot}) in metric formalism in Jordan's frame. In this frame, the gravitational
Lagrangian
is given by $R+f(R)$ where $f(R)$ is an arbitrary function of $R$ and Einstein's
equations are
usually fourth order in the metric (see \cite{varia} for several proposed $f(R)$
functions compatible with local gravity tests and other cosmological constraints).
An alternative approach would be to use the
Einstein's frame, where ordinary Einstein's gravity coupled to a scalar plus a massive
spin-2 field is recovered. Even if a mathematical correspondence could be established
between those two frames, in the last years some controversy has remained about their
physical equivalence.
Previous literature on $f(R)$ theories \cite{Whitt} proved in Einstein's frame
that
Schwarzschild solution is the only static spherically
symmetric solution for an action of the form $R+aR^2$ in $D=4$.
In \cite{Mignemi} uniqueness theorems of spherically symmetric solutions for
general polynomial actions in arbitrary dimensions using Einstein's frame were proposed
(see also \cite{Multamaki} for additional results). See also \cite{olmo} for
spherical solutions with sources.
Using the euclidean action method (see for instance \cite{Hawking&Page, Witten})
in order to determine different thermodynamical quantities,
Anti de Sitter ($AdS$) BH in $f(R)$ models have been
studied \cite{Cognola}. In \cite{Briscese} the entropy of
Schwarzschild-de Sitter BH
was calculated for some particular cosmologically
viable models in vacuum and their cosmological stability was discussed.
BH properties have been also widely studied in other modified gravity theories.
For instance, \cite{cvetic,Cai_GaussBonet_AdS} studied BH
in Einstein's theory with a Gauss-Bonnet
term and cosmological constant. Different results were found depending on
the dimension $D$ and the sign of the constant horizon curvature $k$.
For $k=0,-1$, the Gauss-Bonnet term does not
modify $AdS$ BH thermodynamics at all (only the horizon position is modified with
respect to the Einstein-Hilbert ($EH$) theory) and BH are not only
locally thermodynamically stable but
also globally preferred. Nevertheless for $k=+1$ and $D=5$ (for $D\geq6$ thermodynamics
is again essentially that for $AdS$ BH) there exist some features not present in
the absence of Gauss-Bonnet term. Gauss-Bonnet and/or Riemann squared interaction terms
were studied in \cite{Cho} concluding that in this case phase transitions may occur
with $k=-1$.
Another approach is given by Lovelock gravities, which are free of ghosts and where
field equations contain no more than second derivatives of the metric. These theories
were studied in \cite{Matyjasek} and the corresponding entropy was evaluated.
The paper is organized as follows: in section 2 we present some
general results for $f(R)$
gravities in interesting physical situations in the metric formalism.
In sections 3 and 4, BH in $f(R)$
gravities are studied and explicit Einstein's field equations are presented for
static and spherically symmetric metrics. Section 5 is devoted to finding
perturbative solutions for a static and spherically symmetric background metric:
general metric coefficients are found depending on the $f(R)$ derivatives evaluated
at the background scalar curvature. Sections 6 and 7 are largely devoted to studying
thermodynamical quantities and their consequences for local and global stability
for some particular $f(R)$ models. Finally, we include some conclusions.
\section{General Results}
In order to study the basics of the solutions of general $f(R)$ gravity theories,
let us start from the action
\begin{equation}
S\,=\,S_g+S_m
\end{equation}
where $S_g$ is the $D$ dimensional gravitational action:
\begin{equation}
S_g=\frac{1}{16 \pi G_D}\int \text{d}^{D}x\sqrt{\mid g\mid}\,(R+f(R))
\label{S_g}
\end{equation}
with $G_D \equiv M_D^{2-D}$ being the $D$ dimensional Newton's
constant, $M_D$ the corresponding Planck mass, $g$ the determinant
of the metric $g_{AB}$, $(A,B=0, 1, ..., D-1)$, $R$ the scalar
curvature and $R+f(R)$ is the function defining the theory under consideration.
As the simplest example, the $EH$ action with cosmological
constant $\Lambda_D$ is given by $f(R)=-(D-2)\Lambda_D$.
\\
The matter action $S_m$ defines the energy-momentum tensor as:
\begin{equation}
T^{AB}=-\frac{2}{\sqrt{\mid g\mid}}\frac{\delta S_m}{\delta
g_{AB}}.
\end{equation}
From the above action, the equations of motion in the metric formalism are just:
\begin{eqnarray}
R_{AB}(1+f'(R)) - \frac{1}{2}(R+f(R))\,g_{AB}+
(\nabla_A \nabla_B-g_{AB}\Box)f'(R)+8\pi G_D T_{AB}=0
\label{Einsteins_eqns}
\end{eqnarray}
where $R_{AB}$ is as usual the Ricci tensor and $\Box=\nabla_A \nabla^A$ with $\nabla$
the usual covariant derivative.
Thus for the vacuum $EH$ action with cosmological constant we have:
\begin{eqnarray}
R_{AB}-\frac{1}{2}R\,g_{AB}+\frac{D-2}{2}\Lambda_D g_{AB}=0
\end{eqnarray}
which means $R_{AB}=\Lambda_D g_{AB}$ and $R= D \Lambda_D$.
Coming back to the general case, the required condition to get
constant scalar curvature solutions $R\,=\,R_0$ (from now on, $R_0$
will denote a constant curvature value) in vacuum implies:
\begin{eqnarray}
R_{AB}\,(1+f'(R))-\frac{1}{2}\,g_{AB}\,(R+f(R))\,=\,0
\end{eqnarray}
Taking the trace in previous equation, $R_0$ must be a root of the equation:
\begin{eqnarray}
2(1+f'(R_0))\,R_{0}-D\,(R_{0}+f(R_{0}))\,=\,0
\label{root_R0}
\end{eqnarray}
For this kind of solution an effective cosmological constant may be defined
as $\Lambda_D^{eff}\equiv R_{0}/D$. Thus any constant curvature
solution $R=R_0$ with $1+f'(R_0)\neq 0$ fulfills:
\begin{eqnarray}
R_{AB}=\frac{R_{0}+f(R_0)}{2(1+f'(R_0))}\,g_{AB}
\end{eqnarray}
On the other hand one can consider:
\begin{equation}
2R\,(1+f'(R))-D\,(R+f(R))\,=\,0
\label{dif}
\end{equation}
as a differential equation for the $f(R)$ function so that the corresponding
solution would admit any curvature $R$ value. The
solution of this differential equation is just:
\begin{equation}
f(R)\,=\,a R^{D/2}-R
\end{equation}
where $a$ is an arbitrary constant. Thus the gravitational Lagrangian becomes
proportional to $a R^{D/2}$ which will have solutions of constant curvature for arbitrary $R$. The
reason is that this action is scale invariant since $a/G_D$ is
a non-dimensional constant.
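This can be cross-checked symbolically: the following sketch (using the \texttt{sympy} Python library, and taking $R>0$ for simplicity of the fractional power) verifies that $f(R)=aR^{D/2}-R$ satisfies \eqref{dif} identically:
\begin{verbatim}
import sympy as sp

R, a, D = sp.symbols('R a D', positive=True)
f = a * R**(D / 2) - R
# trace condition 2R(1 + f'(R)) - D(R + f(R)) from the equation above
trace = 2 * R * (1 + sp.diff(f, R)) - D * (R + f)
print(sp.simplify(trace))   # prints 0: every constant R is a solution
\end{verbatim}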
Now we will address the issue of finding some general criteria to relate solutions of the $EH$
action with solutions of more general $f(R)$ gravities, not necessarily of
constant curvature $R$. Let $g_{AB}$ a solution of $EH$ gravity with
cosmological constant, i.e.:
\begin{eqnarray}
R_{AB}-\frac{1}{2}R\,g_{AB}+\frac{D-2}{2}\Lambda_D g_{AB}+8\pi G_D
T_{AB}=0
\label{LambdaCDM_eq}
\end{eqnarray}
Then $g_{AB}$ is also a solution of any $f(R)$ gravity, provided the following
compatibility equation
\begin{eqnarray}
f'(R) R_{AB}-\frac{1}{2}g_{AB}\left[f(R)+(D-2)\Lambda_D\right]
+(\nabla_A\,\nabla_B-g_{AB}\Box)f'(R)=0
\label{comp}
\end{eqnarray}
obtained from (\ref{Einsteins_eqns}) is fulfilled.
In the following we will consider some particularly
interesting cases. The simplest possibility is obviously vacuum
($T_{AB}=0$) with vanishing cosmological constant $\Lambda_{D}=0$.
Then the above equation \eqref{LambdaCDM_eq} becomes:
\begin{equation}
R_{AB}=\frac{1}{2}R g_{AB}
\end{equation}
which implies $R=0$ and $R_{AB}=0$. Consequently
$g_{AB}$ is also a solution of any $f(R)$ gravity provided $f(0)=0$, as happens for instance
when $f(R)$ is analytical around $R=0$ with vanishing constant term. When the cosmological constant is different from zero
($\Lambda_{D}\neq 0$), but still $T_{AB}=0$, we have also constant curvature with
$R_0=D\Lambda_D$ and $R_{AB}=\Lambda_D g_{AB}$. Then the
compatibility equation (\ref{comp}) reduces to (\ref{root_R0}).
In other words, $g_{AB}$ is also a solution of $f(R)$
provided $f(D\Lambda_D)=\Lambda_D(2-D+2f'(D\Lambda_D))$.
Notice that it would also be a solution for any $R_0$ in the particular case
$f(R)=aR^{D/2}-R$.
\\
Next we can consider the case with $\Lambda_D = 0$
and conformal matter ($T=T_A^A=0$).
For a perfect fluid this means having the equation of state
$\rho=(D-1)p$ where $p$ is the pressure and $\rho$ the energy density. In this case \eqref{LambdaCDM_eq} implies
\begin{eqnarray}
R\,=\,0\,\,\,;\,\,\, R_{AB}\,=\,8\pi G_D T_{AB}
\label{eq_conformal_matter}
\end{eqnarray}
Then, provided $f(0)=f'(0)=0$, $g_{AB}$ is also a solution of any
$f(R)$ gravity. This result could have particular interest in
cosmological calculations for ultrarelativistic matter (i.e.
conformal) dominated universes. For the case of conformal matter with non vanishing $\Lambda_D$ we have again
constant $R=R_0$ with $R_0=D \Lambda_{D}$ and $g_{AB}$ is a solution of $f(R)$
provided that once again $f(D\Lambda_D)=\Lambda_D(2-D+2f'(D\Lambda_D))$.
\section{Black Holes in $f(R)$ gravities}
Now we consider the external metric for the gravitational field
produced by a non rotating object in $f(R)$ gravity theories. The
most general static and spherically symmetric $D\geq 4$
dimensional metric can be written as (see \cite{ortin}):
\begin{eqnarray}
\text{d}s^2\,=\,e^{-2\Phi(r)} A(r)\text{d}t^2-A^{-1}(r)\text{d}
r^2-r^2\text{d}\Omega_{D-2}^2
\label{metric_D_v1}
\end{eqnarray}
or alternatively
\begin{eqnarray}
\text{d}s^2\,=\,\lambda(r)\text{d}t^2-\mu^{-1}(r)\text{d}r^{2}-r^2\text{d}\Omega_{D-2}^2
\label{metric_D_v2}
\end{eqnarray}
where $\text{d}\Omega_{D-2}^2$ is the metric on the $S^{D-2}$ sphere and identification
$\lambda(r)=e^{-2\Phi(r)}A(r)$ and $\mu(r)=A(r)$ can be straightforwardly established.
For obvious reasons the $\Phi(r)$ function is called the anomalous
redshift. Notice that a photon emitted at $r$ with proper
frequency $\omega_0$ is measured at infinity with frequency
$\omega_{\infty}= e^{-\Phi(r)}\sqrt{A(r)}\omega_0$.
As the metric is static, the scalar curvature $R$ in $D$ dimensions depends only on $r$ and
it is given, for the metric parametrization \eqref{metric_D_v1}, by:
\begin{eqnarray}
R(r)\, &=& \,\frac{1}{r^2}[D^2-5 D+6+r A'(r) \left(-2 D+3 r
\Phi '(r)+4\right)\nonumber\\&-&r^2 A''(r)-A(r) \left(D^2-5 D+2
r^2 \Phi '(r)^2-2 (D-2) r \Phi '(r)-2 r^2 \Phi ''(r)+6\right)].
\label{Dcurv}
\end{eqnarray}
where the prime denotes derivative with respect to $r$.
At this stage it is interesting to ask which are the most
general static and spherically symmetric metrics with constant
scalar curvature $R_{0}$. Such metrics can be found by solving the
equation $R=R_0$. Then it is immediate to see that for a constant $\Phi(r)=\Phi_{0}$ the
general solution is:
\begin{eqnarray}
A(r)\,=\,1+a_{1}r^{3-D}+a_{2}r^{2-D}-\frac{R_0}{D(D-1)}r^2
\label{A_solution_R_constant_Dobado_procedure}
\end{eqnarray}
with $a_{1}$ and $a_{2}$ being arbitrary integration constants. In fact,
for the particular case $D=4$, $R_{0}=0$ and $\Phi_{0}=0$, the
metric can be written exclusively in terms of the function:
\begin{eqnarray}
A(r)\,=\,1+\frac{a_{1}}{r}+\frac{a_{2}}{r^{2}}.
\label{RN_solution_R_constant_Dobado_procedure}
\end{eqnarray}
By establishing the identifications $a_{1}=-2G_{N}M$ and
$a_{2}=Q^2$, this solution corresponds to a Reissner-Nordstr\"{o}m solution, i.e., a charged massive BH solution with mass $M$
and charge $Q$. Further comments about this result will be made
below.
\section{Constant curvature black-hole solutions}
By inserting the metric \eqref{metric_D_v1} into the general
$f(R)$ gravitational action $S_g$ in (\ref{S_g}), and making variations with
respect to the $A(r)$ and $\Phi(r)$ functions, we find the
equations of motion:
\begin{eqnarray}
(2-D ) (1+f'(R)) \Phi'(r)-r\left[f'''(R)
R'(r)^2+f''(R)(\Phi'(r)R'(r)+ R''(r))\right]\,=\,0
\label{eqn_A}
\end{eqnarray}
and
\begin{eqnarray}
&& 2 r A(r) f'''(R) R'(r)^2+ f''(R)[2 D A(r)R'(r)
- 4 A(r) R'(r) + 2 r A(r) R''(r)+A'(r) r R'(r)]+\nonumber\\
&&(1+f'(R))[-2 r A(r) \Phi'(r)^{2} + 2 D A(r) \Phi'(r) -4 A(r)
\Phi'(r) - r A''(r) + 2 r A(r) \Phi''(r)+ \nonumber\\
&& A'(r)(2 - D + 3 r \Phi'(r)) ] - r(R+f(R))\,=\,0
\label{eqn_phi}
\end{eqnarray}
where $f'$, $f''$ and $f'''$ denote derivatives of $f(R)$ with
respect to the curvature $R$. \\
The above equations look in principle quite difficult to solve. For this reason
we will firstly consider the case of constant scalar curvature
$R=R_0$ solutions. Then the equations of motion reduce to:
\begin{equation}
(2-D)\,(1+f'(R))\Phi'(r)=0
\label{eq_motion_phi}
\end{equation}
and
\begin{equation}
R+f(R)+(1+f'(R))\left[A''(r)+(D-2)\frac{A'(r)}{r}
-(2D-4)\frac{A(r)\Phi'(r)}{r}-3A'(r)\Phi'(r)+2A(r)\Phi'^2(r)-2A(r)\Phi''(r)\right]\,=\,0
\label{eq_motion_A}
\end{equation}
As commented in the previous sections, the constant curvature solutions of
$f(R)$ gravities are given by:
\begin{equation}
R_0=\frac{D\,f(R_0)}{2(1+f'(R_0))-D}
\label{const}
\end{equation}
whenever $2(1+f'(R_0))\neq D$. Thus from \eqref{eq_motion_phi} $\Phi'(r)=0$ and then \eqref{eq_motion_A} becomes
\begin{equation}
R_{0}+f(R_0)+(1+f'(R_0))\left[A''(r)+(D-2)\frac{A'(r)}{r}\right]\,=\,0
\label{eqn_A_determination}
\end{equation}
Coming back to \eqref{eqn_A_determination}, and using \eqref{const}, we get
\begin{equation}
A''(r)+(D-2)\frac{A'(r)}{r}=-\frac{2}{D}R_0
\label{A_eq_R_constant}
\end{equation}
This is a $f(R)$-independent linear second order inhomogeneous differential
equation which can be easily integrated to give the general solution:
\begin{equation}
A(r)\,=\,C_1\,+\,C_2r^{3-D}-\frac{R_0}{D(D-1)}r^2
\label{A_solution_R_constant}
\end{equation}
which depends on two arbitrary constants $C_1$ and $C_2$. However
this solution has no constant curvature in
the general case since, as we found above, the constant curvature requirement
demands $C_{1}=1$. Then, for negative $R_0$, this solution
is basically the $D$ dimensional generalization obtained by Witten \cite{Witten}
of the BH in $AdS$ space-time
solution considered by Hawking and Page \cite{Hawking&Page}. With
the natural choice $\Phi_0=0$ the
solution can be written as:
\begin{equation}
A(r)=1-\frac{R_{S}^{D-3}}{r^{D-3}}+\frac{r^2}{l^2}.
\end{equation}
where
\begin{equation}
R_S^{D-3}=\frac{16\pi G_D M}{(D-2)\mu_{D-2}} \label{BHmass}
\end{equation}
with
\begin{equation}
\mu_{D-2}=\frac{2\pi^{\frac{D-1}{2}}}{\Gamma(\frac{D-1}{2})}
\end{equation}
being the area of the $D-2$ sphere, $l^2\equiv-D(D-1)/R_0 $ is the asymptotic $AdS$ space scale squared and $M$ is the mass parameter usually found in the literature.
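As a cross-check of the integration leading to \eqref{A_solution_R_constant}, equation \eqref{A_eq_R_constant} can also be solved with a computer algebra system; a minimal \texttt{sympy} sketch (shown for $D=5$, where $D(D-1)=20$) is:
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
R0 = sp.symbols('R_0')
A = sp.Function('A')
D = 5
ode = sp.Eq(A(r).diff(r, 2) + (D - 2) * A(r).diff(r) / r,
            -sp.Rational(2, D) * R0)
# the general solution C1 + C2*r**(3-D) - R_0*r**2/(D*(D-1))
# should be recovered, here with D = 5
print(sp.dsolve(ode, A(r)))
\end{verbatim}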
Thus we have concluded that the only static and
spherically symmetric vacuum solution with constant (negative) curvature of any
$f(R)$ gravity is just the Hawking-Page BH in $AdS$ space.
However this kind of solution is not the most general static and
spherically symmetric metric with constant curvature as can be seen by
comparison with the solutions found in
(\ref{A_solution_R_constant_Dobado_procedure}).
Therefore we have to conclude that there are constant curvature
BH solutions that cannot be obtained as vacuum solutions of any $f(R)$
theory. As we show below, in the $D=4$ case, we see that
the most general case can be described as
a charged BH solution in $f(R)$-Maxwell theory.
Indeed, let us consider now the case of charged black holes in $f(R)$ theories.
We will limit ourselves to the $D=4$ case, since in other dimensions
the curvature is not necessarily constant. The action of the theory
is now the generalization of the Einstein-Maxwell action:
\begin{equation}
S_g=\frac{1}{16 \pi G_4}\int \text{d}^{4}x\sqrt{\mid g\mid}\,(R+f(R)-F_{\mu\nu}F^{\mu\nu})
\end{equation}
where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$.
Considering an electromagnetic potential of the form: $A_\mu=(V(r),\vec 0)$ and the static
spherically symmetric metric (\ref{metric_D_v1}), we find that the solution with
constant curvature $R_0$ reads:
\begin{eqnarray}
V(r)&=&\frac{Q}{r}\nonumber \\
\lambda(r)&=&\mu(r)=1-\frac{2G_{4}M}{r}+\frac{(1+f'(R_0))Q^2}{r^2}-\frac{R_0}{12}r^2
\end{eqnarray}
Notice that unlike the $EH$ case, the contribution of the black-hole charge
to the metric tensor is corrected by a $(1+f'(R_0))$ factor.
\section{Perturbative results}
In the previous section we have considered static spherically symmetric
solutions with constant curvature. In $EH$ theory this would provide the most general
solution with spherical symmetry. However, this is not guaranteed to be the case
in $f(R)$ theories. The problem of finding the general static spherically
symmetric solution in arbitrary $f(R)$ theories without imposing the
constant curvature condition is in principle too complicated. For that reason in this section
we will present a perturbative analysis of the problem, assuming that
the modified action is a small perturbation around $EH$ theory.
Therefore let us consider a $f(R)$ function of the form
\begin{eqnarray}
f(R)\,=-(D-2)\Lambda_{D}+\alpha g(R)
\label{expansion_en_alpha_fR}
\end{eqnarray}
where $\alpha\ll 1$ is a dimensionless parameter and $g(R)$ is assumed
to be analytic in $\alpha$. By using the metric parametrization given by
\eqref{metric_D_v2} the equations of motion become:
\begin{eqnarray}
\lambda (r) (1&+&f'(R)) \left\{2 \mu (r) \left[(D -2) \lambda '(r)
+r \lambda ''(r)\right]+r \lambda '(r) \mu '(r)\right\}\nonumber \\
&-&2 \lambda (r)^2 \left\{2 \mu(r)[(D -2) R'(r) f''(R)
+r f^{(3)}(R) R'(r)^2+r R''(r) f''(R)]+r R'(r) \mu '(r) f''(R)\right\}\nonumber\\
&-&r \mu (r) \lambda '(r)^2 (1+f'(R))+2 r \lambda (r)^2(R+f(R))=\,0
\label{eqn_lambda}
\end{eqnarray}
\begin{eqnarray}
&-&\lambda (r) \mu '(r) \left[2 (D -2) \lambda (r)
+r \lambda '(r)\right] (1+f'(R))\nonumber \\
&+&\mu (r) \Big\{2 \lambda (r) R'(r)
\left[2 (D -2) \lambda(r)+r\lambda '(r)\right] f''(R)
+r(1+f'(R))(\lambda '(r)^2-2 \lambda (r) \lambda ''(r))\Big\}\nonumber \\
&-&2 r \lambda (r)^2 (R+f(R))\,=\,0
\label{eqn_mu}
\end{eqnarray}
where prime denotes derivative with respect to
the corresponding argument and $R\equiv R(r)$ is given by (\ref{Dcurv}).
Now, assuming that the $\lambda(r)$ and
$\mu(r)$ functions appearing in the metric \eqref{metric_D_v2} are also
analytical in $\alpha$, they can
be written as follows
\begin{eqnarray}
\lambda(r)\,&=&\,\lambda_{0}(r)+\sum_{i=1}^{\infty}\alpha^{i}\lambda_{i}(r) \nonumber\\
\mu(r)\,&=&\,\mu_{0}(r)+\sum_{i=1}^{\infty}\alpha^{i}\mu_{i}(r)
\label{expansion_en_alpha_lambda&mu}
\end{eqnarray}
where $\{\lambda_{0}(r),\,\mu_{0}(r)\}$ are the unperturbed solutions for the
$EH$ action with cosmological constant given by
\begin{eqnarray}
\mu_{0}(r)\,&=&\,1+\frac{C_1}{r^{D-3}}-\frac{\Lambda_{D}}{(D-1)}r^2\nonumber\\
\lambda_{0}(r)\,&=&\,-C_{2}(D-2)(D-1)\,\mu_{0}(r)
\label{mu0_lambda0}
\end{eqnarray}
which are the standard BH solutions in a $D$ dimensional $AdS$
spacetime. Note that the factor $C_2$ can be chosen by
performing a coordinate $t$ reparametrization so that both
functions could be identified. For the moment, we will keep the
background solutions as given in \eqref{mu0_lambda0} and we will
discuss the possibility of getting $\lambda(r)=\mu(r)$ in the
perturbative expansion later on.
By inserting \eqref{expansion_en_alpha_fR} and
\eqref{expansion_en_alpha_lambda&mu} in \eqref{eqn_lambda} and
\eqref{eqn_mu} we obtain the following first order equations:
\begin{eqnarray}
(D-3)\mu_{1}(r)+r\mu_{1}'(r)+\frac{2\Lambda_{D}g'(R_0)-g(R_0)}{D-2}r^{2}\,=\,0
\label{lambda_eqn1}
\end{eqnarray}
\begin{eqnarray}
&C_{2}& \left[C_{1} (D-1) r^{3-D}-\Lambda_{D} r^2+D-1\right]g(R_{0}) r^2+\left[C_{1} (D-3)
r^{3-D}+\frac{2\Lambda_{D}}{D-1}r^{2}\right]\lambda_{1}(r)\nonumber\\
&+&C_{2} (D-2)(D-1)\left(\Lambda_{D} r^2-D+3\right)\mu_{1}(r)\nonumber\\
&+&\left(1+C_{1} r^{3-D}-\frac{\Lambda_{D} r^2}{D-1}\right)
\left[2 C_{2} (1-D) r^2\Lambda_{D}g'(R_0)+r\lambda_{1}'(r)\right]\,=\,0\nonumber\\
&&
\label{mu_eqn1}
\end{eqnarray}
whose solutions are:
\begin{eqnarray}
\lambda_{1}(r)\,&=&\,C_{4}(D-1)(D-2)+\frac{(C_{1}C_{4}-C_{2}C_{3})(D-2)(D-1)}{r^{D-3}}
\nonumber \\
&-&\left[C_{4}(D -2)
\Lambda_{D}+C_{2}\left(g(R_0)-2\Lambda_{D} g'(R_{0})\right)\right] r^{2}
\nonumber\\
&&
\label{lambda1}
\end{eqnarray}
\begin{eqnarray}
\mu_{1}(r)\,=\,\frac{C_{3}}{r^{D-3}}+\frac{\left(g(R_0)-2\Lambda_{D}g'(R_0)\right)}
{(D-2)(D-1)}r^{2}
\label{mu1}
\end{eqnarray}
Up to second order in $\alpha$ the equations are:
\begin{eqnarray}
(D-3)\mu_{2}(r)+r\mu_{2}'(r) +\frac{(g(R_0)-2 \Lambda_{D}g'(R_0))}{D-2}\left(g'(R_0)
-\frac{2D}{D-2}\Lambda_{D}g''(R_0)\right)r^2\,=\,0
\label{lambda_eqn2}
\end{eqnarray}
\begin{eqnarray}
&&\left[-C_{1} (D-3) r^{3-D}-\frac{2\Lambda_{D} r^2}{D-1}\right]\lambda_{2}(r)
+C_{2}(D-2)(D-1)\left(-\Lambda_{D}r^2+D-3\right)\mu_{2}(r)\nonumber\\
&-&\left(C_{1} r^{4-D}+r-\frac{r^3\Lambda_{D}}{D-1}\right)
\lambda_{2}'(r)-C_{3}C_{4} (D-2) (D-1) \left(-\Lambda_{D} r^2+D-3\right) r^{3-D}\nonumber\\
&-&C_{2}\left[(D-1)(C_{1}r^{3-D}+1)-\Lambda_{D} r^2\right]
\left[2\Lambda_{D}g'(R_0)^2 +g(R_0)\left(\frac{2D\Lambda_{D}g''(R_0)}{D-2}-g'(R_0)\right)
-\frac{4D\Lambda_{D}^{2}g'(R_0)g''(R_0)}{D-2}\right]r^{2}\nonumber\\
&-&C_{4}[C_{1}(D-1)r^{3-D}+2][2\Lambda_{D}g'(R_0)-g(R_0)]r^{2}\,=\,0\nonumber\\
&&
\label{mu_eqn2}
\end{eqnarray}
whose solutions are:
\begin{eqnarray}
\lambda_{2}(r)\,&=&\,C_{6}+\frac{C_{6} C_{1}+(C_{3}C_{4}-C_{2}C_{5})(D-2)(D-1)}{r^{D-3}}
\nonumber\\
&+&\left[-\frac{C_{6}\Lambda_{D}}{D-1}+\left(g(R_0)-2 \Lambda_{D}g'(R_0)\right)
\left(C_{4}+C_{2} g'(R_0)-\frac{2 C_{2} D\Lambda_{D}g''(R_0)}{D-2}\right)\right] r^2
\nonumber\\
&&
\label{lambda2}
\end{eqnarray}
\begin{eqnarray}
\mu_{2}(r)\,=\,\frac{C_{5}}{r^{D-3}}+\frac{\left(g(R_0)-2\Lambda_{D}g'(R_0)\right)
\left(2 D\Lambda_{D} g''(R_{0})-(D -2) g'(R_0)\right)}{(D -2)^2 (D -1)}r^{2}
\label{mu2}
\end{eqnarray}
Higher orders in $\alpha$ can be obtained by inserting the
previous results into the order $3,4,\dots$ equations to get
$\{\lambda_{3,4,\dots}(r),\mu_{3,4,\dots}(r)\}$, but of course the
corresponding equations become increasingly complicated.
Notice that from the obtained results up to second order in $\alpha$,
the corresponding metric has constant scalar curvature for any
value of the parameters $C_1, C_2, \dots, C_6$. As a matter of fact,
this metric is nothing but the standard Schwarzschild-$AdS$ geometry,
and can be easily rewritten in the usual form by making a
trivial time reparametrization as follows:
\begin{eqnarray}
\overline{\lambda}(r)\,&\equiv&\,\lambda(r)\left[ -C_{2} (D^2-3D+2)
+C_{4}\left(D^2-3D +2\right)\alpha +C_6\alpha^{2}+{\cal O}(\alpha^{3})\right]\nonumber\\
\overline{\mu}(r)\,&\equiv&\,\mu(r)
\label{lambda_reparametrization}
\end{eqnarray}
Therefore, at least up to second order, the only static,
spherically symmetric solutions which are analytical
in $\alpha$ are the standard Schwarzschild-$AdS$ space-times.
On the other hand, taking the inverse point of view,
if we assume the solutions to be of the $AdS$ BH type at
any order in the $\alpha$ expansion we can write:
\begin{eqnarray}
\lambda(r)\,\equiv\,\mu(r)\,=\,1-\left(\frac{\overline{R}_{S}}{r}\right)^{D-3} + J r^2
\label{mu_lambda}
\end{eqnarray}
as solution for the Einstein equations \eqref{eqn_lambda} and \eqref{eqn_mu}
with the gravitational lagrangian \eqref{expansion_en_alpha_fR} and
\begin{eqnarray}
\overline{R}_{S}\,&=&\,R_{S}+\Sigma_{i=1}^{\infty} C_{i}\alpha^{i}\nonumber\\
J\,&=&\,-\frac{\Lambda_{D}}{(D-1)}+\Sigma_{i=1}^{\infty}J_{i}\alpha^{i}
\end{eqnarray}
where $R_S$ and $C_i$ are arbitrary constants and the $J_{i}$
coefficients can be determined from (\ref{root_R0}):
\begin{eqnarray}
R-(D-2)\Lambda_{D} +\alpha g(R)+2(D-1)J(1+\alpha g'(R))\,=\,0
\label{algebraic_eqn}
\end{eqnarray}
with $R\,=\,-D(D-1)J$. Expanding the previous equation in powers of $\alpha$
it is possible to find a recurrence relation for the $J_i$ coefficients; namely,
for the coefficient $J_l$ (with $l>0$), we find:
\begin{eqnarray}
&&(2-D)(D-1)J_{l}+\sum_{i=0}^{l-1}\sum_{cond.1}\frac{1}{i_{1}!i_{2}!
\ldots i_{l-1}!}(J_{1})^{i_{1}}(J_{2})^{i_{2}} \ldots (J_{l-1})^{i_{l-1}}g^{(i)}(R_{0})
+\nonumber\\
&& 2(D-1)\sum_{k=0}^{l-1}J_{k}\sum_{i=0}^{l-k-1}\sum_{cond.2}
\frac{1}{i_{1}!i_{2}!\ldots i_{l-k-1}!}(J_{1})^{i_{1}}(J_{2})^{i_{2}}\ldots
(J_{l-k-1})^{i_{l-k-1}}g^{(i+1)}(R_{0}) \,=\,0
\end{eqnarray}
with $R_{0}=-D(D-1)J_{0}\,\equiv\,D\Lambda_{D}$, where the first sum
is done under the condition 1 given by:
\begin{eqnarray}
\sum_{m=1}^{l-1}i_{m}=i, \,\, i_{m}\,\in \, \Bbb{N}\cup\{0\}\,\,\; \mbox{and}\,\,\;
\sum_{m=1}^{l-1} m \,i_{m}=l-1
\end{eqnarray}
and the second one under the condition 2:
\begin{eqnarray}
\sum_{m=1}^{l-k-1}i_{m}=i, \,\, i_{m}\,\in \, \Bbb{N}\cup\{0\}\,\,\; \mbox{and}\,\,\;
\sum_{m=1}^{l-k-1} m \,i_{m}=l-k-1
\end{eqnarray}
For instance we have:
\begin{eqnarray}
J_{1}\,&=&\
\frac{A(g\,;\,D,\,\Lambda_{D})}{(D-2)(D-1)}\nonumber\\
J_{2}\,&=&\
- \frac{A(g\,;\,D,\,\Lambda_{D})[(D - 2) g'(R_{0})
- 2D\Lambda_{D}g''(R_{0})]}{(D - 2)^2 (D-1)}
\label{lambda&mu_expansions}
\end{eqnarray}
where $A(g\,;\,D,\,\Lambda_{D})\equiv g(R_0)-2\Lambda_{D} g'(R_0)$.
Now we can consider the possibility of removing $\Lambda_{D}$ from
the action from the very beginning and still getting an $AdS$ BH
solution with an effective cosmological constant depending on
$g(R)$ and its derivatives evaluated at $R_{0}\equiv0 $. In this case the results,
order by order in $\alpha$ up to order $\alpha^2$, are:
\begin{eqnarray}
J_{0}(\Lambda_{D}=0)\,&=&\,0\nonumber\\
J_{1}(\Lambda_{D}=0)\,&=&\,\frac{g(0)}{(D -2) (D -1)}\nonumber\\
J_{2}(\Lambda_{D}=0)\,&=&\,-\frac{g(0) g'(0)}{(D -2) (D -1)}
\end{eqnarray}
As we see, in the context of $f(R)$ gravities, it is possible to have a BH in
an $AdS$ asymptotic space
even if the initial cosmological constant $\Lambda_D$ vanishes.
To end these two sections, we can summarize by saying that in the context
of $f(R)$ gravities the only spherically symmetric and
static solutions of negative constant curvature are the standard BH in
$AdS$ space. The same result applies in the general case (without imposing
constant curvature) in perturbation theory up to second order. However,
the possibility of having static and spherically symmetric solutions
with non constant curvature cannot be excluded in the case of
$f(R)$ functions which are not analytical in $\alpha$.
\section{Black-hole thermodynamics}
In order to consider the different thermodynamic quantities for
the $f(R)$ black-holes in $AdS$, we start from the temperature. In principle
there are two different ways of introducing this quantity for the
kind of solutions we are considering here. Firstly we can use the
definition coming from Euclidean quantum gravity \cite{HGG}. In this case one
introduces the Euclidean time $\tau=it$ and the Euclidean metric
$ds_E^2$ is defined as:
\begin{equation}
\text{d}s_E^2=\text{d}\sigma^2+r^2\text{d}\Omega^2_{D-2}
\end{equation}
where:
\begin{equation}
\text{d}\sigma^2=e^{-2\Phi(r)}A(r)\text{d}\tau^2+A^{-1}(r)\text{d}r^2.
\end{equation}
The metric corresponds only to the region $r>r_H$ where
$r_H$ is the outer horizon position with $A(r_H)=0$. Expanding
$\text{d}\sigma^2$ near $r_H$ we have:
\begin{equation}
\text{d}\sigma^2=e^{-2\Phi(r_H)}A'(r_H)\rho
\text{d}\tau^2+\frac{\text{d}\rho^2}{A'(r_H)\rho}
\end{equation}
where $\rho=r-r_H$. Now we introduce the new coordinates $\tilde R$ and
$\theta$ defined as:
\begin{eqnarray}
\theta=\frac{1}{2}e^{-\Phi(r_H)}A'(r_H)\tau
\nonumber\\
\tilde R=2\sqrt{\frac{\rho}{A'(r_H)}}
\end{eqnarray}
so that:
\begin{equation}
\text{d}\sigma^2=\tilde R^2\text{d}\theta^2+\text{d}\tilde R^2.
\end{equation}
According to the Euclidean quantum gravity prescription $\tau$
belongs to the interval defined by $0$ and $\beta_E=1/T_E$. On the
other hand, in order to avoid conical singularities, $\theta$
must run between $0$ and $2\pi$. Thus it is found that
\begin{equation}
T_E=\frac{1}{4\pi} e^{ -\Phi(r_H) } A'(r_H)
\end{equation}
Another possible definition of temperature was firstly proposed in
\cite{Hawking1974}, stating that the temperature can be given in terms of
the surface gravity $\mathcal{K}$ at the horizon as:
\begin{eqnarray}
T_{\mathcal{K}}\equiv\frac{\mathcal{K}}{4\pi}
\end{eqnarray}
where $\mathcal{K}$ is given by:
\begin{eqnarray}
\mathcal{K}\,=\,\lim_{r\rightarrow
r_H}\frac{\partial_{r}g_{tt}}{\sqrt{|g_{tt}g_{rr}|}}.
\end{eqnarray}
Then it is straightforward to find:
\begin{eqnarray}
T_{\mathcal{K}}= T_E.
\end{eqnarray}
Therefore both definitions give the same result for this kind of
solution. Notice also that in any case the temperature depends
only on the behaviour of the metric near the horizon but it is
independent from the gravitational action. By this we mean that
different actions having the same solutions have also the same
temperature. This is not the case for other thermodynamic
quantities as we will see later. Taking into account the results
in previous sections and for simplicity we will concentrate only on
constant curvature $AdS$ BH solutions with $\Phi=0$ as a natural
choice and:
\begin{equation}
A(r)=1-\frac{R_S^{D-3}}{r^{D-3}}+\frac{r^2}{l^2}.
\end{equation}
Then, both definitions of temperature lead to:
\begin{equation}
\beta=1/T=\frac{4 \pi l^2 r_H}{(D-1)r_H^2+(D-3)l^2}.
\end{equation}
Notice that the temperature is a function of $r_H$ only, i.e. it
depends only on the BH size. In the limit $r_H$ going to zero the
temperature diverges as $T \sim 1/r_H$, while for $r_H$ going to
infinity $T$ grows linearly with $r_H$. Consequently $T$ has a
minimum at:
\begin{equation}
r_{H0}=l\sqrt{\frac{D-3}{D-1}}
\end{equation}
corresponding to a temperature:
\begin{equation}
T_0=\frac{\sqrt{(D-1)(D-3)}}{2 \pi l}
\end{equation}
The existence of this minimum was established long ago by Hawking and Page
\cite{Hawking&Page} for $D=4$ and is well known.
More recently, Witten extended this result to higher dimensions
\cite{Witten}. The minimum is important in order to set the regions
with different thermodynamic behaviors and stability properties.
For $D=4$, an
exact solution can be found for $r_H$:
\begin{eqnarray}
r_{H}\,=\,l\frac{2^{1/3} \left(9 \frac{R_{S}}{l}+\sqrt{12+81 \frac{R_{S}^2}{l^2}}
\right)^{2/3}-(24)^{1/3}}{6^{2/3} \left(9\frac{R_S}{l}
+\sqrt{12+81\frac{R_S^2}{l^2}}\right)^{1/3}}
\end{eqnarray}
Thus, in the $R_S\ll l$ limit, we find $r_H\simeq R_S$, whereas in the opposite
case $l\ll R_S$, we get $r_H\simeq(l^2 R_S)^{1/3}$.
For the particular
case $D=5$, $r_H$ can also be exactly found to be:
\begin{equation}
r_H^2=\frac{l^2}{2}\left(\sqrt{1+\frac{4R_S^2}{l^2}}-1\right)
\end{equation}
which goes to $R_S^2$ for $R_S\ll l$ and to $lR_S$ for $l\ll R_S$.
Notice that for any $T > T_0$, we have two
possible BH sizes: one corresponding to the small BH phase with
$r_H < r_{H0}$ and the
other corresponding to the large BH phase with $r_H > r_{H0}$.
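These statements are easy to verify numerically; the following sketch (plain Python with \texttt{numpy}, taking $D=5$ and units where $l=1$ purely for illustration) evaluates $T(r_H)$ on a grid and recovers the analytic minimum:
\begin{verbatim}
import numpy as np

D, l = 5, 1.0
rH = np.linspace(0.05, 3.0, 4000)
T = ((D - 1) * rH**2 + (D - 3) * l**2) / (4 * np.pi * l**2 * rH)
i = T.argmin()
print(rH[i], l * np.sqrt((D - 3) / (D - 1)))               # numeric vs analytic r_H0
print(T[i], np.sqrt((D - 1) * (D - 3)) / (2 * np.pi * l))  # numeric vs analytic T_0
\end{verbatim}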
In order to compute the remaining thermodynamic quantities, the Euclidean action
\begin{equation}
S_E=-\frac{1}{16 \pi G_D}\int \text{d}^{D}x\sqrt{g_E}\,(R+f(R))
\end{equation}
is considered. When the previous expression is evaluated on some
metric with a periodic Euclidean time with period $\beta$, it equals
$\beta$ times the free energy $F$ associated to this metric.
Extending to $f(R)$ theories the computation by Hawking and
Page \cite{Hawking&Page}, generalized to higher dimensions by Witten
\cite{Witten}, we compute the difference of this action evaluated on
the BH and the $AdS$ metric, which can be written as:
\begin{equation}
\Delta S_E=-\frac{R_0+f(R_0)}{16 \pi G_D}\Delta V
\end{equation}
where $R_0=-D(D-1)/l^2$ and $\Delta V$ is the volume difference
between both solutions which is given by:
\begin{equation}
\Delta V=\frac{\beta \mu_{D-2}}{2(D-1)}(l^2r^{D-3}_H-r^{D-1}_H)
\end{equation}
so that:
\begin{equation}
\Delta S_E=-\frac{(R_0+f(R_0))\beta \mu_{D-2}}{32 \pi (D-1)
G_D}(l^2r^{D-3}_H-r^{D-1}_H)=\beta F.
\end{equation}
Notice that from this expression it is straightforward to obtain the
free energy $F$. We see that provided $-(R_0+f(R_0))>0$, which is the usual
case in $EH$ gravity, we have
$F>0$ for $r_H<l$ and $F<0$ for $r_H>l$. The temperature corresponding to the
horizon radius $r_H=l$ will be denoted $T_1$ and it is given by:
\begin{equation}
T_1=\frac{D-2}{2\pi l}.
\end{equation}
Notice that for $D>2$ we have $T_0<T_1$.
On the other hand, the total thermodynamical energy may now be obtained as:
\begin{equation}
E=\frac{\partial \Delta S_E}{\partial \beta}=-\frac{(R_0+f(R_0))M l^2}{2(D-1)}
\end{equation}
where $M$ is the mass defined in \eqref{BHmass}. This is one of the possible
definitions for the BH energy for $f(R)$ theories, see for instance \cite{Multamaki2007}
for a more general discussion. For the $EH$
action we have $f(R)=-(D-2)\Lambda_D$ and then it is immediate to
find $E=M$. However this is not the case for general $f(R)$
actions. Notice that positive energy in $AdS$ space-time requires
$R_0+f(R_0)<0$. Now the entropy $S$ can be obtained from the well-known
relation:
\begin{equation}
S=\beta E- \beta F.
\end{equation}
Then one gets:
\begin{equation}
S=-\frac{(R_0+f(R_0))l^2 A_{D-2}(r_H)}{8 (D-1)G_D}
\end{equation}
where $A_{D-2}(r_H)$ is the horizon area given by
$A_{D-2}(r_H)\equiv r_H^{D-2}\mu_{D-2}$. Notice that once again
positive entropy requires $R_0+f(R_0)<0$.
For the $EH$ action we
have $R_0+f(R_0)=-2(D-1)/l^2$ and then we get the famous
Hawking-Bekenstein result \cite{Bekenstein}
\begin{equation}
S=\frac{ A_{D-2}(r_H)}{4G_D}
\end{equation}
Finally we can compute the heat capacity $C$ which can be written
as:
\begin{equation}
C=\frac{\partial E}{\partial T}=\frac{\partial E}{\partial
r_H}\frac{\partial r_H}{\partial T}
\end{equation}
Then it is easy to find
\begin{equation}
C=\frac{-(R_0+f(R_0))(D-2)\mu_{D-2}r^{D-2}_Hl^2}{8G_D(D-1)}
\frac{(D-1)r^2_H+(D-3)l^2}{(D-1)r^2_H-(D-3)l^2}.
\end{equation}
For the particular case of the $EH$ action we find:
\begin{equation}
C=\frac{(D-2)\mu_{D-2}r^{D-2}_H}{4G_D}\frac{(D-1)r^2_H+(D-3)l^2}{(D-1)r^2_H-(D-3)l^2}.
\end{equation}
In the Schwarzschild limit $l$ going to infinity this formula
gives:
\begin{equation}
C=-\frac{(D-2)\mu_{D-2}r^{D-2}_H}{4G_D}< 0
\end{equation}
which is the well-known negative result for standard BHs. In the
general case, assuming like in the $EH$ case $(R_0+f(R_0))<0$, we
find $C>0$ for $r_H > r_{H0}$ (the large BH region) and $C<0$ for
$r_H < r_{H0}$ (the small BH region). For $r_H \sim r_{H0}$ ($T$
close to $T_0$) $C$ is divergent. Notice that in $EH$ gravity, $C<0$ necessarily
implies $F>0$ since $T_0<T_1$.
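The three regimes can be illustrated numerically; in the sketch below (plain Python, with $D=5$, $l=G_D=1$, and an arbitrary negative value $K\equiv R_0+f(R_0)=-2$ standing in for whatever a particular $f(R)$ model yields), the signs of $C$ and $F$ are evaluated for a small, an intermediate and a large BH:
\begin{verbatim}
from math import gamma, pi, copysign

D, l, G_D, K = 5, 1.0, 1.0, -2.0          # K stands in for R0 + f(R0) < 0
mu = 2 * pi**((D - 1) / 2) / gamma((D - 1) / 2)
rH0 = l * ((D - 3) / (D - 1))**0.5
for rH in (0.5 * rH0, 0.5 * (rH0 + l), 2.0 * l):
    F = -K * mu / (32 * pi * (D - 1) * G_D) * (l**2 * rH**(D - 3) - rH**(D - 1))
    C = (-K * (D - 2) * mu * rH**(D - 2) * l**2 / (8 * G_D * (D - 1))
         * ((D - 1) * rH**2 + (D - 3) * l**2)
         / ((D - 1) * rH**2 - (D - 3) * l**2))
    print(f"r_H = {rH:.3f}: sign(C) = {copysign(1, C):+.0f},"
          f" sign(F) = {copysign(1, F):+.0f}")
\end{verbatim}
It prints $C<0,\,F>0$ for $r_H<r_{H0}$; $C>0,\,F>0$ for $r_{H0}<r_H<l$; and $C>0,\,F<0$ for $r_H>l$, as described above.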
In any case, for $f(R)$ theories with $R_0+f(R_0)<0$, we have found
a scenario similar to the one described in
full detail by Hawking and Page in \cite{Hawking&Page}
for the $EH$ case.
For $T<T_0$, the only possible state of thermal equilibrium in an
$AdS$ space is pure radiation with negative free energy and there are
no stable BH solutions. For $T>T_0$ we have two possible BH
solutions; the small (and light) BH and the large (heavy) BH. The
small one has negative heat capacity and positive free energy as the
standard Schwarzschild BH. Therefore it is unstable under Hawking
radiation decay. For the large BH we have two possibilities; if
$T_0<T<T_1$ then both the heat capacity and the free energy are
positive and the BH will decay by tunneling into radiation, but if
$T>T_1$ then the heat capacity is still positive but the free energy
becomes negative. In this case the free energy of the heavy BH will
be less than that of pure radiation. Then pure radiation will tend
to tunnel or to collapse to the BH configuration in equilibrium with
thermal radiation.
In general $f(R)$ theories one could also in principle consider the
possibility of having $R_0+f(R_0)>0$. However in this case
the energy and the entropy would be negative and therefore
in such theories the $AdS$ BH solutions would be unphysical.
Therefore $R_0+f(R_0)<0$ can be regarded as a necessary condition
for $f(R)$ theories in order to support $AdS$ BH solutions. Using
(\ref{root_R0}), this condition implies $1+f'(R_0)>0$. This last
condition has a clear physical interpretation in $f(R)$ gravities
(see \cite{silvestri} and references therein). Indeed, it can be
interpreted as the condition for the effective Newton's constant
$G_{eff}=G_{D}/(1+f'(R_0))$ to be positive. It can also be
interpreted from the quantum point of view as the condition which
prevents the graviton from becoming a ghost.
\section{Particular examples}
In this section we will consider some particular $f(R)$ models in
order to calculate the heat capacity $C$ and the free energy $F$ as
the relevant thermodynamical quantities for local and global
stability of BH's. For these particular models, $R_0$ can be
calculated exactly by using \eqref{root_R0}. For the sake of
simplicity we will fix the $D$-dimensional Schwarzschild radius in
(\ref{BHmass}) as $R_S^{D-3}=2$. The models we have considered are:
\subsection{Model I: $f(R)\,=\,\alpha (-R)^{\beta}$ }
Substituting in \eqref{root_R0} for arbitrary dimension we get
\begin{eqnarray}
R \left[\left(1-\frac{2}{D}\right)-\alpha(-R)^{\beta-1}
\left(1-\frac{2}{D}\beta\right)\right] \,=\,0
\label{eqn_Model_I}
\end{eqnarray}
We will only consider non-vanishing curvature solutions, thus we find:
\begin{eqnarray}
R_{0}\,=\,-\left[\frac{2-D}{(2\beta-D)\alpha}\right]^{1/(\beta-1)}
\label{R0_Model_I}
\end{eqnarray}
Since $D$ is assumed to be larger than 2,
the condition $(2\beta-D)\alpha<0$ provides well defined scalar curvatures $R_{0}$.
Two separated regions have thus to be studied: Region $1$ $\{\alpha<0,\,\beta>D/2\}$
and Region $2$ $\{\alpha>0,\, \beta<D/2\}$.
For this model we also get
\begin{eqnarray}
1+f'(R_{0})\,=\,\frac{D(\beta-1)}{2\beta-D}
\end{eqnarray}
Notice that in Region $1$, $1+f'(R_0)>0$ for $D>2$, since in this region
$\beta>D/2>1$ holds automatically.
In Region $2$, we find that for $D>2$, the requirement $R_0+f(R_0)<0$, i.e. $1+f'(R_0)>0$,
fixes $\beta<1$, since this is the most stringent constraint over the
parameter $\beta$ in this region. Therefore the physical space of
parameters in Region $2$ is restricted to be $\{\alpha>0,\,\beta<1\}$.
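A quick numerical check (plain Python for $D=4$, with one sample point from each region chosen only for illustration) confirms \eqref{R0_Model_I} and the closed form of $1+f'(R_0)$:
\begin{verbatim}
D = 4
for alpha, beta in ((-1.0, 3.0), (1.0, 0.5)):    # Region 1 / Region 2 samples
    R0 = -((2 - D) / ((2 * beta - D) * alpha))**(1 / (beta - 1))
    fp = -alpha * beta * (-R0)**(beta - 1)       # f'(R0) for f = alpha(-R)^beta
    print(f"R0 = {R0:+.3f}, 1 + f'(R0) = {1 + fp:.4f}"
          f" vs {D * (beta - 1) / (2 * beta - D):.4f}")
\end{verbatim}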
In Figs. 1-3 we plot the physical regions in the parameter space $(\alpha,\beta)$
corresponding to the different signs of $(C,F)$.
\subsection{Model II: $f(R)\,=\, -(-R)^{\alpha}\,\text{exp}(q/R)-R$}
In this case, a vanishing curvature solution appears
provided $\alpha>1$. In addition, we also have:
\begin{eqnarray}
R_{0}\,=\,\frac{2 q}{2\alpha-D}
\label{R0_model_II}
\end{eqnarray}
To get $R_{0}<0$ the condition $q(2\alpha-D)<0$ must hold and two separated
regions will be studied: Region $1$ $\{q>0,\, \alpha<D/2\}$ and Region $2$
$\{q<0,\,\alpha>D/2\}$.
In Figs. 4-6 we plot the regions in the parameter space $(\alpha,q)$
corresponding to the different signs of $(C,F)$.
\subsection{Model III: $f(R)\,=\, R\,(\text{log}\alpha R)^{q}-R$}
A vanishing curvature solution also appears in this model. The non trivial one
is given by
\begin{eqnarray}
R_{0}\,=\,\frac{1}{\alpha}\text{exp}\left(\frac{2 q}{D-2}\right)
\label{R0_model_III}
\end{eqnarray}
Since $R_{0}$ has to be negative, $\alpha$ must be negative as well, so that
$\alpha R_{0}>0$. Moreover, $\alpha R$, and therefore $\alpha R_{0}$, has to be
bigger than one, so that $\log\alpha R$ is a positive number that can be raised
to an arbitrary power $q$; this imposes $q>0$, as can be read from the argument of
the exponential in the previous equation. Therefore there exists a unique accessible
region for the parameters in this model: $\alpha<0$ and $q>0$.
In Figs. 7-8 we plot the regions in the parameter space $(\alpha,q)$
corresponding to the different signs of $(C,F)$.
\subsection{Model IV: $f(R)\,=\,-\alpha\frac{c_1\left(\frac{R}{\alpha}\right)^{n}} {1+\beta\left(\frac{R}{\alpha}\right)^n}$}
This model has been proposed in \cite{Hu&Sawicki2007} as cosmologically viable. Throughout this section, we consider $n=1$ for this model. Hence imposing $f'(R_0)=\epsilon$ we get
\begin{eqnarray}
c_{1}\,=\,-\frac{(D - 2 (1 + \epsilon))^2}{D^2 \epsilon}
\label{epsilon}
\end{eqnarray}
hence a relation between $c_1$, $D$ and $\epsilon$ can be imposed and therefore this model would only depend on two parameters $\alpha$ and $\beta$.
A vanishing curvature solution also appears in this model and two non trivial
curvature solutions are given by:
\begin{eqnarray}
R_{0}^{\pm}\,=\,\frac{\alpha \left[(c_1-2) D+4\pm\sqrt{c_1} \sqrt{c_1 D^2-8 D+16}\right]}{2 \beta (D-2)}
\label{R0_model_IV}
\end{eqnarray}
The corresponding $1+f'(R_0)$ values for \eqref{R0_model_IV} are
\begin{eqnarray}
1+f'(R_{0}^{\pm})\,=\,1-\frac{4(D-2)^2}{\left(\sqrt{c_1 D^2-8D+16}\pm\sqrt{c_1} D\right)^2}
\label{1+f'(R0)_model_IV}
\end{eqnarray}
where $c_1>0$ and $c_1>(8D-16)/D^2$ are required for real $R_{0}$ solutions. Since $1+f'(R_{0})>0$ is required, that
means that $\text{sign}(R_{0}^{\pm})=\text{sign}(\alpha\beta)$. It can be shown that $1+f'(R_{0}^{-})$ is not
positive for any allowed value of $c_1$ and therefore this curvature solution $R_{0}^{-}$ is excluded for our study.
$1+f'(R_{0}^+)>0$ only requires $c_1>0$ for dimension $D\geq4$ and
therefore $\epsilon<0$ is required according to \eqref{epsilon}.
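Both statements can be checked numerically from \eqref{epsilon} and \eqref{1+f'(R0)_model_IV}; for $D=4$ and $\epsilon=-10^{-6}$ (the value used below), the short Python computation that follows gives $1+f'(R_{0}^{+})\simeq 1+\epsilon>0$ and a large negative value for $1+f'(R_{0}^{-})$:
\begin{verbatim}
from math import sqrt

D, eps = 4, -1e-6
c1 = -(D - 2 * (1 + eps))**2 / (D**2 * eps)
root = sqrt(c1 * D**2 - 8 * D + 16)
for label, sign in (('+', +1), ('-', -1)):
    val = 1 - 4 * (D - 2)**2 / (root + sign * sqrt(c1) * D)**2
    print(f"1 + f'(R0^{label}) = {val:.6g}")
\end{verbatim}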
Therefore only two accessible regions need to be studied: Region $1$
$\{\alpha>0,\, \beta<0\}$ and Region $2$, $\{\alpha<0,\,\beta>0\}$.
\\
In Figs. 9-10 we plot the thermodynamical regions in the parameter space $(\alpha,\beta)$ for a chosen $\epsilon=-10^{-6}$. Note that $1+f'(R_{0}^{+})$ depends neither on $\alpha$ nor on $\beta$, and that $R_{0}^{+}$ only depends on the quotient $\alpha/\beta$ for a fixed $c_1$.
\section{Conclusions}
In this work we have considered static spherically symmetric
solutions in $f(R)$ theories of gravity in arbitrary dimensions.
After discussing the constant curvature case (including charged
black-holes in 4 dimensions), we have studied the general case
without imposing, a priori, the condition of constant curvature. We
have performed a perturbative analysis around the $EH$ case which
makes it possible to study those solutions which are regular in the
perturbative parameter $\alpha$. We have found explicit expressions
up to second order for the metric coefficients, which give rise to
constant curvature (Schwarzschild $AdS$) solutions as in the $EH$
case.
On the other hand, we have also calculated thermodynamical
quantities for the $AdS$ black holes and considered the issue of the
stability of this kind of solutions. We have found that the
condition for a $f(R)$ theory of gravity to support this kind of
black holes is given by $R_0+f(R_0)<0$ where $R_0$ is the constant
curvature of the $AdS$ space-time. This condition has been seen to
imply also that the effective Newton's constant is positive and that
the graviton does not become a ghost. For these $f(R)$ gravities the
qualitative thermodynamic behavior of the BH is the same as the one
found by Hawking and Page for the $AdS$ BH, but the values of some
thermodynamic quantities differ among different $f(R)$
gravities.
Finally we have considered several explicit examples of $f(R)$
functions and studied the parameter regions in which BH in such
theories are locally stable and globally preferred, finding the same
qualitative behaviour as in standard $EH$ gravity.
\\
{\bf Acknowledgements:} This work has been supported by
Ministerio de Ciencia e Innovaci\'on (Spain) project numbers
FIS 2008-01323 and FPA
2008-00592, UCM-Santander PR34/07-15875 and UCM-BSCH GR58/08 910309.
\section{Introduction}
\label{sec:intro}
Federated learning (FL)~\cite{mcmahan2017communicationefficient} is a learning paradigm that trains models from distributed users or participants (e.g., mobile devices) without requiring raw training data to be shared, alleviating the rising concern of privacy issues when learning with sensitive data and facilitating learning deep models by enlarging the amount of data to be used for training.
In a typical FL algorithm, each user trains a model locally using their own data and a server iteratively aggregates users' incremental updates or intermediate models, converging to a model that fuses training information from all users.
A major challenge in FL comes from the heterogeneity of users.
One source of heterogeneity is distributional differences in training data collected by users from diverse user groups~\cite{fallah2020personalized,zhu2021datafree}.
Yet another source is the difference of computing resources, as different types of hardware used by users usually result in varying computation budgets.
For example, consider an application scenario of FL from mobile phones~\cite{hard2019federated}, where different types of mobile phones (e.g., generations of the same brand) may have drastically different computational power.
Data heterogeneity should be carefully handled during the learning as a single model trained by FL may fail to accommodate the differences~\cite{yu2020salvaging}.
A variety of approaches have been proposed to address the issue, such as customizing network structures~\cite{li2020fedbn,arivazhagan2019federated} or tailoring training strategies \cite{fallah2020personalized,dinh2020personalized} for each user.
Even though hardware heterogeneity is ubiquitous, its impact on FL processes and possible solutions have received very limited attention so far.
The impacts of the two types of heterogeneity become aggravated when participating users desire adversarial robustness during the inference stage, against imperceptible noise that can significantly mislead model predictions.
To address this issue, a straightforward extension of FL, federated adversarial training (FAT), can be adopted, an idea explored in \cite{zizzo2020fat,reisizadeh2020robust}.
Locally, each user trains models with adversarially augmented samples, namely adversarial training (AT)~\cite{xie2020adversarial}.
As studied in centralized settings, AT is data-hungry and computationally expensive \cite{shafahi2019adversarial}.
Therefore, involving a fair amount of users in FAT is essential, given the fact that each individual user may not have enough data to perform AT.
However, this implies an increasing difficulty for fitting diverse data distributions and more intensive computation for each user, which could be $3-10$ times more costly than the standard equivalent \cite{shafahi2019adversarial,zhang2019you}.
The computation overhead can be prohibitive for FL users with limited computational budget such as mobile devices.
As such, it is often unrealistic to enforce \emph{all} users in an FL process to conduct AT locally, despite the fact that robustness is indeed a strongly desired or even required property for all users.
This conflict raises a challenging yet interesting question: Is it possible to \emph{propagate adversarial robustness in FL} so that budget-limited users can benefit from robustness training of users with abundant computational resources?
\begin{figure}[tb]
\centering
%
\begin{subfigure}{0.67\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/PropagateRobustness.png}
\vskip -0.1in
\caption{} %
\label{fig:prop_rob}
\end{subfigure}
\hfil
\begin{subfigure}{0.3\textwidth}
\vskip 0.1in
\includegraphics[width=\textwidth]{fig/highlight/highlight}
\vskip -0.1in
\caption{} %
\label{fig:highlight}
\end{subfigure}
%
%
%
%
%
%
\vskip -0.1in
\caption{(a) We define a novel problem setting, where standard-training (ST) users, which might be limited by data or computational resources, can ``share'' robustness from adversarial-training (AT) users who can afford it.
(b) Comparison of robustness on a varying portion of AT users, where a 5-domain digit recognition dataset is distributed to $50$ users in total and details are in \cref{sec:app:exp_highlight}. %
%
}
\vspace{-0.2in}
%
\end{figure}
Motivated by the question above, we formulate a novel problem setting called Federated Robustness Propagation (FRP), as depicted in \cref{fig:prop_rob}.
We consider a rather common non-iid FL setting that involves budget-sufficient users (AT users) that conduct adversarial training, and budget-limited ones (ST users) that can only afford standard training.
The goal of FRP is to propagate the adversarial robustness from AT users to ST users.
Note that sharing adversarial data is prohibited for mitigating the overhead of adversarial-data generation, due to the privacy consideration of the FL framework.
In \cref{fig:highlight}, we show that independent AT by users without FL (\texttt{local AT}) will not yield a robust model since each user has scarce training data.
Directly extending an existing FL algorithm \emph{FedAvg} \cite{mcmahan2017communicationefficient} or a heterogeneity-mitigated one \emph{FedBN} \cite{li2020fedbn} with AT treatments, named as FATAvg and FATBN,
give very limited capability of propagating robustness.
The limitation may arise from the following two reasons:
\textbf{1)}~Heterogeneity of users results in distinct noise behavior in different users, degrading the transferability of model robustness. %
\textbf{2)}~Structural differences between clean and adversarial samples may further impede robustness propagation~\cite{xie2019intriguing}.
To address the aforementioned challenges, we propose a novel method Federated Robust Batch-Normalization (FedRBN) to facilitate propagation of adversarial robustness among FL users.
\textbf{1)} We design a surprisingly simple linear method that transmits the robustness by copying batch-normalization (BN) statistics, inspired by the strong connection between model robustness and statistic parameters in the BN layer~\cite{schneider2020improving};
\textbf{2)} To efficiently propagate robustness among non-iid users, we weight and average multiple AT users' statistics as BN for every ST user;
\textbf{3)} Facing the structural difference between the clean and adversarial data, we train two separate BNs for each data type, which are adaptively chosen at the inference stage.
Our method is communication-efficient as it only incurs a one-time additional communication after training.
We conduct extensive experiments demonstrating the feasibility and effectiveness of the proposed method.
In \cref{fig:highlight}, we highlight some experimental results from \cref{sec:exp}.
When only $20\%$ of non-iid users used AT during learning, the proposed FedRBN yields robustness competitive with the best all-AT-user baseline (FATBN), with only a $2\%$ drop (out of $59\%$) in robust accuracy.
Note that even though our method with $100\%$ AT users raises the upper bound of robustness, such a bound is usually not attainable because of the presence of resource-limited users that cannot afford AT during learning.
%
\section{Related Work}
\label{sec:related}
\vskip -0.1in
\textbf{Federated learning for robust models}.
The importance of adversarial robustness in the context of federated learning, i.e., federated adversarial training (FAT), has been discussed in a series of recent literature~\cite{zizzo2020fat,reisizadeh2020robust,kerkouche2020federated}.
Zizzo~\emph{et al.}~\cite{zizzo2020fat} empirically evaluated the feasibility of practical FAT configurations (e.g., ratio of adversarial samples) augmenting FedAvg with AT but only in \emph{iid} and label-wise non-\emph{iid} scenarios.
The adversarial attack in FAT was extended to a more general affine form, together with theoretical guarantees of distributional robustness \cite{reisizadeh2020robust}.
It was found that in a communication-constrained setting, a significant drop exists both in standard and robust accuracies, especially with non-\emph{iid} data \cite{shah2021adversarial}.
In addition to the challenges investigated above, this work studies challenges imposed by hardware heterogeneity in FL, which was rarely discussed.
Especially, when only limited users have devices that afford AT, we strive to efficiently share robustness among users, so that users without AT capabilities can also benefit from such robustness.
\textbf{Robust federated optimization}.
Another line of related work focuses on the robust aggregation of federated user updates~\cite{kerkouche2020federated,fu2019attackresistant}.
Especially, Byzantine-robust federated learning~\cite{blanchard2017machine} aims to defend malicious users whose goal is to compromise training, e.g., by model poisoning~\cite{bhagoji2018analyzing,fang2020local} or inserting model backdoor~\cite{bagdasaryan2018how}.
Various strategies \rev{aim} to eliminate the malicious user updates during federated aggregation~\cite{chen2017distributed,blanchard2017machine,yin2018byzantinerobust,pillutla2020robust}.
\rev{However, most of them assume the normal users are from similar distributions with enough samples such that the malicious updates can be detected as outliers.
Therefore, these strategies could be less effective on attacker detection when a finite dataset is given~\cite{wu2020federated}.}
Even though both the proposed FRP and Byzantine-robust studies are concerned with robustness, they have fundamental differences: the proposed work focuses on \emph{the robustness during inference}, i.e., after the model is learned and deployed, whereas Byzantine-robust work focuses on the robust learning process.
As such, all Byzantine-robust techniques can be combined with the proposed approach to provide training robustness.
\section{Problem Setting: Federated Robustness Propagation (FRP)}
In this section, we will review AT, present the unique challenges from hardware heterogeneity in FL and formulate the problem of federated robustness propagation (FRP). In this paper, we assume that a dataset $D$ includes sampled pairs of images $x\in \mathbb{R}^d$ and labels $y\in \mathbb{R}^c$ from a distribution ${\mathcal{D}}$.
Though we restrict the data to images in this paper, our discussion can be generalized to other data forms.
We model a classifier, mapping from the $\mathbb{R}^{d}$ data/input space to classification logits $f: \mathbb{R}^{d} \rightarrow \mathbb{R}^{c}$, by a deep neural network (DNN).
Whenever not causing confusion, we use the symbol of a model and its parameters interchangeably.
For brevity, we slightly abuse $\mathbb{E}[\cdot]$ for both empirical average and expectation and use $[N]$ to denote $\{1,\dots,N\}$.
\subsection{Standard training and adversarial training}
An \emph{adversarial attack} applies a bounded noise $\delta_\epsilon: \norm{\delta_\epsilon} \le \epsilon$ to an image $x$ such that the perturbed image $A_\epsilon(x) \triangleq x+\delta_\epsilon$ can mislead a well-trained model to give a wrong prediction.
The norm $\norm{\cdot}$ can take a variety of forms, e.g., $L_{\infty}$-norm for constraining the maximal pixel scale. %
A model $f$ is said to be \emph{adversarially robust} if it can predict labels correctly on a perturbed dataset $\tilde D = \{(A_\epsilon(x), y) | (x, y)\in D\}$, while the standard accuracy on $D$ is not greatly impacted.
Consider the following general learning objective:
\begin{align}
\min \nolimits_f L(f, D) = \min \nolimits_f {1\over |D|}\sum \nolimits_{(x,y)\in D} [ (1 - q)\ \ell_c (f;x,y) + q\ \ell_a (f;x,y) ], \label{eq:AT_ST_loss}
\end{align}
where $\ell_c$ is a standard classification loss on clean images and $\ell_a$ is an adversarial loss promoting robustness.
\cref{eq:AT_ST_loss} performs \emph{standard training} if $q=0$, and \emph{adversarial training} if $q\in (0, 1]$.
Without loss of generality, we limit our discussion to $q\in\{0, 0.5\}$.
A popular instantiation of \cref{eq:AT_ST_loss} is based on PGD attack \cite{madry2019deep,tsipras2019robustness}:
$\ell_c(f;x,y) = \ell(f(x), y), ~ \ell_a(f;x,y) = \max_{\norm{\delta} \le \epsilon} \ell(f(x+\delta), y)$,
where $\norm{\cdot}$ is the $L_\infty$-norm,
$\ell$ can be the cross-entropy loss, i.e., $\ell(f(x), y) = - \sum_{t=1}^c y_t \log (f(x)_t)$ where $t$ is the class index and $f(x)_t$ represents the $t$-th output logit.
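For concreteness, the inner maximization is commonly approximated by a few projected gradient steps; the sketch below (a standard $L_\infty$ PGD attack in PyTorch, with step size and iteration count chosen only for illustration) is one typical instantiation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Approximate max_{||delta||_inf <= eps} ell(f(x+delta), y) by PGD."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start in the ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # ascend the loss along the gradient sign, then project back
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)                    # keep a valid image range
\end{verbatim}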
\subsection{Problem setup and challenges}
We start with a typical FL setting: a finite set of distributions ${\mathcal{D}}_i$ for $i\in [C]$, from which a set of datasets $\{D_k\}_{k=1}^K$ are sampled and distributed to $K$ users' devices.
Users from distinct domains associated with ${\mathcal{D}}_i$ optimize objectives like \cref{eq:AT_ST_loss}. Some users can afford AT (\emph{AT users}, $q_k = 0.5$), while the remaining users cannot and use standard training (\emph{ST users}, $q_k = 0$).
If the two groups of users train models separately, the models of ST users will be much less robust than those of AT ones.
Note that data exchange among users is forbidden in the FL setting for privacy reasons.
The goal of \emph{federated robustness propagation (FRP)} is to transfer the robustness from AT users to ST users at minimal computation and communication costs while preserving data locally. Formally, the FRP objective minimizes:
\begin{align}
\operatorname{FRP}(\{f_k\}; \{D_k | D_k \sim {\mathcal{D}}_i\},{\mathbf{q}}) \triangleq \sum\nolimits_{k\in [K]} {1\over |D_k|} \sum\nolimits_{(x,y)\in D_k} [ (1 - q_k) \ell_c (f_k) + q_k \ell_a (f_k) ], \label{eq:FRP}
\end{align}
where ${\mathbf{q}}\triangleq[q_1, \dots, q_K]$.
Note that, different from FAT \cite{zizzo2020fat}, FRP assumes that the $D_k$ are sampled from different distributions and that there is at least one zero entry in ${\mathbf{q}}$.
In the federated setting, each user's model is trained separately when initialized by a global model, and is aggregated to a global model at the end of each epoch.
A popular aggregation technique is FedAvg~\cite{mcmahan2017communicationefficient},
which averages parameters by $f= {1\over K} \sum_{k=1}^K a_k f_k$ with normalization coefficients $a_k$ proportional to $|D_k|$.
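For reference, this aggregation admits a very short implementation over the users' parameter dictionaries; the following sketch (plain PyTorch state dicts, with the common convention of normalizing the coefficients to sum to one; function and variable names are ours) illustrates it:
\begin{verbatim}
def fedavg(user_states, sizes):
    """Weighted average f = sum_k a_k f_k, with a_k proportional to |D_k|."""
    total = float(sum(sizes))
    return {name: sum((n / total) * s[name] for s, n in zip(user_states, sizes))
            for name in user_states[0]}
\end{verbatim}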
Remarkably, \cref{eq:FRP} formalizes two types of common user heterogeneity in FL.
The first one is the \emph{hardware heterogeneity} where users' varying computation budgets result in sparsity of ${\mathbf{q}}$.
A node of tight computation budget, e.g., smartphone, may join FL with $q=0$, while a powerful one, e.g., desktop computer, uses $q=0.5$ \cite{hard2019federated}.
Besides, \emph{data heterogeneity} is represented by ${\mathcal{D}}_i$ differing across $i$. We limit our discussion to the common feature distribution shift (on $x$), in contrast to the label distribution shift (on $y$), as previously considered in \cite{li2020fedbn}.
Such distribution shift often happens when users are distributed across different environments, e.g., sensor data collected indoor and outdoor.
\noindent\textbf{New Challenges.} We emphasize that \emph{jointly} addressing the two types of heterogeneity in \cref{eq:FRP} forms a new challenge, distinct from either of them considered exclusively.
First, the sparsity in ${\mathbf{q}}$ worsens the data heterogeneity, as adversarial augmentation introduces an additional distribution shift in the hidden representations~\cite{xie2019intriguing}.
That means even if two users sample from the same distribution, their classification layers may operate on different distributions.
Second, the data heterogeneity makes the transfer of robustness non-trivial~\cite{shafahi2019adversarially}.
Hendrycks~\emph{et al.}\ discussed the transfer of models adversarially trained on multiple domains and massive samples~\cite{hendrycks2019using}.
In \cite{shafahi2019adversarially}, Shafahi~\emph{et al.} firstly studied the transferability of adversarial robustness from one data domain to another without the data-hungry problem.
They proposed fine-tuning the robustness-sensitive layers in a neural network on a target domain.
Distinguished from Shafahi~\emph{et al.}'s work, the FRP problem focuses on propagating robustness from multiple AT users to multiple ST users who have diverse distributions.
Thus, fine-tuning all source models in ST users is often not possible due to prohibitive computation costs.
%
\section{Method: Federated Robust Batch-Normalization (FedRBN)}
\subsection{Robustness propagation by copying debiased BN layers}
\label{sec:dbn_copy}
In centralized learning, an important observation is that robustness is highly correlated with the BN statistics~\cite{xie2019intriguing}.
We extend this investigation to the FL setting, where we assume all parameters other than BN layers are shared. There are significant differences in BN statistics (mean and variance) between ST and AT users from the same domain, as shown in \cref{fig:bn_clean_noise_stat}.
This observation indicates that directly using local BN statistics can hardly grant robustness to an ST user, and suggests a possible way to transfer robustness: leveraging the BN layers from AT users in ST users when predicting possibly adversarial input images.
However, the distributions of users from distinct domains can be quite different~\cite{joaquinquinonero-candela2008dataset}, and therefore directly copying BN among users can suffer from the distribution shift across domains.
This motivates us to develop a shift-aware debiasing method.
\begin{figure}[t]
\vspace{-0.1in}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.49\textwidth]{fig/corr_analysis_DomainNet/M2M_digits_sigma_diff}
\hfil
\includegraphics[width=0.49\textwidth]{fig/corr_analysis_DomainNet/M2M_digits_mu_diff}
\vspace{-0.3in}
\caption{Difference of clean/noise statistics
}
\label{fig:bn_clean_noise_stat}
\end{subfigure}
\hfil
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.49\textwidth]{fig/corr_analysis_DomainNet/M2M_digits_sigma_diff_corr}
\hfil
\includegraphics[width=0.49\textwidth]{fig/corr_analysis_DomainNet/M2M_digits_mu_diff_corr}
\vspace{-0.3in}
\caption{Correlation of relative statistics in the 2nd BN layer.}
\label{fig:bn_xdomain_stat_corr}
\end{subfigure}
\vspace{-0.05in}
\caption{Models are trained with decoupled BN layers on Digits dataset. \texttt{bn1} is the first BN layer in the network. (a) Results on SVHN. (b) The relative statistics are compared on MNIST versus SVHN.}
%
\label{fig:corr_analysis}
\end{figure}
To capture the domain bias, we can leverage the BN layers, as they model local distributions.
Ideally, modeling two users separately will yield BN statistics for the two corresponding distributions.
However, one challenge here is that the BN statistics in non-iid AT and ST users are \emph{biased simultaneously} by the domain difference and the adversarial noise.
As such, direct differential modeling will capture a mixture of both types of bias and would therefore not be effective for inferring the domain bias.
Instead, we propose to simultaneously model clean and noised data by BN for all AT users, since clean data are also available during training.
To do so, we replace standard BN layers by ones that use the \emph{dual batch-normalization (DBN)} structure~\cite{xie2020adversarial}, which keeps two sets of BN statistics: one for clean data and one for noise data.
The channel-wise mapping of DBN from the input layer $x$ to output $z$ is
\begin{align}
z = w \left[ (1-h) \tfrac{x - \mu}{\sqrt{\sigma^2 + \epsilon_0}} + h \tfrac{x - \mu^r}{\sqrt{{\sigma^r}^2 + \epsilon_0}} \right] + b, \label{eq:dualBN}
\end{align}
where $\mu$ and $\sigma^2$ are the mean and variance over all non-channel dimensions,
$\mu^r$ and ${\sigma^r}^2$ are the corresponding statistics for noised inputs,
$h$ serves as a model-wise switch, which is $0$ for clean inputs or $1$ for noised inputs,
$\epsilon_0$ is a small constant for the numerical stability.
Different from prior work, e.g., \cite{xie2020adversarial,xie2019intriguing}, we use a shared affine weight ($w$) and bias ($b$) for efficiency.
In the user-side training, we explicitly choose the clean or noise BN based on the input.
Though we introduce DBN for bias inference, DBN also benefits model performance, as it normalizes representations more precisely, as investigated in prior work \cite{xie2020adversarial}.
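For concreteness, the DBN layer can be sketched in PyTorch as below. This is a minimal sketch rather than the exact implementation: the attribute \texttt{h} standing for the switch in \cref{eq:dualBN} and the 2-D convolutional layout are our illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    # Two sets of BN statistics (clean/noise) sharing one affine transform.
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        # affine=False: each BN only tracks/applies its own statistics.
        self.bn_clean = nn.BatchNorm2d(num_features, eps=eps, affine=False)
        self.bn_noise = nn.BatchNorm2d(num_features, eps=eps, affine=False)
        # Shared affine weight w and bias b.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.h = 0  # model-wise switch: 0 = clean inputs, 1 = noised inputs

    def forward(self, x):
        z = self.bn_noise(x) if self.h == 1 else self.bn_clean(x)
        w = self.weight[None, :, None, None]
        b = self.bias[None, :, None, None]
        return w * z + b
\end{verbatim}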
\textbf{Transfer robustness from a single user via copying BN layers}.
With the clean BN embedded in DBN, we can estimate the distributional difference by an AT clean BN statistic tuple $(\mu_s, \sigma^2_s)$ and an ST (clean) BN statistic tuple $(\mu_t, \sigma_t^2)$.
Formally, we propose a debiased statistic estimation by
\begin{align}
\hat \mu_t^r = \mu_s^r + \lambda (\mu_t - \mu_s), ~ \hat {\sigma_t^r}^2 = {\sigma_s^r}^2 \left( {\sigma_t^2}/{(\sigma_s^2 + \epsilon_0)} \right)^{\lambda}, \label{eq:debias_copy}
\end{align}
where $\lambda$ is a hyper-parameter in $[0,1]$. %
Note that when the distributions are matched, i.e., $\mu_t=\mu_s$, debiasing is unnecessary and the debias term vanishes.
When $\lambda$ is $0$, the debias term is ignored.
On the other hand, since the debias term is roughly estimated, we may not want to choose an over-confident value either, e.g., set $\lambda =1 $.
Hence, we use $\lambda=0.1$ generally in our experiments.
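In code, \cref{eq:debias_copy} is a one-liner per statistic; the following NumPy sketch (function and variable names are ours) computes the debiased estimate:
\begin{verbatim}
import numpy as np

def debias_noise_stats(mu_s, var_s, mu_s_r, var_s_r, mu_t, var_t,
                       lam=0.1, eps0=1e-5):
    # Estimate the target noise-BN statistics from a source user's
    # clean/noise statistics and the target's clean statistics.
    mu_t_r_hat = mu_s_r + lam * (mu_t - mu_s)
    var_t_r_hat = var_s_r * (var_t / (var_s + eps0)) ** lam
    return mu_t_r_hat, var_t_r_hat
\end{verbatim}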
To justify the rationality of \cref{eq:debias_copy}, we contrast $\mu_s-\mu_t$ with $\mu_s^r -\mu_t^r$ in \cref{fig:bn_xdomain_stat_corr}.
The clean and noise BN statistics are estimated during training DBN,
and we observe a strong correlation of the relative difference among domains both for the mean and variance.
To understand the debiasing method, we provide a principled analysis on a simplified one-dimensional example.
We assume the noised inputs to a BN layer in user $s$ can be approximated by an affine-noise model
$\tilde x_s = \lambda x_s + \nu, ~ \tilde x_t = \lambda x_t + \nu$,
where $x_s \sim {\mathcal{N}}(\mu_s, \sigma_s^2)$, $x_t \sim {\mathcal{N}}(\mu_t, \sigma_t^2)$ and $\nu\sim {\mathcal{N}}(\mu', {\sigma'}^2)$ is domain-independent noise.
$\lambda$ is a constant scalar.
We further assume $\nu$ is independent from $x_s$ and $x_t$.
Taking expectation gives $\mu_s^r = \lambda \mu_s + \mu', ~ \mu_t^r = \lambda \mu_t + \mu'$; ${\sigma_s^r}^2 = \lambda^2 \sigma_s^2 + {\sigma'}^2, ~ {\sigma_t^r}^2 = \lambda^2 \sigma_t^2 + {\sigma'}^2$.
Due to the invariance assumption of $(\mu', \sigma')$, we have:
$ \hat \mu_t^r = \mu_s^r + \lambda (\mu_t - \mu_s), ~ \hat \sigma_t^r = \sqrt{ {\sigma_s^r}^2 + \lambda^2 (\sigma_t^2 - \sigma_s^2) } $.
However, $\hat \sigma_t^r$ is meaningless when ${\sigma_s^r}^2 + \lambda^2 (\sigma_t^2 - \sigma_s^2) < 0$.
To fix this, we use a division instead of subtraction to represent the relative relation in \cref{eq:debias_copy}.
This simplified example by no means serves as rigorous analysis, which is an open problem outside the scope of this paper.
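Nevertheless, the mean identity is easy to check numerically. The following Monte-Carlo sketch (ours) simulates the affine-noise model and confirms the additive debiasing of the mean:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam = 0.8                             # affine scale of the noise model
x_s = rng.normal(1.0, 2.0, 10**6)     # source clean inputs
x_t = rng.normal(-0.5, 1.2, 10**6)    # target clean inputs
nu = rng.normal(0.3, 0.5, 10**6)      # domain-independent noise

xt_r = lam * x_t + nu                 # noised target inputs
mu_s_r = (lam * x_s + nu).mean()      # noised source mean
mu_t_r_hat = mu_s_r + lam * (x_t.mean() - x_s.mean())
print(abs(mu_t_r_hat - xt_r.mean()))  # ~0 up to Monte-Carlo error
\end{verbatim}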
\textbf{Multiple-source propagation by weighted BN averaging}.
In FL with multiple AT users available, we propose a strategy to transfer robustness from multiple sources.
Given $N$ source BN statistics, we use a weighted average to estimate the noise BN of a target ST user
$\hat \mu_t^r = \sum_{i}^{N} \alpha_i \hat \mu_{t, s_i}^r$, where $\hat \mu_{t, s_i}^r$ is the estimated value from user $s_i$ by \cref{eq:debias_copy}.
Likewise, $\hat \sigma_t^r$ can be estimated.
However, the difference between the $s_i$-th adversarial distribution and the $t$-th counterpart is unknown.
To tackle the issue, we first present the following result:
\begin{lemma}[Informal] \label{thm:multi_src_domain}
Suppose the divergence between any data distribution ${\mathcal{D}}$ and its adversarial distribution $\tilde {\mathcal{D}}$ is bounded, i.e., $d_{{\mathcal{H}} \Delta {\mathcal{H}}} (\tilde {\mathcal{D}}, {\mathcal{D}}) \le d_{\epsilon}$ where $d_{{\mathcal{H}} \Delta {\mathcal{H}}}$ is the ${\mathcal{H}} \Delta {\mathcal{H}}$-divergence in hypothesis space ${\mathcal{H}}$.
If a target model is formed by the $\alpha_i$-weighted average of models from $D_{s_i}$, then the $\alpha_i$-weighted sum $\sum_i \alpha_i d_{{\mathcal{H}} \Delta {\mathcal{H}}} (D_{s_i}, D_t)$ of divergences between the source standard datasets $\{D_{s_i}\}$ and the target adversarial dataset $D_t$ upper-bounds the generalization error on the target adversarial data distribution $\tilde {\mathcal{D}}_t$.
\end{lemma}
The lemma extends an existing bound for federated domain adaptation \cite{peng2019federated}, and shows that the generalization error on the unseen target noised distribution $\tilde D_t$ is bounded by the $\alpha_i$-weighted distribution gaps.
Motivated by the analysis, we set $\alpha_i$ to be reversely proportional to the divergence between $D_{s_i}$ and $D_t$.
Specifically, we use a layer-averaged RBF-kernel to approximate the weight, i.e.,
$\alpha_i = {1\over L} \sum \nolimits_{l=1}^L \exp\left[-\gamma_{\text{rbf}} d_W^l(D_{s_i}, D_t) / p^l\right]$,
where $p^l$ is the number of channels in layer $l$.
The distribution discrepancy can be approximately measured by the Wasserstein distance $d_W^l(D_{s_i}, D_t) = \norm{\mu_{s_i}^l - \mu_t^l}^2_2 + \norm{\sigma_{s_i}^l - \sigma_t^l}^2_2$.
We use a large $\gamma_{\text{rbf}}$, i.e., $100\times \max_l p^l$, to contrast the in-distribution and out-of-distribution differences.
Lastly, we normalize $\alpha_i$ such that $\sum_i \alpha_i = 1$.
The formal analysis can be found in~\cref{thm:multi_src_domain:proof}.
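The weighting itself can be sketched as follows; this is our transcription, assuming each user's BN statistics are given as a list over layers of (mean, standard deviation) arrays:
\begin{verbatim}
import numpy as np

def source_weights(stats_sources, stats_target, gamma_rbf):
    # Layer-averaged RBF weights alpha_i over the AT sources s_i.
    alphas = []
    for stats_s in stats_sources:             # one entry per source s_i
        kernels = []
        for (mu_s, sig_s), (mu_t, sig_t) in zip(stats_s, stats_target):
            p_l = mu_s.size                   # channels in layer l
            d_w = np.sum((mu_s - mu_t) ** 2) + np.sum((sig_s - sig_t) ** 2)
            kernels.append(np.exp(-gamma_rbf * d_w / p_l))
        alphas.append(np.mean(kernels))
    alphas = np.asarray(alphas)
    return alphas / alphas.sum()              # normalize: sum_i alpha_i = 1
\end{verbatim}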
\subsection{FedRBN algorithm and its efficiency}
We are now ready to present the proposed two-stage Federated Robust Batch-Normalization (FedRBN) algorithm, as summarized
in \cref{fig:fedrbn_process}.
During training (\cref{alg:train}), we train models locally with decoupled clean and noise BNs for each user.
Locally trained models excluding BN are then aggregated by federated parameter averaging.
After training (\cref{alg:pre-inference}), we attentively copy BN parameters from multiple AT models into ST ones.
\begin{minipage}{0.49\textwidth}
\begin{algorithm}[H] %
%
\centering
\small
\caption{FedRBN: user training}
\label{alg:train}
\begin{algorithmic}[1]
\Require An initial model $f$ from the server, adversary $A(\cdot)$, dataset $D$, adversarial hyper-parameter $q$
\For{mini-batch $\{(x, y)\}$ in $D$}
%
\State Set $f$ to use clean BN by $h\leftarrow 0$
\State $L \leftarrow \mathbb{E}_{(x,y)} [ \ell(f, (x,y)) ]$
\If{$q > 0$}
\State Perturb data $\tilde x \leftarrow A(x)$
\State Set $f$ to use noise BN by $h \leftarrow 1$
\State $L \leftarrow (1-q) L + q \mathbb{E}_{(\tilde x,y)} [ \ell(f, (\tilde x,y))] $
\EndIf
\State Update $f$ by one-step gradient descent
\EndFor
\State \textbf{Upload} parameters of layers except BN layers
%
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\begin{algorithm}[H] %
%
\centering
\small
\caption{FedRBN: post-training}
\label{alg:pre-inference}
\begin{algorithmic}[1]
\Require Source AT users $\{s_i\}$, adversary $A(\cdot)$, local validation dataset $D_{\text{val}}$
\State Copy debiased noise-BN statistics attentively from users $\{s_i\}$ if the current user is an ST user %
\State $D_a \leftarrow \emptyset$
\State Set $f$ to use clean BN by $h\leftarrow 0$
\For{mini-batch $\{(x, y)\}$ in $D_{\text{val}}$}
\State $\tilde x \leftarrow A(x)$
\State $D_a \leftarrow D_a \cup \left\{(f(x), 0), (f(\tilde x), 1) \right\}$
\EndFor
\State Fit a noise detector $g(\cdot)$ on $D_a$
\State \textbf{Return} noise detector $g(\cdot)$, modified $f$
\end{algorithmic}
\end{algorithm}
\end{minipage}
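In PyTorch-style code, one local update of \cref{alg:train} reads as below; \texttt{set\_bn\_mode} and \texttt{attack} are hypothetical helpers standing in for the BN switch $h$ and the adversary $A(\cdot)$:
\begin{verbatim}
def local_train_step(model, batch, attack, q, optimizer, loss_fn):
    # One FedRBN user update: clean loss on clean BN, plus a
    # q-weighted adversarial loss on noise BN when q > 0.
    x, y = batch
    model.set_bn_mode(h=0)            # hypothetical helper: clean BN
    loss = loss_fn(model(x), y)
    if q > 0:
        x_adv = attack(model, x, y)   # adversary A(.), e.g. PGD
        model.set_bn_mode(h=1)        # noise BN
        loss = (1 - q) * loss + q * loss_fn(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}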
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-0.1in}
\centering
\includegraphics[width=0.5\textwidth]{fig/FedRBN_process}
\caption{Overview of our FedRBN algorithm.}
\label{fig:fedrbn_process}
%
%
%
%
%
%
\vspace{-0.1in}
\end{wrapfigure}
\noindent\textbf{Inference-stage BN selection.} One issue of FedRBN is the choice of BN at inference time. To balance accuracy and robustness, we need a strategy to automatically select the right BN for each sample, i.e., applying noise BN when the input is found to be an adversarial one.
Although the differences between clean and adversarial data are subtle in raw images, recent advances have shown promising detection results with the help of network structures.
For example, in \cite{pang2018robust}, the authors find that the non-maximal entropy of the logits is quite distinct between the two.
In \cite{feinman2017detecting}, their representations are separable under the similarity measurement of RBF-kernel (or K-density).
In \cref{fig:logits_tSNE_Digits,fig:logits_tSNE_DomainNet}, we visually show that adversarial examples can be identified in the space of logit vectors.
As described in \cref{alg:pre-inference}, we propose to fit a support vector machine (SVM) \cite{cortes1995supportvector}, denoted as $g(\cdot)$, with RBF kernels on the validation set of each user.
At inference, we predict $h$ in \cref{eq:dualBN} by $\hat h = g(f(x))$.
The use of RBF kernels is partially inspired by \cite{feinman2017detecting}, which used kernels on representations instead of logits.
We are aware of efforts that eliminate the detected adversarial samples from test/inference, e.g., \cite{pang2018robust}.
However, simply refusing to predict suspicious samples may break up the downstream services, especially in the challenging scenarios when the detection becomes sensitive and many samples are suspicious.
Instead, our method detects and predicts the suspicious samples using the robust model (with noise BN's), and does not refuse any predictions.
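A minimal scikit-learn sketch of the detector is given below; the random logits are placeholders for $f(x)$ and $f(\tilde x)$ collected on $D_{\text{val}}$:
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
logits_clean = rng.normal(0.0, 1.0, (200, 10))  # stand-in for f(x)
logits_adv = rng.normal(0.5, 1.0, (200, 10))    # stand-in for f(x_adv)

X = np.vstack([logits_clean, logits_adv])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 clean, 1 adversarial

# RBF-kernel SVM noise detector g(.); C=10 and gamma=1/10 follow the
# sensitivity study reported below.
g = SVC(C=10.0, gamma=0.1, kernel="rbf").fit(X, y)
h_hat = g.predict(logits_clean[:5])  # predicted switch h for the DBN
\end{verbatim}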
\textbf{BN operations}.
Since the BN statistics are only a small portion of any network and do not require back-propagation, an additional set of BN statistics will not significantly affect efficiency \cite{wang2020onceforall}. %
During training, because users do not send out BN layers, the communication cost is the same as a non-iid FL method (FedBN \cite{li2020fedbn}) and less than other fully-shared methods like FedAvg \cite{mcmahan2017communicationefficient}.
After the training is done, BN layers will be copied to transfer robustness, and such a one-time cost of transferring marginal components of networks is negligible compared to the total cost incurred in FL.
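To illustrate how small the BN portion of a network is, a quick parameter count on a toy network (ours; the architecture is arbitrary):
\begin{verbatim}
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, 3), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, 3), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 24 * 24, 10))
total = sum(p.numel() for p in model.parameters())
bn = sum(p.numel() for m in model.modules()
         if isinstance(m, nn.BatchNorm2d) for p in m.parameters())
print(bn / total)  # BN parameters are a tiny fraction of the model
\end{verbatim}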
\textbf{The training and inference of noise detector}.
Introducing a noise detector results in an additional training cost, but such overhead is marginal in practice.
First, the training is only done once after the network training is done.
We only need to forward the whole network once for each sample to obtain the logit required for training the noise detector.
Therefore, the overall overhead for receiving robustness is marginal compared to the overhead of AT.
Suppose adversarial samples are collected in real time and could be private.
Training a noise detector only requires approximately $a/T\times 100\%$ of the adversarial samples generated during training, where $T$ is the number of epochs and $a$ is the ratio of the validation-set size to the training-set size.
For example, with $T=150$ epochs and $a=0.1$, the detector uses well under $0.1\%$ of the generated adversarial samples.
%
\section{Experiments}
\label{sec:exp}
\textbf{Datasets and models}.
To implement a non-iid scenario, we adopt a close-to-reality setting where users' datasets are sampled from different distributions. We used two multi-domain datasets for the setting.
The first is a subset (30\%) of \textsc{Digits}, a benchmark for domain adaption~\cite{peng2019federated}.
\textsc{Digits} has $28\times28$ images and serves as a commonly used benchmark for FL~\cite{caldas2019leaf,mcmahan2017communicationefficient,li2020federateda}.
\textsc{Digits} includes $5$ different domains: MNIST (MN) \cite{lecun1998gradientbased}, SVHN (SV) \cite{netzer2011reading}, USPS (US) \cite{hull1994database}, SynthDigits (SY) \cite{ganin2015unsupervised}, and MNIST-M (MM) \cite{ganin2015unsupervised}.
The second dataset is \textsc{DomainNet}~\cite{peng2019moment} processed by \cite{li2020fedbn}, which contains 6 distinct domains of large-size $256\times 256$ real-world images: Clipart (C), Infograph (I), Painting (P), Quickdraw (Q), Real (R), Sketch (S).
For \textsc{Digits}, we use a convolutional network with BN (or DBN) layers following each convolutional or linear layer.
For the large-sized \textsc{DomainNet}, we use AlexNet \cite{krizhevsky2012imagenet} extended with BN layers after each convolutional or linear layer~\cite{li2020fedbn}.
\textbf{Training and evaluation}. %
For AT users, we use $n$-step PGD (projected gradient descent) attack \cite{madry2019deep} with a constant noise magnitude $\epsilon$.
Following \cite{madry2019deep}, we use $\epsilon=8/255$, $n=7$, and attack inner-loop step size $2/255$, for training, validation, and test.
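For reference, such an $n$-step $\ell_\infty$ PGD adversary can be sketched as:
\begin{verbatim}
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, n_steps=7):
    # n-step PGD in the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project to the ball
        x_adv = x_adv.clamp(0.0, 1.0)                 # valid pixel range
    return x_adv
\end{verbatim}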
We uniformly split the dataset for each domain into $10$ subsets for \textsc{Digits} and $5$ for \textsc{DomainNet}, following \cite{li2020fedbn}, which are distributed to different users, respectively.
Accordingly, we have $50$ users for \textsc{Digits} and $30$ for \textsc{DomainNet}.
Each user trains its local model for one epoch per communication round.
We evaluate the federated performance by \underline{standard accuracy} (SA), classification accuracy on the clean test set, and \underline{robust accuracy} (RA), classification accuracy on adversarial images perturbed from the original test set.
All metric values are averaged over users.
We defer other details of experimental setup such as hyper-parameters to \cref{sec:app:exp}, and focus on discussing the results.
\vspace{-0.5em}
\subsection{Comprehensive study}
\vspace{-0.5em}
To further understand the role of each component in the proposed FedRBN, we conduct a comprehensive study on its properties.
In experiments, we use three representative federated baselines combined with AT: FedAvg~\cite{mcmahan2017communicationefficient}, FedProx~\cite{li2020federateda}, and FedBN~\cite{li2020fedbn}.
We use FATAvg to denote the AT-augmented FedAvg, and similarly FATProx and FATBN. To implement hardware heterogeneity, we let $20\%$ of the users in each of $3$ out of the $5$ domains conduct AT.
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth]{fig/analysis/loss_curve_bmk}
\hfil
\includegraphics[width=0.24\textwidth]{fig/analysis/lambda_sens_RA_SA}
%
\hfil
\includegraphics[width=0.24\textwidth]{fig/separable_logits_Digits/logits_svm_C_ablation}
\hfil
\includegraphics[width=0.24\textwidth]{fig/separable_logits_Digits/logits_svm_gamma_ablation}
%
\vspace{-0.1in}
\caption{The convergence curves and parameter sensitivity of $\lambda$, $C$ and $\gamma$. $C$ is the regularization parameter and $\gamma$ the RBF-kernel parameter of the SVM, whose performance is evaluated on Digits domains.}
\label{fig:param_sens}
\vspace{-0.15in}
\end{figure}
\textbf{Convergence}. The first plot in \cref{fig:param_sens} shows convergence curves of different competing algorithms.
Since FedRBN only differs from FATBN by a DBN structure, FATBN and FedRBN have similar convergence rates that are faster than others.
We see that FedRBN converges even faster than FATBN. A possible reason is that because DBN decouples the normal and adversarial samples,
the representations after BN layers are more consistently distributed among non-iid users.
\newline
\textbf{Parameter Sensitivity of the $\lambda$, $C$ and $\gamma$}.
The second plot in \cref{fig:param_sens} shows that a preferred $\lambda$ is neither too small nor too close to $1$.
Since $\lambda$ is most critical when heterogeneity is severe, we evaluate the sensitivity when only one domain (MNIST in \textsc{Digits}) is adversarially trained.
We find that a larger $\lambda$ is more helpful for the RA, as the estimation is closer to the true robust statistics.
The remaining plots in \cref{fig:param_sens} demonstrate the stability of the noise detector, showing that the choice of $C=10$ and $\gamma=1/10$ for the SVM generally works.
\newline
\textbf{Ablation Study}.
We use FATBN as the base method and incrementally add FedRBN components:
\texttt{+DBN}, \texttt{+detector} (for adaptive BN selection), \texttt{+copy} BN statistics, \texttt{+debias} copying, and \texttt{+reweight} average before copying.
\cref{tbl:ablation_comp} shows that even though DBN is a critical structure, simply adding DBN does not help unless the noise detector is applied.
Also, the most influential component is copying, supporting our key idea.
\texttt{+debias} is more important in the single-AT-domain case, where the domain gap varies across ST domains,
whereas \texttt{+reweight} matters more when more AT domains are available.
Other than \texttt{+copy}, all components barely affect the SA.
\begin{table}[t!]
\caption{Ablation of different FedRBN components. Standard deviations are enclosed in brackets.}
\label{tbl:ablation_comp}
\tiny
\centering
\begin{tabular}{@{ }cc|*{6}{c}@{ }}
\toprule
%
%
\textbf{AT users} & \textbf{metric} & base & +DBN & +detector & +copy & +debias & +reweight \\ %
\midrule
%
%
20\% in 5/5 domains & RA & 41.2 (1.3) & 38.8 (1.4) & 42.4 (1.4) & 55.7 (1.0) & 55.7 (1.3) & 56.2 (1.0) \\
& increment & & -2.4 & +3.6 & \textbf{+13.3} & +0.0 & +0.5 \\
& SA & 86.4 (0.4) & 86.5 (0.4) & 86.2 (0.4) & 85.2 (0.6) & 85.2 (0.4) & 85.3 (0.4) \\
100\% in 1/5 domain & RA & 36.5 (1.8) & 34.3 (2.0) & 37.3 (1.8) & 48.1 (1.6) & 49.8 (1.6) & 49.8 (1.5) \\
& increment & & -2.2 & +3.0 & \textbf{+10.8} & +1.7 & +0.0 \\
& SA & 86.4 (0.4) & 86.4 (0.4) & 86.3 (0.4) & 84.3 (0.5) & 84.4 (0.4) & 84.3 (0.5) \\
\bottomrule %
%
\end{tabular}
\vspace{-0.2in}
\end{table}
\newline
\textbf{Impacts from Data Heterogeneity}.
To study the influence of different AT domains, we set up an experiment where AT users only reside on one single domain.
For simplicity, we let each domain contain a single user as in \cite{li2020fedbn} and utilize only 10\% of the \textsc{Digits} dataset.
The single AT domain plays the central role in gaining robustness from adversarial augmentation and propagating it to other domains.
The task is made harder by the multiple distinct gaps between the AT domain and the ST domains, and by the lack of knowledge of the domain relations.
Results in \cref{fig:benchmark_O2M} show the superiority of the proposed FedRBN, which improves the RA by more than $10\%$ in all cases with small drops in SA.
We see that the RA is the worst when MNIST serves as the AT domain, whereas RA propagates better when the AT domain is SVHN or SynthDigits.
A possible reason is that SVHN and SynthDigits are visually quite different from the remaining domains (see \cref{fig:dataset_samples}), forming larger domain gaps at test time.
\newline
\textbf{Impacts from Hardware Heterogeneity}.
We vary the number of AT users in training from $1/N$ (most heterogeneous) to $N/N$ (homogeneous) to compare the robustness gain.
\cref{fig:benchmark_partial_M2M} shows that our method consistently improves the robustness.
Even when all users conduct AT, FedRBN is the best due to the use of DBN.
When not all users conduct AT, our method needs only half of the users to be adversarially trained for the RA to be close to the upper bound (the fully-AT case).
\vspace{-0.5em}
\subsection{Comparison to baselines}
\vspace{-0.5em}
To demonstrate the effectiveness of the proposed FedRBN, we compare it with baselines on two benchmarks.
We repeat each experiment three times with different seeds.
We introduce two more baselines: personalized meta-FL extended with FAT (FATMeta)~\cite{fallah2020personalized} and federated robust training (FedRob)~\cite{reisizadeh2020robust}.
Because FedRob requires a projection matrix whose size is quadratic in the image size, which reaches $256^2\times256^2$ on \textsc{DomainNet} and does not fit into a common GPU, we exclude it from the \textsc{DomainNet} comparison.
Given the same setting, we constrain the computation cost to a similar scale for a cost-fair comparison.
We evaluate methods on two FRP settings.
\textbf{1) Propagate from a single domain}.
In reality, a powerful computation center may join the FL with many other users, e.g., mobile devices.
Therefore, the computation center is an ideal node for the computation-intensive AT.
Due to limitations of data collection, the center may only have access to a single domain, resulting in gaps to most other users.
We evaluate how well the robustness can be propagated from the center to others.
\textbf{2) Propagate from a few multi-domain AT users}.
In this case, we assume that to reduce the total training time, ST users are exempted from the AT tasks in each domain.
Thus, an ST user wants to gain robustness from other same-domain users.
The different-domain users may hinder the robustness performance due to the domain gaps in adversarial samples.
\cref{tbl:bmk_single_source_prop} shows that our method outperforms all baselines for all tasks, while incurring only marginal overhead (for fitting the noise detector).
Importantly, we show that only $20\%$ of users conducting AT is enough to achieve robustness comparable to the best fully-trained baseline.
Even evaluated by different attacks (see \cref{tbl:eval_attacks}), our method still outperforms others.
\begin{table}[t!]
\caption{Benchmarks of robustness propagation, where we measure the computation cost ($T$) in units of $10^{12}$ floating-point add-or-multiply operations (FLOPs).}
\label{tbl:bmk_single_source_prop}
\scriptsize
\centering
\begin{tabular}{@{ }l|*{18}{@{ }@{ }c}@{ }}
\toprule
& \multicolumn{9}{c}{Digits} & \multicolumn{9}{c}{DomainNet} \\
\cmidrule(r){2-10} \cmidrule(r){11-19}
AT users & \multicolumn{3}{c}{All} & \multicolumn{3}{c}{20\%} & \multicolumn{3}{c}{MNIST} & \multicolumn{3}{c}{All} & \multicolumn{3}{c}{20\%} & \multicolumn{3}{c}{Real} \\
\cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10}
\cmidrule(r){11-13} \cmidrule(r){14-16} \cmidrule(r){17-19}
Metrics & RA & SA & T & RA & SA & T & RA & SA & T & RA & SA & T & RA & SA & T & RA & SA & T \\
\midrule
FedRBN (ours) & \textbf{66.7} & \textbf{87.3} & 2218 & \textbf{56.2} & 85.3 & 665 & \textbf{49.8} & 84.3 & 665 & \textbf{30.6} & \textbf{53.7} & 38490 & \textbf{24.2} & 61.5 & 11547 & \textbf{18.0} & 59.5 & 10425 \\
FATBN & 60.0 & \textbf{87.3} & 2211 & 41.2 & \textbf{86.4} & 663 & 36.5 & \textbf{86.4} & 663 & 29.9 & 52.6 & 38363 & 20.3 & \textbf{63.2} & 11509 & 11.3 & \textbf{60.5} & 10390 \\
FATAvg & 58.3 & 86.1 & 2211 & 42.6 & 84.6 & 663 & 38.4 & 84.1 & 663 & 24.6 & 47.4 & 38363 & 15.4 & 57.8 & 11509 & 9.4 & 57.4 & 10390 \\
FATProx & 58.5 & 86.3 & 2211 & 42.8 & 84.5 & 663 & 38.1 & 84.1 & 663 & 24.8 & 47.1 & 38363 & 14.5 & 57.3 & 11509 & 9.4 & 56.6 & 10390 \\
FATMeta & 43.6 & 71.6 & 2211 & 35.0 & 72.6 & 663 & 35.3 & 72.2 & 663 & 6.0 & 23.5 & 38363 & 0.0 & 37.2 & 11509 & 0.1 & 38.1 & 10390 \\
FedRob & 13.1 & 13.1 & 2211 & 20.6 & 59.3 & 1032 & 17.7 & 48.9 & 645 & - & - & - & - & - & - & - & - & - \\
%
%
%
%
%
%
%
%
%
\bottomrule
\end{tabular}
\vspace{-0.2in}
%
\end{table}
\begin{table}[t]
\begin{minipage}{0.37\textwidth}
\caption{Compare FedRBN versus efficient federated AT on Digits.
}
%
\label{tbl:bmk_single_source_prop_free}
\tiny
\centering
\begin{tabular}{@{ }l|*{2}{@{ }c}*{2}{@{ }c}@{ }}
\toprule
& \multicolumn{2}{c}{20\% 3/5 AT domains} & \multicolumn{2}{c}{100\% Free AT \cite{shafahi2019adversarial}} \\ %
& \multicolumn{1}{c}{FedRBN} &
\multicolumn{1}{c}{FATAvg} &\multicolumn{1}{c}{FATAvg} & \multicolumn{1}{c}{FATBN} \\ %
\midrule
RA & \textbf{56.1} & 44.9 & 47.1 & 46.3 \\
SA & \textbf{86.2} & 85.6 & 63.6 & 57.4 \\
T & 273 & \textbf{271} & 276 & 276 \\
\bottomrule
\end{tabular}
\end{minipage}
\hfil
\begin{minipage}{0.59\textwidth}
\centering
\tiny
\caption{Evaluation of RA with various attacks on Digits. $n$ and $\epsilon$ are the step number and the magnitude of attack.}
\label{tbl:eval_attacks}
\begin{tabular}{@{ }l|*{5}{c}|c@{ }}
\toprule
Attack & PGD & PGD & MIA \cite{dong2018boosting} & MIA & LSA \cite{narodytska2016simple} & SA \\
$(n,~\epsilon)$ & (20,16) & (100,8) & (20,16) & (100,8) & (7, -) & - \\
\midrule
FedRBN & \textbf{42.8} & \textbf{54.5} & \textbf{39.9} & \textbf{52.2} & \textbf{73.5} & 84.2 \\
FATBN & 28.6 & 41.6 & 27.0 & 39.7 & 64.0 & \textbf{84.6} \\
FATAvg & 31.5 & 43.4 & 30.0 & 41.5 & 63.3 & 84.2 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-0.2in}
\end{table}
\textbf{Comparison to full efficient AT}.
In \cref{tbl:bmk_single_source_prop_free}, we show that when the computation time is comparable, our method achieves both better RA and SA than full-AT baselines.
For comparability, we train FedRBN for a limited 150 epochs while Free AT runs for 300 epochs.
Although Free AT improves robustness compared to FedAvg, it also greatly sacrifices SA performance.
Thanks to stable convergence and decoupled BN, FedRBN maintains both accurate and robust performance even though AT is not `free' for the few AT users.
\begin{figure}[ht]
\centering
\vspace{-0.1in}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.49\textwidth]{fig/benchmark/O2M_digits_RA.pdf}
\hfil
\includegraphics[width=0.49\textwidth]{fig/benchmark/O2M_digits_SA.pdf}
\caption{FRP from a single AT domain}
\label{fig:benchmark_O2M}
\end{subfigure}
%
%
%
%
%
%
%
%
%
%
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.49\textwidth]{fig/benchmark/M2M_partial_digits_RA.pdf}
\hfil
\includegraphics[width=0.49\textwidth]{fig/benchmark/M2M_partial_digits_SA.pdf}
%
\caption{FRP from partial AT users per domain}
\label{fig:benchmark_partial_M2M}
%
\end{subfigure}
\caption{Evaluating FRP performance with different FRP settings.
}
%
%
\vspace{-0.2in}
\end{figure}
%
\vspace{-0.5em}
\section{Conclusion}
\vspace{-0.5em}
In this paper, we investigate a novel problem setting, federated robustness propagation (FRP), and propose the FedRBN algorithm, which transfers robustness in FL through robust BN statistics.
Extensive experiments demonstrate the rationality and effectiveness of the proposed method, delivering both generalization and robustness in FL.
We believe such client-wise efficient robust learning can broaden the application scenarios of FL to users with diverse computation capabilities.
\textbf{Broader impact}.
The vulnerability of well-trained deep neural networks has drawn great attention in the research community.
In the scope of federated learning, different users have different needs and capabilities for such a robust learning schema.
For this sake, we propose a novel research task for propagating robustness among users in federated learning.
Our methods can be applied in many high-stakes real-world scenarios, including self-driving \cite{liang2019federated}, face recognition \cite{aggarwal2021fedface}, natural language prediction \cite{hard2019federated}, medical image analysis \cite{ronneberger2015unet,ng2021federated} and computer-aided diagnosis \cite{mansoor2015segmentation}.
{
\small
\bibliographystyle{unsrtnat}
%
%
The paper considers consistent couplings of the Einstein equations for a pseudo-Riemannian metric with a completely symmetric tensor $\om_{i_{1}\dots i_{k}}$. The couplings are defined for a class of tensors that includes trace-free Codazzi tensors and conformal Killing tensors. For example, for a trace-free symmetric tensor $\om_{i_{1}\dots i_{k}} = \om_{(i_{1}\dots i_{k})}$ that is Codazzi, meaning $D_{[i}\om_{j]p_{1}\dots p_{k-1}} = 0$, the resulting equations have the form
\begin{align}\label{stressenergyintro}
\begin{split}
\sR_{ij} &- \tfrac{1}{2}\sR_{h}h_{ij} + \tfrac{n-2}{2n}\ka h_{ij} = c\left(\rictr(\om\kwedge \om)_{ij} - \tfrac{1}{2}|\om|_{h}^{2}h_{ij}\right),
\end{split}
\end{align}
where $c$ is a constant, $\ka$ is a function, $\rictr(\om \kwedge \om)_{ij} = \om_{i}\,^{p_{1}\dots p_{k-1}}\om_{jp_{1}\dots p_{k-1}}$ is the Ricci trace of the Kulkarni-Nomizu product $\om \kwedge \om$, and $\sR_{ij}$ and $\sR_{h}$ are the Ricci and scalar curvature of the Levi-Civita connection $D$ of $h_{ij}$. What is important about the constant $c$ is its sign, as its modulus can be absorbed into the tensor $\om$; see Remark \ref{signremark}.
It follows from the differential Bianchi identity and the Codazzi condition on $\om_{i_{1}\dots i_{k}}$ that $\ka$ must be constant. Note that \eqref{stressenergyintro} is equivalent to
\begin{align}
\sR_{ij}- c\om_{i}\,^{p_{1}\dots p_{k-1}}\om_{jp_{1}\dots p_{k-1}} =\ka h_{ij}.
\end{align}
In the form \eqref{stressenergyintro}, the right-hand side can be viewed as a stress energy tensor and $\tfrac{n-2}{2n}\ka$ can be regarded as a cosmological constant.
Equations such as \eqref{stressenergyintro} can be seen as formal analogues of the Einstein-Maxwell equations, with a completely symmetric tensor coupled to the metric in place of a two-form. The equations \eqref{stressenergyintro} are a special case of the more general equations \eqref{stressenergy} discussed in detail in Section \ref{couplingsection}, that allow for coupling also with a trace-free conformal Killing tensor and a relaxation of the Codazzi condition. (For expository simplicity the discussion here in the introduction focuses on \eqref{stressenergyintro}.)
One reason for considering conformal Killing and trace-free Codazzi tensors is that on an oriented Riemann surface these are exactly the real parts of holomorphic vector fields and holomorphic $k$-differentials \cite{Fox-2dahs}.
The expression $(\om \kwedge \om)_{ijkl}$ can be viewed as a curvature term, and Lemma \ref{projectivehiggslemma} shows that a metric $h$ and a tensor $\om$ as in \eqref{stressenergyintro} such that the modified curvature $\sR_{ijkl} - c(\om \kwedge \om)_{ijkl}$ is projectively flat, meaning it solves
\begin{align}\label{projectivehiggsintro}
\sR_{ijkl} - c(\om \kwedge \om)_{ijkl} = -\tfrac{\ka}{n(n-1)}(h\kwedge h)_{ijkl},
\end{align}
for some $c \in \rea$, yield a solution of \eqref{stressenergyintro}. (When $n = 3$ the module of trace-free curvature tensors is trivial, so in this case the equations \eqref{stressenergyintro} and \eqref{projectivehiggsintro} are equivalent, meaning that $h$ and $\om$ solve \eqref{projectivehiggsintro} for some $c \in \rea$ if and only if they solve \eqref{stressenergyintro}.)
The relation of the equations \eqref{projectivehiggsintro} to the equations \eqref{stressenergyintro} is parallel to the relation, to which it specializes when $\om$ is identically zero, between constant sectional curvature metrics and Einstein metrics.
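For example, tracing \eqref{projectivehiggsintro} twice, using the identities $\scal(\om \kwedge \om) = |\om|^{2}_{h}$ (valid since $\om$ is trace-free) and $\scal(h \kwedge h) = -n(n-1)$ recorded in Section \ref{preliminariessection}, yields the scalar relation
\begin{align}
\sR_{h} - c|\om|^{2}_{h} = -\tfrac{\ka}{n(n-1)}\scal(h\kwedge h) = \ka,
\end{align}
which is the form in which \eqref{projectivehiggsintro} enters Theorem \ref{scalarcurvaturetheorem} below.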
Section \ref{examplesection} records examples of solutions of the equations \eqref{stressenergy} and \eqref{projectivehiggsintro}. The equations for a mean curvature zero nondegenerate immersion of a hypersurface in a pseudo-Riemannian space form and the equations for an affine sphere are special cases of the equations \eqref{projectivehiggsintro}. In these cases the tensor $\om$ is, respectively, the second fundamental form or the cubic form of the immersion. In both these contexts, a solution $(h, \om)$ to the equations \eqref{stressenergyintro} can be viewed as a more general geometric structure, not necessarily induced via an immersion. These examples show that, at least for $k \leq 3$, solutions to \eqref{projectivehiggsintro} abound, although it should be remembered that the basic existence result for affine spheres, due to Cheng and Yau, is highly nontrivial. (That the general formalism recovers in the $k = 2$ and $k = 3$ cases these well-known geometric settings serves also as a useful consistency check on the sometimes involved preliminary computations in Sections \ref{differentialoperatorssection} and \ref{vanishingsection}.)
Lemma \ref{stressenergyexamplelemma} shows that on any compact simple Lie group solutions to \eqref{stressenergyintro} can be constructed from invariant polynomials on its Lie algebra. These solutions show that \eqref{stressenergyintro} admits nontrivial solutions in arbitrarily high dimensions for arbitrarily large $k$.
If $h$ is Euclidean, so flat, the curvature terms in \eqref{stressenergyintro} vanish, and there remains a purely algebraic equation for the tensor $\om$. When $k = 3$, a harmonic cubic polynomial solving the algebraic part of \eqref{stressenergyintro} can be viewed as the structure tensor of a nonassociative algebra, and from this point of view it can be seen that solutions abound, as the author has shown in \cite[section $8$]{Fox-ahs} and \cite{Fox-crm, Fox-cubicpoly}. Here there are described two constructions of algebraic solutions that work for larger $k$. First, Theorem \ref{isoparametrictheorem} shows that all isoparametric polynomials yield solutions. Second, Lemma \ref{graphpolynomiallemma} shows that such a solution is associated with every $k$-regular graph.
General existence theorems are not studied here, but Section \ref{constraintsection} describes a priori constraints on solutions of the restricted system \eqref{projectivehiggsintro} when $h$ is Riemannian. Such results are motivated by and generalize results about immersed submanifolds that go back to Calabi \cite{Calabi-completeaffine}, in the context of affine spheres, Simons in the context of minimal immersions in spheres \cite{Simons}, and Cheng-Yau \cite{Cheng-Yau-maximalspacelike, Yau-cmcii} in the context of hypersurfaces of various kinds, as well as many others. The general pattern of such results is as follows. There is a Weitzenböck identity for the Laplacian of the squared-norm of a tensor satisfying some partial differential equation (here $\om$). Depending on the signs of some curvature terms there are two general classes of results. One uses refined Kato inequalities and sharp algebraic bounds on quadratic terms to obtain a differential inequality that, via a sort of argumentation developed most prominently by Calabi and Cheng-Yau, yields an upper bound on $|\om|^2$ that can be interpreted as an upper bound on the scalar curvature of $h$ (working harder along the same lines one could obtain an upper bound on the Ricci curvature of $h$). The second follows arguments from Simons \cite{Simons}, and integrates the Weitzenböck formula to obtain integral bounds on $|\om|^{2}$. In both cases the results along these lines obtained here are reported in Section \ref{constraintsection}.
Theorem \ref{scalarcurvaturetheorem} shows that if $h$ is a complete Riemannian metric on a manifold of dimension $n > 2$ that, together with a trace-free Codazzi tensor $\om$, solves \eqref{projectivehiggsintro} for $c > 0$ and $\ka \in \rea$, then:
\begin{enumerate}
\item If $\ka \geq 0$ then $\om$ is identically zero, and $h$ is a metric of constant sectional curvature.
\item If $\ka < 0$ then the scalar curvature $\sR_{h} = c|\om|^{2} + \ka$ of $h$ is nonpositive.
\end{enumerate}
In the special case corresponding to the context of the cubic form of a complete hyperbolic affine sphere, Calabi \cite{Calabi-completeaffine} showed the nonpositivity of the Ricci curvature. It seems reasonable to expect that, perhaps with some additional conditions, the nonpositivity of the scalar curvature in Theorem \ref{scalarcurvaturetheorem} can be improved to nonpositivity of the Ricci curvature. The remaining technical issue is to extend to $k > 3$ certain tensorial identities used in Calabi's argument for $k = 3$.
The proof of Theorem \ref{scalarcurvaturetheorem} requires the Weitzenböck formulas and refined Kato inequalities for trace-free Codazzi tensors and conformal Killing tensors described in sections \ref{differentialoperatorssection} and \ref{katosection} and a result of Cheng-Yau on the growth of solutions to a differential inequality of the form $\lap u \geq Bu^{1+\si} - Au$.
These results should be understood as generalizations of results for holomorphic tensors on surfaces and as a counterparts to classical vanishing theorems for holomorphic symmetric tensors due to Kobayashi \cite{Kobayashi-holomorphicsymmetric, Kobayashi-holomorphictensor}.
Theorem \ref{simonstheorem} is the integral bound parallel to that of Theorem \ref{scalarcurvaturetheorem} for solutions of \eqref{projectivehiggsintro}. Its $k = 2$ case recovers an integral estimate of the scalar curvature of a compact mean curvature zero hypersurface in a round sphere due to J. Simons \cite{Simons}, while its $k = 3$ case recovers the analogous integral estimate for the scalar curvature of a compact mean curvature zero Lagrangian submanifold of a constant holomorphic sectional curvature Kähler manifold due to B.-Y. Chen and K. Ogiue \cite{Chen-Ogiue}. The corresponding result for $k > 3$ obtained here is not sharp, unlike the results for $k \leq 3$, because certain inequalities for norms of tensors used in intermediate steps are sharp for $k \leq 3$ but can be improved when $k > 3$. Even when $k \leq 3$, the method of proof, uniform in $k$ and not supposing an immersion in an ambient space, seems a conceptual improvement.
Section \ref{preliminariessection} describes background material. Section \ref{differentialoperatorssection} describes Weitzenböck formulas for trace-free Codazzi and conformal Killing tensors.
Most of the material recounted in sections \ref{differentialoperatorssection} and \ref{vanishingsection} was presented in the author's \cite[section $6$]{Fox-ahs}. In any case, much of this material has been obtained before or since by others in various forms; see in particular \cite{Stepanov}, \cite{Shandra-Stepanov-Mikes}, \cite{Heil-Moroianu-Semmelmann} and \cite{Hadfield}. It is presented here to make the exposition self-contained, because notations differ substantially between different authors, and because the cases of trace-free Codazzi and conformal Killing tensors can be given a uniform treatment and this seems clarifying.
The Weitzenböck identities are needed here for showing in full generality that the equations \eqref{stressenergy} generalizing \eqref{stressenergyintro} that are considered here are well formulated, and for understanding when their hypotheses are nontrivial. This is described in Section \ref{couplingsection}. They are needed also to obtain the estimate leading to Theorem \ref{scalarcurvaturetheorem}. In the proof of Theorem \ref{scalarcurvaturetheorem} there are needed the refined Kato inequalities for trace-free Codazzi and conformal Killing tensors described in section \ref{katosection}. As is explained there, these can be deduced from general results in \cite{Calderbank-Gauduchon-Herzlich} and \cite{Branson-kato} (see also \cite{Hitchin-vanishing}), but considerable work is required to translate general representation theoretic statements into the concrete contexts here, and it is simpler to give the direct proofs recorded here.
The Weitzenböck identities presented here were found by the author in \cite{Fox-ahs}, and have been treated independently in \cite{Heil-Moroianu-Semmelmann} (there are notational differences because of different curvature conventions and because here identities are written for symmetric tensors that are assumed trace-free). These Weitzenböck identities serve to deduce for conformal Killing tensors and trace-free Codazzi tensors vanishing theorems that are of independent interest and which are summarized as Corollary \ref{bochnercorollary}.
This objective was incompletely realized in \cite{Fox-ahs}, where there was obtained the partial result recalled here as Theorem \ref{bochnerliedivtheorem}. For conformal Killing tensors of rank $k = 2$ this partial result coupled with a theorem of Berger-Ebin \cite{Berger-Ebin} is enough to deduce the vanishing theorem under the desired hypotheses of nonnegative or nonpositive sectional curvature, but for $k > 2$ the author then did not see some step necessary to make the argument work and the desired vanishing theorems for trace-free Codazzi and conformal Killing tensors were reported in \cite[Corollary $3.1$]{Fox-2dahs} only for surfaces. For trace-free Codazzi tensors the desired result for all $k$ has recently been proved \cite[Corollary $1$]{Shandra-Stepanov-Mikes} along the lines similar to those taken here. For conformal Killing tensors the desired result for all $k$ was obtained in \cite{Dairbekov-Sharafutdinov} based on a different line of argument. In \cite{Heil-Moroianu-Semmelmann} an elegant proof was given for conformal Killing tensors based on the Weitzenböck formulas, and that proof adapts immediately to give the corresponding (for nonnegative sectional curvature) vanishing theorem for trace-free Codazzi tensors.
\section{Preliminaries}\label{preliminariessection}
Throughout the paper $M$ denotes a connected, smooth manifold of dimension $n$. The abstract index conventions in the sense of Penrose \cite[chapter $2$]{Penrose-Rindler} are used; see also \cite{Wald}. With these conventions indices are labels indicating tensor type and symmetries and do not refer to any local frame. Let $h_{ij}$ be a pseudo-Riemannian metric with Levi-Civita connection $D$. Indices are raised and lowered using $h_{ij}$ and the dual symmetric bivector $h^{ij}$ defined by $h^{ip}h_{pj} =\delta_{j}\,^{i}$. The inner product on a tensor module used here is always that defined by complete contraction with $h_{ij}$, $\lb \al, \be \ra = \al^{i_{1}\dots i_{k}}\be_{i_{1}\dots i_{k}}$, which generally differs from the metric induced by that on $TM$ by a constant factor that depends on the symmetries of the tensors considered. The curvature $\sR_{ijk}\,^{l}$ of $D$ is defined by $2D_{[i}D_{j]}\om_{k} = -\sR_{ijk}\,^{p}\om_{p}$ for $\om_{i} \in \Ga(\ctm)$. The curvature tensor $\sR_{ijkl} = \sR_{ijk}\,^{p}h_{pl} \in \Ga(\mcurv(\ctm))$ is defined by lowering the last index. The Ricci and scalar curvatures of $h$ are defined by $\sR_{ij} = \rictr(\sR)_{ij} = \sR_{pij}\,^{p}$ and $\sR_{h} = h^{ij}\rictr(\sR)_{ij}$.
The submodule of $\tensor^{4}\std$ corresponding to the lexicographic filling of the Young diagram with two rows of two boxes is
\begin{align}\label{mcurvdefined}
\mcurv(\std) = \{\sY_{ijkl} \in \tensor^{4}\std: \sY_{[ij]kl} = \sY_{ijkl} = \sY_{ij[kl]}, \sY_{[ijk]l} = 0\}.
\end{align}
that comprises the tensors $\sY_{ijkl}$ having \emph{metric curvature tensor type}. If $\sR_{ijk}\,^{l}$ is the curvature tensor of the Levi-Civita connection of a pseudo-Riemannian metric $h_{ij}$, the tensor $\sR_{ijkl} = \sR_{ijk}\,^{p}h_{pl}$ takes values in the vector bundle $\mcurv(\ctm)$.
The space $\pol(\ste)$ of polynomials on the vector space $\ste$ comprises those functions on $\ste$ that are polynomials with respect to any choice of coordinates $x^{1}, \dots, x^{n}$ such that $dx^{1}, \dots, dx^{n}$ is a parallel frame with respect to the affine structure on $\ste$ determined by the lines in $\ste$. The centroaffine structure on $\ste$ induces a graded algebra structure, $\pol(\ste) = \oplus_{k \geq 0}\pol^{k}(\ste)$, where $\pol^{k}(\ste)$ denotes the subspace of polynomials which are homogeneous of degree $k$ with respect to dilations centered on the origin. The graded symmetric algebra $S(\sted) = \oplus_{k \geq 0}S^{k}(\sted)$ of finite linear combinations of completely symmetric covariant tensors on $\ste$ is canonically isomorphic to $\pol(\ste)$ via the linear map sending $\om \in S^{k}(\sted) \to P^{\om}(x) = \om_{i_{1}\dots i_{k}}x^{i_{1}}\dots x^{i_{k}} \in \pol^{k}(\ste)$. Polarization yields the inverse linear map sending $P \in \pol^{k}(\ste) \to \om^{P}_{i_{1}\dots i_{k}} = \tfrac{1}{k!}D_{i_{1}}\dots D_{i_{k}}P \in S^{k}(\sted)$. The product $\al \sprod \be$ of $\al \in S^{k}(\sted)$ and $\be \in S^{l}(\sted)$ such that $P^{\al}P^{\be} = P^{\al\sprod \be}$ is given by the symmetrized tensor product $(\al \sprod \be)_{i_{1}\dots i_{k+l}} = \al_{(i_{1}\dots i_{k}}\be_{i_{k+1}\dots i_{k+l})}$. For a constant nondegenerate symmetric tensor $h_{ij}$ on $\ste$, or, what is the same, a pseudo-Riemannian metric on $\ste$ parallel with respect to the standard flat affine connection $D$, define
\begin{align}\label{trmetdefined}
&\tr(\om)_{i_{1} \dots i_{k-2}} = \om_{i_{1}\dots i_{k-2}p}\,^{p}, && \met(\om) = h \sprod \om = h_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k+2})},
\end{align}
for $\om \in S^{k}(\sted)$. By convention, $\tr(\om) = 0$ when $k = 1$. These operators are adjoints, meaning that $\lb \al, \met(\be) \ra = \lb \tr(\al), \be\ra$ for $\al \in S^{k}(\ste)$ and $\be \in S^{k-2}(\ste)$. In particular, the orthogonal complement in $S^{k}(\sted)$ of the image of $\ho:S^{k-2}(\sted) \to S^{k}(\sted)$ is the space $S^{k}_{0}(\sted) = S^{k}(\sted) \cap \ker \tr$ of trace-free elements of $S^{k}(\sted)$.
Define a symmetric bilinear map $\kwedge:S^{k}(\std) \times S^{k}(\std) \to \mcurv(\std)$ by
\begin{align}\label{mcwedge}
(\al \kwedge \be)_{ijkl} = \al_{k[i}\,^{p_{1}\dots p_{k-2}}\be_{j]lp_{1}\dots p_{k-2}} - \al_{l[i}\,^{p_{1}\dots p_{k-2}}\be_{j]kp_{1}\dots p_{k-2}}.
\end{align}
It is immediate from the definition that $(\al \kwedge \be)_{[ijk]l} = 0$, so that $(\al \kwedge \be)_{ijkl} \in \mcurv(\std)$. When $k = 2$ the map $\kwedge$ is half what is usually called the Kulkarni-Nomizu product. There hold
\begin{align}\label{rictralbe}
\begin{split}
\rictr(\al \kwedge \be)_{ij} &= \al_{(i}\,^{p_{1}\dots p_{k-1}}\be_{j)p_{1}\dots p_{k-1}}
- \tfrac{1}{2}\al_{ij}\,^{p_{1}\dots p_{k-2}}\tr(\be)_{p_{1}\dots p_{k-2}} - \tfrac{1}{2}\be_{ij}\,^{p_{1}\dots p_{k-2}}\tr(\al)_{p_{1}\dots p_{k-2}},\\
\scal(\al \kwedge \be) & = \lb \al, \be \ra - \lb \tr\al, \tr \be \ra.
\end{split}
\end{align}
For $\al \in S^{2}(\std)$ there hold $\rictr(\al \kwedge h)_{ij} = \tfrac{2 - n}{2}\left(\al_{ij} + \tfrac{1}{n-2}\tr(\al)h_{ij}\right)$ and $\scal(\al \kwedge h) = (1-n)\tr \al$.
Taking $\al_{ij} = h_{ij}$ yields that $(h\kwedge h)_{ijkl} = 2h_{k[i}h_{j]l}$ satisfies $\rictr(h \kwedge h)_{ij} = (1-n)h_{ij}$ and $\scal(h \kwedge h) = -n(n-1)$.
It follows that the trace-free part $\tf(\sY)_{ijkl}$ of $\sY_{ijkl} \in \mcurv(\std)$ is given by
\begin{align}\label{tfweyl}
\begin{split}
\tf(\sY)_{ijkl} &= \sY_{ijkl} + \tfrac{2}{n-2}(\rictr(\sY)\kwedge h)_{ijkl} - \tfrac{1}{(n-2)(n-1)}\scal(\sY)(h\kwedge h)_{ijkl}\\
& = \sY_{ijkl} + \tfrac{2}{n-2}(\mr{\rictr(\sY)}\kwedge h)_{ijkl} + \tfrac{1}{n(n-1)}\scal(\sY)(h\kwedge h)_{ijkl}.
\end{split}
\end{align}
Suppose that $h$ is positive definite. An immediate consequence of \eqref{tfweyl} is
\begin{align}\label{tfweylnorm}
\begin{split}
|\sY|^{2}_{h} &= |\tf \sY|^{2}_{h} + \tfrac{4}{n-2}|\tf \rictr(\sY)|^{2}_{h} + \tfrac{2}{n(n-1)}(\scal(\sY))^{2}.
\end{split}
\end{align}
Any $\sY_{ijkl} \in \mcurv(\std)$ determines a self-adjoint endomorphism of $S^{2}\std$ defined by $a_{ij} \in S^{2}\std \to \sY_{ipjq}a^{pq} \in S^{2}\std$. In general this endomorphism does not preserve the subspace $S^{2}_{0}\std$. However, the modified endomorphism $a_{ij} \to \op{\sY}(a)_{ij} = a^{pq}(\sY_{ipjq} + \rictr(\sY)_{p(i}h_{j)q})$ restricts to a self-adjoint endomorphism of $S^{2}_{0}\std$. This is the $k = 2$ special case of the following construction that goes back to A. Lichnerowicz in \cite[section $10$]{Lichnerowicz-propagateurs}.
\begin{lemma}\label{hycommutelemma}
For $\sY_{ijkl} \in \mcurv(\std)$, the linear operator $\op{\sY}: S^{k}(\std) \to S^{k}(\std)$ defined by
\begin{align}\label{syom}
\begin{split}
\op{\sY}(\om)_{i_{1}\dots i_{k}} &= \rictr(\sY)_{p(i_{1}}\om_{i_{2}\dots i_{k})}\,^{p} + (1-k)\sY_{p(i_{1}i_{2}}\,^{q}\om_{i_{3}\dots i_{k})q}\,^{p}.
\end{split}
\end{align}
has the following properties:
\begin{enumerate}
\item The operators $\met:S^{k}(\std) \to S^{k+2}(\std)$ and $\tr:S^{k}(\std) \to S^{k-2}(\std)$ defined in \eqref{trmetdefined} commute with $\op{\sY}$ in the sense that there hold
\begin{align}\label{hycommute}
&(k+2)\op{\sY}(\met(\om)) = k\met(\op{\sY}(\om)),&& (k-2)\op{\sY}(\tr \om) = k\tr\op{\sY}(\om),
\end{align}
for $\om \in S^{k}\std$.
In particular $\op{\sY}(h^{\sprod k}) = 0$ for all $k \geq 1$.
\item $\op{\sY}(\tf \om) = \tf \op{\sY}(\om)$ for all $\om \in S^{k}(\std)$. Hence $\op{\sY}$ restricts to an endomorphism of $S^{k}_{0}\std$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is claimed that
\begin{align}\label{syomint}
\begin{split}
\rictr(\sY)_{p(i_{1}}\met(\om)_{i_{2}\dots i_{k+2})}\,^{p} & = \tfrac{k}{k+2}h_{(i_{1}i_{2}}\rictr(\sY)^{p}\,_{i_{3}}\om_{i_{4}\dots i_{k+2})p} + \tfrac{2}{k+2}\rictr(\sY)_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k+2})},\\
\sY^{p}\,_{(i_{1}i_{2}}\,^{q}\met(\om)_{i_{3}\dots i_{k+2})pq} & = \tfrac{k(k-1)}{(k+2)(k+1)} h_{(i_{1}i_{2}}\sY^{p}\,_{i_{3}i_{4}}\,^{q}\om_{i_{5}\dots i_{k+2})pq} + \tfrac{2}{(k+2)(k+1)}\rictr(\sY)_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k+2})}.
\end{split}
\end{align}
Combining \eqref{syomint} using \eqref{syom} yields the first identity of \eqref{hycommute}. The validity of \eqref{syomint} is shown as follows. Write
\begin{align}\label{syomint0}
\begin{split}
\tbinom{k+2}{2}\met(\om)_{i_{1}\dots i_{k+1}p}& = h_{pi_{k+1}}\om_{i_{1}\dots i_{k}} + kh_{p(i_{1}}\om_{i_{2}\dots i_{k})i_{k+1}}\\& + kh_{i_{k+1}(i_{1}}\om_{i_{2}\dots i_{k})p} + \tbinom{k}{2}h_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k})i_{k+1}p}.
\end{split}
\end{align}
Contracting \eqref{syomint0} with $\rictr(\sY)_{i_{k+2}}\,^{p}$ yields
\begin{align}\label{syomint1}
\begin{split}
\tbinom{k+2}{2}\rictr(\sY)_{i_{k+2}}\,^{p}\met(\om)_{i_{1}\dots i_{k+1}p} &= \rictr(\sY)_{i_{k+1}i_{k+2}}\om_{i_{1}\dots i_{k}} + k\rictr(\sY)_{i_{k+2}(i_{1}}\om_{i_{2}\dots i_{k})i_{k+1}} \\
\end{split}
\end{align}
Symmetrizing \eqref{syomint1} over the uncontracted indices yields the first identity in \eqref{syomint}. Relabeling $i_{k+1}$ as $q$ in \eqref{syomint0} and contracting the result with $\sY^{p}\,_{i_{k+1}i_{k+2}}\,^{q}$ yields
\begin{align}\label{syomint2}
\begin{split}
\tbinom{k+2}{2}&\sY^{p}\,_{i_{k+1}i_{k+2}}\,^{q}\met(\om)_{i_{1}\dots i_{k}pq} \\
&=\sY^{p}\,_{i_{k+1}i_{k+2}}\,^{q}\left(h_{pq}\om_{i_{1}\dots i_{k}} + kh_{p(i_{1}}\om_{i_{2}\dots i_{k})q} + kh_{q(i_{1}}\om_{i_{2}\dots i_{k})p} + \tbinom{k}{2}h_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k})pq}\right)\\
& = \rictr(\sY)_{i_{k+1}i_{k+2}}\om_{i_{1} \dots i_{k}} - k\sY_{i_{k+2}}\,^{q}\,_{i_{k+1}(i_{1}}\om_{i_{2}\dots i_{k})q}\\
&\quad - k\sY_{i_{k+1}}\,^{q}\,_{i_{k+2}(i_{1}}\om_{i_{2}\dots i_{k})q}
+ \tbinom{k}{2}\sY^{p}\,_{i_{k+1}i_{k+2}}\,^{q}h_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k})pq}
\end{split}
\end{align}
Symmetrizing \eqref{syomint2} over the uncontracted indices yields the second identity in \eqref{syomint}. Similarly,
\begin{align}\label{ytrcom}
\begin{split}
k&(\tr \op{\sY}(\om))_{i_{1}\dots i_{k-2}} = kh^{i_{k-1}i_{k-2}}\left(\rictr(\sY)_{p(i_{1}}\om_{i_{1}\dots i_{k})}\,^{p} + (1-k)\sY_{p(i_{1}i_{2}}\,^{q}\om_{i_{3}\dots i_{k})q}\,^{p} \right)\\
& = \left(2\rictr(\sY)^{pq}\om_{i_{1}\dots i_{k-2}pq} + (k-2)\rictr(\sY)_{p(i_{1}}(\tr \om)_{i_{2}\dots i_{k-2})}\,^{p} \right) \\
&\quad -2 \left(\rictr(\sY)_{p}\,^{q}\om_{i_{1}\dots i_{k-2}q}\,^{p} + \tbinom{k-2}{2}\sY_{p(i_{1}i_{2}}\,^{q}(\tr \om)_{i_{3}\dots i_{k-2})q}\,^{p} \right)\\
& = (k-2)\left( \rictr(\sY)_{p(i_{1}}(\tr \om)_{i_{2}\dots i_{k-2})}\,^{p} + (3-k)\sY_{p(i_{1}i_{2}}\,^{q}(\tr \om)_{i_{3}\dots i_{k-2})q}\,^{p}\right)
= (k-2)\op{\sY}(\tr \om)_{i_{1}\dots i_{k-2}}.
\end{split}
\end{align}
That $\op{\sY}(h^{\sprod k}) = 0$ follows from $\op{\sY}(h) = 0$ and \eqref{hycommute} by induction.
By \eqref{hycommute}, $(k+2i)\op{\sY}(\met^{i}(\om)) = k\met^{i}\op{\sY}(\om)$ and $k\tr^{i}\op{\sY}(\om) = (k-2i)\op{\sY}(\tr^{i}\om)$ for all $\om \in S^{k}(\std)$. Consequently, $\op{\sY}(\met^{i}\tr^{i}\om) = \met^{i}\tr^{i}\op{\sY}(\om)$. Since $\om - \tf \om$ is in the image of $\met$, there results $\op{\sY}(\tf \om) = \tf \op{\sY}(\om)$ for all $\om \in S^{k}(\std)$. That $\op{\sY}$ maps trace-free symmetric tensors to trace-free symmetric tensors follows from this observation or from the second identity of \eqref{hycommute}. A more conceptual proof of this claim goes as follows. Define $Y(\om) = k\op{\sY}(\om)$ for $\om \in S^{k}(\std)$. Then \eqref{hycommute} means that $Y$ commutes with the $\mathfrak{sl}(2, \rea)$ triple determined by the operators $E$, $F$, and $H$, so must preserve the decomposition of symmetric tensors into their primitive parts.
\end{proof}
\section{Differential operators on trace-free symmetric tensors}\label{differentialoperatorssection}
This section studies some operators acting on trace-free symmetric tensors. Let $E$ and $F$ be bundles of tensors on $M$. A metric $h$ on $M$ determines a pairing $\ilp \al, \be \irp = \int_{M}\lb \al, \be \ra \,d\vol_{h}$ of sections $\al, \be \in \Ga(E)$, at least one of which is compactly supported. Write $\iln \om \irn^{2} = (\om, \om)$.
Because the fibers of $TM$ and $\ctm$ carry canonically dual flat centroaffine structures, constructions applicable to dual vector spaces $\ste$ and $\std$ apply fiberwise to $TM$ and $\ctm$ without change. By definition $S(TM)= \oplus_{k\geq 0}\Ga(S^{k}(TM))$ (respectively $S_{0}(TM)= \oplus_{k\geq 0}\Ga(S^{k}_{0}(TM)))$ is the graded vector bundle comprising finite linear combinations of (trace-free) completely symmetric tensors. It becomes a graded algebra when equipped with the fiberwise multiplication $\sprod$. By definition $\pol(\ctm)$ is the graded subalgebra of $\cinf(\ctm)$ comprising functions polynomial in the fibers of $\ctm$ and of globally bounded degree. Regarding $\pol(\ctm)$ as a subspace of $\cinf(\ctm)$ it acquires from the tautological Poisson structure on $\ctm$ a Poisson structure defined, for $X \in \Ga(S^{k}(TM))$ and $Y \in \Ga(S^{l}(TM))$ and any torsion-free affine connection $\nabla$, by
\begin{align}\label{schoutendefined}
\{X, Y\}^{i_{1}\dots i_{k+l-1}} = k X^{p(i_{1}\dots i_{k-1}}\nabla_{p}Y^{i_{k}\dots i_{k+l-1})} - l Y^{p(i_{1}\dots i_{l-1}}\nabla_{p}X^{i_{l}\dots i_{k+l-1})}.
\end{align}
Given a metric $h_{ij}$, the connection in \eqref{schoutendefined} may be taken to be its Levi-Civita connection $D$, and $\{h, X\} = 2D^{(i_{1}}X^{i_{2}\dots i_{k+1})}$ for any $X \in \Ga(S^{k}(TM))$ (in the bracket $\{h, X\}$ the notation $h$ refers to the dual bivector $h^{ij}$). Using $h$, $S(TM)$ and $S(\ctm)$ are identified by index raising and lowering, and for $X \in \Ga(S^{k}(TM))$ and $\om \in \Ga(S^{k}(\ctm))$ one defines $X^{\flat}_{i_{1}\dots i_{k}} = X^{j_{1}\dots j_{k}}h_{i_{1}j_{1}}\dots h_{i_{k}j_{k}} \in \Ga(S^{k}(\ctm))$ and defines $\om^{\sharp} \in \Ga(S^{k}(TM))$ dually. Since index raising and lowering induce symmetric algebra isomorphisms, there results on $S(\ctm)$ the Poisson bracket $\{\al, \be\}$ defined by $\{\al, \be\} = \{\al^{\sharp}, \be^{\sharp}\}^{\flat}$.
The divergence operator $\div:\Ga(\ctm \tensor E) \to \Ga(E)$ defined by
\dpu{&\div(\om)_{i_{1}\dots i_{k}} = D^{p}\om_{pi_{1}\dots i_{k}},&& \om \in \Ga(\ctm \tensor E),}
is the formal adjoint of the negative of the covariant derivative $-D$.
\begin{lemma}
Let $h$ be a pseudo-Riemannian metric on $M$.
The operator $\dsym: \Ga(S^{k}(\ctm)) \to \Ga(S^{k+1}(\ctm))$ defined by $\dsym(\al) = \tfrac{k+1}{2}\{h, \al\}$ satisfies $[e^{\tr}, \dsym] = 2\div e^{\tr}$.
\end{lemma}
\begin{proof}
The identity $[e^{\tr}, \dsym] = 2\div e^{\tr}$ follows from the validity of
\dps{\label{trih}
\tfrac{k+1}{2}\tr^{i}\{h, \al\} = \tfrac{k+1-2i}{2}\{h, \tr^{i}\al\} + 2i\div(\tr^{i-1}\al),}
for all $\al \in \Ga(S^{k}(\ctm))$ and $i \geq 1$, which is proved by induction on $i$. The case $i = 1$ is a direct computation:
\dps{\label{trih1}
\tfrac{k+1}{2}&(\tr \{h, \al\})_{i_{1}\dots i_{k-1}} = (k+1)h^{pq}D_{(i_{1}}\al_{i_{2}\dots i_{k-1}pq)} \\
&= 2D^{p}\al_{i_{1}\dots i_{k-1}p} + (k-1)D_{(i_{1}}(\tr \al)_{i_{2}\dots i_{k-1})} = 2\div(\al)_{i_{1}\dots i_{k-1}} + \tfrac{k-1}{2}\{h, \tr \al\}_{i_{1}\dots i_{k-1}}.
}
Note that $\tr$ and $\div$ commute. If there holds \eqref{trih}, then, using \eqref{trih} and \eqref{trih1},
\begin{align}
\begin{split}
\tfrac{k+1}{2}&\tr^{i+1}\{h, \al\} = \tr\left(\tfrac{k+1}{2} \tr^{i}\{h, \al\}\right) = \tfrac{k+1-2i}{2}\tr\{h, \tr^{i}\al\} + 2i\div(\tr^{i}\al) \\
& = \tfrac{k-1 - 2i}{2}\{h, \tr^{i+1}\al\} + 2\div (\tr^{i}\al) + 2i\div(\tr^{i}\al)
= \tfrac{k+1 - 2(i+1)}{2}\{h, \tr^{i+1}\al\} + 2(i+1)\div(\tr^{i}\al),
\end{split}
\end{align}
and this proves \eqref{trih}.
\end{proof}
Let $\tf:\Ga(\symk) \to \Ga(\symkt)$ be the $h$-orthogonal projection onto the completely trace-free part and define $\clie:\Ga(\symk) \to \Ga(\symkp)$ by
\dpu{\clie(\om) = \tfrac{1}{2}\tf \{h, \om\},}
which is the trace-free part of the completely symmetrized covariant derivative of $\om$.
Explicitly, for $\om \in \Ga(\symkt)$, $\clie(\om) = \tfrac{1}{2}\{h, \om\} - \tfrac{k}{n + 2(k-1)} h\sprod \div(\om)$, or
\dps{
\label{cliedefined}
\clie(\om)_{i_{1}\dots i_{k+1}}& =D_{(i_{1}}\om_{i_{2}\dots i_{k+1})} - \tfrac{k}{n+2(k-1)}h_{(i_{1}i_{2}}\div(\om)_{i_{3}\dots i_{k+1})} .}
By definition, $\clie$ is the formal adjoint of the composition $-\div \circ \tf$, meaning $\ilp \clie(\al), \be\irp= -\ilp \al , \div \tf(\be)\irp$ for $\al \in \Ga(\symk)$ and $\be \in \Ga(\symkp)$, at least one of which has compact support. If $X \in \Ga(TM)$, then $\clie(X^{\flat})$ is the Lie derivative of the conformal structure $[h]$ along $X$, for
\begin{align}\label{cliex}
2\clie(X^{\flat})_{ij} = 2\tf(D_{(i}X_{j)}) = 2D_{(i}X_{j)} - \tfrac{2}{n}D_{p}X^{p} h_{ij} =\tf(\lie_{X}h)_{ij} = (\lie_{X}[h])_{ij}.
\end{align}
The identity \eqref{cliex} motivates the notation $\clie$, which resembles that for the Lie derivative.
Let $\precod^{k+1}(\std) \subset \tensor^{k+1}\std$ be the space of trace-free $(k+1)$-tensors $\phi_{iji_{1}\dots i_{k-1}}$ having the symmetries determined by the Young projector given by symmetrization over the rows followed by anti-symmetrization over the columns of the Young diagram corresponding to the partition $(k, 1)$, so satisfying $\phi_{iji_{1}\dots i_{k-1}} = \phi_{[ij]i_{1}\dots i_{k-1}} = \phi_{ij(i_{1}\dots i_{k-1})}$ and $\phi_{[iji_{s}]i_{1}\dots \hat{i}_{s}\dots i_{k-1}} = 0$ for any $1 \leq s \leq k-1$.
Define a differential operator $\klie:\Ga(\symk) \to \Ga(\precod^{k+1}(\ctm))$ to be the trace-free part of $D_{[i}\om_{j]i_{1}\dots i_{k-1}}$. If $k > 1$ and $\om \in \Ga(\symkt)$,
\begin{align}\label{kliedefined}
\begin{split}
\klie&(\om)_{ij i_{1}\dots i_{k-1}} = D_{[i}\om_{j]i_{1}\dots i_{k-1}} - \tfrac{1}{n+k-3}\sum_{s = 1}^{k-1}h_{i_{s}[i}\div(\om)_{j]i_{1}\dots \hat{i}_{s}\dots i_{k-1}}\\
& = D_{[i}\om_{j]i_{1}\dots i_{k-1}} - \tfrac{k-1}{2(n+k-3)}\left(h_{i(i_{1}}\div(\om)_{i_{2}\dots i_{k-1})j} -h_{j(i_{1}}\div(\om)_{i_{2}\dots i_{k-1})i} \right),\qquad \text{if} \,\, k > 1,\\
\end{split}
\end{align}
which is the completely trace-free part of $ D_{[i}\om_{j]i_{1}\dots i_{k-1}}$.
For $\ga \in \Ga(\ctm) = \Ga(S^{1}_{0}(\ctm))$, $\klie(\ga)_{ij} = D_{[i}\ga_{j]} = \tfrac{1}{2}d\ga_{ij}$.
In the $k = 1$ case recall that the convention here is $S^{1}_{0}(\ctm) = \ctm$.
Checking the equality of the two different expressions for the trace part of \eqref{kliedefined} is straightforward. By definition, $\klie(\om)_{iji_{1}\dots i_{k-1}} = \klie(\om)_{[ij]i_{1}\dots i_{k-1}}$, $\klie(\om)_{iji_{1}\dots i_{k-1}} = \klie(\om)_{ij(i_{1}\dots i_{k-1})}$, and there vanishes the antisymmetrization of $\klie(\om)_{iji_{1}\dots i_{k-1}}$ over $ij$ and any $i_{s}$.
From the definition of $\klie$, it follows that its formal adjoint $\kliea:\Ga(\precod^{k+1}(\ctm)) \to \Ga(\symkt)$ satisfies
\begin{align}
\label{klieaklie}
&\kliea(\phi)_{i_{1}\dots i_{k}} = -D^{p}\phi_{p(i_{1}\dots i_{k})},&&
\kliea\klie(\om)_{i_{1}\dots i_{k}} = -D^{p}\klie(\om)_{p(i_{1}\dots i_{k})},&
\end{align}
for $\om \in \Ga(\symkt)$, and $\phi_{iji_{1}\dots i_{k-1}} \in \Ga(\precod^{k+1}(\ctm))$.
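For instance (the $k = 2$ specialization of \eqref{kliedefined}, recorded for comparison with the classical theory), for $\om \in \Ga(S^{2}_{0}(\ctm))$,
\begin{align*}
\klie(\om)_{ijk} = D_{[i}\om_{j]k} - \tfrac{1}{n-1}h_{k[i}\div(\om)_{j]},
\end{align*}
so a trace-free symmetric two-tensor lies in $\ker \klie \cap \ker \div$ exactly when it is divergence free and satisfies the classical Codazzi equation $D_{[i}\om_{j]k} = 0$.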
If $\om \in \Ga(\symkt)$ then $D\om$ has a pure trace part and trace-free parts with symmetries corresponding to the partitions $(k+1)$ and $(k, 1)$. Lemma \ref{domdecompositionlemma} describes explicitly the decomposition of $D\om$ into these parts. Some definitions are needed for its statement.
The linear map $\ih:\Ga(\symkmt)\to \Ga(\ctm \tensor \symkt)$ defined by
\begin{align}
\ih(\om)_{ii_{1}\dots i_{k}} = \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}h_{i(i_{1}}\om_{i_{2}\dots i_{k})} + \tfrac{k(1-k)}{(n+k-3)(n+2(k-1))}h_{(i_{1}i_{2}}\om_{i_{3}\dots i_{k})i},
\end{align}
is characterized by the properties that its image is contained in $\Ga(\ctm \tensor \symkt)$
and that the nontrivial traces of $\ih(\om)$ equal $\om$. In particular, it is injective. For $f \in \cinf(M) = \Ga(S^{0}_{0}(\ctm))$, $\ih(f)_{ij} = \tfrac{1}{n}fh_{ij}$ while for $\ga_{i} \in \Ga(\ctm) = \Ga(S^{1}_{0}(\ctm))$, $\ih(\ga)_{ijk} = \tfrac{2}{(n-1)(n+2)}\left(nh_{i(j}\ga_{k)} - \ga_{i}h_{jk}\right)$.
For $\om \in \Ga(\symkt)$ define $\tlie(\om)_{ii_{1}\dots i_{k}} = \tfrac{2k}{k+1}\klie(\om)_{i(i_{1}\dots i_{k})}$,
which is completely trace-free and satisfies $\tlie(\om)_{i(i_{1}\dots i_{k})} = \tlie(\om)_{ii_{1}\dots i_{k}}$ and $\tlie(\om)_{(i_{1}\dots i_{k+1})} = 0$. Using $\klie(\om)_{[iji_{1}]\dots i_{k-1}} = 0$ it can be checked that $\tlie(\om)_{[ij]i_{1}\dots i_{k-1}} = \klie(\om)_{iji_{1}\dots i_{k-1}}$. The identity \eqref{normdom} of Lemma \ref{domdecompositionlemma} is a special case of \cite[Lemma $2.24$]{Bourguignon-Hijazi-Milhorat-Moroianu-Moroianu}.
\begin{lemma}\label{domdecompositionlemma}
For $\om_{i_{1}\dots i_{k}} \in \Ga(\symkt)$ there hold
\begin{align}\label{domkl}
\begin{split}
D\om & = \clie(\om) + \tlie(\om) + \ih(\div(\om)),
\end{split}\\\label{normdom}
\begin{split}
|D\om|^{2} & = |\clie(\om)|^{2} + |\tlie(\om)|^{2} + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}|\div(\om)|^{2}\\
& = |\clie(\om)|^{2} + \tfrac{2k}{k+1}|\klie(\om)|^{2} + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}|\div(\om)|^{2}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Substituting the definitions of $\clie(\om)$ and $\klie(\om)$ into
\begin{align}\label{dsym2}
D_{i}\om_{i_{1}\dots i_{k}} = D_{(i}\om_{i_{1}\dots i_{k})} + \tfrac{2}{k+1}\sum_{s = 1}^{k}D_{[i}\om_{i_{s}]i_{1}\dots \hat{i}_{s}\dots i_{k}}
\end{align}
and simplifying the trace terms yields \eqref{domkl}.
Alternatively, it is straightforward to check that the right sides of \eqref{dsym2} and \eqref{domkl} are the same modulo pure trace terms. On the other hand from the properties characterizing $\ih$ it follows that the traces of the right sides of \eqref{dsym2} and \eqref{domkl} are the same. This verifies \eqref{domkl}. Contracting \eqref{domkl} with $D^{i}\om^{i_{1}\dots i_{k}}$ gives \eqref{normdom}.
\end{proof}
It is immediate from \eqref{normdom} that $\ker D \cap \Ga(S^{k}_{0}(\ctm)) = \ker \clie\cap \ker \klie \cap \ker \div \cap \Ga(S^{k}_{0}(\ctm))$.
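As a check, in the $k = 1$ case, where $2\clie$ is the conformal Killing operator of \eqref{cliex} and $\klie(\ga) = \tfrac{1}{2}d\ga$, \eqref{normdom} reduces (for $n > 2$) to the familiar decomposition
\begin{align*}
&|D\ga|^{2} = |\clie(\ga)|^{2} + \tfrac{1}{4}|d\ga|^{2} + \tfrac{1}{n}|\div(\ga)|^{2}, && \ga \in \Ga(\ctm).
\end{align*}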
\begin{lemma}\label{ellipticlemma}
Let $(M, h)$ be an $n$-dimensional Riemannian manifold.
\begin{enumerate}
\item The differential operator $\clie:\symkt \to \symkpt$ has injective symbol, so
\begin{align}
\symkpt = \clie(\symkt) \oplus (\ker \div \cap \symkpt),
\end{align}
and $\div \clie:\symkt \to \symkt$ is an elliptic operator. If $M$ is compact, then $\div \clie$ is nonpositive and $\ker \div \clie = \ker \clie$.
\item For any $c \geq \tfrac{n+2(k-2)}{2(n+k-3)}$, the operator $\diamond_{c} = (\klie, \sqrt{c}\div):\Ga(\symkt) \to \Ga(\precod^{k+1}(\ctm) \oplus S^{k-1}_{0}(\ctm))$ has injective symbol, so
\begin{align}
\diamond_{c}^{\ast}\diamond_{c} = -\kliea\klie + c\clie\div
\end{align}
is an elliptic operator. If $M$ is compact, then $\diamond_{c}^{\ast}\diamond_{c}$ is nonpositive and $\ker \diamond_{c}^{\ast}\diamond_{c} = \ker \diamond_{c} = \ker \klie \cap \ker \div$.
\item For any $c \geq \tfrac{(k+1)(n+2(k-1))}{2k(n+k-3)}$, the operator $\kpc_{c} = (\klie, \sqrt{c}\clie):\Ga(\symkt) \to \Ga(\precod^{k+1}(\ctm) \oplus S^{k+1}_{0}(\ctm))$ has injective symbol, so
\begin{align}
\kpc_{c}^{\ast}\kpc_{c} = -\kliea\klie + c\div\clie
\end{align}
is an elliptic operator. If $M$ is compact, then $\kpc_{c}^{\ast}\kpc_{c}$ is nonpositive and $\ker \kpc_{c}^{\ast}\kpc_{c} = \ker \kpc_{c} = \ker \klie \cap \ker \clie$.
\end{enumerate}
\end{lemma}
\begin{proof}
Write $\sbl_{\clie}(Z)(\phi)$, $\sbl_{\klie}(Z)(\phi)$, and $\sbl_{\div}(Z)(\phi)$ for the symbols of $\clie$, $\klie$, and $\div$ applied to the vector $Z^{i}$ and $\phi \in \Ga(\symkt)$. Write $(i(Z)\phi)_{i_{1}\dots i_{k-1}} = Z^{p}\phi_{pi_{1}\dots i_{k-1}}$.
Straightforward computations show
\begin{align}\label{cliesymbolnorm}
\begin{split}
|\sbl_{\clie}(Z)(\phi)|^{2}& = \tfrac{1}{k+1}|Z|^{2}|\phi|^{2} + \tfrac{k(n+2(k-2))}{(k+1)(n+2(k-1))}|i(Z)\phi|^{2},
\end{split}\\
\label{kliesymbolnorm}
\begin{split}
|\sbl_{\klie}(Z)(\phi)|^{2}
& = \tfrac{1}{2}\left(|Z|^{2}|\phi|^{2} -|i(Z)\phi|^{2}\right) - \tfrac{k-1}{2(n+k-3)}|i(Z)\phi|^{2} = \tfrac{1}{2}\left(|Z|^{2}|\phi|^{2} - \tfrac{n+2(k-2)}{n+k-3}|i(Z)\phi|^{2}\right).
\end{split}
\end{align}
When $k = 1$ and $n = 2$ the coefficient of the pure trace terms in \eqref{kliesymbolnorm} should be understood in a limiting sense.
By \eqref{cliesymbolnorm}, if $\sbl_{\clie}(Z)(\phi) = 0$ for nonzero $Z$, then $\phi = 0$, so $\clie$ has injective symbol. The remaining claims follow from standard elliptic operator theory as in \cite[section $4$]{Berger-Ebin}. If $M$ is compact, then $\ilp \div \clie \om, \om\irp = - \iln\clie(\om)\irn^{2} \leq 0$ and $\div\clie\om = 0$ if and only if $\clie(\om) = 0$.
Combining $|\sbl_{\div}(Z)(\phi)|^{2} = |\imt(Z)\phi|^{2}$ with \eqref{kliesymbolnorm} yields
\begin{align}
|\sbl_{\diamond_{c}}(Z)(\phi)|^{2} = \tfrac{1}{2}|Z|^{2}|\phi|^{2} + \left(c - \tfrac{1}{2}\tfrac{n+2(k-2)}{n+k-3}\right)|i(Z)\phi|^{2},
\end{align}
from which the injectivity of $\sbl_{\diamond_{c}}(Z)$ is apparent. The ellipticity of $\diamond_{c}^{\ast}\diamond_{c}$ follows from standard elliptic operator theory as in \cite[section $6$]{Berger-Ebin}. If $M$ is compact, then $\ilp \diamond_{c}^{\ast}\diamond_{c}\om, \om\irp = - \iln\klie(\om)\irn^{2} - c\iln \div \om \irn^{2} \leq 0$ and $\diamond_{c}^{\ast}\diamond_{c}\om = 0$ if and only if $\klie(\om) = 0$ and $\div \om = 0$.
Combining \eqref{cliesymbolnorm} and \eqref{kliesymbolnorm} shows
\begin{align}
|\sbl_{\kpc_{c}}(Z)(\phi)|^{2} = \tfrac{k+1 + 2c}{2(k+1)}|Z|^{2}|\phi|^{2} + \left(\tfrac{ck(n+2(k-2))}{(k+1)(n + 2(k-1))} - \tfrac{1}{2}\tfrac{n+2(k-2)}{n+k-3}\right)|i(Z)\phi|^{2},
\end{align}
from which the injectivity of $\sbl_{\kpc_{c}}(Z)$ is apparent. The ellipticity of $\kpc_{c}^{\ast}\kpc_{c}$ follows from standard elliptic operator theory. If $M$ is compact, then $\ilp \kpc_{c}^{\ast}\kpc_{c}\om, \om\irp = - \iln\klie(\om)\irn^{2} - c\iln \clie \om \irn^{2} \leq 0$ and $\kpc_{c}^{\ast}\kpc_{c}\om = 0$ if and only if $\klie(\om) = 0$ and $\clie(\om) = 0$.
\end{proof}
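To illustrate the thresholds, consider the $k = 1$, $n > 2$ case (a sanity check rather than an additional claim): the bound in the second item becomes $c \geq \tfrac{1}{2}$ and, since $\klie(\ga) = \tfrac{1}{2}d\ga$,
\begin{align*}
\diamond_{c}(\ga) = \left(\tfrac{1}{2}d\ga, \sqrt{c}\,\div(\ga)\right),
\end{align*}
so the lemma recovers the classical fact that $(d, \div)$ is an overdetermined elliptic system on one-forms.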
Let $\sR_{ijk}\,^{l}$ be the curvature tensor of the Levi-Civita connection $D$ of the metric $h$.
For $\om \in \Ga(\symk)$ there holds $2D_{[i}D_{j]}\om_{i_{1}\dots i_{k}} = -k\sR_{ij(i_{1}}\,^{p}\om_{i_{2}\dots i_{k})p}$. Tracing this in $i$ and $i_{k}$ yields
\begin{align}
D^{p}D_{j}\om_{i_{1}\dots i_{k-1}p} - D_{j}D^{p}\om_{i_{1}\dots i_{k-1}p} = \sR_{j}\,^{p}\om_{i_{1}\dots i_{k-1}p} + (1-k)\sR^{p}\,_{j(i_{1}}\,^{q}\om_{i_{2}\dots i_{k-1})pq}.
\end{align}
Symmetrizing over the free indices yields
\begin{align}\label{differentialr}
D^{p}D_{(i_{1}}\om_{i_{2}\dots i_{k})p} - D_{(i_{1}}D^{p}\om_{i_{2}\dots i_{k})p} = \op{\sR}(\om)_{i_{1}\dots i_{k}}.
\end{align}
Thus $\op{\sR}(\om)$ measures the failure of the commutativity of the symmetrized covariant derivative and the divergence operator. For this reason $\op{\sR}(\om)$ occurs in Weitzenböck type formulas.
\begin{lemma}\label{culaplemma}
Let $(M, h)$ be a Riemannian manifold of dimension $n \geq 2$. For $\al \in \rea$ define a formally self-adjoint second order elliptic differential operator $\culap_{\al}: \Ga(S^{k}_{0}(\ctm)) \to \Ga(S^{k}_{0}(\ctm))$ by $\culap_{\al}\om = \lap_{h}\om +\al \op{\sR}(\om)$.
\begin{enumerate}
\item If $\al = -1$, then
\begin{align}
\label{lapom3} \culap_{-1}\om = \lap_{h} \om - \op{\sR}(\om)& = \tfrac{n+2(k-2)}{n+k-3}\clie \div(\om) - 2\kliea\klie(\om),
\end{align}
is an elliptic operator on $\Ga(\symkt)$. If $M$ is compact, then $\culap_{-1}$ is nonpositive and $\ker \culap_{-1} \cap \Ga(S^{k}_{0}(\ctm)) = \ker \klie \cap \ker \div \cap \Ga(S^{k}_{0}(\ctm))$.
\item If $\al = \tau(k) = \tfrac{k}{n+k-2}$, then
\begin{align}
\label{lapom2}\begin{split}
\culap_{\tau(k)}\om = \lap_{h} \om + \tfrac{k}{n+k-2}\op{\sR}(\om)& = \tfrac{n+2(k-1)}{n+k-2}\div \clie(\om) - \tfrac{2k(n+k-3)}{(k+1)(n+k-2)}\kliea\klie(\om),
\end{split}
\end{align}
is an elliptic operator on $\Ga(\symkt)$. If $M$ is compact, then $\culap_{\tau(k)}$ is nonpositive and $\ker \culap_{\tau(k)} \cap \Ga(S^{k}_{0}(\ctm)) = \ker \klie \cap \ker \clie \cap \Ga(S^{k}_{0}(\ctm))$.
\item If $-1 < \al < \tau(k) = \tfrac{k}{n+k-2}$ then $\culap_{\al}$ is an elliptic operator on $\Ga(\symkt)$. If $M$ is compact, then $\culap_{\al}$ is nonpositive and $\ker \culap_{\al} \cap \Ga(S^{k}_{0}(\ctm)) = \ker D \cap \Ga(S^{k}_{0}(\ctm))$.
\end{enumerate}
\end{lemma}
\begin{proof}
For $\om \in \Ga(\symkt)$, straightforward computations using the Ricci identity and \eqref{cliedefined} show
\begin{align}
\label{liediv}
\begin{split}
\clie \div(\om)_{i_{1}\dots i_{k}} &= D_{(i_{1}}D^{p}\om_{i_{2}\dots i_{k})p} + \tfrac{1-k}{(n+2(k-2))}h_{(i_{1}i_{2}}D^{p}D^{q}\omega_{i_{3}\dots i_{k})pq},
\end{split}\\
\begin{split}
\label{divlie}
\lap_{h}\om + k\op{\sR}(\om) & = (k+1)\div \clie(\om) - \tfrac{k(n+2(k-2))}{n+2(k-1)}\clie\div(\om).
\end{split}
\end{align}
Contracting \eqref{domkl} with $D^{i}$ and using \eqref{liediv} and \eqref{klieaklie} gives
\dps{\label{lapom}
\lap_{h}\om& = \div \clie(\om) + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}\clie \div(\om) - \tfrac{2k}{k+1}\kliea\klie(\om).
}
Solving \eqref{divlie} for $\lap_{h}\om$ and substituting the result into \eqref{lapom} yields
\begin{align}\label{klieweitzenbock}
\begin{split}
&\op{\sR}(\om) = \div \clie(\om) - \tfrac{(n+k-2)(n+2(k-2))}{(n+k-3)(n+2(k-1))}\clie\div(\om) + \tfrac{2}{k+1}\kliea\klie(\om).
\end{split}
\end{align}
Equations \eqref{divlie} and \eqref{klieweitzenbock} are the analogues of the corresponding identities for operators on antisymmetric forms, for example \cite[Equations $(2.8)$ and $(2.9)$]{Semmelmann}. Rewriting \eqref{lapom} in two different ways using \eqref{divlie} gives \eqref{lapom3} and \eqref{lapom2}. The ellipticity of $\culap_{\al}$ in these cases, and its nonpositivity when $M$ is compact, follow from Lemma \ref{ellipticlemma}. Being convex combinations of the elliptic operators $\culap_{-1}$ and $\culap_{\tau(k)}$, the operators $\culap_{\al}$ for $-1 < \al < \tau(k)$ are elliptic. If $M$ is compact, the same argument shows that $\culap_{\al}$ is nonpositive. By \eqref{lapom3} and \eqref{klieweitzenbock},
\begin{align}\label{culap1}
\culap_{\al} \om = (1+\al)\div \clie(\om) + \tfrac{(n+2(k-2))(k - \al(n+k-2))}{(n+k-3)(n+2(k-1))}\clie \div(\om) + \tfrac{2(\al - k)}{k+1}\kliea\klie(\om),
\end{align}
from which follows $\ker \culap_{\al} \cap \Ga(S^{k}_{0}(\ctm)) \supset \ker D \cap \Ga(S^{k}_{0}(\ctm))$.
If $M$ is compact and $\om \in \ker \culap_{\al} \cap \Ga(S^{k}_{0}(\ctm))$, pairing the right side of \eqref{culap1} with $\om$ and integrating gives
\begin{align}
0 = (1+\al)\iln\clie(\om)\irn^{2} + \tfrac{(n+2(k-2))(k - \al(n+k-2))}{(n+k-3)(n+2(k-1))}\iln\div(\om)\irn^{2} + \tfrac{2(k-\al)}{k+1}\iln\klie(\om)\irn^{2},
\end{align}
and together with \eqref{domkl} this implies $\ker \culap_{\al} \cap \Ga(S^{k}_{0}(\ctm)) \subset \ker D \cap \Ga(S^{k}_{0}(\ctm))$.
\end{proof}
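For orientation, in the $k = 1$ case \eqref{differentialr} yields $\op{\sR}(\ga)_{i} = \sR_{i}\,^{p}\ga_{p}$, and $\culap_{-1}$ is the negative of the Hodge Laplacian on one-forms; the classical Weitzenböck formula (recorded here as an illustration, with $\lap_{h} = D^{p}D_{p}$) reads
\begin{align*}
(\culap_{-1}\ga)_{i} = \lap_{h}\ga_{i} - \sR_{i}\,^{p}\ga_{p} = -\left((dd^{\ast} + d^{\ast}d)\ga\right)_{i}.
\end{align*}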
\begin{remark}
Let $M$ be compact. Define a functional $\cubic_{\al}$ with arguments a Riemannian metric $h_{ij}$ and a tensor $\om \in \Ga(S^{k}_{0}(\ctm))$ by $\cubic_{\al}(h, \om) = -\ilp\om,\culap_{\al}\om\irp$. For fixed $h$ the first variation of $\cubic_{\al}(h, \om)$ in $\om$ yields the equation $\culap_{\al}\om = 0$. Lemma \ref{culaplemma} implies that for $-1 \leq \al \leq \tfrac{k}{n+k-2}$ the functional $\cubic_{\al}(h, \om)$ is nonnegative.
\end{remark}
\begin{remark}
The \emph{Lichnerowicz Laplacian} $\lich$ is the formally self-adjoint operator which acts on an arbitrary rank $k$ covariant tensor $\om_{i_{1}\dots i_{k}}$ by $-\lap_{h}\om + k\op{\sR}(\om)$ (see \cite[page $27$]{Lichnerowicz-propagateurs} for the definition of $\op{\sR}(\om)$ for general tensors $\om$). The linearization of the Ricci curvature of the metric $h$ at the symmetric two-tensor $a_{ij}$ is $\tfrac{1}{2}\lich a_{ij} + D_{(i}D^{p}a_{j)p}$. On differential forms the Lichnerowicz Laplacian restricts to the usual Hodge Laplacian. The Lichnerowicz operator restricts to $-\culap_{-k}$ on $\Ga(S^{k}_{0}(\ctm))$.
\end{remark}
\begin{remark}
The special case of $\culap_{-1} = \lap_{h} - \op{\sR}$ acting on sections of $S^{2}_{0}(\ctm)$ was studied by J. Simons in \cite{Simons}, and this case of Lemma \ref{culaplemma} is given in \cite[section $6$.c]{Berger-Ebin}.
\end{remark}
\begin{remark}
As is shown in \cite{Berger-Ebin}, an infinitesimal deformation of an Einstein metric $h$ on a compact manifold is identified with an $\om \in \Ga(S^{2}_{0}(\ctm))\cap \ker \div$ solving $\lap \om = 2\op{\sR}(\om) - \tfrac{2\sR}{n}\om$. As is summarized in \cite[section $12$.H]{Besse} (the notations there are different from those here), using this equation in conjunction with the positivity conditions given by integrating \eqref{divlie} and \eqref{lapom3} gives a proof of the criterion of N. Koiso \cite[Theorem $3.3$]{Koiso-nondeformability}, for the rigidity of an Einstein metric, in particular showing that an Einstein metric of negative sectional curvature is rigid provided $n \geq 3$.
\end{remark}
Because the operator $\op{\sY}$ associated with $\sY \in \mcurv(\std)$ is self-adjoint it determines a quadratic form defined by $\qY(\om) = \lb \om, \op{\sY}(\om)\ra$ for $\om \in E$ on any $\op{\sY}$-invariant subspace $E \subset \tensor^{k}\std$.
\begin{corollary}\label{weitzenbockcorollary}
Let $(M, h)$ be a Riemannian manifold of dimension $n \geq 2$. For $\om \in \Ga(\symkt)$ there hold
\begin{align}
\label{lapomsq}\tfrac{1}{2}\lap_{h}|\om|^{2} & = |D\om|^{2} + (k+1)\lb \om, \div \clie(\om)\ra - \tfrac{k(n+2(k-2))}{n+2(k-1)}\lb \om, \clie\div(\om)\ra - k\qR(\om).\\
\label{lapomdivlie}\begin{split}
\tfrac{1}{2}\lap_{h}|\om|^{2} & = |D\om|^{2} + \tfrac{n+2(k-1)}{n+k-2}\lb \om, \div \clie(\om)\ra - \tfrac{2k(n+k-3)}{(k+1)(n+k-2)}\lb \om, \kliea\klie(\om)\ra - \tfrac{k}{n+k-2}\qR(\om).
\end{split}\\
\label{lapomliediv}\tfrac{1}{2}\lap_{h}|\om|^{2} & = |D\om|^{2} + \tfrac{n+2(k-2)}{n+k-3}\lb\om, \clie \div(\om)\ra - 2\lb\om, \kliea\klie(\om)\ra + \qR(\om) .
\end{align}
\end{corollary}
\begin{proof}
Contracting \eqref{divlie}, \eqref{lapom2}, and \eqref{lapom3} with $\om$ and using $\tfrac{1}{2}\lap_{h}|\om|^{2} = \lb \om, \lap_{h}\om\ra + |D\om|^{2}$ yields \eqref{lapomsq}-\eqref{lapomliediv}.
(Any two of \eqref{lapomsq}-\eqref{lapomliediv} imply the third.)
\end{proof}
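For example, if $\ga \in \Ga(\ctm)$ satisfies $d\ga = 0$ and $\div(\ga) = 0$, then, since $\klie(\ga) = \tfrac{1}{2}d\ga$ and $\qR(\ga) = \sR^{ij}\ga_{i}\ga_{j}$, the identity \eqref{lapomliediv} specializes to the classical Bochner formula
\begin{align*}
\tfrac{1}{2}\lap_{h}|\ga|^{2} = |D\ga|^{2} + \sR^{ij}\ga_{i}\ga_{j}.
\end{align*}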
\begin{remark}
If $\om\in \Ga(\symkt) \cap \ker \div \cap \ker \klie$, \eqref{lapomliediv} and
\begin{align}\label{qrom}
\begin{split}
\qR(\om) &= \qW(\om) + \tfrac{n+2(k-2)}{n-2}\lb \rictr(\om \kwedge \om),\rictr(\sR)\ra + \tfrac{1-k}{(n-1)(n-2)}\sR_{h} |\om|_{h}^{2}
\end{split}
\end{align}
together yield
\begin{align}
\label{lapomliediv2}\tfrac{1}{2}\lap_{h}|\om|^{2} & = |D\om|^{2} + \qW(\om) + \tfrac{n+2(k-2)}{n-2}\lb \rictr(\om \kwedge \om),\rictr(\sR)\ra + \tfrac{1-k}{(n-1)(n-2)}\sR_{h} |\om|_{h}^{2},
\end{align}
where $\sW_{ijkl} \in \mcurv(\std)$ is the conformal Weyl tensor.
This recovers \cite[Corollary $4.2$]{Liu-Simon-Wang-conformallyflat}.
\end{remark}
\section{Vanishing theorems for conformal Killing and divergence free Codazzi tensors}\label{vanishingsection}
This section defines conformal Killing and trace-free Codazzi tensors and proves vanishing theorems for them.
The vanishing theorems for conformal Killing tensors and trace and divergence free Codazzi tensors are analogous to the somewhat stronger vanishing theorems for symmetric tensors on Kähler manifolds obtained by S. Kobayashi in \cite{Kobayashi-holomorphicsymmetric, Kobayashi-holomorphictensor}.
Because $\clie$ and $\klie$ are constructed by taking trace-free parts they behave well with respect to conformal changes of the metric.
The Levi-Civita connections $\tD$ and $D$ of conformally related pseudo-Riemannian metrics $\tilde{h}_{ij} = fh_{ij}$ are related by $\tD - D = 2\si_{(i}\delta_{j)}\,^{k} - h_{ij}h^{kp}\si_{p}$, where $2\si_{i} = (d\log f)_{i}$ and $\si^{i} = h^{ip}\si_{p}$. Write $\sbl_{\clie}(Z)(\phi)$ for the symbol of $\clie$ applied to the vector $Z^{i}$ and $\phi \in \Ga(\symkt)$, and similarly for $\klie$ and $\div$.
For $\om \in \Ga(\symkt)$, $0 < f \in \cinf(M)$, and $\al \in \rea$, there hold
\begin{align}
\begin{aligned}
&\clie_{\tilde{h}}(f^{\al}\om) = f^{\al}\left(\clie_{h}(\om) + 2(\al -k)\sbl_{\clie}(\si^{\sharp})(\om)\right),&\\
&\klie_{\tilde{h}}(f^{\al}\om) = f^{\al}\left(\klie_{h}(\om) + (2\al + 1-k)\sbl_{\klie}(\si^{\sharp})(\om)\right),\\
&f\div_{\tilde{h}}(f^{\al}\om) = f^{\al}\left(\div_{h}(\om) + (n-2 + 2\al)\imt(\si^{\sharp})\om\right),
\end{aligned}
\end{align}
so that $\clie$, $\klie$, and $\div$ are conformally invariant in the sense that for $0 < f \in \cinf(M)$ there hold
\begin{align}\label{lieconf}
&\clie_{\tilde{h}}(f^{k}\om) = f^{k}\clie_{h}(\om), && \klie_{\tilde{h}}(f^{(k-1)/2}\om) = f^{(k-1)/2}\klie_{h}(\om),&
&f\div_{\tilde{h}}(f^{1-n/2}\om) = f^{1-n/2}\div_{h}(\om).
\end{align}
Define $\clie^{\sharp}:\Ga(\symktv) \to \Ga(\symkptv)$ by $\clie^{\sharp}(X) = \clie(X^{\flat})^{\sharp}$. Then \eqref{lieconf} implies the invariance $f\clie^{\sharp}_{\tilde{h}}(X) = \clie^{\sharp}_{h}(X)$, so that while $\clie^{\sharp}$ depends on $h$, the subspace $\ker \clie^{\sharp} \cap \Ga(\symktv)$ does not. A \emph{conformal Killing tensor} of rank $k$ is an element of $\ker \clie^{\sharp} \cap \Ga(\symktv)$.
A \emph{conformal Codazzi tensor} is an element of $\ker \klie \cap \Ga(\symkt)$. A divergence-free element of $\ker \klie \cap \Ga(\symkt)$ is a \emph{trace-free Codazzi tensor}. These have been studied previously in \cite{Stepanov, Shandra-Stepanov-Mikes}.
\begin{remark}
Except in ranks one and two, conformal Killing tensors are not as well studied as their antisymmetric counterparts, the conformal Killing forms, for which \cite{Semmelmann} is a good reference. Probably their most natural occurrence is as the symbols of symmetries of the Laplacian; see \cite{Eastwood-laplacian, Gover-Silhan, Levasseur-Stafford, Michel-Somberg-Silhan, Sumitomo-Tandai}. For further background on Killing and conformal Killing tensors see also \cite{Heil-Jentsch, Mclenaghan-Milson-Smirnov, Rani-Edgar-Barnes, Schobel, Takeuchi-killing, Woodhouse}.
\end{remark}
The \emph{Cartan product} $\al \cprod \be \in S^{k+l}_{0}\sted$ of $\al \in S^{k}_{0}\std$ and $\be \in S^{l}_{0}\std$ defined by $\al \cprod \be = \tf(\al \sprod \be)$ makes $S_{0}(\sted) =\oplus_{k \geq 0}S^{k}_{0}\std$ into a graded associative algebra (see \cite[Supplement]{Dynkin-maximal} or \cite{Eastwood-cartan} for background). This claim can be justified by showing that the ideal in $(S(\sted), \sprod)$ generated by $h$ equals the kernel $\ker \tf$ of the graded linear map $\tf:S(\sted) \to S_{0}(\sted)$ sending $\al \in S^{k}(\sted)$ to its trace-free part. That the symmetrized covariant derivative is a derivation of the graded algebra of symmetric Killing tensors is \cite[Lemma $1.3$]{Sumitomo-Tandai}. Lemma \ref{cliederivationlemma} is the corresponding statement, that the operator $\clie$ acting on conformal Killing tensors is a derivation with respect to the Cartan product.
\begin{lemma}\label{cliederivationlemma}
On a pseudo-Riemannian manifold $(M, h)$, the operator $\clie$ is a derivation with respect to the Cartan product on $\Ga(S_{0}(\ctm))$. Consequently, on a conformal manifold $(M, [h])$, the subspace $\ck(TM, [h]) = \oplus_{k \geq 0}\ker \clie^{\sharp} \cap \Ga(S^{k}_{0}(TM)) \subset \Ga(\symat(TM))$ comprising finite linear combinations of conformal Killing tensors is a subalgebra with respect to the Cartan product.
\end{lemma}
\begin{proof}
For $\al \in \Ga(\symkt)$ and $\be \in \Ga(\symlt)$, $\al \sprod \be = \tf(\al \sprod \be) + h\sprod \ga$ for some $\ga \in \Ga(S^{k+l-2}(\ctm))$, so
\dps{
\{h, \tf(\al \sprod \be)\} = \{h, \al \sprod \be\} + \{h, h \sprod \ga\} = \{h, \al \sprod \be\} + h \sprod\{h, \ga\}.
}
Hence, because $\tf:(S(\ctm), \sprod) \to (S_{0}(\ctm), \cprod)$ is a graded linear homomorphism,
\begin{align}\label{cprodleibniz}
\begin{split}
2\clie(\al \cprod \be)&= \tf \{h, \al \cprod \be\} = \tf \{h, \tf(\al\sprod \be)\} = \tf \{h, \al \sprod \be\} = \tf\left(\{h, \al\}\sprod \be + \al \sprod \{h, \be\}\right) \\
&= \tf (\{h, \al\})\cprod \be + \al \cprod \tf (\{h, \be\}) = 2\left(\clie(\al)\cprod \be + \al \cprod \clie(\be)\right).
\end{split}
\end{align}
The identity \eqref{cprodleibniz} shows $\ck(\ctm, h) = \oplus_{k \geq 0}\ker \clie \cap \Ga(\symkt) \subset \Ga(\symat(\ctm))$ is a subalgebra under the Cartan product. While $\ck(\ctm, h)$ depends on the choice of $h \in [h]$, it is linearly isomorphic to $\ck(\ctm, fh)$, for $0 < f \in \cinf(M)$, by the graded linear map sending $\om_{i_{1}\dots i_{k}}$ to $f^{k}\om_{i_{1}\dots i_{k}}$, and both are identified with $\ck(TM, [h])$ via index raising, so $\clie^{\sharp}$ is a derivation of $S_{0}(TM)$ with kernel $\ck(TM, [h])$.
\end{proof}
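For example (an immediate consequence of Lemma \ref{cliederivationlemma}), if $X$ and $Y$ are conformal Killing vector fields, then their Cartan product
\begin{align*}
(X \cprod Y)^{ij} = X^{(i}Y^{j)} - \tfrac{1}{n}X^{p}Y_{p}h^{ij}
\end{align*}
is a rank two conformal Killing tensor.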
It is convenient to say that $\qR$ is positive or negative (semi-)definite on $\symk$ or $\symkt$ if it is positive (semi-)definite or negative (semi-)definite as a quadratic form on $\Ga(\symk)$ or $\Ga(\symkt)$. Since, by \eqref{hycommute}, $\qR(h^{\sprod k}) = 0$ for any $k \geq 1$, $\qR$ is not definite on $\Ga(S^{2k}(\ctm))$ for any $k \geq 1$.
\begin{theorem}[\cite{Fox-ahs}]\label{bochnerliedivtheorem}
Let $(M, h)$ be a compact Riemannian manifold of dimension $n > 2$.
\begin{enumerate}
\item\label{bcc1} If $\qR$ is nonnegative on $S^{k}_{0}(\ctm)$ then any $\om \in \Ga(\symkt) \cap \ker \klie \cap \ker \div$ is parallel. If moreover $\qR$ is at some point of $M$ strictly positive on $S^{k}_{0}(\ctm)$ then $\Ga(\symkt) \cap \ker \klie \cap \ker \div = \{0\}$.
\item\label{bcc2} If $\qR$ is nonpositive on $S^{k}_{0}(\ctm)$ then any rank $k$ conformal Killing tensor is parallel, and if, moreover, $\qR$ is at some point of $M$ strictly negative on $S^{k}_{0}(\ctm)$, then any rank $k$ conformal Killing tensor is identically zero.
\end{enumerate}
\end{theorem}
\begin{proof}
For a compactly supported $\om \in \Ga(\symkt)$, integrating any of \eqref{lapomsq}-\eqref{lapomliediv} by parts against the Riemannian volume $\vol_{h}$ and simplifying the result using \eqref{normdom} yields
\begin{align}\label{bigbochner}
\begin{split}
\tfrac{2}{k+1}\iln\klie(\om)\irn^{2} & + \tfrac{(n+k-2)(n+2(k-2))}{(n+k-3)(n+2(k-1))}\iln\div(\om)\irn^{2} - \iln\clie(\om)\irn^{2} = \int_{M}\qR(\om) \,d\vol_{h}.
\end{split}
\end{align}
The identity \eqref{bigbochner} generalizes the usual integrated Bochner identities for harmonic one-forms and conformal Killing vector fields.
If $\om \in \ker \klie \cap \ker \div$ and $\qR \geq 0$ then \eqref{bigbochner} shows that $\om \in \ker \clie$ and from \eqref{normdom} it follows that $D\om = 0$. If moreover $\qR$ is somewhere positive then \eqref{lapomliediv} shows $\om \equiv 0$. If $\om \in \ker \clie$ and $\qR \leq 0$ then \eqref{bigbochner} shows that $\om \in \ker \klie \cap \ker \div$ and from \eqref{normdom} it follows that $D\om = 0$. If, moreover, $\qR$ is somewhere negative then \eqref{lapomdivlie} shows $\om \equiv 0$.
\end{proof}
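For orientation, when $k = 1$ and $n > 2$, using $\klie(\ga) = \tfrac{1}{2}d\ga$ and $\qR(\ga) = \sR^{ij}\ga_{i}\ga_{j}$, the identity \eqref{bigbochner} becomes the classical integrated Bochner identity for one-forms:
\begin{align*}
\tfrac{1}{4}\iln d\ga \irn^{2} + \tfrac{n-1}{n}\iln \div(\ga)\irn^{2} - \iln \clie(\ga)\irn^{2} = \int_{M}\sR^{ij}\ga_{i}\ga_{j}\,d\vol_{h}.
\end{align*}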
\begin{remark}
A result very similar to \eqref{bcc1} of Theorem \ref{bochnerliedivtheorem} was obtained in \cite[Theorem $2$]{Stepanov}. The difference is that the operator $\op{\sR}$ and corresponding quadratic form $\qR$ considered in \cite{Stepanov} are slightly different (they differ by a term involving the Ricci curvature). The claim here is very slightly more general, but, when the curvature is assumed to have a definite sign, since this sign is inherited by the Ricci tensor, the ambit of application of the claims is the same.
\end{remark}
\begin{corollary}\label{bochnercorollary}
Let $h$ be a Riemannian metric on a compact manifold $M$ of dimension $n > 2$.
\begin{enumerate}
\item\label{bcc1b}(Stepanov, \cite[Theorem $2$]{Stepanov}; see also \cite[Corollary $1$]{Shandra-Stepanov-Mikes}) If $h$ has nonnegative sectional curvature, then $\om \in \Ga(\symkt) \cap \ker \klie \cap \ker \div$ is parallel. If, moreover, the sectional curvature is strictly positive at some point of $M$ then $\Ga(\symkt) \cap \ker \klie \cap \ker \div = \{0\}$.
\item(Dairbekov-Sharafutdinov, \cite[Theorem $1.6$]{Dairbekov-Sharafutdinov}; see also \cite[Proposition $6.6$]{Heil-Moroianu-Semmelmann})\label{bcc2b} If $h$ has nonpositive sectional curvature, then a rank $k$ conformal Killing tensor is parallel, and if, moreover, the sectional curvature is strictly negative at some point of $M$, then a rank $k$ conformal Killing tensor is identically zero.
\end{enumerate}
\end{corollary}
\begin{proof}
Both claims follow from Theorem \ref{bochnerliedivtheorem} once it is known that a sign condition on the sectional curvature of $h$ implies the same sign condition for $\qR$ on $\Ga(\symkt)$. When $k = 2$ this follows from the proof of \cite[Proposition $6.1$]{Berger-Ebin}, but that argument (which is direct) does not extend straightforwardly to the case $k > 2$. For conformal Killing tensors, claim \eqref{bcc2b} is \cite[Theorem $1.6$]{Dairbekov-Sharafutdinov}. A proof of the result of Dairbekov-Sharafutdinov for all $k$ based on Weitzenböck formulas as in Theorem \ref{bochnerliedivtheorem} was given as \cite[Proposition $6.6$]{Heil-Moroianu-Semmelmann}; they show via an elegant integration in the fibers argument that the hypothesis of claim \eqref{bcc2} of Theorem \ref{bochnerliedivtheorem} follows from the nonpositivity of the sectional curvature. Their argument works equally well assuming nonnegativity of the sectional curvature, and combined with Theorem \ref{bochnerliedivtheorem}, this yields Corollary \ref{bochnercorollary}. Alternatively, in the case of trace-free Codazzi tensors, the proof of \cite[Corollary $1$]{Shandra-Stepanov-Mikes} shows how to deduce the required nonnegativity for $k > 2$ from the Berger-Ebin argument.
\end{proof}
\begin{remark}
The $n = 2$ case of Corollary \ref{bochnercorollary} was proved in \cite[section $3$]{Fox-2dahs}.
The $k = 2$ case of \eqref{bcc1b} of Corollary \ref{bochnercorollary} is stated as \cite[Theorem $5.1$]{Liu-Simon-Wang-conformallyflat}, where this statement is generalized to higher rank traceless Codazzi tensors supposing the background metric is conformally flat.
\end{remark}
\section{Coupling Einstein equations to symmetric tensors}\label{couplingsection}
This section describes Einstein-like equations coupling a metric to a trace-free symmetric $k$-tensor.
Given $\om \in \Ga(\symkt)$ define one-forms by
\begin{align}
\begin{split}
&\kcons_{h}(\om)_{i} = \om^{p_{1}\dots p_{k}}\klie_{h}(\om)_{ip_{1}\dots p_{k}}, \qquad \ccons_{h}(\om)_{i} = \om^{p_{1}\dots p_{k}}\clie_{h}(\om)_{ip_{1}\dots p_{k}}, \\
&\divcons_{h}(\om)_{i} = \om_{i}\,^{p_{1}\dots p_{k-1}}\div_{h}(\om)_{p_{1}\dots p_{k-1}}.
\end{split}
\end{align}
By \eqref{lieconf}, with $\si_{i} = \tfrac{1}{2}(d\log f)_{i}$,
\begin{align}
\begin{split}
f\kcons_{fh}(f^{\al}\om) & = f^{2\al + 1 -k}\left( \kcons_{h}(\om) + \tfrac{2\al + 1 - k}{2}\left(|\om|^{2}_{h}\si_{i} - \tfrac{n+2(k-2)}{n+k-3}\imt(\si^{\sharp})\rictr(\om \kwedge \om) \right)\right),\\
f\ccons_{fh}(f^{\al}\om) & = f^{2\al + 1 -k}\left( \ccons_{h}(\om) + \tfrac{2(\al - k)k}{k+1}\left(|\om|^{2}_{h}\si_{i} + \tfrac{n+2(k-2)}{n+2(k-1)}\imt(\si^{\sharp})\rictr(\om \kwedge \om) \right)\right),\\
f\divcons_{fh}(f^{\al}\om) & = f^{2\al + 1-k}\left( \divcons_{h}(\om) + (n -2 + 2\al)\imt(\si^{\sharp})\rictr(\om \kwedge \om) \right).
\end{split}
\end{align}
\begin{lemma}\label{divrictrlemma}
Let $M$ be a manifold of dimension $n \geq 2$ and let $h$ be a pseudo-Riemannian metric. For $\om \in \Ga(\symkt)$ there hold
\begin{align}
\label{divrictr}
\begin{split}
\div(\rictr(\om \kwedge \om))_{i} & = -\tfrac{2}{k+1}\kcons(\om)_{i} + \ccons(\om)_{i} + \left(1 + \tfrac{n-2}{(n+k-3)(n + 2(k-1))}\right)\divcons(\om)_{i}\\
& = -\tfrac{2}{k+1}\kcons(\om)_{i} + \ccons(\om)_{i} + \tfrac{(n+ k-2)(n+ 2(k-2)) + 2(n-2)}{(n+k-3)(n + 2(k-1))}\divcons(\om)_{i},
\end{split}\\
\label{divnorm}\tfrac{1}{2}\div(|\om|^{2}h)_{i} & = \tfrac{1}{2}D_{i}|\om|^{2} = \tfrac{2k}{k+1}\kcons(\om)_{i} + \ccons(\om)_{i} + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}\divcons(\om)_{i},
\end{align}
\begin{align}
\label{divrictrk}\tfrac{1}{2}D_{i}|\om|^{2} - \div(\rictr(\om \kwedge \om))_{i} & = 2\kcons(\om)_{i} - \tfrac{n-2}{n+k-3}\divcons(\om)_{i},\\
\label{divrictrc}\tfrac{1}{2k}D_{i}|\om|^{2} + \div(\rictr(\om \kwedge \om))_{i} & = \tfrac{k+1}{k}\ccons(\om)_{i} + \tfrac{n+ 2k}{n+2(k-1)}\divcons(\om)_{i},
\end{align}
where when $k = 1$ \eqref{divrictr}-\eqref{divrictrc} make sense as written if $\rictr(\om \kwedge \om)_{ij}$ is interpreted as $\om_{i}\om_{j}$.
\end{lemma}
\begin{proof}
Suppose $k \geq 2$ and let $\om \in \Ga(\symkt)$.
Contracting $\om_{i_{1}\dots i_{k}}$ with $D_{i}\om_{i_{1}\dots i_{k}}$ and using \eqref{domkl} yields
\begin{align}
\begin{split}
\tfrac{1}{2}D_{i}|\om|^{2} & = \om^{i_{1}\dots i_{k}}D_{i}\om_{i_{1}\dots i_{k}} = \tfrac{2k}{k+1}\kcons(\om)_{i} +\ccons(\om)_{i} + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}\divcons(\om)_{i},
\end{split}
\end{align}
which is \eqref{divnorm}. Contracting $\om_{i_{1}\dots i_{k}}$ with $D_{i_{1}}\om_{i_{2}\dots i_{k}i}$ and using \eqref{domkl} yields
\begin{align}\label{divnormpre}
\begin{split}
\om^{i_{1}\dots i_{k}}D_{i_{1}}\om_{i_{2}\dots i_{k}i} & = - \tfrac{2}{k+1}\kcons(\om)_{i} + \ccons(\om)_{i} + \tfrac{n-2}{(n+k-3)(n+2(k-1))}\divcons(\om)_{i}.
\end{split}
\end{align}
Differentiating $\rictr(\om \kwedge \om)_{ij} = \om_{ia_{1}\dots a_{k-1}}\om_{j}\,^{a_{1}\dots a_{k-1}}$ yields
\begin{align}\label{divrictrom}
\begin{split}
\om^{i_{1}\dots i_{k}}D_{i_{1}}\om_{i_{2}\dots i_{k}i} &
= \div(\rictr(\om \kwedge \om))_{i} - \divcons(\om)_{i}.
\end{split}
\end{align}
Substituting \eqref{divrictrom} in \eqref{divnormpre} yields \eqref{divrictr}. Taking linear combinations of \eqref{divrictr} and \eqref{divnorm} in different ways yields \eqref{divrictrk} and \eqref{divrictrc}.
The $k = 1$ cases of \eqref{divrictrk} and \eqref{divrictrc} follow from
\begin{align}
\begin{split}
\div(\ga \tensor \ga - \tfrac{1}{2}|\ga|^{2}_{h}h)_{i} & = \div(\ga)\ga_{i} - 2\ga^{p}D_{[i}\ga_{p]} = \div(\ga)\ga_{i} - 2\ga^{p}\klie(\ga)_{ip},\\
\div(\ga \tensor \ga + \tfrac{1}{2}|\ga|^{2}_{h}h)_{i} & = \div(\ga)\ga_{i} + 2\ga^{p}D_{(i}\ga_{p)} = \tfrac{n+2}{n}\div(\ga)\ga_{i} + 2\ga^{p}\clie(\ga)_{ip},
\end{split}
\end{align}
valid for any $\ga \in \Ga(\ctm)$.
\end{proof}
\begin{remark}
Among the possible linear combinations of \eqref{divrictr} and \eqref{divnorm}, the combinations \eqref{divrictrc} and \eqref{divrictrk} are distinguished by the absence of $\kcons(\om)$ or $\ccons(\om)$. This is relevant because the conformal scaling of $\kcons(\om)$, $\ccons(\om)$, and $\divcons(\om)$ is different. The first two rescale in a way that depends on $k$, while the last rescales in a way that depends only on the dimension. This means that given a conformal structure $[h]$ it makes sense to impose that either $\kcons(\om)$ or $\ccons(\om)$ vanish, but it does not make sense to require that both vanish. The vanishing of $\divcons(\om)$ can then be treated as a condition selecting a metric within the conformal class.
\end{remark}
Recall the convention $S^{1}_{0}\std = \std$. Define linear operators $\stpm:\Ga(\symkt) \to \Ga(S^{2}\ctm)$ by
\begin{align}\label{stmpdefined}
\begin{aligned}
\stp(\om)_{ij} &= \begin{cases}
(\om \tensor \om)_{ij} - \tfrac{1}{2}|\om|^{2}_{h}h_{ij} & k = 1,\\
\rictr(\om \kwedge \om)_{ij} - \tfrac{1}{2}|\om|^{2}_{h}h_{ij} & k \geq 2,
\end{cases}&&
\stm(\om)_{ij} &= \begin{cases}
(\om \tensor \om)_{ij} + \tfrac{1}{2}|\om|^{2}_{h}h_{ij} & k = 1,\\
\rictr(\om \kwedge \om)_{ij} + \tfrac{1}{2k}|\om|^{2}_{h}h_{ij} & k \geq 2.
\end{cases}
\end{aligned}
\end{align}
Note the asymmetry in the definitions; there is a $k$ in the definition of $\stm$ that is not present in the definition of $\stp$.
The $\pm$ in the notation is motivated by thinking of sections of $S^{k}_{0}TM$ as sections of $S^{-k}_{0}\ctm$ (although this has no sense), so corresponding to negative integers. By definition,
\begin{align}\label{trstmp}
&\tr \stp(\om) = \tfrac{2-n}{2}|\om|^{2}_{h}, & &\tr \stm(\om) = \tfrac{n+2k}{2k}|\om|^{2}_{h}.
\end{align}
The operators $\stpm$ depend on $h$. When it is necessary to indicate this dependence there is written $\stpm_{h}$. From the identities, for $\om \in \Ga(\symkt)$,
\begin{align}
&\om\kwedge_{fh}\om = f^{2-k}\om\kwedge \om,& &\rictr_{fh}(\om\kwedge_{fh}\om) = f^{1-k}\rictr_{h}(\om\kwedge \om), & & |\om|_{fh}^{2} = f^{-k}|\om|^{2}_{h},
\end{align}
for $0 < f \in \cinf(M)$, it follows that the operators $\stpm$ are conformally invariant in the sense that
\begin{align}
&\stpm_{fh}(f^{\al}\om)_{ij} = f^{2\al + 1-k}\stpm_{h}(\om)_{ij}, & &\om \in \Ga(\symkt).
\end{align}
\begin{corollary}\label{divsicorollary}
Let $(M, h)$ be a pseudo-Riemannian manifold of dimension $n \geq 2$. For $\om \in \Ga(\symkt)$,
\begin{align}
\label{divstpm}
&\div(\stp(\om))_{i} = -2\kcons(\om)_{i} + \tfrac{n-2}{n+k-3}\divcons(\om)_{i},&&
\div(\stm(\om))_{i} = \tfrac{k+1}{k}\ccons(\om)_{i} + \tfrac{n+ 2k}{n+2(k-1)}\divcons(\om)_{i}.
\end{align}
\end{corollary}
\begin{proof}
This is immediate from Lemma \ref{divrictrlemma} and \eqref{stmpdefined}.
\end{proof}
\begin{lemma}\label{tracefreeconstantlemma}
Let $(M, h)$ be a pseudo-Riemannian manifold of dimension $n \geq 3$. Consider sequences of tensors $\{\om^{(k)}\in\Ga(S^{k}_{0}(\ctm)): k \geq 1\}$ and $\{\ga^{(k)}\in\Ga(S^{k}_{0}(\ctm)): k \geq 1\}$ such that
\begin{itemize}
\item $\om^{(k)} \in\Ga(S^{k}_{0}(\ctm)) \cap \ker \divcons \cap \ker \kcons$,
\item $\ga^{(k)} \in\Ga(S^{k}_{0}(\ctm)) \cap \ker \divcons \cap \ker \ccons$, and
\item the tensors $\stcod_{ij} = \sum_{k \geq 1}a_{k}\stp(\om^{(k)})_{ij}$ and $\stkill_{ij} = \sum_{k \geq 1}b_{k}\stm(\ga^{(k)})_{ij}$ converge pointwise to smooth sections of $S^{2}(\ctm)$ for some sequences $\{a_{k} \in \rea: k \geq 1\}$ and $\{b_{k} \in \rea: k \geq 1\}$.
\end{itemize}
Then the equations
\begin{align}\label{stressenergy2}
\sR_{ij} - \tfrac{1}{2}\sR_{h}h_{ij} + \tfrac{n-2}{2n}\ka h_{ij} = \stcod_{ij} + \stkill_{ij}
\end{align}
are consistent and
\begin{align}\label{stressenergyconstant}
\ka = \sR_{h} - \sum_{k \geq 1}a_{k}|\om^{(k)}|_{h}^{2} + \sum_{k \geq 1}\tfrac{n+2k}{k(n-2)}b_{k}|\ga^{(k)}|^{2}_{h}
\end{align}
is a constant.
If $n > 2$, the equations \eqref{stressenergy2} hold for some $\ka \in \rea$ if and only if there is a function $\ka \in \cinf(M)$ such that
\begin{align}\label{tracefreericcondition}
\sR_{ij} - a_{1}\om^{(1)}_{i}\om^{(1)}_{j} - \sum_{k \geq 2}a_{k}\rictr(\om^{(k)}\kwedge \om^{(k)})_{ij} - b_{1}\ga^{(1)}_{i}\ga^{(1)}_{j} - \sum_{k \geq 2}b_{k}\rictr(\ga^{(k)}\kwedge \ga^{(k)})_{ij} = \tfrac{\ka}{n} h_{ij},
\end{align}
in which case $\ka$ has the form \eqref{stressenergyconstant}.
\end{lemma}
\begin{remark}
Note that the equations \eqref{stressenergy2} make sense for $h$ of any signature, although for indefinite signature $h$ the expressions $|\om^{(k)}|^{2}_{h}$ need not be positive.
\end{remark}
\begin{proof}[Proof of Lemma \ref{tracefreeconstantlemma}]
By Corollary \ref{divsicorollary}, $\stcod_{ij}$ and $\stkill_{ij}$ are divergence free. Taking the divergence of both sides of \eqref{stressenergy2} shows that $\ka$ must be constant if \eqref{stressenergy2} is to admit solutions. The form \eqref{stressenergyconstant} for $\ka$ follows by tracing both sides of \eqref{stressenergy2}. Evidently \eqref{stressenergy2} implies \eqref{tracefreericcondition}. If there holds \eqref{tracefreericcondition}, taking the divergence of \eqref{tracefreericcondition} and using the traced differential Bianchi identity $2D^{p}\sR_{ip} = D_{i}\sR_{h}$ and Corollary \ref{divsicorollary}, shows
\begin{align}
0 = \tfrac{n-2}{2n}D_{i}\left( \sR_{h} - \sum_{k \geq 1}a_{k}|\om^{(k)}|_{h}^{2} + \sum_{k \geq 1}\tfrac{n+2k}{k(n-2)}b_{k}|\ga^{(k)}|^{2}_{h} \right),
\end{align}
so that, since $n > 2$, $\ka$ as in \eqref{stressenergyconstant} is constant. With \eqref{tracefreericcondition} this implies \eqref{stressenergy2}.
\end{proof}
\begin{corollary}\label{hypothesiscorollary}
If $h$ is a Riemannian metric on a compact manifold $M$ of dimension $n \geq 2$, and $\om \in \Ga(\symkt)\cap \ker(\lap_{h} - \op{\sR})$, then $\om$ satisfies the hypotheses of Lemma \ref{tracefreeconstantlemma}.
\end{corollary}
\begin{proof}
Since $M$ is compact, and $\lap_{h} - \op{\sR} = \culap_{-1}$, Lemma \ref{culaplemma} shows that $\ker(\lap_{h} - \op{\sR}) \cap \Ga(S^{k}_{0}(\ctm)) = \ker \klie \cap \ker \div \cap \Ga(S^{k}_{0}(\ctm))$.
\end{proof}
Let $\sT_{ij} \in \Ga(S^{2}\ctm)$. A pseudo-Riemannian metric $g_{ij}$ solves the \emph{Einstein equations with energy momentum tensor $\sT_{ij}$ and cosmological constant $\cosmo$} if
\begin{align}
\sR_{ij} - \tfrac{1}{2}\sR_{g}g_{ij} = 8\pi \sT_{ij} - \cosmo g_{ij}.
\end{align}
In this case, it follows from the traced differential Bianchi identity that $\sT_{ij}$ is divergence free.
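Explicitly, this is a one-line computation with the traced differential Bianchi identity $2D^{p}\sR_{ip} = D_{i}\sR_{g}$:
\begin{align*}
8\pi D^{p}\sT_{ip} = D^{p}\left(\sR_{ip} - \tfrac{1}{2}\sR_{g}g_{ip} + \cosmo g_{ip}\right) = \tfrac{1}{2}D_{i}\sR_{g} - \tfrac{1}{2}D_{i}\sR_{g} = 0.
\end{align*}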
\begin{corollary}
On an $n$-dimensional manifold $M$ a pair $(h, \om)$ comprising a pseudo-Riemannian metric $h_{ij}$ and a tensor $\om \in \Ga(\symkt)$ solves the system
\begin{align}\label{stressenergy}
\begin{split}
0 & = -2(n+ k-3)\kcons(\om) + (n-2)\divcons(\om),\\
\sR_{ij} &- \tfrac{1}{2}\sR_{h}h_{ij} + \tfrac{n-2}{2n}\ka h_{ij} = c\left(\rictr(\om\kwedge \om)_{ij} - \tfrac{1}{2}|\om|_{h}^{2}h_{ij}\right) = c\stp(\om)_{ij},
\end{split}
\end{align}
for some real constants $\ka$ and $c$ if and only if $h$ solves the Einstein equations with energy momentum tensor $\tfrac{c}{8\pi}\stp(\om)_{ij}$ and cosmological constant $\cosmo = \tfrac{n-2}{2n}\ka = \tfrac{n-2}{2n}\left(\sR_{h} - c|\om|^{2}_{h}\right)$.
\end{corollary}
The system \eqref{stressenergy} has been written so as to make readily apparent its formal resemblance to the Einstein-Maxwell system for a metric and a two-form.
\begin{remark}\label{signremark}
In \eqref{stressenergy}, the absolute value of $c$ can always be absorbed into $\om$, but its sign cannot, and the qualitative properties of the solutions of the resulting equations depend on this sign.
If $c$ has the wrong sign, the solutions of \eqref{stressenergy} may not admit any possible physical interpretation. Physical considerations focus attention on solutions of the Einstein equations for which the energy momentum tensor satisfies some energy condition, such as the \emph{weak energy condition} that $x^{i}x^{j}\sT_{ij} \geq 0$ for all timelike vector fields $x^{i}$, where $x^{i}$ is \emph{timelike} if $|x|^{2}_{h} < 0$. (Such a condition is vacuous if $h_{ij}$ is Riemannian.) Changing the sign of $c$ destroys such a condition. However, as examples coming from the study of submanifolds show, the equations \eqref{stressenergy} have mathematical interest even without such an energy condition (see Example \ref{constraintequationsexample}).
\end{remark}
\begin{example}
In the case $k = 1$, for a solution $(h, \om)$ of \eqref{stressenergy}, the one-form $\om$ is $h$-harmonic. If, moreover, $\om = d\phi$ for some $\phi \in \cinf(M)$, then $\phi$ solves the wave equation $\lap_{h}\phi = 0$, and in Lorentzian signature can be interpreted as a massless scalar field, so the pair $(h, d\phi)$ satisfies the Einstein scalar field equations. In the Riemannian case, were $M$ compact, then $\phi$ would be harmonic, so constant, and $\om$ would vanish identically, but if $\om$ is not required to be exact, there can still be interesting solutions in Riemannian signature.
\end{example}
Thinking of $\om \in \Ga(\symkt)$ as an $S^{k-1}_{0}(\ctm)$-valued one-form, $(\om \kwedge \om)_{ijkl}$ can be viewed as a curvature term.
Lemma \ref{projectivehiggslemma} shows that a metric and $\om \in \Ga(\symkt)\cap \ker \div \cap \ker \klie$ such that the modified curvature $\sR_{ijkl} - \tfrac{1}{4}(\om \kwedge \om)_{ijkl}$ is projectively flat, meaning it is a multiple of $(h\kwedge h)_{ijkl}$, yield a solution of \eqref{stressenergy}.
\begin{lemma}\label{projectivehiggslemma}
Let $h_{ij}$ be a pseudo-Riemannian metric on a manifold $M$ of dimension $n \geq 3$. Suppose $\om \in\Ga(\symkt) \cap \ker \div$ satisfies $\om^{a_{1}\dots a_{k}}\klie(\om)_{ia_{1}\dots a_{k}} = 0$ and that there is $\ka \in \cinf(M)$ such that the curvature $\sR_{ijkl}$ of $h_{ij}$ satisfies
\begin{align}\label{projectivehiggs}
\sR_{ijkl} - c(\om \kwedge \om)_{ijkl} = -\tfrac{\ka}{n(n-1)}(h\kwedge h)_{ijkl},
\end{align}
for some $c \in \rea$.
Then $h$ and $\om$ solve the equations \eqref{stressenergy}. In particular, $\ka$ is a constant.
If $n = 3$, then $h$ and $\om$ solve \eqref{projectivehiggs} for $c \in \rea$ if and only if they solve \eqref{stressenergy}.
\end{lemma}
\begin{proof}
By \eqref{rictralbe}, tracing \eqref{projectivehiggs} yields $\sR_{ij} = \tfrac{\ka}{n}h_{ij} + c\rictr(\om \kwedge \om)_{ij}$. Tracing this yields $\sR_{h} = \ka + c|\om|^{2}_{h}$, and calculating the trace-free part of $\sR_{ij}$ yields the last equation of \eqref{stressenergy}. From Lemma \ref{tracefreeconstantlemma} it follows that $\ka$ is a constant.
If $(h, \om)$ solves \eqref{stressenergy}, then, by \eqref{tfweyl} and \eqref{stressenergy},
\begin{align}\label{stressenergyinverse}
\begin{split}
&\sR_{ijkl} - c(\om \kwedge \om)_{ijkl}\\
&= \sW_{ijkl} - c\tf(\om \kwedge \om)_{ijkl} -\tfrac{2}{n-2}\tf\left( \rictr(\sR) - c\rictr(\om \kwedge \om)\right) \kwedge h - \tfrac{1}{n(n-1)}\left( \sR_{h} - c|\om|^{2}_{h}\right) h \kwedge h \\
&= \sW_{ijkl} - c\tf(\om \kwedge \om)_{ijkl} - \tfrac{\ka}{n(n-1)} h \kwedge h,
\end{split}
\end{align}
where the conformal Weyl tensor $\sW_{ijkl}$ of $h$ is the trace-free part of $\sR_{ijkl}$. If $n = 3$, then there vanish $\sW_{ijkl}$ and $\tf(\om \kwedge \om)_{ijkl}$, and \eqref{stressenergyinverse} shows that $h$ and $\om$ solve \eqref{projectivehiggs}.
\end{proof}
The qualitative properties of solutions of \eqref{projectivehiggs} depend strongly on the signs of the parameters $c$ and $\ka$.
\section{Examples of solutions to the coupled equations}\label{examplesection}
This section records examples of solutions of the equations \eqref{stressenergy}.
\begin{example}\label{constraintequationsexample}
Lemma \ref{hypersurfacelemma} yields solutions of \eqref{stressenergy} with $c$ having either sign.
\begin{lemma}\label{hypersurfacelemma}
Let $(N, g)$ be an $(n+1)$-dimensional pseudo-Riemannian space form with scalar curvature $\sR_{g}$ and let $i:M \to N$ be an immersion of an $n$-dimensional hypersurface $M$ such that $h_{ij} = i^{\ast}(g)_{ij}$ is nondegenerate. Let $\Pi_{ij}$ be the second fundamental form of the immersion $i$ defined with respect to a unimodular transverse vector field $Z^{I}$ orthogonal to $i(M)$ and satisfying $\ep = |Z|^{2}_{g} \in \{\pm 1\}$. If $g$ has constant curvature $\hat{R}_{IJK}\,^{L}= -\tfrac{2\sR_{g}}{n(n+1)}g_{K[I}\delta_{J]}\,^{L}$, and $i(M)$ has mean curvature zero, then $(h, \Pi)$ solves \eqref{projectivehiggs} with $c = -\ep$ and $\ka = \tfrac{n-1}{n+1}\sR_{g}$.
\end{lemma}
\begin{proof}
Let $\Pi_{ij}$ and $S_{i}\,^{j}$ be the second fundamental form and shape operator with respect to the normal field $Z^{I}$. Then $\Pi_{ij} = \ep S_{i}\,^{p}h_{pj}$. Let $D$ be the Levi-Civita connection of $h_{ij}$. The Gauss-Codazzi equations imply that $D_{[i}\Pi_{j]k} =0$ and $\sR_{ijkl} = \sR_{ijk}\,^{p}h_{pl} = -\ep(\Pi \kwedge \Pi)_{ijkl} - \tfrac{\sR_{g}}{n(n+1)}(h \kwedge h)_{ijkl}$. If the immersion has mean curvature zero, then $\Pi_{p}\,^{p} = 0$, so $D^{p}\Pi_{ip} = D_{i}\Pi_{p}\,^{p} = 0$ and $\Pi_{ij} \in \Ga(S^{2}_{0}(\ctm))\cap \ker \div \cap \ker \klie$.
\end{proof}
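For instance (a standard special case, recorded as an illustration), a minimally immersed hypersurface of the round unit sphere $S^{n+1}$, for which $\ep = 1$ and $\sR_{g} = n(n+1)$, yields by Lemma \ref{hypersurfacelemma} a solution of \eqref{projectivehiggs} with $c = -1$ and $\ka = n(n-1)$, that is,
\begin{align*}
\sR_{ijkl} + (\Pi \kwedge \Pi)_{ijkl} = -(h \kwedge h)_{ijkl}.
\end{align*}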
\end{example}
\begin{example}\label{affinesphereexample}
The notions described next are special cases of more general ones introduced in \cite{Fox-ahs}.
Let $h_{ij}$ be a pseudo-Riemannian metric and let $\nabla$ be a torsion-free affine connection satisfying $\nabla_{[i}h_{j]k} = 0$ and $\nabla_{i}\det h = 0$. The torsion-free \emph{conjugate} connection $\bnabla$ defined by $\bnabla = \nabla + h^{kp}\nabla_{i}h_{jp}$ satisfies $\bnabla_{i}h_{jk} = -\nabla_{i}h_{jk}$, so also $\bnabla_{[i}h_{j]k} = 0$ and $\bnabla_{i}\det h = 0$. The conjugate connection of $\bnabla$ is $\nabla$, so conjugacy is an involution on the space of torsion-free affine connections satisfying $\nabla_{[i}h_{j]k} = 0$ and $\nabla_{i}\det h = 0$. Such a connection has \emph{self-conjugate} curvature if its curvature tensor equals the curvature tensor of its conjugate connection.
A cooriented $n$-dimensional hypersurface $M$ in $(n+1)$-dimensional flat affine space equipped with the standard parallel volume form is \emph{nondegenerate} if its second fundamental form is everywhere nondegenerate. This condition does not depend on the choice of a vector field transverse to the hypersurface. With the given coorientation, the second fundamental form determines on $M$ a conformal structure $[h]$. A choice of transverse vector field determines an induced torsion-free affine connection, $\nabla$, and a pseudo-Riemannian metric, $h$, representing $[h]$ (the second fundamental form with respect to the given transversal). The equiaffine normal is the transverse vector field determined by the requirements that it be consistent with the given coorientation and satisfy $\nabla_{[i}h_{j]k} = 0$ and $\nabla_{i}\det h = 0$. The last condition is equivalent to the requirement that the volume density determined by $h$ equals that determined by interior multiplying the transversal in the ambient volume form. The connection, $\nabla$, and metric, $h$, associated with the equiaffine normal are called the \emph{equiaffine connection} and \emph{equiaffine (or Blaschke) metric}.
The conormal map associating with $p \in M$ the annihilator of the tangent space $T_{p}M$ takes values in the projectivization of the vector space $\ste$ dual to the ambient affine space. Because $M$ is nondegenerate this conormal map is an immersion and the connection $\bnabla$ conjugate to $\nabla$ represents the flat projective structure on the hypersurface obtained by pulling back that on the projectivization $\proj(\ste)$. In particular $\bnabla$ is projectively flat (one says that $\nabla$ is conjugate projectively flat).
Let $\tfrac{1}{2}L_{ij}\,^{k}$ be the difference tensor of the Levi-Civita connection, $D$, of the equiaffine metric and the equiaffine connection, $\nabla$, so that $D = \nabla + \tfrac{1}{2}L_{ij}\,^{k}$, and write $L_{ijk} = L_{ij}\,^{p}h_{pk}$. Then $\nabla_{i}h_{jk} = L_{i(jk)}$, and from $\nabla_{[i}h_{j]k} = 0$ it follows that $L_{ijk} = L_{(ijk)}$, while from $\nabla_{i}\det h = 0$ it follows that $L_{ip}\,^{p} =0$, so that $L_{ijk} \in \Ga(S^{3}_{0}(\ctm))$. The tensor $L_{ijk}$ is known as the \emph{Fubini-Pick form}; the constant factor $1/2$ is conventional.
The \emph{equiaffine mean curvature}, $\amc$, of $M$ is the average of the eigenvalues of the shape operator associated with the equiaffine normal.
The hypersurface $M$ is an \emph{affine sphere} if the lines spanned by its equiaffine normals are all parallel or all meet in a point (called the \emph{center} of the affine sphere). Equivalently, the equiaffine shape operator is a multiple of the identity endomorphism (in this case the multiple equals the equiaffine mean curvature and is constant). The curvature of the equiaffine metric $h$ of an affine sphere satisfies $\sR_{ijkl} - \tfrac{1}{4}(L \kwedge L)_{ijkl} = -2\amc(h \kwedge h)_{ijkl}$, so $(h, L)$ is a pair satisfying the hypotheses of Lemma \ref{projectivehiggslemma}. For background on affine spheres see \cite{Calabi-completeaffine, Loftin-survey} (these treat only the case of convex affine spheres, but the local computations are the same in any signature).
A convex affine sphere $M$ cooriented to the concave side and having equiaffine mean curvature $\amc < 0$ is said to be \emph{hyperbolic}. By a theorem due to S.~Y. Cheng and S.-T. Yau \cite{Cheng-Yau-mongeampere, Cheng-Yau-affinehyperspheresI, Loftin-survey}, the interior of a pointed proper open convex cone is foliated by hyperbolic affine spheres having center at its vertex, asymptotic to the cone, and for which the equiaffine metric is complete. The Cheng-Yau theorem on the existence of hyperbolic affine spheres thus guarantees many solutions to \eqref{stressenergy2} when $k = 3$, moreover for which $h$ is a complete Riemannian metric.
\end{example}
Theorem \ref{ahstheorem} shows that on a compact manifold a solution of a particular case of \eqref{stressenergy} is equivalent to the existence of a torsion-free affine connection that is Einstein-like in the sense that its Ricci curvature is a constant multiple of the metric.
Example \ref{affinesphereexample} shows that these conditions are satisfied by the equiaffine metric and affine connection induced on a convex nondegenerate hypersurface in flat affine space (in this case the Codazzi tensor is twice the usual cubic form). Since in this case the affine connection has projectively flat conjugate and there hold the stronger equations \eqref{projectivehiggs}, this shows that \eqref{stressenergy} is a strict relaxation of \eqref{projectivehiggs} in the sense that a solution of \eqref{stressenergy} need not come from a solution of \eqref{projectivehiggs}.
\begin{theorem}\label{ahstheorem}
Let $M$ be a manifold of dimension $n > 2$, and let $h$ be a Riemannian metric on $M$ having Levi-Civita connection $D$ and curvature $\sR_{ijk}\,^{l}$.
\begin{enumerate}
\item\label{ahsfor} Suppose $\om \in \Ga(S^{3}_{0}(\ctm))$ is Codazzi, meaning $D_{[i}\om_{j]kl} = 0$, and $(h, \om)$ is a solution of \eqref{stressenergy} for $\ka \in \rea$ and $c = 1$. The torsion-free affine connection $\nabla^{\pm} = D \pm \om_{ij}\,^{k}$ has the following properties:
\begin{enumerate}
\item\label{ahs1} $\nabla^{\pm}_{[i}h_{j]k} = 0$ and $\nabla^{\pm}_{i}\det h = 0$.
\item\label{ahs2} The curvature $R^{\pm}_{ijk}\,^{l}$ satisfies $R^{+}_{ijkl} = R^{+}_{ijk}\,^{p}h_{pl} = \sR_{ijkl} + 2\om_{pl[i}\om_{j]k}\,^{p} = R^{-}_{ijk}\,^{p}h_{pl} = R^{-}_{ijkl}$. In particular, $R^{+}_{ijkl} = R^{-}_{ijkl}$ has the symmetries of a metric curvature tensor.
\item\label{ahs3} $R^{\pm}_{ij} = R^{\pm}_{pij}\,^{p} = \sR_{ij} - \om_{ip}\,^{q}\om_{jq}\,^{p} = \tfrac{\ka}{n}h_{ij}$ and $R^{\pm} = h^{ij}R^{\pm}_{ij} = \sR_{h} - |\om|^{2} = \ka$ is constant.
\end{enumerate}
\item\label{ahsback}
Suppose $\nabla$ is a torsion-free affine connection satisfying $\nabla_{[i}h_{j]k} = 0$ and $\nabla_{i}\det h = 0$ having self-conjugate curvature $R_{ijk}\,^{l}$ satisfying $R_{ij} = R_{pij}\,^{p} = \tfrac{\ka}{n}h_{ij}$ for some $\ka \in \cinf(M)$. Then the difference tensor $\om_{ij}\,^{k} = \nabla - D$ satisfies $\om_{ijk} = \om_{ij}\,^{p}h_{pk} \in \Ga(S^{3}_{0}(\ctm))$, $D_{[i}\om_{j]kl} = 0$, and $\ka = \sR_{h} - |\om|^{2}$ is constant, and the pair $(h, \om)$ solves \eqref{stressenergy} with constants $\ka$ and $c = 1$.
\end{enumerate}
\end{theorem}
\begin{proof}
First suppose the conditions of \eqref{ahsfor}.
Claims \eqref{ahs1}-\eqref{ahs3} follow from straightforward computations as follows. By the definition of $\nabla^{\pm}$ and the symmetry of $\om_{ijk}$, $\nabla^{\pm}_{i}h_{jk} = \mp 2 \om_{ijk}$. Antisymmetrizing yields $\nabla^{\pm}_{[i}h_{j]k} = 0$. Because $\om_{ijk}$ is trace-free, $\nabla^{\pm}_{i}\det h = h^{pq}\nabla^{\pm}_{i}h_{pq} = \mp 2\om_{ip}\,^{p} = 0$. This proves \eqref{ahs1}. The curvature of $\nabla^{\pm}$ satisfies
\begin{align}
R^{\pm}_{ijk}\,^{l} = \sR_{ijk}\,^{l} \pm 2D_{[i}\om_{j]k}\,^{l} + 2\om_{p[i}\,^{l}\om_{j]k}\,^{p} = \sR_{ijk}\,^{l} + 2\om_{p[i}\,^{l}\om_{j]k}\,^{p},
\end{align}
the last equality because $D_{[i}\om_{j]kl} = 0$. This shows \eqref{ahs2}, and \eqref{ahs3} follows by taking traces; the constancy of $\ka$ was assumed and is equivalent to the constancy of $R^{\pm} = \ka$.
Now suppose $\nabla$ is as in \eqref{ahsback}. Define $\om_{ijk} = \om_{ij}\,^{p}h_{pk}$ by $\nabla - D = \om_{ij}\,^{k}$. Because $\nabla$ and $D$ are torsion-free, $\om_{[ij]k} = 0$. Then $\nabla_{i}h_{jk} = -2\om_{i(jk)}$, so $0 = \nabla_{[i}h_{j]k} = -\om_{[ij]k} - \om_{k[ij]} = -\om_{k[ij]}$, showing that $\om_{ijk} = \om_{(ijk)}$ and $\nabla_{i}h_{jk} = -2\om_{ijk}$. Similarly, $0 = \nabla_{i}\det h= h^{pq}\nabla_{i}h_{pq} = -\om_{ip}\,^{p}$, so $\om_{ijk} \in \Ga(S^{3}_{0}(\ctm))$. The connection $\bnabla$ conjugate to $\nabla$ is defined by $\bnabla = \nabla + h^{kp}\nabla_{i}h_{jp} = D + \om_{ij}\,^{k} - 2\om_{ij}\,^{k} = D- \om_{ij}\,^{k}$, so $\nabla$ and $\bnabla$ have the forms $\nabla^{\pm} = D \pm \om_{ij}\,^{k}$.
The curvature $R^{\pm}_{ijk}\,^{l}$ of $\nabla^{\pm}$ satisfies
\begin{align}
R^{\pm}_{ijkl} = R^{\pm}_{ijk}\,^{p}h_{pl} = \sR_{ijkl} \pm 2D_{[i}\om_{j]kl} + 2\om_{pl[i}\om_{j]k}\,^{p},
\end{align}
so that $\nabla = \nabla^{+}$ has self-conjugate curvature if and only if $D_{[i}\om_{j]kl} = 0$. In this case $\tfrac{\ka}{n} h_{ij} = R_{ij} = R_{pij}\,^{p} = \sR_{ij} - \om_{ip}\,^{q}\om_{jq}\,^{p}$ and $\ka = h^{ij}R_{ij} = \sR_{h} - |\om|^{2}$. Using the differential Bianchi identity, $D_{[i}\om_{j]kl} = 0$, and $\om_{ip}\,^{p} = 0$ there results
\begin{align}
\begin{split}
nD_{i}\ka & = D_{i}(\sR - |\om|^{2}) = 2D^{p}\sR_{ip} - 2\om^{abc}D_{i}\om_{abc} = 2D^{p}\sR_{ip} - 2\om^{abc}D_{a}\om_{bci}\\
& = 2D^{p}\sR_{ip} - 2D^{a}(\om_{a}\,^{bc}\om_{bci}) + 2\om_{bci}D^{a}\om_{a}\,^{bc}
= 2D^{p}(\sR_{ip} - \om_{ia}\,^{b}\om_{pb}\,^{a}) = 2D^{p}R^{\pm}_{ip} = 2D_{i}\ka.
\end{split}
\end{align}
Since $n > 2$ this implies $\ka = \sR_{h} - |\om|^{2}$ is constant.
\end{proof}
\begin{example}\label{kahlerexample}
Let $(N, \Om)$ be a $2n$-dimensional symplectic manifold and let $i:M \to N$ be a Lagrangian immersion. Tensors on $M$ are labeled with lowercase Latin indices and tensors on $N$ are labeled with uppercase Latin indices.
A \emph{(para/pseudo)-Kähler structure} is a triple $(G, A, \Omega)$ comprising a (pseudo-)Riemannian metric $G_{IJ}$, an integrable (para)complex structure $A_{I}\,^{J}$, and a symplectic form $\Omega_{IJ}$, which are \emph{compatible} in the sense that
\begin{align}\label{aphcompatibility}
&A_{I}\,^{P}A_{P}\,^{J} = \ep\delta_{I}\,^{J}, &\Omega_{IJ} = A_{I}\,^{P}G_{PJ},& &A_{I}\,^{P}\Omega_{PJ} = \ep G_{IJ},& &\Omega_{IP}G^{JP} = A_{I}\,^{J},
\end{align}
where, when $\ep = 1$ the qualifier \emph{para} applies, and when $\ep = -1$ the qualifier \emph{pseudo} is used if $G_{IJ}$ is not Riemannian signature.
A (para/pseudo)-Kähler manifold $(N, G, A)$ has \emph{constant (para)-holomorphic sectional curvature $4c$} if its Levi-Civita connection $\hnabla$ has curvature $\hat{R}_{IJK}\,^{L}$ of the form
\begin{align}\label{chsc}
\hat{R}_{IJK}\,^{L}
= 2c\left( \delta_{[I}\,^{L}G_{J]K}- \ep A_{[I}\,^{L}\Omega_{J]K} + \ep\Omega_{IJ}A_{K}\,^{L}\right).
\end{align}
If $\dim N = 2n$, then the Ricci and scalar curvatures are $\hR_{IJ} = 2c(n+1)G_{IJ}$ and $\hR = 4cn(n+1)$.
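As a verification (not needed in the sequel), tracing \eqref{chsc} over $I$ and $L$, using $A_{P}\,^{P} = 0$ and \eqref{aphcompatibility}, gives
\begin{align*}
\hR_{JK} = \hat{R}_{PJK}\,^{P} = 2c\left(\tfrac{2n-1}{2} + \tfrac{\ep^{2}}{2} + \ep^{2}\right)G_{JK} = 2c(n+1)G_{JK},
\end{align*}
since $\ep^{2} = 1$, and contracting with $G^{JK}$ over the $2n$-dimensional $N$ gives $\hR = 4cn(n+1)$.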
An immersion $i:M \to N$ into a (para/pseudo)-Kähler manifold $(N, \Omega, A, G)$ is \emph{nondegenerate} if the induced tensor $h = i^{\ast}(G)$ is nondegenerate.
Let $(N, G, A, \Omega)$ be a $2n$-dimensional (para/pseudo)-Kähler manifold with canonical (Levi-Civita) connection $\hnabla$ and let $i:M \to N$ be a nondegenerate Lagrangian immersion with second fundamental form $\Pi(X, Y)$ equal to the projection of $\hnabla_{X}Ti(Y)$ onto the normal bundle of $M$. Define $\Pi_{ijk} = \Pi_{(ijk)} = \Pi^{\hnabla}_{ij}\,^{Q}\Omega_{Qk}$ on $M$, so
$\Pi(X, Y, Z) = \Omega(\hnabla_{X}Ti(Y), Ti(Z))$. The tensor $\Pi$ is symmetric because $\hnabla$ is torsion-free and the immersion is Lagrangian.
Let $D$ be the Levi-Civita connection of the metric $h_{ij} = i^{\ast}(G)_{ij}$ on $M$. Tensors defined on $M$ are raised and lowered using $h_{ij}$ and $h^{ij}$.
Let $\hat{R}_{IJK}\,^{L}$ be the curvature of $\hnabla$ and let $\sR_{ijk}\,^{l}$ be the curvature of $D$. Define $\Pis_{ij}\,^{k} = h^{kp}\Pi_{ijp}$. Note that $\Pis_{ij}\,^{k}$ is not the second fundamental form as such, as its upper index is twisted by the (para)-complex structure.
Let $P \in \Ga(\eno(i^{\ast}(TN)))$ be projection onto $Ti(TM)$ along $ATi(TM)$. It is claimed that, for $X, Y \in \Ga(TM)$ there hold
\begin{align}\label{pksplitting}
\begin{aligned}
&Ti(D_{X}Y) = P\hnabla_{X}Ti(Y),\\
&\hnabla_{X}Ti(Y) = Ti(D_{X}Y) + \ep ATi(\Pis(X, Y)),&&
\hnabla_{X}ATi(Y) = ATi(D_{X}Y) + Ti(\Pis(X, Y)).
\end{aligned}
\end{align}
Because $i$ is nondegenerate $Ti(TM)$ and $ATi(TM)$ are transverse, so $P$ is well-defined. By definition of the second fundamental form there is induced on $M$ a torsion-free connection $D$ such that $\hnabla_{X}Ti(Y) = Ti(D_{X}Y) + \Pi(X, Y)$ where $\Pi(X, Y)$ is the projection of $\hnabla_{X}Ti(Y)$ onto $ATi(TM)$ along $Ti(TM)$. In particular, $Ti(D_{X}Y) = P\hnabla_{X}Ti(Y)$ by definition of $D$. Then
\begin{align}
\begin{split}
(D_{X}h)(Y, Z) & = XG(Ti(Y), Ti(Z)) - G(Ti(D_{X}Y), Ti(Z) ) - G(Ti(Y), Ti(D_{X}Z)) \\&= (\hnabla_{X}G)(Ti(Y), Ti(Z)) + G(\Pi(X, Y), Ti(Z)) + G(Ti(Y), \Pi(X, Z)) = 0,
\end{split}
\end{align}
shows that $D$ is the Levi-Civita connection of $h$. By the definition of $\Pis$ there holds
\begin{align}
\begin{split}
\Om(\Pi(X, Y), Ti(Z)) &= \Om(\hnabla_{X}Ti(Y), Ti(Z)) = \Pi(X, Y, Z)= h(\Pis(X, Y), Z) \\&= G(Ti(\Pis(X, Y)), Ti(Z)) = \ep\Om(ATi(\Pis(X, Y)), Ti(Z)),
\end{split}
\end{align}
for all $X, Y, Z \in \Ga(TM)$, and by the nondegeneracy of $\Om$ this shows $\Pi(X, Y) = \ep ATi(\Pis(X, Y))$, so that $\hnabla_{X}Ti(Y) = Ti(D_{X}Y) + \ep ATi(\Pis(X, Y))$. The identity $\hnabla_{X}ATi(Y) = ATi(D_{X}Y) + Ti(\Pis(X, Y))$ follows because $A$ is $\hnabla$-parallel.
The Lagrangian immersion has \emph{mean curvature zero} if $\Pi_{ip}\,^{p} = 0$.
\begin{lemma}\label{constantsectlemma}
Let $i:M \to N$ be a nondegenerate mean curvature zero Lagrangian immersion of the $n$-dimensional manifold $M$ in the $2n$-dimensional (para/pseudo-)Kähler manifold $(N, G, \Om, A)$ with constant (para)-holomorphic curvature $4\hat{c}$. Let $h_{ij} = i^{\ast}(G)_{ij}$ be the induced metric. Then $(h, \Pi)$ solves \eqref{projectivehiggs} with $c = \ep$ and $\ka = \hat{c}n(n-1)$.
\end{lemma}
\begin{proof}
From \eqref{pksplitting} it follows that there are tensors $\sT_{ijk}\,^{l}$ and $\sN_{ijk}\,^{l}$ on $M$ having the algebraic symmetries of a curvature tensor of metric type and such that
\begin{align}\label{stntdefined}
\hat{R}(X, Y)Ti(Z) = Ti(\sT(X, Y)Z) + \ep ATi(\sN(X, Y)Z)
\end{align}
for all $X, Y, Z \in \Ga(TM)$. Straightforward computations using \eqref{pksplitting} show that
\begin{align}\label{kic}
&\sT_{ijk}\,^{l} = \sR_{ijk}\,^{l} - 2\ep (\Pi \kwedge \Pi)_{ijkl},&
&\sN_{ijk}\,^{l} = 2D_{[i}\Pi_{j]k}\,^{l} = 2\klie(\Pi).
\end{align}
Let $\sT_{ij} = \sT_{pij}\,^{p}$, $\sN_{ij} = \sN_{pij}\,^{p}$, and $\sT = \sT_{p}\,^{p}$; because $\Pi_{ijk}$ is trace-free, $\sN_{p}\,^{p} = 0$.
Tracing \eqref{kic} yields
\begin{align}
\label{gic2b}&\sT_{ij} = \sR_{ij} -\ep\rictr(\Pi \kwedge \Pi)_{ij} , & &\sT = \sR - \ep|\Pi|^{2}_{h},&&\sN_{ij} = \div(\Pi)_{ij} .
\end{align}
Let $\hat{R}_{IJKL} = \hat{R}_{IJK}\,^{P}G_{PL}$. Suppressing notation indicating the differential of $i$, it follows from \eqref{stntdefined} that $\sT_{ijkl} = \hat{R}_{ijkl}$ and $\sN_{ijkl} = A_{i}\,^{P}\hat{R}_{Pjkl}$. If $G$ has constant (para)-holomorphic sectional curvature, it follows from \eqref{chsc} that $\hat{R}_{IJKL} = 2\hat{c}(G_{L[I}G_{J]K} + \ep\Om_{L[I}\Om_{J]K} + \ep \Om_{IJ}\Om_{KL})$ and $A_{I}\,^{P}\hat{R}_{PJKL} = 2\hat{c}(\Om_{L[I}G_{J]K} + G_{K[I}\Om_{J]L} + 2G_{IJ}\Om_{KL})$. Pulling these back to $M$ via $i$ yields $\sT_{ijkl} = \hat{R}_{ijkl} =2\hat{c}h_{l[i}h_{j]k}$ and $\sN_{ijkl} = A_{i}\,^{P}\hat{R}_{Pjkl} = 0$. The remaining identities in
\begin{align}\label{stsnconst}
&\sN_{ijk}\,^{l} = 0, &&\sT_{ijk}\,^{l} = 2\hat{c}\delta_{[i}\,^{l}h_{j]k},& &\sT_{ij} = \hat{c}(n-1)h_{ij}, && \sT = \hat{c}n(n-1).
\end{align}
are obtained by taking traces. Substituting \eqref{stsnconst} in \eqref{kic} and \eqref{gic2b} shows that $\Pi_{ijk} \in \ker \klie \cap \ker \div$ and that the curvature $\sR_{ijk}\,^{l}$ of the Levi-Civita connection $D$ of $h_{ij}$ satisfies
\begin{align}
\sR_{ijkl} & = -\hat{c}(h \kwedge h)_{ijkl} + \ep (\Pi \kwedge \Pi)_{ijkl},
\end{align}
and so also $\sR_{ij} = \hat{c}(n-1)h_{ij} + \ep \rictr(\Pi \kwedge \Pi)_{ij}$ and $\sR = \hat{c}n(n-1) + \ep |\Pi|^{2}_{h}$.
\end{proof}
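For instance (recorded only as an illustration of the hypotheses), every special Lagrangian submanifold of flat $\cx^{n}$ (so $\hat{c} = 0$ and $\ep = -1$) is a mean curvature zero Lagrangian immersion, so Lemma \ref{constantsectlemma} yields solutions of \eqref{projectivehiggs} with $c = -1$ and $\ka = 0$.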
\end{example}
There seems to be no general existence result already known for the case $k > 3$.
\begin{example}
Let $G$ be a connected compact simple Lie group of dimension greater than $3$ with Lie algebra $\g$, let $h_{ij} = -B_{ij}$ be the bi-invariant metric determined by the negative of the Killing form of $\g$, and note that $h_{ij}$ is Einstein with Ricci curvature $\sR_{ij} = \tfrac{1}{4}h_{ij}$.
Let $\om_{i_{1}\dots i_{k}} \in S^{k}(\g^{\ast})$ be the complete polarization of a homogeneous degree $k$ polynomial $P$ invariant under the adjoint action of $G$ on $\g$. More precisely, suppose that $P$ is one of a set of homogeneous generators of the ring of invariant polynomials on $\g$ and that $k \geq 3$. If $\g$ has rank $l$ then the ring of invariant polynomials on $\g$ is generated by $l$ algebraically independent homogeneous elements (see e.g. \cite[section VIII.$8$]{Bourbaki-lie}). The degrees $2 = u_{1} < \dots < u_{l}$ of the homogeneous generators are given in terms of the exponents $m_{1}< \dots < m_{l}$ of the Weyl group $W$ of $\g$ by $u_{i} = m_{i} + 1$, and satisfy $u_{1}\cdot\dots \cdot u_{l} = |W|$ and $2(u_{1} + \dots + u_{l}) = \dim \g + l$.
Since $G$ acts on $\g$ orthogonally, the $h$-Laplacian is invariant under the $G$-action, so $\lap_{h}P$ is again an invariant polynomial. Let $E$ be the invariant polynomial corresponding to $h$. The harmonic part $Q$ of $P$ is obtained by subtracting from $P$ a linear combination of terms of the form $E^{s}\lap_{h}^{s}P$ ($s > 0$). Since each of these terms is $G$-invariant, so is $Q$. Since the homogeneous generators of the ring of invariant polynomials are algebraically independent, it cannot be that $P$ is a linear combination of powers of $E$, so $Q$ is nonzero. It follows that it may be supposed from the beginning that $P$ is $\lap_{h}$-harmonic, or, equivalently, that $\om_{i_{1}\dots i_{k}}$ is trace-free. In particular, this shows that on $\g$ there exists a $\lap_{h}$-harmonic homogeneous $G$-invariant polynomial of degree at least $3$.
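For instance (an illustration only; the normalization of the invariant is immaterial), for $\g = \su(3)$, which has rank $2$ and homogeneous generators of degrees $u_{1} = 2$ and $u_{2} = 3$, the cubic generator can be taken to be $P(x) = d_{abc}x^{a}x^{b}x^{c}$, where $d_{abc}$ is the totally symmetric tensor determined by the anticommutators of the Gell-Mann matrices; since $d_{ab}\,^{b} = 0$, this $P$ is already $\lap_{h}$-harmonic, so its complete polarization is a trace-free element of $S^{3}(\g^{\ast})$.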
\begin{lemma}\label{stressenergyexamplelemma}
Let $G$ be a connected compact simple Lie group of dimension greater than $3$ with Lie algebra $\g$, let $h_{ij} = -B_{ij}$ be the bi-invariant metric on $G$ determined by the negative of the Killing form $B_{ij}$ of $\g$, and let $D$ be its Levi-Civita connection. Suppose $k \geq 3$ and let $\om_{i_{1}\dots i_{k}} \in S^{k}(\g^{\ast})$ be the complete polarization of a $\lap_{h}$-harmonic homogeneous $G$-invariant polynomial $P$ of degree $k$.
The pair $(h, \om)$ solves the equations \eqref{stressenergy2} on $G$.
\end{lemma}
\begin{proof}
The invariance of $P$ means that $0 = kc_{i(i_{1}}\,^{p}\om_{i_{2}\dots i_{k})p} = -2D_{i}\om_{i_{1}\dots i_{k}}$, so that $\om$ is parallel, and so in the kernel of $\culap_{-1}$ by \eqref{klieweitzenbock}. Let $\si_{ij} = \om_{ii_{1}\dots i_{k-1}}\om_{j}\,^{i_{1}\dots i_{k-1}}$. Then
\begin{align}
\begin{split}
c_{ij}\,^{p}\si_{pk} & = -c_{ji}\,^{p} \om_{pi_{1}\dots i_{k-1}}\om_{k}\,^{i_{1}\dots i_{k-1}} = (k-1)c_{j(i_{1}}\,^{p}\om_{i_{2}\dots i_{k-1})pi}\om_{k}\,^{i_{1}\dots i_{k-1}}\\
& = (k-1)c_{j(p}\,^{i_{1}}\om_{i_{2}\dots i_{k-1})ki_{1}}\om_{i}\,^{pi_{2}\dots i_{k-1}}
= - c_{jk}\,^{i_{1}}\om_{i_{1}pi_{2}\dots i_{k-1}} \om_{i}\,^{pi_{2}\dots i_{k-1}} = -c_{jk}\,^{p}\si_{ip},
\end{split}
\end{align}
showing that $\si_{ij}$ is an invariant bilinear form. By the simplicity of $\g$ there is a constant $c$ such that $\si_{ij} = ch_{ij}$, and tracing this equality shows that $c = |\om|^{2}_{h}/\dim \g$. It follows that $(h, \om)$ solves the equations \eqref{stressenergy2}.
\end{proof}
\end{example}
\begin{example}
When the metric $h_{ij}$ is flat, the equations \eqref{stressenergy2} admit purely algebraic solutions. In this example $h_{ij}$ is a flat Riemannian metric on an $n$-dimensional vector space $\ste$, although many of the claims make sense in other signatures. Let $x^{i}$ denote the radial Euler vector field. Let $E(x) = |x|^{2}_{h}$. Let $\lap = \lap_{h}$. If $F(x)$ is a function on $\ste$, let $F_{i_{1}\dots i_{k}} = D_{i_{1}}\dots D_{i_{k}}F$. Sometimes it is convenient to write $D^{(k)}F$ for the $k$-fold covariant derivative of $F$ (for example $D^{(2)}F$ is the Hessian of $F$). If $F$ is a polynomial homogeneous of degree $g$ then $x^{i_{1}}\dots x^{i_{j}}F_{i_{1}\dots i_{j}} = g(g-1)\dots(g-j+1)F$. In this case $F_{i_{1}\dots i_{g}}$ is a constant tensor, so parallel.
\begin{lemma}\label{epequivalencelemma}
Let $F$ be an $h$-harmonic polynomial homogeneous of degree $g\geq 2$ on the $n$-dimensional Euclidean vector space $(\ste, h)$. Let $\om_{i_{1}\dots i_{g}} = F_{i_{1}\dots i_{g}}$. The pair $(h, \om)$ solves the equations \eqref{stressenergy2} (with $\ga = 0$) if and only if there is $c \in \rea$ such that $F$ solves any one of the following equivalent equations:
\begin{align}\label{sepoly}
0 & = F_{ip_{1}\dots p_{g-1}}F_{j}\,^{p_{1}\dots p_{g-1}} - ch_{ij},\\
\label{sepoly2}
0 & = D_{i}D_{j}\left(|D^{(g-1)}F|^{2} - cE\right),\\
\label{sepoly3}
0 & = |D^{(g-1)}F|^{2} - cE.
\end{align}
In this case $c = \tfrac{1}{n}|D^{(g)}F|^{2}$.
\end{lemma}
\begin{proof}
The equivalence of \eqref{sepoly} and \eqref{sepoly2} follows from the vanishing of $F_{i_{1}\dots i_{g+1}}$. Because $|D^{(g-1)}F|^{2} - cE$ is a homogeneous quadratic polynomial, its Hessian vanishes if and only if it vanishes identically, so the equations \eqref{sepoly} and \eqref{sepoly2} are equivalent to \eqref{sepoly3}.
\end{proof}
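For a concrete illustration of Lemma \ref{epequivalencelemma} (with $n = 2$, recorded only as a check of the constant), take the harmonic cubic $F = x_{1}^{3} - 3x_{1}x_{2}^{2}$ on $\rea^{2}$, for which $F_{11} = 6x_{1}$, $F_{12} = -6x_{2}$, and $F_{22} = -6x_{1}$, so that
\begin{align*}
|D^{(2)}F|^{2} = 36x_{1}^{2} + 2(36x_{2}^{2}) + 36x_{1}^{2} = 72E,
\end{align*}
while $|D^{(3)}F|^{2} = F_{111}^{2} + 3F_{112}^{2} + 3F_{122}^{2} + F_{222}^{2} = 36 + 108 = 144$, consistent with $c = \tfrac{1}{n}|D^{(3)}F|^{2} = 72$.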
\end{example}
\begin{example}
A hypersurface in a Riemannian space form is \emph{isoparametric} if its principal curvatures are constant. The question of classifying isoparametric hypersurfaces was posed and partially solved by E. Cartan in \cite{Cartan-cubic, Cartan-isoparametric, Cartan-isoparametricconstantcurvature}. See \cite{Chi}, \cite{Siffert-isoparametric}, and \cite{Thorbergsson} and for background. In \cite{Munzner-I, Munzner-II} it is shown that for an isoparametric hypersurface in a constant curvature $(n-1)$-dimensional sphere $\sphere^{n-1} = \{x \in \ste: E(x) = 1\}$:
\begin{itemize}
\item the number $g$ of distinct principal curvatures satisfies $g \in \{1, 2, 3, 4, 6\}$;
\item if the distinct principal curvatures are ordered $\la_{1} > \la_{2} > \dots > \la_{g}$, the multiplicities, $m_{i}$, of the $\la_{i}$ satisfy $m_{i} = m_{i+2}$ (indices modulo $g$), so that there are at most two distinct multiplicities $m_{1}$ and $m_{2}$ (moreover, if $g < 4$ then $m_{1} = m_{2}$ always); and
\item every such hypersurface arises as a level set of the restriction to the sphere of a polynomial $P:\ste \to \rea$ homogeneous of degree $g$ and satisfying the equations
\begin{align}\label{munznerequations}
&|dP|^{2} = g^{2}E^{g - 1},& &\lap P = \tfrac{m_{2} - m_{1}}{2}g^{2}E^{\tfrac{g}{2} - 1}.
\end{align}
\end{itemize}
A polynomial $P$ solving \eqref{munznerequations} is called a \emph{Cartan-Münzner polynomial}. Examples of solutions for which the resulting hypersurfaces are not extrinsically homogeneous are known when $g = 4$; see for example \cite{Ferus-Karcher-Munzner}.
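The case $g = 2$ is recorded here as an elementary illustration (up to the labeling of the multiplicities): for $0 \leq p \leq n - 2$, the quadratic $P(x) = \sum_{i = 1}^{p+1}x_{i}^{2} - \sum_{j = p+2}^{n}x_{j}^{2}$ satisfies $|dP|^{2} = 4E$ and $\lap P = 2(2p + 2 - n)$, so solves \eqref{munznerequations} with $g = 2$ and $m_{2} - m_{1} = 2p + 2 - n$, and its level sets in $\sphere^{n-1}$ are the generalized Clifford tori $\sphere^{p}(r) \times \sphere^{n-p-2}(s)$ with $r^{2} + s^{2} = 1$.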
\begin{theorem}\label{isoparametrictheorem}
Let $P$ be a Cartan-Münzner polynomial homogeneous of degree $g\geq 2$ and having multiplicities $m_{1}$ and $m_{2}$ on the $n$-dimensional Euclidean vector space $(\ste, h)$. Then the trace-free part $\om_{i_{1}\dots i_{g}}$ of $P_{i_{1}\dots i_{g}}$ solves \eqref{sepoly}, so the pair $(h, \om)$ solves the equations \eqref{stressenergy2} (with $\ga = 0$).
\end{theorem}
\begin{proof}
It suffices to show that $P$ solves an equation of the form \eqref{sepoly3}. The following identity is needed. For any polynomial $F$ homogeneous of degree $g$ there holds
\begin{align}
\begin{split}
\label{lapeip}
\lap(E^{i}F) & = 2i(n + 2(g + i - 1))E^{i-1}F + E^{i}\lap F.
\end{split}
\end{align}
In particular, the special case $F = 1$ yields $\lap E^{i} = 2i(n+2(i-1))E^{i-1}$.
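(For completeness, \eqref{lapeip} follows from $\lap(E^{i}F) = (\lap E^{i})F + 2\lb dE^{i}, dF\ra + E^{i}\lap F$, the directly computed $\lap E^{i} = 2i(n+2(i-1))E^{i-1}$, $dE^{i} = 2iE^{i-1}x$, and Euler's identity $x^{p}D_{p}F = gF$, which gives $2\lb dE^{i}, dF\ra = 4igE^{i-1}F$.)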
In the case $m_{1} = m_{2}$, the polynomial $P$ is harmonic and so $\om_{i_{1}\dots i_{g}} = P_{i_{1}\dots i_{g}}$. In this case applying $\lap^{g-2}$ to the first equation of \eqref{munznerequations} and simplifying the result using \eqref{lapeip} yields
\begin{align}
2^{g-2}g(g!)(n+2(g-2))\dots (n+ 2)E = \lap^{g-2}(g^{2}E^{g-1}) = \lap^{g-2}|dP|^{2} = 2^{g-2}|D^{(g-1)}P|^{2}.
\end{align}
Differentiating this yields
\begin{align}
D_{i}D_{j}|D^{(g-1)}P|^{2} = 2g(g!)(n+2(g-2))\dots (n+ 2) h_{ij},
\end{align}
which suffices to show that $P$ solves \eqref{sepoly3}. The argument in the general case is similar, but more involved. Since $m_{1} = m_{2}$ whenever $g < 4$, it can be supposed that $g \in \{4, 6\}$. In particular, $g$ is even. Let $P = \sum_{i = 0}^{g/2}E^{i}Q^{(g-2i)}$ be the Lefschetz decomposition of $P$ into its harmonic components. Here $Q^{(g-2i)}$ is a harmonic polynomial homogeneous of degree $g - 2i$, and the decomposition is uniquely determined. Applying $\lap$ to both sides and using \eqref{lapeip} yields
\begin{align}
\tfrac{m_{2} - m_{1}}{2}g^{2}E^{\tfrac{g}{2} - 1} = \lap P = \sum_{i = 1}^{g/2}2i(n+2(g - i -1))E^{i-1}Q^{(g-2i)}.
\end{align}
By the uniqueness of the Lefschetz decomposition, this implies $Q^{(g-2i)} = 0$ if $0 < i < g/2$. Hence
\begin{align}\label{pqred1}
P = Q + \tfrac{(m_{2} - m_{1})g}{2(n+g - 2)}E^{g/2}
\end{align}
where $Q$ is a harmonic polynomial homogeneous of degree $g$. Note that the desired tensor $\om_{i_{1}\dots i_{g}}$ equals $Q_{i_{1}\dots i_{g}}$. Calculating the differential of \eqref{pqred1} using the homogeneity of $Q$ yields
\begin{align}
g^{2}E^{g-1} = |dP|^{2} = |dQ|^{2} + \tfrac{(m_{2} - m_{1})g^{3}}{(n+g - 2)}E^{g/2 -1}Q + \tfrac{(m_{2} - m_{1})^{2}g^{4}}{4(n+g - 2)^{2}}E^{g-1},
\end{align}
so that
\begin{align}\label{pqred2}
0 = |dQ|^{2} + g^{3}\tfrac{(m_{2} - m_{1})}{(n+g - 2)}E^{g/2 -1}Q + g^{2}\left(\tfrac{(m_{2} - m_{1})^{2}g^{2}}{4(n+g - 2)^{2}} - 1\right)E^{g-1}.
\end{align}
Applying $\lap^{g-2}$ to both sides of \eqref{pqred2} and simplifying using \eqref{lapeip} yields
\begin{align}
0 = 2^{g-2}\left(|D^{(g-1)}Q|^{2} + \left(\tfrac{(m_{2} - m_{1})^{2}g^{2}}{4(n+g - 2)^{2}} - 1\right)g(g!)(n+2(g-2))(n+2(g-3))\dots (n+2)E\right).
\end{align}
Hence
\begin{align}
Q_{ip_{1}\dots p_{g-1}}Q_{j}\,^{p_{1}\dots p_{g-1}} = \left(1 - \tfrac{(m_{2} - m_{1})^{2}g^{2}}{4(n+g - 2)^{2}}\right)g(g!)(n+2(g-2))(n+2(g-3))\dots (n+2)h_{ij}.
\end{align}
Because $\om_{i_{1}\dots i_{g}}$ is parallel, this suffices to prove the claim.
\end{proof}
\end{example}
\begin{example}
Let $E = \{e_{1}, \dots, e_{n}\}$ be an enumeration of the edge set of a finite $k$-regular graph with vertex set $V$. The partial Steiner system $\B$ determined by the incidence of edges in the given graph is the collection of $k$-element subsets (blocks) of $\bar{n} = \{1, \dots, n\}$ such that $I = \{i_{1}, \dots, i_{k}\} \in \B$ if and only if the edges $e_{i_{1}}, \dots, e_{i_{k}}$ are incident at some vertex in $V$. Let $\ste$ be the $n$-dimensional real vector space generated by $E$ and equip $\ste$ with the flat Riemannian metric $h$ with respect to which $E$ is an ordered orthonormal basis. Let $x_{1}, \dots, x_{n}$ be the coordinates of $x \in \ste$ with respect to the ordered basis $E$. The quadratic form $Q(x)$ determined by $h$ is $Q(x) = \sum_{i = 1}^{n}x_{i}^{2}$. Associate with $I \in \B$ the monomial $x_{I} = x_{i_{1}}\dots x_{i_{k}}$. Let $\ep \in \{\pm 1\}^{\B}$, so that $\ep_{I} \in \{\pm 1\}$ for each $I \in \B$.
\begin{lemma}\label{graphpolynomiallemma}
Let $\B$ be the partial Steiner system determined by the incidence of edges in a finite $k$-regular graph with $k \geq 3$. For the degree $k$ homogeneous polynomial $P(x) = \sum_{I \in \B}\ep_{I}x_{I}$
associated with a $k$-regular graph and a choice of signs $\ep \in \{\pm 1\}^{\B}$, the pair $(h, D^{(k)}P)$ solves \eqref{stressenergy2}.
\end{lemma}
\begin{proof}
The component $\tfrac{\pr^{k-1}P}{\pr x_{i_{1}}\dots \pr x_{i_{k-1}}}$ is nonzero if and only if $\{i_{1}, \dots, i_{k-1}\}$ is contained in some block $I = \{i_{1}, \dots, i_{k}\}$ in $\B$ (because any $k-1 \geq 2$ edges meet at most one vertex, there is at most one such block). In this case $\tfrac{\pr^{k-1}P}{\pr x_{i_{1}}\dots \pr x_{i_{k-1}}} = \ep_{I}x_{i_{k}}$. Since there are $(k-1)!$ orderings of the distinct indices $i_{1}, \dots, i_{k-1}$ and since the index of a given edge appears in exactly two blocks there results $|D^{(k-1)}P|^{2}_{h} = 2(k-1)!Q$. Because no variable $x_{i}$ appears in any monomial of $P$ with a power higher than one, $P$ is $h$-harmonic. By Lemma \ref{epequivalencelemma} this shows that the pair $(h, D^{(k)}P)$ solves \eqref{stressenergy2}.
\end{proof}
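For instance (a minimal illustration), for the complete graph $K_{4}$, which is $3$-regular with six edges labeled $1 = ab$, $2 = ac$, $3 = ad$, $4 = bc$, $5 = bd$, $6 = cd$, the blocks are the four triples of edges incident at a common vertex, and, taking all $\ep_{I} = 1$,
\begin{align*}
P = x_{1}x_{2}x_{3} + x_{1}x_{4}x_{5} + x_{2}x_{4}x_{6} + x_{3}x_{5}x_{6},
\end{align*}
for which a direct computation confirms $|D^{(2)}P|^{2}_{h} = 2(2!)Q = 4Q$, each edge index occurring in exactly two blocks.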
\end{example}
\section{Refined Kato inequalities for trace-free symmetric tensors}\label{katosection}
The inequality \eqref{kato} of Lemma \ref{katolemma} generalizes the estimate for the second fundamental form of a minimal hypersurface proved in \cite{Schoen-Simon-Yau}. With a $1$ in place of $\tfrac{n+k-2}{n+2(k-1)}$, it follows from the Cauchy-Schwarz inequality, and is known as a \emph{Kato inequality}. The estimates \eqref{kato}-\eqref{kato3} are \emph{refined Kato inequalities} in the sense of \cite{Calderbank-Gauduchon-Herzlich} and \cite{Branson-kato}, and can be deduced from the results in either of those papers. In particular the results of \cite[Section $6$]{Calderbank-Gauduchon-Herzlich} include Lemma \ref{katolemma}, and the discussion at the very end of \cite[Section $6$]{Calderbank-Gauduchon-Herzlich} gives the explicit constants of \eqref{kato}-\eqref{kato3} for the cases $k = 1, 2$.
(The $k = 2$ case of \eqref{kato} had earlier been stated in \cite[section $4$]{Bourguignon-magic}.) To keep the exposition self-contained, here there is given a direct proof of \eqref{kato}-\eqref{kato3} following the general procedure described in the introduction of \cite{Branson-kato}, and not utilizing general representation theoretic machinery.
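For orientation, when $k = 1$ the inequality \eqref{kato} specializes to the classical refined Kato inequality $|d|\phi||^{2} \leq \tfrac{n-1}{n}|D\phi|^{2}$ for a closed and coclosed one-form, while for $k = 2$ the constants in \eqref{kato}, \eqref{kato2}, and \eqref{kato3} are $\tfrac{n}{n+2}$, $\tfrac{2}{3}$, and $\tfrac{2}{n+2}$, respectively.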
\begin{lemma}\label{katolemma}
Let $M$ be an $n$-dimensional manifold with a Riemannian metric $h$ having Levi-Civita connection $D$, and let $\phi_{i_{1}\dots i_{k}} \in \Ga(\symkt)$, where $k \geq 1$. If $\phi \in \Ga(\symkt)\cap \ker \klie \cap \ker \div$ (that is $D_{[i}\phi_{j]i_{1}\dots i_{k-1}} = 0$) then where $|\phi|^{2} \neq 0$ there holds
\begin{align}\label{kato}
|d|\phi||^{2} \leq \tfrac{n+k-2}{n+2(k-1)}|D\phi|^{2}.
\end{align}
If $\phi \in \Ga(\symkt)\cap \ker \clie \cap \ker \div$ (that is $D_{(i_{1}}\phi_{i_{2}\dots i_{k+1})} = 0$) then where $|\phi|^{2} \neq 0$ there holds
\begin{align}\label{kato2}
|d|\phi||^{2} \leq \tfrac{k}{k+1}|D\phi|^{2}.
\end{align}
If $\phi \in \Ga(\symkt)\cap \ker \clie \cap \ker \klie$ (that is $D\phi$ is pure trace) then where $|\phi|^{2} \neq 0$ there holds
\begin{align}\label{kato3}
|d|\phi||^{2} \leq \tfrac{k}{n+2(k-1)}|D\phi|^{2}.
\end{align}
\end{lemma}
\begin{proof}
Write $\sbl_{\clie}(Z)(\phi)$ for the symbol of $\clie$ applied to the vector $Z^{i}$ and $\phi \in \Ga(\symkt)$, and similarly for $\klie$ and $\div$. Write $(i(Z)\phi)_{i_{1}\dots i_{k-1}} = Z^{p}\phi_{pi_{1}\dots i_{k-1}}$ and suppose $Z^{i}$ has unit norm. When $k = 1$ and $n = 2$ the coefficient of the pure trace terms in \eqref{kliesymbolnorm} should be understood in a limiting sense. The nonnegativity of $|\sbl_{\klie}(Z)(\phi)|^{2}$ in \eqref{kliesymbolnorm} yields
\begin{align}\label{klieinequality}
\tfrac{n+2(k-2)}{n+k-3}|i(Z)\phi|^{2} \leq |\phi|^{2}.
\end{align}
Together \eqref{cliesymbolnorm} and \eqref{klieinequality} give
\begin{align}\label{sblclie}
|\sbl_{\clie}(Z)(\phi)|^{2} \leq \tfrac{1}{k+1}|\phi|^{2} + \tfrac{k(n+k-3)}{(k+1)(n+2(k-1))}|\phi|^{2} = \tfrac{n+k-2}{n+2(k-1)}|\phi|^{2}.
\end{align}
Contracting \eqref{domkl} with $\sbl_{D}(Z)(\om)$ and using the Cauchy-Schwarz inequality shows
\begin{align}\label{katoineq1}
\begin{split}
&\tfrac{1}{2}Z^{i}d_{i}|\phi|^{2} = Z^{i}\phi^{i_{1}\dots i_{k}}D_{i}\phi_{i_{1}\dots i_{k}} = \lb \sbl_{D}(Z)(\phi), D\phi\ra \\
& = \lb\sbl_{\clie}(Z)(\phi), \clie(\phi)\ra + \tfrac{2k}{k+1} \lb \sbl_{\klie}(Z)(\phi), \klie(\phi)\ra + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}\lb i(Z)\phi, \div(\phi)\ra,\\
& \leq |\sbl_{\clie}(Z)(\phi)||\clie(\phi)| + \tfrac{2k}{k+1}| \sbl_{\klie}(Z)(\phi)|| \klie(\phi)| + \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}|i(Z)\phi|| \div(\phi)|.
\end{split}
\end{align}
Suppose $\phi \in \Ga(\symkt) \cap \ker \klie\cap \ker \div$. By \eqref{normdom}, $|D\phi|^{2} = |\clie(\phi)|^{2}$. Substituting this and \eqref{sblclie} into \eqref{katoineq1} gives
\begin{align}
\begin{split}
|\phi|^{2}|Z^{i}d_{i}|\phi||^{2} = \tfrac{1}{4}|Z^{i}d_{i}|\phi|^{2}|^{2} &\leq |\sbl_{\clie}(Z)(\phi)|^{2}|\clie(\phi)|^{2}
= |\sbl_{\clie}(Z)(\phi)|^{2}|D\phi|^{2} \leq \tfrac{n+k-2}{n+2(k-1)}|\phi|^{2}|D\phi|^{2}.
\end{split}
\end{align}
This holds for all unit norm $Z^{i}$, so shows \eqref{kato}.
Suppose $\phi \in \Ga(\symkt) \cap \ker \clie\cap \ker \div$. By \eqref{normdom}, $|D\phi|^{2} = \tfrac{2k}{k+1}|\klie(\phi)|^{2}$, and, by \eqref{kliesymbolnorm}, $2|\sbl_{\klie}(Z)(\phi)|^{2} \leq |\phi|^{2}$. In \eqref{katoineq1} these give
\begin{align}
\begin{split}
|\phi|^{2}|Z^{i}d_{i}|\phi||^{2} = \tfrac{1}{4}|Z^{i}d_{i}|\phi|^{2}|^{2} &\leq (\tfrac{2k}{k+1})^{2}|\sbl_{\klie}(Z)(\phi)|^{2}|\klie(\phi)|^{2} \\& = \tfrac{2k}{k+1}|\sbl_{\klie}(Z)(\phi)|^{2}|D\phi|^{2} \leq \tfrac{k}{k+1}|\phi|^{2}|D\phi|^{2}.
\end{split}
\end{align}
This holds for all unit norm $Z^{i}$, so shows \eqref{kato2}.
Suppose $\phi \in \Ga(\symkt) \cap \ker \clie\cap \ker \klie$. By \eqref{normdom}, $|D\phi|^{2} =\tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}|\div(\phi)|^{2}$. With \eqref{klieinequality} in \eqref{katoineq1} this gives
\begin{align}
\begin{split}
|\phi|^{2}|Z^{i}d_{i}|\phi||^{2}& = \tfrac{1}{4}|Z^{i}d_{i}|\phi|^{2}|^{2} \leq\left(\tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}\right)^{2}|i(Z)\phi|^{2}| \div(\phi)|^{2} \\
&= \tfrac{k(n+2(k-2))}{(n+k-3)(n+2(k-1))}|i(Z)\phi|^{2}|D\phi|^{2} \leq \tfrac{k}{n+2(k-1)}|\phi|^{2}|D\phi|^{2}.
\end{split}
\end{align}
This holds for all unit norm $Z^{i}$, so shows \eqref{kato3}.
\end{proof}
\begin{remark}
When $n = 2$ the inequalities \eqref{kato}, \eqref{kato2}, and \eqref{kato3} are in fact equalities; see section $3$ of \cite{Fox-2dahs}. Since here attention is focused on the case $n > 2$, further discussion is omitted.
\end{remark}
\begin{remark}\label{katoremark}
The proof of Lemma \ref{katolemma} shows that if $\phi \in \Ga(\symkt)$ then the largest eigenvalue $\mu$ of the symmetric two-tensor $\rictr(\phi \kwedge \phi)_{ij}$ satisfies $\mu \leq \tfrac{n+k-3}{n+2(k-2)}|\phi|^{2}$. By \eqref{klieinequality} for any vector field $X^{i}$ there holds $X^{i}X^{j}\rictr(\phi \kwedge \phi)_{ij} = |i(X)\phi|^{2} \leq \tfrac{n+k-3}{n+2(k-2)}|\phi|^{2}|X|^{2}$, which suffices to show the claim. This means $\rictr(\phi \kwedge \phi)_{ij} \leq \tfrac{n+k-3}{n+2(k-2)}|\phi|^{2}h_{ij}$.
\end{remark}
\begin{lemma}\label{swlemma}
Let $h$ be a Riemannian metric on a manifold $M$ of dimension $n > 2$ and let $\om \in \Ga(\symkt)$. Wherever $\om \neq 0$ there hold
\begin{align}
\label{sharplapom}& |\om|^{(n+2(k-1))/(n+k-2)}\lap_{h}|\om|^{(n-2)/(n+k-2)}\geq \tfrac{n-2}{n+k-2}\qR(\om), && \text{if}\,\, \om \in \ker \klie \cap \ker \div,\\
\label{sharplapom2}& |\om|^{(k+1)/k}\lap_{h}|\om|^{(k-1)/k}\geq -\tfrac{2(k-1)}{k+1}\lb \om, \kliea\klie(\om)\ra, && \text{if}\,\, \om \in \ker \clie \cap \ker \div,\\
\label{sharplapom3}& |\om|^{(n+2(k-1))/k}\lap_{h}|\om|^{(2-n)/k}\leq \tfrac{n-2}{n+k-2}\qR(\om),&& \text{if}\,\, \om \in \ker \klie \cap \ker \clie.
\end{align}
\end{lemma}
\begin{proof}
Let $\om \in \Ga(\symkt)$. Wherever $|\om| > 0$ there holds
\begin{align}\label{lapnormla}
\tfrac{1}{2\la}|\om|^{2(1-\la)}\lap_{h}|\om|^{2\la} = \tfrac{1}{2}\lap_{h}|\om|^{2} + 2(\la - 1)|d|\om||^{2}.
\end{align}
Combining \eqref{lapnormla} with \eqref{lapom}, \eqref{lapomdivlie}, \eqref{lapomliediv}, and Lemma \ref{katolemma} yields \eqref{sharplapom}-\eqref{sharplapom3}.
\end{proof}
\begin{remark}
If $h$ is flat and $f \in \cinf(M)$ is harmonic, then $\om_{i_{1}\dots i_{k}}= D_{i_{1}}\dots D_{i_{k}}f \in \Ga(\symkt) \cap \ker \klie \cap \ker \div$. By Lemma \ref{swlemma} the function $|\om|^{p}$ is subharmonic for all $p \geq (n-2)/(n+k-2)$. For the flat Euclidean connection on $\rea^{n}$ and $k = 1$ this is \cite[Theorem $A$]{Stein-Weiss-harmonic}, and for $k >1$ it is \cite[Theorem $1$]{Calderon-Zygmund}. In the opposite direction, \cite[Theorem $2(b)$]{Stein-Weiss} shows that on flat Euclidean space the best $p$ for which $|\om|^{p}$ is subharmonic is $(n-2)/(n+k-2)$, and \cite[Theorem $2(a)$]{Stein-Weiss} shows that on flat Euclidean space, given any section $\om$ of $\symkt$ there is around every point a neighborhood $U$ and a harmonic function $f \in \cinf(U)$ such that on $U$ there holds $\om_{i_{1}\dots i_{k}}= D_{i_{1}}\dots D_{i_{k}}f$.
If $f \in \cinf(M)$ is harmonic then $df \in \ker \klie \cap \ker \div$, and Lemma \ref{swlemma} shows that
\begin{align}
(n-1) |df|^{n/(n-1)}\lap_{h}|df|^{(n-2)/(n-1)}\geq (n-2)\rictr(\sR)(df, df).
\end{align}
If $h$ has nonnegative Ricci curvature it follows that $|df|^{p}$ is subharmonic for any $p \geq (n-2)/(n-1)$.
\end{remark}
\begin{remark}
Suppose $\om_{ij} \in \Ga(S^{2}_{0}(\ctm))\cap \ker \klie \cap \ker \div$. If the sectional curvature is nonnegative then $\qR$ is nonnegative on $S^{2}_{0}(\ctm)$ so $|\om|^{(n-2)/n}$ is subharmonic by \eqref{sharplapom}; if $M$ is compact this means $|\om|$ is constant, and by \eqref{lapomliediv} this implies $\om$ is parallel. Moreover, if the sectional curvature is positive at some point, then $\qR$ is positive on $S^{2}_{0}(\ctm)$ at that point and $\om$ must be identically zero. This recovers Theorem \ref{codazzitensortheorem}.
\begin{theorem}[M. Berger - D. Ebin, \cite{Berger-Ebin}]\label{codazzitensortheorem}
On a compact Riemannian manifold with nonnegative sectional curvature, a $b_{ij} \in \Ga(S^{2}(\ctm))$ such that $D_{[i}b_{j]k} = 0$ and $D_{i}b_{p}\,^{p} = 0$ is parallel; if the sectional curvature is somewhere positive, then $b_{ij}$ is a constant multiple of the metric.
\end{theorem}
\begin{corollary}\label{harmoniccorollary}
The following are equivalent for a Riemannian manifold $(M, h)$ of dimension $n \geq 3$.
\begin{enumerate}
\item\label{hc1} The curvature $\sR_{ijkl}$ of $h$ is harmonic, meaning $D_{p}\sR_{ijk}\,^{p} = 0$.
\item\label{hc2} The trace-free part $\mr{\sRic}_{ij}$ of its Ricci tensor is in $\ker \klie \cap \ker \div$, so is a Codazzi tensor.
\item\label{hc3} Its Ricci tensor is a Codazzi tensor and its scalar curvature is constant.
\end{enumerate}
If $n \geq 4$, then these conditions are equivalent to
\begin{enumerate}
\setcounter{enumi}{3}
\item\label{hc4} The Weyl curvature $\sW_{ijkl}$ of $h$ is harmonic, meaning $D_{p}\sW_{ijk}\,^{p} = 0$.
\end{enumerate}
If $M$ is compact and $h$ satisfies any of these equivalent conditions and has nonnegative sectional curvature that is somewhere positive, then $h$ is Einstein.
\end{corollary}
\begin{proof}
For any Riemannian metric, $\div(\mr{\sRic})_{i} = \tfrac{n-2}{2n}D_{i}\sR$ and
\begin{align}
2\klie(\mr{\sRic})_{ijk} = D_{p}\sR_{ijk}\,^{p} + \tfrac{2}{n(n-1)}h_{k[i}D_{j]}\sR = 2D_{[i}\sRic_{j]k} + \tfrac{2}{n(n-1)}h_{k[i}D_{j]}\sR .
\end{align}
It follows that, if $n > 2$, then $\mr{\sRic}_{ij} \in \ker \klie \cap \ker \div$ if and only if $D_{p}\sR_{ijk}\,^{p} = 0$. In this case $D_{i}\sR = 0$, so $\sRic \in \ker \klie \cap \ker \div$ and $\sR$ is constant. That \eqref{hc3} implies \eqref{hc2} is immediate. When $n > 3$, the identity
\begin{align}
D_{p}\sW_{ijk}\,^{p} = \tfrac{2(n-3)}{n-2}\left(\klie(\mr{\sRic})_{ijk} + \tfrac{n-2}{2n(n-1)}h_{k[i}D_{j]}\sR \right),
\end{align}
implies the equivalence of \eqref{hc3} and \eqref{hc4}. Applying Theorem \ref{codazzitensortheorem} to $\mr{\sRic}_{ij}$ yields the final claim.
\end{proof}
The equivalences of Corollary \ref{harmoniccorollary} are essentially \cite[Lemma $1$]{Derdzinski-harmonic}. The final claim might be new.
\end{remark}
\section{Constraints on solutions}\label{constraintsection}
This section obtains a priori constraints on the growth of solutions of \eqref{projectivehiggs}. This requires estimates of complicated tensorial expressions. The bounds presented are sharp for $k\leq 3$ but could be improved for $k > 3$. It would also be interesting to extend, even partially, such constraints to solutions of \eqref{stressenergy}.
\begin{lemma}
Let $(\ste, h)$ be a Euclidean vector space of dimension $n \geq 2$. For $k \geq 2$ and $\om \in S^{k}_{0}\std$ there hold
\begin{align}
\label{lijnorm}
&\tfrac{n+k-3}{n + 2(k-2)}|\om|^{4}_{h} \geq |\rictr(\om \kwedge \om)|^{2}_{h} \geq \tfrac{1}{n}|\om|^{4}_{h},&\\
\label{lijnormb}
&\tfrac{(n-2)(n+k-2)}{n(n + 2(k-2))}|\om|^{4}_{h} \geq |\tf\rictr(\om \kwedge \om)|^{2}_{h},& \\
\label{lijklnorm}
&4|\om|_{h}^{4} \geq |\om \kwedge \om|^{2}_{h} \geq \tfrac{2}{n(n-1)}|\om|_{h}^{4},&\\
\label{lijklnormb}
&\left(4 - \tfrac{2}{n(n-1)}\right)|\om|_{h}^{4} \geq |\tf(\om \kwedge \om)|^{2}_{h} + \tfrac{4}{n-2}|\mr{\rictr(\om\kwedge \om)}|_{h}^{2} \geq |\tf(\om \kwedge \om)|^{2}_{h}.
\end{align}
\end{lemma}
\begin{proof}
For $x_{i} \in \std$, the nonnegativity of the squared norm of
\begin{align}
x_{[i}\om_{j]i_{1}\dots i_{k-1}} - \tfrac{1}{n+k-3}\sum_{s = 1}^{k-1}h_{i_{s}[i}\om_{j]i_{1}\dots \hat{i}_{s} \dots i_{k-1}p}x^{p},
\end{align}
(where $\hat{i}_{s}$ denotes the omission of the index $i_{s}$) yields
\begin{align}
\begin{split}
0 &\leq 2x^{i}\om^{ji_{1}\dots i_{k-1}}\left(x_{[i}\om_{j]i_{1}\dots i_{k-1}} - \tfrac{1}{n+k-3}\sum_{s = 1}^{k-1} h_{i_{s}[i}\om_{j]i_{1}\dots \hat{i}_{s} \dots i_{k-1}p} x^{p} \right) \\
&= |x|^{2}_{h}|\om|^{2}_{h} - |\imt(x)\om|^{2}_{h} - \tfrac{k-1}{n+k-3}|\imt(x)\om|^{2}_{h}= |x|^{2}_{h}|\om|^{2}_{h} - \tfrac{n + 2(k-2)}{n+k-3}|\imt(x)\om|^{2}_{h},
\end{split}
\end{align}
so that
\begin{align}
\rictr(\om \kwedge \om)_{ij}x^{i}x^{j} = |\imt(x)\om|^{2}_{h} \leq \tfrac{n+k-3}{n+2(k-2)}|x|^{2}_{h}|\om|^{2}_{h}.
\end{align}
This shows $\tfrac{n+k-3}{n+2(k-2)}|x|^{2}_{h}|\om|^{2}_{h}h_{ij} - \rictr(\om \kwedge \om)_{ij}$ is positive semidefinite. Because $\rictr(\om \kwedge \om)_{ij}$ is also positive semidefinite and the endomorphisms $\tfrac{n+k-3}{n+2(k-2)}|\om|^{2}_{h}\delta_{i}\,^{j} - \rictr(\om \kwedge\om)_{i}\,^{j}$ and $\rictr(\om \kwedge \om)_{i}\,^{j}$ commute, contracting with $\rictr(\om \kwedge \om)^{ij}$ yields
\begin{align}\label{tfricnorm}
\tfrac{n+k-3}{n+2(k-2)}|\om|^{4}_{h} \geq |\rictr(\om \kwedge \om)|^{2}_{h} = |\tf\rictr(\om\kwedge \om)|^{2}_{h} + \tfrac{1}{n}|\om|^{4}_{h},
\end{align}
from which the inequalities \eqref{lijnorm} and \eqref{lijnormb} follow.
From \eqref{tfweylnorm} there follows
\begin{align}\label{lijklnorm2}
\begin{split}
|\om \kwedge \om|^{2}_{h} &= |\tf (\om \kwedge \om)|^{2}_{h} + \tfrac{4}{n-2}|\mr{\rictr(\om\kwedge \om)}|_{h}^{2} + \tfrac{2}{n(n-1)}|\om|^{4}_{h}.
\end{split}
\end{align}
Using the nonnegativity of $|\tf (\om \kwedge \om)|^{2}_{h}$ and $|\mr{\rictr(\om\kwedge \om)}|_{h}^{2}$ yields the second inequality in \eqref{lijklnorm}.
There remains to show the leftmost inequality of \eqref{lijklnorm}.
For $\tau_{ij} \in S^{2}_{0}\std$,
\begin{align}
\begin{split}
0 &\leq (\tau_{ij}\om_{klq_{1}\dots q_{k-2}} - \tau_{kl}\om_{ijq_{1}\dots q_{k-2}})(\tau^{ij}\om^{klq_{1}\dots q_{k-2}} - \tau^{kl}\om^{ijq_{1}\dots q_{k-2}})
= 2\left(|\tau|^{2}_{h}|\om|^{2}_{h} - |\imt(\tau)\om|^{2}_{h}\right),
\end{split}
\end{align}
where $(\imt(\tau)\om)_{i_{1}\dots i_{k-2}} = \tau^{pq}\om_{pqi_{1}\dots i_{k-2}}$. This means that the symmetric quadratic form defined on $S^{2}_{0}\std$ by contracting $\tau^{ij}\tau^{kl}$ with the tensor $|\om|^{2}_{h}h_{k(i}h_{j)l} - \om_{ijq_{1}\dots q_{k-2}}\om_{kl}\,^{q_{1}\dots q_{k-2}}$ is positive semidefinite. Since the quadratic form defined on $S^{2}_{0}\std$ by contracting $\tau^{ij}\tau^{kl}$ with $\om_{ijq_{1}\dots q_{k-2}}\om_{kl}\,^{q_{1}\dots q_{k-2}}$ is also positive semidefinite and the corresponding endomorphisms commute it follows that
\begin{align}
0 \leq (|\om|^{2}_{h}h_{k(i}h_{j)l} - \om_{ijq_{1}\dots q_{k-2}}\om_{kl}\,^{q_{1}\dots q_{k-2}})\om^{ij}\,_{p_{1}\dots p_{k-2}}\om^{kl}\,^{p_{1}\dots p_{k-2}}
\end{align}
so that
\begin{align}\label{om4}
\om_{ijq_{1}\dots q_{k-2}}\om_{kl}\,^{q_{1}\dots q_{k-2}}\om^{ij}\,_{p_{1}\dots p_{k-2}}\om^{kl}\,^{p_{1}\dots p_{k-2}} \leq |\om|^{4}_{h}.
\end{align}
Define
\begin{align}
\mt = \om_{j}\,^{kq_{1}\dots q_{k-2}}\om_{k}\,^{ip_{1}\dots p_{k-2}}\om_{i}\,^{l}\,_{q_{1}\dots q_{k-2}}\om_{l}\,^{j}\,_{p_{1}\dots p_{k-2}}.
\end{align}
From
\begin{align}
\begin{split}
0 &\leq 2\om_{k(i}\,^{p_{1}\dots p_{k-2}}\om_{j)lp_{1}\dots p_{k-2}}\om^{k(i}\,_{q_{1}\dots q_{k-2}}\om^{j)lq_{1}\dots q_{k-2}}\\
& = \om_{kip_{1}\dots p_{k-2}}\om_{jl}\,^{p_{1}\dots p_{k-2}}\om^{ki}\,_{q_{1}\dots q_{k-2}}\om^{jlq_{1}\dots q_{k-2}} + \mt
\end{split}
\end{align}
and
\begin{align}
\begin{split}
|\om \kwedge \om|^{2}_{h} &= 2\om_{kip_{1}\dots p_{k-2}}\om_{jl}\,^{p_{1}\dots p_{k-2}}\om^{ki}\,_{q_{1}\dots q_{k-2}}\om^{jlq_{1}\dots q_{k-2}} - 2\mt
\end{split}
\end{align}
it follows that
\begin{align}\label{lijkllijkl}
\begin{split}
|\om \kwedge \om|^{2}_{h} &= 2\om_{kip_{1}\dots p_{k-2}}\om_{jl}\,^{p_{1}\dots p_{k-2}}\om^{ki}\,_{q_{1}\dots q_{k-2}}\om^{jlq_{1}\dots q_{k-2}} - 2\mt\\
& \leq 4\om_{kip_{1}\dots p_{k-2}}\om_{jl}\,^{p_{1}\dots p_{k-2}}\om^{ki}\,_{q_{1}\dots q_{k-2}}\om^{jlq_{1}\dots q_{k-2}} \leq 4|\om|_{h}^{4},
\end{split}
\end{align}
the last inequality by \eqref{om4}. This shows \eqref{lijklnorm}. Combining the leftmost inequality of \eqref{lijklnorm} with \eqref{lijklnorm2} yields
\begin{align}
\left(4 - \tfrac{2}{n(n-1)}\right)|\om|_{h}^{4} \geq |\tf(\om \kwedge \om)|^{2}_{h} + \tfrac{4}{n-2}|\mr{\rictr(\om\kwedge \om)}|_{h}^{2} \geq |\tf(\om \kwedge \om)|^{2}_{h}.
\end{align}
This shows \eqref{lijklnormb}.
\end{proof}
By \eqref{tfweylnorm}, for $\om \in S^{k}_{0}\std$ there holds
\begin{align}\label{tfweylomom}
\begin{split}
|\om \kwedge \om|^{2}_{h} &= |\tf (\om \kwedge \om)|^{2}_{h} + \tfrac{4}{n-2}|\tf \rictr(\om \kwedge \om)|_{h}^{2} + \tfrac{2}{n(n-1)}|\om|^{4}_{h}.
\end{split}
\end{align}
From \eqref{tfweylomom} there follows
\begin{align}\label{tfomom}
\tfrac{k-1}{2}|\om \kwedge \om|^{2} + |\rictr(\om \kwedge \om)|^{2} = \tfrac{k-1}{2}|\tf (\om \kwedge \om)|^{2}_{h} + \tfrac{n+2(k-2)}{n-2}|\tf \rictr(\om \kwedge \om)|_{h}^{2} + \tfrac{n+k-2}{n(n-1)}|\om|^{4}_{h},
\end{align}
for $\om \in S^{k}_{0}\std$.
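(The coefficients in \eqref{tfomom} result from adding $|\rictr(\om\kwedge\om)|^{2}_{h} = |\tf\rictr(\om\kwedge\om)|^{2}_{h} + \tfrac{1}{n}|\om|^{4}_{h}$ to $\tfrac{k-1}{2}$ times \eqref{tfweylomom} and simplifying via $\tfrac{2(k-1)}{n-2} + 1 = \tfrac{n+2(k-2)}{n-2}$ and $\tfrac{k-1}{n(n-1)} + \tfrac{1}{n} = \tfrac{n+k-2}{n(n-1)}$.)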
The proofs of Theorems \ref{scalarcurvaturetheorem} and \ref{simonstheorem} depend on estimating \eqref{tfomom} from above and below.
The estimates \eqref{lijklnorm} and \eqref{lijklnormb} can be improved when $k \leq 3$.
\begin{lemma}
Let $(\ste, h)$ be a Euclidean vector space of dimension $n \geq 2$. For $\om \in S^{2}_{0}\std$,
\begin{align}
\label{lijklnormbk2}
&2\tfrac{n-2}{n-1}|\om|^{4} = |\tf(\om\kwedge \om)|^{2} + \tfrac{2n}{n-2}|\tf\rictr(\om\kwedge \om)|^{2} \geq |\tf(\om \kwedge \om)|^{2}_{h},\\
\label{lijklnormbk2b}
&2|\om|^{4} = |\om\kwedge \om|^{2} + 2|\rictr(\om\kwedge \om)|^{2}.
\end{align}
\end{lemma}
\begin{proof}
By \eqref{tfweyl},
\begin{align}
\tf(\om \kwedge \om)_{ijkl} = 2\om_{k[i}\om_{j]l} + \tfrac{2}{n-2}(\rictr(\om \kwedge \om)_{k[i}h_{j]l} - \rictr(\om \kwedge \om)_{l[i}h_{j]k}) -\tfrac{2}{(n-1)(n-2)}|\om|^{2}h_{k[i}h_{j]l}.
\end{align}
Hence
\begin{align}\label{k2partial}
\begin{split}
|\tf(\om\kwedge \om)|^{2} &= \lb \tf(\om \kwedge \om), \om \kwedge \om \ra \\
& = 4\om^{ki}\om^{jl}\om_{k[i}\om_{j]l} + \tfrac{8}{n-2}\om^{ki}\om^{jl}\rictr(\om \kwedge \om)_{k[i}h_{j]l} - \tfrac{4}{(n-1)(n-2)}|\om|^{2}\om^{ki}\om^{jl}h_{k[i}h_{j]l}\\
& = 2|\om|^{4} - 2|\rictr(\om\kwedge \om)|^{2} -\tfrac{4}{n-2}|\rictr(\om\kwedge \om)|^{2} + \tfrac{2}{(n-1)(n-2)}|\om|^{4}\\
& = 2(1 + \tfrac{1}{(n-1)(n-2)})|\om|^{4} - \tfrac{2n}{n-2}|\rictr(\om\kwedge \om)|^{2},
\end{split}
\end{align}
so that
\begin{align}\label{k2partialb}
\begin{split}
|\tf(\om\kwedge \om)|^{2} &+ \tfrac{2n}{n-2}|\tf\rictr(\om\kwedge \om)|^{2} + \tfrac{2}{n-1}|\om|^{4}\\
&= |\tf(\om\kwedge \om)|^{2} + \tfrac{2n}{n-2}|\rictr(\om\kwedge \om)|^{2} - \tfrac{2}{(n-1)(n-2)}|\om|^{4} = 2|\om|^{4},
\end{split}
\end{align}
which yields \eqref{lijklnormbk2} after rearranging terms. Combining \eqref{tfomom} with \eqref{lijklnormbk2} yields \eqref{lijklnormbk2b}.
\end{proof}
The key step in the proof of Lemma \ref{k3normlemma} comes from the proof of Theorem $4.2$ in \cite{Chen-Ogiue}.
\begin{lemma}\label{k3normlemma}
Let $(\ste, h)$ be a Euclidean vector space of dimension $n \geq 2$. For $\om \in S^{3}_{0}\std$,
\begin{align}
\label{lijklnormk3}
&\tfrac{2n-1}{n}|\om|^{4} \geq |\om \kwedge \om|^{2} + |\rictr(\om \kwedge \om)|^{2}\geq |\om \kwedge \om|^{2} \geq \tfrac{2}{n(n-1)}|\om|^{4},&\\
\label{lijklnormbk3}
&2\tfrac{n-2}{n-1}|\om|^{4} \geq |\tf(\om \kwedge \om)|^{2} + \tfrac{n+2}{n-2}|\mr{\rictr(\om\kwedge \om)}|^{2} \geq |\tf(\om \kwedge \om)|^{2}.
\end{align}
\end{lemma}
\begin{proof}
Let $\{e(1)^{i}, \dots, e(n)^{i}\}$ be an $h$-orthonormal basis of $\ste$. In terms of the endomorphisms $\om(i)_{j}\,^{k} = e(i)^{p}\om_{pj}\,^{k} \in \eno(\ste)$, $1 \leq i \leq n$,
\begin{align}\label{co0}
[\om(i), \om(j)]_{kl} = -2e(i)^{a}e(j)^{b}\om_{k[a}\,^{p}\om_{b]lp} = -e(i)^{a}e(j)^{b}(\om \kwedge \om)_{abkl}.
\end{align}
By \cite[Lemma $1$]{Chern-Docarmo-Kobayashi}, for symmetric endomorphisms $A_{i}\,^{j}$ and $B_{i}\,^{j}$ of $\ste$ there holds $|[A, B]|^{2}_{h} \leq 2|A|^{2}_{h}|B|^{2}_{h}$, and applied with \eqref{co0} this yields
\begin{align}\label{co1}
\begin{split}
|\om \kwedge \om|^{2}_{h} & = \sum_{i= 1}^{n}\sum_{j = 1}^{n}| [\om(i), \om(j)]|^{2} = \sum_{i= 1}^{n}\sum_{j \neq i}| [\om(i), \om(j)]|^{2}\leq 2\sum_{i= 1}^{n}\sum_{j\neq i}|\om(i)|^{2}|\om(j)|^{2}.
\end{split}
\end{align}
There hold
\begin{align}\label{co2}
\begin{split}
\sum_{i = 1}^{n}|\om(i)|^{4} & = \left(\sum_{i = 1}^{n}|\om(i)|^{2}\right)^{2} - \sum_{i= 1}^{n}\sum_{j\neq i}|\om(i)|^{2}|\om(j)|^{2}= |\om|^{4} - \sum_{i= 1}^{n}\sum_{j\neq i}|\om(i)|^{2}|\om(j)|^{2},\\
\sum_{i = 1}^{n}|\om(i)|^{4} & =\tfrac{1}{2(n-1)}\sum_{i = 1}^{n}\sum_{j \neq i}\left(|\om(i)|^{2} - |\om(j)|^{2}\right)^{2} + \tfrac{1}{n-1}\sum_{i = 1}^{n}\sum_{j \neq i}|\om(i)|^{2}|\om(j)|^{2}.
\end{split}
\end{align}
The observations \eqref{co2}, which are key to the proof, appear in the proof of Theorem $4.2$ in \cite{Chen-Ogiue}.
Summing $\tfrac{2n-1}{n}$ times the first equation of \eqref{co2} with $\tfrac{1-n}{n}$ times the second equation of \eqref{co2} yields
\begin{align}\label{co3}
\begin{split}
\sum_{i = 1}^{n}|\om(i)|^{4} &= \tfrac{2n-1}{n} |\om|^{4} - \tfrac{1}{2n}\sum_{i = 1}^{n}\sum_{j \neq i}\left(|\om(i)|^{2} - |\om(j)|^{2}\right)^{2} - 2 \sum_{i= 1}^{n}\sum_{j\neq i}|\om(i)|^{2}|\om(j)|^{2}.
\end{split}
\end{align}
There holds $\rictr(\om \kwedge \om)_{ab}e(i)^{a}e(j)^{b} = \lb \om(i), \om(j)\ra$. Because $\rictr(\om \kwedge \om)$ is symmetric, the $h$-orthonormal basis $\{e(1)^{i}, \dots, e(n)^{i}\}$ can be chosen to be also orthogonal with respect to $\rictr(\om\kwedge \om)$ so that $ \lb \om(i), \om(j)\ra = 0$ if $i \neq j$. In this case,
\begin{align}\label{co5}
|\rictr(\om \kwedge \om)|^{2} & = \sum_{i = 1}^{n}\sum_{j = 1}^{n}\lb \om(i), \om(j)\ra^{2} = \sum_{i = 1}^{n}|\om(i)|^{4}.
\end{align}
Combining \eqref{co1}, \eqref{co2}, and \eqref{co5} yields
\begin{align}
\begin{split}
|\om \kwedge \om|^{2}_{h} & + |\rictr(\om \kwedge \om)|^{2} \leq 2\sum_{i= 1}^{n}\sum_{j\neq i}|\om(i)|^{2}|\om(j)|^{2} + \sum_{i = 1}^{n}|\om(i)|^{4}\\
& = \tfrac{2n-1}{n} |\om|^{4} - \tfrac{1}{2n}\sum_{i = 1}^{n}\sum_{j \neq i}\left(|\om(i)|^{2} - |\om(j)|^{2}\right)^{2}\leq \tfrac{2n-1}{n} |\om|^{4},
\end{split}
\end{align}
which proves the leftmost inequality of \eqref{lijklnormk3}. Combining this with \eqref{tfweylomom} yields \eqref{lijklnormbk3}.
\end{proof}
It follows from \eqref{rictralbe} and \eqref{syom} that for $\al, \be \in S^{k}\std$ and $\sY \in \mcurv(\std)$,
\begin{align}\label{qyalbe}
\lb \al, \op{\sY}(\be)\ra = \lb \rictr(\al \kwedge \be), \rictr(\sY)\ra + \tfrac{(k-1)}{2}\lb \al \kwedge \be, \sY\ra.
\end{align}
When $n > 2$ and $\al, \be \in S^{k}_{0}\std$, substituting \eqref{tfweyl} into \eqref{qyalbe} yields
\begin{align}\label{qyalbetr}
\begin{split}
\lb \al, \op{\sY}(\be)\ra &= \tfrac{(k-1)}{2}\lb \al \kwedge \be, \tf \sY\ra + \tfrac{n+2(k-2)}{n-2}\lb \rictr(\al \kwedge \be), \mr{\rictr(\sY)}\ra + \tfrac{n+k - 2}{n(n-1)}\scal(\sY)\lb \al, \be \ra.
\end{split}
\end{align}
Taking $\be = \al \in S^{k}\std$ in \eqref{qyalbe} yields
\begin{align}\label{qyalal}
\qY( \al) = \lb \rictr(\al \kwedge \al), \rictr(\sY)\ra + \tfrac{(k-1)}{2}\lb \al \kwedge \al, \sY\ra.
\end{align}
Theorem \ref{cyestimatetheorem} is needed in the proof of Theorem \ref{scalarcurvaturetheorem}. The statement of Theorem \ref{cyestimatetheorem} can be found in the form given here, in the context of Hermitian manifolds, as \cite[Theorem $4.1$]{Tosatti-schwarz}.
\begin{theorem}[{S.~Y. Cheng and S.~T. Yau, \cite[Corollary $1$ on p. $857$]{Cheng-Yau-affinehyperspheresI}, \cite[Corollary to Theorem $8$ on p. $353$]{Cheng-Yau-differentialequations}}]\label{cyestimatetheorem}
Let $(M, g)$ be a complete $n$-dimensional Riemannian manifold with Ricci curvature bounded from below by $-\ka(n-1)g_{ij}$ for a real constant $\ka \geq 0$. Suppose $u \in C^{2}(M)$ is nonnegative and satisfies $\lap u \geq Bu^{1 + \si} - Au$ for constants $B > 0$, $\si > 0$, and $A \in \rea$.
If $A \leq 0$, then $u$ is identically zero, while, if $A > 0$, there holds $\sup_{M}u \leq |A/B|^{1/\si}$.
\end{theorem}
\begin{theorem}\label{scalarcurvaturetheorem}
Let $M$ be a manifold of dimension $n \geq 3$. Suppose $h$ is a complete Riemannian metric that, together with $\om \in \Ga(\symkt) \cap \ker \div \cap \ker \klie$, solves \eqref{projectivehiggs} for $c> 0$ and $\ka \in \rea$.
\begin{enumerate}
\item If $\ka \geq 0$ then $\om$ is identically zero, and $h$ is a metric of constant sectional curvature.
\item If $\ka < 0$ then the scalar curvature $\sR_{h} = c|\om|^{2} + \ka$ of $h$ is nonpositive.
\end{enumerate}
\end{theorem}
\begin{proof}
By \eqref{projectivehiggs}, $\rictr(\sR)_{ij} = c\rictr(\om \kwedge \om)_{ij} + \tfrac{\ka}{n}h_{ij}$, $\sW_{ijkl} = c \tf(\om \kwedge \om)_{ijkl}$, $\tf \rictr(\sR)_{ij} = c \tf \rictr(\om \kwedge \om)_{ij}$, and $\scal(\sR) = c|\om|^{2}_{h} +\ka$. Together with \eqref{qyalbe}, \eqref{qyalbetr}, and \eqref{tfomom} these observations yield
\begin{align}\label{phscal1}
\begin{split}
\qR(\om) &= \tfrac{k-1}{2}\lb \sR, \om \kwedge \om \ra + \lb \rictr(\sR), \rictr(\om\kwedge \om)\ra \\
& = c\left(\tfrac{k-1}{2}|\om \kwedge \om|^{2}_{h} + |\rictr(\om \kwedge \om)|^{2}_{h} \right) + \tfrac{\ka(n+k-2)}{n(n-1)}|\om|^{2}_{h}\\
& = c\left( \tfrac{k-1}{2}|\tf(\om \kwedge \om)|^{2}_{h} + \tfrac{n+2(k-2)}{n-2}|\tf \rictr(\om\kwedge \om)|^{2}_{h} + \tfrac{n+k-2}{n(n-1)}|\om|^{4}\right) + \tfrac{\ka(n+k-2)}{n(n-1)}|\om|^{2}_{h}.
\end{split}
\end{align}
Since $c > 0$, it follows from \eqref{phscal1}, \eqref{lijnorm}, and \eqref{lijnormb} that
\begin{align}
\label{phscal2}
\begin{split}
\qR(\om) & \geq \tfrac{\ka(n+k-2)}{n(n-1)}\left(c|\om|^{2}_{h} + \ka\right)|\om|^{2}_{h}.
\end{split}
\end{align}
Substituting \eqref{phscal2} into \eqref{sharplapom} of Lemma \ref{swlemma} yields
\begin{align}
\label{phscal3}
\begin{split}
\lap_{h}|\om|^{\tfrac{n-2}{n+k-2}} &\geq \tfrac{n-2}{n+k-2}\qR(\om) |\om|_{h}^{- \tfrac{n+2(k-1)}{n+k-2}}\geq \tfrac{n-2}{n(n-1)}\left(c|\om|^{2}_{h} + \ka\right)|\om|^{\tfrac{n-2}{n+k-2}}_{h}
\\&= \tfrac{n-2}{n(n-1)}\left( c\left(|\om|^{\tfrac{n-2}{n+k-2}}_{h}\right)^{1 + \tfrac{2(n+k-2)}{n-2}} + \ka|\om|^{\tfrac{n-2}{n+k-2}}_{h}\right).
\end{split}
\end{align}
Because $h$ is complete and, since $c > 0$ and $\rictr(\om \kwedge \om)_{ij}$ is positive semidefinite, the Ricci curvature of $h$ is bounded from below by $\tfrac{\ka}{n}h_{ij}$, Theorem \ref{cyestimatetheorem} applies. If $\ka \geq 0$, it implies $\om$ vanishes. In this case $h$ is a metric of constant sectional curvature. If $\ka < 0$, Theorem \ref{cyestimatetheorem} yields $\sup_{M}|\om|^{2}_{h} \leq -\ka/c$. This implies $\sup_{M}\sR_{h} \leq 0$.
\end{proof}
\begin{theorem}\label{simonstheorem}
Let $M$ be a compact oriented manifold of dimension $n \geq 3$. Suppose the Riemannian metric $h$ and $\om \in \Ga(\symkt) \cap \ker \div \cap \ker \klie$ solve \eqref{projectivehiggs} for $c < 0$ and $\ka \in \rea$. Then
\begin{align}\label{simonsinequality}
&0 \geq \int_{M}|\om|^{2}\left(\tfrac{n+k-2}{n(n-1)}\ka + c(1 + \tfrac{(2n+1)(k-1)}{n})|\om|^{2}\right)d\vol_{h}, & &\text{if}\,\, k > 3,\\
\label{simonsinequalityk3}
&0 \geq \int_{M}|\om|^{2} \left( \tfrac{n+1}{n(n-1)}\ka + \tfrac{2n-1}{n}c|\om|^{2}\right)d\vol_{h}, & &\text{if}\,\, k = 3,\\
\label{simonsinequalityk2}
&0 \geq \int_{M}|\om|^{2}\left(\tfrac{1}{n-1}\ka + c|\om|^{2}\right)d\vol_{h}, & &\text{if}\,\, k = 2.
\end{align}
\end{theorem}
\begin{proof}
The equation \eqref{phscal1} remains valid and yields
\begin{align}\label{phscal4}
\begin{split}
\qR(\om) & = c\left( \tfrac{k-1}{2}|\tf(\om \kwedge \om)|^{2}_{h} + \tfrac{n+2(k-2)}{n-2}|\tf \rictr(\om\kwedge \om)|^{2}_{h} + \tfrac{n+k-2}{n(n-1)}|\om|^{4}\right) + \tfrac{\ka(n+k-2)}{n(n-1)}|\om|^{2}.
\end{split}
\end{align}
When $k = 2$, combining \eqref{phscal4} with \eqref{lijklnormbk2} yields
\begin{align}\label{phscalk2}
\begin{split}
\qR(\om) & = \tfrac{c}{2}\left(|\tf(\om \kwedge \om)|^{2}_{h} + \tfrac{2n}{n-2}|\tf \rictr(\om\kwedge \om)|^{2}_{h} + \tfrac{2}{n-1}|\om|^{4}\right) + \tfrac{ \ka}{n-1}|\om|^{2}\\
& =c|\om|^{4}+ \tfrac{\ka}{n-1}|\om|^{2} = |\om|^{2}\left( \tfrac{\ka}{n-1} + c|\om|^{2}\right).
\end{split}
\end{align}
Combining \eqref{lapomliediv} and \eqref{phscalk2} yields
\begin{align}
\begin{split}
0 &= \tfrac{1}{2}\int_{M}\lap_{h}|\om|^{2}d\vol_{h} = \int_{M}\left(|D\om|^{2} + \qR(\om)\right)d\vol_{h} = \int_{M}\left(|D\om|^{2} + |\om|^{2}\left( \tfrac{\ka}{n-1} + c|\om|^{2}\right)\right)d\vol_{h},
\end{split}\end{align}
which shows \eqref{simonsinequalityk2}.
When $k = 3$, combining \eqref{phscal4} with \eqref{lijklnormbk3} yields
\begin{align}\label{phscalk3}
\begin{split}
\qR(\om) & = c\left(|\tf(\om \kwedge \om)|^{2} + \tfrac{n+2}{n-2}|\tf \rictr(\om\kwedge \om)|^{2} + \tfrac{n+1}{n(n-1)}|\om|^{4}\right) + \tfrac{ \ka(n+1)}{n(n-1)}|\om|^{2}\\
& \geq \tfrac{c(2n-1)}{n}|\om|^{4}+ \tfrac{\ka(n+1)}{n(n-1)}|\om|^{2} = |\om|^{2}\left(\tfrac{n+1}{n(n-1)}\ka + \tfrac{2n-1}{n}c|\om|^{2}\right).
\end{split}
\end{align}
Combining \eqref{lapomliediv} and \eqref{phscalk3} yields
\begin{align}
\begin{split}
0 &= \tfrac{1}{2}\int_{M}\lap_{h}|\om|^{2}d\vol_{h} = \int_{M}\left(|D\om|^{2} + \qR(\om)\right)d\vol_{h}
\\ &= \int_{M}\left(|D\om|^{2} + |\om|^{2}\left( \tfrac{n+1}{n(n-1)}\ka + \tfrac{2n-1}{n}c|\om|^{2}\right)\right)d\vol_{h},
\end{split}\end{align}
which shows \eqref{simonsinequalityk3}.
When $k > 3$, combining \eqref{phscal4} with \eqref{lijnormb} and \eqref{lijklnormb} yields
\begin{align}\label{phscal5}
\begin{split}
\qR(\om) & \geq c\left( \tfrac{k-1}{2}\left(4 - \tfrac{2}{n(n-1)}\right) + \tfrac{n+k-2}{n} + \tfrac{n+k-2}{n(n-1)}\right) |\om|^{4}+ \tfrac{\ka(n+k-2)}{n(n-1)}|\om|^{2}\\
& = |\om|^{2}\left(\tfrac{n+k-2}{n(n-1)}\ka + c\left((k-1)\left(2 - \tfrac{1}{n(n-1)}\right) + \tfrac{n+k-2}{n-1}\right)|\om|^{2}\right)\\
& = |\om|^{2}\left(\tfrac{n+k-2}{n(n-1)}\ka + c\left(1 + \tfrac{(2n+1)(k-1)}{n}\right)|\om|^{2}\right).
\end{split}
\end{align}
Combining \eqref{lapomliediv} and \eqref{phscal5} yields
\begin{align}
\begin{split}
0 &= \tfrac{1}{2}\int_{M}\lap_{h}|\om|^{2}d\vol_{h} = \int_{M}\left(|D\om|^{2} + \qR(\om)\right)d\vol_{h}
\\&
\geq \int_{M}|\om|^{2}\left(\tfrac{n+k-2}{n(n-1)}\ka + c\left(1 + \tfrac{(2n+1)(k-1)}{n}\right)|\om|^{2}\right)d\vol_{h},
\end{split}\end{align}
which shows \eqref{simonsinequality}.
\end{proof}
\begin{remark}
Applied with $h_{ij}$ the induced metric and $\om_{ij}$ the second fundamental form of a mean curvature zero compact immersed hypersurface in the $(n+1)$-dimensional round sphere of scalar curvature $n(n+1)$ as in Example \ref{constraintequationsexample}, Theorem \ref{simonstheorem} recovers the specialization to such hypersurfaces of \cite[Theorem $5.3.2$]{Simons} (which applies to compact submanifolds of arbitrary codimension). Concretely, in this case, $c = -1$ and $\sR_{h} + |\om|^{2} = \ka = n(n-1)$, so \eqref{simonsinequalityk2} becomes
\begin{align}\label{simonsk2}
0 \geq \int_{M}|\om|^{2}\left(n -|\om|^{2}\right)d\vol_{h} = -n^{2}(n-1)^{2}\int_{M}\left(1 - \tfrac{\sR_{h}}{n(n-1)}\right)\left(\tfrac{n-2}{n-1} - \tfrac{\sR_{h}}{n(n-1)}\right)d\vol_{h},
\end{align}
which recovers \cite[Theorems $5.3.2$ and $5.3.3$]{Simons}.
From \eqref{simonsk2} it follows that either $\sR_{h} = n(n-1)$ and $\om$ is identically zero, so that the hypersurface is a totally geodesic hypersphere; $\sR_{h} = n(n-2)$ and $|\om|^{2} = n$, in which case $\om$ is parallel; or $\inf_{M}\sR_{h} < n(n-2)$, which is \cite[Corollary $5.3.3$]{Simons}.
\end{remark}
\begin{remark}
Applied with $h_{ij}$ the induced metric and $\om_{ijk}$ the twisted second fundamental form of a mean curvature zero compact immersed Lagrangian submanifold in a $2n$-dimensional Kähler manifold of constant holomorphic sectional curvature $\hat{c}$ as in Lemma \ref{constantsectlemma} of Example \ref{kahlerexample}, Theorem \ref{simonstheorem} recovers the specialization to such submanifolds of \cite[Theorem $4.1$]{Chen-Ogiue}. Concretely, in this case, $c = -1$, $\ka = \hat{c}n(n-1)$, and $|\om|^{2} = \hat{c}n(n-1) - \sR_{h}$, so \eqref{simonsinequalityk3} becomes
\begin{align}\label{chenogiue}
\begin{split}
0 &\geq \int_{M}|\om|^{2}\left( \tfrac{n(n+1)}{2n-1}\hat{c} - |\om|^{2}\right)d\vol_{h}
= n^{2}(n-1)^{2}\int_{M}\left(\hat{c} - \tfrac{\sR_{h}}{n(n-1)}\right)\left(\tfrac{\sR_{h}}{n(n-1)} - \tfrac{2n(n-2)}{(2n-1)(n-1)}\hat{c}\right)d\vol_{h},
\end{split}
\end{align}
which recovers \cite[Theorem $4.1$]{Chen-Ogiue}.
From \eqref{chenogiue} it follows that, if $\hat{c} > 0$, then either $\sR_{h} = n(n-1)\hat{c}$ and $\om$ is identically zero, so that the submanifold is totally geodesic; $\sR_{h} = \tfrac{2n^{2}(n-2)}{2n-1}\hat{c}$ and $|\om|^{2} =\tfrac{n(n+1)}{2n-1}\hat{c} $, in which case $\om$ is parallel; or $\inf_{M}\sR_{h} < \tfrac{2n^{2}(n-2)}{2n-1}\hat{c}$.
\end{remark}
\bibliographystyle{amsplain}
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\def\cprime{$'$}
\def\Dbar{\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D}
\def\dbar{\leavevmode\hbox to 0pt{\hskip.2ex \accent"16\hss}d}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
Correlated electron materials with geometrically frustrated lattice structures hold great promise for novel quantum electronic states. In addition to the quantum spin liquid \cite{anderson73-87,palee08,balents10} observed in $\kappa$-organics \cite{kanoda03,kanoda05,kanoda08,matsuda08} near the Mott transition at half-filling, the sodium cobaltates Na$_x$CoO$_2$ exhibit rich and unconventional phases \cite{takada03,yywang03,schaak03,foo04,alloul09} in a wide range of electron doping $x$. Central to the properties of the sodium cobaltates is the unconventional superconducting (SC) state observed near $x=1/3$ upon water intercalation \cite{takada03}. Despite the intensive search for its possible electronic origin \cite{shastry03,baskaran03,ogata03,qhwang04,kuroki04,mazin04,motrunich04,mochizuki05,eremin08,
zhouwang08,kiesel13,ziyangmeng}, the nature and pairing mechanism of the SC phase have been a controversial and unresolved issue. Contrary to conventional wisdom, several experiments suggest that the many-electron ground state at superconducting concentrations may be in close proximity to certain hidden electronic ordered phases \cite{hasan06,yang05,balicas06,rivadulla06,matano08,cava09}. Although various ordered states have been conjectured near x=1/3 \cite{baskaran03xxx,kuroki04,motrunich04,foussats06,an06,hasan06,wrobel07,ohkawa10} and argued to be relevant for superconductivity, almost all were based on the idea of Coulomb jamming where a strong extended interaction $V$ drives a Wigner crystal-like charge ordered insulating state with a large gap to single-particle excitations which is inconsistent with experiments. The nature and the microscopic origin of the textured electronic states if they exist, and the idea that electronic fluctuation mediated superconductivity arises in their proximity have remained enigmatic due to the lack of concrete understanding of the strong correlation effect and its interplay with geometric frustration in layered triangular lattice Mott-Hubbard systems.
Even for the simplest Hubbard model, the possible electronic ground states as a function of doping on the triangular lattice have not been understood.
In this paper, we study the ground state properties and the phase diagram of the triangular lattice Hubbard model. We show that, upon electron doping, new stable phases with textured charge and spin order (both collinear and coplanar) arise as a result of geometric frustration and strong correlation and provide insights into the unconventional normal and SC states of the cobaltates. Specifically, we construct a spin-rotation invariant slave boson theory capable of describing both charge and noncollinear magnetic superstructures to study the ground states as a function of Hubbard $U$ and electron doping $x$. We find that adding electrons turns the frustrated 120$^\circ$ N\'eel ordered insulator at half-filling into a 3-sublattice noncollinear antiferromagnetic (AF) metal, which is stable at low doping but undergoes a Lifshitz transition accompanied by incipient charge ordering. The magnetic frustration begins to be relieved in the presence of charge inhomogeneity, and a novel AF insulator emerges at $x=1/3$ where the unfrustrated collinear spin-density resides on the underlying honeycomb lattice sites
and coexists with moderate $\sqrt{3}\times\sqrt{3}$ charge density order. We obtain the phase diagram in the regime $0\le x\le0.45$, discuss the nature of the phases and the phase transitions, and illustrate the evolution of the Fermi surface (FS).
Remarkably, the strongly correlated ground states near $x=1/3$ can be viewed as doping into the ``1/3 AF insulator'', giving rise to metallic phases with small electron or hole FS pockets accommodating the excess carriers. We compare our findings to recent experiments and argue that the enhanced spin and charge fluctuations together with the narrowed quasiparticle band and the nested FS pockets may have important implications for the electronic origin of the SC phase in sodium cobaltates.
\section{SPIN ROTATIONAL INVARIANT SLAVE BOSON THEORY FOR NONCOLLINEAR SPIN AND CHARGE TEXTURED STATES}
The triangular lattice Hubbard model is given by,
\begin{equation}
H=-\sum_{ij,\sigma}t_{ij}c^{\dagger}_{i\sigma} c_{j\sigma}
+U\sum_{i}{\hat n}_{i\uparrow}{\hat n}_{i\downarrow}-\mu\sum_{i\sigma} c_{i\sigma}^\dagger c_{i\sigma},
\label{hrs}
\end{equation}
where $c^{\dagger}_{i\sigma}$ creates a spin-$\sigma$ electron; $U$ is the on-site Coulomb repulsion; and ${\hat n}_{i\sigma}$ the density operator. The first three nearest neighbor hoppings $t_{ij}=(t_1,t_2,t_3)=(-202, 35, 29)$ meV produce a tight-binding dispersion with a bandwidth $W=1.34$eV for the $a_{1g}$-band in the cobaltates \cite{zhou05,zhouwang07}. To study the interplay between strong correlation and magnetic frustration, we use the Kotliar-Ruckenstein slave-boson formulation \cite{kr} with full spin-rotation invariance \cite{cli,fresard}. This strong-coupling theory correctly describes the weakly interacting limit ($U\to0$), recovers and extends the Gutzwiller approximation to the spin-rotation invariant case for all $U$ \cite{kr,cli,fresard}. By studying the spatially unrestricted solutions, we can probe inhomogeneous, textured electronic states induced by strong correlation and geometrical frustration \cite{zhouwang07}.
The local Hilbert space of the Hubbard model is represented by a spin-1/2 fermion $f_\sigma$ and six bosons: $e$ (holon), $d$ (doublon), and $p_\mu$ ($\mu=0,1,2,3$) such that an empty site $\vert0\rangle=e^\dagger \vert\text{vac}\rangle$, a doubly occupied site $|\!\!\uparrow\downarrow\rangle=d^\dagger f_\downarrow^\dagger f_\uparrow^\dagger \vert\text{vac}\rangle$, and a singly occupied site $\vert \sigma\rangle= f_{\sigma^\prime}^\dagger p_{\sigma^\prime\sigma}^\dagger\vert \text{vac}\rangle$ with sums over repeated spin indices. The spin-rotation invariance is achieved in the SU(2) representation of the $2\times2$ matrix ${\bf p}$, i.e. $p_{\sigma\sigma^\prime}^\dagger={1\over\sqrt{2}} p_\mu^\dagger \tau_{\sigma\sigma^\prime}^\mu$ where ${\bf \tau}^{1,2,3}$ and ${\bf \tau}^0$ are the Pauli and identity matrices \cite{cli}. The local spin operator ${\bf S}_i^{x,y,z}={1\over2}\tr(\tau^{1,2,3}{\bf p}_i^\dagger {\bf p_i})$. For the completeness of the Hilbert space,
\begin{equation}
Q_i=e_i^\dagger e_i+d_i^\dagger d_i+\tr({\bf p}_i^\dagger {\bf p}_i)=1.
\label{constraint1}
\end{equation}
The equivalence between the fermion and boson representations of the particle and spin density further requires,
\begin{equation}
L_i^\mu=\tr(\tau^\mu {\bf p}_i^\dagger {\bf p}_i ) + 2\delta _{\mu ,0} d_i^\dagger d_i -\sum\limits_{\sigma \sigma '} f_{i\sigma}^\dagger \tau_{\sigma \sigma '}^\mu f_{i\sigma '}=0.
\label{constraint2}
\end{equation}
For electron doping, it is convenient to work in the hole-picture. Accordingly, the sign of the hopping term in Eq.~(\ref{hrs}) is reversed. Moreover, at electron doping concentration $x$, the average density of holes is given by $n=(1/N)\sum_{i\sigma} \langle f_{i\sigma}^\dagger f_{i\sigma} \rangle=1-x$. The Hamiltonian can thus be written as,
\begin{eqnarray}
H&=&\sum_{ij}
t_{ij} f^{\dagger}_{i\sigma_1} Z_{i\sigma_1\sigma}^\dagger Z_{j\sigma\sigma_2} f_{j\sigma_2}
+U\sum_{i}d_i^\dagger d_i \nonumber \\
&-&\mu\sum_{i} f_{i\sigma}^\dagger f_{i\sigma}-\sum_i\lambda_i(Q_i-1)-\sum_{i\mu}\lambda_i^\mu L_i^\mu,
\label{hsb}
\end{eqnarray}
where $\lambda_i$ and $\lambda_i^\mu$ are Lagrange multipliers enforcing the constraints
in Eqs.~(\ref{constraint1}) and (\ref{constraint2}). The renormalization factors for the hopping term have the matrix form \cite{cli,fresard}
\begin{equation}
{\bf Z}_i={\bf L}_i^{-1/2}(e_i^\dagger {\bf p}_i+{\bf {\overline p}}_i^\dagger d_i){\bf R}_i^{-1/2},
\label{zfactor}
\end{equation}
where ${\bf L}_i=(1-d_i^\dagger d_i)\tau_0-{\bf p}_i^\dagger {\bf p}_i$, ${\bf R}_i=(1-e_i^\dagger e_i)\tau_0-{\bf {\overline p}}_i^\dagger {\bf{\overline p}}_i$, and ${\bf{\overline p}}_i={\hat{\bf T}}{\bf p}_i{\hat{\bf T}}^{-1}$ is the time-reversal transformed ${\bf p}_i$.
The saddle-point solution of the functional-integral for Eq.~(\ref{hsb}) corresponds to condensing all boson fields $(e_i,d_i,p_{i\mu},\lambda_i,\lambda_i^\mu)$
and determining their values self-consistently by minimizing the ground state energy $\langle H \rangle$ \cite{kr,cli}.
Real space unrestricted searches for the lowest energy states indicate that, in the doping regime $0\le x\le 0.40$, the uniform paramagnetic (PM) ground state becomes unstable above a critical $U$ toward textured electronic states that always emerge with $\sqrt{3}\times\sqrt{3}$ superstructures. To determine the ground state properties of these textured phases, the phase structure and the phase transitions, it turns out to be necessary to go beyond real space calculations and study much larger systems. To this end,
we construct a superlattice formulation of the theory where each supercell contains 3 sites labeled by $\ell=1(A),2(B),3(C)$. The Hamiltonian (\ref{hsb}) becomes
\begin{eqnarray}
&H&=\sum_{\ell\ell^\prime,k}
K_{\ell\ell^\prime}(k)
f^{\dagger}_{\ell k \sigma_1} Z_{\ell\sigma_1\sigma}^\dagger Z_{\ell^\prime\sigma\sigma_2} f_{\ell^\prime k \sigma_2}
+U\sum_{\ell}d_\ell^\dagger d_\ell\nonumber \\
&-&\mu\sum_{\ell k} f_{\ell k\sigma}^\dagger f_{\ell k\sigma}-\sum_\ell\lambda_\ell(Q_\ell-1)-\sum_{\ell\mu}\lambda_\ell^\mu L_\ell^\mu,
\label{hsb-spercell}
\end{eqnarray}
where $k$, defined in the reduced zone, is the crystal momentum associated with the superlattice translation symmetry. The hopping matrix elements satisfy $K_{\ell\ell^\prime}^*(k)=K_{\ell^\prime\ell}(k)$,
\begin{eqnarray}
K_{11}&=& 2t_2
[\cos k_+ + \cos k_- +\cos (\sqrt{3} k_x )] - \mu,
\nonumber \\
K_{12}&=& t_1 (1 + e^{-ik_+} + e^{ik_-})+t_3 [2\cos(\sqrt{3}k_x ) + e^{-i3k_y }],
\nonumber \\
K_{13}&=& t_1 [1 + e^{-ik_+} + e^{- i\sqrt{3} k_x}]
\nonumber \\
&& \qquad + t_3 [2\cos k_- + e^{-i(\frac{3\sqrt{3}}{2}k_x + \frac{3}{2}k_y )}],
\nonumber \\
K_{23}&=& t_1 (1 + e^{-ik_-} + e^{-i\sqrt{3} k_x })
\nonumber \\
&& \qquad + t_3 [2\cos k_+ + e^{-i(\frac{3\sqrt{3}}{2}k_x - \frac{3}{2}k_y )}],
\nonumber
\end{eqnarray}
and $K_{11} = K_{22} = K_{33}$, with $k_{\pm}={\sqrt{3}\over2}k_x\pm{3\over2}k_y$.
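For illustration, the matrix $K(k)$ above is straightforward to implement. The following Python sketch (our own, not part of the original computation; the lattice constant is set to unity and the sample momenta are arbitrary) assembles $K(k)$ from the quoted hoppings and diagonalizes it to obtain the three tight-binding subbands:
\begin{verbatim}
import numpy as np

# Hopping parameters (meV) for the a_1g band quoted in the text;
# the lattice constant is set to unity (an illustrative convention).
t1, t2, t3 = -202.0, 35.0, 29.0

def K_matrix(kx, ky, mu=0.0):
    kp = np.sqrt(3)/2*kx + 1.5*ky          # k_+
    km = np.sqrt(3)/2*kx - 1.5*ky          # k_-
    K11 = 2*t2*(np.cos(kp) + np.cos(km) + np.cos(np.sqrt(3)*kx)) - mu
    K12 = t1*(1 + np.exp(-1j*kp) + np.exp(1j*km)) \
        + t3*(2*np.cos(np.sqrt(3)*kx) + np.exp(-3j*ky))
    K13 = t1*(1 + np.exp(-1j*kp) + np.exp(-1j*np.sqrt(3)*kx)) \
        + t3*(2*np.cos(km) + np.exp(-1j*(1.5*np.sqrt(3)*kx + 1.5*ky)))
    K23 = t1*(1 + np.exp(-1j*km) + np.exp(-1j*np.sqrt(3)*kx)) \
        + t3*(2*np.cos(kp) + np.exp(-1j*(1.5*np.sqrt(3)*kx - 1.5*ky)))
    return np.array([[K11, K12, K13],
                     [np.conj(K12), K11, K23],
                     [np.conj(K13), np.conj(K23), K11]])

# Three tight-binding subbands (meV) at two sample k points
for kx, ky in [(0.0, 0.0), (0.3, 0.1)]:
    print(kx, ky, np.linalg.eigvalsh(K_matrix(kx, ky)))
\end{verbatim}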
Minimizing the energy leads to the following self-consistency equations at each of the 3 sites in the supercell,
\begin{eqnarray}
\sum\limits_{\ell_1,\ell_2}\frac{{\partial T_{\ell_1,\ell_2} }}{{\partial e_\ell }} &+& 2\lambda_\ell e_\ell = 0,
\nonumber \\
\sum\limits_{\ell_1,\ell_2}\frac{{\partial T_{\ell_1,\ell_2} }}{{\partial d_\ell }} &+& (2\lambda_\ell - 4\lambda_\ell^0 + 2U)d_\ell = 0,
\nonumber
\\
\sum\limits_{\ell_1,\ell_2}\frac{{\partial T_{\ell_1,\ell_2} }}{{\partial p_\ell^0 }} &+& 2\lambda_\ell p_\ell^0 - 2\sum_\mu\lambda_\ell^\mu p_\ell^\mu = 0,
\nonumber \\
\sum\limits_{\ell_1,\ell_2}\frac{{\partial T_{\ell_1,\ell_2} }}{{\partial p_\ell^\alpha}}
&+&2\lambda_\ell p_\ell^\alpha - 2\lambda_\ell^0 p_\ell^\alpha-2\lambda_\ell^\alpha p_\ell^0=0, \ \alpha=1,2,3,
\nonumber
\end{eqnarray}
where $T_{\ell_1,\ell_2}=\sum_k K_{\ell_1\ell_2}(k)Z^\dagger_{\ell_1\sigma_1\sigma} Z_{\ell_2\sigma \sigma_2} \langle f_{\ell_1 k \sigma_1}^\dagger f_{\ell_2 k\sigma_2}\rangle$ is the quantum averaged kinetic energy between sites $\ell_1$ and $\ell_2$. These equations, together with the quantum averaged constraints (\ref{constraint1}) and (\ref{constraint2}), are solved numerically
by discretizing the reduced zone with typically $600\times600$ points to allow accurate determinations of the ground state properties. We verified that all of our results are reproducible in the even larger $3\times3$ supercell calculations.
\begin{figure}
\begin{center}
\fig{3.4in}{fig1.eps}\caption{Phase diagram of the triangular lattice Hubbard model. $(U_{\rm te},x_{\rm te})$ denotes the tetra-critical point (red circle) where the first order transition line (black), two second order transition lines, and the Lifshitz transition (dashed line) meet. Note the different horizontal scale for $x\le0.2$.}
\end{center}
\vskip-0.5cm
\end{figure}
\section{RESULTS AND DISCUSSIONS}
The obtained phase diagram is shown in Fig.~1. The stable phases in the wide region of doping $0\le x\le0.4$ are spin-charge textured electronic states for large enough $U$.
The strongly correlated electronic states are highlighted by two dramatically different insulating states at $x=0$ and $x=1/3$ (marked by red-lines). The insulating state at half-filling sets in above $U_{\rm 120}= 1.34W$ with noncollinear, 3-sublattice, $120^\circ$ N\'eel order due to magnetic frustration as shown in Fig.~2a, in good agreement with numerical renormalization group calculations \cite{yoshiyoka}. Due to the quenching of charge fluctuations at large $U$ at half-filling, the charge density is uniform.
Remarkably, at $x=1/3$, a novel textured insulating state emerges above $U_{c2}=2.22W$ with {\em unfrustrated} collinear AF order on the underlying honeycomb lattice as shown in Fig.~2b. The avoided magnetic frustration in this ``1/3 AF insulator'' is achieved via moderate $\sqrt{3}\times\sqrt{3}$ charge order: on one of the 3 sublattices, the charge density is larger and the spin density vanishes. We first describe the evolution of ground states between these strong coupling insulators as a function of $x$, and then study the transitions in the ground state at a fixed doping as a function of $U$.
\begin{figure}
\begin{center}
\fig{3.2in}{fig2.eps}\caption{Magnetically ordered insulating states at large $U$. (a) $120^\circ$ noncollinear N\'eel order at $x=0$ and $U=2W$. (b) Unfrustrated AF order on the underlying honeycomb lattice with charge order at $x=1/3$ and $U=3W$. Solid circles indicate the charge density.}
\end{center}
\vskip-0.5cm
\end{figure}
It is instructive to start with the 3-sublattice 120$^\circ$ AF insulator (120$^\circ$-AFI). It originates from the geometrically frustrated AF correlation on the triangular lattice. The noncollinear magnetic order splits the 3 subbands into 6 spin-nondegenerate bands, with the lowest three filled in the half-filled insulating state. Electron doping leads to the occupation of the fourth band, and the noncollinear AF metal (N-AFM) emerges with an electron FS enclosing the zone center ($\Gamma$ point). With increasing $x$, the FS grows with a volume of $x$ and the ordered moments decrease due to carrier hopping. The subband gaps are reduced accordingly but remain nonzero, and the N-AFM remains stable for a wide doping range, as seen in Fig.~1, until the growing hexagonal FS begins to make point-contact with the $\sqrt{3}\times\sqrt{3}$ reduced zone boundary from the inside near $x\simeq0.3$ and a Lifshitz transition takes place through umklapp scattering (dashed line in Fig.~1). Figs.~3a and 3b display the FS before and after the transition, showing the FS topology change and the emergence of small hole FS pockets across the Lifshitz transition. It should be noted that although there is no additional lattice symmetry breaking associated with the Lifshitz transition, the $\sqrt{3}\times\sqrt{3}$ charge order becomes prominent, as do the deviations of the spin-density on the 3 sublattices from the 120$^\circ$ order, when the system enters the noncollinear spin-charge ordered AF metal (NSCO-AFM) phase shown in Fig.~1. Interestingly, the emergence of charge inhomogeneity allows the alleviation of magnetic frustration in the NSCO-AFM phase, and the collinear spin-charge ordered AF metal (CSCO-AFM) with AF order on the unfrustrated honeycomb lattice eventually prevails for $x>1/3$. At $x=1/3$, the lower two of the three spin-degenerate bands are filled with 4 electrons per unit cell, leading to the ``1/3 AF insulator'', which we denote as the collinear spin-charge ordered AF insulator (CSCO-AFI).
\begin{figure}
\begin{center}
\fig{3.2in}{fig3.eps}\caption{FS of the electronic textured phases at $U=3W$ and doping (a) $x=0.28$, (b) $x=0.32$, (c) $x=0.36$, and (d) $x=0.45$.
}
\end{center}
\vskip-0.5cm
\end{figure}
Next, we turn to the phase transitions as a function of $U$ at a fixed doping. At half-filling, a first order transition separates the PM metal from the 120$^\circ$-AFI with a two-component magnetic order parameter. We find that the first order line extends and terminates at a tetra-critical point $(U_{\rm te},x_{\rm te})=(1.7W,0.2)$. For $x>x_{\rm te}$, the first order line splits into three continuous transitions with increasing $U$ as shown in Fig.~1: PM $\rightarrow$ CSCO-AFM $\rightarrow$ NSCO-AFM $\rightarrow$ N-AFM. The origin of the tetra-critical point has to do with the FS of the {\em PM metal} making contact with the reduced zone boundary from the outside at $x_{\rm te}$. The latter induces $\sqrt{3}\times\sqrt{3}$ charge order through umklapp scattering, which enables the magnetic order parameters to develop successively in the CSCO-AFM and the NSCO-AFM phases. Increasing $U$ further for $0.2<x<0.3$, the NSCO-AFM phase meets the phase boundary of the Lifshitz transition to the N-AFM phase, as the FS pockets overlap and transform into the hole FS centered around the $\Gamma$ point shown in Fig.~3a.
\begin{figure}
\begin{center}
\fig{3.4in}{fig4.eps}\caption{(a) Schematic phase diagram at $x=1/3$ as a function of $U$. The evolution of the charge density, magnitude and orientation of the spin density on the 3 sublattices sketched in (a) are shown quantitatively in (b), (c), and (d) respectively.}
\end{center}
\vskip-0.5cm
\end{figure}
\begin{figure}
\begin{center}
\fig{3.4in}{fig5.eps}\caption{Band dispersion (left panel) and FS topology at $x=1/3$: (a) CSCO-AFM at $U=1.97W$, (b) NSCO-AFM at $U=2.08W$, (c) C-FRM at $U=2.18W$, and (d) CSCO-AFI at $U=6.7W$. The single-particle gap in the CSCO-AFI phase is shown in (e) as a function of $U/W$.}
\end{center}
\vskip-0.5cm
\end{figure}
In Figs. 4 and 5, we provide quantitative results on the phase evolution at $x=1/3$. With increasing $U$, the PM metal becomes unstable and makes a transition at $U_{c1}=1.94W$ into the CSCO-AFM phase, where gaps open due to umklapp scattering along the $M-K$ and the $K-\Gamma$ directions as shown in Fig.~5a, producing three subbands in the folded zone and truncating the FS into six electron and hole pockets. The electronic texture (Fig.~4a) is identical to the one in the CSCO-AFI phase above $U_{c2}$.
As shown in Figs.~4b-d, sublattice A has a higher charge density but zero spin density, whereas collinear AF ordered spin moments reside on sublattices B and C with lower charge densities, forming an underlying honeycomb lattice. One would have expected this charge-spin ordered semimetal (SM) phase to evolve continuously into the CSCO-AFI as the magnitude of the order parameters increases with increasing $U$, thus gapping out the entire FS. However, this is not the case. This SM phase is stable only in a small region (see Fig.~1) until $U_{cp}=1.98W$, above which noncollinear (coplanar), two-component magnetic order emerges; a magnetic moment develops on sublattice A while the existing moments on sublattices B and C cant away from $180^\circ$ (Fig.~4a). Due to the noncollinearity of the magnetic order, the 3 spin-degenerate bands split into the six shown in Fig.~5b in this NSCO-AFM phase. The evolution of the charge and spin density on the 3 sublattices, $n_\ell$ and $m_\ell$, as well as the relative angles between the ordered spin moments $\theta_{\ell\ell^\prime}$, are shown in Figs.~4b-d as a function of $U$. This NSCO-AFM phase spans a wider region $1.98W <U <2.15W$. Due to the interplay of the charge and spin degrees of freedom, $n_\ell$, $m_\ell$, and $\theta_{\ell\ell^\prime}$ are nonmonotonic functions and show intricate evolutions with $U$. With the emergence of $m_A$, the noncollinear magnetic order first moves towards the $120^\circ$ state ($\theta_{\ell\ell^\prime} \to 120^\circ$), but quickly reverses course, since the growing $m_{B,C}$ accompanying the decrease of $n_{B,C}$ prefers to be AF correlated ($\theta_{BC}\to 180^\circ$) while $\theta_{AB}$ remains degenerate with $\theta_{AC}$. In order to alleviate frustration, the charge density $n_A$ continues to increase such that $m_A$ is reduced. As can be seen in Figs.~4b-d, surprisingly, the path toward the CSCO-AFI above $U_{c2}$ is interrupted by an incipient collinear ferrimagnetic metal (C-FRM) phase at $U_{FR}=2.15W$, where $n_C$ ($n_B$) increases (decreases) sharply such that $n_C\simeq n_A > n_B$ and $m_C\simeq m_A < m_B/2$. To minimize frustration, the larger spin moment $m_B$ is AF correlated with the smaller and parallel $m_A$ and $m_C$ ($\theta_{AB}=\theta_{BC}=180^\circ$, $\theta_{AC}=0$). The net ferromagnetic moment splits the spin degeneracy such that there remain six quasiparticle bands, shown in Fig.~5c. The C-FRM phase is stable until $U_{c2}$, where a redistribution of the charge/spin density takes place to further minimize magnetic frustration: $n_A$ increases to 1.36 and $m_A$ decreases to zero, while $n_B$ and $n_C$ approach the common value of $1.32$ and $m_B$ and $m_C$ the value of $0.18$ in the large-$U$ limit.
An insulating gap opens as the system enters the CSCO-AFI phase as shown in Fig.~5d-e, which is the stable phase for $U > U_{c2}$. Compared to the CSCO-AFM phase just above $U_{c1}$, the spin moments on $B$ and $C$ sublattices have grown and rotated by $90^\circ$ above $U_{c2}$.
We stress that the charge ordering necessary for the emergence of these textured states near $x=1/3$ arises from the Lifshitz transition
and is very different from the $\sqrt{3}\times\sqrt{3}$ Wigner crystal-like state driven by Coulomb jamming due to a large next-nearest neighbor $V$ \cite{motrunich04}. Moreover, the ``1/3 AF insulator'' is different from the fully charge-disproportionate state with a large insulating gap proposed in LSDA+U calculations \cite{pickett}. Indeed, as shown in Fig.~5e, the small excitation gap in the CSCO-AFI phase opens at $U_{c2}$ and only reaches about $53$meV in the large-$U$ limit.
It is remarkable that the spin-charge textured ground states occupy such a significant portion of the phase diagram around $x=1/3$. Indeed, the large-$U$ phase structure can be generically understood as either electron ($x>1/3$) or hole ($x<1/3$) doping into the corresponding ``1/3 AF insulator'', leading to correlated metallic phases with nested electron or hole FS pockets in Figs.~3(b) and 3(c). For example, for $x>1/3$, the excess carriers give rise to the CSCO-AFM metal phase with electron FS pockets centered around the zone corners. As shown in Figs.~3(c) and 3(d), the latter grow with increasing $x$ until they touch and coalesce to trigger a transition into the uniform PM phase above $x=0.4$ in the phase diagram, Fig.~1.
\section{CONCLUSIONS}
We conclude with a discussion of the implications for the sodium cobaltates. Theoretical estimates \cite{zhou05,ishida05,gtwang08,shorikov11} and the valence band resonant photoemission \cite{hasan04} suggest $U=3\sim5$ eV for the Co $d$-electrons, typical of $3d$ transition metals. Together with the bandwidth $W\simeq1.34$ eV, the value of $U/W=2.2 - 3.7$ puts the cobaltates near $x=1/3$ in the regime of the textured states on the phase diagram, with small electron and/or hole FS pockets. There are experimental indications from ARPES that the PM phase with the large $a_{1g}$ FS is in proximity to such hidden ordered phases \cite{hasan06,yang05}. Moreover, quantum oscillations find remarkably small FS pockets at $x=0.3$
possibly due to electronic superstructures \cite{balicas06}. The main reason that such states have not been widely observed in unhydrated cobaltates is likely the disordered Na dopant ions \cite{noteonu}. Indeed, magnetic susceptibility measurements in thermally annealed samples around $x=0.36$ find evidence for a magnetically ordered state \cite{rivadulla06}. We believe that water intercalation expands the c-axis and provides
effective screening of the dopant potential, making the physical properties more suitable for the 2D triangular lattice Hubbard model description.
Indeed, NMR experiments find that the principal effect of hydration is to reveal enhanced spin fluctuations at low temperatures, when compared to unhydrated single crystals at the same nominal Na concentrations \cite{matano08}. More direct evidence supporting this view comes from hydrated samples at $x\simeq0.3$, where a specific heat anomaly observed at a critical temperature near $7$K was unaffected by a $9$T magnetic field
and identified as associated with density wave order \cite{cava09}.
We thus propose that the cobaltates near $x=1/3$ are in proximity to such ``hidden'' textured phases with spin and charge order, and that the enhanced electronic fluctuations can mediate the SC pairing interaction.
\section{ACKNOWLEDGMENTS}
This work is supported in part by DOE DE-FG02-99ER45747 and NSF DMR-0704545. ZW thanks Aspen Center for Physics for hospitality.
|
1,108,101,562,992 | arxiv | \section{Introduction}
\label{sec1}
Recently it was argued \cite{GL} that the probability to count $N$ indistinguishable particles, bosons or fermions, in binned-together output ports of a unitary $M$-port, averaged over the Haar-random unitary matrix representing the multiport or, for a fixed multiport, over the input configurations of the particles, takes the asymptotic Gaussian form as $N\to \infty$ with the particle density $N/M$ kept constant. The quantum statistics of bosons or fermions enters the Gaussian law precisely through the particle density. In this respect, the random multiport with identical particles at its input can be thought of as a quantum variant of the Galton board (invented to expose the convergence of the binomial distribution to a Gaussian one) for indistinguishable identical particles correlated due to their quantum statistics.
The quantum asymptotic Gaussian law generalizes to quantum-correlated particles the well-known asymptotic result for the multinomial distribution, originally due to the works of A.~de Moivre, J.-L.~Lagrange, and P. S.~Laplace (for a historical review, see Ref. \cite{Hald}). The multinomial distribution applies when the identical particles are sent one at a time into the input (i.e., they are distinguishable particles). The asymptotic Gaussian law exposes the effect of the quantum statistics of identical particles on their behavior, explored previously \cite{E5,MCMS,BB,StatBench}, and is applicable to setups where randomness plays a key role, such as multiphoton propagation through disordered media \cite{DM,QS2P,PNCDM}. It is known that quantum interference may result in events forbidden for both bosons and fermions in some special (symmetric) multiports \cite{MPI,FourExp}, obscuring the role played by the quantum statistics. It should be stressed that the complexity of the behavior of bosonic particles in linear unitary networks asymptotically challenges digital computers, which is the essence of the Boson Sampling idea \cite{AA,SCBS} with the proof-of-principle experiments \cite{E5,E1,E2,E3,E4,ULO,E6,E7}. The Gaussian law is applicable to the scattershot Boson Sampling \cite{SCBS,E6}, where the randomness in the setup is due to the heralded photon generation in random input ports (see also section \ref{sec2} below).
The main purpose of the present work is to give a mathematically rigorous derivation of the asymptotic Gaussian law proposed heuristically in Ref. \cite{GL}, where it was supported only by some numerical evidence. The main technical tool in the proof is the discovered factorization of the average $r$-bin counting probability distribution into a series of layered probability distributions for the binary case (two bins). For instance, this fact is used to show the equivalence of the classical asymptotic Gaussian law for the $r$-bin partition to the de Moivre-Laplace theorem \cite{Hald}. This equivalence extends to the quantum case as well, suggesting the interpretation of the respective asymptotic Gaussian law as a quantum version of the de Moivre-Laplace theorem, where the events (particle counts in this case) are quantum-correlated due to the indistinguishability of the particles.
Section \ref{sec2} contains a brief statement of the problem and a rigorous formulation of the main results in the form of two theorems. The theorems are proven in section \ref{sec3}, where the binary classical case is briefly recalled in section \ref{sec3A} and the $r$-bin case is considered in section \ref{sec3B}. The $r$-bin quantum case is analyzed in section \ref{sec3C}, where the similarities of the classical and quantum cases are highlighted. Appendices A, B, and C contain the mathematical details of the proof. In section \ref{sec4} the results of theorems 1 and 2 are generalized to an \textit{arbitrary} (mixed) input state of indistinguishable particles. Finally, in section \ref{sec5} a brief summary of the results is given.
\section{The counting probability of identical particles in a random multiport with binned-together output ports}
\label{sec2}
Consider a unitary quantum $M$-port network (i.e., with $M$ input and $M$ output ports) described by a unitary matrix $U$ connecting the input $|k,in\rangle$ and output $|l,out\rangle$ states as follows: $|k,in\rangle = \sum_{l=1}^M U_{kl}|l,out\rangle$, and whose output ports are partitioned into $r$ bins having $\mathbf{K}\equiv (K_1,\ldots,K_r)$ ports. We are interested in the probability of counting $N$ noninteracting identical particles, impinging at the network input, in the binned-together output ports, as in Fig. \ref{F1} (for more details see also Ref. \cite{GL}). We focus on the average probability in a random unitary multiport (except where stated otherwise, here and below the term ``average'' means the average over the Haar-random unitary matrix $U$; where necessary we will use the notation $\langle \ldots \rangle$). A random unitary optical multiport can be experimentally realized with a very high fidelity \cite{ULO} and without explicit matrix calculations \cite{DDHRU}. We consider the probability in binned-together output ports since in the quantum case the average probability of an output configuration of indistinguishable bosons is exponentially small and, therefore, hard to estimate experimentally (see below).
Let us consider first distinguishable particles.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig1.eps}
\caption{A quantum network, having a unitary matrix $U$, with $N$ indistinguishable identical particles at its input and binned-together output ports (two or more bosons as well as classical particles may share the same input port). The braces illustrate the successive binary partition into $r$ bins ($r=3$ in this case). We are interested in the probability of counting $\mathbf{n} = (n_1,n_2,n_3)$ particles in the output bins with $K_1,K_2,K_3$ ports, where $K_3 = M - K_1-K_2$. } \label{F1}
\end{center}
\end{figure}
The term ``distinguishable particles'' applies here to quantum particles in different states with respect to the degrees of freedom not affected by a multiport \cite{HOM,Ou,VS,Tichy}, such as the arrival time in the case of photons (e.g., particles sent one at a time through the multiport; note, however, that the time-resolving detection scheme \cite{Tamma1,Tamma2} unitarily mixes also the internal degrees of freedom, making them the operating modes). The average probability for a single particle from input port $k$ to land in bin $i$ reads \cite{GL} $p_i = \sum_{l\in K_i} \langle |U_{k,l}|^2\rangle = q_i \equiv K_i/M$, since $\langle |U_{kl}|^2\rangle = 1/M$, where $|U_{k,l}|^2$ is the probability of the transition $k\to l$. For identical particles sent one at a time through a random multiport, the average probability to count $\mathbf{n}\equiv (n_1,\ldots,n_r)$ particles in the output bins becomes the multinomial distribution
\begin{equation}
P^{(D)}(\mathbf{n}|\mathbf{K}) = \frac{N!}{\prod_{i=1}^rn_i!}\prod_{i=1}^rq_i^{n_i}.
\label{E1}\end{equation}
Under the above conditions, Eq. (\ref{E1}) applies also for a fixed unitary multiport with the averaging performed over the uniformly random input ports of the particles \cite{GL}.
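For concreteness, Eq. (\ref{E1}) is easy to evaluate and check numerically. The following Python sketch (the function name \verb|P_D| and the small system sizes are our own illustrative choices) verifies the normalization of the multinomial distribution over all particle counts:
\begin{verbatim}
from math import comb, prod

def P_D(n, K, M):
    # Multinomial counting probability, Eq. (1)
    q = [Ki / M for Ki in K]
    coef, rest = 1, sum(n)
    for ni in n:                  # multinomial coefficient N!/prod(n_i!)
        coef *= comb(rest, ni)
        rest -= ni
    return coef * prod(qi**ni for qi, ni in zip(q, n))

N, K = 5, (2, 3, 5)               # illustrative sizes
M = sum(K)
total = sum(P_D((a, b, N - a - b), K, M)
            for a in range(N + 1) for b in range(N + 1 - a))
print(total)                      # -> 1.0 (normalization)
\end{verbatim}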
Now let us consider indistinguishable particles (assuming in the case of fermions up to one particle per network port, as in Fig. \ref{F1}). For the input $\mathbf{k}= (k_1,\ldots,k_N$) and output $\mathbf{l}= (l_1,\ldots,l_N$) configurations the average probability of the transition $\mathbf{k}\to \mathbf{l}$ is just the inverse of the number of Fock states of $N$ bosons (fermions) in $M$ ports: $ p^{(B)}(\mathbf{l}|\mathbf{k})= \frac{N!}{(M+N-1)\ldots M}$ $\bigl( p^{(F)}(\mathbf{l}|\mathbf{k}) = \frac{N!}{M \ldots (M-N +1)}\bigr)$. This observation leads to the following quantum equivalent of Eq. (\ref{E1}) (with the upper signs for bosons and the lower ones for fermions) \cite{GL}
\begin{eqnarray}
\fl \qquad P^{(B,F)}(\mathbf{n}|\mathbf{K}) &=&\frac{N!}{(M\pm N\mp1)\ldots M}\prod_{i=1}^r\frac{(K_i\pm n_i\mp 1)\ldots K_i}{n_i!} \nonumber\\
\fl &=&
P^{(D)}(\mathbf{n}|\mathbf{K}) \frac{\prod_{i=1}^r\left(\prod_{l=0}^{n_i-1}\left[1\pm l/K_i\right]\right)}{\prod_{l=0}^{N-1}\left[1\pm l/M\right]}\equiv P^{(D)}(\mathbf{n}|\mathbf{K})Q^{(\pm)}(\mathbf{n}|\mathbf{K}).
\label{E2}\end{eqnarray}
As in the classical case, the probability formula (\ref{E2}) applies also to a fixed unitary multiport with the averaging performed uniformly over the input configurations $\mathbf{k}$ allowed by the quantum statistics \cite{GL}. Moreover, as shown in section \ref{sec4} below, the average probability in Eq. (\ref{E2}) actually applies to an \textit{arbitrary} mixed input state of $N$ indistinguishable particles.
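The exact distribution (\ref{E2}) can likewise be evaluated for small systems. The sketch below (reusing \verb|P_D| from above; again the sizes are illustrative) computes the quantum factor $Q^{(\pm)}$ and verifies that $P^{(B,F)}$ is normalized for both statistics:
\begin{verbatim}
from math import prod

def Q_factor(n, K, M, sign):
    # Quantum statistics factor of Eq. (2): sign=+1 bosons, -1 fermions
    N = sum(n)
    num = prod(prod(1 + sign*l/Ki for l in range(ni))
               for ni, Ki in zip(n, K))
    den = prod(1 + sign*l/M for l in range(N))
    return num / den              # vanishes for fermions if n_i > K_i

def P_BF(n, K, M, sign):
    return P_D(n, K, M) * Q_factor(n, K, M, sign)

N, K = 5, (2, 3, 5)
M = sum(K)
for s in (+1, -1):
    tot = sum(P_BF((a, b, N - a - b), K, M, s)
              for a in range(N + 1) for b in range(N + 1 - a))
    print(s, tot)                 # -> 1.0 for both statistics
\end{verbatim}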
Now, our focus on the binned-together output ports for a large multiport and a large number of particles can be explained. Assuming that only an asymptotically polynomial in $N$ number of experimental runs is accessible (due to decoherence or as in verification protocols for the Boson Sampling \cite{E5,BB,AA,E6,Drumm}), an exponentially small in $N$ probability cannot be estimated. Assuming scaling up at a fixed particle density $\alpha= N/M$, the probability of a particular configuration of indistinguishable bosons at the output of a random $M$-port, on average, is asymptotically exponentially small in $N$ (see also Refs. \cite{Drumm,Arkhipov}):
\begin{equation}
\fl p^{(B)}(\mathbf{l}|\mathbf{k}) = \sqrt{2\pi N(1+\alpha)}e^{-\gamma N }\left[1+ \mathcal{O}\left(\frac{1}{N}\right)\right],\;
\gamma = \ln\left(\frac{1}{\alpha}\right)+ \left(1\pm \frac{1}{\alpha}\right) \ln\left(1\pm \alpha\right).
\end{equation}
Though the density $\alpha$ is not fixed below (in the statement of theorem 1), it is natural to consider the asymptotic limit at a fixed density, i.e., when both the number of particles and the number of ports tend to infinity (for bosons, there is also the high-density case $M = \mathcal{O}(1)$ as $N\to \infty$, see Corollary 1 below).
Eqs. (\ref{E1}) and (\ref{E2}) are good approximations to the average counting probability in the binned-together output modes for $N$ identical particles in disordered media \cite{DM,QS2P,PNCDM} and chaotic cavities \cite{MCMS}. They can also be applied to the scattershot Boson Sampling \cite{SCBS,E7} (due to the uniform averaging over the input configurations with up to one particle per input port in the low-density limit $M \gg N^2$, when the contribution from the bunched configurations scales as $\mathcal{O}(N^2/M)$ \cite{Arkhipov}). In such setups, Eq. (\ref{E1}) is not, however, the exact average probability for distinguishable (classical) particles, since for $N$ simultaneous distinguishable particles at the input there is an extra factor \cite{GL} due to the correlations between the matrix elements $|U_{kl}|^2$.
In the proof of the asymptotic Gaussian law of Ref. \cite{GL} we will use the fact that the $r$-bin case can be considered as a set of $r-1$ layered binary cases, as illustrated in Fig. \ref{F1} up to the second layer. Indeed, both in the classical and quantum cases there is an exact factorization of the average counting probability into binned-together output ports (see \ref{appA}):
\begin{eqnarray}
P(\mathbf{n}|\mathbf{K}) &= & P(n_1,N_1-n_1|K_1,M_1-K_1)P(n_2,N_2-n_2|K_2,M_2-K_2)\nonumber\\
&\ldots &P(n_{r-1},N_{r-1}-n_{r-1}|K_{r-1},M_{r-1}- K_{r-1}),
\label{E3}
\end{eqnarray}
where $N_1=N$, $M_1=M$ and for $s=2,\ldots,r-1$
\begin{eqnarray}
N_s = N - \sum_{i=1}^{s-1}n_i,\quad M_s = M - \sum_{i=1}^{s-1}K_i.
\label{E4}\end{eqnarray}
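The factorization (\ref{E3})-(\ref{E4}) is easy to verify numerically; a minimal sketch (reusing \verb|P_BF| from above, for $r=3$ bins and bosons, with illustrative sizes):
\begin{verbatim}
def P_binary(n1, N, K1, M, sign):
    # Two-bin probability P_{N,M}(n1|K1) in the notation introduced below
    return P_BF((n1, N - n1), (K1, M - K1), M, sign)

n, K = (2, 1, 2), (2, 3, 5)       # illustrative counts and bin sizes
N, M = sum(n), sum(K)
lhs = P_BF(n, K, M, +1)           # r = 3 bins, bosons
rhs = P_binary(n[0], N, K[0], M, +1) \
    * P_binary(n[1], N - n[0], K[1], M - K[0], +1)
print(abs(lhs - rhs) < 1e-12)     # -> True
\end{verbatim}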
Eq. (\ref{E3}) shows the key role played by the binary case, for which we will use also another notation $P_{N,M}(n|K) = P(n,N-n|K,M-K)$, with the independent variables as the arguments. Below we will need the following definitions (for $s\ge 2$):
\begin{equation}
\bar{q}_s = \frac{K_s}{M_s} = \frac{q_s}{1-\sum_{i=1}^{s-1}q_i},\quad \bar{x}_s = \frac{n_s}{N_s} = \frac{x_s}{1-\sum_{i=1}^{s-1}x_i}, \quad x_i = \frac{n_i}{N},
\label{E5}\end{equation}
where $0\le \bar{q}_s\le 1$ takes the place of $q_s$ in the $s$th layer of the binary partition of the average classical probability for the $r$-bin case in Eq. (\ref{E3}), i.e., $P^{(D)}_{N_s,M_s}(n_s|K_s) = \frac{N_s!}{n_s! (N_s-n_s)!} {\bar{q}_s}^{\,n_s}(1-\bar{q}_s)^{N_s-n_s}$. Let us differentiate by $\sigma$ the three cases of identical particles, where bosons correspond to $\sigma = +$, fermions to $\sigma =-$, and distinguishable particles to $\sigma = 0$. In the case when $Y$ is of order $X$, i.e., when there is a $C>0$ (independent of $X$) such that $Y\le C X$, we use the notation $Y = \mathcal{O}(X)$. The following two theorems state the main results.
\begin{theorem}
Consider the Haar-random unitary $M$-port with the output ports binned together into $r$ sets of $K_1,\ldots,K_r$ ports. Then, as $N,M\to \infty$ for fixed $q_i = K_i/M>0$, the average probability to count $\mathbf{n}=(n_1,\ldots,n_r)$ identical particles in the $r$ bins such that
\begin{equation}
|n_i - N q_i| \le AN^{\frac23-\epsilon},\quad A>0,\quad 0<\epsilon< \frac16
\label{E6}\end{equation} has the following asymptotic form
\begin{equation}
\fl \quad
P^{(\sigma)}(\mathbf{n}|\mathbf{K}) = \frac{\exp\left\{-N\sum_{i=1}^r \frac{(x_i-q_i)^2}{2(1+\sigma\alpha)q_i} \right\} }{\left( 2\pi [1+\sigma \alpha]N\right)^{\frac{r-1}{2}} \prod_{i=1}^r\sqrt{q_i}}\left\{1+ \mathcal{O}\left(\frac{(1-\alpha\delta_{\sigma,-})^{-3} }{N^{3\epsilon}}+\frac{\alpha\delta_{\sigma,+}}{N}\right)\right\}.
\label{E7}\end{equation}
\end{theorem}
An important note is in order. In the course of the proof (see section \ref{sec3}) it is also established that the $r$-bin asymptotic Gaussian on the right hand side of Eq. (\ref{E7}) satisfies the same factorization as the average counting probability, Eq. (\ref{E3}), to the error of the asymptotic approximation.
For a finite density $\alpha$, in the quantum case the error in Eq. (\ref{E7}) scales as $ \mathcal{O}(N^{-3\epsilon})$. In this case $x_i \to q_i$ in the limit $N\to \infty$ for bosons, fermions, and distinguishable particles alike. In the usual presentation of the classical result, $\epsilon = 1/6$ \cite{Gnedenko}; with this choice the error in Eq. (\ref{E7}) scales as $\mathcal{O}(N^{-\frac12})$ (this choice, however, invalidates the error estimate in theorem 2, Eq. (\ref{E8}) below, and thus is not allowed).
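The quality of the approximation (\ref{E7}) can already be probed at moderate sizes. The sketch below (reusing \verb|P_BF| from section \ref{sec2}; the choice $N=50$, $M=100$ is illustrative) compares the exact two-bin probability with the Gaussian of Eq. (\ref{E7}) for both statistics:
\begin{verbatim}
import math

N, M, K1 = 50, 100, 50            # illustrative: alpha = 1/2, q = 1/2
alpha, q = N / M, K1 / M
for s in (+1, -1):                # bosons / fermions
    for n1 in (20, 25, 30):
        exact = P_BF((n1, N - n1), (K1, M - K1), M, s)
        x = n1 / N                # Eq. (7) specialized to r = 2 bins
        gauss = math.exp(-N*(x - q)**2 / (2*(1 + s*alpha)*q*(1 - q))) \
              / math.sqrt(2*math.pi*(1 + s*alpha)*N*q*(1 - q))
        print(s, n1, exact, gauss)   # agree to the stated error
\end{verbatim}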
Since not all particle counts are covered by Eq. (\ref{E6}), theorem 1 does not guarantee the asymptotic Gaussian to be a uniform approximation for all $\mathbf{n}$. However, if Eq. (\ref{E6}) is violated, then the respective particle counts occur with an exponentially small probability (asymptotically undetectable in an experiment with only a polynomial in $N$ number of runs). This is stated in the following theorem.
\begin{theorem}
The average probability of the particle counts $\mathbf{n}$ violating Eq. (\ref{E6}) for $N,M\to \infty$ and $\alpha= N/M$ being fixed satisfies
\begin{equation}
P^{(\sigma)}(\mathbf{n}|\mathbf{K}) = \mathcal{O}\left(N^{s(\sigma)} \exp\left\{-\frac{A^2}{1+\sigma\alpha} N^{\frac13-2\epsilon}\right\}\right),
\label{E8}\end{equation}
where $A$ is from Eq. (\ref{E6}), whereas $s(0) = 1/2$ (distinguishable particles), $s(+) = 1/2$ (bosons) and $s(-) = 5/2$ (fermions).
\end{theorem}
In Ref. \cite{GL} the high-density limit for bosons was mentioned, realized for $N\to\infty$ and $M = \mathcal{O}(1)$. This case is a corollary to theorem 1.
\begin{corrolary}
As $N\to \infty$ at a fixed $M \gg 1$, the average probability to count $\mathbf{n}=(n_1,\ldots,n_r)$ identical bosons in $r$ bins with $K_1,\ldots,K_r$ output ports of a Haar-random unitary $M$-port, such that Eq. (\ref{E6}) is satisfied, has the following approximate asymptotic form
\begin{equation}
\! \! \! \! \! \! \!\!\!\! \! \! \! \! \!
P^{(B)}(\mathbf{n}|\mathbf{K}) = \frac{M^{\frac{r-1}{2}}\exp\left\{-M\sum_{i=1}^r \frac{(x_i-q_i)^2}{2q_i} \right\} }{\left( 2\pi N^2\right)^{\frac{r-1}{2}} \prod_{i=1}^r\sqrt{q_i}}\left\{1+ \mathcal{O}\left(\frac{1}{M} + \frac{1}{N^{3\epsilon}}\right) \right\}.
\label{E9}\end{equation}
\end{corrolary}
Corollary 1 tells us that in the limit $N\to \infty$ the relative particle counting variables $x_1,\ldots,x_r$ are approximated by the continuous Gaussian random variables (similar as in the classical case in Ref. \cite{Gnedenko}):
\begin{equation}
x_i = q_i + \frac{\xi_i}{\sqrt{M}},
\label{E10}\end{equation}
where $\xi_1,\ldots,\xi_r$ are random variables satisfying the constraint $\sum_{i=1}^r\xi_i = 0$ with a Gaussian joint probability density
\begin{equation}
\rho = \frac{\exp\{-\sum_{i=1}^r \frac{\xi_i^2}{2q_i}\} }{\left(2\pi\right)^{\frac{r-1}{2}} \prod_{i=1}^r \sqrt{q_i}}.
\label{E11}\end{equation}
(The factor $\left(\frac{M}{N^2}\right)^{\frac{r-1}{2}}$ in Eq. (\ref{E9}) allows one to convert the sum $\sum_{\mathbf{n}}P^{(B)}(\mathbf{n})=1$ into the integral $I_M$ of $\rho $ in Eq. (\ref{E11}) over $\xi_1,\ldots,\xi_r$ with $0\le \xi_i\le \sqrt{M}$. The latter is exponentially close to $1$ for $M\gg 1$: $I_M = 1-e^{-\mathcal{O}(M)}$.)
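A crude numerical illustration of Eq. (\ref{E9}) follows (reusing \verb|P_BF| from section \ref{sec2}; the high-density point $M=10$, $N=60$ is an illustrative choice, so the $\mathcal{O}(1/M)$ error remains visible):
\begin{verbatim}
import math

M, K1, N = 10, 5, 60              # alpha = 6: bosonic high-density regime
q = K1 / M
for n1 in (25, 30, 35):
    exact = P_BF((n1, N - n1), (K1, M - K1), M, +1)
    x = n1 / N                    # Eq. (9) specialized to r = 2 bins
    gauss = math.sqrt(M / (2*math.pi*N**2*q*(1 - q))) \
          * math.exp(-M*(x - q)**2 / (2*q*(1 - q)))
    print(n1, exact, gauss)       # agree up to O(1/M) corrections
\end{verbatim}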
\section{Proof of the Theorems }
\label{sec3}
\subsection{The binary classical case}
\label{sec3A}
Let us first consider the classical binary case $r=2$. Denote by $K_2$ the Kullback-Leibler divergence
\begin{equation}
K_2(x|q) = x \ln\left(\frac{ x}{q}\right) +(1-x) \ln \left(\frac{ 1-x}{1-q}\right).
\label{E12}\end{equation}
Using Stirling's formula $ n! = \sqrt{2\pi (n+\theta_n)}(n/e)^n$, where $\frac{1}{6}<\theta_n<1.77$ for $ n\ge 1$ and $\theta_0 = \frac{1}{2\pi}$
\cite{Mortici}, we have
\begin{eqnarray}
\fl \qquad P^{(D)}_{N,M}(n|K) &= &\left[\frac{1+ \theta_N/N}{2\pi N (x +\theta_n/N) (1-x + \theta_{N-n}/N)}\right]^\frac12\left(\frac{x}{q}\right)^{-n}
\left(\frac{1-x}{1-q}\right)^{-N+n}
\nonumber\\
\fl & = & \frac{\exp\left\{ -NK_2(x|q) \right\}}{\sqrt{2\pi Nq(1-q)}}\left[1+\mathcal{O}\left(\frac{1}{N^{\frac13+\epsilon}}\right)\right],
\label{E13}\end{eqnarray}
since from Eq. (\ref{E6})
\[
\frac{x}{q}\frac{1-x}{1-q} \ge \left(1 - \frac{A}{qN^{\frac13+\epsilon}}\right)\left(1 - \frac{A}{(1-q)N^{\frac13+\epsilon}}\right) =
1 + \mathcal{O}\left(\frac{1}{N^{\frac13+\epsilon}}\right).
\]
By expanding the Kullback-Leibler divergence (\ref{E12}) using Eq. (\ref{E6}),
\begin{equation}
K_2(x|q) = \frac{(x-q)^2}{2q(1-q)} +\mathcal{O}\left(\frac{1}{N^{3\epsilon}}\right),
\label{E14}\end{equation}
and substituting the result in Eq. (\ref{E13}) we get Eq. (\ref{E7}) for the binary classical case.
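As a side check of the Stirling bounds used in Eq. (\ref{E13}), $\theta_n$ can be computed directly from its definition (a short sketch; the sample values of $n$ are arbitrary):
\begin{verbatim}
import math
# theta_n defined by n! = sqrt(2*pi*(n + theta_n)) * (n/e)**n;
# the bound 1/6 < theta_n < 1.77 for n >= 1 is easy to probe:
for n in (1, 2, 5, 10, 50):
    theta = (math.factorial(n) / (n / math.e)**n)**2 / (2*math.pi) - n
    print(n, theta)               # decreases toward 1/6 from above
\end{verbatim}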
To show Eq.~(\ref{E8}) consider the first line of Eq. (\ref{E13}), valid for all $0\le x \le 1$, and observe that $(x+\theta_{n}/N)(1-x+\theta_{N-n}/N)\ge (2\pi N)^{-2}$. We obtain
\begin{equation}
P^{(D)}_{N,M}(n|K) \le 2\pi \sqrt{N} \exp\left\{ -N K_2(x|q) \right\}\left[1+\mathcal{O}\left(\frac{1}{N}\right)\right].
\label{P2D}\end{equation}
Then using Pinsker's inequality \cite{Pinsker}
\begin{equation}
K_2(x|q) \ge (x-q)^2
\label{E15}\end{equation}
and that by Eq. (\ref{E6}) $|x-q| > A N^{-\frac13-\epsilon}$ for $\epsilon <1/6$ we obtain the required scaling of Eq. (\ref{E8}) from Eq.~(\ref{P2D}):
\begin{equation}
\fl P^{(D)}_{N,M}(n|K) \le 2\pi \sqrt{N} \exp\left\{ -A^2N^{\frac13-2\epsilon} \right\}\left[1+\mathcal{O}\left(\frac{1}{N}\right)\right]
= \mathcal{O}\left(\sqrt{N} \exp\left\{-A^2 N^{\frac13-2\epsilon}\right\}\right).
\label{E16}\end{equation}
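The binary form of Pinsker's inequality used here is likewise simple to scan numerically (a sketch over an arbitrary grid of $x$ and $q$):
\begin{verbatim}
import math

def K2(x, q):                     # binary Kullback-Leibler divergence, Eq. (12)
    a = 0.0 if x == 0 else x*math.log(x/q)
    b = 0.0 if x == 1 else (1 - x)*math.log((1 - x)/(1 - q))
    return a + b

print(all(K2(i/20, j/20) >= (i/20 - j/20)**2
          for i in range(21) for j in range(1, 20)))   # -> True
\end{verbatim}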
\subsection{The $r$-bin classical case}
\label{sec3B}
Let us consider the average probability for the general $r$-bin classical case. We can employ the factorization into $r-1$ binary probabilities given by Eqs. (\ref{E3})-(\ref{E4}). First of all, let us show the equivalence of Eq. (\ref{E6}) to the following set of conditions (see Eq. (\ref{E5})):
\begin{equation}
|n_l - N_l \bar{q}_l| \le \bar{A} N^{\frac23-\epsilon}, \quad \bar{A}>0, \quad l = 1,\ldots, r-1.
\label{E17}\end{equation}
To this end it is enough to observe that
\begin{equation}
n_l-N_l \bar{q}_l = n_l - Nq_l + \bar{q}_l \sum_{i=1}^{l-1} (n_i - N q_i), \quad l = 1,\ldots,r-1.
\label{E18}\end{equation}
Indeed, the relations in Eq. (\ref{E18}) are invertible, whereas $n_r - Nq_r = - \sum_{l=1}^{r-1}(n_l - Nq_l)$. To prove the classical $r$-bin case in theorems 1 and 2 one can proceed as follows. If Eq. (\ref{E6}) is satisfied for all $i$, then so is Eq. (\ref{E17}). Using Eq. (\ref{E13}) for the binary case in Eq. (\ref{E3}) we have
\begin{equation}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! P^{(D)}(\mathbf{n}|\mathbf{K}) = \prod_{l=1}^{r-1} P^{(D)}_{N_l,M_l}(n_l|K_l) =\prod_{l=1}^{r-1} \frac{\exp\left\{ -N_lK_2(\bar{x}_l|\bar{q}_l) \right\}}{\sqrt{2\pi N_l\bar{q}_l(1-\bar{q}_l)}}\left[1+\mathcal{O}\left(\frac{1}{N^{\frac13+\epsilon}}\right)\right]
\label{E19}\end{equation}
From Eq. (\ref{E5}) we get
\begin{equation}
N_l\bar{q}_l(1-\bar{q}_l) = Nq_l \frac{1-\sum_{i=1}^lq_i}{1-\sum_{i=1}^{l-1}q_i}\left(1 - \frac{\sum_{i=1}^l n_i/N - q_i}{1-\sum_{i=1}^{l-1}q_i} \right),
\label{E20}\end{equation}
therefore the denominator in Eq. (\ref{E19}) becomes equal to that in Eq. (\ref{E7}) to the necessary error, i.e.,
\begin{equation}
\prod_{l=1}^{r-1} N_l\bar{q}_l(1-\bar{q}_l) = N^{r-1}\prod_{i=1}^r q_i \left[1 + \mathcal{O}\left(\frac{1}{N^{\frac13+\epsilon}}\right) \right]
\label{E21}\end{equation}
(since $1/3 +\epsilon > 3\epsilon$ for $\epsilon < 1/6$, the error conforms with that in Eq. (\ref{E7})). In turn, the term in the exponent in Eq. (\ref{E19}) can be reshaped using the following identity for the Kullback-Leibler divergence (see \ref{appB})
\begin{equation}
\sum_{l=1}^{r-1} N_l K_2(\bar{x}_l|\bar{q}_l) = N\sum_{i=1}^r x_i \ln \left(\frac{ x_i}{q_i}\right)\equiv N K_r(\mathbf{x}|\mathbf{q}).
\label{E22}\end{equation}
By expanding both sides of Eq. (\ref{E22}) into powers of $x_i-q_i$ and comparing the terms to the leading order in $N$ we obtain the following asymptotic identity
\begin{equation}
\sum_{l=1}^{r-1} N_l \frac{(\bar{x}_l-\bar{q}_l)^2}{2\bar{q}_l(1-\bar{q}_l)} = N \sum_{l=1}^{r-1} \frac{(x_l-q_l)^2}{2q_l} + \mathcal{O}\left(\frac{1}{N^{3\epsilon}}\right).
\label{E23}\end{equation}
Substituting Eqs. (\ref{E21}) and (\ref{E23}) into Eq. (\ref{E19}) we obtain Eq. (\ref{E7}) for the $r$-bin classical case.
To show (\ref{E8}) for the $r$-bin classical case, let us select the factorization (\ref{E3}) such that $i=1$ labels the first violation of Eq.~(\ref{E6}). Then the probability $P^{(D)}_{N,M}(n_1|K_1)$ appearing in Eq.~(\ref{E3}) satisfies Eq.~(\ref{E8}), as proven above in the binary case. This observation results in Eq. (\ref{E8}) for the $r$-bin classical case and concludes the proof of the theorems in the classical case.
One important note. In the course of the proof of the theorems for the $r$-bin case we have also shown that the asymptotic Gaussian for the $r$-bin case is a product of the asymptotic Gaussians for the $r-1$ binary cases to the same accuracy as in Eq. (\ref{E7}), i.e., due to the equivalence of Eqs. (\ref{E6}) and (\ref{E17}) the general case follows from the binary case.
\subsection{The quantum case}
\label{sec3C}
Let us consider the quantum factor $Q^{(\pm)} (\mathbf{n}|\mathbf{K})$ (recall that $+$ is for bosons and $-$ is for fermions) introduced in the second line in Eq. (\ref{E2}), which accounts for the correlations between the indistinguishable particles due to their quantum statistics. By the following asymptotic identity \cite{GL}
\begin{equation}
\prod_{l=0}^n \left[1\pm \frac{l}{m}\right] = \left( 1\pm \frac{n}{m}\right)^{n\pm m +1/2} e^{-n}\left[1+\mathcal{O}\left(\frac{n}{m(m\pm n)}\right)\right]
\label{E24}\end{equation}
(see also \ref{appC}) when $N,M\to \infty$ we get for $n_i$ satisfying Eq. (\ref{E6}) (i.e., $n_i = N\left[q_i + \mathcal{O}(N^{-\frac13-\epsilon})\right]\to \infty$)
\begin{eqnarray}
\fl && Q^{(\pm)}(\mathbf{n}|\mathbf{K}) \equiv \frac{\prod_{i=1}^r\left(\prod_{l=0}^{n_i-1}\left[1\pm l/K_i\right]\right)}{\prod_{l=0}^{N-1}\left[1\pm l/M\right]} = \frac{\prod_{i=1}^r\left(\prod_{l=0}^{n_i}\left[1\pm l/K_i\right]\right)}{\prod_{l=0}^{N}\left[1\pm l/M\right]} \frac{1\pm \frac{N}{M}}{\prod_{i=1}^{r}\left[1\pm n_i/K_i\right]} \nonumber\\
\fl && = \frac{\prod_{i=1}^r\left(1\pm n_i/K_i\right)^{n_i\pm K_i -1/2}}{\left(1\pm N/M\right)^{N\pm M -1/2}} \left[1 + \mathcal{O}\left( \sum_{i=1}^r\frac{n_i}{K_i(K_i\pm n_i)}+\frac{N}{M(M\pm N)}\right)\right]
\label{E25}\end{eqnarray}
(with the upper signs for bosons and the lower ones for fermions). Note that in the case of fermions $n_i \le K_i$ (for $n_i>K_i$ the quantum factor is equal to zero). Now, let us clarify the order of the error in Eq. (\ref{E25}). From Eq. (\ref{E6}) we get
\[
K_i \pm n_i = (M\pm N)q_i \mp[ Nq_i - n_i ] \ge \left|\frac{1\pm \alpha}{\alpha} q_iN - A N^{2/3-\epsilon}\right|.
\]
Thus we can estimate
\[
\frac{n_i}{K_i(K_i\pm n_i)} = \mathcal{O}\left( \frac{\alpha^2}{(1\pm \alpha)N}\right).
\]
Taking this into account, let us rewrite Eq. (\ref{E25}) as follows
\begin{equation}
Q^{(\pm)}(\mathbf{n}|\mathbf{K}) = \frac{\exp\{(N\pm M)K_r(\mathbf{X^{(\pm)}}|\mathbf{q}) \}}{(1\pm \alpha)^{\frac{r-1}{2}} \prod_{i=1}^r \sqrt{\frac{X^{(\pm)}_i}{q_i}}} \left[1+ \mathcal{O}\left(\frac{\alpha^2}{(1\pm \alpha)N}\right)\right],
\label{E26}\end{equation}
where $K_r$ is defined in Eq. (\ref{E22}) and we have introduced new variables $X^{(\pm)}_i$ (analogs of $x_i$ of Eq. (\ref{E5}) in the quantum case)
\begin{equation}
\fl \quad X^{(\pm)}_i \equiv \frac{K_i \pm n_i}{ M \pm N} = \frac{q_i \pm \alpha x_i}{1\pm \alpha}, \quad 0\le X^{(\pm)}_i \le 1,\quad X^{(\pm)}_i - q_i = \frac{\pm\alpha}{1\pm \alpha}(x_i - q_i).
\label{E27}\end{equation}
Now, if Eq. (\ref{E6}) is satisfied, we can separate the leading order in the quantum factor by expanding the Kullback-Leibler divergence (similarly to Eq. (\ref{E23})), whereas in the denominator in Eq. (\ref{E26}) we have \mbox{$X^{(\pm)}_i/q_i = 1 +\mathcal{O}\left(\frac{\alpha N^{-1/3-\epsilon}}{1\pm\alpha}\right)$}. By selecting the leading order error for $0<\epsilon <1/6$ we get (recall that $\sigma = +$ for bosons and $\sigma = -$ for fermions):
\begin{equation}
\fl Q^{(\sigma)}(\mathbf{n}|\mathbf{K}) = \frac{\exp\left\{N \frac{\sigma \alpha}{1+\sigma \alpha}\sum_{i=1}^r\frac{(x_i-q_i)^2}{2q_i}\right\}}{(1+\sigma \alpha)^{\frac{r-1}{2}} } \left[1+ \mathcal{O}\left(\frac{\alpha\delta_{\sigma,+}}{N}+\frac{\alpha^3}{(1+\sigma \alpha)^3N^{3\epsilon}}\right)\right].
\label{E29}\end{equation}
For bosons the first term in the error on the right hand side of Eq. (\ref{E29}) can dominate the second in the high-density case $\alpha\to \infty$, thus we have to keep it. To obtain Eq. (\ref{E7}) in the quantum case one can just multiply the result of Eq. (\ref{E7}) for the classical probability, proven in section \ref{sec3B}, by the quantum factor in
Eq.~(\ref{E29}) and select the leading order error terms (observing the possibility that $\alpha\to\infty$ for bosons and $\alpha\to 1$ for fermions). This proves theorem 1 in the quantum case.
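As a quick numerical sanity check of the asymptotic identity (\ref{E24}) underlying this derivation, the two sides can be compared directly (a sketch; the values of $n$ and $m$ are arbitrary):
\begin{verbatim}
import math

def lhs(n, m, s):                 # product side of Eq. (24)
    return math.prod(1 + s*l/m for l in range(n + 1))

def rhs(n, m, s):                 # asymptotic side of Eq. (24)
    return (1 + s*n/m)**(n + s*m + 0.5) * math.exp(-n)

n, m = 40, 200
for s in (+1, -1):                # bosons / fermions
    print(s, lhs(n, m, s) / rhs(n, m, s))   # -> 1 + O(n/(m(m+-n)))
\end{verbatim}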
One important observation is in order. The $r$-bin quantum factor $Q^{(B,F)}$ of Eq. (\ref{E25}) is simply a product of the $r-1$ binary quantum factors $Q^{(B,F)}_{N_l,M_l}$ defined similarly to Eq. (\ref{E25}), but with $M_l$ and $N_l$ as in the factorization formula (\ref{E3}) and $\bar{X}^{(\pm)}_l$ defined as in Eq. (\ref{E5}). This fact simply follows from the factorization formula (\ref{E3}), valid in the classical and quantum cases. The same factorization is valid also for the leading order of the respective quantum factors, up to the error term in Eq. (\ref{E29}). In fact, one can proceed to prove Eq. (\ref{E7}) in the general $r$-bin case using the binary case, similarly to what was done in section \ref{sec3B}. Indeed, there is the following identity for the Kullback-Leibler divergence (an analog of Eq. (\ref{E22}))
\begin{equation}
\fl \sum_{l=1}^{r-1} (N_l \pm M_l) K_2(\bar{X}^{(\pm)}_l|\bar{q}_l) = (N\pm M) \sum_{l=1}^r X^{(\pm)}_l \ln\left(\frac{ X^{(\pm)}_i}{q_i}\right)=(N\pm M)K_r(\mathbf{X}^{(\pm)}|\mathbf{q}),
\label{E30}\end{equation}
which is proved via the same steps as the respective identity (\ref{E22}) in the classical case (\ref{appB}).
Let us now prove theorem 2 in the quantum case. The quantum result in Eq. (\ref{E8}) can be shown by reduction to the binary case, similarly to section \ref{sec3B}, where $i=1$ is the first index of violation of Eq. (\ref{E6}). Consider the respective quantum probability $P^{(B,F)}_{N,M}(n_1|K_1)=P^{(D)}_{N,M}(n_1|K_1) Q^{(B,F)}_{N,M}(n_1|K_1) $ which enters the factorization (\ref{E3}) in the quantum case (below we drop the subscript $1$ for simplicity).
Let us first focus on the case of bosons. From Eq. (\ref{C10}) of \ref{appC} for $M\gg 1$ (see Eqs. (\ref{E12}) and (\ref{E27})) we get
\begin{equation}
\fl \qquad Q^{(B)}_{N,M}(n|K) < (1+\alpha)\exp\left\{ N\left(1+\frac{1}{\alpha}\right)K_2(X^{(+)}|q)\right\}\left[ 1+\mathcal{O}\left(\frac{1}{M}\right)\right].
\label{E31}\end{equation}
The quantum probability $P^{(B)}_{N,M}(n|K) = P^{(D)}_{N,M}(n|K)Q^{(B)}_{N,M}(n|K)$ involves a combination of two Kullback-Leibler divergences, see Eqs. (\ref{P2D}) and (\ref{E31}),
\begin{equation}
\fl \qquad NK_2(x|q) - N\left(1+\frac{1}{\alpha} \right)K_2(X^{(+)}|q) = N\left[K_2(x|X^{(+)}) +\frac{1}{\alpha} K_2(q|X^{(+)})\right].
\label{E32}\end{equation}
By using Pinsker's inequality (\ref{E15}) and Eq. (\ref{E27}) we obtain
\begin{equation}
\fl \qquad K_2(x|X^{(+)}) +\frac{1}{\alpha} K_2(q|X^{(+)}) > \left( X^{(+)}-x\right)^2 + \frac{1}{\alpha}\left(X^{(+)}- q\right)^2 = \frac{(x-q)^2}{1+\alpha}.
\label{E33}\end{equation}
Finally, taking into account that by our assumption $|x-q| > A N^{-1/3-\epsilon}\,$ and that $\alpha $ is fixed in theorem 2, from Eqs. (\ref{P2D}), (\ref{E31})-(\ref{E33}) we get the required estimate
\begin{equation}
P^{(B)}_{N,M}(n|K) < 2\pi \sqrt{N}(1+\alpha)\exp\left\{-\frac{{A}^2}{1+\alpha}N^{\frac13 -2\epsilon} \right\}\left[ 1+\mathcal{O}\left(\frac{1}{N}\right)\right].
\label{E34}\end{equation}
Now let us turn to the case of fermions. First of all, we have to consider the maximal possible count number $n=K$ (or $N-n = M-K$, which amounts to renaming the variables, but not both at once, since $N<M$). In this case, under the condition that $\alpha$ is fixed, the average quantum counting probability reads (see Eq. (\ref{E2}))
\begin{eqnarray}
\fl P^{(F)}(n|K) &=& \frac{N!}{(M-N+1)\ldots M} \frac{(M-K-[N-K]+1)\ldots (M-K)}{(N-K)!} \nonumber\\
\fl &=&\left( \frac{N}{M}\right)^K \prod_{l=1}^{K-1} \frac{1-l/N}{1-l/M} < \left( \frac{N}{M}\right)^K = \exp\left\{-\frac{q}{\alpha}\,N \ln\left(\frac{1}{\alpha}\right)\right\}
\label{E35}\end{eqnarray}
i.e., falls faster with $N$ than the estimate in Eq. (\ref{E8}). Consider now the opposite case $n\le K-1$ and $N-n\le M-K-1$. We have in this case $X^{(-)}, 1-X^{(-)} \ge 1/(M-N) = \frac{\alpha}{(1-\alpha)N}$. Therefore, from Eq. (\ref{C10}) of \ref{appC} we get
\begin{eqnarray}
\fl Q^{(F)}_{N,M}(n|K)& <& \frac{q(1-q)\exp\{-N(1/\alpha-1)K_2(X^{(-)}|q)\}}{(1-\alpha)^2X^{(-)}(1-X^{(-)})}\left[ 1+\mathcal{O}\left(\frac{1}{M} \right)\right]\nonumber\\
\fl &=& \mathcal{O}\left(N^2 \exp\left\{-\frac{{A}^2}{1-\alpha} N^{\frac13-2\epsilon}\right\}\right),
\label{E36}\end{eqnarray}
where we have expanded the Kullback-Leibler divergence as in Eq. (\ref{E14}), used Eq. (\ref{E27}) and Pinsker's inequality (\ref{E15}) together with the assumption $|x-q| > {A} N^{-1/3-\epsilon}\,$ for $\epsilon <1/6$. Recalling the respective classical bound (\ref{E16}) we get Eq. (\ref{E8}) for fermions. This concludes the proof of theorem 2 in the quantum case.
Finally, as in the classical case, in the quantum case the asymptotic Gaussian for the $r$-bin partition in Eq. (\ref{E7}) is a product of the asymptotic Gaussians for the binary partitions, which appear in Eq. (\ref{E3}), to the accuracy of the approximation in Eq. (\ref{E7}), due to the analogous identity Eq. (\ref{E30}). This fact relates the statements of theorems 1 and 2 to those of the binary case via the equivalence of Eqs. (\ref{E6}) and (\ref{E17}).
\section{Arbitrary (mixed) input state}
\label{sec4}
In section \ref{sec2} in the formulation of theorems 1 and 2 we have assumed a Fock input state $|{\bf n},in\rangle = |n_1,\ldots,n_M;in\rangle $ of $N$ indistinguishable identical particles (where for bosons $n_k$ is arbitrary, whereas for fermions $n_k\le 1$). However, it is easy to see that the theorems generalize to an arbitrary input state
\begin{equation}
\rho = \sum_{{\bf n},{\bf m}} \rho_{{\bf n},{\bf m}} |{\bf n},in\rangle \langle {\bf m},in|,
\label{AE1}\end{equation}
where the summation is over $|{\bf n}| = |{\bf m}| = N$ (with $|{\bf n}| \equiv n_1+\ldots+n_M$). Let us consider bosons first. Using the expansion of the input Fock state $|{\bf n},in\rangle $ over the output states $|{\mathbf s},out\rangle$ \cite{AA}
\begin{equation}
|{\bf n},in\rangle = \sum_{{\mathbf s}}\frac{1}{\sqrt{{\bf n}!{\mathbf s}!}}\mathrm{per}(U[{\bf n}|{\mathbf s}])|{\mathbf s},out\rangle,
\label{AE2}\end{equation}
where the summation is over all $|{\mathbf s}|=N$, ${\bf n}! \equiv n_1!\ldots n_M!$, and per($\ldots$) denotes the matrix permanent \cite{Minc}, in our case of the submatrix of the $M$-port matrix $U$ built on the rows and columns corresponding to the occupations ${\bf n}$ and ${\mathbf s}$, respectively. Given the input state in Eq. (\ref{AE1}), the average probability to detect an output configuration ${\mathbf l}$, corresponding to occupations ${\mathbf s}$, reads
\begin{equation}
p^{(B)}({\mathbf l}|\rho) = \frac{1}{{\mathbf s}!} \sum_{{\bf n},{\bf m}} \frac{\rho_{{\bf n},{\bf m}}}{\sqrt{{\bf n}!{\bf m}!}}\langle \mathrm{per}(U[{\bf n}|{\mathbf s}]) \left(\mathrm{per}(U[{\bf m}|{\mathbf s}])\right)^*\rangle.
\label{AE3}\end{equation}
Let us evaluate the average by expanding the matrix permanents
\begin{eqnarray}
\label{AE4}
& & \langle \mathrm{per}(U[{\bf n}|{\mathbf s}]) \left(\mathrm{per}(U[{\bf m}|{\mathbf s}])\right)^*\rangle = \langle\sum_{\sigma_{1,2}\in \mathcal{S}_N} \prod_{i=1}^N U_{k^{}_{\sigma_1(i)},l^{}_i}U^*_{k^\prime_{\sigma_2(i)},l^{}_i} \rangle\nonumber\\
& & = \sum_{\sigma_{1,2}\in \mathcal{S}_N} \sum_{\nu,\tau\in \mathcal{S}_N}\mathcal{W}(\tau\nu)\prod_{i=1}^N\delta_{k^\prime_{\sigma_2(i)},k^{}_{\sigma_1\nu(i)}}\delta_{l_i,l_{\tau(i)}}\nonumber\\
& =& \delta_{{\bf n},{\bf m}} \sum_{\sigma_{1,2}\in \mathcal{S}_N} \sum_{\nu,\tau\in \mathcal{S}_N} \sum_{\chi\in\mathcal{S}_{\bf n}} \sum_{\mu\in \mathcal{S}_{\mathbf s}} \mathcal{W}(\tau\nu)\delta_{\sigma_1\nu\sigma^{-1}_2,\chi} \delta_{\tau,\mu}\nonumber\\
& & = \delta_{{\bf n},{\bf m}} \sum_{\sigma_{1,2}\in \mathcal{S}_N} \sum_{\mu\in \mathcal{S}_{\mathbf s}}\sum_{\chi\in\mathcal{S}_{\bf n}} \mathcal{W}(\mu \sigma^{-1}_1\chi \sigma_2) = \delta_{{\bf n},{\bf m}} {\mathbf s}!{\bf n}! N! \sum_{\sigma\in \mathcal{S}_N} \mathcal{W}(\sigma)\nonumber\\
&& = \frac{\delta_{{\bf n},{\bf m}} {\mathbf s}!{\bf n}!N!}{(M+N-1)\ldots M},
\end{eqnarray}
where ${\bf k}=(k_1,\ldots,k_N)$ and ${\bf k}^\prime=(k^\prime_1,\ldots,k^\prime_N)$ are the input ports corresponding to the occupations ${\bf n}$ and ${\bf m}$, respectively, ${\mathbf l} = (l_1,\ldots,l_N)$ are the output ports corresponding to the occupations ${\mathbf s}$, $\mathcal{S}_N$ is the group of permutations of $N$ elements (the symmetric group), whereas $\mathcal{S}_{\bf n} \equiv \mathcal{S}_{n_1}\otimes \ldots \otimes \mathcal{S}_{n_M}$, $\mathcal{W}$ is the Weingarten function of the unitary group \cite{W1,W2}, and $\delta_{{\bf n},{\bf m}} \equiv \prod_{i=1}^M \delta_{n_i,m_i}$. We have used the known expression for the last sum on the right hand side of Eq. (\ref{AE4}) (derived in the Supplemental material to Ref. \cite{BB}).
Eq. (\ref{AE4}) tells us that the non-diagonal elements of the mixed state in Eq. (\ref{AE1}) do not contribute to the average probability, if the averaging is performed over the Haar-random unitary matrix $U$ (the ratio on the right hand side of Eq. (\ref{AE4}) is the average probability $p^{(B)}({\mathbf l}|{\bf k})$, see section \ref{sec2}). Since theorems 1 and 2 hold for any Fock input state $|{\bf n},in\rangle$, we conclude that they hold for the general input of Eq. (\ref{AE1}).
For fermions, an analog of Eqs. (\ref{AE3}) and (\ref{AE4}) (in this case $m_i,n_i,s_i\le 1$) is obtained by replacing the permanent by the determinant, which results in the appearance of the sign functions $\mathrm{sgn}(\sigma_{1,2})$ and $\mathrm{sgn}(\sigma)$ ($\sigma = \sigma_1\sigma_2$) in Eq. (\ref{AE4}) where there are $\sigma_{1,2}$ and $\sigma$. In this case the last summation in Eq. (\ref{AE4}) reads $N!\sum_{\sigma\in \mathcal{S}_N} \mathrm{sgn}(\sigma)\mathcal{W}(\sigma) = \frac{N!}{M \ldots (M-N +1)}$ (see the Supplemental material to Ref. \cite{BB}), i.e., we get the average probability $p^{(F)}({\mathbf l}|{\bf k})$. The same conclusion holds.
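The averages in Eq. (\ref{AE4}) are also easy to probe numerically for small systems. The following Python sketch (with illustrative small values of $M$, $N$ and a fixed input/output configuration) samples Haar-random unitaries via the QR decomposition of a complex Ginibre matrix and compares the Monte Carlo average of $|\mathrm{per}(U[{\bf n}|{\mathbf s}])|^2/({\bf n}!{\mathbf s}!)$ with the predicted average probability $N!/[(M+N-1)\cdots M] = 1/\binom{M+N-1}{N}$:
\begin{verbatim}
import numpy as np
from itertools import permutations
from math import comb

def haar_unitary(M, rng):
    # QR of a complex Ginibre matrix, with the standard phase fix
    z = (rng.standard_normal((M, M))
         + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def permanent(a):
    # brute force over permutations; fine for the tiny N used here
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

M, N = 4, 2
k = [0, 1]   # input ports, occupations n = (1,1,0,0), so n! = 1
l = [2, 2]   # output ports, occupations s = (0,0,2,0), so s! = 2
rng = np.random.default_rng(0)
est = np.mean([abs(permanent(haar_unitary(M, rng)[np.ix_(k, l)]))**2 / 2.0
               for _ in range(20000)])
print(est, 1 / comb(M + N - 1, N))   # both close to 0.1
\end{verbatim}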
\section{Conclusion}
\label{sec5}
We have given a rigorous formulation of the results on the asymptotic form of the average counting probability of identical particles in the binned-together output ports of the Haar-random multiports, presented recently in Ref. \cite{GL} with only a heuristic derivation and some numerical evidence. The key observation was that, both in the classical and quantum cases, there is a convenient factorization of the average probability for the $r$-bin case into $r-1$ average counting probabilities for the two-bin case. Moreover, the results of Ref. \cite{GL} were extended to an arbitrary mixed input state of $N$ indistinguishable particles.
In the classical case, we have shown that the de Moivre-Laplace theorem, which provides an asymptotic form of the binary average counting probability, actually applies also to the $r$-bin case via the above factorization. The asymptotic Gaussian form also satisfies the mentioned factorization to an error of the same order as in the de Moivre-Laplace theorem. Finally, though we have considered a physical model involving a random unitary multiport, where the probabilities of $r$ events are rationals (each probability equal to a fraction of the respective number of ports), the results apply for a general multinomial distribution with arbitrary such probabilities (since the factorization is derived for the general probabilities).
Our primary interest, however, was the quantum case, when there are correlations between the identical particles due to their quantum statistics. We have formulated and proven a quantum analog of the de Moivre-Laplace theorem for the indistinguishable identical bosons and fermions (and generalized it to the $r$-bin case), where again the binary case applies to the $r$-bin case by the above mentioned factorization (and, similarly to the classical case, the asymptotic Gaussian also satisfies the same factorization to the order of the approximation error). Therefore, besides giving a rigorous formulation of the recently discovered quantum asymptotic Gaussian law, we have also provided an illuminating insight on how the general $r$-bin case reduces to the binary case.
Our results have immediate applications for the counting probability (in the binned-together output modes) of identical particles propagating in disordered media and chaotic cavities, and also for the scattershot version of Boson Sampling.
\section{Acknowledgements}
The research was supported by the National Council for Scientific and Technological Development (CNPq) of Brazil, grant 304129/2015-1, and by the S{\~a}o Paulo Research Foundation (FAPESP), grant 2015/23296-8.
We develop an application, called HoloFEM, for solving Poisson's equation with the finite element method (FEM) using Microsoft's mixed reality glasses HoloLens \cite{hololens}.
The aim is to set up and solve a partial differential equation (PDE) in the real world geometry surrounding the HoloLens user, and then visualise the computed solution on top of the real surroundings in mixed reality.
Partial differential equations are used to model many physical processes, such as fluid flow, heat transport and electromagnetic fields to name a few.
We consider here the Poisson equation, which serves as a prototypical example of a PDE. This equation models, for instance, steady-state diffusion and electrostatic potentials.
The Poisson problem is mathematically formulated as: Find the solution $u: \Omega \to \mathbb{R}$ such that
\begin{equation}
\left\{
\begin{split}
- \Delta u & = f && \mbox{in } \Omega, \\
u & = g && \mbox{on } \partial\Omega,
\end{split}
\right. \label{eqpoisson}
\end{equation}
where $\Omega \subset \mathbb{R}^3$ is the solution domain, $\Delta$ is the Laplace operator, $f$ is a given source, $g$ is a given boundary value of the solution, and $\partial\Omega$ is the boundary of $\Omega$.
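To make the discretisation concrete, the standard weak form of \eqref{eqpoisson} reads: find $u$ with $u = g$ on $\partial\Omega$ such that $\int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x = \int_\Omega f v \,\mathrm{d}x$ for all test functions $v$ vanishing on $\partial\Omega$. The following sketch solves this problem with legacy FEniCS/DOLFIN; it is not the HoloFEM code itself, and the unit cube, source and boundary value are placeholders for the scanned room geometry and the user-defined problem data:
\begin{verbatim}
from dolfin import *  # legacy FEniCS (DOLFIN)

# Unit cube as a stand-in for the room geometry from the spatial scan
mesh = UnitCubeMesh(16, 16, 16)
V = FunctionSpace(mesh, "Lagrange", 1)   # piecewise-linear elements

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)   # placeholder source
g = Constant(0.0)   # placeholder boundary value

bc = DirichletBC(V, g, "on_boundary")
a = inner(grad(u), grad(v)) * dx   # bilinear form
L = f * v * dx                     # linear form

uh = Function(V)
solve(a == L, uh, bc)
\end{verbatim}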
Since various PDEs often show up in modelling and engineering problems, there could be a potential use for mixed reality PDE-solving software that allows a user to define a problem, compute a solution, and study it on the spot. Say for example that we would like to know how a dangerous substance spreads in a room after a leak has sprung. The heat equation could be used as a simplistic model for describing this situation, potentially making applications like HoloFEM useful in building planning and safety engineering.
Augmented, mixed, and virtual reality are already used in engineering, architecture, and design, see for example \cite{Bendinger:2004aa, Heuveline:2011aa}, but to our knowledge there is currently no other mixed reality PDE-solving software.
\section{Technical description}
The workflow of the application HoloFEM has three main stages.
The first one is the meshing stage, where a computational mesh is generated from the environment.
This is followed by the simulation stage, where the mathematical problem is formulated and solved.
Finally, the solution is visualised in the last stage.
\subsection{Meshing}
The Microsoft HoloLens can scan the user's surroundings and extract a discrete representation of the geometry, in the form of a surface mesh.
This mesh is not adequate for numerical computations -- firstly, it is a surface mesh, while we need a volume mesh to solve \eqref{eqpoisson}, and secondly, the mesh quality is rather poor.
Instead, the main planes are extracted from the surface mesh. These will represent the walls, the floor and the ceiling of the room in which the user is located.
From this geometric representation, the computational volume mesh is constructed.
The procedure so far is summarised in Figure \ref{figgp}. The steps in the mesh generation are shown in Figure \ref{figmp}.
\begin{figure}[h]
\includegraphics[scale=0.048]{figures/geometrypipeline}
\caption{The geometry pipeline shows the steps in going from a spatial scan of the surroundings to a volume mesh.}
\label{figgp}
\end{figure}
\begin{figure}[H]
\includegraphics[scale=0.05]{figures/meshingpipeline}
\caption{The meshing pipeline demonstrates the generation of a volume mesh from a space, i.e., the last arrow in the geometry pipeline.}
\label{figmp}
\end{figure}
\subsection{Simulation}
The user may configure the problem parameters by placing point sources in the surroundings and setting boundary conditions on the walls, floor, and ceiling of the room.
This is done in a similar fashion to how holograms are usually placed with the HoloLens.
When the user is satisfied with the problem specification, the problem is discretised with the finite element method.
The FEniCS form compiler \cite{ffc,logg2012automated} is used to generate code for the finite element assembly, see Figure \ref{figsp}. This means that this part of the program can easily be generalised to handle other PDEs.
The assembled sparse linear system can be solved with standard techniques, e.g., preconditioned Krylov subspace methods.
\begin{figure}[H]
\includegraphics[scale=0.05]{figures/simulationpipeline}
\caption{The simulation pipeline outlines how a mesh and PDE-problem are used to obtain a solution with FEM.}
\label{figsp}
\end{figure}
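As a self-contained illustration of this last step (independent of the FEniCS assembly above), the sketch below solves a small stand-in for an assembled system -- the one-dimensional Poisson stiffness matrix -- with the conjugate gradient method and a Jacobi preconditioner in SciPy:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 1D FEM stiffness matrix for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# as a stand-in for the assembled sparse system
n, h = 99, 1.0 / 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr") / h
b = np.full(n, h)   # load vector for f = 1

Dinv = 1.0 / A.diagonal()   # Jacobi (diagonal) preconditioner
M = LinearOperator((n, n), matvec=lambda x: Dinv * x)

u, info = cg(A, b, M=M)
print(info == 0, u.max())   # converged; max value close to 1/8
\end{verbatim}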
\subsection{Visualisation}
The visualisation of the numerical solution takes advantage of the mixed reality technology of the HoloLens.
Holograms are placed at nodal points of the computational mesh, or at the cell centers, and are superimposed on the real world background.
This simplistic approach suffices for the current prototype.
More sophisticated approaches could be developed, tailored to the engineering applications considered.
\section{Demo overview}
In its current state, the application HoloFEM works mainly with voice commands. A user, wearing the HoloLens, starts by saying ``Scan room''. This initiates the scanning phase in which the surroundings are scanned by looking around the room. During the scanning phase a surface mesh is produced that shows what surfaces have already been scanned, see the left part of Figure \ref{figs2m}. After the scanning phase is completed, the approximate geometric representation of the room (a prism with polygonal base) is automatically created. A tetrahedral mesh is then generated with the voice command ``Generate mesh''. In the right part of Figure \ref{figs2m}, parts of a space and a volume mesh are displayed.
\begin{figure}[H]
\includegraphics[scale=0.084]{figures/surface}
\includegraphics[scale=0.084]{figures/space_mesh}
\caption{Meshes. \emph{Left}: Surface mesh (yellow) of the surroundings. \emph{Right}: Space (blue) and generated volume mesh (green).}
\label{figs2m}
\end{figure}
When the mesh has been generated, the user may define additional problem data. The voice command for defining a source is ``Create source''. This places a source in front of the user. The voice command for defining boundary conditions is ``Set boundary value''. This sets the solution to be zero on the wall the user is looking at. In Figure \ref{figproblemdata}, visualised problem data are shown.
\begin{figure}[H]
\includegraphics[scale=0.084]{figures/data_source}
\includegraphics[scale=0.084]{figures/data_boundary}
\caption{Problem data represented by holograms. \emph{Left}: Source (fire ball) placed in room. \emph{Right}: Zero value boundary conditions (ice patches) on wall.}
\label{figproblemdata}
\end{figure}
Once the desired problem data have been defined it is time to solve the problem. The voice command for that is ``Solve problem''. After the problem has been solved, the solution is automatically visualised. The solution values at the nodal points of the volume mesh are represented by spheres. The size of a sphere is proportional to the solution value. The spheres are also coloured according to an RGB-scale, where blue represents low values and red high values. See Figure \ref{figsolution} for a visualised solution.
\begin{figure}[H]
\includegraphics[scale=0.084]{figures/solution_mesh}
\includegraphics[scale=0.084]{figures/solution_data}
\caption{Solution represented by spheres. \emph{Left}: Solution and tetrahedral mesh used in simulation. \emph{Right}: Solution together with problem data used in simulation.}
\label{figsolution}
\end{figure}
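A rough desktop analogue of this visualisation, with matplotlib spheres in place of holograms and randomly generated nodal points and values standing in for the computed solution, is sketched below:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Spheres at (illustrative) nodal points: size proportional to the
# solution value, colour from blue (low) to red (high)
rng = np.random.default_rng(1)
pts = rng.random((200, 3))
u = np.exp(-10 * np.sum((pts - 0.5)**2, axis=1))  # illustrative values

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=200 * u, c=u, cmap="jet")
plt.show()
\end{verbatim}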
\section{Conclusions}
We have developed an application for solving and visualising a PDE model with the Microsoft HoloLens.
A user wearing the HoloLens can scan the surroundings, define a mathematical model and see the numerical solution superimposed on the real world,
all within a matter of seconds. This has potential applications in building planning and safety engineering.
Future development includes extension to other PDE models and more sophisticated visualisation.
\bibliographystyle{ACM-Reference-Format}
\section{Free-Agent Model}
\label{free-agent-discounted-cost-models}
The impossibility of constant factor delegation for the standard model, as discussed in Proposition \ref{prop:impos1}, motivates us to design efficient delegation strategies for variants of this model as defined in Section \ref{model-variants}. We observe that the impossibility is aided by the fact that the agent's expected utility for each element is very close to the probing cost, so the principal cannot restrict the acceptable outcomes of any element without destroying the agent's incentive to probe it. An initial attempt to circumvent this failure might design a model where the principal can take on a larger proportion of the probing cost so that they can more freely restrict the agent's behavior. However, the principal's expected utility for each element is similarly close to their probing cost, so they cannot take on a large enough share of the cost without their own expected utility becoming negative.
As a new approach to achieving constant delegation gaps, we will now consider delegation in the free-agent model. Recall that this model removes the agent's probing costs but requires that they always break ties in favor of the principal. This model can be applied in settings where it is standard for the principal to incur the total probing cost. As a simple example, an organization (modeled by the principal) might pay the full travel and lodging expenses associated with interviewing candidates for an available position. The interviewer (agent) can then freely choose to interview (probe) candidates and make recommendations of their own choosing.
We will start by showing that there are constant discounted-cost approximations for this model for any constant discount factor $\delta$ and certain downward-closed constraints.
\subsection{Efficient Delegation for the Free-Agent Model with Discounts}
\label{combined-free-agent-discounted-cost-model}
In Proposition \ref{prop:unif_matroids_freagent}, we propose a $(\delta, \delta')$-factor strategy for $k$-uniform matroid constraints for any $0 \leq \delta \leq 1/2$ and $\delta' \geq \delta$. We show that it is possible to design $\delta$-factor agent-agnostic delegation for the free-agent model with a constant discount factor $\delta' \geq \delta$ on costs for $k$-uniform matroid constraints. Recall that $Z_i^{\min} = \min\{X_i,\tau_i\}$, where $\tau_i$ is the solution to $\mathbb E[(X_i-\tau_i)_+]=c_i$.
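For a discrete value distribution, $\tau_i$ is easy to compute numerically, since $\mathbb E[(X_i-\tau)_+]$ is continuous and nonincreasing in $\tau$. The following Python sketch (a hypothetical helper with illustrative parameters, chosen to match the two-point distributions used in our impossibility instances) solves $\mathbb E[(X_i-\tau_i)_+]=c_i$ by bisection:
\begin{verbatim}
import numpy as np

def cap_value(values, probs, cost, lo=0.0, hi=None, iters=60):
    # solve E[(X - tau)_+] = cost by bisection; the left-hand side
    # is continuous and nonincreasing in tau
    values = np.asarray(values, float)
    probs = np.asarray(probs, float)
    hi = values.max() if hi is None else hi
    g = lambda tau: np.sum(probs * np.maximum(values - tau, 0.0))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > cost else (lo, mid)
    return (lo + hi) / 2

# X = n w.p. 1/n and 0 otherwise, c = 1 - eps: gives tau = eps * n
n, eps = 100, 0.1
print(cap_value([n, 0.0], [1 / n, 1 - 1 / n], 1 - eps))  # ~ 10.0
\end{verbatim}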
\begin{proposition} \label{prop:unif_matroids_freagent}
Let $I$ be an instance of the free-agent model with a $k$-uniform matroid constraint. Then there exists a $(\delta,\delta')$-factor delegation strategy for any $0 \leq \delta \leq 1/2$ and $\delta'\geq \delta$.
\end{proposition}
\begin{proof}
For $0\leq \delta < 1/2$, it is sufficient to prove the theorem for $\delta = \delta'$ as $(\delta,\delta)$-factor delegation is also a $(\delta,\delta')$ delegation for any $\delta'\geq \delta$. Consider the delegation strategy in which the principal sets a threshold $T$ such that $\Pr[|\{i : Z_i^{\min} \geq T\}| \geq k] = \delta$ and restricts the agent to the set of elements $S = \{i: \tau_i \geq T\}$. Among the elements in $S$, they will accept any combination of outcomes of utility at least $T$ (subject to the $k$-uniform matroid constraint):
\begin{equation*}
\mathcal R = \{ \{ (i, x_i, y_i) : i \in S_k \} : S_k \subseteq S \text{~and~} |S_k| \le k \text{~and all~} (x_i, y_i) \in \operatorname*{supp}(\mu_i) \text{~and all~} x_i \ge T \}
\end{equation*}
We will show that $\mathcal R$ achieves an $\delta$-factor of $\mathbb{E}[\texttt{OPT}]$ when the principal pays $1 - \delta$ factor of the total probing cost. Now, let's first bound $\mathbb{E}[\texttt{OPT}]$:
\begin{align*}
\mathbb{E}[\texttt{OPT}]
&= \mathbb{E} \left[ \max_{Q : |Q| \leq k} \sum_{i \in Q} Z^{\min}_{i} \right]\\
&\leq kT + \mathbb{E} \left[ \max_{Q : |Q| \leq k} \sum_{i \in Q} (Z^{\min}_{i}-T)_+\right] \\
&\leq kT + \sum_{i=1}^n \mathbb{E}[(Z^{\min}_i-T)_+]\\
&= kT + \sum_{i\in S} \mathbb{E}[(Z^{\min}_i-T)_+]
\end{align*}
The last equality holds because for all $i \notin S$, $\tau_i < T$ implies that $Z^{\min}_i < T$. Hence $(Z^{\min}_i - T)_+ = 0$ with probability $1$. Now, we claim that for all $i \in S$, we have $(Z^{\min}_i - T)_+ = (X_i - T)_+ - (X_i - \tau_i)_+$ with probability $1$. Recall that $\tau_i \geq T$ for all $i \in S$. So for $i \in S$, when $X_i \geq \tau_i \geq T$ we get $(X_i - T)_+ - (X_i - \tau_i)_+ = \tau_i - T = (Z^{\min}_i - T)_+$, and when $X_i < \tau_i$ we similarly get $(X_i - T)_+ - (X_i - \tau_i)_+ = (X_i - T)_+ = (Z^{\min}_i - T)_+$. Therefore, we can modify the upper bound on $\mathbb{E}[\texttt{OPT}]$ as follows:
\begin{align}
\mathbb{E}[\texttt{OPT}] &\leq kT + \sum_{i\in S}\left\{(X_i-T)_+ - (X_i-\tau_i)_+\right\} \notag \\
&\leq kT + \sum_{i\in S} \mathbb{E}[(X_i-T)_+] - c(S)
\end{align}
Now we will lower bound the principal's delegated utility under strategy $\mathcal R$. Since the agent does not pay any probing costs, they will (in the worst case) probe all elements in $S$ and propose a set of elements $E'$ with $X_i \geq T$ for each $i \in E'$ (if such elements exist) that maximizes their value $\sum_{i\in E'}Y_i$. Recall that we assume the agent will not probe any elements for which they have $0$ expected utility and do not benefit the principal, so the agent won't probe any elements outside of $S$.
Let $A$ be the set of elements with $Z^{\min}_i \geq T$. By definition of the threshold T,
\begin{align*}
\Pr [|A| \geq k]
&= \Pr[\exists A \subseteq [n], |A| \geq k, Z^{\min}_i \geq T \text{~for all~} i \in A] \\
&= \Pr[\exists A \subseteq S, |A| \geq k, Z^{\min}_i \geq T \text{~for all~} i \in A] \\
&= \Pr[\exists A \subseteq S, |A| \geq k, X_i \geq T \text{~for all~} i \in A] \\
&= \delta
\end{align*}
The above equality shows that there will be at least $k$ elements in $S$ with $X_i \geq T$ with probability $\delta$, so the principal will obtain value at least $kT$ plus some extra value with probability $\delta$. We assume the worst-case behavior from the agent: they probe all elements in $S$, and if $A$ is the set of elements $i$ for which $X_i \geq T$, then the agent proposes a maximal set of elements in $A$ with the minimum $x_i$ values.
Consider the following three events: $|A| > k$, $1 \leq |A| \leq k$, and $A = \emptyset$. Note that $\Pr[|A| > k ] + \Pr[|A| = k] =\delta$ and $ \Pr[|A| < k] = 1 - \delta$. Moreover, whenever $|A|\leq k$, the agent will select the entirety of $A$ and propose to the principal because they have no incentive to drop any element $i$ with $x_i\geq T$.
Now, we can lower-bound the principal's delegated expected utility for the worst-case agent with $1 - \delta$ discount factor as follows:
\begin{align}
&\mathbb{E}[\texttt{DEL}]\notag \\
&\geq \mathbb{E}[\texttt{DEL} ~|~ |A|>k] \cdot \Pr[|A|>k] + \mathbb{E}[\texttt{DEL} ~|~ 1 \le |A| \le k] \cdot \Pr[1 \le |A| \le k] + \mathbb{E}[\texttt{DEL}|A=\emptyset]\Pr[A=\emptyset] \notag \\
&\geq kT(\Pr[|A|>k]) + \mathbb{E}[\texttt{DEL} | 1 \le |A| \le k] \cdot \Pr[1 \le |A| \le k] - (1 - \delta) c(S) (\Pr[|A|>k]+\Pr[A=\emptyset]) \label{eq:high_value} \\
&\geq \begin{aligned}[t]
&k T(\Pr[|A|>k] + \Pr[|A| = k]) - (1 - \delta)c(S) \\
&+ \sum_{i\in S} \mathbb{E}[X_i - T | X_i \geq T \land |A| \leq k] \Pr[X_i \geq T] \cdot \Pr[|A \setminus i| \leq k - 1]
\end{aligned} \label{ineq:del} \\
&\geq \delta k T + \sum_{i \in S} \mathbb{E}[(X_i - T)_+] \cdot \Pr[|A \setminus i| \leq k - 1] - (1 - \delta) c(S) \notag \\
&\geq \delta k T + \sum_{i \in S} \mathbb{E}[(X_i - T)_+] \cdot \Pr[|A| \leq k] - (1 - \delta) c(S) \notag \\
&\geq \delta k T + (1 - \delta) \sum_{i \in S} \mathbb{E}[(X_i - T)_+] - (1 - \delta) c(S) \notag \\
&= \delta \left(k T + \sum_{i \in S} \mathbb{E}[(X_i - T)_+] - c(S) \right) + (1 - 2 \delta) \left( \sum_{i \in S} \mathbb{E}[(X_i - T)_+] - \sum_{i \in S} \mathbb{E}[(X_i -
\tau_i)_+] \right) \label{eq:separation_of_cost_to_optimal} \\
&\geq \delta \mathbb{E}[\texttt{OPT}] .\label{eq:result_1-uniform}
\end{align}
Inequality~\ref{eq:high_value} holds because the principal will obtain at least utility of $kT$ when $|A|\geq k$. Inequality \eqref{ineq:del} holds because when $1\leq |A|\leq k$, the agent will propose the entire set $A$. The inequality \eqref{eq:result_1-uniform} holds because $\delta \leq 1/2$ and $\tau_i \geq T$ for all $i \in S$ implies that $\mathbb{E}[(X_i-T)_+] - \mathbb{E}[(X_i-\tau_i)_+]\geq 0$. This concludes the proof.
\end{proof}
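The threshold $T$ in the proof can be estimated by Monte Carlo: the event $|\{i : Z_i^{\min} \geq T\}| \geq k$ is exactly the event that the $k$-th largest coordinate of $(Z_1^{\min},\ldots,Z_n^{\min})$ is at least $T$, so $T$ is the upper $\delta$-quantile of that order statistic. A short Python sketch with a hypothetical sampler and illustrative parameters:
\begin{verbatim}
import numpy as np

def threshold_T(sample_zmin, k, delta, trials=100000, seed=0):
    # Pr[|{i : Z_i^min >= T}| >= k] = Pr[k-th largest Z_i^min >= T],
    # so T is the upper delta-quantile of the k-th largest coordinate
    rng = np.random.default_rng(seed)
    kth = np.array([np.sort(sample_zmin(rng))[-k]
                    for _ in range(trials)])
    return np.quantile(kth, 1 - delta)

# illustrative instance: n i.i.d. elements with a common cap tau
n, k, delta, tau = 20, 3, 0.5, 2.0
draw = lambda rng: np.minimum(rng.exponential(1.0, size=n), tau)
print(threshold_T(draw, k, delta))
\end{verbatim}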
We now extend the constant delegation gap for the free-agent model with constant discounts to general downward-closed constraints. In Theorem \ref{thm:efficient_delegation_from_OCRS}, we show a reduction from the free-agent model with constant discounts to selectable greedy OCRS. We show that if there exists an $\alpha$-selectable greedy OCRS for the polytope $P_\mathcal I = \operatorname{conv} \{\mathrm{1}_S:S\in \mathcal I\}$, then the principal can construct an $(\alpha,1-\alpha)$-factor delegation strategy for the free-agent model with constraint $\mathcal I$. Theorem~\ref{thm:efficient_delegation_from_OCRS} further implies constant factor delegation for the free-agent model with constant discounts for general matroids, matchings, and knapsack constraints.
\begin{theorem}\label{thm:efficient_delegation_from_OCRS}
Given an instance of the free-agent model with constraint $\mathcal I$, if there exists an $\alpha$-selectable greedy OCRS for the polytope $P_\mathcal I = \operatorname{conv} \{\mathrm{1}_S:S\in \mathcal I\}$, then there exists an $(\alpha, \delta)$-factor strategy for the given instance where the discount factor $\delta \geq 1-\alpha$.
\end{theorem}
\begin{proof}
Given an instance of the delegated Pandora's box problem for the free-agent model with elements $E$ and constraint $\mathcal I$, let the random optimal set $I^*$ be defined as follows: $I^* = \argmax_{S \in \mathcal I} \sum_{i \in S} Z^{\min}_i$. We define $p_i^* = \Pr[i \in I^*]$ and thresholds $t_i$ such that $\Pr[Z^{\min}_i \geq t_i] = p_i^*$. Since $I^* \in \mathcal I$ with probability $1$, the vector $p^*$ is a convex combination of characteristic vectors of feasible sets in $\mathcal I$, and hence $p^* \in P_\mathcal I$. The principal rejects all elements not in $E' =\{i\in E: p_i^*>0\}$, so the agent has no incentive to probe them. Note that for all elements $i \in E'$, $0 < t_i < \tau_i$. We can bound the optimal utility as follows:
\begin{align*}
\mathbb{E}[\texttt{OPT}]
&\leq \mathbb{E} \left[ \max_{S\in \mathcal I} \sum_{i \in S} Z^{\min}_i \right] = \sum_{i \in E'} \mathbb{E}[Z^{\min}_i ~|~ i \in I^*] \Pr[i \in I^*] \\
&\le \sum_{i \in E'} \mathbb{E}[Z^{\min}_i ~|~ Z^{\min}_i \geq t_i] p^*_i\\
&= \sum_{i \in E'} \mathbb{E}[(Z_i^{\min} - t_i)_+] + \sum_{i \in E'} t_i p^*_i \\
&= \sum_{i \in E'} \mathbb{E}[(X_i - t_i)_+ - (X_i - \tau_i)_+] + \sum_{i \in E'} t_i p^*_i \\
&= \sum_{i \in E'} \mathbb{E}[(X_i - t_i)_+] + \sum_{i \in E'} t_i p^*_i - c(E')
\end{align*}
Let $\mathcal I_{p^*} \subseteq \mathcal I$ be the downward closed family generated by $\alpha$-selectable greedy OCRS for $p^* \in P_\mathcal I$. Now, consider a delegation strategy in which the principal accepts a proposal of elements $S$ if and only if $S \in \mathcal I_{p^*}$ and the realizations of all $i \in S$ is greater than or equal $t_i$, i.e.
\begin{equation*}
\mathcal R = \{ \{ (i, x_i, y_i) : i \in Q \} : Q \in \mathcal I_{p^*} \text{~and~} (x_i, y_i) \in \operatorname*{supp}(\mu_i) \text{~and~} x_i \ge t_i \text{~for all~} i \in Q \}
\end{equation*}
Since the agent does not incur any cost for probing, in the worst case they will probe all elements in $E'$. Let $R(t)$ be the set of elements with $X_i \geq t_i$. The agent will always propose some maximal set $I$ with $I \subseteq R(t)$ and $I \in \mathcal I_{p^*}$. More formally, let
\begin{equation*}
\mathcal I_{p^*}^{R(t)} = \{S : (S \in \mathcal I_{p^*}) \text{~and~} (X_i \geq t_i \text{~for all~} i \in S) \text{~and~} (S \cup i' \notin \mathcal I_{p^*} \text{~for all~} i' \in E' \setminus S \text{ with } X_{i'} \geq t_{i'})\}
\end{equation*}
be the family of sets of elements that the agent might propose. In the worst-case, they will propose some such set of elements that minimizes the principal's utility. We can think of this worst-case agent as follows: an almighty adversary who presents elements in the worst possible sequence for all realizations, the agent then picks element $i$ if and only if $X_i \geq t_i$ and the selected set satisfies the feasible constraints $\mathcal I_{p^*}$ \footnote{An almighty adversary knows the coin flips of the agent's strategy, i.e. $\mathcal I_{p^*}$ and $R(t)$. Therefore, an almighty adversary can force the agent to select any $S\in \mathcal I_{p^*}^{R(t)}$ of their choice.}.
Note that this agent has no incentive to pick a set outside of $\mathcal I_{p^*}^{R(t)}$. Since $\mathcal I_{p^*}$ is generated by an $\alpha$-selectable greedy OCRS, for any set $S \subseteq R(t)$ with $S \in \mathcal I_{p^*}$ currently selected by the agent, we have $\Pr[S \cup i \in \mathcal I_{p^*}] \geq \alpha$. Therefore, for all elements $i \in E$, we have $\Pr[i \in I] \geq \alpha \cdot \Pr[X_i \geq t_i]$. Then
\begin{align*}
\mathbb{E}[\texttt{DEL}]
&= \mathbb{E}\left[\sum_{i \in I} X_i \right] - (1 - \delta) c(E')\\
&\geq \sum_{i \in E'} \mathbb{E}[X_i ~|~ i \in I] \cdot \Pr[i \in I] - (1 - \delta) c(E').
\end{align*}
Since the agent selects an element $i$ only if $X_i \geq t_i$, on the adversarial arrival of elements selected by an almighty adversary, $\mathbb{E}[X_i ~|~ i \in I] = \mathbb{E}[X_i ~|~ X_i \geq t_i]$. We can bound the principal's expected delegation with a constant discount $\delta \geq 1 - \alpha$ as follows:
\begin{align*}
\mathbb{E}[\texttt{DEL}]
&= \sum_{i \in E'} \mathbb{E}[X_i ~|~ i \in I] \cdot \Pr[i \in I] - (1 - \delta) c(E') \\
&\geq \alpha \cdot \sum_{i \in E'} \mathbb{E}[X_i ~|~ X_i \geq t_i] \cdot \Pr[X_i \geq t_i] - \alpha \cdot c(E') \\
&= \alpha \cdot \left\{ \sum_{i \in E'} \mathbb{E}[(X_i - t_i)_+] + \sum_{i \in E'} t_i p^*_i - c(E') \right\} \\
&\geq \alpha \cdot \mathbb{E}[\texttt{OPT}]
\end{align*}
Concluding the proof.
\end{proof}
We note that the argument above reduces deterministic delegation, in which the principal chooses their strategy deterministically, to deterministic greedy OCRS. Perhaps surprisingly, it can also reduce deterministic delegation to randomized greedy OCRS as defined in \cite{feldman2016online}. The reason is that any randomized greedy OCRS is a randomization over deterministic OCRSs, so the reduction constructs a distribution over delegation mechanisms achieving the desired approximation. However, our model of delegation is a Stackelberg game in which the principal moves first, so their best randomized strategy can be no better than their best deterministic strategy. Therefore, the principal can choose the best deterministic strategy from among the distribution provided by the reduction for the same approximation factor.
Theorem~\ref{thm:efficient_delegation_from_OCRS} combined with efficient $\alpha$-selectable greedy OCRS schemes \cite{feldman2016online} implies the following corollary.
\begin{corollary}
There exist $(\alpha,\delta)$-factor delegation strategies (agent-agnostic) for the free-agent model with matroid, matching, and knapsack constraints and constant discount factor $\delta$. Specifically, these constants for matroids, matchings, and knapsacks are $\alpha =1/4, \delta \geq 3/4$, $\alpha = 1/2e, \delta \geq 1 -1/2e$ and $\alpha = 3/2 - \sqrt 2,\delta \geq \sqrt 2 - 1/2$, respectively.
\end{corollary}
\subsection{Free-Agent Model Impossibility without Discounts}
\label{free-agent-model-impossibility}
One of the primary motivations for introducing this model comes from the impossibility in Section \ref{standard-model-impossibility} and an attempt to circumvent one of the challenges with achieving a constant delegation gap. Recall from that section the instance for which $X_i = n$ and $Y_i = n$ independently with probability $1 / n$ each and $0$ otherwise. Now that the agent does not pay to probe, the principal may choose to accept only outcome $(i, n, n)$ from element $i$ because the agent's expected utility from probing $i$ is $n \cdot \Pr[X_i=n] \Pr[Y_i=n] = 1/n > 0$. However, since the agent does not pay to probe, they may probe all elements that can be accepted with nonzero probability so long as they could do better by probing such elements. Therefore, the agent might incur too large a probing cost for the principal compared to what the principal would pay on their own. In Proposition \ref{prop:impos2}, we describe a family of instances of the free-agent model for which the delegation gap is $O(1/n^{1/4})$ without any discounts. Proposition~\ref{prop:impos2} shows that it is impossible to obtain a constant factor delegation gap for the free-agent model without any discounts, even when the agent breaks all ties in favor of the principal. Moreover, it holds even when the agent does not probe all possible elements whose outcome is acceptable with nonzero probability.
\begin{restatable}{proposition}{propimposfreeagent}\label{prop:impos2}
There exists an instance of the free-agent model on $n$ elements with a $1$-uniform matroid constraint such that the delegation gap is $O ({1}/{n^{\frac{1}{4}}})$, even when the agent breaks all ties in favor of the principal.
\end{restatable}
We defer the proof of Proposition~\ref{prop:impos2} to Appendix~\ref{appendix:proof_propo5.5} to prevent interruptions to the flow of the paper.
\subsection{Discounted-Cost Impossibility}
\label{discounted-cost-model-impossibility}
With constant-factor delegation gaps for the free-agent model with discounts and an impossibility for the free-agent model without discounts, one might hope that the standard model with constant discounts might admit constant delegation gaps. However, we again have an impossibility. In Proposition~\ref{prop:impo42}, we show that there exists a family of instances of the standard model, parameterized by the number of elements $n$, with a generous discount factor $\delta = 1 - 1/\sqrt n$ for which there do not exist any constant factor delegation strategies. Thus, Proposition~\ref{prop:impo42} shows that there cannot exist an $(\alpha, \delta)$-strategy for this problem with constants $\alpha$ and $\delta<1$.
\begin{restatable}{proposition}{propimposdiscounts}\label{prop:impo42}
There exist instances of the discounted-cost model on $n$ elements with discount factor $\delta = 1 - 1 / \sqrt n$ (the agent and the principal both pay $(1-\delta) c_i$ for all elements, i.e. $c_i / \sqrt n$) for which the delegation gap is $O\left(1 / \sqrt n \right)$.
\end{restatable}
\begin{proof}
For any positive integer $n > 1$ and real $\varepsilon = 1 / n^{\frac{1}{4}}$, let $M = \sqrt n$ (assumed to be a positive integer) and consider the following instance of delegated Pandora's box. We have $n$ identical elements $E = \{1, \dots, n\}$ where each element $i$ has a probing cost $c_i = 1 - \varepsilon$ and random utilities $(X_i, Y_i) \sim \mu_i$. The principal's utility $X_i$ is $n$ with probability $\frac{1}{n}$ and $0$ otherwise. The agent's utility $Y_i$ is $M$ with probability $\frac{1}{M}$ independently of $X_i$ and $0$ otherwise. The constraint is a $1$-uniform matroid and there is no outer constraint. We let the agent break ties in favor of the principal.
First, we will determine the principal's optimal non-delegated expected utility. This is given by the solution to the generalized Pandora's box problem. For each element $i$, we must determine the cap value $\tau_i$ such that $\mathbb{E} (X_i - \tau_i)^+ = c_i$. Since $\mathbb{E} (X_i - \tau)^+ = (n - \tau)/n$ for $0 \le \tau \le n$, setting this equal to $c_i = 1 - \varepsilon$ gives $\tau_i = \varepsilon n$. Then the optimal solution guarantees an expected utility of $U = \mathbb{E} \max_i \min(X_i, \tau_i)$ where each $\min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $\frac{1}{n}$ and $0$ otherwise. Therefore, $\max_i \min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $1 - \left( 1 - \frac{1}{n} \right)^n$ and the principal gets expected utility
\begin{equation*}
\mathbb{E}[\texttt{OPT}] = \varepsilon n \left( 1 - \left( 1 - \frac{1}{n} \right)^n \right) \ge \varepsilon n \left( 1 - \frac{1}{e} \right) = \Theta(n^{3/4}).
\end{equation*}
Now, we will bound the principal's delegated expected utility when both the agent and the principal get a discount factor of $\delta = 1-1/\sqrt n$. Consider an arbitrary acceptable set $\mathcal R$ that the principal might commit to. Since the constraint is $1$-uniform, $\mathcal R$ consists of a set of singleton outcomes. Observe that every element $i$ evaluates to one of four tagged outcomes $(i, n, M)$, $(i, n, 0)$, $(i, 0, M)$, and $(i, 0, 0)$ with probabilities $\frac{1}{n M}$, $\frac{1}{n} \left( 1 - \frac{1}{M} \right) $, $\frac{1}{M} \left( 1 - \frac{1}{n} \right)$, and $\left( 1 - \frac{1}{n} \right) \left( 1 - \frac{1}{M} \right)$, respectively.
Given $\mathcal R$, let $E^* \subseteq E$ be the subset of elements $i$ for which $(i, 0, M) \in \mathcal R$, and let $k = |{E^*}|$. Consider any element $i \notin E^*$, so that $(i, 0, M) \notin \mathcal R$; then the agent's increase in expected utility from probing $i$ is at most $M \cdot \frac{1}{nM} - (1 - \varepsilon)(1-\delta) = \frac 1 n - \frac 1 {\sqrt n}(1-\varepsilon) < 0$ for large enough $n$, so they have no incentive to ever probe $i$. Therefore, for the rest of the proof, we assume that $k > 0$.
The agent now faces an instance of Pandora's box problem, so their optimal strategy is to probe elements in order of weakly decreasing cap value (among non-negative cap values) and accept the first acceptable outcome whose value is above its cap. Note that the agent will only probe the elements that belong to $E^*$. We divide the elements in $E^*$ into the following disjoint sets:
\begin{align*}
E^*_1 &= \{i : \{(i,n,M), (i,0,M), (i,n,0)\} \subseteq \mathcal R\}, \\
E^*_2 &= \{i : \{(i,n,M), (i,0,M)\} \subseteq \mathcal R\}, \\
E^*_3 &= \{i : \{(i,0,M), (i,n,0)\} \subseteq \mathcal R\}.
\end{align*}
The optimal strategy for the agent is to first probe the elements in $E^*_1$ and then $E^*_2$, stopping once they find an outcome with utility $M$. If there is no such outcome, then they probe elements in $E^*_3$ and stop once they find an outcome $(i, 0, M)$. However, the principal has no incentive to construct $\mathcal R$ such that $E^*_2 \neq \emptyset$ or $E^*_3 \neq \emptyset$. For the sake of contradiction, suppose $E_2^* \neq \emptyset$ and consider the event in which the agent does not observe any $i\in E^*$ with a feasible outcome with $Y_i = M$, but observes some $i' \in E_2^*$ with $(i', n, 0)$. Conditioned on this event, the principal can strictly benefit by adding $(i', n, 0)$ to $\mathcal R$. In all other cases, the principal's utility is unchanged by adding $(i', n, 0)$. Therefore $E_2^* = \emptyset$. Similarly, we can show that the principal strictly benefits by adding $(i, n, M)$ to $\mathcal R$ for all $i \in E_3^*$. Hence, for the rest of the proof, we assume that $E^* = E^*_1$.
Consider the utility that the principal gets when the agent finds an outcome of utility $M$. Among the $k = |{E^*}|$ elements that the agent might probe, they find a utility of $M$ with probability $1 - \left( 1 - \frac{1}{M} \right)^k$. Since the principal's utility for the proposed outcome is independent of the agent's, it will have utility $n$ for the principal with probability $\frac{1}{n}$. Since $k \ge 1$, the principal pays a discounted cost of at least $(1 - \varepsilon)(1-\delta)$ for the first probe. Therefore, the principal expects a utility of at most
\begin{equation*}
\left\{ 1 - \left( 1 - \frac{1}{M} \right)^k\right\} \cdot \left (\frac{n}{n} - (1 - \varepsilon)(1-\delta)\right) = O(1)
\end{equation*}
from this part of the agent's strategy.
Now, with probability $\left( 1 - \frac{1}{M} \right)^k$, the agent doesn't find any outcomes of value $M$. Then the principal pays a discounted cost of $k (1 - \varepsilon)(1-\delta)$ in order to probe all $k$ elements in $E^*$. Since the agent breaks ties in favor of the principal, they will propose any acceptable outcome of value $n$ to the principal. There exists such an outcome with probability at most $1 - \left( 1 - \frac{1}{n} \right)^k$. Therefore, the principal expects a utility of at most
\begin{align*}
\left( 1 - \frac{1}{M} \right)^k \cdot \left\{n \left(1 - \left( 1 - \frac{1}{n} \right)^k \right) - k(1 - \varepsilon)(1-\delta)\right\}
&\le \left( 1 - \frac{1}{M} \right)^k \cdot \left\{k - k(1 - \varepsilon)(1-\delta)\right\} \\
&\le k (\varepsilon + \delta) \left( 1 - \frac{1}{M} \right)^k
\end{align*}
For the sake of exposition, let $f(k) = k \left( 1 - \frac{1}{\sqrt n} \right)^k$. For $k = o(\sqrt n )$, asymptotically, $f(k) = o(\sqrt n)$ and for $k=\omega (\sqrt n)$, $f(k) = \omega (\sqrt n) \mathrm e^{-\frac{\omega (\sqrt n)}{\sqrt n}} = o(\sqrt n)$. For $k=\Theta (\sqrt n)$, $f(k) = \Theta (\sqrt n)$. Therefore, $\max_k f(k) = O(\sqrt n)$ asymptotically.
The above arguments imply that the principal's optimal expected delegation is bounded by $O((\delta + \varepsilon )\sqrt n)+O(1) = O(n^{1/4})$. Hence the delegation gap for the above instance is $O(1/n^{1/2})$.
Note that the impossibility still holds if the principal samples $\mathcal R$ from any distribution $D$ over the sets of feasible solutions. We can similarly show that the optimal distribution $D^*$ over the feasible sets has positive support on the solutions $\mathcal R\in \Omega_\mathcal I$ for which $E^* = E_1^*$. We showed above that for any such $\mathcal R$, $\texttt{DEL}_\mathcal R = O(1/\sqrt n)\cdot \mathbb{E}[\texttt{OPT}]$, and therefore $\mathbb{E}[\texttt{DEL}_\mathcal R] = O(1/\sqrt n)\cdot \mathbb{E}[\texttt{OPT}]$.
\end{proof}
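The asymptotics of $f(k) = k(1 - 1/\sqrt n)^k$ used at the end of the proof are easy to check numerically; the quick sketch below confirms that $\max_k f(k)$ grows like $\sqrt n$ (the maximum is attained near $k \approx \sqrt n$, where $f \approx \sqrt n / e$):
\begin{verbatim}
import numpy as np

# max_k of f(k) = k (1 - 1/sqrt(n))^k; the ratio to sqrt(n) tends to 1/e
for n in [10**2, 10**4, 10**6]:
    k = np.arange(1, 20 * int(np.sqrt(n)))
    f = k * (1.0 - 1.0 / np.sqrt(n))**k
    print(n, f.max(), f.max() / np.sqrt(n))
\end{verbatim}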
\begin{comment}
\todo[inline]{Neel: Bug in the proof of Proposition~5.7}
\subsection{Lower Bound on Combined Free-Agent Discounted-Cost Delegation Gap}
In this section, we present the lower bound for the free-agent discounted cost model. We show that there exists an instance of the free-agent discounted-cost model of delegated Pandora's box with discount factor $\delta$, the principal's optimal delegation is upper bounded by $\mathcal O(\delta) \cdot \mathbb{E}[\texttt{OPT}]$. The instance we consider is the slight modification of the instance used in Proposition~\ref{prop:impos2}. The following Proposition shows that the delegation for uniform matroids for the free-agent discounted model obtained in Proposition \ref{prop:unif_matroids_freagent} is tight for $\delta \leq 1/2$ within a constant factor of the discount.
\begin{proposition}
There exists an instance of the free-agent discounted-cost model with $1$-uniform matroid constraint and $\delta$-discount factor such that the delegation gap is $O(\delta)$.
\end{proposition}
\begin{proof}
Consider the example from Proposition \ref{prop:lower_bound} with a modification to the agent's utility:
\begin{equation*}
\begin{aligned}[c]
X_i =
\begin{cases}
\frac{1}{p^2}, & \text{with prob. } p^2 \\
0, & \text{with prob. } 1-p^2
\end{cases}
\end{aligned}
\qquad\qquad
\begin{aligned}[c]
Y_i =
\begin{cases}
\delta_i, & \text{with prob. } \frac 1 2 \\
\texttt{Unif}[\mathrm{e}^{n^2},2\mathrm{e}^{n^2}], & \text{with prob. } \frac 1 2
\end{cases}
\end{aligned}
\end{equation*}
The cost for probing any element $i$ is $c_i = 1 - \frac p 2$. We fix $p = \frac{1}{n^{1/4}}$ and observe that $(1 - (1 - p^2)^n) \rightarrow 1$ as $n \rightarrow \infty$. We showed in Proposition \ref{prop:lower_bound} that $\mathbb{E}[\texttt{OPT}] = O\left( 1/p \right)$.
Arguing as in Proposition \ref{prop:lower_bound}, the optimal delegation $\mathcal R^*$ does not include an element $i$ with only $(i, \cdot, \delta_i) \in \mathcal R^*$ (or only $(i, 1/p^2, [e^{n^2}, 2e^{n^2}]) \in \mathcal R^*$): otherwise the principal can strictly benefit by adding $(i, \cdot, A \subseteq [e^{n^2}, 2e^{n^2}])$ (respectively $(i, 1/p^2, \delta_i)$) to $\mathcal R^*$, contradicting the optimality of $\mathcal R^*$.
Therefore, following the proof of Proposition \ref{prop:lower_bound}, without loss of generality, the principal's optimal delegation strategy is given by
\begin{equation*}
\mathcal R^* = \{ \{(i, 1/p^2, \delta_i), (i, 1/p^2, A_i \subseteq [e^{n^2}, 2e^{n^2}]) \}: i = 1, \dots, k\}.
\end{equation*}
Notice that the agent's utility for each element includes a continuous distribution over large values. Consider the agent in the middle of probing, and let $S \subseteq \{1, \dots, k\}$ be the set of currently probed elements. With probability $1$, the best value observed so far is strictly below the supremum of the remaining elements' supports, so the agent expects to improve by continuing to probe; since probing is costless for the agent, they will probe all elements $\{1, \dots, k\}$ with probability $1$. We can bound the principal's optimal $\texttt{DEL}$ as follows:
\begin{align*}
\texttt{DEL}\leq \frac{1}{p^2}\left( 1-(1-p^2)^k\right) - kc(1-\delta).
\end{align*}
Let $f(k) = \frac{1}{p^2}\left( 1-(1-p^2)^k\right) - kc(1-\delta)$. Note that $1-(1-p^2)^k$ is concave in $k$, hence has a non-increasing derivative, so $f(k)$ has a unique maximum. We claim that $f(k)$ is maximized for some $k\in\left[0,\frac {\delta}{ p^2}\right]$: for large enough $n$ and any $k \geq \frac{\delta}{p^2}$,
\begin{align*}
f'(k)& = -(1-\delta)c -\frac 1 {p^2}(1-p^2)^k\ln (1-p^2) \\
&\leq -(1-\delta)c +\frac{p^2 +p^4}{p^2}(1-p^2)^k\\
&\leq -(1-\delta)c + (1+p^2)e^{-\delta}\\
&\leq (-1+\delta + e^{-\delta}) + (p^2e^{-\delta} + \frac p 2-\frac {\delta p} 2) <0
\end{align*}
Hence, maximum expected delegation can be bounded as follows:
\begin{align*}
\texttt{DEL} &\leq \frac{1}{p^2}\left( 1-(1-p^2)^k\right) - kc(1-\delta)\\
&\leq k-kc \leq \frac{\delta}{p^2}(p/2) = O(\delta / p) = O(\delta)\mathbb{E}[\texttt{OPT}]
\end{align*}
This concludes the proof.
\end{proof}
For $\delta > 1/2$, we can construct a lower-bound instance of the free-agent discounted-cost model of delegated Pandora's box similar to \citep[Proposition~4.2]{bechtel2020delegated} with negligible costs. For small $\varepsilon \ll 1$, let $X_1 = 1/\varepsilon$ with probability $\varepsilon$ and $0$ otherwise, let $Y_1 = 1 - \varepsilon$ with probability $\varepsilon$ and $0$ otherwise, and let $X_2 = Y_2 = 1$ with probability $1$. We can set costs $c_1 = c_2 = \varepsilon^2$. Similar to \citep[Proposition~4.2]{bechtel2020delegated}, we can show that the principal cannot obtain more than $\mathbb{E}[\texttt{OPT}]/2$ delegated utility as $\varepsilon \rightarrow 0$ for any discount factor $\delta > 1/2$.
\end{comment}
\section{Introduction}
\label{introduction}
We take the natural next step in the study of delegated stochastic search problems involving multivariate decisions, constraints, and costs. The work of Bechtel and Dughmi \cite{bechtel2020delegated} has provided a fairly thorough understanding of principal-agent delegation in the presence of ``hard'' constraints on the search procedure --- scenarios of this form can be viewed as a principal \emph{delegating} a \emph{stochastic probing} problem to an agent. In this paper, we build a similar understanding when search is associated with cardinal costs instead. Scenarios of this form feature a principal who delegates, to an agent, a combinatorial generalization of the famous Pandora's box problem of Weitzman \cite{weitzman1979optimal}. As in the most relevant prior work on delegation, we imbue the principal with the power of commitment, rendering this a mechanism design problem.\footnote{In particular, a mechanism design problem without money.}
The conceptual starting point in this area is the work of Kleinberg and Kleinberg \cite{kleinberg2018delegated}, who consider a \emph{principal} delegating the selection of one option (we say \emph{element}) out of finitely many to an \emph{agent}. As a running example, consider a firm (the principal) delegating the selection of one job candidate out of many (the elements) to an outside recruitment agency (the agent). Each element is associated with a stochastic reward for both principal and agent, with independence across elements. The agent is tasked with ``exploring'' (we say \emph{probing}) these rewards and proposing one of them, which the principal may choose to accept or reject.
Problems of this form are most natural when exploration is not free, and \citet{kleinberg2018delegated} consider one model featuring a hard constraint on the number of options explored, and a second model featuring cardinal costs associated with exploration for both principal and agent. Bechtel and Dughmi~\cite{bechtel2020delegated} generalize the first model, in particular to settings in which exploration is combinatorially constrained (this is referred to as the \emph{outer} constraint), and multiple elements may be selected subject to another combinatorial constraint (this is referred to as the \emph{inner} constraint). When multiple elements are selected, rewards are additive for both the principal and the agent. In this paper, we similarly generalize the second model of \cite{kleinberg2018delegated}: there is no outer constraint on exploration, but rather per-element probing costs for the principal and agent. Moreover, there again is an inner constraint (which we will often refer to simply as the constraint) on the set of elements selected. Rewards and probing costs are now both additive across elements. The problem being delegated here is a generalized Pandora's box problem, as in \cite{singla2018price}.
There are multiple natural ways of instantiating the utilities of both the principal and the agent, depending on who we assume incurs the exploration (i.e., probing) costs. Some ways in which costs may be shared include:
\begin{itemize}
\item The principal and agent each pay a fixed percentage of the total probing cost. In our running example, the recruitment agency may have a policy in which they only pay a fixed fraction of the cost of interviewing each candidate. Such scenarios fall under our first model, which we refer to as the \emph{standard model} of utilities.\footnote{As long as neither the principal nor the agent pays the entire cost in the standard model, we can re-scale their utilities and, without loss of generality, assume that they split each cost equally.} \citet{kleinberg2018delegated} assumes cost-sharing according to the standard model of utilities.
\item The principal pays the full cost of exploration. In our running example, recruitment agencies only commit to investing their time and expertise, while the principal bears the entire cost of exploration. We refer to this model as the \emph{free-agent} model.
\item The principal chooses as part of their strategy how individual costs are shared. In our running example, the principal may be willing to pay a large fraction of the cost of interviewing good (in expectation) candidates, but still allows the agent to interview bad (in expectation) candidates so long as they bear most of the cost. We refer to this model as the \emph{shared-cost} model. This model provides the principal with much more power when delegating.
\end{itemize}
No matter our utility model and cost model, we seek mechanisms which approximate the principal's optimal \emph{non-delegated utility}: the maximum expected utility the principal can obtain by solving the search problem themselves. When such a mechanism matches the non-delegated utility up to a factor $\alpha$, we refer to it as an \emph{$\alpha$-factor} mechanism.
\subsection*{Our Models and Results}
In our first model --- which we refer to as the \emph{standard model} of utilities --- we follow in the footsteps of \cite{kleinberg2018delegated} by incorporating the exploration costs into both the principal and agent's utilities.\footnote{Whereas it is not uncommon for the agent in delegation to bear the costs of completing the task, this model also incorporates the costs into the principal's objective. This can capture a principal concerned with optimizing a social objective, as well as scenarios in which probing costs are shared equally by the principal and agent.}
Our results for this model are a mixed bag: when each element's reward distribution has binary support, we obtain constant-approximate delegation mechanisms when the constraint is a matroid. The proof proceeds via a reduction to the matroid prophet inequalities against an almighty adversary from \cite{feldman2016online}. This result generalizes the result of \cite{kleinberg2018delegated} for their second model, which also features binary distributions.
On the other hand, we obtain strong impossibility results for non-binary distributions, ruling out any sublinear (in the number of elements) approximation to the principal's optimal non-delegated utility, even for the rank one matroid. This shows that the result of \cite{kleinberg2018delegated} for their second model, which also features a rank one matroid constraint, cannot be generalized to non-binary distributions. Even more emphatically, we rule out certain bicriteria approximations for the standard model of utilities: even if probing costs are discounted by any absolute constant, the principal's delegated utility cannot approximate --- up to any constant --- their undelegated utility in the undiscounted setting.
Motivated by our impossibility results for the standard model of utilities, we explore models in which exploration costs are shared unequally between the principal and the agent. In the \emph{free-agent model}, the agent incurs no exploration costs, which are borne entirely by the principal. For various constraints such as matroids, matchings, and knapsacks, we obtain bicriteria approximate mechanisms of the following form for various pairs of constants $\alpha, \delta$: the principal's delegated utility in the setting where probing costs are discounted by $\delta$ matches, up to a factor of $\alpha$, their optimal undelegated utility in the undiscounted setting. Our results proceed by reduction to the online contention resolution schemes against an almighty adversary from \cite{feldman2016online}. We complement this with a negative result, ruling out the traditional uni-criteria constant-approximate mechanisms. Specifically, absent any discount on probing costs, no delegation mechanism approximates the principal's optimal undelegated utility up to any constant.
Our final utility model allows the principal to declare, up front as part of their mechanism, an arbitrary split of the probing cost for each element between the principal and the agent. We refer to this as the \emph{shared-cost model}. This turns out to be the most permissive of our models: for constraints including matroids, matchings, and knapsacks, we obtain delegation mechanisms which approximately match, up to a constant, the principal's optimal undelegated utility.\footnote{We note that, since the principal can offload much of the costs of exploration to the agent, there exist instances in which the principal's delegated utility strictly exceeds their undelegated utility. However, we also show that there are simple instances in which the principal's delegated utility is necessarily less than their undelegated utility, ruling out general results with approximation factors exceeding $1$.} Our results here are again by reduction to online contention resolution schemes against an almighty adversary from \cite{feldman2016online}.
Lastly, we also begin a preliminary exploration of randomized mechanisms for delegating the generalized Pandora's box problem. We obtain negative results for a restricted class of randomized mechanisms, and leave open the general question of whether randomization yields significantly more power in this setting, including whether it overcomes some of our impossibility results for deterministic mechanisms.
\subsection*{Additional Discussion of Related Work}
For additional discussion of related work pertaining to delegation, stochastic probing problems, prophet inequalities, and contention resolution, we refer the reader to \cite{bechtel2020delegated}. Also relevant to this paper is the work on generalizations of the Pandora's box problem. In particular, Singla \cite{singla2018price} introduces a model generalizing the $1$-uniform matroid ``inner'' constraint to arbitrary downward-closed constraints and proposes constant-factor algorithms for matroids, matchings, and knapsack constraints. Gamlath et al. \cite{gamlath2019beating} further improve the approximation guarantees for the generalized Pandora's box problem with matching constraints.
\section{Lottery Mechanisms}\label{appendix:lottery}
\label{lottery-mechanisms}
In this section, we consider a class of delegation mechanisms which we call binary lottery mechanisms. These are a class of randomized mechanisms that generalize the deterministic ones used earlier.
Formally, a \emph{lottery mechanism} consists of a menu $\mathcal{R}$ of distributions over solutions. After the principal has announced $\mathcal{R}$ to the agent, they probe elements as usual. However, rather than proposing a single solution to the problem, the agent proposes one of the distributions $D \in \mathcal{R}$ that the principal announced. Then, the principal samples a solution $S \sim D$ from the proposed distribution. If $S$ is a valid solution (feasible in the inner constraint), then the principal accepts and both players receive their respective utilities for $S$ minus the total probing cost. Otherwise, the principal rejects the invalid solution and both players pay the total probing cost with no gain.
A \emph{binary lottery mechanism} is a special case of lottery mechanism in which each distribution $D \in \mathcal{R}$ has support for at most two solutions: one null (status quo) solution and one valid non-null solution. Such a mechanism can be equivalently represented by a set $\mathcal{R}$ of acceptable solutions and a probability $p_S$ for each solution $S \in \mathcal{R}$. Then, the principal accepts proposal $S \in \mathcal{R}$ from the agent with probability $p_S$ and rejects the proposal otherwise. This second representation is the one that we will use for the rest of this section.
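To make the second representation concrete, the following sketch (our illustration; no implementation accompanies the model, and all names are hypothetical) simulates the principal's accept/reject step:
\begin{verbatim}
import random

# A minimal sketch (ours, not an implementation from the paper) of the
# accept/reject step of a binary lottery mechanism in the second
# representation: a set R of acceptable solutions, each tagged with an
# acceptance probability p_S.
def lottery_accept(proposal, accept_prob, rng):
    # Invalid or unacceptable proposals are always rejected.
    if proposal not in accept_prob:
        return False
    # Acceptable proposals are accepted with probability p_S.
    return rng.random() < accept_prob[proposal]

# Example: accept the singleton solution {(1, 10, 2)} with probability 0.7.
R = {frozenset({(1, 10, 2)}): 0.7}
print(lottery_accept(frozenset({(1, 10, 2)}), R, random.Random(0)))
\end{verbatim}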
Observe that the argument from Section \ref{other-mechanisms} applies only to deterministic multi-round signaling mechanisms. Therefore, such lottery mechanisms may be strictly more powerful than their deterministic counterparts. However, a similar argument can show that we get no increased power from randomized multi-round signaling mechanisms, so it's sufficient to consider only randomized single-proposal mechanisms (lottery mechanisms as defined above).
Since they have fine-tuned control over ``how much'' of each solution to accept, binary lottery mechanisms may seem to give the principal increased delegation power.
However, we will now show that strong impossibilities exist for such mechanisms in the case of the standard model and the free-agent model, generalizing earlier results about deterministic mechanisms.
\begin{proposition}\label{prop:impos_random_1}
There exist instances of the standard model of delegated Pandora's box on $n$ elements for which the delegation gap is $O(\frac{1}{\sqrt n})$ for the class of binary lottery mechanisms.
\end{proposition}
\begin{proof}
For any positive integer $n > 1$, let $\varepsilon = \frac{1}{\sqrt n}$ and consider the following instance of delegated Pandora's box. We have $n$ identical elements $E = \{1, \dots, n\}$ where each element $i$ has a probing cost $c_i = 1 - \varepsilon$ and random utilities $(X_i, Y_i) \sim \mu_i$. The principal's utility $X_i$ is $n$ with probability $\frac{1}{n}$ and $0$ otherwise. The agent's utility $Y_i$ is $2$ with probability $\frac{1}{2}$, independently of $X_i$, and $0$ otherwise. The inner constraint is a $1$-uniform matroid. We let the agent break ties in favor of the principal. Following the proof of Proposition~\ref{prop:impos1}, we have $\mathbb{E}[\texttt{OPT}] \geq \varepsilon n \left(1 - 1/e\right) = (1-1/e)\sqrt n$.
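For completeness, we spell out this bound. The cap value $\tau_i$ solves $\mathbb{E}[(X_i - \tau_i)_+] = \frac{1}{n}(n - \tau_i) = 1 - \varepsilon$, giving $\tau_i = \varepsilon n$, and hence
\begin{equation*}
\mathbb{E}[\texttt{OPT}] = \mathbb{E}\left[\max_i \min(X_i, \tau_i)\right] = \varepsilon n \left( 1 - \left( 1 - \frac{1}{n} \right)^n \right) \geq \varepsilon n \left( 1 - \frac{1}{e} \right).
\end{equation*}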
Now, we will bound the principal's delegated expected utility. Consider an arbitrary acceptable set $\mathcal{R}$ that the principal might commit to. Observe that every element $i$ evaluates to one of four tagged outcomes $(i, n, 2)$, $(i, n, 0)$, $(i, 0, 2)$, and $(i, 0, 0)$ with probabilities $\frac{1}{2n}$, $\frac{1}{2n}$, $\frac{1}{2}\left( 1 - \frac{1}{n} \right)$, and $\frac{1}{2}\left( 1 - \frac{1}{n} \right)$, respectively. We let $p^i_{xy}$ denote the probability chosen by the principal of accepting outcome $(i,x,y)$.
Given $\mathcal R$, let $E^* \subseteq E$ be the subset of elements $i$ for which $(n-1)p^i_{02}+ p^i_{n2}\geq n(1-\varepsilon)$. For any element $i\notin E^*$, the agent's increase in expected utility from probing $i$ is at most $2\cdot \frac {p^i_{n2}}{2n} + 2\cdot \frac{p^i_{02}}{2} \left( 1 - 1/n \right) - (1 - \varepsilon) < 0$, so they have no incentive to ever probe $i$. Let $|E^*| = k$; the agent will therefore probe no more than the $k$ elements in $E^*$. If $k = 0$, then the agent will not probe anything and both players will get $0$ utility. For the remainder of the proof, we assume $k > 0$. Note that the principal has no incentive to set $p^i_{00}>0$ or $p^i_{n0}<1$ for any $i\in E^*$; we can use an argument similar to Proposition~\ref{prop:impos2} to show this formally.
The agent now faces an instance of the Pandora's box problem, so their optimal strategy is to probe elements in order of weakly decreasing cap value (among non-negative cap values) and accept the first outcome whose value is above its cap. Thus, the agent probes elements in decreasing order of the cap values $\tau^y_i = (2p_i - 1 + \varepsilon)/p_i$, where $p_i = \frac{p^i_{n2}}{2n} + \frac{p^i_{02}}{2}(1-1/n)$, and stops as soon as the best observed value exceeds all remaining cap values. It is easy to verify from the definition of $E^*$ that $2 > \tau_i^y \geq 0$ for all $i\in E^*$.
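To spell out the cap computation: probing $i$ yields the agent a value of $2$ precisely when $Y_i = 2$ and the resulting outcome is accepted, which happens with probability $p_i$. The cap therefore solves
\begin{equation*}
p_i \left( 2 - \tau^y_i \right) = c_i = 1 - \varepsilon, \qquad \text{i.e.,} \qquad \tau^y_i = \frac{2 p_i - 1 + \varepsilon}{p_i}.
\end{equation*}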
First, we assume for all elements $i \in E^*$ that $\tau^y_i>0$. Since the cap value is strictly positive for all $i \in E^*$, the agent will never propose an element with $Y_i=0$ if they find $j\in E^*$ with $Y_j = 2$. Consider the utility that the principal gets when the agent finds an outcome of value $2$. Among the $k = |{E^*}|$ elements that the agent might probe, they find a value of $2$ with probability $1-(1/2)^k$. Since the principal's utility for the proposed outcome is independent of the agent's, it will have value $n$ for the principal with probability $\frac{1}{n}$. Since $k \ge 1$, the principal pays a cost of $1 - \varepsilon$ for the first probe. Therefore, the principal expects a utility of at most $(1-(1/2)^k)(p^i_{n2}\cdot \frac n n - (1- \varepsilon))\leq (1-(1/2)^k)\varepsilon$ from the event when the agent finds some element $i\in E^*$ with $Y_i = 2$.
Now, with probability $\left( \frac 1 2 \right)^k $, the agent doesn't find any outcomes of value $2$. Then the principal pays a cost of $k(1 - \varepsilon)$ in order to probe all $k$ elements in $E^*$. Since the agent breaks ties in favor of the principal, they will propose any acceptable outcomes of value $n$ to the principal. There exists such an outcome with probability at most $1 - \left( 1 - \frac{1}{n} \right)^k$. Therefore, the principal expects a utility of at most
\begin{equation*}
n \left(1 - \left( 1 - \frac{1}{n} \right)^k \right) - k(1 - \varepsilon) \le n \left(1 - \left( 1 - \frac{k}{n} \right) \right) - k \left( 1 - \varepsilon \right) = k\varepsilon
\end{equation*}
from this event. Hence, $\mathbb{E}[\texttt{DEL}] \leq (1-(1/2)^k)\varepsilon + k\varepsilon (1/2)^k \leq O(1)\varepsilon$ for $k\geq 1$. Therefore the delegation gap is $\mathcal O(1/ n)$ when $\tau_i^y > 0$ for $i\in E^*$.
Now, suppose that $\tau_i^y = 0$ for all elements $i\in E^*$. This implies that $(n-1)p^i_{02} + p^i_{n2} = n(1-\varepsilon)$. In this case, the agent obtains $0$ utility in expectation from probing any element, so they break ties in the principal's favor. Suppose the agent probes a set of elements $S$ with observed outcomes $\mathcal S$, breaking ties in favor of the principal at every step. If there exists an element $i$ such that $(i,\cdot,2) \in \mathcal S$, then the agent will never propose an outcome $(j,\cdot,0)$ from $\mathcal S$, because they can obtain better utility by proposing the element $i$ with outcome $(i,\cdot,2)$.
Let $S_t=\{i_1,\dots,i_t\}$ be the set of elements probed by the agent until now with outcomes $\mathcal S_t$. Suppose the agent has observed an element $i_\ell \in S_t$ with $Y_{i_\ell} = 2$. In that case, if the agent further probes an element $i$ among the unprobed elements, then they will propose $i$ if and only if $Y_i = 2$. If the agent probes $i$, then the increase in the principal's expected utility is $n\cdot\frac{p^i_{n2}}{n}\cdot \Pr[Y_i=2] - c_i = \frac{p^i_{n2}}{2} - (1-\varepsilon) <0$. Therefore, the agent will not probe any further elements. Thus, we can conclude that the agent will stop probing elements as soon as they observe an element $i$ such that $Y_i=2$. Similarly, we can show that the agent will stop probing elements if they observe an element $j$ with outcome $(j,n,0)$ before any element $i$ with realization $Y_i = 2$.
Let $\mathcal E$ denote the event in which the agent observes an element with outcome $(\cdot, n, 0)$ before any element with outcome $(\cdot, \cdot, 2)$. We can bound $\Pr[\mathcal E]$ by $\frac 1 n ( 1/2 + (1/2)^2 + \dots ) \leq \frac 2 n$. In the event $\mathcal E$, the principal obtains value $n$ and pays to probe at least one element, so $\mathbb{E}[\texttt{DEL}\mid\mathcal E] \leq n - 1 + \varepsilon$. In the event $\mathcal E^c$, the agent observes an element with outcome $(\cdot , \cdot, 2)$ before $(\cdot, n, 0)$ and proposes the first observed element $i$ with $Y_i = 2$. Since the principal's utility for the proposed outcome is independent of the agent's, it has value $n$ for the principal with probability $\frac{1}{n}$. Since $k \ge 1$, the principal pays a cost of at least $1 - \varepsilon$, so $\mathbb{E}[\texttt{DEL}\mid \mathcal E^c] \leq \frac n n - (1 - \varepsilon ) = \varepsilon$. We can now bound the expected delegated utility as follows:
\begin{align*}
\mathbb{E}[\texttt{DEL}] &= \mathbb{E}[\texttt{DEL} | \mathcal E]\Pr[\mathcal E] + \mathbb{E}[\texttt{DEL} | \mathcal E^c]\Pr[\mathcal E^c]\\
&\leq \left(\frac 2 n\right) (n-1+\varepsilon ) + \varepsilon \leq O(1).
\end{align*}
Therefore the delegation gap is $\mathcal O(1/\sqrt n)$ when $\tau_i^y = 0$ for $i\in E^*$.
Finally, consider the case in which some elements of $E^*$ have $\tau_i^y > 0$ and others have $\tau_i^y = 0$. In this case, the agent first probes the elements with positive cap values, and if they are unable to find an element with $Y_i > \tau_i^y$, they then probe the elements of $E^*$ with cap value $0$. Combining the two bounds above, $\mathbb{E}[\texttt{DEL}] \leq O(1)\varepsilon + O(1) = O(1)$. This shows that the delegation gap is $O(1/\sqrt n)$.
\end{proof}
In the following proposition, we show that there exists an instance of the free-agent model in which the delegation gap for binary lottery mechanisms is $O(1/n^{1/4})$. The instance in Proposition~\ref{prop:impos_random_2} is exactly the same instance described in Proposition~\ref{prop:impos2}. We show that the optimal binary lottery mechanism for the instance described in Proposition~\ref{prop:impos2} coincides with the optimal deterministic mechanism. Hence, the impossibility result for deterministic delegation holds for the class of binary lottery mechanisms as well.
\begin{proposition}\label{prop:impos_random_2}\label{prop:lower_bound}
There exists an instance of the free-agent model on $n$ elements with a $1$-uniform matroid inner constraint such that the delegation gap is $O ({1}/{n^{\frac{1}{4}}})$ for binary lottery mechanisms, even when the agent breaks all ties in favor of the principal.
\end{proposition}
\begin{proof}
Consider an instance of the free-agent model with a $1$-uniform matroid inner constraint, and for each element $i$, let $X_i$ and $Y_i$ be independently distributed as follows:
\begin{equation*}
\begin{aligned}[c]
X_i =
\begin{cases}
\sqrt n, & \text{with prob. } \frac{1}{\sqrt n} \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\qquad\qquad
\begin{aligned}[c]
Y_i =
\begin{cases}
\delta_i, & \text{with prob. } \frac 1 2 \\
\mathrm{e}^{n}, & \text{with prob. } \frac 1 2
\end{cases}
\end{aligned}
\end{equation*}
where $\delta_i > 0$ are sufficiently small. We set the cost for probing any element $i$ to $c_i = 1 - \varepsilon$, where $\varepsilon = \frac {2}{n^{1/4}}$. Following Proposition~\ref{prop:impos2}, we have $\mathbb{E}[\texttt{OPT}] = \Theta (n^{1/4})$. For simplicity, let $p = 1/n^{1/4}$.
Now we will bound the principal's optimal delegated expected utility. Consider the delegation strategy defined by some optimal set of acceptable outcomes $\mathcal R$, and let $p^i_{xy}$ denote the optimal probability chosen by the principal of accepting outcome $(i,x,y)$. For ease of notation, we write $p^i_{11}$, $p^i_{10}$, $p^i_{01}$, and $p^i_{00}$ for the acceptance probabilities of the outcomes $(i, \sqrt n, e^{n})$, $(i, \sqrt n, \delta_i)$, $(i, 0, e^{n})$, and $(i, 0, \delta_i)$, respectively.
For all $i \in E$, we claim that either $(p^i_{11} + p^i_{10})/2 > 1 - \varepsilon$ or $p^i_{11} = p^i_{10} = p^i_{01} = p^i_{00} = 0$. Otherwise, if both conditions fail, the principal obtains at most $\sqrt n \cdot \frac{1}{\sqrt n} \cdot \frac{p^i_{11} + p^i_{10}}{2} - c_i \leq 0$ utility in expectation whenever the agent probes element $i$, contradicting the optimality of the principal's strategy. As a result, both $p^i_{10}$ and $p^i_{11}$ have to be at least $1 - 2\varepsilon$ whenever they are nonzero. We now define the set of elements $E^* = \{ i : p^i_{11} > 0 \text{~and~} p^i_{10} > 0 \}$.
Given $\mathcal R$, the agent's optimal strategy can be described as follows: probe elements one by one in decreasing order of $\tau^y_i = \frac{p^i_{11}}{2\sqrt n} + \frac{p^i_{01}}{2}(1 - 1/\sqrt n)$, the probability that probing $i$ yields an acceptable outcome with $Y_i = e^n$, and propose the first observed element with $Y_i = e^n$ whose acceptance probability ($p^i_{11}$ or $p^i_{01}$, depending on $X_i$) is at least $\max_j \{ p^j_{11}, p^j_{01} \}$ over unprobed $j \in E^*$. The agent will not stop before observing an element $i \in E^*$ with $Y_i = e^n$, because they can always obtain at least $\frac{p^i_{11}e^n}{2\sqrt n} \geq (1-2\varepsilon)\frac{e^n}{2\sqrt n} >\delta_i$ in expectation by probing any element of $E^*$. Since the principal wants to maximize the chance of accepting any element $i$ with $X_i = \sqrt n$, they will set $p^i_{11}=1$ and $p^i_{01} = 0$.
If the agent is unable to find such an element, then they will propose some element $i$ for which $Y_i = \delta_i$ with the maximum value of $p^i_{X_i\delta_i} \cdot \delta_i$. Given the agent's optimal strategy, the principal wants to maximize the chance of accepting an element $i$ with $X_i = \sqrt n$ whenever the agent proposes such an element. Therefore, $p^i_{10} = 1$ and $p^i_{00} = 0$ for all $i \in E^*$. We have now shown that the optimal binary lottery mechanism in this instance is exactly the optimal deterministic mechanism discussed in Proposition~\ref{prop:impos2}. Hence, following the proof of Proposition~\ref{prop:impos2}, we conclude that the delegation gap with binary lottery mechanisms for the free-agent model is $O(1/n^{1/4})$.
\end{proof}
\input{abstract}
\input{introduction}
\input{preliminaries}
\input{model}
\input{standard-model}
\input{free-agent-discounted-cost-models}
\input{shared-costs-model}
\input{open-questions}
\bibliographystyle{abbrvnat}
\section{Missing Proofs from the Paper}
\subsection{Proof of Proposition~\ref{prop:impos2}}\label{appendix:proof_propo5.5}
\propimposfreeagent*
\begin{proof}
Consider an instance of the free-agent model with a $1$-uniform matroid constraint, and for each element $i$, let $X_i$ and $Y_i$ be independently distributed as follows:
\begin{equation*}
\begin{aligned}[c]
X_i =
\begin{cases}
\frac{1}{p^2}, & \text{with prob. } p^2 \\
0, & \text{with prob. } 1-p^2
\end{cases}
\end{aligned}
\qquad\qquad
\begin{aligned}[c]
Y_i =
\begin{cases}
\delta_i, & \text{with prob. } \frac 1 2 \\
\mathrm{e}^{n^2}, & \text{with prob. } \frac 1 2
\end{cases}
\end{aligned}
\end{equation*}
where $p = \frac{1}{n^{1/4}}$ and $\delta_i > 0$ are sufficiently small. We set the cost for probing any element $i$ to $c_i = 1 - \frac p 2$ and also observe that $(1 - (1 - p^2)^n) \rightarrow 1$ as $n \rightarrow \infty$.
Once again, the principal's optimal non-delegated expected utility is given by the solution to Weitzman's Pandora's box problem. For each element $i$, we must determine the cap value $\tau_i$ such that $\mathbb{E} (X_i - \tau_i)^+ = c_i$. It's not hard to verify for this instance that $\tau_i = \frac{1}{2p} = \frac{n^{1/4}}{2}$. Then the optimal solution guarantees an expected utility of $\mathbb{E}[\texttt{OPT}] = \mathbb{E} \max_i \min(X_i, \tau_i)$ where each $\min(X_i, \tau_i)$ takes value $\tau_i$ with probability $p^2$ and $0$ otherwise. Therefore, $\max_i \min(X_i, \tau_i)$ takes value $\tau_i$ with probability $1 - \left( 1 - p^2 \right)^n = 1 - \left( 1 - \frac{1}{n^{1/2}} \right)^n = 1 - o(1)$ and the principal gets expected utility
\begin{equation*}
\mathbb{E}[\texttt{OPT}]= (1 - o(1)) \tau_i = \Theta(n^{1/4}).
\end{equation*}
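For completeness, we verify the cap value: for $0 \leq \tau_i \leq 1/p^2$,
\begin{equation*}
\mathbb{E} (X_i - \tau_i)^+ = p^2 \left( \frac{1}{p^2} - \tau_i \right) = 1 - p^2 \tau_i,
\end{equation*}
and setting this equal to $c_i = 1 - \frac{p}{2}$ gives $\tau_i = \frac{1}{2p}$.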
Now we will bound the principal's optimal delegated expected utility. Consider the delegation strategy defined by some set of acceptable outcomes $\mathcal R$. Given $\mathcal R$, the agent's optimal strategy (assuming they break ties in favor of the principal) can be described as follows: probe elements one by one for which $(i, 1/p^2, e^{n^2}) \in \mathcal R$ and propose the first observed element with $(x_i, y_i) = (1/p^2, e^{n^2})$. If they are unable to find such an element, then they probe elements with only $(i, 0, e^{n^2}) \in \mathcal R$ and propose the first element with $y_i = e^{n^2}$. Finally, they will probe all other elements in some order and propose any element with maximum $\delta_i$ among probed feasible elements.
For each element $i$, the principal has no incentive to accept only $0$ utility outcomes, so an optimal strategy cannot have both $(i, 1/p^2, e^{n^2}) \notin \mathcal R$ and $(i, 1/p^2, \delta_i) \notin \mathcal R$, since then they may incentivize the agent to probe element $i$ (incurring a cost on the principal) without getting any utility back. Moreover, the principal has no incentive to accept any $0$ utility outcomes from an element $i$ even if they accept at least one of $(i, 1/p^2, e^{n^2})$ or $(i, 1/p^2, \delta_i)$. To see why, consider any delegation strategy $\mathcal R$ for which there exists an element $i$ with $(i, 0, \cdot) \in \mathcal R$. There is a nonzero probability that the agent observes only element $i$ with $Y_i = e^{n^2}$ and $X_i = 0$. In this event, dropping $(i, 0, \cdot)$ from $\mathcal R$ does not change the principal's expected utility. Since the agent breaks ties in favor of the principal, in all other cases they will propose an element $i$ with positive $X_i$. Hence, the principal's expected utility does not decrease if $(i, 0, \cdot) \notin \mathcal R$.
Finally, if $\mathcal R$ is an optimal delegation strategy, then for any element $i \in E$, we have that $(i, 1/p^2, \delta_i) \in \mathcal R$ implies that $(i, 1/p^2, e^{n^2}) \in \mathcal R$ and $(i, 1/p^2, e^{n^2}) \in \mathcal R$ implies that $(i, 1/p^2, \delta_i) \in \mathcal R$. Suppose, for the sake of contradiction, that there exists an element $i$ with only $(i, 1/p^2, e^{n^2}) \in \mathcal R$. Then the agent will probe element $i$ last after probing other elements $i'$ with $(i', 1/p^2, e^{n^2}) \in \mathcal R$ and $(i', 1/p^2, \delta_{i'}) \in \mathcal R$. Now, consider the event in which the agent probes element $i$ and it is the only element with $X_i > 0$ among all the probed elements. The probability of such an event is nonzero. However, the agent will not be able to propose element $i$ if $Y_i = \delta_i$, which happens with probability $1/2$, and in this case the principal ends up paying the cost for probing $i$ without obtaining any value. By adding $(i, 1/p^2, \delta_i)$ to $\mathcal R$, the principal can increase their expected utility conditioned on $i$ being the only element with $X_i > 0$. In all other cases, adding $(i, 1/p^2, \delta_i)$ to $\mathcal R$ does not affect their utility. This contradicts the optimality of $\mathcal R$.
For the other case, suppose there exists an element $i$ with only $(i, 1/p^2, \delta_i) \in \mathcal R$. Again, the agent first probes the elements $i'$ with both $(i', 1/p^2, \delta_{i'}) \in \mathcal R$ and $(i', 1/p^2, e^{n^2}) \in \mathcal R$. Consider the event in which they do not observe any element $i'$ with $X_{i'} > 0$ among the elements probed so far. Now, assume that the agent probes $i$ right after that (this is the best possible scenario for the principal, as all other available elements $i'$ are such that $(i', 1/p^2, \cdot) \notin \mathcal R$). Now if $X_i > 0$ and $Y_i = e^{n^2}$, then the agent will not be able to propose element $i$ and the principal pays the cost for probing $i$ without obtaining any value. Hence, adding $(i, 1/p^2, e^{n^2})$ strictly improves the principal's expected utility in this event, and in all other events, it does not affect their utility.
Now, without loss of generality, we can consider any optimal delegation strategy for the principal defined by a set of feasible elements $A = \{1, \dots, k\}$ for which the principal accepts exactly the outcomes $(i, 1/p^2, e^{n^2})$ and $(i, 1/p^2, \delta_i)$. Since the agent does not incur any cost, they could probe all $k$ elements and propose their favorite acceptable outcome. However, we assumed that the agent breaks ties in favor of the principal; therefore, they will probe elements one by one and stop probing as soon as they find an element $j \in A$ with outcome $(j, 1/p^2, e^{n^2})$.
If the agent cannot find any such element, then they will propose the outcome $(j, 1/p^2, \delta_j)$ with the maximum $\delta_j$ among the probed elements. Now we can bound the principal's optimal delegated expected utility as follows:
\begin{align}
\mathbb{E}[\texttt{DEL}]
&\leq
\begin{aligned}[t]
& \Pr[X_1 = 1/p^2] \cdot \Pr[Y_1 = e^{n^2}] \left( \frac{1}{p^2} - c \right) \\
&+ \Pr[X_2 = 1/p^2] \cdot \Pr[Y_2 = e^{n^2}] (\Pr[Y_1 = \delta_1] + \Pr[Y_1 = e^{n^2}] \cdot \Pr[X_1 = 0]) \left( \frac{1}{p^2} - 2c \right) + \dots \\
&+ \Pr[X_k = 1/p^2] \cdot \Pr[Y_k = e^{n^2}] \prod_{i=1}^{k-1} (\Pr[Y_i = \delta_i] + \Pr[Y_i = e^{n^2}] \cdot \Pr[X_i = 0]) \left( \frac{1}{p^2} - kc \right) \\
&+ \left[ \frac{1}{p^2}\left(1 - \prod_{i=1}^{k}\Pr[X_i = 0] \right) - ck \right]
\end{aligned} \\
&\leq
\begin{aligned}[t]
& \frac{p^2}{2}\left( \frac{1}{p^2} - c \right) + \frac{p^2}{2} \left( 1-\frac{p^2}{2} \right) \left( \frac{1}{p^2} - 2c \right) + \dots \\
&+ \frac{p^2}{2} \left( 1 - \frac{p^2}{2} \right)^{k-1} \left( \frac{1}{p^2} - kc \right) + \left[ (1 - (1 - p^2)^k) \frac{1}{p^2} - ck \right]
\end{aligned} \notag
\end{align}
To reduce clutter, let $r = 1 - {p^2}/{2}$, and recall the identities $\sum_{j=1}^{k} r^{j-1} = \frac{1-r^k}{1-r}$ and $\sum_{j=1}^{k} j r^{j-1} = \frac{1-r^k}{(1-r)^2} - \frac{k r^k}{1-r}$. From Appendix B of \cite{boodaghians2020pandora}, we also have that $(1 - (1 - p^2)^k) \cdot {1}/{p^2} -ck \leq 1/2$. Using these, we can simplify the above bound as:
\begin{align}
\mathbb{E}[\texttt{DEL}]
&\leq \frac{1}{2} \left\{1 + r + r^2 + \dots + r^{k-1} \right\} - \frac{cp^2}{2} \left(1 + 2r + \dots + kr^{k-1} \right) + \frac{1}{2} \notag \\
&= \frac{1}{2} \left(\frac{1 - r^k}{1 - r} \right) - \frac{p^2c}{2} \left\{ \left( \frac{1 - r^k}{(1 - r)^2} \right) - \frac{kr^k}{1 - r} \right\} + \frac{1}{2} \notag \\
&= \frac{1}{p^2} (1 - r^k) - \frac{2c}{p^2} (1 - r^k) + ck r^k + \frac{1}{2} \notag \\
&\leq \left( \frac{1}{p} - \frac{1}{p^2} \right) (1 - r^k) + k r^k + \frac{1}{2} \notag \\
&\leq \frac{1}{2} + O(ne^{-\sqrt n}) \notag
\end{align}
The above bound on the expected delegated utility holds for any choice of $k$, i.e., for any set $A$ of elements that the principal makes acceptable. This shows that the delegation gap is $O(1/n^{1/4})$.
Note that the impossibility still holds if the principal samples $\mathcal R$ from any distribution $D$ over the sets of feasible solutions. We can similarly show that the optimal distribution $D^*$ over feasible sets has positive support only on solutions $\mathcal R\in \Omega_\mathcal I$ that can be expressed as $\mathcal R = \{(i,1/p^2,e^{n^2}),(i,1/p^2,\delta_i):i\in A\}$ for some $A\subseteq E$. We showed above that for any such $\mathcal R$, $\mathbb{E}[\texttt{DEL}_{\mathcal R}] = O(1/n^{1/4})\cdot \mathbb{E}[\texttt{OPT}]$. Taking the expectation over $\mathcal R \sim D^*$, we conclude $\mathbb{E}[\texttt{DEL}] = O(1/n^{1/4})\cdot \mathbb{E}[\texttt{OPT}]$.
\end{proof}
\begin{comment}
\subsection{Proof of Proposition~\ref{prop:impo42}} \label{appendix:proof_propo5.6}
\propimposdiscounts*
\begin{proof}
For any positive integer $n > 1$, let $\varepsilon = 1 / n^{\frac{1}{4}}$, let $M = \sqrt n$, and consider the following instance of delegated Pandora's box. We have $n$ identical elements $E = \{1, \dots, n\}$ where each element $i$ has a probing cost $c_i = 1 - \varepsilon$ and random utilities $(X_i, Y_i) \sim \mu_i$. The principal's utility $X_i$ is $n$ with probability $\frac{1}{n}$ and $0$ otherwise. The agent's utility $Y_i$ is $M$ with probability $\frac{1}{M}$, independently of $X_i$, and $0$ otherwise. The constraint is a $1$-uniform matroid and there is no outer constraint. We let the agent break ties in favor of the principal.
First, we will determine the principal's optimal non-delegated expected utility. This is given by the solution to the generalized Pandora's box problem. For each element $i$, we must determine the cap value $\tau_i$ such that $\mathbb{E} (X_i - \tau_i)^+ = c_i$. It's not hard to verify for this instance that $\tau_i = \varepsilon n$. Then the optimal solution guarantees an expected utility of $U = \mathbb{E} \max_i \min(X_i, \tau_i)$ where each $\min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $\frac{1}{n}$ and $0$ otherwise. Therefore, $\max_i \min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $1 - \left( 1 - \frac{1}{n} \right)^n$ and the principal gets expected utility
\begin{equation*}
\mathbb{E}[\texttt{OPT}] = \varepsilon n \left( 1 - \left( 1 - \frac{1}{n} \right)^n \right) \ge \varepsilon n \left( 1 - \frac{1}{e} \right) = \Theta(n^{3/4}).
\end{equation*}
Now, we will bound the principal's delegated expected utility when both the agent and the principal get a discount factor of $\delta>1-1/n^{1/2}$. Consider an arbitrary acceptable set $\mathcal R$ that the principal might commit to. Since the constraint is $1$-uniform, $\mathcal R$ consists of a set of singleton outcomes. Observe that every element $i$ evaluates to one of four tagged outcomes $(i, n, M)$, $(i, n, 0)$, $(i, 0, M)$, and $(i, 0, 0)$ with probabilities $\frac{1}{n M}$, $\frac{1}{n} \left( 1 - \frac{1}{M} \right) $, $\frac{1}{M} \left( 1 - \frac{1}{n} \right)$, and $\left( 1 - \frac{1}{n} \right) \left( 1 - \frac{1}{M} \right)$, respectively.
Given $\mathcal R$, let $E^* \subseteq E$ be the subset of elements $i$ for which $(i, 0, M) \in \mathcal R$, and let $k = |{E^*}|$. Consider any element $i \notin E^*$. If outcome $(i, 0, M) \notin \mathcal R$, then the agent's increase in expected utility from probing $i$ is at most $M \cdot \frac{1}{nM} - (1 - \varepsilon)(1-\delta) = \frac 1 n - \frac 1 {\sqrt n}(1-\varepsilon) < 0$ for large enough $n$, so they have no incentive to ever probe $i$. Therefore, for the rest of the proof, we assume that $k > 0$.
The agent now faces an instance of the Pandora's box problem, so their optimal strategy is to probe elements in order of weakly decreasing cap value (among non-negative cap values) and accept the first acceptable outcome whose value is above its cap. Note that the agent will only probe the elements that belong to $E^*$. We divide the elements in $E^*$ into the following disjoint sets:
\begin{align*}
E^*_1 &= \{i : \{(i,n,M), (i,0,M), (i,n,0)\} \subseteq \mathcal R\}, \\
E^*_2 &= \{i : \{(i,n,M), (i,0,M)\} \subseteq \mathcal R\}, \\
E^*_3 &= \{i : \{(i,0,M), (i,n,0)\} \subseteq \mathcal R\}.
\end{align*}
The optimal strategy for the agent is to first probe the elements in $E^*_1$ and then $E^*_2$, stopping once they find an outcome with utility $M$. If there is no such outcome, then they probe elements in $E^*_3$ and stop once they find an outcome $(i, 0, M)$. However, the principal has no incentive to construct $\mathcal R$ such that $E^*_2 \neq \emptyset$ or $E^*_3 \neq \emptyset$. For the sake of contradiction, suppose $E_2^* \neq \emptyset$, and consider the event in which the agent does not observe any $i\in E^*$ with a feasible outcome with $Y_i = M$ but observes some $i' \in E_2^*$ with outcome $(i', n, 0)$. Conditioned on this event, the principal can strictly benefit by adding $(i', n, 0)$ to $\mathcal R$. In all other cases, the principal's utility is unchanged by adding $(i', n, 0)$. Therefore $E_2^* = \emptyset$. Similarly, we can show that the principal strictly benefits by adding $(i, n, M)$ to $\mathcal R$ for all $i \in E_3^*$. Hence, for the rest of the proof, we assume that $E^* = E^*_1$.
Consider the utility that the principal gets when the agent finds an outcome of utility $M$. Among the $k = |{E^*}|$ elements that the agent might probe, they find a utility of $M$ with probability $1 - \left( 1 - \frac{1}{M} \right)^k$. Since the principal's utility for the proposed outcome is independent of the agent's, it will have utility $n$ for the principal with probability $\frac{1}{n}$. Since $k \ge 1$, the principal pays a cost of $1 - \varepsilon$ for the first probe. Therefore, the principal expects a utility of at most
\begin{equation*}
\left\{ 1 - \left( 1 - \frac{1}{M} \right)^k\right\} \cdot \left (\frac{n}{n} - (1 - \varepsilon)(1-\delta)\right) = O(1)
\end{equation*}
from this part of the agent's strategy.
Now, with probability $\left( 1 - \frac{1}{M} \right)^k$, the agent doesn't find any outcomes of value $M$. Then the principal pays a cost of $k (1 - \varepsilon)$ in order to probe all $k$ elements in $E^*$. Since the agent breaks ties in favor of the principal, they will propose any acceptable outcomes of value $n$ to the principal. There exists such an outcome with probability at most $1 - \left( 1 - \frac{1}{n} \right)^k$. Therefore, the principal expects a utility of at most
\begin{align*}
\left( 1 - \frac{1}{M} \right)^k \cdot \left\{n \left(1 - \left( 1 - \frac{1}{n} \right)^k \right) - k(1 - \varepsilon)(1-\delta)\right\}
&\le \left( 1 - \frac{1}{M} \right)^k \cdot \left\{k - k(1 - \varepsilon)(1-\delta)\right\} \\
&\le k (\varepsilon + \delta) \left( 1 - \frac{1}{M} \right)^k
\end{align*}
For the sake of exposition, let $f(k) = k \left( 1 - \frac{1}{\sqrt n} \right)^k$. For $k = o(\sqrt n )$, asymptotically, $f(k) = o(\sqrt n)$ and for $k=\omega (\sqrt n)$, $f(k) = \omega (\sqrt n) \mathrm e^{-\frac{\omega (\sqrt n)}{\sqrt n}} = o(\sqrt n)$. For $k=\Theta (\sqrt n)$, $f(k) = \Theta (\sqrt n)$. Therefore, $\max_k f(k) = O(\sqrt n)$ asymptotically.
The above arguments imply that the principal's optimal expected delegation is bounded by $O((\delta + \varepsilon )\sqrt n)+O(1) = O(n^{1/4})$. Hence the delegation gap for the above instance is $O(1/n^{1/2})$.
Note that the impossibility still holds if the principal samples $\mathcal R$ from any distribution $D$ over the sets of feasible solutions. We can similarly show that the optimal distribution $D^*$ over the feasible sets has positive support on the solutions $\mathcal R\in \Omega_\mathcal I$ for which $E^* = E_1^*$. Therefore, for any sample of feasible set $\mathcal R$ from $D^*$, $\mathbb{E}[\texttt{DEL}] = O(1/\sqrt n)\mathbb{E}[\texttt{OPT}]$. Thus, $\mathbb{E}[\texttt{DEL}] = O(1/\sqrt n)\cdot \mathbb{E}[\texttt{OPT}]$.
\end{proof}
\end{comment}
\section{Delegation Model}
\label{sec:model}
In this paper, we will use several slightly different models of delegation which can be viewed as variants of a single \emph{standard model} of delegated Pandora's box. This model formally consists of: two players called the \emph{principal} and the \emph{agent}; a ground set of \emph{elements} $E$; for each element $i \in E$, an independent distribution $\mu_i$ over $\mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$ giving possible utility pairs for the principal and agent, respectively; for each element $i \in E$, a \emph{probing cost} $c_i \in \mathbb{R}_{\ge 0}$; and a downward-closed set system $\mathcal{M} = (E, \mathcal{I})$ with feasible sets $\mathcal{I}$ over the ground set $E$ (i.e. $\mathcal I \subseteq 2^E$ and if $S \in \mathcal I$ then $T \in \mathcal I$ for any $T \subseteq S$).
Given an element $i$, we let $X_i$ and $Y_i$ be random variables denoting the random value obtained for the principal and agent from element $i$ with joint distribution $(X_i, Y_i) \sim \mu_i$, where $X_i$ and $Y_i$ may be arbitrarily correlated but are independent of random variables from other elements. For any $(x, y) \in \operatorname*{supp}(\mu_i)$, we call $(i, x, y)$ an \emph{outcome} or \emph{realization} of element $i$. For any set of outcomes $\mathcal S = \{ (i_1, x_1, y_1), \dots, (i_k, x_k, y_k) \}$ such that $S = \{ i_1, \dots, i_k \} \in \mathcal{I}$ for distinct $i_1, \dots, i_k$, we call $\mathcal S$ a \emph{solution}. In general, we denote the set of all possible outcomes as $\Omega = \{ (i, x, y) : (x, y) \in \operatorname*{supp}(\mu_i), i \in E \}$ and the set of all solutions with respect to the constraint $\mathcal I$ as $\Omega_{\mathcal I} \subseteq 2^\Omega$.
Given such an instance as described above, the principal and agent play an asymmetric game in which the principal alone has the power to choose the mechanism and accept a solution, and the agent alone has the power to search for solutions. More specifically, in order to learn about the true realization $(X_i, Y_i)$ of an element $i$, the agent can \emph{probe} element $i$. We allow them to probe elements adaptively, choosing what to probe next based on previously realized outcomes. Let us say that the agent ultimately probes the set $\texttt{Probed} \subseteq E$, obtaining outcomes $T$. Depending on the mechanism, they can choose to share information about $T$ with the principal. The principal can \emph{accept} any valid solution $S \subseteq T$, yielding a net utility of $\sum_{(i, x, y) \in S} x - \sum_{i \in \texttt{Probed}} c_i$ for the principal and $\sum_{(i, x, y) \in S} y - \sum_{i \in \texttt{Probed}} c_i$ for the agent. The principal can alternatively choose to \emph{reject} all solutions and maintain the status quo, yielding a net utility of $- \sum_{i \in \texttt{Probed}} c_i$ for both players. Both players have common knowledge of the setup of the problem, including all distributions $\{ \mu_i \}_{i \in E}$ but excluding the true realizations of elements, and they each act to maximize their own expected utility.
As in the models from previous work, we assume that the agent cannot lie by misrepresenting the utilities of a probed outcome or by claiming to have probed an unprobed element. We believe that this is a natural assumption in many settings where outcomes can be easily verified by the principal. Additionally, we assume that the principal has commitment power, i.e. the agent can trust the principal to follow the rules of whatever mechanism they choose. The principal can force the agent to also follow the rules of the mechanism insofar as they can detect violations of the rules. Finally, we also assume that all instances of this problem satisfy $\mathbb{E}[X_i] > c_i$ and $\mathbb{E}[Y_i] > c_i$ for all $i\in E$. The first assumption is without loss of generality, since $\mathbb{E}[X_i] \le c_i$ would imply that the principal has no incentive to probe or accept element $i$, so the agent would not probe it either. The second assumption allows us to avoid uninteresting impossibilities for the delegation gap defined in Section \ref{delegation-gap}, since $\mathbb{E}[Y_i] \le c_i$ would imply that the agent has no incentive to probe or propose element $i$ but the principal may still be able to receive a lot of utility from element $i$.
For this paper, we're interested in \emph{single-proposal} mechanisms as defined in \cite{kleinberg2018delegated} and used in \cite{bechtel2020delegated}. A single proposal mechanism consists of an \emph{acceptable set} $\mathcal{R} \subseteq \Omega_{\mathcal I}$ containing all solutions that the principal is willing to accept. In such a mechanism, the principal starts by declaring their choice of $\mathcal{R}$. The agent responds by adaptively probing any set of elements $\texttt{Probed} \subseteq E$ of their choosing, receiving the set of outcomes $T = \{ (i, X_i, Y_i) : i \in \texttt{Probed} \}$. Once they are done probing, they can \emph{propose} some valid solution $\mathcal S \subseteq T$ to the principal. Finally, the principal can either accept or reject the solution $\mathcal S$. If $\mathcal S$ is not a valid solution, $\mathcal S$ contains misrepresentations of the truth, or $\mathcal S \notin \mathcal{R}$, then the principal must reject $\mathcal S$. We note that this mechanism is deterministic in the sense that the principal chooses a deterministic $\mathcal{R}$ and their response to the agent's choices is deterministic. This is in contrast to the randomized mechanisms discussed briefly after Theorem \ref{thm:efficient_delegation_from_OCRS} and lottery mechanisms as defined in Appendix \ref{lottery-mechanisms}.
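To make the timing of the interaction concrete, here is a schematic simulation of one round of a single-proposal mechanism in the standard model (our sketch; the agent-policy interface and the brute-force proposal step are hypothetical simplifications, not part of the model):
\begin{verbatim}
# Schematic simulation (ours) of a single-proposal mechanism in the
# standard model: the principal announces an acceptable set, the agent
# probes adaptively, and the principal accepts a proposal iff it is in
# the acceptable set. rng: e.g. random.Random(seed).
def run_single_proposal(mu, cost, acceptable, agent_policy, rng):
    # mu[i]: function rng -> (x_i, y_i) sampling element i's utilities.
    # cost[i]: probing cost of element i.
    # acceptable: set of frozensets of outcomes (i, x, y), the set R.
    # agent_policy(observed, probed): next element to probe, or None.
    observed, probed = set(), set()
    while True:
        i = agent_policy(observed, probed)
        if i is None:
            break
        probed.add(i)
        x, y = mu[i](rng)
        observed.add((i, x, y))
    # The agent proposes an acceptable subset of its observations that
    # maximizes its own value; an empty proposal models the status quo.
    candidates = [S for S in acceptable if S <= observed]
    best = max(candidates, key=lambda S: sum(y for _, _, y in S),
               default=frozenset())
    paid = sum(cost[i] for i in probed)
    return (sum(x for _, x, _ in best) - paid,  # principal's net utility
            sum(y for _, _, y in best) - paid)  # agent's net utility
\end{verbatim}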
Given element $i$, we define the \emph{cap value} or \emph{surplus value} for the principal $\tau^x_i$ as the solution to $\mathbb{E}[(X_i - \tau^x_i)_+] = c_i$. We further define \emph{truncated random variables} $Z^{\min}_i = \min \{ X_i, \tau^x_i \}$ for the principal for all $i\in E$.
We similarly define the agent's cap values $\tau^y_i$ as the solution to $\mathbb{E}[(Y_i - \tau^y_i)_+] = c_i$, and the truncated random variable for the agent as $W^{\min}_i = \min \{ Y_i, \tau^y_i \}$. Note that an element has a negative cap value exactly when its expected utility, net of the probing cost, is negative. We sometimes drop the superscript from the principal's cap values and write $\tau_i$ for $\tau_i^x$, $i \in E$, whenever it is clear from context.
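In general the cap values admit no closed form, but since $\tau \mapsto \mathbb{E}[(X_i - \tau)_+]$ is continuous and non-increasing, they are easy to compute numerically. The following is a minimal sketch (ours) for distributions with finite support; the bracketing interval is an arbitrary illustrative choice:
\begin{verbatim}
# Numerical sketch (ours): compute the cap value tau solving
# E[(X - tau)+] = c by bisection. tau -> E[(X - tau)+] is continuous
# and non-increasing, so bisection applies.
def cap_value(dist, c, lo=-1e9, hi=1e9, iters=100):
    # dist: finite support, given as a list of (value, probability) pairs.
    def excess(tau):  # E[(X - tau)+]
        return sum(p * max(v - tau, 0.0) for v, p in dist)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if excess(mid) > c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: the free-agent lower-bound instance with p = 1/2, i.e.
# X = 1/p^2 = 4 w.p. p^2 = 1/4 and cost c = 1 - p/2 = 3/4; the cap
# is 1/(2p) = 1.
print(round(cap_value([(4.0, 0.25), (0.0, 0.75)], 0.75), 6))
\end{verbatim}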
\subsection{Delegation Gap}
\label{delegation-gap}
As in \cite{bechtel2020delegated,kleinberg2018delegated}, we are not interested in finding optimal delegation mechanisms so much as finding delegation mechanisms that approximate the principal's optimal \emph{non-delegated} utility. The optimal non-delegated utility refers to the principal's optimal utility when delegating to an agent who shares their interests (alternatively, their optimal utility when they act as both the principal and agent, i.e. they have the power to probe elements and accept solutions). Note that the non-delegated problem that the principal faces is exactly the generalized Pandora's box problem with a downward-closed constraint. Therefore, our main model is a delegated version of this problem, which is why we call it the delegated Pandora's box problem.
Let $\mathbb{E}[\texttt{OPT}]$ be the principal's optimal non-delegated utility. Singla \cite{singla2018price} shows that for any downward closed constraint $\mathcal I$,
\begin{equation*}
\mathbb{E}[\texttt{OPT}] \leq \mathbb{E}\left[\max_{S\in \mathcal I}\sum_{i\in S} Z_i^{\min}\right].
\end{equation*}
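For intuition, when $\mathcal I$ is a $1$-uniform matroid the bound specializes to
\begin{equation*}
\mathbb{E}[\texttt{OPT}] \leq \mathbb{E}\left[\max_{i \in E} Z^{\min}_i\right],
\end{equation*}
and it is well known that Weitzman's index policy attains this bound with equality in the $1$-uniform case; we use this fact repeatedly when computing $\mathbb{E}[\texttt{OPT}]$ in our lower-bound instances.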
Let $\mathbb{E}[\texttt{DEL}_{\mathcal R}]$ be the expected utility of the delegating principal with single-proposal mechanism $\mathcal R$, i.e. the expected utility of the principal who delegates with acceptable set $\mathcal R$ to an agent who acts in order to maximize their own expected utility given $\mathcal R$.
Now, we define $\alpha$-factor delegation strategies, which guarantee the principal at least an $\alpha$-factor of $\mathbb{E}[\texttt{OPT}]$ when they delegate.
\begin{definition}
Fix an instance of the delegated Pandora's box problem. We say that a mechanism $\mathcal{R}$ is an $\alpha$-\emph{factor delegation strategy} for $\alpha \in [0, 1]$ if
\begin{equation*}
\mathbb{E}[\texttt{DEL}_{\mathcal R}] \geq \alpha \cdot \mathbb{E}[\texttt{OPT}].
\end{equation*}
Moreover, we say $\mathcal R$ is an $\alpha$-factor \emph{agent-agnostic} strategy if $\mathbb{E}[\texttt{DEL}_{\mathcal R} ]\geq \alpha \cdot \mathbb{E}[\texttt{OPT}]$ for all instances with the same costs and marginal distributions of the principal's values $\{X_i\}_{i \in E}$, regardless of the distribution of the agent's values $\{Y_i\}_{i \in E}$.
\end{definition}
We sometimes refer to $\alpha$-factor strategies as $\alpha$-delegation and $\alpha$-factor agent-agnostic strategies as $\alpha$ agent-agnostic delegation. Note that if $\alpha$-factor agent-agnostic strategies exist for the principal, then the principal can obtain an $\alpha$-factor of $\mathbb{E}[\texttt{OPT}]$ even when they do not have any information about the distribution of $\{Y_i\}_{i \in E}$.
Now, we define the delegation gap of the family of instances of delegated Pandora's box.
\begin{definition}
The \emph{delegation gap} of a family of instances of delegated Pandora's box is the minimum, over all instances in the family, of the maximum $\alpha$ such that there exists an $\alpha$-factor strategy for that instance. This gap measures the minimum fraction of the principal's non-delegated utility they can achieve when delegating optimally. We similarly define the agent-agnostic delegation gap for agent-agnostic delegation.
\end{definition}
\subsection{More General Mechanisms}
\label{other-mechanisms}
Having now defined our model and the space of single-proposal mechanisms, it is natural to ask about the power and generality of such mechanisms. It might be beneficial for the principal to consider a larger class of mechanisms that have, for example, more signals to choose from and multiple rounds of communication. However, as in previous work on delegation and similar mechanism-design problems, we argue that any \emph{multi-round signaling mechanism} can be equivalently implemented by a single-proposal mechanism. This allows us to consider only single-proposal mechanisms without loss of generality. Since this type of argument is similar to the revelation principle and is very common in the literature \cite{alonso2008optimal, armstrong2010model, bechtel2020delegated, kleinberg2018delegated}, we will include only an informal sketch here.
Consider any multi-round signaling mechanism $M$. We will construct a single-proposal mechanism $S$ that simulates $M$. In $S$, the principal commits to accepting any solution that they could accept when both players follow $M$. Since the agent following $M$ can predict this set of acceptable solutions and the sequence of probes and signals leading to any such solution, they can act in a way that optimizes their expected utility given the solutions that the principal would accept. Therefore, the agent responding to $S$ can do no better than following the same such optimal sequence of probes and then proposing whichever solution the principal would have accepted under $M$. Since they can do just as well under $S$ and have no reason to deviate from the optimal strategy of $M$, these mechanisms are equivalent.
We note here that this argument applies to deterministic mechanisms. Lottery mechanisms as defined in Appendix \ref{lottery-mechanisms} could have strictly more power than their deterministic counterparts.
\subsection{Model Variants} \label{sec:model_variants}
\label{model-variants}
In this paper, we consider a few different variants of the model and approximation measure as defined above. The first such variant, called the \emph{binary model}, is just a special case of delegated Pandora's box in which the distribution $\mu_i$ of every element $i$ has support for exactly two outcomes: $\bot = (i, 0, 0)$ and $\omega_i = (i, x_i, y_i)$. A simpler version of this model in which the inner constraint is a $1$-uniform matroid was investigated in \cite{kleinberg2018delegated}, and we extend their definition to general matroid inner constraints. As motivation for this model, we consider search problems in which the principal and agent know the full space of possible outcomes but don't know which of those outcomes are feasible. However, they both share a prior probability on the feasibility of each outcome, all outcomes are mutually independent, and the agent can check the feasibility of any element by paying a probing cost. This is also an extension of prior work as described in the introduction.
Second, we consider the \emph{free-agent model}. This model changes only the utility of the agent such that they do not pay the cost of any probed elements. In order to ensure that the agent does not probe all elements and incur too large a cost for the principal, we assume that the agent breaks ties in favor of the principal when deciding what element to probe next. Therefore, if the principal doesn't accept any outcomes from a particular element, then they know that the agent will not probe that element. We motivate this model both by negative results in the standard model and by settings in which the principal is constrained in advance to cover all costs that the agent may incur, e.g. an employer that commits to reimbursing employees for all work-related costs.
Third, we consider \emph{discounted-cost approximations}, a new measure of approximation for delegated Pandora's box problems. Given an instance $I$ of any model of delegated Pandora's box and some \emph{discount factor} $\delta$, consider a new instance $J$ identical to $I$ except that the cost of each element $i$ is $(1 - \delta) c_i$, where $c_i$ is the original cost.
\begin{definition}
We say that a mechanism $\mathcal R$ is an $(\alpha,\delta)$-factor delegation strategy if the principal's delegated utility in the $\delta$-discounted instance $J$ is at least an $\alpha$-factor of their non-delegated utility in the original instance $I$.
\end{definition}
Observe that this is a bi-criteria approximation in which we aim to minimize $\delta$ and maximize $\alpha$. This approximation measure can be used as a means of determining how far the principal's costs are from being able to achieve a constant delegation gap. We additionally motivate it by settings in which the agent pays a smaller cost for searching than the principal would, e.g. a contractor which, through prior experience or economies of scale, is able to save on costs and share these savings with the contractee.
Finally, we consider the \emph{shared-cost model}. This model considers a fixed cost to probe each element that the principal can pay alone or share with the agent. In particular, it allows the principal to set the agent's cost $c'_i$ for element $i$. These costs are announced to the agent along with the acceptable set $\mathcal{R}$. Then, if the agent probes element $i$, they pay a cost of $c'_i$ and the principal pays the remaining cost for that element, i.e. $c_i - c'_i$. To avoid direct transfers of value between the principal and agent, the principal can only choose $0 \le c'_i \le c_i$ so that both costs are nonnegative. We briefly observe that there are instances of this model for which the principal's optimal delegated utility is strictly greater than their optimal non-delegated utility. This is easy to see by considering any instance for which $X_i = Y_i$ for all elements $i$: the principal can set $c'_i = c_i$ and have the agent run their optimal non-delegated strategy while they do not pay any of the costs. Therefore, the delegation gap $\alpha$ of such instances can be greater than $1$. We introduce this model in the hopes that the principal's increased power can lead to better approximations. Furthermore, this model resembles settings in which the principal can choose different reimbursement amounts for each of the agent's actions, but is unable to reimburse more than the true cost (no direct transfers).
\section{Open Questions}
\label{open-questions}
In this work, we explored just some of the many possible models and results related to the delegation of the Pandora's box problem. We leave the following open questions for future work.
\begin{itemize}
\item All of our positive results employ deterministic delegation mechanisms. Can the principal do strictly better in any of these models by using a lottery mechanism instead? Note that in Appendix~\ref{appendix:lottery}, we show impossibilities only for the class of binary lottery mechanisms.
\item Can our results be extended to other families of downward-closed constraint systems or even to broader classes of constraints such as prefix-closed constraints \cite{bradac2019near}?
\item We observe that modeling delegation with a constraint system allows us to describe delegation problems in which solutions may not be independently distributed and probing reveals only part of certain solutions. Therefore, it may be interesting to investigate the delegation gap of problems that relax the independence assumption in ways that cannot be represented by the addition of a constraint system.
\item In Theorem~\ref{thm:efficient_delegation_from_OCRS}, we show that there exists an $(\alpha,\delta)$-factor strategy for the free-agent model with discount $\delta \geq 1-\alpha$ for the constraints $\mathcal I$ if there exists a $c$-selectable greedy OCRS scheme for a relaxation of $P_\mathcal I$. However, we do not yet know of any impossibility or constant-factor strategy when $\delta < 1- \alpha$.
\item The shared-cost model is unique among the models in this paper for the possibility of delegation gaps strictly greater than $1$, as explained briefly in Section \ref{model-variants}. This is interesting because such a delegation gap could incentivize the principal to delegate a problem that they have the ability to solve on their own, whereas our other models assume that the principal must delegate. Can we characterize the family of instances of the shared-cost model for which the delegation gap is strictly greater than $1$?
\item For the models with strong impossibility results, can we find nontrivial families of instances with ``friendly'' agents which allow the principal to achieve a constant delegation gap?
\end{itemize}
\section{Preliminaries}
\label{preliminaries}
\subsection{Pandora's Box}
Weitzman’s Pandora’s box problem \cite{weitzman1979optimal} is defined as follows: given probability distributions of $n$ independent random variables $X_1, \dots, X_n$ over $\mathbb R_{\ge 0}$ and their respective probing costs $c_1, \dots, c_n$, adaptively probe a subset $\texttt{Probed} \subseteq [n]$ that maximizes the expected utility:
\begin{equation}
\mathbb{E} \left[ \max_{i \in \texttt{Probed}} \{X_i\} - \sum_{i\in \texttt{Probed}} c_i \right].
\end{equation}
Weitzman \cite{weitzman1979optimal} proposes a simple but optimal strategy for maximizing expected utility. For each element $i \in [n]$, this strategy chooses a \emph{cap value} (sometimes called priority value or surplus value) $\tau_i$ satisfying $\mathbb{E}[(X_i - \tau_i)^+] = c_i$. Then it probes elements in decreasing order of cap value, stopping the first time that the largest observed $X_i$ value exceeds the largest unprobed cap value. Finally, it selects the element $i$ with maximum observed $X_i$.
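As an illustration (and not part of the formal development), the following Python sketch simulates Weitzman's index policy for discrete distributions; the helper names and the bisection routine are our own choices, and we assume $\mathbb{E}[X_i] \geq c_i$ so that every cap value is nonnegative.
\begin{verbatim}
import random

def expected_excess(dist, tau):
    """E[(X - tau)^+] for a discrete distribution dist = [(value, prob), ...]."""
    return sum(p * max(v - tau, 0.0) for v, p in dist)

def solve_cap(dist, cost, iters=60):
    """Bisection for the cap value tau solving E[(X - tau)^+] = cost."""
    lo, hi = 0.0, max(v for v, _ in dist)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expected_excess(dist, mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def weitzman(dists, costs):
    """Simulate one run of Weitzman's strategy; returns the realized utility."""
    caps = [solve_cap(d, c) for d, c in zip(dists, costs)]
    order = sorted(range(len(dists)), key=lambda i: -caps[i])
    best = utility = 0.0
    for i in order:
        if best >= caps[i]:  # largest observed value beats every remaining cap
            break
        values, probs = zip(*dists[i])
        utility -= costs[i]  # pay to probe element i
        best = max(best, random.choices(values, weights=probs)[0])
    return utility + best    # select the maximum observed value
\end{verbatim}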
In this work, we focus on the more general version of the Pandora's box problem defined in \cite{singla2018price}. We are given a set of elements $E$ and a downward-closed constraint $\mathcal I \subseteq 2^E$ over the ground set $E$. The goal is to adaptively probe a set of elements $\texttt{Probed}$ and select a set of feasible elements $S \subseteq \texttt{Probed}$ for which $S \in \mathcal I$ that maximizes the following objective:
\begin{equation}
\mathbb{E} \left[ \sum_{i \in S} X_i - \sum_{i \in \texttt{Probed}} c_i\right]. \label{eq:objective}
\end{equation}
For the remainder of the paper, we will write $X(S) = \sum_{i \in S} X_i$ and $c(S) = \sum_{i \in S} c_i$ in any setting with utilities $\{X_i\}_{i \in E}$ and costs $\{c_i\}_{i \in E}$. We will also refer to $(i, x)$ for any $i \in E$ and $x \in \mathbb R_{\ge 0}$ as a possible \emph{outcome} or \emph{realization} of element $i$.
Singla \cite{singla2018price} proposes constant-factor approximation algorithms for the general Pandora's box problem for many constraints. In particular, these algorithms are optimal for matroids and $2$-approximate for both matching and knapsack constraints.
\subsection{Greedy Prophet Inequality}
An instance of the generalized prophet inequality problem is given by a set system $\mathcal M$ with ground set $E$ and feasible sets $\mathcal I$ and independent random variables $X_i$ supported on $\mathbb R_{\ge 0}$ for all $i \in E$. We take the perspective of the \emph{gambler}, who knows $\mathcal M$ and the distributions of the random variables $\{X_i\}_{i \in E}$. The gambler starts with an empty set $S$ of accepted elements and then observes each element in $E$ in an order chosen by an adversary. For the purposes of this paper, we play against the \emph{almighty adversary} defined in \cite{feldman2016online}, the strongest possible adversary, who knows all the coin flips of the gambler's strategy. When the element $i \in E$ arrives, the gambler learns the realization of $X_i$ and has to decide online whether to accept element $i$ or not based on $(i,x_i)$ and the previously accepted elements $S$. However, they can only accept $i$ if $S \cup \{i\}$ is feasible in $\mathcal M$. The gambler seeks to maximize their utility $\mathbb{E} (X(S)) = \mathbb{E} \left[ \sum_{i \in S} X_i \right]$, and in particular to compete with a \emph{prophet} who plays the same game and knows the realizations of all random variables in advance. If the gambler has a strategy guaranteeing an $\alpha$ fraction of the prophet’s expected utility in expectation, we say that we have an $\alpha$-factor prophet inequality.
We now define a particular class of strategies for the gambler:
\begin{definition}[\emph{Greedy monotone strategy} $\mathcal A_t$]
A greedy monotone strategy $\mathcal A_t$ for the gambler is described by a choice of thresholds $t = \{t_i : i\in E\}$ and a downward-closed system $\mathcal I_t\subseteq \mathcal I$, and can be expressed as $\mathcal A_t = \{\{(i, x_i) : i \in S \} : S \in \mathcal I_{t} \text{~and~} x_i \geq t_i \text{~for all~} i \in S\}$. A gambler following $\mathcal A_t$ accepts element $i$ with outcome $(i, x_i)$ if and only if $x_i \geq t_i$ and the set of elements accepted so far, together with element $i$, stays in $\mathcal I_t$.
\end{definition}
Greedy monotone strategies for the gambler are proposed in \cite{feldman2016online} for matroid, matching, and knapsack constraints, achieving $1/4$-, $1/2e$-, and $(3/2-\sqrt 2)$-factor prophet inequalities, respectively.
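To make the definition concrete, the following Python sketch (ours, with illustrative names; not taken from \cite{feldman2016online}) runs a greedy monotone strategy against an arbitrary arrival order, given a membership oracle for $\mathcal I_t$.
\begin{verbatim}
def greedy_monotone(arrivals, thresholds, is_feasible):
    """arrivals: iterable of (element, realized value) in adversarial order;
    thresholds: dict mapping element i to t_i;
    is_feasible: membership oracle for the downward-closed family I_t."""
    accepted, total = set(), 0.0
    for i, x in arrivals:
        # Accept greedily: the value clears the threshold and
        # feasibility in I_t is preserved.
        if x >= thresholds[i] and is_feasible(accepted | {i}):
            accepted.add(i)
            total += x
    return accepted, total

# Example: a 1-uniform matroid (accept at most one element):
# greedy_monotone(stream, t, lambda S: len(S) <= 1)
\end{verbatim}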
\subsection{c-Selectable Greedy OCRS Schemes}
We will give a brief overview of online contention resolution schemes \cite{feldman2016online} in this section. Given a downward-closed family $\mathcal I$ over the ground set of elements $E$ with $|E| = n$, let $P_{\mathcal I} \subseteq [0, 1]^n$ be the convex hull of the indicator vectors of all feasible sets: $P_{\mathcal I} = \operatorname{conv}(\{\mathrm{1}_F : F \in \mathcal I\})$. We say that a convex polytope $P \subseteq [0,1]^n$ is a \emph{relaxation} of $P_\mathcal I$ if it contains the same $\{0,1\}$-points, i.e. $P \cap \{0,1\}^n = P_{\mathcal I} \cap \{0,1\}^n$.
Consider the following online problem: given some $\mathcal I$ as above and some $x \in P_{\mathcal I}$, let $R(x)$ be a random subset of \emph{active} elements, where each element $i \in E$ is active with probability $x_i$ independently of all others. The elements in $E$ are revealed online in an order chosen by an adversary, and when each element $i$ is revealed, we learn whether or not $i \in R(x)$. After we learn the state of element $i$, we must irrevocably decide whether or not to select $i$. An OCRS for $P$ is an online algorithm that selects a subset $S \subseteq R(x)$ such that $S \in \mathcal I$.
\begin{definition}[Greedy $c$-selectable OCRS]
Let $P \subseteq [0,1]^n$ be a relaxation of $P_{\mathcal I}$. A greedy OCRS $\pi$ for $P$ is an OCRS that for any $x \in P$ defines a downward-closed family of sets ${\mathcal I}_x \subseteq \mathcal I$. Then an active element $i$ is selected if, together with the already selected elements, the obtained set is in $\mathcal I_x$. Moreover, we say the greedy OCRS is $c$-selectable if for all $x \in P$ and $i \in E$
\begin{equation*}
\Pr[I \cup \{i\} \in \mathcal I_x \text{~for all~} I \subseteq R(x) \text{~and~} I \in \mathcal I_x] \geq c.
\end{equation*}
\end{definition}
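As a hedged illustration of $c$-selectability, the following Python sketch estimates, by Monte Carlo simulation, the selectability of a naive greedy OCRS for a $1$-uniform matroid in which $\mathcal I_x$ consists of all singletons; this particular choice of $\mathcal I_x$ and the function names are ours, not taken from \cite{feldman2016online}.
\begin{verbatim}
import random

def estimate_selectability(x, trials=100_000):
    """Estimate min_i Pr[i can always be added] for the naive greedy
    OCRS on a 1-uniform matroid (I_x = all singletons)."""
    n = len(x)
    protected = [0] * n
    for _ in range(trials):
        active = [random.random() < x[j] for j in range(n)]
        for i in range(n):
            # For the 1-uniform matroid, i survives every feasible
            # sub-selection I of R(x) iff no other element is active.
            if not any(active[j] for j in range(n) if j != i):
                protected[i] += 1
    return min(protected) / trials

# e.g. estimate_selectability([0.25] * 4) is roughly (1 - 0.25) ** 3.
\end{verbatim}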
\section{Shared-Cost Model}
\label{cost-sharing}
We now consider the \emph{shared-cost model}, where the principal decides how to split each probing cost with the agent. This final model gives the principal more control over probing costs in another attempt to get constant-factor delegation gaps despite our previous impossibility results. Recall that in this setting, the principal starts by choosing how to split each probing cost, so that the agent pays $c'_i \in [0, c_i]$ and the principal pays the remaining cost $c_i - c'_i \in [0, c_i]$. This model is motivated not only by our earlier impossibilities, but also by settings in which the principal has the power to pay chosen percentages of different costs that the agent may incur. For example, an organization (modeled by the principal) might reimburse chosen percentages of travel and lodging expenses associated with interviewing candidates based on the total amount of cost and expected quality of the candidate. The interviewer (agent) can then choose to interview (probe) candidates and make recommendations of their own choosing, but they must pay the remaining cost on their own.
In Theorem \ref{thm:delegation_for_costsharing}, we show that there exist efficient constant-factor strategies for the principal for a certain class of downward-closed constraints. This positive result uses a reduction from greedy selectable OCRS to efficient delegation for the shared-cost model.
\begin{theorem}\label{thm:delegation_for_costsharing}
If there exists an $\alpha$-selectable greedy OCRS for the polytope $P_{\mathcal I} = \operatorname{conv}\{\mathrm{1}_S : S\in \mathcal I\}$, then there exists an $\alpha/2$-factor delegation strategy for the shared-cost model with inner constraint $\mathcal I$.
\end{theorem}
\begin{proof}
Let $\{ p_i \}_{i \in E}$ be the solution to the following optimization problem:
\begin{align*}
&p = \argmax_{q\in P_{\mathcal I}} \sum_{i\in E} g_i(q_i), \quad \text{ where } \quad g_i(p_i) = p_i \cdot \mathbb E[Z_i^{\min} ~|~ Z_i^{\min} \geq F_i^{-1}(1 - p_i)],
\end{align*}
where $F_i(z)$ for $i\in E$ is the cumulative distribution function of $Z_i^{\min}$, similar to \cite{feldman2016online}\footnote{We can also modify the optimization for discrete $Z_i^{\min}$ as in \cite{feldman2016online}.}. For $i\in E$, we set a threshold $t_i = \min\{\beta: F_i(\beta) \geq 1-p_i\}$. For any $p \in P_{\mathcal I}$, let $\mathcal I_p \subseteq \mathcal I$ be the downward-closed set system generated by an $\alpha$-selectable greedy OCRS with marginal probabilities $p$. The proof of Theorem 1.12 from \cite{feldman2016online} shows that for any online/adversarial item arrival order, the simple strategy that selects element $i$ if and only if $X_i\geq t_i$ and $S\cup i \in \mathcal I_p$ (where $S$ is the set of selected elements before the arrival of $i$) obtains at least $\alpha \cdot \mathbb{E} [ \max_{T\in \mathcal I}\sum_{i\in T} Z^{\min}_i]\geq \alpha \cdot \mathbb{E}[\texttt{OPT}]$ in expectation. The above strategy is an $\alpha$-factor greedy monotone strategy for the gambler against the almighty adversary, which can be described as $\mathcal A_t = \{\{(i,x_i): i\in S \}: S\in \mathcal I_{p} \text{ and } x_i \geq t_i \text{~for all~} i\in S\}.$
Given the independent distributions $\{\mu_i\}_{i\in E}$, the principal first computes $d_i = \mathbb{E}[Y_i ~|~ X_i \geq t_i] \cdot \Pr[X_i \geq t_i]$ for each element $i \in E$. If $d_i \le c_i$ for all elements $i \in E$, then the principal selects the agent's costs as $c'_i = d_i$ for all elements. After the cost division, the principal can define their strategy as follows: they accept elements only from the set $F = \{ i \in E : \tau_i\geq t_i \}$ where $\tau_i$ is the principal's cap value for $X_i$. Note that there does not exist $S \in \mathcal I_p$ that contains an element $j\in S$ not belonging to $F$ because the thresholds were defined for the truncated random variable $Z_i^{\min}$. The principal sets the acceptable outcomes as
\begin{equation*}
\mathcal R = \{\{(i, x_i, y_i) : i \in S\} : S \in \mathcal I_p \text{~and all~} (x_i, y_i) \in \operatorname*{supp}(\mu_i) \text{~and all~} x_i \geq t_i \}.
\end{equation*}
Given this delegation strategy, the agent has an expected utility of $\mathbb{E}[Y_i ~|~ X_i \geq t_i] \cdot \Pr[X_i \geq t_i] - c'_i = 0$ for each element $i$ that they might want to probe. Given any set of probed and selected elements $S$, the agent has expected utility $0$ for probing any additional element $i$ such that $S \cup i \in \mathcal I_p$. Hence, the agent has no incentive to deviate from the principal's $\alpha$-factor threshold picking strategy $(\mathcal A_t)$ (from Definition~\ref{def:threshold_picking}) for any probing order, where $\mathcal A_t$ is an $\alpha$-factor greedy monotone strategy for the prophet inequality with random variables $\{Z_i^{\min}\}$ against the almighty adversary defined earlier in the proof. Specifically, if they have already selected elements $S$ and are considering element $i$, they should probe $i$ if and only if $\tau_i\geq t_i$ (otherwise $Z_i^{\min}$ can not be more than $t_i$) and $S \cup i \in \mathcal I_p$, and they should select $i$ if and only if $X_i \geq t_i$. At any given time with selected elements $S$, the agent's expected utility from probing $i$ with $\tau_i\geq t_i$ and $S \cup i \in \mathcal I_p$ is $0$, so there is no incentive to deviate. Since the principal pays at most $c_i$ for the agent to probe each element $i$, Lemma~\ref{lem:reduction-pandora-to-prophet} implies that the principal obtains at least $\alpha \cdot \mathbb{E}[\texttt{OPT}]$ by delegating.
However, the agent's expected utility becomes nonzero for feasible elements when there exists some element $i \in E$ with $d_i > c_i$ because then the principal cannot set $c'_i$ any larger than $c_i$. Hence, the agent doesn't have $0$ expected utility for feasible elements and may not follow the principal's optimal search strategy. In such cases, the fact that the principal does not pay to probe helps us get a similar approximation.
Consider the case $d_i > c_i$ for all $i \in E$. If the principal only accepts elements with $X_i \geq t_i$ then they can safely ask the agent to pay the entire cost, i.e. $c'_i = c_i$. Again, consider the same acceptable set discussed earlier in the proof:
\begin{equation*}
\mathcal R = \{\{(i, x_i, y_i) : i \in S\} : S \in \mathcal I_p \text{~and~} S \subseteq F \text{~and all~} (x_i, y_i) \in \operatorname*{supp}(\mu_i) \text{~and all~} x_i \geq t_i \}.
\end{equation*}
Let $\texttt{Probed}$ and $S$ be the set of elements probed and selected, respectively, by the agent for some fixed realization of all random variables. It is easy to observe that there must be no $i \in \texttt{Probed} \setminus S$ with $X_i \geq t_i$ and $S \cup i \in \mathcal I_p$, otherwise the agent can improve their utility by selecting such an element. Moreover, there is no $i \in F \setminus \texttt{Probed}$ with $S \cup i \in \mathcal I_p$, otherwise, the agent can improve their expected utility, given the realizations of elements in $\texttt{Probed}$, by probing element $i$.
Therefore, for any fixed realization, we can consider an agent that executes the $\alpha$-factor greedy monotone strategy $\mathcal A_t$ for $\{Z_i^{\min}\}$ with the following element arrival order: first the elements in $S$, then the elements in $\texttt{Probed} \setminus S$, and finally the elements in $F \setminus \texttt{Probed}$. Strategy $\mathcal A_t$ will select all the elements in $S$, but $\mathcal A_t$ will not select any element in $\texttt{Probed} \setminus S$ because, as we already argued, there is no $i \in \texttt{Probed} \setminus S$ with $X_i \geq t_i$ and $S \cup i \in \mathcal I_p$. Moreover, $\mathcal A_t$ will not select any element in $F\setminus \texttt{Probed}$ because there is no $i \in F\setminus \texttt{Probed}$ with $S\cup i \in \mathcal I_p$. Therefore, the agent selects exactly the same elements that the $\alpha$-factor greedy monotone strategy $\mathcal A_t$ for $Z_i^{\min}$ would select for the described element arrival order and any realizations. Since the principal does not pay any cost to probe elements, the extra probed elements in $\texttt{Probed}$ do not affect the principal's utility. Therefore, the principal obtains at least $\alpha \cdot \mathbb{E} [ \max_{T\in \mathcal I}\sum_{i\in T} Z^{\min}_i]\geq \alpha \cdot \mathbb{E}[\texttt{OPT}]$ from delegation because $\mathcal A_t$ obtains at least $\alpha \cdot \mathbb{E} [ \max_{T\in \mathcal I}\sum_{i\in T} Z^{\min}_i]$ against the almighty adversary.
Finally, we consider the case in which some elements have $d_i \le c_i$ and others have $d_i > c_i$. We define $E_1 = \{i \in E: d_i \le c_i\}$ and $E_2 = \{i \in E: d_i > c_i\}$. The principal can restrict the agent to whichever of these two sets yields the greater $\mathbb{E}[\texttt{OPT}]$ when they follow the corresponding strategy described above. It is easy to show that the principal retains at least a $1/2$ fraction in this case compared to the others:
\begin{align*}
\mathbb{E}[\texttt{OPT}]
&= \mathbb{E}\left[ \max_{\substack{S_1 \subseteq E_1, S_2 \subseteq E_2 \\ S_1 \cup S_2 \in \mathcal I}}\left(\sum_{i \in S_1} X_i + \sum_{j \in S_2} X_j \right) \right] \leq \mathbb{E}\left[ \max_{\substack{S_1 \subseteq E_1 \\ S_1 \in \mathcal I}} \sum_{i \in S_1} X_i + \max_{\substack{S_2 \subseteq E_2 \\ S_2 \in \mathcal I}} \sum_{j \in S_2} X_j \right]\\
&\leq 2 \max \left\{ \mathbb{E}\left[ \max_{\substack{S_1 \subseteq E_1 \\ S_1 \in \mathcal I}} \sum_{i \in S_1} X_i \right], \mathbb{E}\left[ \max_{\substack{S_2 \subseteq E_2 \\ S_2 \in \mathcal I}} \sum_{j \in S_2} X_j \right] \right\}
\end{align*}
Combining the above arguments, we conclude that there exists an $\alpha/2$-factor delegation strategy for this instance.
\end{proof}
We note that, similarly to Theorem \ref{thm:efficient_delegation_from_OCRS}, this result can reduce deterministic delegation to randomized greedy OCRS.
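To make the cost split used in the proof of Theorem~\ref{thm:delegation_for_costsharing} concrete, the following Python sketch (ours; names illustrative) computes $d_i = \mathbb{E}[Y_i \mid X_i \geq t_i] \cdot \Pr[X_i \geq t_i]$ from a discrete joint distribution and caps the agent's share at the true cost.
\begin{verbatim}
def agent_share(mu, t, c):
    """mu: discrete joint distribution [(x, y, prob), ...] for one element;
    t: threshold t_i from the greedy OCRS; c: true probing cost c_i.
    Returns the agent's share c'_i = min(d_i, c_i)."""
    # d_i = E[Y_i | X_i >= t_i] * Pr[X_i >= t_i] = E[Y_i * 1{X_i >= t_i}]
    d = sum(p * y for x, y, p in mu if x >= t)
    # No direct transfers: the share cannot exceed the true cost.
    return min(d, c)
\end{verbatim}
When $d_i \le c_i$, setting $c'_i$ this way makes the agent's expected utility from probing element $i$ exactly $0$, as used in the proof.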
The following corollary shows that there exists a constant factor delegation gap for the shared-cost model with matroids, matching constraints, and knapsack constraints.
\begin{corollary}
There exist $\alpha$-factor delegation strategies for matroids, matching constraints, and knapsack constraints for the shared-cost model. Moreover, these constants are $\alpha = 1/8$, $\alpha = 1/4e$, and $\alpha = 3/4 - 1/\sqrt 2$ for the respective constraints.
\end{corollary}
As we discussed in Section~\ref{sec:model_variants}, the delegation gap for instances of the shared-cost model can be greater than $1$, meaning that the principal benefits from delegating (in expectation) and may choose to do so even if they have the ability to conduct the search on their own. However, we can construct an instance of this model for which the delegation gap is strictly less than $1$, showing that this is not possible in general.
\begin{proposition}
There exist instances of Pandora's box for the shared-cost model with delegation gap $1/2+\varepsilon$ for arbitrarily small $\varepsilon > 0$.
\end{proposition}
\begin{proof}
We can construct an instance with a $1$-uniform matroid constraint similar to \citep[Proposition 4.2]{bechtel2020delegated}. Note that the referenced impossibility has cost $0$ and still holds in the context of the shared-cost model, but we reproduce it here with positive (though negligible) costs.
For small $\varepsilon \ll 1$, let $X_1 = 1/\varepsilon$ with probability $\varepsilon$ and $0$ otherwise, and $Y_1 = 1 - \varepsilon$ with probability $\varepsilon$ and $0$ otherwise, independently of $X_1$. Let $X_2 = Y_2 = 1$ deterministically and set costs $c_1 = c_2 = \varepsilon^2$. We can compute $\mathbb{E}[\texttt{OPT}] = 2-\varepsilon -2\varepsilon^2 + \varepsilon^3 \geq 2-4\varepsilon$.
Consider any cost division $0 \le c'_1 \le c_1$ and $0 \le c'_2 \le c_2$. If the principal accepts element $2$, then the agent will always probe element $2$ and propose it. We can enumerate over all possible delegation strategies and show that $\mathbb{E}[\texttt{DEL}] \leq 1$ in all cases. This shows that the delegation gap is at most $1/(2-4\varepsilon)$, concluding the claim.
\end{proof}
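As a hedged numerical sanity check of the instance above (not part of the proof; all names ours), the following Python snippet evaluates the non-delegated utility $\mathbb{E}[\max_i \min(X_i, \tau_i)]$ and compares it with the closed form.
\begin{verbatim}
def opt_utility(eps):
    tau1 = 1.0 / eps - eps   # solves eps * (1/eps - tau) = eps**2
    tau2 = 1.0 - eps ** 2    # solves (1 - tau) = eps**2
    z2 = min(1.0, tau2)      # Z2 is deterministic
    # E[max(Z1, Z2)] with Z1 = min(1/eps, tau1) w.p. eps, else 0:
    return eps * max(min(1.0 / eps, tau1), z2) + (1 - eps) * z2

for eps in (0.1, 0.01, 0.001):
    print(eps, opt_utility(eps), 2 - eps - 2 * eps**2 + eps**3)
# Both columns agree, e.g. 1.881 for eps = 0.1.
\end{verbatim}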
We observe that the efficient delegation strategy for the shared-cost model constructed in Theorem~\ref{thm:delegation_for_costsharing} relies on a computation of $c'_i$ that uses information about the joint distribution $\mu_i$. In the following proposition, we show that if the principal has no information about the distribution of $Y_i$, then they cannot obtain a constant-factor delegation gap for the shared-cost model. This holds because, without any information about $Y_i$, the principal does not have enough information to compute a cost division for which they can guarantee that the agent will probe element $i$. We formalize this intuition in Proposition~\ref{prop:impos3}, which shows that the agent-agnostic delegation gap for the shared-cost model is $O(1/n^{1/4})$.
\begin{restatable}{proposition}{propimposnoinfoY}\label{prop:impos3}
There exists a family of instances of the shared-cost model with delegation gap $O(1/n^{1/4})$ when the principal has no information about $\{Y_i\}$.
\end{restatable}
\begin{proof}
Consider an instance on elements $E$ with $|E| = n$ and a $1$-uniform matroid constraint over $E$. For each element $i$, let the probing cost be $c_i = c = 1 - 2/ n^{1/4}$ and let the principal's utility be $X_i = \sqrt n$ with probability $1/\sqrt n$ and $X_i = 0$ otherwise. Following Proposition \ref{prop:impos2}, we have that $\mathbb{E}[\texttt{OPT}] = \Theta(n^{1/4})$. Now, consider any delegation mechanism for the principal for the shared-cost model. Let $c'_i \in [0, c_i]$ be the agent's share of the cost for each element $i$ in this mechanism, and let $\mathcal R$ be the set of acceptable solutions. Since the principal has no knowledge of the distributions of the agent's utilities, $\mathcal R$ can only depend on the principal's utilities $\{X_i\}$. Let $E_1 = \{i \in E: c'_i > 0\}$ and $E_2 = \{i \in E: c'_i = 0\}$ be a disjoint partition of $E$.
Now we will define the agent's utilities. For each element $i \in E_1$, let $Y_i \sim \texttt{Unif}[0,c'_i/2]$ when conditioned on $X_i = \sqrt n$, and $Y_i = n^2$ deterministically when conditioned on $X_i = 0$. For all $i \in E_2$, let $Y_i \sim \texttt{Unif}[e^n,3e^n]$ independent of $X_i$. First, we need to ensure that the described delegation instance gives the agent an incentive to participate even when they pay the entire cost, i.e. $\mathbb{E}[Y_i] > c_i$. For each element $i \in E_1$, we have $\mathbb{E}[Y_i] = \mathbb{E}[Y_i~|~X_i = \sqrt n]\Pr[X_i = \sqrt n] + \mathbb{E}[Y_i~|~X_i = 0]\Pr[X_i = 0] > (1-1/\sqrt n)n^2 > c_i$ and for $i\in E_2$, $\mathbb{E}[Y_i] = 2e^{n} > c_i$. Note that the principal has no information about $\{Y_i\}$.
Now, consider any single-proposal delegation with acceptable set $\mathcal R \subseteq \{\{(i,x_i)\}:i\in E,\, x_i\in \{\sqrt n , 0 \}\}$. We divide all elements $E$ into the following disjoint sets given $\mathcal R$:
\begin{align*}
F_1 = \{i\in E_1: (i,\sqrt n ) \in \mathcal R\land (i,0) \notin \mathcal R\} \quad & \quad
F_4 = \{i\in E_2: (i,\sqrt n ) \in \mathcal R\land (i,0) \notin \mathcal R\} \\
F_2 = \{i\in E_1: (i,\sqrt n ) \notin \mathcal R\land (i,0) \in \mathcal R\} \quad & \quad
F_5 = \{i\in E_2: (i,\sqrt n ) \notin \mathcal R\land (i,0) \in \mathcal R\} \\
F_3 = \{i\in E_1: (i,\sqrt n ) \in \mathcal R\land (i,0) \in \mathcal R\} \quad & \quad
F_6 = \{i\in E_2: (i,\sqrt n ) \in \mathcal R\land (i,0) \in \mathcal R\}
\end{align*}
The agent will never probe elements in $F_1$ because for $i\in E_1$, $\mathbb{E}[Y_i~|~X_i = \sqrt n]-c'_i <0$. The agent's optimal strategy is to probe elements in $V = F_4\cup F_5\cup F_6$ (with $|V| = k$) and propose any acceptable element with high $Y_i$. If they cannot find any acceptable element in $V$, then they probe elements in $F_3$ and then $F_2$ until they observe $X_i = 0$. If they fail to observe an element with $X_i = 0$, then they propose the element $i\in F_3$ with maximum $Y_i$. Given the agent's optimal strategy, we can bound the principal's optimal expected delegated utility as follows:
\begin{align}
\mathbb{E}[\texttt{DEL}_{\mathcal R}] &\leq \mathbb{E}[\texttt{DEL}_{\mathcal R}~|~\text{agent finds a feasible $i\in V$}] \cdot \Pr[\text{agent finds a feasible $i\in V$}] \notag \\
&\quad + \mathbb{E}[\texttt{DEL}_{\mathcal R}~|~\text{agent does not find a feasible $i\in V$}] \cdot \Pr[\text{agent does not find a feasible $i\in V$}]\notag \\
&\leq \sqrt n (1 - \left(1 - 1/\sqrt n\right)^k) - kc \notag \\
&\quad + \mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~ \exists i\in F_2\cup F_3: X_i = 0] \cdot \Pr[\exists i\in F_2\cup F_3: X_i = 0] \notag \\
&\quad + \mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~\nexists i\in F_2\cup F_3: X_i = 0] \cdot \Pr[\nexists i\in F_2\cup F_3: X_i = 0] \label{eq:bound_when_find_in_V} \\
& \leq [\sqrt n (1 - \left(1 - 1/\sqrt n\right)^k) - kc ]+ \mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~ \nexists i\in F_2\cup F_3: X_i = 0] \cdot \Pr[\nexists i\in F_2\cup F_3: X_i = 0] \label{eq:remove_negative_delegation}\\
&= O(1) + (1/\sqrt n)^{|F_2\cup F_3|} \sqrt n = O(1) \label{eq:when_no_x_is_zero}
\end{align}
Inequality \eqref{eq:bound_when_find_in_V} holds because $\Pr[\text{agent finds a feasible~} i \in V]$ is bounded by $1$. We can further bound $\mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~ \text{agent finds a feasible~}i\in V]$ by assuming that the agent proposes an element $i\in V$ with $X_i = \sqrt n$ whenever one exists. Inequality \eqref{eq:remove_negative_delegation} holds because whenever the agent finds $i\in F_2\cup F_3$ with $X_i = 0$, the principal's expected utility is negative, i.e. $\mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~ \exists i\in F_2\cup F_3: X_i = 0]\leq 0$. Equality \eqref{eq:when_no_x_is_zero} holds because $\sqrt n (1 - \left(1 - 1/\sqrt n\right)^k) - kc = O(1)$ for all $k \leq n$ (Proposition \ref{prop:impos2}) and we ignore the cost paid by the principal in $\mathbb{E}[\texttt{DEL}_{\mathcal R} ~|~ \nexists i\in F_2\cup F_3: X_i = 0]$. Hence, $\mathbb{E}[\texttt{DEL}_{\mathcal R}] = O(1)$, concluding the proof.
\end{proof}
\section{Standard Model Delegation}
\label{standard-model}
In this section, we consider the delegation gap of the standard model of the delegated Pandora's box problem. We start by looking at the binary model special case, and show that this model has constant-factor delegation gaps for matroid constraints. Then, in Section \ref{standard-model-impossibility}, we show that the standard model (without the binary assumption on $\mu_i$) does not admit constant delegation gaps in general, even for rank-one matroid constraints. Before getting to the main result for this model, we analyze the (non-delegated) Pandora's box problem with exogenous order as discussed in \cite{kleinberg2018delegated} for rank-one matroids, and extend their result to more general constraints.
\subsection{Non-delegated Generalized Pandora's Box with Exogenous Sequence}
\label{binary-model-preliminaries}
Consider a variant of the generalized Pandora's box problem, which we will call \emph{generalized Pandora's box with exogenous order}, in which the searcher is limited to considering elements in an order that is specified in advance as part of the instance. For each element in this order, the searcher can choose to skip the element without probing, or probe the element and either accept or reject based on the realization. Once the searcher makes a decision about the current element, they cannot undo this decision. This is an extension of a similarly-named model from \cite{kleinberg2018delegated}. We now define the threshold strategy for the Pandora's box problem with exogenous ordering. Recall that the cap value $\tau_i$ for an element $i$ is defined by $\mathbb{E}[(X_i - \tau_i)^+] = c_i$, where $X_i$ is the random value of the element and $c_i$ is its cost.
\begin{definition}[Threshold Strategy $(\mathcal A, \{\tau_i\} , \{X_i\})$] \label{def:threshold_picking}
Given a downward-closed family of solutions $\mathcal A$, the \emph{threshold strategy} defined by $\mathcal A$ functions as follows: Consider the searcher who has already accepted outcomes $\mathcal S = \{ (i_1, x_1), \dots (i_k, x_k) \}$ and is deciding what to do about element $i$. They should probe element $i$ if and only if $\mathcal S \cup (i, \tau_i) \in \mathcal A$. Furthermore, they should accept element $i$ if and only if $\mathcal S \cup (i, X_i) \in \mathcal A$.
\end{definition}
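The following Python sketch (ours; the oracle and names are illustrative) implements Definition~\ref{def:threshold_picking} directly: probing is gated by the cap outcome $(i, \tau_i)$ and acceptance by the realized outcome $(i, X_i)$.
\begin{verbatim}
def threshold_strategy(order, caps, draw, in_A):
    """order: the exogenous element sequence; caps: dict i -> tau_i;
    draw: callable i -> realized X_i (the probe itself pays c_i);
    in_A: membership oracle for the downward-closed family A of outcome sets."""
    accepted = set()  # accepted outcomes, stored as (element, value) pairs
    for i in order:
        if in_A(accepted | {(i, caps[i])}):   # probe i?
            x = draw(i)
            if in_A(accepted | {(i, x)}):     # accept i?
                accepted.add((i, x))
    return accepted
\end{verbatim}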
With this type of strategy in mind, we can extend the approximation of this problem from rank-one matroids in \cite{kleinberg2018delegated} to more general downward-closed constraints. Lemma \ref{lem:reduction-pandora-to-prophet}, which is a corollary of \cite[Theorem 5]{esfandiari2019online}, provides a reduction from generalized Pandora's box with exogenous ordering for arbitrary downward-closed constraints to adversarial greedy prophet inequalities.
\begin{lemma}\label{lem:reduction-pandora-to-prophet}
Let $J$ be an instance of the generalized prophet inequality problem with random variables $Z_i^{\min}=\min\{X_i,\tau_i\}$ for all $i \in E$ and constraint $\mathcal I$. If there exists an $\alpha$-factor greedy monotone strategy for $J$ against the almighty adversary, then there exists an $\alpha$-factor threshold strategy for the Pandora's box instance $I = (E, \{X_i\}, \mathcal I, \{c_i\})$ with exogenous ordering.
\end{lemma}
\begin{proof}
Corollary of Theorem 5 from \cite{esfandiari2019online}.
\begin{comment}
First, we show that for any given arrival order of elements, the greedy monotone strategy $(\mathcal A,\{Z_i^{\min}\})$ for the prophet inequality instance $J$ and the threshold picking strategy $(\mathcal A, \{\tau_i\}, \{X_i\})$ for the Pandora's box instance $I$ select the same elements. We prove this by induction over the order. Before the first element arrives, both strategies clearly select the same (empty) set of elements. Now, assume that before the arrival of element $i$ both strategies accepted the same set of elements $S$ with outcomes $\mathcal S$ (in their respective instances). When element $i$ arrives, the greedy monotone strategy $(\mathcal A, \{Z_i^{\min}\})$ selects $i$ for the instance $J$ if and only if $\mathcal S \cup (i, \min \{ X_i, \tau_i\}) \in \mathcal A$. This is equivalent to the threshold picking strategy $(\mathcal A, \{\tau_i\},\{ X_i\})$ selecting the element $i$ for the instance $I$, because $\mathcal S \cup (i, \tau_i) \in \mathcal A$ (it will probe) and $\mathcal S \cup (i, X_i) \in \mathcal A$ (it will select).
Now, we show that, for any given arrival order of elements, a greedy monotone strategy $(\mathcal A, \{Z_i^{\min}\})$ for prophet inequality instance $J $ and threshold picking strategy $(\mathcal A, \{\tau_i\}, \{X_i\})$ for the Pandora's box instance $I $ achieve the same utility. As we have already shown that both strategies select the same elements for their respective instances, it is enough to show that at each element's arrival, both strategies add the same expected utility for their respective instances. Let $S$ be the set of the selected elements with outcome $\mathcal S$ before element $i$ arrives. Let $t_i$ be the minimum threshold for which $\mathcal S \cup (i, \ell) \in \mathcal A$ for all $\ell \geq t_i$ for the generalized prophet inequality instance. If such a threshold does not exist, then $(\mathcal A, Z_i^{\min})$ for $J$ will not select $i$ for any value of realization and similarly $(\mathcal A, \tau_i, X_i)$ will not probe $i$ for the instance $I$. Therefore, if such a threshold exists and $\tau_i\geq t_i$ then the expected utility obtained by $(\mathcal A,Z_i^{\min})$ in $J$ is as follows:
\begin{align}
\mathbb{E} \left[Z_i^{\min} \mathbbm{1}[Z_i^{\min} \geq t_i] \right]
&= \mathbb{E} \left[X_i \mathbbm{1}[X_i \in [t_i,\tau_i)] \right] + \mathbb{E}\left[\tau_i \mathbbm{1}[X_i \geq \tau_i] \right] \notag \\
&= \mathbb{E}\left[X_i \mathbbm{1}[X_i \geq t_i] \right] + \mathbb{E} \left[(\tau_i - X_i)\mathbbm 1[X_i\geq \tau_i]\right] \notag \\
& =\mathbb{E}\left[X_i \mathbbm{1}[X_i \geq t_i] \right] - \mathbb{E} \left[(X_i-\tau_i)_+\right] = \mathbb{E}\left[X_i \mathbbm{1}[X_i \geq t_i] \right] - c_i.
\end{align}
Now, the increase in the expected utility for the same element $i$ for the instance $I$ by the threshold picking strategy $(\mathcal A, \{\tau_i\}, \{X_i\})$ is $\mathbb{E} \left[ X_i \mathbbm{1}[X_i \geq t_i] \right] - c_i$. In the other case, when $\tau_i <t_i$, $Z_i^{\min}\leq \tau_i <t_i$ and therefore both $(\mathcal A,\{Z_i^{\min}\})$ and $(\mathcal A,\{\tau_i\},\{X_i\})$ will not select and probe the element $i$ respectively for their respective instances $J$ and $I$. Thus, increase in expected utility is $0$ for both $I$ and $J$. Concluding the claim.
Since $(\mathcal A,\{Z_i^{\min}\})$ obtains at least $\alpha\cdot \mathbb{E}[\max_{T\in \mathcal I} \sum_{i\in T}Z^{\min}_i]$ for $I$, the threshold picking strategy $(\mathcal A, \tau_i, X_i)$ also obtains at least $\alpha \cdot \mathbb{E}[\max_{T\in \mathcal I} \sum_{i\in T}Z^{\min}_i]$ for the instance $J$. \cite{singla2018price} shows that the optimal utility for the Pandora's box instance satisfies $\mathbb{E}[\texttt{OPT}]\leq \mathbb{E}[\max_{T\in \mathcal I} \sum_{i\in T}Z^{\min}_i]$. Hence, the threshold picking strategy obtains at least $\alpha \cdot \mathbb{E}[\texttt{OPT}]$.
\end{comment}
\end{proof}
\subsection{Binary Model: Efficient Delegation for Matroids}
\label{binary-model}
Singla \cite{singla2018price} proposes an optimal strategy for Pandora's box with a matroid constraint that can be simplified in the binary setting as follows: probe elements one by one, starting from the element with the maximum cap value. Given the currently selected elements $S$, probe the next element $i$ with the maximum cap value such that $S\cup i \in \mathcal I$. After probing element $i$, select $i$ if and only if $X_i>0$.
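As an illustration, the following Python sketch (ours) implements this simplified strategy given a matroid independence oracle; it assumes every cap value is positive, which holds under the standing assumption $\mathbb{E}[X_i] > c_i$.
\begin{verbatim}
def binary_matroid_pandora(elements, caps, draw, is_independent):
    """elements: ground set; caps: dict i -> cap value tau_i;
    draw: i -> realized X_i (binary support, so 0 or x_i);
    is_independent: independence oracle of the matroid I."""
    selected, remaining = set(), set(elements)
    while True:
        feasible = [i for i in remaining
                    if is_independent(selected | {i})]
        if not feasible:
            return selected
        i = max(feasible, key=lambda j: caps[j])  # largest remaining cap
        remaining.remove(i)
        if draw(i) > 0:  # keep i iff the good outcome omega_i occurred
            selected.add(i)
\end{verbatim}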
Consider the binary delegated Pandora's box instance for constraint $\mathcal I$, where the distribution $\mu_i$ of every element $i \in E$ has support on exactly two outcomes: $\bot = (i, 0, 0)$ and $\omega_i = (i, x_i, y_i)$. In the following theorem, we show that the principal can design a $1/4$-factor strategy $\mathcal R$ for the standard delegation model with binary-support $\mu_i$ and a matroid constraint. The key idea is to use the reduction from Pandora's box with an exogenous order to prophet inequalities as described in Lemma~\ref{lem:reduction-pandora-to-prophet}.
\begin{theorem}
There exists a $1/4$-factor strategy for the binary model of delegated Pandora's box with a matroid constraint.
\end{theorem}
\begin{proof}
Take an instance of the binary model with elements $E$ such that for all $i \in E$, we have $(X_i,Y_i)=(x_i,y_i)$ with probability $p_i$ and $(X_i, Y_i) = (0, 0)$ otherwise. Consider a $1/4$-approximate greedy monotone strategy, as proposed in \cite{feldman2016online}, for the prophet inequality instance with random variables $Z_i^{\min} = \min\{X_i,\tau^x_i\}$ for all $i \in E$ and matroid constraint $\mathcal I$ against the almighty adversary. This strategy is defined by thresholds $t = \{t_i\}_{i \in E}$ and a matroid constraint $\mathcal I_{t}\subseteq \mathcal I$. Given any order of arrival of elements, the gambler selects element $i$ if and only if $Z^{\min}_i \geq t_i$ and the set of all accepted elements (including element $i$) is contained in $\mathcal I_{t}$. Without loss of generality, we assume that $t_i$ is such that $0 < t_i \le x_i$ for all $i \in E$. This is because the gambler has no incentive to accept elements of value $0$ and $\tau^x_i < x_i$ due to the assumption $\mathbb{E}[X_i]>c_i$.
Given thresholds $\{ t_i \}_{i \in E}$, the principal restricts the agent to elements in the set $E'= \{i\in E: \tau^x_i \geq t_i\}$. Let $\mathcal I_{t}^{E'}$ be the matroid constraint obtained by restricting $\mathcal I_{t}$ to the set of elements $E'\subseteq E$. We can describe the gambler's greedy monotone strategy as $\mathcal A = \{\{(i,z_i):i \in S \land z_i\geq t_i \}: S\in \mathcal I^{E'}_t\}$. Now, we define the principal's single proposal mechanism as follows:
\begin{equation*}
\mathcal R = \{\{(i,x_i,y_i): i\in S \}: S\in \mathcal I^{E'}_{t} \text{ and } x_i \geq t_i ~\forall i\in S\}.
\end{equation*}
For all $i\in E'$, $\mu_i$ has binary support, so $Y_i \geq \tau^y_i$ implies that $X_i \geq t_i$, where $\tau_i^y$ is the agent's cap value for element $i$ satisfying $\mathbb{E}[(Y_i - \tau^y_i)_+]=c_i$. Given this set of acceptable solutions $\mathcal R$, the agent faces an instance of Pandora's box on the set of elements $E'$ with matroid constraint $\mathcal I^{E'}_{t}$. Therefore, the agent's optimal strategy can be described as follows \cite{singla2018price}: given the current set of accepted elements $S\subseteq E'$ with $S\in \mathcal I^{E'}_{t}$, probe an element $i \in E' \setminus S$ such that $S \cup i \in \mathcal I^{E'}_{t}$ and $\tau^y_i$ is maximal. Then they will accept element $i$ if and only if $Y_i \geq \tau_i^y$, which is equivalent to selecting element $i$ if and only if $X_i \geq t_i$. Thus, the agent simply implements the threshold strategy $(\mathcal A,\{\tau_i\},\{X_i\})$ for the principal's Pandora's box instance with exogenous order equal to their probing order. Therefore, by Lemma~\ref{lem:reduction-pandora-to-prophet}, we conclude that the principal's expected delegated utility $\mathbb{E}[\texttt{DEL}_{\mathcal R}] \geq 1/4 \cdot \mathbb{E}[\texttt{OPT}]$.
\end{proof}
\subsection{Standard Model Impossibility}
\label{standard-model-impossibility}
Now we will consider the standard model of delegated Pandora's box and show that this problem does not have constant-factor delegation gaps in general, even for rank-one matroid constraints. In Proposition \ref{prop:impos1}, we present a family of instances of delegated Pandora's box for which the delegation gap is $O(1/n)$, where $n$ is the number of elements. The main challenge in this model is that, when the agent pays to probe, the principal needs to construct their acceptable set $\mathcal R$ such that the agent has an incentive to probe all desirable elements. For example, consider an element $i$ for which $c_i = 1 / \sqrt n$, $X_i = n$ with probability $1 / n$ and otherwise $X_i = 0$, and $Y_i = n$ independently with probability $1 / n$ and otherwise $Y_i = 0$. In this case, if the principal only accepts the outcome $X_i = n$, then the agent will not probe element $i$ because their expected utility from probing is $n \times \Pr[X_i=n] \Pr[Y_i=n] - 1 / \sqrt n < 0$ for $n > 1$. In order to ensure that the agent probes such elements, the principal might have to accept undesirable outcomes where $X_i = 0$. Hence, if there are multiple such elements, then the principal ends up accepting unwanted outcomes with high probability, which leads to an $O(1/n)$ delegation gap. The following proposition shows the claim formally.
\begin{proposition}\label{prop:impos1}
There exist instances of the standard model of delegated Pandora's box on $n$ elements for which the delegation gap is $O(\frac{1}{n})$.
\end{proposition}
\begin{proof}
For any positive integer $n > 1$ and real $0 < \varepsilon \le \frac{1}{2n}$, let $M$ be a positive integer such that $M \ge n / \varepsilon$ and consider the following instance of delegated Pandora's box. We have $n$ identical elements $E = \{1, \dots, n\}$ where each element $i$ has a probing cost $c_i = 1 - \varepsilon$ and random utilities $(X_i, Y_i) \sim \mu_i$. The principal's utility $X_i$ is $n$ with probability $\frac{1}{n}$ and $0$ otherwise. The agent's utility $Y_i$ is $M$ with probability $\frac{1}{M}$ independently of $X_i$ and $0$ otherwise. The constraint is a $1$-uniform matroid. We let the agent break ties in favor of the principal.
First, we will determine the principal's optimal non-delegated expected utility. This is given by the solution to Weitzman's Pandora's box problem. For each element $i$, we must determine the cap value $\tau_i$ such that $\mathbb{E} (X_i - \tau_i)^+ = c_i$. It's not hard to verify for this instance that $\tau^x_i = \varepsilon n$. Then the optimal solution guarantees an expected utility of $U = \mathbb{E} \max_i \min(X_i, \tau_i)$ where each $\min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $\frac{1}{n}$ and $0$ otherwise. Therefore, $\max_i \min(X_i, \tau_i)$ takes value $\varepsilon n$ with probability $1 - \left( 1 - \frac{1}{n} \right)^n$ and the principal gets expected utility
\begin{equation*}
\mathbb{E}[\texttt{OPT}] = \varepsilon n \left( 1 - \left( 1 - \frac{1}{n} \right)^n \right) \ge \varepsilon n \left( 1 - \frac{1}{e} \right).
\end{equation*}
Now, we will bound the principal's delegated expected utility. Consider an arbitrary acceptable set $\mathcal R$ that the principal might commit to. Since the constraint is $1$-uniform, $\mathcal R$ consists of a set of singleton outcomes. Observe that every element $i$ evaluates to one of four tagged outcomes $(i, n, M)$, $(i, n, 0)$, $(i, 0, M)$, and $(i, 0, 0)$ with probabilities $\frac{1}{n M}$, $\frac{1}{n} \left( 1 - \frac{1}{M} \right) $, $\frac{1}{M} \left( 1 - \frac{1}{n} \right)$, and $\left( 1 - \frac{1}{n} \right) \left( 1 - \frac{1}{M} \right)$, respectively.
Given $\mathcal R$, let $E^* \subseteq E$ be the subset of elements $i$ for which $(i, 0, M) \in \mathcal R$ and $(i, n, M) \in \mathcal R$, and let $k = |{E^*}|$. Consider any element $i \notin E^*$. If outcome $(i, 0, M) \notin \mathcal R$, then the agent's increase in expected utility from probing $i$ is at most $M \cdot \frac{1}{M} \left( 1 - \frac{1}{n} \right) - (1 - \varepsilon) = \varepsilon - \frac{1}{n} < 0$, so they have no incentive to ever probe $i$. Similarly, if outcome $(i, n, M) \notin \mathcal R$, then the agent's increase in expected utility from probing $i$ is at most $M \cdot \frac{1}{n M} - (1 - \varepsilon) = \varepsilon - \left( 1 - \frac{1}{n} \right) < 0$. Therefore, the agent will probe no more than the $k$ elements in $E^*$. If $k = 0$, then the agent will not probe anything and both will get $0$ utility. For the remainder of the proof, we assume $k > 0$.
The agent now faces an instance of the Pandora's box problem, so their optimal strategy is to probe elements in order of weakly decreasing cap value (among non-negative cap values) and accept the first outcome whose value is above its cap. For all elements $i \in E^*$, we can calculate that the agent's cap is $\varepsilon M > 0$. Then their optimal strategy is to probe elements from $E^*$ in some order $1, \dots, k$ until a value of $M$ appears, which they will propose. If no value of $M$ appears after probing all of $E^*$, then they will stop probing and choose some outcome to propose. Since all probed outcomes have $0$ utility to the agent, they will choose an outcome to propose that maximizes the principal's utility.
Consider the utility that the principal gets when the agent finds an outcome of value $M$. Among the $k = |{E^*}|$ elements that the agent might probe, they find a value of $M$ with probability
$1 - \left( 1 - \frac{1}{M} \right)^k \le \frac{k}{M} \le \frac{\varepsilon k}{n} \le \varepsilon.$
Since the principal's utility for the proposed outcome is independent of the agent's, it will have value $n$ for the principal with probability $\frac{1}{n}$. Since $k \ge 1$, the principal pays a cost of $1 - \varepsilon$ for the first probe. Therefore, the principal expects a utility of at most $\varepsilon \left( n \cdot \frac{1}{n} - (1 - \varepsilon) \right) = \varepsilon^2$ in the event when the agent finds an outcome with value $M$.
Now, with probability $\left( 1 - \frac{1}{M} \right)^k \ge 1 - \varepsilon$, the agent doesn't find any outcomes of value $M$. Then the principal pays a cost of $k(1 - \varepsilon)$ in order to probe all $k$ elements in $E^*$. Since the agent breaks ties in favor of the principal, they will propose any acceptable outcomes of value $n$ to the principal. There exists such an outcome with probability at most $1 - \left( 1 - \frac{1}{n} \right)^k$. Therefore, the principal expects a utility of at most
\begin{equation*}
n \left(1 - \left( 1 - \frac{1}{n} \right)^k \right) - k(1 - \varepsilon) \le n \left(1 - \left( 1 - \frac{1}{n} \right)^k \right) - k \left( 1 - \frac{1}{2n} \right)
\end{equation*}
in the event when the agent does not find an outcome with value $M$. At $k = 1$, this expression evaluates to $\frac{1}{2n} = \varepsilon$. At $k = 2$ it evaluates to $0$. With some calculus and some algebraic manipulations, we can show that this expression is negative for all $k > 2$.
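As a hedged numerical check of this claim (in place of the omitted calculus; names ours), the following Python snippet verifies that $f(k) = n \left(1 - \left(1 - \frac{1}{n}\right)^k\right) - k\left(1 - \frac{1}{2n}\right)$ is nonpositive for $2 \le k \le n$ over a range of $n$:
\begin{verbatim}
def f(n, k):
    return n * (1 - (1 - 1 / n) ** k) - k * (1 - 1 / (2 * n))

for n in (2, 5, 10, 100, 1000):
    # f(n, 2) = 0 exactly; f(n, k) < 0 for k > 2 (up to float error).
    assert all(f(n, k) < 1e-9 for k in range(2, n + 1)), n
\end{verbatim}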
Putting everything together, the principal's delegated expected utility is at most $\varepsilon + \varepsilon^2$, while their non-delegated expected utility is at least $\varepsilon n \left( 1 - \frac{1}{e} \right)$. Therefore, the delegation gap on this instance approaches $\frac{1}{n (1 - 1 / e)} = O(\frac{1}{n})$ as $\varepsilon$ approaches $0$.
\end{proof}
\section{Introduction}
To date, most studies of ferroelectric materials have concentrated on transition metal oxide systems such as PbTiO${_3}$ and BaTiO${_3}$, which feature highly ionic interactions and large Born effective charges. Unfortunately, most known ferroelectric thin films cannot maintain the desired spontaneous electric polarization with decreasing thickness \cite{junquera2003critical,sai2005ferroelectricity}. In these materials, termed ``proper ferroelectrics,'' the loss of polarization results from a competing depolarization field that grows in relative strength as the material gets thinner. To overcome this challenge, recent first-principles calculations predict a new family of ferroelectric materials: $ABC$ semiconductors, also known as hexagonal Heuslers \cite{bennett2012hexagonal}. Unlike conventional proper ferroelectrics, many of these materials are predicted to be ``hyperferroelectric,'' proper ferroelectrics that can retain long-range polarization under large depolarization fields \cite{garrity2014hyperferroelectrics}. Compared with most known ferroelectrics, the hexagonal Heusler ferroelectric materials feature covalent bonding, smaller Born effective charges, and smaller band gaps \cite{garrity2014hyperferroelectrics}. Furthermore, while it is difficult to integrate oxide ferroelectrics with commonly used semiconductors \cite{demkov2014integration}, these hexagonal half-Heusler compounds are readily lattice matched to III-V semiconductors \cite{kawasaki2019heusler}.
Of the predicted compounds, hexagonal LiZnSb is one of the more promising hyperferroelectric candidate materials. Density functional theory calculations suggest LiZnSb should have a polarization of 0.56 $C/m^2$, comparable to BaTiO$_3$ \cite{garrity2014hyperferroelectrics, bennett2012hexagonal}. In this compound, the ZnSb atoms form a hexagonal wurtzite structure and the Li atoms stuff the interstitial sites. A significant challenge, however, is the existence of a competing nonpolar cubic polymorph (Fig. \ref{struc}(b)), which differs in formation energy from the desired hexagonal phase by only a few meV per formula unit \cite{white2016polytype}. As such, the phase purity of LiZnSb is highly dependent on synthesis route \cite{song2019creation, white2016polytype}. Single crystalline hexagonal films, which are necessary for devices, have not yet been demonstrated. Ferroelectric switching, either in bulk or thin film form, has not yet been reported for any of the $ABC$ ferroelectric candidates.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{structure.pdf}
\caption {\textbf{Crystal structures of hexagonal and cubic LiZnSb.} (a) Hexagonal LiZnSb (\textit{LiGaGe}-type structure), which consists of a wurtzite ZnSb sublattice (yellow and blue atoms) that is stuffed with Li (red atoms). (b) Cubic LiZnSb (half Heusler structure), which consists of a zincblende ZnSb sublattice that is stuffed with Li. These two polymorphs are related by \textit{AB-AB} (hexagonal) versus \textit{ABC-ABC} (cubic) stacking along the $[0001]_{h} \parallel [111]_c$ axes. \textit{h} and \textit{c} denote cubic and hexagonal, respectively. (c) Top-down view of a single monolayer in $(0001)_h \parallel (111)_c$ orientation. In this orientation, a single monolayer of the cubic and hexagonal phase are indistinguishable. }
\label{struc}
\end{figure}
Here, we use molecular-beam epitaxy (MBE) to demonstrate the first growth of single-crystalline, hexagonal LiZnSb thin films. Based on the high volatility of all three elements in this compound \cite{alcock1984vapour}, especially Zn, we find a low temperature ($\leq 175 \degree$C), high Zn flux regime in which the hexagonal polymorph is stabilized over competing phases. We demonstrate the epitaxial growth of LiZnSb films on GaSb (111)B with a sharp interface, as established by X-ray diffraction (XRD), reflection high-energy electron diffraction (RHEED), and scanning transmission electron microscopy (STEM).
\section{Methods}
Epitaxial LiZnSb films were grown in a Veeco GEN 10 MBE system using the PARADIM thin film synthesis facility at Cornell University, an NSF-supported Materials Innovation Platform [www.PARADIM.org]. The typical layer structure consists of 100 nm Zn cap / 20-30 nm LiZnSb / 40-100 nm GaSb buffer / GaSb (111)B substrate, corresponding to a 3.5\% compressive lattice mismatch. Substrates were rinsed with isopropanol followed by de-ionized water and then blow dried with nitrogen before being loaded into the MBE chamber. Following thermal desorption of the native oxide under an Sb$_4$ flux, a GaSb buffer layer was grown at 490 $\degree$C, as measured by a thermocouple calibrated to the oxide-desorption temperature of GaSb. For GaSb growth we use a Sb/Ga atomic flux ratio of 3 as measured by a quartz crystal microbalance. Samples were then cooled under an Sb$_4$ flux to temperatures in the range of 100$\degree$C to 350$\degree$C, before initiating the LiZnSb growth.
For LiZnSb growth, we use standard low-temperature effusion cells loaded with elemental Sb, elemental Zn, and a Li-Sn alloy with a starting composition of about Li$_{0.2}$Sn$_{0.8}$. The Li-Sn alloy, which consists of a mixture of Li$_2$Sn$_5$ and Sn, was used as an alternative to elemental Li due to its increased oxidation resistance. This Li-Sn alloy is prepared in a glove box, but once prepared, it can be exposed to air, greatly simplifying source loading and MBE maintenance. Since the vapor pressure of Li is more than 10$^7$ times larger than the vapor pressure of Sn at the Li-Sn cell temperature of 500$\degree$C to 670$\degree$C, we expect the Sn incorporation into our films to be negligible \cite{vaporhonig}. Due to the high relative volatility of Zn compared to Li and Sb \cite{alcock1984vapour}, we use an excess Zn/Sb atomic flux ratio of 5-25, and Li/Sb atomic flux ratios near 1. These correspond to Zn fluxes of order 10$^{14}$ to 10$^{15}$ atom/cm$^2\cdot$s, and Li and Sb fluxes of order 10$^{13}$ atom/cm$^2\cdot$s. In this regime the resulting film crystal structure is weakly dependent on the relative Zn overpressure, and depends more strongly on growth temperature and Li/Sb flux ratio. After LiZnSb growth, samples were cooled to room temperature under a Zn flux, in order to compensate for Zn desorption. Below 50$\degree$C the excess Zn begins to stick and form a cap. An epitaxial capping layer of Zn was deposited to protect the sample upon removal from vacuum.
For TEM measurements, LiZnSb cross section samples were prepared with a focused ion beam (FIB), followed by final thinning in a Fischione Model 1040 Nanomill using Ar$^+$ ions at 900 V. Samples were stored in vacuum and cleaned in a GV10x DS Asher cleaner run at a power of 20 W for 10 min before being transferred into the TEM column. A probe corrected Thermo Fisher Titan STEM operated at 200 kV was used to analyze the sample. An electron probe with 24.5 mrad probe semi-convergence angle and 18.9 pA beam current was formed, achieving sub-Angstrom spatial resolution. High angle annular dark field (HAADF) images were recorded with a Fischione 3000 annular detector covering collection angles ranging from 53.9 mrad to 269.5 mrad.
We performed first-principles density functional theory (DFT) calculations in the local density approximation using ABINIT \cite{gonze2016recent}. The projector augmented wave method \cite{jollet2014generation} with pseudopotentials containing 3 valence electrons for Li ($1s^2 2s^1 2p^0$), 12 for Zn ($4s^2 3d^{10} 4p^0$), and 5 for Sb ($5s^2 5p^3$) was used. An energy cutoff of 680 eV was used for all calculations. The computed lattice constant for the cubic structure, using a $10 \times 10 \times 10$ Monkhorst-Pack $k$-point mesh, is 6.14 \AA, and the computed lattice constants for the hexagonal structure, using a $16 \times 16 \times 12$ mesh, are $a = 4.34$ \AA\ and $c = 7.03$ \AA, in good agreement with previous calculations \cite{bennett2012hexagonal, white2016polytype, toberer2009thermoelectric} and experiments \cite{white2016polytype, song2019creation, toberer2009thermoelectric, nie2014lithiation}. The effects of epitaxial strain in the (0001) plane were investigated through the strained bulk approach, with $a$ ranging from 4.08 \AA\ to 4.60 \AA, corresponding to 6\% compressive and tensile strains. We imposed the epitaxial constraint on the cubic structure by treating the cubic lattice as rhombohedral ($\alpha$ = 120$^\circ$), with a hexagonal supercell using a $16 \times 16 \times 8$ $k$-point mesh. We computed the energy to remove one Li atom from a 72 atom hexagonal supercell, with the supercells in the cubic and hexagonal structures having almost identical shapes, using an $8 \times 8 \times 4$ $k$-point mesh.
\section{Results and Discussion}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{RHEED.pdf}
\caption{\textbf{Reflection high energy electron diffraction patterns at various stages for Zn-capped LiZnSb films on GaSb (111)B.} (a,b) GaSb buffer layer. (c,d) LiZnSb after 30 minutes (24 nm) of growth at 190 $\degree$C with atomic flux ratios of Li/Sb = 1.1 and Zn/Sb = 13. (e,f) Epitaxial Zn cap. Left column: electron beam oriented along $[ \bar{1} 1 0]_{c} \parallel [1 1 \bar{2} 0]_{h}$. Right column: beam oriented along $[\bar{1} \bar{1} 2]_{c} \parallel [ 2 \bar{1} \bar{1} 0]_{h}$.}
\label{rheed}
\end{figure}
Figure \ref{rheed} shows typical RHEED patterns following the growth sequence. The GaSb buffer layer shows a sharp and streaky $(1 \times 12)$ pattern, indicative of smooth growth (Figs. \ref{rheed}(a) and \ref{rheed}(b)). For the LiZnSb layers (Figs. \ref{rheed}(c) and \ref{rheed}(d)), sharp and streaky $(1\times 1)$ patterns are observed over a wide range of Li/Sb flux ratios (0.4 to 2) and growth temperatures (125 to 350 $\degree$C). For growth temperatures above 225 $\degree$C, even though the RHEED shows a sharp and streaky $(1\times 1)$ pattern indicative of changes in surface termination, no bulk reflections from Li-Zn-Sb phases are observed by post-growth X-ray diffraction, indicating minimal LiZnSb sticking on the surface at elevated temperatures. By lowering the temperature below 225 $\degree$C, XRD signals from film reflections appear.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{XRD.pdf}
\caption{\textbf{Distinguishing hexagonal from cubic LiZnSb by x-ray diffraction (Cu K$_{\alpha}$).} (a) Out-of-plane $\theta-2\theta$ scans for samples grown at two different temperatures. The sample grown at T$_{growth}$=190 $\degree$C (red curve) shows a mixture of hexagonal $000l_h$ and cubic $lll_c$ reflections. The sample grown at a lower temperature of 150 $\degree$C (blue curve) shows only hexagonal $000l_h$ reflections. The inset shows a zoom-in near the $0002_{h}$ reflection of the T$_{growth}$=150 $\degree$C sample. The Kiessig fringe spacing corresponds to a thickness of 24 nm. (b) In-plane $\phi$ scans of the T$_{growth}$=190 $\degree$C mixed-phase sample. Both hexagonal $10\bar{1}3_h$ and cubic $220_c$ LiZnSb reflections are present. (c) In-plane $\phi$ scans of the pure hexagonal LiZnSb sample grown at T$_{growth}$=150 $\degree$C.}
\label{xrd}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{TEM.pdf}
\caption{\textbf{Cross-sectional STEM along a hexagonal $[11\bar{2}0]$ zone axis. The growth direction points upwards.} (a) Mixed-phase sample. In the STEM image on the left side, distinct layers with the hexagonal structure and the cubic structure are observed. Red lines denote the interfaces between the hexagonal and cubic regions. Arrows denote stacking faults. On the right side, magnified STEM images of the cubic structure and the hexagonal structure are displayed. (b) STEM image of a phase-pure hexagonal LiZnSb sample at the interface between the GaSb substrate and hexagonal LiZnSb. The inset is a high resolution STEM image of the hexagonal phase. Because of their low atomic number, Li atoms cannot be detected in HAADF-STEM, so only the Zn and Sb atoms are visible. The schematic crystal structures are placed on top of the STEM images and the color coding is as follows: red spheres are Li, yellow spheres are Zn, and blue spheres are Sb.}
\label{stem}
\end{figure*}
Figure \ref{xrd}(a) shows the XRD patterns (Cu K$_\alpha$) for two samples, one grown at $190\degree$C and the other grown at $150 \degree$C. For the higher temperature sample, two sets of reflections are observed, corresponding to cubic $lll_c$ reflections and hexagonal $000l_h$ reflections. For the sample grown at lower temperature, only one set of reflections is observed. In the $190 \degree$C sample, the lower angle reflections ($2\theta = 24.29 \degree$ and $49.63 \degree$) correspond to an out-of-plane $d_\perp$ spacing of 7.34 \AA, and the higher angle reflections ($2\theta = 24.93 \degree$ and $51.06 \degree$) correspond to $d_\perp=7.13$ \AA. In comparison, previous measurements of bulk cubic LiZnSb report $2d_{111}=a \frac{2\sqrt 3}{3}=7.19$ \AA\ ($a=6.23$ \AA\ \cite{white2016polytype,white2018expanding}). For bulk hexagonal LiZnSb the experimental lattice parameters range from $c=7.15$ \AA\ to 7.24 \AA\ depending on Li stoichiometry \cite{nie2014lithiation, song2019creation, toberer2009thermoelectric}, and $c=6.02$ \AA\ for 2D-ZnSb \cite{song2019creation}. Our measured values of $d_\perp$ fall within the range of these reports, and therefore we cannot make an assignment of cubic versus hexagonal reflections from the magnitudes of $d_\perp$ alone.
To distinguish cubic from hexagonal LiZnSb, we perform in-plane $\phi$ scans (Figs. \ref{xrd}(b) and \ref{xrd}(c)). For the higher growth temperature sample we observe both the cubic $220_c$ and hexagonal $10\bar{1}3_h$ LiZnSb reflections, while for the lower temperature sample we observe only the hexagonal $10\bar{1}3_h$ reflections. From these measurements, we determine that the lower angle reflections correspond to the hexagonal phase with $c=7.34$ \AA, while the higher angle reflections correspond to the cubic phase with $2 d_{\perp,111}=7.13$ \AA. Projecting the measured $10\bar{1}3_h$ and $220_c$ reflections to the growth plane, we find in-plane lattice parameters of $a=4.43$ \AA\ for hexagonal and $d_{\parallel,110}=4.39$ \AA\ for cubic, respectively. These measurements are in good agreement with previous measurements on bulk samples, which report $a=4.43$ \AA\ for hexagonal \cite{toberer2009thermoelectric, white2016polytype, song2019creation} and $d_{110}=a/\sqrt 2=4.41$ \AA\ for cubic \cite{white2016polytype, white2018expanding}. For the single-phase hexagonal film grown at $150 \degree$C, we also observe finite thickness fringes in the $2\theta$ scan (Fig. \ref{xrd}(a), inset), indicative of sharp interfaces between film and substrate. These results suggest that lowering the growth temperature produces phase-pure hexagonal LiZnSb films, while higher temperature growth results in a mixture of cubic and hexagonal polymorphs.
Our assignment of cubic and hexagonal LiZnSb is corroborated by cross-sectional scanning transmission electron microscopy (STEM). For the higher temperature sample (Fig. \ref{stem}(a)) we observe regions of both $ABC-ABC$ and $AB-AB$ stacking along the growth direction, corresponding to cubic and hexagonal phases, respectively. In contrast, for the low temperature sample we observe only the hexagonal phase with $AB-AB$ stacking (Fig. \ref{stem}(b)). STEM of this phase-pure hexagonal sample also shows a sharp interface between the hexagonal LiZnSb film and the cubic GaSb (111)B substrate, consistent with the sharp Kiessig fringes observed by XRD.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{summary.pdf}
\caption{(a) Diagram showing what phases form in thin films grown on GaSb (111)B at different growth temperatures and Li/Sb flux ratios. \textit{``c''} and \textit{``h''} represent the cubic and hexagonal LiZnSb, respectively. (b) $\theta-2\theta$ scans around the GaSb 222 reflection of 20-30 nm thick films grown at fixed Li/Sb flux ratio in the range of 1.03 to 1.07, with varying growth temperature. The corresponding cut through the growth parameter diagram is denoted by a dashed line in (a). (c) $\theta-2\theta$ scans near the GaSb 222 reflection for samples grown at fixed temperature ($167 \pm 7\degree$C), with varying Li/Sb flux ratios. (d) $\theta-2\theta$ scans near the GaSb 222 reflection for ZnSb samples grown at fixed temperature ($212 \pm 12\degree$C), for varying Li/Sb flux ratios.}
\label{phasediagram}
\end{figure}
The phase diagram for MBE growth is summarized in Figure \ref{phasediagram}(a). For a finite region centered near a Li/Sb flux ratio of 1, we find that decreasing the substrate temperature below 175 $\degree$C favors the formation of pure-phase hexagonal films (Fig. \ref{phasediagram}(b)). For fixed growth temperatures below 175 $\degree$C, increasing the Li/Sb flux ratio beyond 1 leads to the formation of Li$_2$ZnSb with the cubic full Heusler structure (Fig. \ref{phasediagram}(c)). For growth at moderate temperatures of 200-225$\degree$C, decreasing the Li/Sb flux ratio below 0.8 leads to the formation of hexagonal ZnSb (Fig. \ref{phasediagram}(d)), the same phase found by Li de-intercalation of hexagonal LiZnSb \cite{song2019creation}.
This result is somewhat surprising in light of our DFT calculations for bulk LiZnSb, which show that the cubic phase has lower energy than the hexagonal phase by about 35 meV per formula unit, comparable to the previously reported value of 30 meV \cite{white2016polytype}, and hence the cubic is the expected stable phase. The results are similar using both LDA (local density approximation) and GGA (generalized gradient approximation) functionals \cite{white2018expanding}. Note, however, that 30-35 meV per formula unit is similar in magnitude to the thermal energy at a growth temperature of $200\degree$C (473~K), $k_B T \approx 40$~meV. Therefore we do not expect a strong thermodynamic driving force to prefer one phase over the other. Although a mixture of hexagonal and cubic polymorphs is seen at higher growth temperatures, at low temperature only the hexagonal polymorph is seen.
The formation of the higher-energy hexagonal phase does not appear to be related to epitaxial strain stabilization. In-plane lattice parameters of our MBE grown films appear to be relaxed from that of the GaSb substrate (a = 4.43 \AA\ for hexagonal LiZnSb, $d_{\parallel, 110} = 4.39$ \AA\ for cubic LiZnSb, and $d_{\parallel, 110} = 4.31$ \AA\ for the GaSb substrate). Furthermore, our DFT calculations for the cubic-hexagonal energy difference for epitaxially strained films as a function of strain show that the energy difference of 35 meV/fu is quite insensitive to strain in the range from -6\% to +6\%.
Finally, we checked the effects of Li stoichiometry on the cubic-hexagonal energy difference. Previous Li de-intercalation studies of LiZnSb suggest that LiZnSb is stable over a range of Li composition \cite{song2019creation}, ranging from stoichiometric LiZnSb to the layered 2D polymorph of ZnSb. From first-principles calculations for the relative energy of the cubic and hexagonal phases with one Li removed from each 72 atom supercell, we find that while it is slightly more energetically favorable to form a Li vacancy in the hexagonal structure than in the cubic structure, a vacancy concentration of approximately 50\% would be required for this energy to stabilize the hexagonal phase relative to the cubic phase. Our experimental lattice parameter of $c = 7.34$ \AA\ is much closer to that of nominally stoichiometric bulk LiZnSb ($c = 7.24$ \AA) than to that of 2D ZnSb ($c = 6.02$ \AA). Therefore it is unlikely that Li vacancies are responsible for stabilizing the hexagonal phase.
Given the very small difference in formation energies for cubic and hexagonal compared to $k_B T$, and relative insensitivity to strain and Li stoichiometry, the most likely reason for stabilizing the hexagonal phase at low temperature is kinetics. In support of this idea, we find that after extended exposure to the 200 keV electron beam during TEM measurements, the relative volume fraction of cubic to hexagonal phase increases. Recent wet synthesis of LiZnSb also suggests that kinetics plays a strong role, as the hexagonal phase is favored at lower temperatures and shorter times, while the cubic phase is favored at higher temperatures and longer times \cite{white2018expanding}. Note, however, that the kinetic pathway for a wet synthesis is very different than epitaxy from vapor during MBE growth. Although they are often small, finite temperature effects or changes in the Zn chemical potential may modify the true formation energies sufficiently to change phase selection in this case.
\section{Conclusion}
In this paper, we presented the first epitaxial growth of LiZnSb thin films and showed that there is a wide adsorption-controlled MBE growth window in which the hexagonal phase is stabilized. This study of the MBE growth of LiZnSb provides solutions to the obstacles in growing single crystalline epitaxial films of $ABC$ hyperferroelectric candidates \cite{bennett2012hexagonal,garrity2014hyperferroelectrics} which are composed entirely of elements with relatively high vapor pressures. When combined with metallic $ABC$ films, e.g., LaPtSb and LaAuSb \cite{du2019high, strohbeen2019electronically}, the family of hexagonal Heuslers provides a platform for all-epitaxial ferroelectric and polar metal heterostructures.
\section{Acknowledgments}
This work was supported by the United States Army Research Office (ARO Award number W911NF-17-1-0254). Synthesis efforts at the PARADIM facility were supported by the National Science Foundation (Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM)) under Cooperative Agreement No. DMR-1539918. High-resolution STEM characterization was supported by the Department of Energy Basic Energy Science (DE-FG02-08ER46547), with facilities supported by Wisconsin MRSEC (DMR-1720415). DFT calculations were supported by the Office of Naval Research (ONR N00014-17-1-2770).
\bibliographystyle{apsrev}
\section{Introduction}
In quantum key distribution (QKD), two parties, Alice and Bob, want to communicate in a secure fashion despite the presence of Eve, who is eavesdropping on their communication channel. They do this through establishing a cryptographic key that is known only to them and no one else \cite{Experimental_QKD,BB84}. However, Alice cannot simply send Bob a key over their communication channel, as Eve will also learn the key by eavesdropping. Therefore, protocols are needed which can distribute an identical key to Alice and Bob over an insecure channel, without Eve discovering it.
Currently, protocols exist that can accomplish this, though many methods of classical encryption base their security on the fact that certain mathematical operations, such as factorising large semiprime numbers, are very difficult to perform using current technology \cite{RSA}. However there is no reason to assume that solving these problems within a reasonable timeframe will continue to be difficult in the future as computing power increases and new algorithms are created.
In quantum protocols on the other hand, we make the assumption that Eve has access to arbitrarily large amounts of computing power while still being able to establish secure communication between Alice and Bob. This is done by basing security on restrictions imposed by the laws of quantum mechanics \cite{BB84}, such as the inability to measure a quantum state without affecting the system. This cannot be overcome through any amount of computing power.
Currently, most QKD protocols use coherent light, produced by lasers, as a method of generating secure keys. An example of this is the Gaussian Modulated Coherent State (GMCS) protocol \cite{GMCS_QKD,ExperimentalPassive,Passive}, where the key is encoded in randomly chosen quadratures of a beam described by randomly distributed coherent states. However, recently more analysis has been done concerning the use of thermal states in QKD \cite{Thermal_1,Thermal_2,Thermal_3}. These involve splitting a beam emitted by a thermal source at a beam splitter and sending the outputs to Alice and Bob respectively. Previous work concerning thermal states showed that they exhibit Hanbury Brown and Twiss correlations \cite{HBT} when split at a beam splitter, and quantum discord, a requirement for quantum key distribution \cite{Discord}.
One of the main factors limiting thermal methods is that noise and thermalisation of states are seen as detrimental for QKD protocols \cite{Noisy_QKD}; however, work in this area is valuable due to the widespread use of microwaves in modern wireless communication, such as in WiFi and Bluetooth, in which thermal state QKD could be applied. Coherent state QKD is not suitable for these applications as the devices involved do not broadcast such states.
Here, we analyse a central broadcast protocol using a thermal input, with Eve intercepting the beam sent to Bob in order to eavesdrop. Monte Carlo simulations of the protocol are performed to produce sample bit strings, setting up for future experimental work using microwave sources. The paper begins with a brief overview of thermal states in Section \ref{sec:Thermal-states}, followed by Section \ref{sec:Protocol}, which describes the setup that will be simulated, while Sections \ref{sec:Information-Measurements}-\ref{sec:Covariance-Matrices} describe the measurements and workings.
\section{Thermal states} \label{sec:Thermal-states}
When written in the Fock basis, with \(\hat{a}^{\dagger}\) denoting the creation operator and \(|n\rangle=\frac{\left(\hat{a}^{\dagger}\right)^{n}}{\sqrt{n!}}|0\rangle\) describing an n-photon state, thermal states are given in the form \(\rho_{\mbox{\tiny Th}}=\sum_{n=0}^{\infty}p_{n}|n\rangle\langle n|\). Here, \(p_{n}=\left[1-\exp\left(-\beta\hbar\omega\right)\right]\exp\left(-n\beta\hbar\omega\right)\) describes a normalized thermal distribution where \(\beta=(k_{B}T)^{-1}\) is the thermodynamic beta. When a beam from a thermal source is input into a beam splitter, correlations are observed in intensity measurements performed on the output beams \cite{Thermal_1,HBT,Photon_Bunching} which are not present when a coherent source is used.
These correlations exist due to the bunched nature of photons in thermal light. When detecting light from a thermal source, photons are not detected in random intervals, but are instead detected in clusters \cite{Photon_Bunching}. High variance in the intensity of thermal light, which is not present with a coherent source, is the result of this bunching.
We aim to take advantage of the correlations produced by this phenomenon to devise a QKD protocol which produces correlated bit strings between Alice and Bob using microwave sources. These bit strings can then be used to create a secure key to allow private communication in the presence of an eavesdropper.
The use of thermal states differentiates this protocol from similar versions involving modulated coherent states. Using a thermal source lets us carry out the protocol with common microwave-based wireless communication equipment instead of relying on fibre.
Additionally, the output of a thermal source and a Gaussian modulated coherent source are statistically equivalent. This allows the application of security proofs for GMCS protocols to thermal protocols. An important distinction to note is that coherent states are superpositions of Fock states, whereas thermal states are a mixture. This allows a Monte Carlo simulation to be used as an appropriate method to model the protocol, through random sampling of Fock states. Here, we will compare the outputs of such a simulation to mutual information values predicted through two separate analytic methods.
\section{The QKD Protocol} \label{sec:Protocol}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Figure1.eps}
\caption{\textbf{Protocol Schematic.} A beam produced by a thermal source provides the initial state $\rho$. A series of beam splitters are used to direct the beam to Alice and Bob, with Eve performing a beam splitter attack on the channel leading to Bob. Eve's beam splitter has unknown transmittance and reflectance, $\tau$ and $\mu$, while each other beam splitter is 50:50.\label{fig:Protocol}}
\end{figure}
In the QKD protocol to be analysed, as shown in Figure \ref{fig:Protocol}, we use a central broadcast system in which light from a thermal source is incident on a 50:50 beam splitter. The output beams from this splitter are sent to Alice and Bob. An eavesdropper, Eve, uses a beam splitter attack, intercepting the beam sent to Bob using their own beam splitter of unknown transmittance. The part of the beam transmitted by Eve's beam splitter continues to Bob.
Alice is considered to be in control of the initial source, the first beam splitter, and the channels between the source and her measurement apparatus, while Bob is in control of their beam splitter and its output channels. The channel between the initial beam splitter and Bob is not under Alice or Bob's control, giving a point in the protocol where Eve may interfere with the system.
When each person receives their beam, they use a 50:50 beam splitter to divide the incoming signal into two outputs. Double homodyne detection is employed in order to measure the X quadrature of one beam, and the P quadrature of the second beam as shown in Figure \ref{fig:DoubleHomodyne}. Each person cannot simply measure the X and P quadratures of the single beam they receive as the quadrature operators do not commute. This replaces the common method of measurement in QKD, in which the variable to measure \cite{Ralph_1999} (or the measurement basis \cite{BB84}) is randomly switched in order to ensure security. This method of performing measurements in QKD without random basis switching has been previously used with success for continuous variable QKD protocols \cite{Basis}.
Repeated measurements yield an array of X and P quadrature measurements for each person. For each pair of quadrature measurement outcomes \(\left\{ x_{i},p_{i}\right\}\), Alice, Bob and Eve each calculate \(z_{i}=\sqrt{x_{i}^{2}+p_{i}^{2}}\), producing a distribution of $z$ measurements for each person. This is converted into a bit string by having Alice, Bob, and Eve each find the median value of their distribution, and recording a 0 or a 1 for each $z$ value depending on whether it is above or below the median. Due to the correlations in the outputs of the beam splitters with a thermal input, this produces a string of correlated bits for each person. At this point, if the protocol has been successfully executed, a key may be distilled from the bit strings, allowing Alice and Bob to communicate securely. Comparing Alice and Bob's results for a subset of measurements allows them to calculate correlation coefficients, to verify that a thermal source was used and correlated bits have been transmitted.
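As a minimal illustration of this post-processing step, consider the following sketch (our own code, assuming NumPy; the function name is illustrative and not taken from the simulation described below):
\begin{verbatim}
import numpy as np

def quadratures_to_bits(x, p):
    # z_i = sqrt(x_i^2 + p_i^2) for each pair of quadrature
    # outcomes, thresholded at the median of the z distribution:
    # 1 if above the median, 0 otherwise.
    z = np.sqrt(np.asarray(x)**2 + np.asarray(p)**2)
    return (z > np.median(z)).astype(int)
\end{verbatim}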
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{Figure2.eps}
\caption{\textbf{Heterodyne Detection.} Heterodyne, or double homodyne, detection. As initially shown in Figure \ref{fig:Protocol}, the measurer splits the incoming signal at a 50:50 beam splitter, and combines the outputs with local oscillators. The X and P quadratures can then be measured separately using two pairs of detectors. \label{fig:DoubleHomodyne}}
\end{figure}
We performed a Monte Carlo simulation of this protocol in Python with QuTiP \cite{QuTiP2,QuTiP1}.
The initial beam is created by randomly sampling Fock state values from the thermal state distribution, with the beam splitters randomly splitting an input beam into a pair of outputs. With a Fock state input, the possible output Fock states of one arm of a beam splitter are described by a binomial distribution. One of these possible outputs is selected at random. This describes a portion of the incident photons being transmitted through the beam splitter, with the remaining portion being reflected. Once all the beam splitters are applied, each person receives a string of randomly distributed Fock state measurements. Due to thermal states being a statistical mixture of Fock states, this is an appropriate method of modelling the system.
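A minimal sketch of this sampling scheme is given below (our own illustrative NumPy code, not the QuTiP-based simulation itself; names and default values are ours):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def thermal_fock_samples(nbar, size):
    # Thermal photon-number distribution p_n = (1 - q) q^n with
    # q = nbar/(1 + nbar); NumPy's geometric distribution starts
    # at 1, so shift down by one.
    q = nbar / (1.0 + nbar)
    return rng.geometric(1.0 - q, size=size) - 1

def beam_splitter(n_in, transmittance):
    # Each incident photon is transmitted independently with
    # probability given by the transmittance, so the transmitted
    # photon number of one output arm is binomially distributed.
    n_t = rng.binomial(n_in, transmittance)
    return n_t, n_in - n_t
\end{verbatim}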
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Figure3.eps}
\includegraphics[width=1.1\textwidth]{Figure4.eps}
\caption{\textbf{Correlation Coefficients.} Changes in the correlation coefficient as a sample set of Alice and Bob's measurement results ($n\approx 100000$) are offset with respect to each other. Correlated measurements are observed in the thermal protocol when time delays are taken into account. Also shown are sample scatter graphs comparing Alice and Bob's measurements, with and without offsetting the data streams. \label{fig:Offset}}
\end{figure}
Using a sample measurement set produced through simulation, we can verify that Alice and Bob are receiving correlated measurements by calculating the correlation coefficient as Alice's data is offset relative to Bob's. We can see from Figure \ref{fig:Offset} that the correlations survive beam splitters as expected of thermal sources \cite{HBT}. Offsetting Alice and Bob's data streams shows a clear difference between the correlation coefficients for synchronised measurements and random noise, which would not be observed if a coherent source were used.
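The offset analysis itself requires only a few lines (again an illustrative sketch rather than the production code):
\begin{verbatim}
def offset_correlation(a, b, max_offset):
    # Pearson correlation of Bob's record against shifted copies
    # of Alice's; only synchronised records should correlate.
    n = min(len(a), len(b))
    return [np.corrcoef(a[k:n], b[:n - k])[0, 1]
            for k in range(max_offset)]
\end{verbatim}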
Given that we have observed correlations between Alice and Bob's data strings, we now derive bit strings from the Fock state measurements and proceed to calculate Shannon mutual informations to test if a secure key can be produced.
\section{Key Rates} \label{sec:Information-Measurements}
After performing Python simulations, the Shannon mutual information, \(I_{S}\left(A;B\right)\), is calculated using the bit strings produced by each person. This is a measure of the information gained about one of the involved systems from measurement of the other system.
We begin with the definition of the Shannon entropy for a single system, \(H\left(A\right)=-\sum_{i=0}^{n-1}p_{i}\log_{2}\left(p_{i}\right)\). This describes the uncertainty in predicting the outcome should a measurement be performed on the system where there are $n$ possible measurement outcomes, with outcome $i$ having a probability $p_{i}$ of occurring. For a binary bit string with 0 and 1 being the only possible values, this can be simplified to:
\begin{equation}
H\left(A\right)=-p_{0}\log_{2}\left(p_{0}\right)-\left(1-p_{0}\right)\log_{2}\left(1-p_{0}\right).\label{eq:Binary}
\end{equation}
Here, $p_{0}$ is the probability of measuring the 0 outcome. From this, the mutual information \(I_{S}\left(A;B\right)=H\left(A\right)+H\left(B\right)-H\left(AB\right)\) can be defined, where $H\left(AB\right)$ is calculated by iterating over the four possible outcomes of two people measuring separate bit strings. Once the Shannon entropy is calculated for each bit string, and the mutual information values between the bit strings for each person are measured, we can see if key distribution can be performed. Two classical options for producing usable keys are considered: direct reconciliation and reverse reconciliation.
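These quantities are straightforward to evaluate from the bit strings; a sketch (our own, assuming NumPy) reads:
\begin{verbatim}
import numpy as np

def shannon_entropy(bits):
    # Binary Shannon entropy of a bit string, as defined above.
    p0 = np.mean(np.asarray(bits) == 0)
    return -sum(p * np.log2(p) for p in (p0, 1 - p0) if p > 0)

def mutual_information(a, b):
    # I_S(A;B) = H(A) + H(B) - H(AB), iterating over the four
    # possible joint outcomes of the two bit strings.
    a, b = np.asarray(a), np.asarray(b)
    h_ab = -sum(p * np.log2(p)
                for va in (0, 1) for vb in (0, 1)
                for p in [np.mean((a == va) & (b == vb))]
                if p > 0)
    return shannon_entropy(a) + shannon_entropy(b) - h_ab
\end{verbatim}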
In direct reconciliation, Alice openly shares additional information in order for corrections to be made to Bob's bit string. For this to produce a secure key, it is required \cite{Bounds} that entropy calculations from the produced bit strings satisfy \(K_{DR}=I_{S}\left(A;B\right)-I_{S}\left(A;E\right)>0\).
Alternatively, reverse reconciliation is the opposite method, where Bob provides the information in order for Alice to make corrections. In this case, successfully creating a secure key requires \cite{Bounds} \(K_{RR}=I_{S}\left(A;B\right)-I_{S}\left(B;E\right)>0\). Therefore, if one of these inequalities is satisfied, a secret key can be produced. The secret key rate \(K\) in this case is bounded such that \cite{Key_Rate} \(\max\{K_{DR},\,K_{RR}\}\leq K\left(A;B|E\right)\leq\min\{I_{S}\left(A;B\right),\,I_{S}\left(A;B|E\right)\}\).
Here, \(H(X|Y)=H\left(XY\right)-H\left(Y\right)\) describes the conditional entropy, the uncertainty in a system $X$ given a measurement performed on a second system $Y$. So far, the Shannon mutual information values have been used, which can verify security in the case of an individual attack by Eve \cite{25km_QKD}, in which Eve performs measurements on each pulse sent by Alice before any error correction occurs between Alice and Bob.
If neither of the above reconciliation methods are available, advantage distillation through protocols such as Cascade will allow keys to be produced from the bit strings provided any secrecy is present \cite{Thermal_2}. As measurements have already taken place by this point, this is purely classical error correction. Sharing a random subset of the bit strings allows Alice and Bob to estimate the error rate for use in such algorithms.
While the Shannon entropy calculated through simulation is interesting, it is more useful to analyse the protocol through the von Neumann entropy. Here, we will compare the results of two different methods of calculating von Neumann entropy to sample Shannon entropies produced by the simulation.
\section{Mutual Information and State Variance} \label{sec:Mutual-Information-and}
Performing an analysis similar to that done by Qi et al. (2017) \cite{Passive}, we can calculate von Neumann mutual informations, $I_{N}\left(A;B\right)$. Using Alice's quadrature measurements, Bob's corresponding measurements are estimated, along with Alice's uncertainty on Bob's measurements.
Eve's interception is done with a beam splitter of transmittance $\tau$ and reflectance $\mu$. This gives:
\begin{equation}
\frac{A_{X}}{n_{A}}=\frac{\hat{B}}{\tau n_{B}}
\end{equation}
where $A_{X}$ is one of Alice's measured X quadrature values, and $\hat{B}$ is Alice's estimate of the corresponding quadrature values of the modes at Bob's detector. The detector efficiencies for Alice, Bob and Eve are given by $n_{A},\:n_{B},$ and $n_{E}$ respectively. Continuing the analysis with asymmetric beam splitters, the quadrature values of the modes received at each person's detectors, $X_{A},$ $X_{B}$, and $X_{E}$, are found to be:
\begin{equation}
X_{A}=\frac{n_{A}}{2}x_{in}+\sqrt{1-\left(\frac{n_{A}}{2}\right)^2}v_{A}+N_{A},
\end{equation}
\begin{equation}
X_{B}=\frac{\tau n_{B}}{2}x_{in}+\sqrt{1-\left(\frac{\tau n_{B}}{2}\right)^2}v_{B}+N_{B},
\end{equation}
\begin{equation}
X_{E}=\frac{\mu n_{E}}{2}x_{in}+\sqrt{1-\left(\frac{\mu n_{E}}{2}\right)^2}v_{E}+N_{E},
\end{equation}
where $x_{in}$ is the quadrature output from the source, $v_{A},$ $v_{B},$ and $v_{E}$ describe the noise introduced at the beam splitters between the source and each person, and loss at their detector. $N_{A},$ $N_{B},$ and $N_{E}$ describe Gaussian noise added at each person's detector.
Taking the introduced noise to be described by a Gaussian distribution with mean zero and variance one, we can calculate the uncertainty Alice has on Bob's measurements, and then perform a similar analysis for Bob and Eve:
\begin{equation}
\Delta_{AB}=\left\langle \left(\hat{B}-X_{B}\right)^{2}\right\rangle =\left(\frac{\tau n_{B}}{n_{A}}\right)^2\left(1-\frac{n_{A}}{2}+\left\langle N_{A}^{2}\right\rangle \right)+1+\left\langle N_{B}^{2}\right\rangle ,
\end{equation}
\begin{equation}
\Delta_{BE}=\left(\frac{\mu n_{E}}{\tau n_{B}}\right)^2\left(1-\frac{\left(\tau n_{B}\right)^2}{2}+\left\langle N_{B}^{2}\right\rangle \right)+1+\left\langle N_{E}^{2}\right\rangle .
\end{equation}
The mutual information for Gaussian states can be shown to be \cite{Passive}:
\begin{equation}
I_{N}\left(A:B\right)=\frac{1}{2}\log_{2}\left(\frac{V+\chi}{1+\chi}\right),
\end{equation}
where $V$ is the variance of the input thermal state and $\chi$ is the added noise. If $\chi_{line}$ is the noise added in the channels, and $\chi_{hom}$ is the detection noise, the total added noise in a channel with transmittance $T$ is given by \cite{Passive}:
\begin{equation}
\chi=\chi_{line}+\frac{\chi_{hom}}{T},
\end{equation}
\begin{equation}
\chi_{line}=\frac{1}{T}-2+\Delta,
\end{equation}
\begin{equation}
\chi_{hom}=\frac{1+\left\langle N^{2}\right\rangle }{n_{B}}-1,
\end{equation}
where we have taken $T=1$ and assumed equal detector noise for each person, such that $\left\langle N_{A}^{2}\right\rangle=\left\langle N_{B}^{2}\right\rangle=\left\langle N_{E}^{2}\right\rangle=\left\langle N^{2}\right\rangle=1$. This simplified setup gives $\chi=\Delta$. Figure \ref{fig:Mutual Information Calculation} shows the plots of various mutual information values as variance is adjusted. Eve's beam splitter is assumed to be 50:50.
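For concreteness, the curves in Figure \ref{fig:Mutual Information Calculation} follow from a direct implementation of the expressions as printed above. A minimal sketch (our own, using the stated simplifications and treating $\tau$ and $\mu$ as amplitude coefficients so that a 50:50 splitter corresponds to $\tau=\mu=1/\sqrt{2}$, which is our reading of the convention) is:
\begin{verbatim}
import numpy as np

def delta_AB(tau, n_A=1.0, n_B=1.0, NA2=1.0, NB2=1.0):
    # Alice's uncertainty on Bob's measurement, as printed above.
    return (tau*n_B/n_A)**2 * (1 - n_A/2 + NA2) + 1 + NB2

def delta_BE(tau, mu, n_B=1.0, n_E=1.0, NB2=1.0, NE2=1.0):
    # Eve's uncertainty on Bob's measurement, as printed above.
    return (mu*n_E/(tau*n_B))**2 * (1 - (tau*n_B)**2/2 + NB2) \
           + 1 + NE2

def I_N(V, chi):
    # Gaussian mutual information, (1/2) log2((V+chi)/(1+chi)).
    return 0.5 * np.log2((V + chi) / (1 + chi))

tau = mu = np.sqrt(0.5)             # 50:50 interception splitter
I_AB = I_N(1e4, delta_AB(tau))      # chi = Delta_AB
I_BE = I_N(1e4, delta_BE(tau, mu))  # chi = Delta_BE
\end{verbatim}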
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Figure5.eps}
\caption{\textbf{Mutual information calculations using uncertainty.} The von Neumann mutual informations $I_{N}\left(A;B\right)$, $I_{N}\left(B;E\right)$ as the variance of the input thermal state, \(\langle n\rangle\left(\langle n\rangle+1\right)\) is changed. This is calculated through uncertainty in Alice and Eve's estimates of Bob's measurements. \label{fig:Mutual Information Calculation}
}
\end{figure}
\section{Covariance Matrix Description} \label{sec:Covariance-Matrices}
To verify this behaviour, we can use a second method to calculate von Neumann entropies, also extending Section \ref{sec:Information-Measurements} to allow for entropy calculations using the quantum state of the system, rather than Shannon entropies of measurements. As the states involved in this protocol are Gaussian, they can be completely described with covariance matrices. For an N-mode state $\rho$, the covariance matrix $\gamma$ is defined as \cite{Raul}:
\begin{equation}
\gamma_{ij}=\Tr\left[\rho\frac{1}{2}\left\{ \left(\hat{r}_{i}-d_{i}\right),\:\left(\hat{r}_{j}-d_{j}\right)\right\} \right].
\end{equation}
Here, \(r=\left(\hat{X}_{1},\hat{P}_{1},...,\hat{X}_{N},\hat{P}_{N}\right)\) consists of a pair of quadrature operators for each mode, $\hat{X}_{i}$ and $\hat{P_{i}}$, with $d_{i}=\left\langle \hat{r}_{i}\right\rangle $ denoting their expectation values. For the inputs into the initial splitter, the covariance matrix $\gamma_{12}$ is given by:
\begin{equation}
\gamma_{12}=\left[\begin{array}{cccc}
V & 0 & 0 & 0\\
0 & V & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right],
\end{equation}
where $V$ is the variance of the quadratures of the beam output by the thermal source. This fully describes the thermal and vacuum modes input into the initial beam splitter. Through applying the beam splitter transformations to the relevant modes, the covariance matrix of the final state is calculated. The transformation, $S$, for a beam splitter with transmittance $\tau$ and reflectance $\mu$ is given by \cite{Raul}:
\begin{equation}
S\left(\tau ,\mu \right)=\left[\begin{array}{cc}
\tau & \mu \\
-\mu & \tau
\end{array}\right]\otimes I.
\end{equation}
By applying the beam splitters to the appropriate modes, the final covariance matrix can be found:
\begin{equation}
\gamma_{A_{1}A_{2}B_{1}B_{2}E_{1}E_{2}}=\left[\begin{array}{ccc}
\gamma_{A_{1}A_{2}} & C_{AB} & C_{AE}\\
C_{AB}^{T} & \gamma_{B_{1}B_{2}} & C_{BE}\\
C_{AE}^{T} & C_{BE}^{T} & \gamma_{E_{1}E_{2}}
\end{array}\right].
\end{equation}
The sub-matrices are given by:
\begin{equation}
\gamma_{A_{1}A_{2}}=\left[\begin{array}{cc}
\frac{1}{4}\left(V+3\right) & -\frac{1}{4}\left(V-1\right)\\
-\frac{1}{4}\left(V-1\right) & \frac{1}{4}\left(V+3\right)
\end{array}\right]\otimes I,
\end{equation}
\begin{equation}
\gamma_{B_{1}B_{2}}=\left[\begin{array}{cc}
\frac{\tau ^{2}}{4}\left(V+1\right)+\frac{1+\mu ^{2}}{2} & -\frac{\tau ^{2}}{4}\left(V+1\right)+\frac{1-\mu ^{2}}{2}\\
-\frac{\tau ^{2}}{4}\left(V+1\right)+\frac{1-\mu ^{2}}{2} & \frac{\tau ^{2}}{4}\left(V+1\right)+\frac{1+\mu ^{2}}{2}
\end{array}\right]\otimes I,
\end{equation}
\begin{equation}
\gamma_{E_{1}E_{2}}=\left[\begin{array}{cc}
\frac{\mu ^{2}}{4}\left(V+1\right)+\frac{1+\tau ^{2}}{2} & -\frac{\mu ^{2}}{4}\left(V+1\right)+\frac{1-\tau ^{2}}{2}\\
-\frac{\mu ^{2}}{4}\left(V+1\right)+\frac{1-\tau ^{2}}{2} & \frac{\mu ^{2}}{4}\left(V+1\right)+\frac{1+\tau ^{2}}{2}
\end{array}\right]\otimes I,
\end{equation}
\begin{equation}
C_{AB}=\left[\begin{array}{cc}
\frac{\tau}{4}\left(1-V\right) & -\frac{\tau}{4}\left(1-V\right)\\
-\frac{\tau}{4}\left(1-V\right) & \frac{\tau}{4}\left(1-V\right)
\end{array}\right]\otimes I,
\end{equation}
\begin{equation}
C_{AE}=\left[\begin{array}{cc}
-\frac{\mu}{4}\left(1-V\right) & \frac{\mu}{4}\left(1-V\right)\\
\frac{\mu}{4}\left(1-V\right) & -\frac{\mu}{4}\left(1-V\right)
\end{array}\right]\otimes I,
\end{equation}
\begin{equation}
C_{BE}=\left[\begin{array}{cc}
-\frac{\tau \mu}{4}\left(V-1\right) & \frac{\tau \mu}{4}\left(V-1\right)\\
\frac{\tau \mu}{4}\left(V-1\right) & -\frac{\tau \mu}{4}\left(V-1\right)
\end{array}\right]\otimes I.
\end{equation}
Here, $\gamma_{A_{1}A_{2}}$ is the covariance matrix describing the two modes Alice receives at their pair of detectors, with $C_{AB}$ describing covariance between Alice's modes and Bob's. The remaining sub-matrices are similarly defined. From this, von Neumann entropy values are calculated using symplectic eigenvalues. For a covariance matrix $\gamma$, the von Neumann entropy is given by \cite{Raul}:
\begin{equation}
S_{N}\left(\gamma\right)=\sum_{i}G\left(\frac{\lambda_{i}-1}{2}\right),\label{eq:Neumann}
\end{equation}
where \(G\left(x\right)=\left(x+1\right)\log_{2}\left(x+1\right)-x\log_{2}x\) and $\lambda_{i}$ are the symplectic eigenvalues of $\gamma$. For the covariance matrix of a single-mode system, $\gamma_{1}$, the symplectic eigenvalue is given by \(\lambda^{2}=\left|\gamma_{1}\right|\). For a two-mode state with the covariance matrix $\gamma_{12}$, taking \(\Delta=\left|\gamma_{1}\right|+\left|\gamma_{2}\right|-2\left|C\right|\) allows the two symplectic eigenvalues, $\lambda_{+}$ and $\lambda_{-}$, to be calculated:
\begin{equation}
\left(\lambda_{\pm}\right)^{2}=\frac{1}{2}\left(\Delta\pm\left[\Delta^{2}-4\left|\gamma_{12}\right|\right]^{\frac{1}{2}}\right).
\end{equation}
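A compact numerical sketch of this entropy evaluation (our own, assuming NumPy, with the $2\times2$ blocks $\gamma_{1}$, $\gamma_{2}$, $C$ and the full two-mode matrix $\gamma_{12}$ supplied as arrays) is:
\begin{verbatim}
import numpy as np

def G(x):
    # G(x) = (x+1) log2(x+1) - x log2(x), with the limit G(0) = 0.
    if x <= 0:
        return 0.0
    return (x + 1)*np.log2(x + 1) - x*np.log2(x)

def von_neumann_two_mode(g1, g2, C, g12):
    # Symplectic eigenvalues from Delta = |g1| + |g2| - 2|C| as
    # defined in the text, then S_N = sum_i G((lambda_i - 1)/2).
    D = np.linalg.det(g1) + np.linalg.det(g2) - 2*np.linalg.det(C)
    root = np.sqrt(D**2 - 4*np.linalg.det(g12))
    lams = np.sqrt([(D + root)/2, (D - root)/2])
    return sum(G((l - 1)/2) for l in lams)
\end{verbatim}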
Mutual informations calculated in this way can be plotted against variance; this is displayed in Figure \ref{fig:Covariance Graph}. Upper and lower bounds can be placed on the mutual information values in the same manner as when Shannon entropies were used. However, requiring \(K_{RR}=I_{N}\left(A;B\right)-I_{N}\left(B;E\right)>0\), where $I_{N}\left(A;B\right)=S_{N}\left(\gamma_{A}\right)+S_{N}\left(\gamma_{B}\right)-S_{N}\left(\gamma_{AB}\right)$, allows for security against a stronger set of attacks. In the case of these ``collective attacks'', Eve does not perform measurements until after classical communication between Alice and Bob has occurred. In the example shown in Figure \ref{fig:Covariance Graph}, it can be seen that as the variance of the thermal state is increased, the protocol remains secure in the case where Eve uses a 50:50 beam splitter. In this case, reverse reconciliation is used as $K_{RR}$ is positive. Additionally, analyses based on either the covariance matrix or the measurement uncertainty produce mutual information graphs which follow similar patterns as variance is increased.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Figure6.eps}
\caption{\textbf{Mutual information calculations using covariance.} Von Neumann mutual information calculations plotted against thermal state variance,
found using the covariance matrix of the final state.
Here, Eve performs interception using a 50:50 beam splitter. \label{fig:Covariance Graph}
}
\end{figure}
With two methods of calculating von Neumann entropy displaying similar behaviour, we may now compare the outputs to Shannon entropies calculated through the simulation.
\section{Results} \label{sec:Results}
We performed calculations of the Shannon entropy for the three bit strings and the mutual information between each pair of strings. These strings were produced through the Python simulation. Currently no loss or noise is considered; this allows the simulation to be performed with a lossless Eve. The transmittance of Eve's splitter is varied to measure the effect of Eve's interception strength on the mutual information values. Also calculated was the von Neumann entropy of each mode and pair of modes, using equation \ref{eq:Neumann} and the covariance matrix describing the final state of the system. The results of these calculations and measurements are shown in Figure \ref{fig:Measurements}.
Meeting the restrictions placed on von Neumann mutual information which ensure secrecy in the case of a collective attack, $I_{N}\left(A;B\right)-I_{N}\left(B;E\right)>0$ or $I_{N}\left(A;B\right)-I_{N}\left(A;E\right)>0$, allows the protocol to be secure against a greater range of attacks than the restrictions based on Shannon entropy. Both are included here so that it can be seen that changes in von Neumann entropies are reflected in the Shannon entropies of the bit strings derived by each person after the protocol has been carried out.
It can be seen from Figure \ref{fig:Measurements} that $I\left(A;B\right)-I\left(A;E\right)$ crosses zero in both cases when Eve's beam splitter reflects half of the beam sent to Bob. This is expected as Bob and Eve's positions in the protocol are interchangeable in this special case, so $I\left(A;B\right)=I\left(A;E\right)$. If over half of Bob's beam is reflected by Eve, a key cannot be produced via direct reconciliation. However, the second possible requirement of $I\left(A;B\right)-I\left(B;E\right)>0$ is always satisfied in the no-loss scenario provided Eve's interception beam splitter has nonzero transmittance. This means that reverse reconciliation may be used to produce a secret key and establish secure communication during collective or individual attacks. This would allow the thermal state central broadcast protocol to be used as a method of quantum key distribution.
It is also clear that the von Neumann entropy has a higher magnitude than the Shannon entropy; this is expected due to the presence of discord in the system, which the Shannon entropy does not consider.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Figure7.eps}
\includegraphics[width=0.8\textwidth]{Figure8.eps}
\caption{\textbf{Information with varying interception strength.} Calculations of von Neumann entropy using the covariance matrix and measurements of Shannon entropy taken from the simulation as the transmittance of Eve's beam splitter is varied. The notable result is that $I\left(A;B\right)-I\left(B;E\right)>0$ holds in both cases provided Bob receives a nonzero proportion of the signal sent to him. This used an average photon number of 200. Error bars describe one standard deviation. Due to discord in the system, von Neumann entropies have larger magnitudes than Shannon entropies. \label{fig:Measurements}}
\end{figure}
\section{Conclusions}
When considering a system without loss and noise, the lower bound placed on the key rate under reverse reconciliation is positive under a beam splitter attack even when Eve has zero loss. This would allow for a secret key to be produced between Alice and Bob, and therefore secure communication could take place in the presence of an eavesdropper performing collective or individual attacks. Additionally, two separate methods of von Neumann mutual information analysis both showed that thermal sources with higher variance than those that could be produced in the Monte Carlo simulation allowed for a key to be produced with little change in the lower bound of the key rate.
Future work in this area could focus on examining the effects of adding noise and loss into different channels, checking if a positive key rate could be maintained. This is especially relevant for thermal states due to added noise being a large barrier to successful QKD. Additionally, a practical setup following the diagram shown in Figure \ref{fig:Protocol} would allow the protocol to be performed experimentally. This allows real key rates to be measured and would show if the protocol continues to be functional when using thermal sources likely to be employed in modern communication.
\section*{Acknowledgements}
Work was undertaken on ARC4, part of the High Performance Computing facilities at the University of Leeds, UK. DJ is supported by the Royal Society and also a University Academic Fellowship. This work was supported by the Northern Triangle Initiative Connecting capability fund as well as funding from the UK Quantum Technology Hub for Quantum Communications Technologies EP/M013472. Data used to plot the Shannon entropy graph in Figure \ref{fig:Measurements} is available from the Research Data Leeds Repository with the identifier https://doi.org/10.5518/944 \cite{doinumber}.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
There is continued interest in accurate modeling of gas-phase thermochemistry and dynamics that involve transition-metal and heavier elements, where
relativistic effects play an important role.
For instance, scientists at the US Air Force recently performed an experiment, aiming to use chemi-ionization involving lanthanide atoms to
alter the electron density in the ionosphere for radio-frequency communication,\cite{Shuman2015CR,Cox2015JCP}
for which accurate simulations could help analyze the experimental observation.
Another example is the reaction of $\mathrm{FeO}^+$ with a hydrogen molecule, a model reaction system for the so-called two-state reactivity,\cite{Schroder2000ACR} of which
accurate modeling still remains a challenge.\cite{Ard2014JPCA}
The spin barriers in the two-state reactivity mechanism are also ubiquitous in organometallic chemistry.\cite{Harvey2007PCCP}
Understanding these problems requires accurate description of strongly relativistic, quasi-degenerate electronic structure.
There have been, however, only a handful of theory developments to address this challenge.\cite{Malmqvist2002CPL,Abe2006JCP,Fleig2007TCA,Fleig2012CP,Kim2014JCP}
As a first step toward realizing predictive simulations of such processes,
we develop in this work novel computational tools that combine the four-component relativistic Dirac formalism\cite{Reiherbook}
and internally contracted multireference electron correlation methods.
Our approach is based on the four-component Dirac equation for electrons,
\begin{align}
\hat{H} = \sum_i \left[c^2 (\beta - I_4) + c(\boldsymbol{\alpha} \cdot \hat{\mathbf{p}}_i) - \sum_A^{\mathrm{atoms}} \frac{Z_A}{r_{iA}} \right] + \sum_{i<j} \hat{g}(i,j), \label{hamil}
\end{align}
where $\boldsymbol{\alpha}$ and $\beta$ are Dirac's matrices, $\hat{g}(i,j)$ is a two-electron operator, and $c$ is the speed of light.
$Z_A$ is the charge of nucleus $A$ (note, however, that we use finite-nucleus models in practice).
Hereafter atomic units are used unless otherwise stated.
In this work, we use the full Breit operator for electron--electron interactions, i.e.,
\begin{align}
\hat{g}(i,j) = \frac{1}{r_{ij}} - \frac{1}{2}\frac{\boldsymbol{\alpha}_i\cdot \boldsymbol{\alpha}_j}{r_{ij}} - \frac{1}{2}\frac{(\boldsymbol{\alpha}_i\cdot \mathbf{r}_{ij}) (\boldsymbol{\alpha}_j\cdot \mathbf{r}_{ij})}{r^3_{ij}}.
\end{align}
The reader may consult Ref.~\onlinecite{Shiozaki2013JCP} for details on integral evaluation associated with this operator over Gaussian basis functions.
We first perform complete active space self-consistent field (CASSCF) calculations using this Hamiltonian,\cite{Jensen1996JCP,Bates2015JCP} in which orbitals are optimized using the minimax principle, and project out the space spanned by the `negative-energy' orbitals,
a procedure called no-pair projection.\cite{Reiherbook}
An efficient Dirac-CASSCF algorithm that we have developed can be found in Refs.~\onlinecite{Bates2015JCP} and \onlinecite{Kelley2013JCP}.
After the no-pair projection procedure, the Hamiltonian in the second quantization becomes
\begin{align}
\hat{H}_\mathrm{NP} = \sum_{xy} h_{xy} \hat{E}_{xy} + \frac{1}{2}\sum_{xyzw} v_{xy,zw} \hat{E}_{xy,zw},
\label{mohamil}
\end{align}
where $x$, $y$, $z$, and $w$ label any electronic molecular spin orbitals (MO), and
$h_{xy}$ and $v_{xy,zw}$ are the (complex-valued) Hamiltonian matrix elements in the MO basis in chemists' notation.
$\hat{E}_{xy}$ and $\hat{E}_{xy,zw}$ are operators defined as
\begin{subequations}
\begin{align}
&\hat{E}_{xy}= a^\dagger_x a_y,\\
&\hat{E}_{xy,zw}= a^\dagger_x a^\dagger_z a_w a_y.
\end{align}
\end{subequations}
Since the MO Hamiltonian [Eq.~\eqref{mohamil}] is isomorphic to the non-relativistic counterpart (and all the eigenstates are minima in the parameter space
after the no-pair projection procedure),
standard electron-correlation methods, such as internally contracted multireference configuration interaction (ic-MRCI),\cite{Werner1982JCP,Werner1988JCP,Sham2011JCP,Saitow2013JCP} can be used in conjunction with this Hamiltonian.
We note in passing that, even though our numerical results are based on the four-component formalism [Eq.~\eqref{hamil}], the multireference theory and programs developed in this work are equally
applicable to any two-component relativistic Hamiltonians.\cite{Liu2010MP,Saue2011CPC,Nakajima2012CR}
In the non-relativistic framework, the ic-MRCI method has been pioneered by Werner and co-workers.\cite{Werner1982JCP,Werner1988JCP,Sham2011JCP}
The ability of ic-MRCI to accurately and consistently describe the potential energy surfaces of small-molecule reactions
has been the key to understanding many of the gas-phase reactions studied in the past decades (for instance, see Refs.~\onlinecite{Manolopoulos1993S,Alexander2002S,Wu2004S}).
Very recently ic-MRCI has been extended to incorporate density matrix renormalization group reference functions with more than 20 orbitals in the active space by Saitow et al.\cite{Saitow2013JCP}
There are also parallel implementations of uncontracted MRCI,\cite{Lischka2011WIREs} though its computational cost is generally higher than that of ic-MRCI.
Another class of popular multireference approaches in non-relativistic theory is based on perturbation theory.
Among others, the complete active space second-order perturbation theory (CASPT2)\cite{Andersson1990JPC,Andersson1992JCP,Aquilante2008JCTC} is an internally contracted, multireference generalization of the standard M\o ller--Plesset perturbation theory and has been
applied to a wide variety of chemical problems.\cite{Pulay2011IJQC}
The $n$-electron valence state perturbation theory (NEVPT2)\cite{Angeli2001JCP,Angeli2002JCP} proposed by Angeli~et~al. (especially its strongly contracted variant)
uses a different zeroth-order Hamiltonian and has desirable properties such as strict size extensivity and numerical robustness against so-called intruder-state problems.
Here we report the theory and algorithms for relativistic ic-MRCI, CASPT2, and NEVPT2 based on the four-component Dirac Hamiltonians.
This work realizes relativistic ic-MRCI and NEVPT2 for the first time, whereas
CASPT2 has been reported in the past by Abe et al.\cite{Abe2006JCP} and by Kim et al.\cite{Kim2014JCP}
The implementations of ic-MRCI and CASPT2 are facilitated by an automatic code generator, {\sc smith3}.\cite{MacLeod2015JCP,smith}
The {\sc smith3} program was previously used to derive and implement nuclear energy gradients for fully internally contracted CASPT2\cite{MacLeod2015JCP}
and has been extended in this work to incorporate equations with spin orbitals in complex arithmetic.
Note that the automatic code generation approach has been used for relativistic single-reference coupled-cluster methods
by Hirata et al.\cite{Hirata2007JCP} and by Nataraj et al.\cite{Nataraj2010JCP}
The generated code and the code generator are both publicly available.\cite{bagel,smith}
The NEVPT2 code is manually implemented.
In the following we sketch the outline of the theories and implementations.
\section{Theory}
\subsection{Relativistic MRCI with internal contraction}
Our ic-MRCI implementation uses fully internally contracted basis functions,
which are similar to those used in the CASPT2 theory by Roos and co-workers.\cite{Andersson1992JCP}
The correlated wave functions are parameterized as
\begin{align}
|\Psi\rangle = T_\mathrm{ref} |\Phi_{\mathrm{ref}}\rangle
+ \sum_\Omega T_\Omega \hat{E}_\Omega |\Phi_{\mathrm{ref}}\rangle, \label{param}
\end{align}
in which $T$'s are the unknown amplitudes to be determined,
$\Omega$ denotes excitation manifolds in ic-MRCI,
and $\hat{E}_\Omega$ are associated excitation operators:
\begin{align}
\hat{E}_\Omega = &\left\{\hat{E}_{ai,bj},\, \hat{E}_{ar,bi},\, \hat{E}_{ar,bs},\, \hat{E}_{ai,rj},\right.\nonumber\\
&\,\left.\hat{E}_{ri,sj},\, \hat{E}_{ar,st},\, \hat{E}_{ri,st},\, \hat{E}_{ai,rs}\right\}.
\label{class}
\end{align}
Hereafter $i$ and $j$ label closed orbitals, $r$, $s$, and $t$ label active orbitals, and $a$ and $b$ label virtual orbitals.
Note that, because spin orbitals are used, $\hat{E}_{ai,rs}$ and $\hat{E}_{as,ri}$ that are distinguished in non-relativistic theories
generate identical sets of excited configurations.
The Kramers symmetry is not utilized in our ic-MRCI implementation except for integral compression.
$|\Phi_{\mathrm{ref}}\rangle$ is a relativistic multi-determinant reference function,
\begin{align}
|\Phi_{\mathrm{ref}}\rangle = \sum_{n_++n_- = n} C^{n_+,n_-} |I^{n_+,n_-}\rangle,
\end{align}
where $n_+$ and $n_-$ are the numbers of electrons that belong to Kramers $+$ and $-$ spin orbitals,
and $n$ is the total number of active electrons.\cite{Jensen1996JCP,Bates2015JCP}
In the ic-MRCI method, the Dirac Hamiltonian is diagonalized
in the space spanned by the parameters in Eq.~\eqref{param}, i.e.,
\begin{align}
&E = \min \left[\langle \Psi|\hat{H}_\mathrm{NP}|\Psi\rangle\right],
\end{align}
under a normalization constraint.
The following $\sigma$ and $\pi$ vectors are computed from each trial vector $\psi_P$ in the same basis,
\begin{subequations}
\begin{align}
\label{eqbegin}
&(\sigma_P)_\Omega = \langle \Phi_{\mathrm{ref}}| \hat{E}_\Omega^\dagger\hat{H}_\mathrm{NP} |\psi_P \rangle,\\
&(\sigma_P)_\mathrm{ref} = \langle \Phi_{\mathrm{ref}}| \hat{H}_\mathrm{NP} |\psi_P \rangle,\\
&(\pi_P)_\Omega = \langle \Phi_{\mathrm{ref}}| \hat{E}_\Omega^\dagger |\psi_P \rangle,\\
&(\pi_P)_\mathrm{ref} = \langle \Phi_{\mathrm{ref}}| \psi_P \rangle.
\end{align}
\end{subequations}
Note that we eliminate five-particle reduced density matrices from the equations by means of
a well-known commutator trick, i.e., (using $\hat{T}_\Omega \equiv T_\Omega\hat{E}_\Omega$)
\begin{align}
&\langle \Phi_{\mathrm{ref}}| \hat{E}_{\Omega'}^\dagger\hat{H}_\mathrm{NP} \hat{T}_\Omega |\Phi_{\mathrm{ref}}\rangle \nonumber\\
& \quad = \langle \Phi_{\mathrm{ref}}| \hat{E}_{\Omega'}^\dagger [\hat{H}_\mathrm{NP}, \hat{T}_\Omega] |\Phi_{\mathrm{ref}}\rangle
+ \langle \Phi_{\mathrm{ref}}| \hat{E}_{\Omega'}^\dagger \hat{T}_\Omega | \Phi_{\mathrm{ref}}\rangle E_{\mathrm{ref}},\label{eqend}
\end{align}
where $\Omega$ and $\Omega'$ belong to the same excitation class in Eq.~\eqref{class}.
A Hamiltonian matrix is then constructed within the subspace spanned by the trial vectors,\cite{Davidson1975JCompP}
\begin{align}
H_{PQ} = \mathbf{T}_P^\dagger \boldsymbol{\sigma}_Q, \quad S_{PQ} = \mathbf{T}_P^\dagger \boldsymbol{\pi}_Q,
\end{align}
and diagonalized to obtain the coefficients ($c_P$) that constitute an optimal linear combination of the trial vectors:
\begin{align}
\sum_Q H_{PQ} c_Q = E\sum_Q S_{PQ} c_Q.
\end{align}
Using these quantities, the residual vectors are
\begin{align}
\mathbf{R} = \sum_P c_P \left[\boldsymbol{\sigma}_P - E \boldsymbol{\pi}_P \right],
\end{align}
from which we generate a new set of trial vectors (see below).
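For illustration, one iteration of this subspace procedure may be sketched as follows (a schematic in Python/NumPy with complex arithmetic; the actual implementation in {\sc bagel} is considerably more involved):
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def subspace_update(T, sigma, pi):
    # T, sigma, pi: lists of (complex) trial, sigma, and pi vectors.
    H = np.array([[np.vdot(t, s) for s in sigma] for t in T])
    S = np.array([[np.vdot(t, p) for p in pi] for t in T])
    evals, evecs = eig(H, S)       # generalized eigenvalue problem
    i = np.argmin(evals.real)      # lowest root
    E, c = evals[i].real, evecs[:, i]
    R = sum(cP*(sg - E*pv) for cP, sg, pv in zip(c, sigma, pi))
    return E, R    # the residual seeds the next trial vector
\end{verbatim}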
The working equations [Eqs.~\eqref{eqbegin}--\eqref{eqend}] for $\sigma$-vector formation can be expressed in terms of reduced density matrices;
therefore, they are essentially identical to the non-relativistic counterparts except for the spin symmetry in the latter.
The explicit formulas consist of ca.~750 tasks, most of which are tensor contractions. They can be found in supporting information.\cite{supp}
The equations were implemented into efficient computer code using the automatic code generator {\sc smith3}.\cite{MacLeod2015JCP,smith}
First, {\sc smith3} performs Wick's theorem to convert second-quantized expressions to a list of diagrams represented by tensors and their contractions.
Next it factorizes the diagrams to a tree of binary tensor contractions. Finally the tree is translated to computer code that is compiled and linked to the {\sc bagel} package.\cite{bagel}
See Refs.~\onlinecite{Hirata2003JPCA,Hirata2006TCA,Shiozaki2008PCCP} for further information on automatic code generation.
At the end of each ic-MRCI calculation, the Davidson correction is added to the total energy to approximately account for
size-extensivity errors.\cite{Langhoff1974IJQC}
The correction is
\begin{align}
\Delta E_{+\mathrm{Q}} = \left(\frac{1-T_\mathrm{ref}^2}{T_\mathrm{ref}^2}\right) E_\mathrm{corr},
\end{align}
where $T_\mathrm{ref}$ is the weight of the reference configuration in the correlated wave function [see Eq.~\eqref{param}],
and $E_\mathrm{corr}$ is the correlation energy from ic-MRCI calculations.
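For example, with hypothetical values $T_\mathrm{ref}^{2}=0.95$ and $E_\mathrm{corr}=-1.0~E_\mathrm{h}$, the correction amounts to
\begin{align*}
\Delta E_{+\mathrm{Q}} = \frac{1-0.95}{0.95}\times\left(-1.0~E_\mathrm{h}\right) \approx -0.053~E_\mathrm{h}.
\end{align*}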
\subsection{Relativistic CASPT2 and NEVPT2}
The second-order perturbation methods, CASPT2 and NEVPT2, are defined as minimization of the so-called Hylleraas functional,
\begin{align}
E = \min\left[\langle \Psi^{(1)} | \hat{H}^{(0)} - E^{(0)} | \Psi^{(1)} \rangle + 2\Re\langle \Psi^{(1)} | \hat{H}_\mathrm{NP} | \Phi_{\mathrm{ref}} \rangle\right].
\end{align}
In CASPT2, the zeroth-order Hamiltonian $\hat{H}^{(0)}$ is chosen to be a projected Fock operator
\begin{align}
\hat{H}^{(0)} = \hat{P}\hat{f}\hat{P} + \hat{Q}\hat{f}\hat{Q},
\end{align}
where $\hat{P}$ is a projector to the reference configuration and $\hat{Q}$ is its orthogonal complement.
The first-order wave function $\Psi^{(1)}$ is parameterized as in Eq.~\eqref{param}.
The minimization is performed by solving a set of linear equations using a subspace algorithm.
The construction of residual vectors,
\begin{align}
R_\Omega = 2\left[\langle \Omega | \hat{H}^{(0)} - E^{(0)} |\psi_p\rangle + \langle \Omega | \hat{H}_\mathrm{NP} | \Phi_{\mathrm{ref}} \rangle\right],
\end{align}
is akin to (but simpler than) that in ic-MRCI.
Here we used $\langle \Omega|\equiv \langle \Phi_\mathrm{ref}|\hat{E}_\Omega^\dagger$.
For details on the relativistic CASPT2 equations, see earlier reports by Abe et al.\cite{Abe2006JCP} and Kim et al.\cite{Kim2014JCP}
In NEVPT2, the zeroth-order Hamiltonian is defined using Dyall's Hamiltonian\cite{Dyall1995JCP} as
\begin{align}
\hat{H}^{(0)} = \hat{P}\hat{H}_\mathrm{NP}\hat{P} + \sum_{\omega}|\Phi_\omega\rangle E_\omega \langle\Phi_\omega|,
\label{nevh0}
\end{align}
where $\omega$ is the excitation class in Eq.~\eqref{class} and $\Phi_\omega$ is defined as
\begin{align}
|\Phi_\omega\rangle = \frac{\hat{P}_\omega \hat{H}_\mathrm{NP} |\Phi_\mathrm{ref}\rangle}{ \sqrt{\langle \Phi_\mathrm{ref}|\hat{H}_\mathrm{NP} \hat{P}_\omega \hat{H}_\mathrm{NP} |\Phi_\mathrm{ref}\rangle}}.
\end{align}
$\hat{P}_\omega$ is a projector onto $\omega$, and the denominator accounts for normalization.
$E_\omega$ that appears in Eq.~\eqref{nevh0} is
\begin{align}
E_\omega = \langle \Phi_\omega| \hat{H}_\mathrm{NP} | \Phi_\omega\rangle.
\end{align}
The wave function is parameterized using the so-called strong contraction scheme, i.e.,
\begin{align}
|\Psi\rangle = T_\mathrm{ref}|\Phi_\mathrm{ref}\rangle + \sum_\omega T_\omega |\Phi_\omega\rangle.
\end{align}
Since $\hat{H}^{(0)}$ of NEVPT2 does not include off-diagonal couplings between different $\omega$, the equations can be solved
without iterative procedures.
The working equations for relativistic NEVPT2 can be obtained
by dropping the factors of 2 that stem from spin summations in the non-relativistic equations in Ref.~\onlinecite{Angeli2002JCP}.
The explicit formulas are provided in supporting information.\cite{supp}
\subsection{Wave function updates in ic-MRCI and CASPT2}
Internally contracted basis functions ($\hat{E}_\Omega |\Phi_\mathrm{ref}\rangle$)
are not orthogonal with each other and sometimes linearly dependent;\cite{Werner1988JCP}
therefore, one has to take into account the overlap matrix when updating the amplitudes.
The generation of trial vectors is performed as follows.
Let us consider as an example the amplitudes associated with $\hat{E}_{ar,bs}$. In this case, the overlap and (approximate) diagonal Hamiltonian matrix elements,
$\mathbf{S}$ and $\mathbf{F}$, respectively, are
\begin{subequations}
\label{smat}
\begin{align}
&S_{rs,r's'} = \langle \Phi_{\mathrm{ref}}|\hat{E}_{rr',ss'}|\Phi_{\mathrm{ref}}\rangle,\\
&F_{rs,r's'} = \sum_{tt'}\langle \Phi_{\mathrm{ref}}|\hat{E}_{rr',ss',tt'}|\Phi_{\mathrm{ref}}\rangle f_{tt'},
\end{align}
\end{subequations}
where $\hat{E}_{rr',ss',tt'} = a^\dagger_r\hat{E}_{ss',tt'}a_{r'}$.
We calculate $\mathbf{S}^{-1/2}$ while projecting out the linearly dependent part so that $(\mathbf{S}^{-1/2})^\dagger \mathbf{S} \mathbf{S}^{-1/2}$ is a unit matrix (the eigenvalues that are smaller than $1.0\times 10^{-8}$ are discarded),
which is then used to form
\begin{align}
\tilde{\mathbf{F}} = (\mathbf{S}^{-1/2})^\dagger \mathbf{F} \mathbf{S}^{-1/2}.
\end{align}
Next $\tilde{\mathbf{F}}$ is diagonalized to yield a transformation matrix $\mathbf{U}$,
\begin{align}
\tilde{\mathbf{F}} = \mathbf{U} {\boldsymbol{\lambda}} \mathbf{U}^\dagger,
\label{fdiag}
\end{align}
with a diagonal matrix ${\boldsymbol{\lambda}}$.
Defining $\mathbf{X} = \mathbf{U}^\dagger \mathbf{S}^{-1/2}$,
we arrive at the formula for generating new trial vectors from residual vectors:
\begin{align}
(\psi_{p+1})_{ar,bs} = \sum_D \left[ \sum_{r's'}\frac{R_{ar',bs'} X_{D,r's'}}{E^{(0)} - \lambda_D - \epsilon_a - \epsilon_b}\right]X^\ast_{D,rs},
\end{align}
where $\epsilon_a$ is an orbital energy (i.e., $\epsilon_a = f_{aa}$) and $D$ labels the eigenvalues in Eq.~\eqref{fdiag}, the number of which is equal to or smaller than the numbers of rows and columns of the overlap matrix [Eq.~\eqref{smat}].
This formula implies that in ic-MRCI updates
the inverse of $\hat{H}_\mathrm{NP}-E$ is approximated by that of the diagonal part of the CASPT2 equation.\cite{Andersson1990JPC}
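The update can be summarized by the following Python sketch. It is our own minimal transcription, not the generated production code, and assumes the residual block is packed into a single combined index $rs$:
\begin{verbatim}
import numpy as np

def update_amplitudes(R, S, F, E0, eps_a, eps_b, thresh=1.0e-8):
    """Orthogonalize the contracted basis, diagonalize the transformed
    Fock matrix, and apply the denominator update (sketch)."""
    w, V = np.linalg.eigh(S)
    keep = w > thresh                      # discard linear dependence
    Shalf = V[:, keep] / np.sqrt(w[keep])  # columns of S^{-1/2}
    lam, U = np.linalg.eigh(Shalf.conj().T @ F @ Shalf)
    X = (Shalf @ U).conj().T               # X = U^dagger S^{-1/2}
    y = (X @ R) / (E0 - lam - eps_a - eps_b)
    return X.conj().T @ y                  # new trial amplitudes
\end{verbatim}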
\subsection{Computation of rovibrational spectra}
Rovibrational energy levels of diatomic molecules in their $\Sigma$ states can be calculated by solving an effective one-dimensional Schr{\"o}dinger equation (in this section we avoid use of atomic units for clarity),
\begin{align}
\left[-\frac{\hbar^2}{2\mu}\frac{{\rm d}^2}{{\rm d}r^2}
+ V(r) + \frac{\hbar^2}{2\mu r^2}J(J+1)\right]
\Psi_{\nu,J}\left(r\right)
= E_{\nu,J} \Psi_{\nu,J}\left(r\right),
\label{radse}
\end{align}
in which $\nu$ and $J$ are the vibrational and rotational quantum numbers, respectively,
and $\mu$ is the reduced mass. The third term of the Hamiltonian accounts for the Coriolis coupling.
The rotation--vibration coupling is, therefore, variationally included in the calculations.
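For illustration, Eq.~\eqref{radse} can be solved on a uniform radial grid by a second-order finite-difference discretization. The following Python sketch (a minimal stand-in for the dedicated programs used below, written in atomic units) returns the eigenvalues $E_{\nu,J}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

def rovib_levels(r, V, mu, J):
    """Finite-difference solution of the radial equation (sketch).
    r: uniform grid, V: potential on the grid, mu: reduced mass,
    J: rotational quantum number; atomic units throughout."""
    h = r[1] - r[0]
    Veff = V + J * (J + 1) / (2.0 * mu * r**2)   # rotational J(J+1) term
    diag = 1.0 / (mu * h**2) + Veff              # 2nd-order FD Laplacian
    off = -np.ones(len(r) - 1) / (2.0 * mu * h**2)
    E, _ = eigh_tridiagonal(diag, off)
    return E
\end{verbatim}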
The line intensity $I_{\tilde{\nu}}$ associated with the transition energy $\tilde{\nu}$ can be computed as\cite{Bernath2005Spectra}
\begin{align}
I_{\tilde{\nu}} = \frac{(2J_f+1)}{8\pi c Q \tilde{\nu}^2}
\mathcal{A}_{\nu_i,J_i\to \nu_f,J_f}
e^{-{E_i}/{kT}}
\left(
1-e^{-hc\tilde{\nu}/kT}
\right),
\end{align}
in which $E_{i}$ is the energy of the initial state and $k$ is the Boltzmann constant.
The partition function $Q$ at a temperature $T$ is evaluated using
\begin{align}
Q = \sum_{l} \left(2J_l+1\right) e^{-{E_l}/{kT}},
\end{align}
where $l$ runs over rovibrational states.
We used $T=296$~K.
The quantum numbers of initial (final) states are labeled by $\nu_i$ and $J_i$ ($\nu_f$ and $J_f$).
Using the rovibrational wave functions ($\Psi_{\nu_i,J_i}$ and $\Psi_{\nu_f,J_f}$) and the dipole-moment function $M(r)$,
the Einstein coefficient $\mathcal{A}_{\nu_i,J_i\to \nu_f,J_f}$ is
\begin{align}
\mathcal{A}_{\nu_i,J_i\to \nu_f,J_f}
= \frac{8\pi^2\tilde{\nu}^3}{3\epsilon_{0}c^3\hbar}
\frac{S_{J_i,J_f}}{2J_i+1}
\left| \langle \Psi_{\nu_i,J_i}|M(r)|\Psi_{\nu_f,J_f} \rangle \right|^2,
\label{defA}
\end{align}
where $\epsilon_{0}$ is the vacuum permittivity
and $S_{J_i,J_f}$ is the H{\"o}nl--London factor,\cite{Hansson2005}
which is ${\rm{max}}(J_i,J_f)$ for the electronic ground states of HI and TlH.
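These expressions translate directly into code; the sketch below (ours, assuming all energies and transition wavenumbers are given in $\mathrm{cm}^{-1}$ and hard-coding the physical constants) evaluates $Q$ and $I_{\tilde{\nu}}$:
\begin{verbatim}
import numpy as np

def line_intensity(nu_t, A, E_i, J_f, lev_E, lev_J, T=296.0):
    """HITRAN-style line intensity (sketch); energies in cm^-1.
    nu_t: transition wavenumber, A: Einstein coefficient (1/s),
    E_i: initial-state energy, lev_*: all rovibrational levels."""
    c2 = 1.4387770        # second radiation constant hc/k in cm K
    c = 2.99792458e10     # speed of light in cm/s
    Q = np.sum((2 * lev_J + 1) * np.exp(-c2 * lev_E / T))
    return ((2 * J_f + 1) / (8 * np.pi * c * Q * nu_t**2) * A
            * np.exp(-c2 * E_i / T) * (1 - np.exp(-c2 * nu_t / T)))
\end{verbatim}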
\begin{table}
\caption{Root-mean-square deviations of the rovibrational transition energies of H$^{127}$I and $^{205}$TlH in $\rm{cm^{-1}}$ computed by the four-component methods.
The HITRAN database\cite{Hitran2012JQSRT} and experimental data\cite{Urban1989CPL} were used as references.\label{rovib}}
\begin{ruledtabular}
\begin{tabular}{lccccc}
& \multicolumn{1}{c}{CASSCF} & \multicolumn{1}{c}{CASPT2} & \multicolumn{1}{c}{NEVPT2} & \multicolumn{1}{c}{MRCI+Q} & \multicolumn{1}{c}{Origin}\\\hline
HI\\
$\nu=0\to1$ & 120 & 36 & 21 & 8 & 2230 \\
$\nu=0\to2$ & 245 & 73 & 43 & 15 & 4379 \\
$\nu=0\to3$ & 378 & 117 & 68 & 24 & 6448 \\
$\nu=0\to4$ & 519 & 163 & 96 & 34 & 8435 \\
TlH\\
$\nu=0\to1$ & 92 & 34 & 47 & 17 & 1345 \\
$\nu=1\to2$ & 93 & 33 & 46 & 15 & 1300 \\
$\nu=2\to3$ & 92 & 33 & 46 & 13 & 1255 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Numerical Results}
\begin{figure}
\includegraphics[keepaspectratio,width=0.48\textwidth]{hi.pdf}
\caption{Potential energy curves of HI computed by four-component CASSCF, CASPT2, NEVPT2, and ic-MRCI+Q. The experimental
bond length and dissociation energy are 1.609~{\AA} and 3.20~eV, respectively.\label{hipec}}
\end{figure}
\begin{figure*}[t]
\includegraphics[keepaspectratio,width=0.95\textwidth]{hispec3.pdf}
\caption{Simulated rovibrational absorption spectra of H$^{127}$I
at 296~K using four-component CASSCF, CASPT2, and ic-MRCI+Q.
The bottom panels are the observed lines from the HITRAN database
(hyperfine-split lines are averaged for comparison).
\label{hispec}}
\end{figure*}
First, to benchmark the accuracy,
we applied four-component CASSCF, CASPT2, NEVPT2, and ic-MRCI+Q to an HI molecule, for which there are reliable experimental reference data.\cite{Hitran2012JQSRT}
Uncontracted Dyall's cv3z\cite{Dyall2006TCA} and uncontracted cc-pVTZ\cite{Dunning1989JCP} basis sets were used for I and H, respectively.
Gaussian-type nuclear charge distributions were used.\cite{Visscher1997ADNDT}
The $4s$, $4p$, $4d$, $5s$, and $5p$ electrons of I and the $1s$ electron of H were correlated (i.e., 26 correlated electrons; 28 electrons were frozen),
among which $5s$, $5p$ of I and $1s$ of H were treated in the active space.
In correlated calculations, virtual orbitals were truncated at 55~$E_\mathrm{h}$.
The total number of correlated spin orbitals was 206.
The computed potential energy curves relative to their minima are shown in Fig.~\ref{hipec}.
The equilibrium bond lengths obtained by CASPT2, NEVPT2, and ic-MRCI+Q were 1.608, 1.609, and 1.606~\AA{}, respectively, which are in good agreement with
the experimental value (1.609~\AA{}).\cite{Herzbergbook}
The dissociation energies $D_e$ were estimated via extrapolation to be 3.0, 3.0, and 3.1~eV, respectively.
The experimental value is 3.20~eV.\cite{Herzbergbook}
We then simulated the absorption spectra based on
these potential energy curves interpolated by five-point piecewise polynomials.
Dipole moments were computed at each point as electric-field derivatives [$M(r) = \partial E(r)/\partial \mathcal{E}_z$ where $\mathcal{E}_z$ is an external electric field along the molecular axis] using finite difference formulas.
The Level 8.2 program\cite{level8.2} was used to solve the radial Schr{\"o}dinger equation [Eq.~\eqref{radse}] and to evaluate $\mathcal{A}_{\nu_i,J_i\to \nu_f,J_f}$ [Eq.~\eqref{defA}].
The partition function and absorption spectra were computed using a program of Yorke et al.\cite{pn_exomol}
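The finite-field derivative mentioned above amounts to a central-difference formula; a minimal sketch (the field strength is our illustrative choice) is:
\begin{verbatim}
def dipole_finite_field(energy, eps=1.0e-4):
    """Central finite difference M(r) = dE/dE_z at fixed r (sketch).
    `energy(f)` returns the total energy at field strength f."""
    return (energy(+eps) - energy(-eps)) / (2.0 * eps)
\end{verbatim}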
The computed spectra for the fundamental, overtone, and second overtone transitions are presented in Fig.~\ref{hispec},
in which the HITRAN reference spectra\cite{Hitran2012JQSRT} are also shown.
Overall, the line positions were accurately reproduced by ic-MRCI+Q within 0.5~\% (8~cm$^{-1}$ for the fundamental transitions and 34~cm$^{-1}$ for
the third overtone transitions),
attesting to the consistent accuracy of ic-MRCI+Q throughout the potential energy surface.
The line intensities of the overtone and second overtone transitions agreed well with the reference data.
Our results overestimated the intensity of the fundamental transitions,
which is mainly because the intensity is largely suppressed by
the almost flat dipole-moment curve around the equilibrium geometry;
therefore, it is highly sensitive to the accuracy of the computed dipole moments.\cite{Li2013JQSRT}
The errors in the line positions computed by CASPT2 and NEVPT2 were found to be three or four times larger than those from ic-MRCI+Q.
\begin{figure}
\includegraphics[keepaspectratio,width=0.48\textwidth]{tlh.pdf}
\caption{Potential energy curves of TlH computed by four-component CASSCF, CASPT2, NEVPT2, and ic-MRCI+Q. The experimental
bond length and dissociation energy are 1.872~{\AA} and 2.06~eV, respectively.\label{tlhpec}}
\end{figure}
\begin{figure}
\includegraphics[keepaspectratio,width=0.48\textwidth]{fig2rev.pdf}
\caption{Simulated rovibrational absorption spectra of $^{205}$TlH
at 296~K using four-component CASSCF, CASPT2, and ic-MRCI+Q.
Dotted lines in the bottom panel are the experimental line positions taken from Ref.~\onlinecite{Urban1989CPL}
superimposed by shifted ic-MRCI+Q spectra.
\label{tlhspec}}
\end{figure}
Next, we calculated the potential energy curve of TlH using CASSCF, CASPT2, NEVPT2, and ic-MRCI+Q.
The electronic structure of TlH around the equilibrium geometry has been studied by many authors.\cite{Fagri2001TCA,Zeng2010JCP,Knecht2014JCP}
We used uncontracted Dyall's cv3z\cite{Dyall2006TCA} and uncontracted cc-pVTZ\cite{Dunning1989JCP} basis sets for Tl and H, respectively,
in conjunction with Gaussian-type nuclear charge distributions.\cite{Visscher1997ADNDT}
The full-valence active space (4 electrons in the $6s$ and $6p$ orbitals of Tl and the $1s$ orbital of H) was used.
The $5s$, $5p$, $4f$, $5d$, $6s$, and $6p$ electrons of Tl and the $1s$ electron of H were correlated (i.e., 36 correlated electrons).
The virtual orbitals were again truncated at 55~$E_\mathrm{h}$, resulting in 248 correlated spin orbitals.
The potential energy curves of TlH computed by four-component CASSCF, CASPT2, NEVPT2, and ic-MRCI+Q are shown in Fig.~\ref{tlhpec}.
The dissociation energy $D_e$ from ic-MRCI+Q (2.00~eV) was in excellent agreement with the experimental value ($2.06$~eV),\cite{Herzbergbook} while
CASPT2 underestimated it by 0.2~eV (1.84~eV).
The experimental equilibrium bond length ($1.872$~\AA{}) was also accurately reproduced by ic-MRCI+Q ($1.872$~\AA{}).
Those by CASPT2 and NEVPT2 were 1.870 and 1.885~\AA{}, respectively.
NEVPT2 was found to be less accurate than CASPT2 for this molecule, and its accuracy deteriorated as the bond was stretched.
The absorption spectra of TlH were likewise computed using
the energies at 20 grid points between 1.3 {\AA} and 6.0 {\AA}.
The computed spectra are presented in Fig.~\ref{tlhspec}.
The experimental line intensity was not found in the literature.
The root-mean-square errors in the computed rovibrational transition energies are also listed in Table~\ref{rovib},
in which the experimental results from Ref.~\onlinecite{Urban1989CPL} are used as reference values.
The errors in the transition energies were around 35, 45, and 15~cm$^{-1}$ for CASPT2, NEVPT2, and ic-MRCI+Q, respectively.
Apart from the shift, the line positions computed by ic-MRCI+Q agree perfectly with the experimental results.
The remaining errors include incomplete treatment of dynamical correlation in the ic-MRCI+Q model, the effects of the higher-order quantum-electrodynamics interactions,
and the non-Born--Oppenheimer contributions.
The wall times for one iteration of relativistic CASPT2 and ic-MRCI on TlH were
roughly 2 and 80 minutes using two Xeon E5-2650 CPUs (2.0~GHz, 8 cores each) on a single node.
The wall time for non-relativistic ic-MRCI per iteration is about 16 seconds; therefore, relativistic ic-MRCI is roughly 300 times more expensive than the non-relativistic counterpart.
A factor of $2^6=64$ stems from the fact that relativistic ic-MRCI does not use spin symmetry. An additional factor of 3 should be ascribed to
matrix multiplication in complex arithmetic that is three times as expensive as that in real arithmetic.
The rest is due to other factors such as caching and optimized libraries.
\section{Conclusions}
In summary, we have developed four-component relativistic ic-MRCI, CASPT2, and NEVPT2 based on the Dirac Hamiltonian and full internal contraction.
The relativistic ic-MRCI and CASPT2 programs have been implemented using automatic code generation.
The programs are interfaced to the open-source {\sc bagel} package.\cite{bagel} The code generator {\sc smith3} is also publicly available.\cite{smith}
The accuracy of these methods has been presented by computing the entire potential energy curves of HI and TlH and directly comparing
calculated rovibrational transition energies with the experimental data.
It has been shown that ic-MRCI+Q can reproduce experimental transition energies with 0.5~\% and 1~\% accuracy for HI and TlH, respectively,
up to high-lying rovibrational transitions using uncontracted triple-$\zeta$ basis sets without any corrections or extrapolations.
Currently the size of ic-MRCI and CASPT2 calculations is limited by the memory requirement for two-electron MO integrals that are stored in core,
which is somewhat problematic especially because uncontracted one-electron basis functions (with energy cut-offs) have to be used for heavy elements.
Furthermore, wall times for multi-state ic-MRCI calculations scale cubically with respect to the number of states, and become prohibitively long when several states are included.
To address these problems, the parallelization of the programs
based on the {\sc tiledarray} library of Calvin and Valeev\cite{tiledarray} is under development in our group.
Our relativistic NEVPT2 code does not store 4-index intermediates and is heavily parallelized (to be presented elsewhere);
therefore, it is ready for use in chemical applications.
\section*{Supporting Information}
The working equations for relativistic NEVPT2 and the rovibrational transition energies and absorption spectra
of HI and TlH can be found in supporting information.
The computer-generated ic-MRCI equations are also included.
\begin{acknowledgments}
T.S. has been supported by the Air Force Office of Scientific Research Young Investigator Program (AFOSR Grant No.~FA9550-15-1-0031).
The development of the relativistic CASSCF program, on which this work is based, has been supported by the National Science Foundation CAREER Award (CHE-1351598).
W.M. has been supported by Grant-in-Aid for Young Scientists (B) (Grant No. 15K17815) from the Ministry of Education, Culture, Sports, Science and Technology Japan (MEXT).
\end{acknowledgments}
\section{Introduction}
The determination of hadronic structure from first principles is among the key topics of investigation
in lattice QCD. Central to our understanding of hadron structure are the structure functions which
describe the distribution
of quarks and gluons inside hadrons. In recent years several promising approaches have been proposed,
among them the calculation of the quasi parton distribution functions (for a review see~\cite{Cichy:2018mum}).
Our group has initiated a program to compute the structure functions from the forward Compton amplitude
of the nucleon~\cite{Chambers:2017dov, Young:2019}. A central motivation for this is to overcome the
issues of renormalization, operator mixing and the restriction to light-cone operators.
The starting point is the forward Compton amplitude of the nucleon~\cite{dis},
\begin{equation}
T_{\mu\nu}(p,q) = \rho_{\lambda \lambda^\prime}\! \int\! {\rm d}^4\!x\, {\rm e}^{iq\cdot x} \langle p,\lambda^\prime |T J_\mu(x) J_\nu(0)|p,\lambda\rangle \,,
\label{prod}
\end{equation}
which involves the time ordered product of electromagnetic currents sandwiched between nucleon states of
momentum $p$ and polarization $\lambda$, where $q$ is the momentum of the virtual photon
and $\rho$ is the polarization density matrix. In view of our investigation below we consider only the
unpolarized structure functions. In the unphysical region ($|p\cdot q| < q^2/2 $)
the relation of $T_{\mu\nu}(p,q)$ to the structure functions
$F_1(x,q^2), F_2(x,q^2)$ is given by~\cite{dis}
\begin{eqnarray}
T_{\mu\nu}(p,q) &=& \left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2}\right)\, 4\omega \int_0^1 dx\, \frac{\omega x}{1-(\omega x)^2}\, F_1(x,q^2)\nonumber \\[0.5em]
&+& \left(p_\mu-\frac{p\cdot q}{q^2}q_\mu\right)\left(p_\nu-\frac{p\cdot q}{q^2}q_\nu\right)\, \frac{8\omega}{2p\cdot q} \int_0^1 dx\, \frac{1}{1-(\omega x)^2}\, F_2(x,q^2)\,,
\label{opes2}
\end{eqnarray}
with $\omega = 2p\cdot q/q^2$, discarding the subtraction term~\cite{Chambers:2017dov}. To simplify the numerical calculation, we may
choose $\mu = \nu = 3$ and $p_3 = q_3 = q_4 = 0$. We then have
\begin{equation}
T_{33}(p,q) = 4\omega \int_0^1 dx\, \frac{\omega x}{1-(\omega x)^2} F_1(x,q^2) \equiv
\int_0^1 dx\, K(x,\omega)\, F_1(x,q^2)\,.
\label{opess2}
\end{equation}
The matrix element $T_{33}(p,q)$ can be computed most efficiently by a simple extension of the
Feynman--Hellmann method~\cite{Chambers:2017dov,Horsley:2012pz}.
Performing a Taylor expansion of (\ref{opess2}) leads to a simple relation between
the moments $t_j=\int_0^1\,dx\,x^j\,F_1(x)$ of the structure function and the $\omega$--dependent
Compton amplitude
\begin{equation}
T_{33}(\omega) = 4\, \big(\omega^2 \,t_1 + \omega^4 \,t_3 + \cdots + \omega^{2M} \,t_{2M-1} + \dots \big) \,.
\label{poly}
\end{equation}
From these we then determine the moments of the parton distributions
$\mu_j$ from $t_j \sim \mu_j/2$ neglecting logs and terms $O(1/q^2)$.
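For orientation, the polynomial structure of (\ref{poly}) suggests a simple least-squares extraction of the lowest moments; the Python sketch below (ours; the Bayesian procedure described later is what we actually employ) illustrates the idea:
\begin{verbatim}
import numpy as np

def moments_from_T33(omegas, T33, n_mom=6):
    """Fit T33 = 4*(t1 w^2 + t3 w^4 + ...) by least squares (sketch).
    Returns t[j] ~ t_{2j+1} for j = 0, ..., n_mom-1."""
    A = 4.0 * omegas[:, None] ** (2 * np.arange(1, n_mom + 1))
    t, *_ = np.linalg.lstsq(A, T33, rcond=None)
    return t
\end{verbatim}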
\section{Problems and solutions of the Fredholm integral equation}
Formula (\ref{opess2}) is the basic relation for our investigation. It tells us how to extract the structure
function $F_1(x,q^2)$ given that we have available lattice data for the Compton amplitude $T_{33}(p,q)$. Unfortunately,
it is a Fredholm integral equation of the first kind. Those equations are known to be ill--posed. E.g., they are
extremely sensitive to very small perturbations of the data~\cite{Hansen} --
in our case to the lattice results of $T_{33}(p,q)$. Additionally, the solutions are not guaranteed to be unique.
There is no general solution method available.
If a successful numerical strategy can be found at all, it always depends on the specific kernel $K$.
Therefore, a careful study of possible approaches
is needed. An analogous problem arises in the reconstruction of Ioffe time pseudo parton distribution
functions (pdf) and was
investigated in great detail in~\cite{Karpie:2019eiq}.
In order to test some possible numerical methods we generate mock data for the Compton amplitude.
As an example
we choose a valence type up quark distribution
\begin{equation}
x\,p^{\rm ref}_{u_v}(x) = 5.107 \, x^{0.8}\,(1-x)^3
\label{pref}
\end{equation}
chosen to satisfy the momentum sum rule
\begin{equation}
\int_0^1 \,{\rm dx} \,x\,p^{\rm ref}_{u_v}(x) = 1/3\,.
\end{equation}
This function is then used to generate the $T_{33}$ data via (\ref{opess2}) and to compare
with the results of our tested inversion algorithms.
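A minimal Python sketch of this mock-data generation (midpoint-rule quadrature; grid sizes are our choice) reads:
\begin{verbatim}
import numpy as np

def kernel(x, w):
    # K(x, omega) = 4 w^2 x / (1 - (w x)^2), cf. the kernel above
    return 4.0 * w**2 * x / (1.0 - (w * x)**2)

def p_ref(x):
    # valence-type input, x*p(x) = 5.107 x^0.8 (1-x)^3
    return 5.107 * x**(-0.2) * (1.0 - x)**3

def mock_T33(omegas, N=50):
    """Mock Compton amplitude by midpoint-rule quadrature (sketch)."""
    x = (np.arange(N) + 0.5) / N             # midpoints in (0, 1)
    K = kernel(x[None, :], omegas[:, None])  # shape (M, N)
    return K @ p_ref(x) / N
\end{verbatim}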
\begin{figure}[b]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotpdfSVD.eps}
\end{subfigure}
\hspace{0.06\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotpdfBMC.eps}
\label{fig:T33dat1}
\end{subfigure}
\vspace{-5mm}
\caption{Left: The pdf as
obtained from the SVD. The red curve is $x\,p^{\rm ref}_{u_v}(x)$ (\protect\ref{pref}). The blue shadowed area shows the
variation of the result $x\,p^{\rm SVD}_{u_v}(x)$ due to a $\pm 10 \%$ variation of $T_{33}$. Right: The pdf from
the BMC approach. The red curve is again the input (\protect\ref{pref}). The shadowed area is the
$68 \%$ quantile.}
\label{fig:T33dat}
\end{figure}
The numerical inversion requires a discretization of (\ref{opess2})
\begin{equation}
T_{33}(\omega_i) = \sum_{j=1}^N \, K(x_j,\omega_i)\, p(x_j)
\leftrightarrow T_{33,i}= \sum_{j=1}^N \, K_{ji}\, p_j \,, i = 1 \dots M \,,
\label{FId}
\end{equation}
where in general we have $N \neq M$.
One basic method to solve (\ref{FId}) for the $p_j$ is the singular value decomposition (SVD)~\cite{nr}.
It has the advantage that one does not need to make any further input assumptions about the expected form of the
wanted $p(x)$. On the other hand there is a certain freedom in omitting small singular values. Additionally,
using our kernel $K(x,\omega)$ (\ref{opess2}) we have cancellations of very large numbers which increases with the number of included
singular values. This demands very precise lattice data in order to get meaningful results. The result of
the inversion $x\,p^{\rm SVD}_{u_v}(x)$ is shown in the left panel of Fig. \ref{fig:T33dat} for $N=50$,
$M=10$ and $0< \omega < 1$. One recognizes
that the result follows the input distribution with some small oscillations. The integral using the mean value is
$ \int_0^1\,{\rm dx} x\,p^{\rm SVD}_{u_v}(x) \approx 0.33\,.$
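In code, the truncated-SVD inversion amounts to the following sketch (the number of retained singular values is the free choice discussed above):
\begin{verbatim}
import numpy as np

def svd_invert(T33, K, n_keep):
    """Truncated-SVD solution of T33 = K p (sketch).
    K: (M, N) kernel matrix; n_keep: retained singular values."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeff = (U.T @ T33)[:n_keep] / s[:n_keep]
    return Vt[:n_keep].T @ coeff
\end{verbatim}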
An alternative approach uses some prior model information concerning the distribution and
tries to refine it according to the available data.
It belongs to the class of Bayesian methods.
One variant has been discussed in detail in~\cite{Karpie:2019eiq}. We follow a slightly different
procedure here (see also~\cite{Young:2019}). Our model assumption is the general form of a valence quark type distribution
\begin{equation}
p^{\rm val}(x,a,b,c) = \frac{a \,x^{b}\, (1-x)^c \,\Gamma (b+c+3)}{\Gamma (b+2) \Gamma (c+1)}\,.
\label{pval}
\end{equation}
We can determine the Compton amplitude from (\ref{opess2}) analytically (for $\omega < 1$)
\begin{eqnarray}
T_{33}^{\rm val}(\omega)&=& 2^{-b-c-1}\, \sqrt{\pi }\, a \,\omega ^2 \, \Gamma (b+c+3) \times\nonumber\\
& & _3\tilde{F}_2\left(1,\frac{b+2}{2},\frac{b+3}{2};\frac{1}{2} (b+c+3),\frac{1}{2}
(b+c+4);\omega ^2\right)\\
\label{Tanalytic}
&=& c_1(a,b,c)\, \omega^2 + c_3(a,b,c)\, \omega^4 + c_5(a,b,c)\, \omega^6 + \dots\,,
\label{Tanalyticser}
\end{eqnarray}
where $_3\tilde{F}_2$ is a regularized hypergeometric function. The power expansion of
$T_{33}^{\rm val}(\omega)$ is given in (\ref{Tanalyticser}). We proceed by first generating
$N_{MC}$ Monte Carlo sets
of model parameters $\{a,b,c\}_{k=1,\dots,N_{MC}}$. With these sets the quadratic deviations $\chi^2_k$
\begin{equation}
\chi^2_k = \sum_{n,j}\,\left(T_{33,n}-T_{33,(k)}^{\rm val}(\omega_n)\right)\,C^{-1}_{nj}\,
\left(T_{33,j}-T_{33,(k)}^{\rm val}(\omega_j)\right)
\label{chi2}
\end{equation}
are computed. $T_{33,n}$ are the data for $\omega_n$, whereas $T_{33,(k)}^{\rm val}(\omega_n)$
is Eq.~\eqref{Tanalytic}
evaluated for the triple $\{a,b,c\}_{k}$ at $\omega_n$. $C^{-1}_{nj}$ is
the inverse covariance matrix of the data. The set $\chi^2_k$ is used
to make a weighted random choice out of the total set $\{a,b,c\}_k$ by the likelihood $\exp(-\chi^2_k/2)$.
This constitutes our sample
parameter set from which we compute the means and the quantiles. Also in this case the model input
is crucial: the final values must lie inside the sampled MC sets, and the $\chi^2_k$ should contain reasonably small
minimal values. We call this method a Bayesian Monte Carlo (BMC) approach.
The resulting distribution is shown in the right panel of Fig. \ref{fig:T33dat}. The initial values
of the parameters are drawn from uniform distributions around suitable central values.
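A condensed sketch of the BMC procedure (our own naming; the prior intervals and the model evaluation are passed in from outside) is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def bmc_sample(T33, Cinv, model, lo, hi, n_mc=100_000, n_keep=500):
    """Weighted resampling by the likelihood exp(-chi^2/2) (sketch).
    model(p) returns T33^val at all omega_n for one triple p=(a,b,c);
    lo, hi: prior interval bounds for the uniform draws."""
    params = rng.uniform(lo, hi, size=(n_mc, len(lo)))
    chi2 = np.empty(n_mc)
    for k, p in enumerate(params):
        d = T33 - model(p)
        chi2[k] = d @ Cinv @ d
    w = np.exp(-0.5 * (chi2 - chi2.min()))  # likelihood weights
    idx = rng.choice(n_mc, size=n_keep, p=w / w.sum())
    return params[idx]
\end{verbatim}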
An analogous procedure can be used to determine the moments via relation (\ref{poly}). In this case
the moments $t_j$ play the role of the parameters and are obtained directly from this approach.
Comparing the SVD and the BMC approaches, we favor the latter because the SVD solution exhibits
oscillations around the exact result even for ideal mock data. For real lattice data, which are far more
scattered and often carry more significant uncertainties, the SVD inversion gives very
unstable results.
\section{First results from lattice data}
Now we investigate these methods with our latest lattice data for the nucleon Compton amplitude
for the connected part of the combination $u-d$. We use $32^3\times 64 \,(\beta=5.5)$ lattices at the
SU(3)--flavour symmetric
point $(\kappa_l=\kappa_s)$ and $M_\pi \approx 470$ MeV.
In this paper only data for $q^2 = 2.7,\, 3.5,\, 4.6\,$ GeV$^2$
are included. They are shown in the left panel of Fig. \ref{fig:t33latt}.
\begin{figure}[htb]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotT33m.eps}
\end{subfigure}
\hspace{0.04\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotT33moms27.eps}
\end{subfigure}
\caption{Left: $T^{u-d}_{33}$ lattice data in the range $0 < \omega < 1$ for the three $q^2$ values
used for the analysis. Right: Result for (\protect\ref{T33mom}) together with the data for $q^2=2.7$ GeV$^2$.}
\label{fig:t33latt}
\end{figure}
As a first step we determine the first moments using (\ref{poly}) as the defining relation.
In this paper we restrict ourselves to order $\omega^{12}$.
In order to get information about the $q^2$--dependence we apply our BMC procedure to each of the three
data sets mentioned above. We compute the $\chi_k^2$ values from (\ref{chi2}), now with
\begin{equation}
T_{33,(k)}^{\rm val}(\omega_n)=4\, \sum_{j=1}^6\,t^{(k)}_{2j-1}\,\omega_n^{2j} \,.
\label{T33mom}
\end{equation}
We select $N_{MC}$ sets by sampling $\{a, b, c\}_k$ uniformly from intervals suggested by phenomenology
and
determine the moments $t^{(k)}_i(a,b,c)$ according to their valence quark behavior.
(A random selection with $t_1^{(k)} \ge t_3^{(k)} \ge \dots \ge 0$ as discussed in~\cite{Young:2019} leads to very similar results.)
We generate 100,000 MC data sets and from that we
select a subset of 500 samples weighted by the likelihood $\exp(-\chi^2/2)$. From this subset we compute the $t_i$.
The resulting
Compton amplitude is given in the right panel of Fig. \ref{fig:t33latt} for $q^2 = 2.7 $ GeV$^2$ where we
observe a reasonable agreement with the data.
The moments themselves are presented in Fig. \ref{fig:momres}. As expected, they decrease
with increasing order.
For the first
moment we observe a slight dependence on $q^2$.
\begin{figure}[b]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotmoms27q.eps}
\end{subfigure}
\hspace{0.04\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotfirstmomsallq.eps}
\end{subfigure}
\caption{Left: The first moments for $q^2=2.7$ GeV$^2$. The error bars are the quantiles encompassing
$68 \%$ of the data.
Right: The first moment for $q^2=2.7,3.5$ and $4.6$ GeV$^2$.}
\label{fig:momres}
\end{figure}
In the same spirit we try to obtain the complete parton distribution function. As data
we use the subset with $q^2=2.7$ GeV$^2$.
Concerning the priors, we are guided by the success of the moments determination above. We
sample the first moment uniformly out of the interval $[0 \dots 1]$ and let the BMC method
compute it from the lattice data. This is supported by
our model ansatz (\ref{pval}) since
\begin{equation}
\langle x \rangle = \int_0^1\, {\rm dx}\,\, x \,p^{\rm val}(x,a,b,c) = a \,.
\label{pvalm}
\end{equation}
For the parameters $b$ and $c$ we choose input intervals suggested by phenomenology. Other prior
schemes will be investigated in a forthcoming paper.
Using the mean curve and its quantile borders we find
$ \int_0^1\, {\rm dx}\,\, x \,p^{\rm res}_{u-d}(x)~= 0.58^{+25}_{-26} \,$,
consistent with the first moment given in
Fig. \ref{fig:momres}. Additionally, inserting the resulting mean values of the
parameters in (\ref{Tanalyticser}) we
find $c_1 \approx 1.09$ -- also compatible with the moments.
The results are shown in Fig. \ref{fig:pvalres}. One recognizes a strong similarity of the left
panel in Fig. \ref{fig:pvalres} with the right panel of Fig. \ref{fig:t33latt} which proves
the consistency of both approaches.
\begin{figure}[t]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotT33pdf27.eps}
\end{subfigure}
\hspace{0.04\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotpdfBMClatt.eps}
\end{subfigure}
\caption{Left: The Compton amplitude (\protect\ref{Tanalytic}) with parameters obtained with BMC together
with the data.
Right: The resulting valence type distribution function. The shaded area is the $68 \%$ quantile.}
\label{fig:pvalres}
\end{figure}
In order to demonstrate the effect of the BMC procedure we show in Fig.~\ref{fig:respar} the change
of the parameters from the uniform input values (blue) to the final values (red). The histogram in the
right panel demonstrates the transition from uniform input to the peaked distribution triggered
by the $\chi_k^2$ values. One recognizes that the procedure does not much influence the values
of the parameters
$b$ and $c$, but significantly shrinks the range of parameter $a$ towards the first moment.
\begin{figure}[!htb]
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{plotparams1.eps}
\end{subfigure}
\hspace{0.04\textwidth}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{histo27a.eps}
\end{subfigure}
\caption{Left: The parameters (a,b,c) of the valence distribution (\protect\ref{pval}). The blue lines are
the input value ranges, the red data points result from the BMC approach. Again the error bars are the $68 \%$ quantiles.
Right: Histogram for parameter $a$. Blue: input parameter range. Yellow: parameter range of the sample according
to $\chi^2$ selection.}
\label{fig:respar}
\end{figure}
\section*{Acknowledgements}
The numerical configuration generation (using the BQCD lattice
QCD program \cite{Haar:2017ubh}) and data analysis
(using the Chroma software library \cite{Edwards:2004sx}) was carried out
on the IBM BlueGene/Q and HP Tesseract using DIRAC 2 resources
(EPCC, Edinburgh, UK), the IBM BlueGene/Q (NIC, J\"ulich, Germany)
and the Cray XC40 at HLRN (The North-German Supercomputer
Alliance), the NCI National Facility in Canberra, Australia
(supported by the Australian Commonwealth Government)
and Phoenix (University of Adelaide).
RH was supported by STFC through grant ST/P000630/1.
HP was supported by DFG Grant No. PE 2792/2-1.
PELR was supported in part by the STFC under contract ST/G00062X/1.
GS was supported by DFG Grant No. SCHI 179/8-1.
RDY and JMZ were supported by the Australian Research Council Grants
DP140103067 and DP190100297.
We thank all funding agencies.
\section*{Introduction}
\label{intro}
Understanding the price formation mechanisms is undoubtedly among the most exciting challenges of modern finance. \emph{Market impact} refers to the way market participants' actions mechanically affect prices. Significant progress has been made in this direction during the past decades \cite{Hasbrouk2007,bouchaud2008markets,weber2005order,Bouchaud_impact_2010}. A notable breakthrough was the empirical discovery that the aggregate price impact of a meta-order\footnote{A ``meta-order'' (or parent order) is a bundle of orders
corresponding to a single trading decision. A meta-order is typically traded incrementally through a sequence of child orders.} is a concave function (approximately square-root) of its size $Q$ \cite{Grinold,Almgren2005,Toth2011,Donier2015}. In the recent past, so called ``latent'' order book models \cite{Toth2011,mastromatteo2014agent,MPRL,DonierLLOB} have proven to be a fruitful framework to theoretically address the question of market impact, among others.\\
As a precise mathematical incarnation of the latent order book idea, the zero-intelligence LLOB model of Donier \textit{et al.} \cite{DonierLLOB} was successful at providing a theoretical underpinning to the square root impact law. The LLOB model is based on a continuous mean field setting, that leads to a set of reaction-diffusion equations for the dynamics of the latent bid and ask volume densities. In the infinite memory limit (where the agents intentions, unless executed, stay in the latent book forever and there are no arrivals of new intentions), the latent order book becomes exactly linear and impact exactly square-root. Furthermore, this assumption leads to zero permanent impact of uninformed trades, and an inverse square root decay of impact as a function of time.
While the LLOB model is fully consistent mathematically, it suffers from at least two major difficulties when confronted with micro-data. First, a strict square-root law is only recovered in the limit where the execution rate $m_0$ of the meta-order is larger than the normal execution rate $J$ of the market itself -- whereas most meta-order impact data is in the
opposite limit $m_0 \lesssim 0.1 J$. Second, the theoretical inverse square-root impact decay is too fast and leads to significant short time mean-reversion effects, not observed in real prices. \\
The aim of the present paper is to show that introducing different timescales for the renewal of liquidity allows one to cure both the above deficiencies. In view of the way financial markets operate, this step is very natural: agents are indeed expected to display a broad spectrum of timescales, from low frequency institutional investors to High Frequency Traders (HFT). We show that provided the execution rate $m_0$ is large compared to the low-frequency flow, but small compared to $J$, the impact of a meta-order crosses over from a linear behaviour at very small $Q$ to a square-root law in a regime of $Q$s that can be made compatible with empirical data. We show that in the presence of a continuous, power-law distribution of memory times, the temporal decay of impact can be tuned to reconcile persistent order flow with diffusive price dynamics (often referred to as the \emph{diffusivity puzzle}) \cite{bouchaud2008markets,bouchaud2004fluctuations,Lillo2004}. We argue that the permanent impact of uninformed trades is fixed by the slowest liquidity memory time, beyond which mean-reversion effects disappear. Interestingly, the permanent impact is found to be linear { in the executed volume $Q$ and independent of the trading rate}, as dictated by no-arbitrage arguments.\\
Our paper is organized as follows. We first recall the LLOB model of \cite{DonierLLOB} in Section~\ref{llobrecall}. We then explore in Section~\ref{fincandep} the implications of finite cancellation and deposition rates (finite memory) in the reaction-diffusion equations, notably regarding permanent impact (Section~\ref{permimp}). We generalize the reaction-diffusion model to account for several deposition and cancellation rates. In particular, we analyse in Section~\ref{multifsec} the simplified case of a market with two sorts of agents: long memory agents with vanishing deposition and cancellation rates, and short memory high frequency agents (somehow playing the role of market makers). Finally, we consider in Section~\ref{densnusec} the more realistic case of a continuous distribution of cancellation and deposition rates and show that such a framework provides an alternative way to solve the diffusivity puzzle (see \cite{BenzaquenFLOB}) by adjusting the distribution of cancellation and deposition rates. Many details of the calculations are provided in the Appendices.
\section{Locally linear order book model}
\label{llobrecall}
We here briefly recall the main ingredients of the locally linear order book (LLOB) model as presented by Donier \textit{et al.} \cite{DonierLLOB}. In the continuous ``hydrodynamic'' limit we define the latent volume densities of limit orders in the order book: $\varphi_{\mathrm{b}}(x,t)$ (bid side) and $\varphi_{\mathrm{a}}(x,t)$ (ask side) at price $x$ and time $t$. The latter obey the following set of partial differential equations:
\begin{subeqnarray}
\partial_t \varphi_{\mathrm{b}} &=& D\partial_{xx}\varphi_{\mathrm{b}} -\nu\varphi_{\mathrm{b}} + \lambda \Theta(x_t-x) - R_\mathrm{ab}(x) \slabel{goveqsnl1}\\
\partial_t \varphi_{\mathrm{a}} &=& D\partial_{xx}\varphi_{\mathrm{a}} -\nu\varphi_{\mathrm{a}} + \lambda \Theta(x-x_t) - R_\mathrm{ab}(x)\ ,\quad \
\slabel{goveqsnl2}
\end{subeqnarray}
where the different contributions on the right hand side respectively signify (from left to right): heterogeneous reassessments of agents intentions with diffusivity $D$ (diffusion terms), cancellations with rate $\nu$ (death terms), arrivals of new intentions with intensity $\lambda$ (deposition terms), and matching of buy/sell intentions (reaction terms). The price $x_t$ is conventionally defined through the equation $ \varphi_{\mathrm{b}}(x_t,t)= \varphi_{\mathrm{a}}(x_t,t)$.
\begin{figure}[t!]
\begin{center}
\resizebox{0.48\columnwidth}{!}{ \includegraphics{Obstat.pdf}}
\end{center}
\caption{Stationary order book $\phi^\mathrm{st}(\xi)$ as computed by Donier \emph{et al.} \cite{DonierLLOB}. The linear approximation holds up to $\xi_{\mathrm c}=\sqrt{D\nu^{-1}}$ and the volume $Q_\mathrm{lin.}$ of the grey triangles is of order $Q_\mathrm{lin.}:=\mathcal{L}\xi_\mathrm{c}^2= J \nu^{-1}$.}
\label{Obstat}
\end{figure}
The non-linearity arising from the reaction term in Eqs. \eqref{goveqsnl1} and \eqref{goveqsnl2} can be abstracted away by defining $ \phi(x,t) = \varphi_{\textrm b}(x, t) - \varphi_{\textrm a}(x, t)$, which solves:
\begin{eqnarray}
\partial_t \phi &=& D \partial_{xx} \phi -\nu\phi + s(x,t) \ ,\label{firsteqsrc}
\end{eqnarray}
where the source term reads $s(x,t) = \lambda \,\textrm{sign} (x_t-x)$ and the price $x_t$ is defined as the solution of
\begin{eqnarray}
\phi(x_t,t) &=& 0 \ . \label{priceeq}
\end{eqnarray}
Setting $\xi=x-x_t$, the stationary order book can easily be obtained as: $\phi^\mathrm{st}(\xi)=-({\lambda}/{\nu}) \, \textrm{sign}(\xi) [1-\exp(-|\xi|/\xi_{\mathrm c})]$ where $\xi_{\mathrm c}=\sqrt{D\nu^{-1}}$ denotes the typical length scale below which the order book can be considered to be linear: $\phi^\mathrm{st}(\xi) \approx -\mathcal L \xi$ (see Fig.~\ref{Obstat}). The slope $\mathcal L := \lambda/\sqrt{\nu D}$ defines the {\it liquidity} of the market, from which the total execution rate $J$ can be computed since:
\begin{eqnarray}
J := \left. \partial_\xi \phi^\mathrm{st}(\xi) \right|_{\xi=0} = D \mathcal{L}.
\end{eqnarray}
Donier \emph{et al.} \cite{DonierLLOB} focussed on the \emph{infinite memory} limit, namely $\nu, \lambda \rightarrow 0$ while keeping $\mathcal L \sim \lambda {\nu}^{-1/2}$ constant, such that the latent order book becomes exactly linear since in that limit $\xi_{\mathrm c} \to \infty$. This limit considerably simplifies the mathematical analysis, in particular concerning the impact of a meta-order. An important remark must however be introduced at this point: although the limit $\nu \to 0$ is taken in \cite{DonierLLOB}, it is assumed that
the latent order book is still able to reach its stationary state $\phi^\mathrm{st}(\xi)$ before a meta-order is introduced. In other words, the limit $\nu \to 0$ is understood in a way such that the starting time of the meta-order is large compared to $\nu^{-1}$.
\section{Price trajectories with finite cancellation and deposition rates}
\label{fincandep}
As mentioned in the introduction, we here wish to explore the effects of non-vanishing cancellation and deposition rates, or, said differently, the behaviour of market impact for execution times larger than $\nu^{-1}$. The general solution of Eq.~\eqref{firsteqsrc} is given by:
\begin{eqnarray}
\phi(x,t) &=& \left( \mathcal G_\nu \star \phi_0\right)(x,t) + \int \text d y\int_0^\infty \text d \tau\, \mathcal G_\nu(x-y,t-\tau) s(y,\tau) \ , \label{convol}
\end{eqnarray}
where $\phi_0(x) =\phi(x,0)$ denotes the initial condition, and where $\mathcal G_\nu (x,t) = e^{-\nu t}\mathcal G (x,t)$ with $\mathcal G$ the diffusion kernel:
\begin{eqnarray}
\mathcal G(x,t) &=& \Theta(t) \frac{e^{-\frac{x^2}{4Dt}}}{\sqrt{4\pi Dt}} \ .
\end{eqnarray}
Following Donier \emph{et al.} \cite{DonierLLOB}, we introduce a buy (sell) meta-order as an extra point-like source of buy (sell) particles with intensity rate $m_t$ such that the source term in Eq.~\eqref{firsteqsrc} becomes: $s(x,t) = m_t \delta(x-x_t)\cdot \mathds{1}_{[0,T]} +\lambda \,\textrm{sign} (x_t-x)$, where $T$ denotes the time horizon of the execution. In all the following we shall focus on buy meta-orders -- without loss of generality since within the present framework everything is perfectly symmetric. Performing the integral over space in Eq.~\eqref{convol} and setting $\phi_0(x)=\phi^{\mathrm{st}}(x)$ yields:
\begin{eqnarray}
\phi(x,t) &=& \phi^\mathrm{st}(x)e^{-\nu t} + \int_0^{\min (t,T)} \text d \tau\, m_\tau \mathcal G_\nu(x-x_\tau,t-\tau) -\lambda\int_0^{t } \text d \tau \, \textrm{erf}\left[ \frac{x-x_\tau}{\sqrt{4D(t-\tau)}} \right] e^{-\nu(t-\tau)} \ .\label{mastereq}
\end{eqnarray}
The equation for price, \eqref{priceeq}, is not analytically tractable in the general case, but different interesting limit cases can be investigated. In particular, focussing on the case of constant participation rates $m_t = m_0$, one may consider:
\begin{itemize}
\item (\emph{i}) Small participation rate $m_0\ll J$ \emph{vs} large participation rate $m_0\gg J$.\medskip
\item (\emph{ii}) Fast execution $\nu T\ll 1$ (the particules in the book are barely renewed during the meta-order execution) \emph{vs} slow execution $\nu T\gg 1$ (the particles in the book are completely renewed, and the memory of the initial state has been lost).\medskip
\item (\emph{iii}) Small meta-order volumes $Q:=m_0 T\ll Q_\mathrm{lin.}$ (for which the linear approximation of the stationary book is appropriate, see Fig.~\ref{Obstat}) \emph{vs} large volumes $Q \gg Q_\mathrm{lin.}$ (for which the linear approximation is no longer valid).
\end{itemize}
So in principle, one has to consider $2^3 = 8$ possible limit regimes. However, some regimes are mutually exclusive so that only 6 of them remain. A convenient way to summarize the results obtained for each of the limit cases mentioned above is to expand the price trajectory $x_t$ up to first order in $\sqrt{\nu}$ as:\footnote{Note that working at constant $\mathcal L$ implies $\lambda=O\big(\sqrt{\nu}\big)$.}
\begin{eqnarray}
x_t &=& \alpha \left[ z_t^0+\sqrt{\nu} z_t^1+O(\nu)\right] \ ,\label{alphaz0z1}
\end{eqnarray}
where $z_t^0$ and $z_t^1$ denote respectively the 0th order and 1st order contributions. Table~\ref{tableimpact} gathers the results for fast execution ($\nu T\ll 1$) and small meta-order volumes
($Q \ll Q_\mathrm{lin.}$). Note that the leading correction term $z_t^1$ is negative, i.e. the extra incoming flux of limit orders acts to lower the impact of the meta-order, see Fig.~\ref{pricetraj}.
The price trajectory for slow execution and/or large meta-order volumes, on the other hand, simply reads:
\begin{eqnarray}
x_t &=& \frac{m_0 \nu}{\lambda} t \ .
\end{eqnarray}
The corresponding calculations and explanations are given in Appendix A.
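These regimes can also be verified by integrating the dynamics numerically. A minimal explicit finite-difference sketch of Eq.~\eqref{firsteqsrc} with the meta-order source (grid, boundary handling and the price-location rule are our simplifications; stability requires $\Delta t \le \Delta x^2/2D$) is:
\begin{verbatim}
import numpy as np

def llob_impact(m0, lam, nu, D, T, dt, dx, xmax):
    """Explicit scheme for d_t phi = D phi_xx - nu phi
    + lam*sign(x_t - x) + m0*delta(x - x_t) (sketch)."""
    x = np.arange(-xmax, xmax + dx, dx)
    xi_c = np.sqrt(D / nu)
    phi = -(lam / nu) * np.sign(x) * (1.0 - np.exp(-np.abs(x) / xi_c))
    prices = []
    for _ in range(int(T / dt)):
        i = np.argmin(np.abs(phi))      # grid point closest to the price
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
        phi += dt * (D * lap - nu * phi + lam * np.sign(x[i] - x))
        phi[i] += dt * m0 / dx          # point-like meta-order source
        prices.append(x[i])
    return np.array(prices)
\end{verbatim}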
\medskip
\begin{table*}[t!]
\centering
\resizebox{1\textwidth}{!}{ \includegraphics{table}}\medskip
\caption{Price trajectories for different impact regimes (see Eq.~\eqref{alphaz0z1}). We set $\beta_0 := { \frac12 }\left[{{m_0}/(2\pi J})\right]^{1/2}$.}
\label{tableimpact}
\end{table*}
\section{Permanent impact as a finite memory effect}
\label{permimp}
As mentioned in the introduction, the impact relaxation following the execution is an equally important question. We here compute the impact decay after a meta-order execution. In the limit of small cancellation rates, we look for a scaling solution of the form $z^1_t= T F(\nu t)$ (see Eq.~\eqref{alphaz0z1}) where $F$ is a dimensionless function. We consider the case where $\nu T \ll 1$ and $Q \ll Q_\mathrm{lin.}$. Long after the end of the execution of the meta-order, i.e. when $t\gg T$, Eq.~\eqref{priceeq} together with Eqs.~\eqref{mastereq} and \eqref{alphaz0z1} becomes (to leading order):
\begin{eqnarray}
0&=& -\frac{\lambda\alpha T}{\sqrt{D}}F(\nu t)e^{-\nu t} - 2\lambda\alpha\int_0^{t } \text d \tau \, \frac{z_t^0-z_\tau^0}{\sqrt{4\pi D(t-\tau)}} e^{-\nu(t-\tau)} \nonumber \\
&& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad\quad \quad \quad -2\lambda \alpha T\sqrt{\nu} \int_0^t \text d \tau \, \frac{F(\nu t)-F(\nu \tau )}{\sqrt{4\pi D(t-\tau)}} e^{-\nu(t-\tau)}
\ .\label{}
\end{eqnarray}
Letting $u = \nu t$ and $z_t^0 = \beta/\sqrt{u}$ (see Table \ref{tableimpact}) yields:
\begin{eqnarray}
0&=&\sqrt{\pi}e^{-u} F(u) + \beta \int_0^{u } \text d v \, \frac{\sqrt{v} -\sqrt{u}}{\sqrt{uv(u-v)}} e^{v-u} + \int_0^{u } \text d v \, \frac{F(u)-F(v) }{\sqrt{u-v}} e^{v-u}
\ .\label{intequ}
\end{eqnarray}
Finally seeking $F$ asymptotically of the form $F(u) = F_\infty +Bu^{-\gamma}+Cu^{-\delta}e^{-u}$
one can show that:
\begin{eqnarray}
F(u)&=&F_\infty -\frac{\beta}{\sqrt{u}}\left[1-e^{-u}\right] \qquad (u \gg 1)\ ,\label{Fu}
\end{eqnarray}
with the permanent component given by $F_\infty = \beta\sqrt{\pi}$, where $\beta$ depends on the fast/slow nature of the execution (see Table \ref{tableimpact}).\\
Injecting the solution for $F(u)$ in Eq.~\eqref{alphaz0z1}, and taking the limit of large times, one finds that the $t^{-1/2}$ decay of the 0th order term is exactly compensated
by the $\beta u^{-1/2}$ term coming from $F(u)$, showing that the asymptotic value of the impact, given by $I_\infty= \alpha \sqrt{\nu} { T} F_\infty$, is reached exponentially fast as $\nu t \to \infty$ (see Fig.~\ref{pricetraj}). This result can be interpreted as follows. At the end of execution (when the peak impact is reached), the impact starts decaying towards zero in a slow power law fashion (see \cite{DonierLLOB}) until approximately $t \sim \nu^{-1}$, beyond which all memory is lost (since the book has been globally renewed). Impact cannot decay anymore, since the previous reference price has been forgotten.
Note that in the limit of large meta-order volumes and/or slow executions, all memory is already lost at the end of the execution and the permanent impact trivially matches the peak impact (see Fig.~\ref{pricetraj}).\\
\begin{figure}[t!]
\begin{center}
\resizebox{0.5\columnwidth}{!}{\includegraphics{pricetraj}}
\end{center}
\caption{Top graph: Price trajectory during and after a buy meta-order execution for $\nu T\ll 1$. (Black curve) 0th order result from \cite{DonierLLOB}. (Orange curve) 1st order result. (Blue curve) 1st order correction (see Eq.~\eqref{alphaz0z1}). Bottom graph: Price trajectory for $\nu T\gg1 $. Note that the $x$-axis is not to scale since $\nu^{-1}\ll$ (resp. $\gg$) $ T$.}
\label{pricetraj}
\end{figure}
An important remark is in order here. Using Table \ref{tableimpact}, one finds that $I_\infty { = \frac12} \xi_c (Q/Q_\mathrm{lin.})$ in both the small and large participation regime. In other words, we find that the permanent impact is {\it linear} in the executed volume $Q$, as dictated by no-arbitrage arguments { \cite{huberman2004price,gatheral2010no}} and compatible with the classical Kyle framework { \cite{kyle1985continuous}}.
\section{Impact with fast and slow traders}
\label{multifsec}
\subsection{Set up of the problem}
As stated in the introduction, one major issue in the impact results of the LLOB model as presented by Donier \textit{et al.} \cite{DonierLLOB} is the following. Empirically, the impact
of meta-orders is only weakly dependent on the participation rate $m_0/J$ (see e.g. \cite{Toth2011}). The corresponding \emph{square root law} is commonly written as:
\begin{eqnarray}
I_Q &:=& \langle x_T \rangle = Y \sigma \sqrt{\frac Q V} \ , \label{empiricalimp}
\end{eqnarray}
where $\sigma$ is the daily volatility, $V$ is the daily traded volume, and $Y$ is a numerical constant of order unity. Note that $I_Q$ only depends on the total volume of the meta-order $Q=m_0 T$, and not on $m_0$ (or equivalently on the time $T$). \\
As one can check from Table \ref{tableimpact}, the independence of impact on $m_0$ only holds in the large participation rate limit ($m_0\gg J$). However, most investors choose to operate in the opposite limit of small participation rates $m_0 \ll J$, and all the available data is indeed restricted to $m_0/J \lesssim 0.1$. Here we offer a possible way out of this conundrum. The intuition is that the total market turnover $J$ is dominated by high frequency traders/market makers, whereas resistance to slow meta-orders can only be provided by slow participants on the other side of the book.
More precisely, consider that only two sorts of agents co-exist in the market (see Section~\ref{densnusec} for a continuous range of frequencies):
\begin{enumerate}
\item Slow agents with vanishing cancellation and deposition rates: $\nu_{\text{s}} T \rightarrow 0$, while keeping the corresponding liquidity $\mathcal L_{\text{s}}:= \lambda_{\text{s}}/\sqrt{\nu_{\text{s}} D}$ finite; and
\item Fast agents with large cancellation and deposition rates, $\nu_{\text{f}} T \gg 1$, such that $\mathcal L_{\text{f}}:= \lambda_{\text{f}}/\sqrt{\nu_{\text{f}} D} \gg \mathcal L_{\text{s}}$.
\end{enumerate}
The system of partial differential equations to solve now reads:
\begin{subeqnarray}
\partial_t \phi_{\text{s}} &=& D \partial_{xx} \phi_{\text{s}} -\nu_{\text{s}}\phi_{\text{s}} +s_{\text{s}}(x,t) \slabel{phi1}
\\
\partial_t \phi_{\text{f}} &=& D \partial_{xx} \phi_{\text{f}} -\nu_{\text{f}}\phi_{\text{f}} +s_{\text{f}}(x,t) \ ,\slabel{phi2}
\end{subeqnarray}
where $s_k(x,t) = \lambda_k \,\textrm{sign} (x_{kt}-x) + m_{kt}\delta(x-x_{kt}) $, together with the conditions:
\begin{eqnarray}
m_{\text{s}t}+ m_{\text{f}t} &=& m_0 \label{ratesequal} \\
x_{\text{s}t}=x_{\text{f}t} &=& x_t \label{priceequal}\ .
\end{eqnarray}
Equation~\eqref{ratesequal} means that the meta-order is executed against slow and fast agents, respectively contributing to the rates $m_{\text{s}t}$ and $m_{\text{f}t}$. Equation~\eqref{priceequal} simply means that there is a unique transaction price, the same for slow and for fast agents.
The total order book volume density is then given by $\phi =\phi_{\text{s}}+\phi_{\text{f}}$. In particular, in the limit of slow/fast agents discussed above the stationary order book is given by the sum of $\phi_{\text{s}}^\mathrm{st}(x) \approx -\mathcal L_{\text{s}} x$ and $\phi_{\text{f}}^\mathrm{st}(x) \approx - (\lambda_{\text{f}}/\nu_{\text{f}})\textrm{sign}(x)$ (see Fig.~\ref{Obstat_multi}). The total transaction rate now reads
\begin{eqnarray}
J = \ D \left|\partial_x\left[ \phi_{\text{s}}^\mathrm{st}+\phi_{\text{f}}^\mathrm{st}\right]\right|_{x=0}=J_{\text{s}}+J_{\text{f}},
\end{eqnarray}
where $J_{\text{f}} \gg J_{\text{s}}$ (which notably implies that $J \approx J_{\text{f}}$).
\begin{figure}[t!]
\begin{center}
\resizebox{0.48\columnwidth}{!}{ \includegraphics{OBstat_multi.pdf}}
\end{center}
\caption{Stationary double-frequency order book $\phi^\textrm{st}(x)=\phi_{\text{s}}^\textrm{st}(x)$ (purple) $+\ \phi_{\text{f}}^\textrm{st}(x)$ (green) (see Section \ref{multifsec}).}
\label{Obstat_multi}
\end{figure}
\subsection{From linear to square-root impact}
We now focus on the regime where the meta-order intensity is large compared to the average transaction rate of slow traders, but small compared to the total transaction rate of the market, to wit: $J_{\text{s}} \ll m_0 \ll J$. In this limit Eqs.~\eqref{phi1} and \eqref{phi2}, together with the corresponding price setting equations $\phi_k(x_{kt},t) \equiv 0$ yield (see Appendix B):
\begin{subeqnarray}
x_{\text{s}t} &=&\left(\frac2{\mathcal L_{\text{s}}} \int_0^t \textrm d \tau \, m_{\text{s}\tau} \right)^{1/2} \slabel{x1tspec}\\
x_{\text{f}t} &=& \frac{\nu_{\text{f}}}{\lambda_{\text{f}}} \int_0^t \textrm d \tau \, m_{\text{f}\tau} \ . \slabel{x2tspec} \label{x1tx2t}
\end{subeqnarray}
Differentiating Eq.~\eqref{priceequal} with respect to time together with Eqs.~\eqref{x1tx2t} and using Eq.~\eqref{ratesequal} yields:
\begin{eqnarray}
m_{\text{f}t} &=&\frac{m_0}{\sqrt{1+\frac{t}{t^\star}}}, \quad \text{with} \quad t^\star:=\frac{1}{2\nu_{\text{f}}} \frac{J_{\text{f}}^2}{J_{\text{s}}m_0}, \label{m2}
\end{eqnarray}
and $m_{\text{s}t}=m_0-m_{\text{f}t}$. Equation \eqref{m2} indicates that most of the incoming meta-order is executed against the fast agents for $t < t^\star$, but the slow agents then take over for $t>t^\star$ (see Fig.~\ref{pricetraj_multi}).
The resulting price trajectory reads:
\begin{eqnarray}
x_{t} &=&\frac{\lambda_{\text{f}}}{\mathcal L_{\text{s}} \nu_{\text{f}}}\left(\sqrt{1+\frac{t}{t^\star}}-1\right) \, , \label{ptrajmultif}
\end{eqnarray}
which crosses over from a linear regime when $t \ll t^\star$ to a square root regime for $t \gg t^\star$ (see Fig.~\ref{pricetraj_multi}). For a meta-order of volume $Q$ executed during a time interval $T$, the corresponding impact is linear in $Q$ when $T < t^\star$ and square-root (with $I_Q$ independent of $m_0$) when $T > t^\star$. This last regime takes place when $Q > m_0 t^\star$, which can be rewritten as:
\begin{eqnarray}
\frac{Q}{V_{\text{d}}} > \frac{1}{\nu_{\text{f}} T_{\text{d}}} \frac{J}{J_{\text{s}}},
\end{eqnarray}
where $V_{\text{d}}$ is the total daily volume and $T_{\text{d}}$ is one trading day. Numerically, with an HFT cancellation rate of -- say -- $\nu_{\text{f}} = 1$ sec$^{-1}$ and $J_{\text{s}} = 0.1 J$, one finds that the square-root law holds when the participation rate of the meta-order exceeds $3 \times 10^{-4}$, which is not unreasonable when compared with impact data. Interestingly, the cross-over between a linear impact for small $Q$ and a square-root for larger $Q$ is consistent with the data presented by Zarinelli \emph{et al.} \cite{Zarinelli} (but note that the authors fit a logarithmic impact curve instead). \\
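For reference, the crossover trajectory \eqref{ptrajmultif} and the execution-rate split \eqref{m2} are straightforward to evaluate numerically (a sketch, with our own naming):
\begin{verbatim}
import numpy as np

def crossover(t, m0, J_s, J_f, nu_f, lam_f, L_s):
    """Price x_t and fast-agent rate m_ft for J_s << m0 << J (sketch)."""
    t_star = J_f**2 / (2.0 * nu_f * J_s * m0)
    x_t = lam_f / (L_s * nu_f) * (np.sqrt(1.0 + t / t_star) - 1.0)
    m_ft = m0 / np.sqrt(1.0 + t / t_star)
    return x_t, m_ft
\end{verbatim}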
\begin{figure}[t!]
\begin{center}
\resizebox{0.39\columnwidth}{!}{ \includegraphics{pricetraj_multi.pdf}}
\end{center}
\caption{Execution rates $m_{it}$ (top) and price trajectory (bottom) within the double-frequency order book model (see Section \ref{multifsec}).}
\label{pricetraj_multi}
\end{figure}
\subsection{Impact decay}
Regarding the impact decay for $t > T$, the problem to solve is that of Eqs.~\eqref{phi1}, \eqref{phi2} and \eqref{priceequal}, except that Eq.~\eqref{ratesequal} becomes:
\begin{eqnarray}
m_{\text{s}t}+m_{\text{f}t} &=& 0 \ .\label{ratessumzero}
\end{eqnarray}
The solution decays to zero asymptotically ($t\gg T$) as $x_t \sim t^{-1/2}$ (see Appendix B). Given the results of Section~\ref{permimp} in the presence of finite memory agents,
the absence of permanent impact may seem counter-intuitive. In order to understand this feature of the double-frequency order book model in the limit $\nu_{\text{s}}\, T \rightarrow 0$, $\nu_{\text{f}}\, T\gg 1$, one can look at the stationary order book. As one moves away from the price the ratio of slow over fast volume fractions ($\phi_{\text{s}}/\phi_{\text{f}}$) grows linearly to infinity. Hence, the shape of the latent order book for $|x| \gg x^\star$ matches that of the infinite memory single-agent model originally presented by Donier \textit{et al.} \cite{DonierLLOB} (see Fig.~\ref{Obstat_multi}). This explains the mechanical return of the price to its initial value before execution, encoded in the slow latent order book. Note that in the limit of very small but finite $\nu_{\text{s}}$, the permanent impact is of order $\sqrt{\nu_{\text{s}}}$, as obtained in Section~\ref{permimp}.
\\
\subsection{The linear regime}
The regime of very small participation rates for which $m_0 \ll J_{\text{s}},J_{\text{f}}$ is also of conceptual interest. In such a case Eq.~\eqref{x1tspec} must be replaced with:
\begin{eqnarray}
x_{\text{s}t} &=&\frac{1}{\mathcal L_{\text{s}}} \int_0^t \textrm d \tau \, \frac{m_{\text{s}\tau}}{\sqrt{4\pi D (t-\tau)}} \label{x1tspecbis} \ ,
\end{eqnarray}
which together with Eqs.~\eqref{x2tspec}, \eqref{ratesequal} and \eqref{priceequal} yields, in Laplace space (see Appendix B):
\begin{eqnarray}
\widehat m_{\text{s}p} &=&\frac1p \frac{m_0}{1+\sqrt{pt^\dagger}} \ , \label{m1p}
\end{eqnarray}
where $t^\dagger = (m_0/\pi J_{\text{s}}) t^\star$, with $t^\star$ defined in Eq.~\eqref{m2}. For small times ($t \ll t^\dagger$) one obtains $m_{\text{s}t}= 2m_0 \sqrt{t/(\pi t^\dagger)}$, while for larger times ($t^\dagger \ll t < T$), $m_{\text{s}t}=m_0[1-\sqrt{t^\dagger/(\pi t)}]$. Finally, using again Eqs.~\eqref{x2tspec}, \eqref{ratesequal} and \eqref{priceequal} yields $x_t = (\nu_{\text{f}}/\lambda_{\text{f}})m_0 t$ for $t \ll t^\dagger$ and $x_t = (\nu_{\text{f}}/\lambda_{\text{f}})m_0\sqrt{t t^\dagger/\pi}$ for $t^\dagger \ll t < T$, identical in terms of scaling to the price dynamics observed in the case $J_{\text{s}} \ll m_0 \ll J_{\text{f}}$ discussed above. The asymptotic impact decay is identical to the one obtained in that case as well.
\\
\section{Multi-frequency order book}
\label{densnusec}
The double-frequency framework {presented in Sec.~\ref{multifsec}} can be extended to the more realistic case of a continuous range of cancellation and deposition rates. Formally, one has to solve an infinite set of equations, labeled by the cancellation rate $\nu$:
\begin{eqnarray}
\partial_t \phi_\nu = D \partial_{xx} \phi_\nu -\nu\phi_\nu +s_\nu(x,t)\ , \label{phinuc}
\end{eqnarray}
where $\phi_\nu(x,t)$ denotes the contribution of agents with typical frequency $\nu$ to the latent order book, and $s_\nu(x,t) = \lambda_\nu \,\textrm{sign} (x_{ \nu t}-x) + m_{\nu t}\delta(x-x_{ \nu t})$, with $\lambda_\nu =\mathcal L_{\nu}\sqrt{\nu D}$. Equation \eqref{phinuc} must then be completed with:
\begin{subeqnarray}
\int_0^\infty \textrm d\nu \rho(\nu) m_{\nu t} &=&m_t \slabel{densnutaux} \\
x_{\nu t} &=&x_{t} \qquad \forall \nu \, , \label{densnutauxboth}
\end{subeqnarray}
where $\rho(\nu)$ denotes the distribution of cancellation rates $\nu$, and where we have allowed for an arbitrary order flow $m_t$. Solving exactly the above system of equations analytically is too ambitious a task. In the following, we present a simplified analysis that allows us to obtain an approximate scaling solution of the problem for a power law distribution of frequencies $\nu$.
\subsection{The propagator regime}
\label{diffusivitypuz}
We first assume, for simplicity, that the order flow $J_\nu$ is independent of frequency (see later for a more general case), and consider the case when $m_t \ll J$, $\forall t$. Although not trivially true, we assume (and check later on the solution) that this implies $m_{\nu t}\ll J$ $\forall \nu$, such that we can assume linear response for all $\nu$. Schematically, there are two regimes, depending on whether $t \gg \nu^{-1}$ -- in which case the corresponding density $\phi_\nu(x,t)$ has lost all its memory, or $t \ll \nu^{-1}$. In the former case the price trajectory follows Eq.~\eqref{x1tspecbis}, while in the latter case it is rather Eq.~\eqref{x2tspec} that rules the dynamics. One thus has:
\begin{subeqnarray}
\text{For } \nu t\ll 1 \quad x_{ t} &=&\frac{1}{\mathcal L\sqrt{D}} \int_0^t \textrm d \tau \, \frac{ m_{\nu\tau}}{\sqrt{4\pi (t-\tau)} } \slabel{} \\
\text{For } \nu t\gg 1 \quad x_{ t} &=&\frac{\nu^{1/2}}{\mathcal L\sqrt{D}} \int_0^t \textrm d \tau \, m_{\nu \tau} \ . \slabel{} \label{densnux}
\end{subeqnarray}
Inverting Eqs.~\eqref{densnux} and defining $\Psi(t) := 2/\sqrt{\pi t}$ yields (see Appendix B and in particular Eq.~\eqref{m1tx1point}):
\begin{subeqnarray}
\text{For } \nu t\ll 1 \quad m_{\nu t} &=& { \mathcal L\sqrt{D}} \int_0^t \textrm d \tau \, \Psi(t-\tau){\dot x_{ \tau }} \slabel{nupetit} \\
\text{For } \nu t\gg 1 \quad m_{\nu t} &=& \mathcal L \sqrt{D}{\nu^{-1/2}} \dot x_{ t} \ . \slabel{nugrand} \label{nupetitgrand}
\end{subeqnarray}
Our approximation is to assume that $m_{\nu t}$ in Eq.~\eqref{densnutaux} is effectively given by Eq.~\eqref{nupetit} as soon as $\nu<1/t$ and by Eq.~\eqref{nugrand} when $\nu>1/t$ such that Eq.~\eqref{densnutaux} becomes:
\begin{eqnarray}
\int_0^{1/t} \textrm d\nu \rho(\nu)\bigg[\int_0^t \textrm d\tau \Psi(t-\tau){\dot x_{ \tau }}\bigg]+ \int_{1/t}^\infty \textrm d\nu \rho(\nu) \bigg[\nu^{-1/2} \dot x_{ t} \bigg] &=&\frac {m_t}{\mathcal L \sqrt{D}} \ . \label{densnucentral}
\end{eqnarray}
Equation~\eqref{densnucentral} may be conveniently re-written as\footnote{We have implicitly defined the dimensionless functions $G(t) = \int_0^{1/t} \textrm d\nu \rho(\nu)$ and $H(t) =t_{\textrm c}^{-1/2} \int_{1/t}^\infty \textrm d\nu \rho(\nu) \nu^{-1/2}$.}
$\int_0^t \textrm d\tau \big[G(t) \Psi(t-\tau) + H(t)t_{\textrm c}^{1/2} \delta(t-\tau) \big] \dot x_\tau= {m_t}/({\mathcal L \sqrt{D}})$.
Formally inverting the kernel $M(t,\tau):=\big[ G(t) \Psi(t-\tau) + H(t)t_{\textrm c}^{1/2} \delta(t-\tau) \big]$ then yields the price dynamics $\dot x_t$ as a linear convolution of the past order flow $m_{\tau \leq t}$. Note that when $m_t \to 0$, $\dot x_t$ is also small and hence, using Eqs.~\eqref{nupetitgrand}, all $m_{\nu t}$ are small as well, justifying our use of Eqs.~\eqref{densnux} for all frequencies.
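The inversion itself is easily carried out numerically; the Python sketch below (our own discretization, with illustrative grid parameters, for the power-law frequency distribution introduced in the next subsection) builds the lower-triangular matrix $M(t_i,t_j)$ and inverts it, in the spirit of Fig.~\ref{fig:kernel}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

alpha, t_c = 0.25, 1.0
n, dt = 300, 0.5
t = dt * np.arange(1, n + 1)
Z = t_c**alpha / gamma(alpha)
G = gammainc(alpha, t_c / t)               # int_0^{1/t} rho(nu) dnu
H = np.array([quad(lambda nu: Z * nu**(alpha - 1.5)
                   * np.exp(-nu * t_c), 1/ti, np.inf)[0]
              for ti in t]) / np.sqrt(t_c)
# Discretized M(t,tau) = G(t) Psi(t-tau) + H(t) sqrt(t_c) delta(t-tau)
M = np.zeros((n, n))
for i in range(n):
    M[i, :i] = G[i] * 2 / np.sqrt(np.pi * (t[i] - t[:i])) * dt
    M[i, i] = G[i] * 4 * np.sqrt(dt / np.pi) + H[i] * np.sqrt(t_c)
K = np.linalg.inv(M)
row = np.abs(K[n - 1, : n - 1])
print(row[n // 4], row[n // 2], row[3 * n // 4])  # decay with the lag
\end{verbatim}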
\begin{figure}[t!]
\begin{center}
\resizebox{0.48\columnwidth}{!}{ \includegraphics{Fig_part_5.png}}
\end{center}
\caption{Numerical determination of the kernel $K(t,\tau):=M^{-1}(t,\tau)$, for $\alpha=0.25$. One clearly sees that $K$ decays as $(t-\tau)^{-1/2}$ at large lags. The inset
shows that $K(t,t/2)$ behaves as $t^{\alpha - 1/2}$, as expected.}
\label{fig:kernel}
\end{figure}
\subsection{Resolution of the ``diffusivity puzzle''}
Let us now compute the functions $G$ and $H$ for a specific power-law distribution $\rho(\nu)$ defined as:
\begin{eqnarray}
\rho(\nu)&=& Z \nu^{\alpha-1} e^{-\nu t_{\textrm c}} \ , \label{rhonudens}
\end{eqnarray}
where $\alpha>0$, $t_{\textrm c}$ is a high-frequency cutoff, and $Z=t_{\textrm c}^\alpha/\Gamma(\alpha)$.\footnote{Note that rigorously one should also introduce a low frequency cutoff $\nu_{\textrm{LF}}$ to ensure the existence of a stationary state of the order book in the absence of meta-order. Otherwise, $\langle\nu^{-1}\rangle=\infty$ when $\alpha \leq 1$ and the system does not reach a stationary state (see the end of Section \ref{llobrecall} and \cite{BenzaquenFLOB} for a further discussion of this point).}
For such a distribution, one obtains $G(t) = 1- \Gamma(\alpha,t_{\textrm c}/t)/\Gamma(\alpha)$ and $H(t) = \Gamma(\alpha-1/2,t_{\textrm c}/t)/\Gamma(\alpha)$. In the limit $t\ll t_{\textrm c}, \ G(t)\approx 1$ and $H(t)\approx 0$. In the limit $t\gg t_{\textrm c}, \ G(t)\approx (t/t_{\textrm c})^{-\alpha}/[\alpha\Gamma(\alpha)]$, and the dominant term in the first order expansion of $H(t)$ depends on whether $\alpha \lessgtr 1/2$. One has $H(t|_{\alpha<1/2})\approx 2 (t/t_{\textrm c})^{1/2-\alpha}/[\Gamma(\alpha)(1-2\alpha)]$ and $H(t|_{\alpha>1/2})\approx \Gamma(\alpha-1/2)/\Gamma(\alpha)$. Focussing on the interesting case $\alpha < 1/2$, one finds (see Fig.~\ref{fig:kernel}) that inversion of the kernel $M(t,\tau)$ is dominated, at large times, by the first term $G(t) \Psi(t-\tau)$. Hence, one finds in that regime:\footnote{Taking into account the $H(t)$ contribution turns out not to change the following scaling argument.}
\begin{eqnarray}
x_{t}&\approx&\frac{\alpha \Gamma(\alpha)}{\mathcal L t_{\textrm c}^{\alpha} \sqrt{D} } \int_0^t \textrm d \tau \,
\frac{m_\tau \tau^{\alpha}}{\sqrt{4\pi (t-\tau)} } \ . \label{densnuprop}
\end{eqnarray}
Let us now show that this equation can lead to a diffusive price even in the presence of a long-range correlated order flow. Assuming that $\langle m_t m_{t'}\rangle \sim |t -t'|^{-\gamma}$ with $0 < \gamma < 1$ (defining a long memory process, as found empirically \cite{bouchaud2004fluctuations,bouchaud2008markets}), one finds from Eq. (\ref{densnuprop}) that the mean square price is given by:
\begin{eqnarray}
\langle x_t^2 \rangle \propto \iint_0^t \textrm d \tau \textrm d \tau' \frac{ \langle m_\tau m_{\tau'}\rangle {(\tau \tau')}^{\alpha}}{\sqrt{(t-\tau)(t-\tau')} } \ .
\end{eqnarray}
Changing variables through $\tau \to tu$ and $\tau' \to tv$ easily yields $\langle x_t^2 \rangle \propto t^{1+2\alpha-\gamma}$. Note that the LLOB limit corresponds to a unique low frequency for the latent liquidity. This limit can be formally recovered when $\alpha \to 0$. In this case, we recover the ``disease'' of the LLOB model, namely a mean-reverting, subdiffusive price $\langle x_t^2 \rangle \propto t^{1-\gamma}$ for all values of $\gamma > 0$. Intuitively, the latent liquidity in the LLOB case is too persistent and prevents the price from diffusing.
Imposing price diffusion, i.e. $\langle x_t^2 \rangle \propto t$ finally gives a consistency condition similar in spirit to the one obtained in \cite{bouchaud2004fluctuations}:
\begin{eqnarray}
\alpha&=& \frac{\gamma}{2} < \frac12 \ . \label{alphasgamma}
\end{eqnarray}
Equation~\eqref{alphasgamma} states that for persistent order flow to be compatible with diffusive price dynamics, the long-memory of order flow must be somehow buffered by a long-memory of the liquidity, which makes sense. The present resolution of the diffusivity puzzle -- based on the memory of a multi-frequency self-renewing latent order book -- is similar to, but different from that developed in \cite{BenzaquenFLOB}. In the latter study we assumed the reassessment time of the latent orders to be fat-tailed, leading to a ``fractional'' diffusion equation for $\phi(x,t)$.
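The scaling $\langle x_t^2 \rangle \propto t^{1+2\alpha-\gamma}$ behind Eq.~\eqref{alphasgamma} can be checked by direct quadrature of the double integral above; in the following Python sketch (our own illustrative check) the fitted growth exponent is $\approx 1$ when $\alpha=\gamma/2$, i.e. the price is diffusive.
\begin{verbatim}
import numpy as np

alpha, gam = 0.25, 0.5            # alpha = gam/2: expect <x^2> ~ t

def x2(t, n=300):                 # midpoint quadrature of the integral
    d = t / n
    tau = (np.arange(n) + 0.5) * d
    U, V = np.meshgrid(tau, tau)
    corr = np.abs(U - V)**(-gam)
    np.fill_diagonal(corr, (0.5 * d)**(-gam))   # regularized diagonal
    f = corr * (U * V)**alpha / np.sqrt((t - U) * (t - V))
    return f.sum() * d * d

ts = np.array([50.0, 100.0, 200.0, 400.0])
vals = [x2(t) for t in ts]
print(np.polyfit(np.log(ts), np.log(vals), 1)[0])  # ~1 + 2*alpha - gam
\end{verbatim}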
\subsection{Metaorder impact}
We now relax the constraint that $\lambda_\nu \propto \sqrt{\nu}$ and define $J_{\nu} := J_{\text{hf}} (\nu t_c)^{\zeta}$
with $\zeta>0$, meaning that HFT is the dominant contribution to trading, since in this case
\begin{eqnarray}
J&=& \int_{0}^\infty \textrm d\nu \rho(\nu) J_{\nu} = J_{\text{hf}} \frac{\Gamma(\zeta+\alpha)}{\Gamma(\alpha)}.
\end{eqnarray}
(The case $\zeta<0$ could be considered as well, but is probably less realistic).\\
We consider a meta-order with constant execution rate $m_0 \ll J_{\text{hf}}$. Since $J_\nu$ decreases as the frequency decreases, there must exist a frequency $\nu^\star$ such that
$m_0 = J_{\nu^\star}$, leading to $\nu^\star t_c = (m_0/J_{\text{hf}})^{1/\zeta}$. When $\nu \ll \nu^\star$, we end up in the non-linear, square-root regime where $m_0 \gg J_\nu$ and Eq.~\eqref{x1tspec} holds. Proceeding as in the previous section, we obtain the following approximation for the price trajectory:
\begin{equation}
G_\zeta(t) \bigg[\int_0^{t }\textrm d\tau \Psi(t-\tau)\dot x_{\tau}\mathds{1}_{\{ t\leq \nu^{\star -1}\}} + \frac{x_t \dot x_t}{2\sqrt D} \mathds{1}_{\{ t>\nu^{\star -1}\}} \bigg]+t_c^{1/2} H_\zeta(t) \dot x_{t} =\frac{m_0 \sqrt{D}}{J_{\text{hf}}} \ . \label{densnucentral_s}
\end{equation}
where, in the limit $t\gg t_c$ and $\alpha + \zeta < 1/2$:
\begin{subeqnarray}
G_\zeta(t)&:=& \int_0^{1/t} \textrm d\nu \rho(\nu) (\nu t_c)^{\zeta} \approx \left(\frac{t_c}{t}\right)^{\alpha+\zeta} \frac1{\Gamma(\alpha)(\alpha+\zeta)}\\
H_\zeta(t)&:=& \int_{1/t}^\infty \textrm d\nu \rho(\nu) (\nu t_c)^{\zeta-1/2} \approx \left(\frac{t_c}t\right)^{\alpha+\zeta-1/2} \frac1{\Gamma(\alpha)(1/2 - \alpha-\zeta)}\ .
\end{subeqnarray}
At short times $t \ll \nu^{\star -1}$, Eq.~\eqref{densnucentral_s} boils down to Eq.~\eqref{densnucentral} with $\alpha \rightarrow \alpha+\zeta$ and one correspondingly finds:
\begin{equation}
x_t \propto x_c \frac{m_0}{J_{\textrm{hf}}} \left(\frac{t}{t_c}\right)^{\frac12+\alpha+\zeta} \ ,
\end{equation}
where $x_c := \sqrt{Dt_c}$. For $t \gg \nu^{\star -1}$, the second term in Eq.~\eqref{densnucentral_s} dominates over both the first and the third terms, leading to a generalized
square-root law of the form:
\begin{equation}
x_t \propto x_c \sqrt{\frac{m_0}{J_{\text{hf}}}} \, \left(\frac{t}{t_c}\right)^{\frac{1+\alpha+\zeta}2} \ .
\end{equation}
Compatibility with price diffusion now imposes $\alpha + \zeta = \gamma/2$, which finally leads to (see Fig.~\ref{pricetraj_multidens}):
\begin{subeqnarray}
x_t &\propto& x_c \frac{m_0}{J_{\textrm{hf}}} \,\left(\frac{t}{t_c}\right) ^{\frac{1+\gamma}{2}}, \quad {\text{when}} \quad t \ll t_c \left(\frac{J_{\text{hf}}}{m_0}\right)^{1/\zeta} \\
x_t &\propto& x_c \sqrt{\frac{m_0}{J_{\text{hf}}}}\, \left(\frac{t}{t_c}\right)^{\frac{2+\gamma}{4}}, \quad {\text{when}} \quad t \gg t_c \left(\frac{J_{\text{hf}}}{m_0}\right)^{1/\zeta} \ .
\end{subeqnarray}
In the latter case, setting $\gamma = 1/2$ and $Q = m_0 T$, one finds an impact $I_Q:=x_T$ behaving as\footnote{{Note that $5/8\approx 0.6$ is very close to the empirical impact results reported by Almgren \emph{et al.} and Brockmann \emph{et al.} \cite{Almgren2005,Brockmann2015} in the case of equities, for which $\gamma$ is usually close to 1/2.} } $Q^{5/8}$ as soon as $Q > \upsilon (J_{\text{hf}}/m_0)^{(1-\zeta)/\zeta}$, where we have introduced an elementary volume $\upsilon := J_{\text{hf}} t_c$, which is the volume traded by HFT during their typical cancellation time.
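For the record, the exponents quoted above follow from elementary arithmetic; a two-line Python check with $\gamma=1/2$ (so that $\alpha+\zeta=1/4$):
\begin{verbatim}
gam = 0.5
print((1 + gam) / 2, (2 + gam) / 4)   # 0.75 and 0.625 = 5/8
# With Q = m0*T at fixed m0, x_T ~ T**(5/8) gives I_Q ~ Q**(5/8).
\end{verbatim}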
\begin{figure}[t!]
\begin{center}
\resizebox{0.48\columnwidth}{!}{ \includegraphics{pricetraj_multidens.pdf}}
\end{center}
\caption{{Price trajectory during a constant rate metaorder execution within the multi-frequency order book model. For $\gamma=1/2$, the impact crosses over from a $t^{3/4}$ to a $t^{5/8}$ regime.}}
\label{pricetraj_multidens}
\end{figure}
\section{Conclusion}
\label{concl}
In this work, we have extended the LLOB latent liquidity model \cite{DonierLLOB} to account for the presence of agents with different memory timescales. This has allowed us to overcome several conceptual and empirical difficulties faced by the LLOB model. We have first shown that whenever the longest memory time is finite (rather than divergent in the LLOB model), a permanent component of impact appears, even in the absence of any ``informed'' trades. This permanent impact is {\it linear} in the traded quantity { and independent of the trading rate}, as imposed by no-arbitrage arguments. We have then shown that the square-root impact law holds provided the meta-order participation rate is large compared to the trading rate of ``slow'' actors, which can be small compared to the total trading rate of the market -- itself dominated by high-frequency traders. In the original LLOB model where all actors are slow, a square-root impact law independent of the participation rate only holds when the participation rate is large compared to the total market rate, which is not consistent with empirical data. Finally, the multi-scale latent liquidity model offers a new resolution of the diffusivity paradox, i.e. how an order flow with long-range memory can give rise to a purely diffusive price. We show that when the liquidity memory times are themselves fat-tailed, mean-reversion effects induced by a persistent order book can exactly offset trending effects induced by a persistent order flow. \\
We therefore believe that the multi-timescale latent order book view of markets, encapsulated by Eqs.~\eqref{phinuc} and \eqref{densnutauxboth}, is rich enough to capture a large part of the subtleties of the dynamics of markets. It suggests an alternative framework to build agent based models of markets that generate realistic price series, that complement and maybe simplify previous attempts \cite{Toth2011,mastromatteo2014agent}. A remaining outstanding problem, however, is to reconcile the extended LLOB model proposed in this paper with some other well known ``stylized facts'' of financial price series, namely power-law distributed price jumps and clustered volatility. We hope to report progress in that direction soon. Another, more mathematical endeavour is to give a rigorous meaning to the multi-timescale reaction model underlying Eqs.~\eqref{phinuc} and \eqref{densnutauxboth} and to the approximate solutions provided in this paper. It would be satisfying to extend the no-arbitrage result of Donier et al. \cite{DonierLLOB}, valid for the LLOB model, to the present multi-timescale setting. \\
We thank J. Bonart, A. Darmon, J. de Lataillade, J. Donier, Z. Eisler, A. Fosset, S. Gualdi, I. Mastromatteo, M. Rosenbaum and B. T\'oth for extremely fruitful discussions.
\clearpage
\section*{Appendix A}
\label{sec:Appendix1}
We here provide the calculations that link Eq.~\eqref{alphaz0z1} and Table~\ref{tableimpact} during a meta-order execution ($t\leq T$); the impact decay computations ($t>T$) are given and discussed in Section \ref{permimp}.\\
In the limit of slow execution of the meta-order, one has ${(x_t-x_\tau)^2}\ll {{4D(t-\tau)}}$ such that Eq.~\eqref{mastereq} together with Eq.~\eqref{priceeq} becomes:
\begin{eqnarray}
0 &=& \phi^\mathrm{st}(x_t)e^{-\nu t} + \int_0^{t} \text d \tau\, \frac{m_0}{\sqrt{4\pi D(t-\tau)} } e^{-\nu(t-\tau) } -{2 \lambda}\int_0^{t } \text d \tau \, \frac{x_t-x_\tau}{\sqrt{4\pi D(t-\tau)} } e^{-\nu(t-\tau)} \ .\label{slowshort}
\end{eqnarray}
Interestingly, slow and short execution is only compatible with small meta-order volume\footnote{Equivalently, rapid and long execution is only consistent with large meta-order volume (combining $m_0\gg J$ and $\nu T\gg 1$ implies $m_0 T \gg J \nu^{-1}$).} (indeed, combining $m_0\ll J$ and $\nu T\ll 1$ implies $m_0 T \ll J \nu^{-1}$). Thus for slow and short execution, using the linear approximation $\phi^\mathrm{st}(x_t)=-\mathcal L x_t$ and inserting Eq.~\eqref{alphaz0z1} into Eq.~\eqref{slowshort} yields:
\begin{subeqnarray}
0&=& -\mathcal L \alpha z^0_t +m_0\sqrt{\frac t{\pi D}}\slabel{slowshort0th} \\
0&=& -\mathcal L\sqrt{\nu} z^1_t - 2\lambda \int_0^t \mathrm d \tau\, \frac{z_t^0-z_\tau^0}{\sqrt{4\pi D(t-\tau)}}\ .\slabel{slowshort1st}
\end{subeqnarray}
Equation \eqref{slowshort0th} yields $\alpha = m_0/(\mathcal L\sqrt{\pi D})$ and $z_t^0=\sqrt{t}$, and it follows from Eq.~\eqref{slowshort1st} that $z_t^1 = - kt$ where $k=\sqrt{4/\pi} - \sqrt{\pi/4}$. \\
In the limit of fast execution, one has ${(x_t-x_\tau)^2}\gg {{4D(t-\tau)}}$ such that the meta-order term can be approximated through the saddle point method. Inserting $x_\tau \approx x_t- (t-\tau)\dot x_t$ into the price equation now yields:
\begin{eqnarray}
0 &=& \phi^\mathrm{st}(x_t)e^{-\nu t} + \int_0^{t} \text d \tau\, m_0 \frac{e^{-\frac{\dot x_t^2(t-\tau)}{4D}}}{\sqrt{4\pi D(t-\tau)} } e^{-\nu(t-\tau) }
-{ \lambda}\int_0^{t } \text d \tau \, e^{-\nu(t-\tau)} \ .\label{fastshort}
\end{eqnarray}
Letting $u=t-\tau$ and given ${4D}/{\dot x_t^2}\ll t$ such that $\int_0^t \mathrm du \approx \int_0^\infty \mathrm du$, Eq.~\eqref{fastshort} becomes:
\begin{eqnarray}
0 &=& \phi^\mathrm{st}(x_t)e^{-\nu t} + \frac{m_0}{\sqrt{\dot x_t^2+4D\nu}} +\frac{ \lambda}\nu\left( e^{-\nu t}-1\right) \, .\label{fastshortbis}
\end{eqnarray}
For short execution with small meta-order volume (we use $\phi^\mathrm{st}(x_t)=-\mathcal L x_t$), inserting Eq.~\eqref{alphaz0z1} into Eq.~\eqref{fastshortbis} yields:
\begin{subeqnarray}
0&=& -\mathcal L \alpha z^0_t + \frac{m_0}{\alpha |\dot z_t^0|} \slabel{fastshortbis0th} \\
0&=& -\mathcal L \alpha \sqrt{\nu}z^1_t - \frac{\sqrt{\nu}m_0}{\alpha}\frac{\dot z_t^1}{ (\dot z_t^0)^2} -\lambda t \ .\slabel{fastshortbis1st}
\end{subeqnarray}
Equation \eqref{fastshortbis0th} yields $\alpha = \sqrt{{2m_0}/{\mathcal L}}$ and $z_t^0=\sqrt{t}$, and thus Eq.~\eqref{fastshortbis1st} becomes $\dot z_t^1 + {z_t^1}/({2t}) = - \frac12\sqrt{J/({2m_0})}$. It follows that $z_t^1 = - \frac t3\sqrt{J/(2m_0)} $.
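The last step is readily verified symbolically; a quick sympy check of the ODE $\dot z_t^1 + {z_t^1}/({2t}) = - \frac12\sqrt{J/({2m_0})}$ and of the stated solution:
\begin{verbatim}
import sympy as sp

t, c = sp.symbols('t c', positive=True)   # c stands for sqrt(J/(2 m0))
z1 = -t / 3 * c                           # candidate solution
print(sp.simplify(sp.diff(z1, t) + z1 / (2 * t) + c / 2))   # 0
\end{verbatim}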
For a fast, short and large meta-order, $x_t$ is expected to go well beyond the linear region of the order book such that in a hand-waving static approach (consistent with fast and short execution) one can match $m_0 t$ and the area of a rectangle of sides $x_t$ and $\lambda\nu^{-1}$ (see Fig.~\ref{Obstat}). Letting $x_t=b t$ yields $b = m_0\nu /\lambda$. Note that this result can be recovered by inserting $x_t=b t$ and $\phi^\mathrm{st}(x_t)=-\lambda \nu^{-1}$ into Eq.~\eqref{fastshortbis}. Indeed, at leading order one obtains:
\begin{eqnarray}
0&=& -\frac{\lambda}\nu + \frac{m_0}{ |\dot x_t|} \ ,\label{}
\end{eqnarray}
from which the result trivially follows.\\
For long execution ($\nu T\gg1$) the memory of the initial book is rapidly lost and one expects Markovian behaviour. Inserting again $x_t=b t$ into the price equation and changing variables through $\tau =t(1-u)$ yields:
\begin{eqnarray}
0 &=& m_0 \sqrt{t} \int_0^1 \text d u\,\frac{ e^{-\frac{b ^2tu}{4D}}}{\sqrt{4\pi Du}} e^{-\nu t u}
-\lambda\int_0^1 \text d u \textstyle\,e^{-\nu t u} \, \textrm{erf} \sqrt{\frac{b^2tu}{4D}} \nonumber \\ &=& \left(m_0 - \frac{\lambda b }{\nu}\right)\frac{1}{\sqrt{b ^2+4D\nu }}\, \textrm{erf} \,\textstyle \sqrt{\left( \frac{b ^2}{4D}+\nu\right) t }\ . \label{longall}
\end{eqnarray}
Interestingly, Eq.~\eqref{longall} yields $b = m_0\nu /\lambda$ (regardless of execution rate and meta-order size), which is exactly the result obtained above in the case of fast and short execution of a large meta-order but for different reasons.
\section*{Appendix B }
\label{sec:Appendix2}
We here provide the calculations underlying the double-frequency order book model presented in Section~\ref{multifsec}. In particular {for the case $J_{\text{s}}\ll m_0\ll J_{\text{f}}$}, Eqs.~\eqref{x1tx2t} are obtained as follows. In the limit of large trading intensities the saddle-point method (as detailed in Appendix A) can also be applied to the case of nonconstant execution rates (one lets $m_\tau \approx m_t$ about which the integrand is evaluated, see \cite{DonierLLOB}), in particular one obtains (equivalent to Eq.~\eqref{fastshortbis1st}):
\begin{eqnarray}
\mathcal L_{\text{s}} x_{\text{s}t}|\dot x_{\text{s}t}|&=& {m_{\text{s}t}}\ , \label{}
\end{eqnarray}
which yields Eq.~\eqref{x1tspec}. For the rapid agents ($\nu_{\text{f}}T\gg 1$) we must consider the case of long execution. In particular, an equation tantamount to Eq.~\eqref{longall} can also be derived in the case of nonconstant execution rates. Proceeding in the same manner, one easily obtains:
\begin{eqnarray}
0 &=& \left( m_{\text{f}t} - \frac{\lambda_{\text{f}} \dot x_{\text{f}t} }{\nu_{\text{f}}} \right) \frac{1}{\sqrt{\dot x_{\text{f}t} ^2+4D\nu_{\text{f}} }}\, \textrm{erf} \,\textstyle \sqrt{\left( \frac{\dot x_{\text{f}t}^2}{4D}+\nu_{\text{f}}\right) t }\ , \quad\quad \label{}
\end{eqnarray}
which yields ${ \dot x_{\text{f}t} } = m_{\text{f}t}\nu_{\text{f}} /\lambda_{\text{f}}$ and thus Eq.~\eqref{x2tspec}. Then, as mentioned in Section~\ref{multifsec}, the asymptotic impact decay is obtained from Eqs.~\eqref{phi1}, \eqref{phi2} and \eqref{priceequal}
except that for $t>T$ we replace Eq.~\eqref{ratesequal} with Eq.~\eqref{ratessumzero}. Using Eq.~\eqref{mastereq} together with Eq.~\eqref{priceeq} in the limit $\nu_{\text{s}}T\rightarrow 0$, and $\nu_{\text{f}}T\gg 1$ together with \eqref{priceequal} yields ($t>T$):
\begin{subeqnarray}
\mathcal L_{\text{s}} x_t &=& \int_0^T \!\!\!\! +\! \int_T^t \textrm d\tau \frac{ m_{\text{s}\tau} }{\sqrt{4\pi D(t-\tau)}} \\
0&=& \int_0^T \!\!\!\! +\! \int_T^t \textrm d\tau \frac{e^{-\nu_{\text{f}}(t-\tau)}}{\sqrt{4\pi D (t-\tau)}}\big[ m_{\text{f}\tau} - 2\lambda_{\text{f}}(x_t-x_\tau) \big] \, . \quad \quad \label{decaymultifreq}
\end{subeqnarray}
Asymptotically ($t\gg T$) the system of Eqs.~\eqref{decaymultifreq} becomes:
\begin{subeqnarray}
\mathcal L_{\text{s}} x_t &=& \int_0^T \frac{ m_{\text{s}\tau} \textrm d\tau}{\sqrt{4\pi D(t-\tau)}} +\int_T^t \frac{m_{\text{s}\tau} \textrm d\tau}{\sqrt{4\pi D(t-\tau)}} \slabel{decm1} \\
0&=&\int_0^t \textrm d\tau \frac{e^{-\nu_{\text{f}}(t-\tau)}}{\sqrt{4\pi D (t-\tau)}}\left[ m_{\text{f}\tau} - 2\lambda_{\text{f}}(x_t-x_\tau) \right] . \quad \quad \slabel{decm2} \label{decaymultifreqsimp}
\end{subeqnarray}
We expect the asymptotic impact decay to be of the form $x_t = x_\infty + B/\sqrt{t}$. In addition Eq.~\eqref{decm2} indicates that $m_{\text{f}t} \sim \dot x_t$. We thus let $m_{\text{s}t}=-m_{\text{f}t}=C/t^{3/2}$. Injecting into Eq.~\eqref{decm1} yields $x_\infty = 0$ (no permanent impact) and:
\begin{eqnarray}
\frac{\mathcal L_{\text{s}} B}{\sqrt{t}} &=&\frac1{\sqrt{t}} \left[ \frac{m_0f_{T}}{\sqrt{4\pi D}} +\frac{C}{\sqrt{\pi DT}} \right] \ , \label{l1Bsqrtt}
\end{eqnarray}
where $ f_T=T$ if $t^\star\ll T$ and $f_T = T^2/(3t^\star)$ if $ t^\star \gg T$.
On the other hand, letting $u=t-\tau$ in Eq.~\eqref{decm2} and using $x_t-x_\tau \approx (t-\tau)\dot x_t$ yields at leading order:
\begin{eqnarray}
0&=&\int_0^\infty \textrm du \,\frac{e^{-\nu_{\text{f}}u}}{\sqrt{u}}\left[ -\frac C{t^{3/2}} + \frac{\lambda_{\text{f}}B u}{t^{3/2}} \right]
= \sqrt{\frac{\pi}{\nu_{\text{f}}t^{3}}}\left[ -C + \frac{\lambda_{\text{f}} B}{\nu_{\text{f}}}\right]\label{} \ ,
\end{eqnarray}
which combined with Eq.~\eqref{l1Bsqrtt} easily leads to the values of $B$ and $C$.\\ %
For the case $m_0\ll J_{\text{s}},J_{\text{f}}$, the calculations are slightly more subtle. Inverting Eq.~\eqref{x1tspecbis} in Laplace space yields:
\begin{eqnarray}
m_{\text{s}t} &=&2{\mathcal L_{\text{s}}} \sqrt{D} \int_0^t \textrm d \tau \, \frac{ \dot x_{\text{s}\tau}}{\sqrt{\pi (t-\tau)} } \label{m1tx1point} \ .
\end{eqnarray}
One can easily check this result by re-injecting Eq.~\eqref{m1tx1point} into Eq.~\eqref{x1tspecbis}. In turn, inverting Eq.~\eqref{x2tspec} is straightforward and yields $
m_{\text{f}t} =({\lambda_{\text{f}}}/{\nu_{\text{f}}})\dot x_{\text{f}t}$.
Injecting $\dot x_{\text{s}t}=\dot x_{\text{f}t}$ into Eq.~\eqref{m1tx1point} and using Eq.~\eqref{ratesequal} yields:
\begin{eqnarray}
m_{\text{s}t} &=&\frac 1{\sqrt{t^\dagger}} \int_0^t \textrm d \tau \, \frac{ m_0- m_{\text{s}\tau}}{\sqrt{\pi(t-\tau)} } \label{} \ ,
\end{eqnarray}
{which can be written as:
\begin{eqnarray}
\int_0^t \textrm d \tau \, { m_{\text{s}\tau}}\Phi(t-\tau) = 2m_0 \sqrt{t}\ , \quad \text{with} \ \Phi(t) := {\delta(t)}{\sqrt{\pi t^\dagger}} +\frac{\theta(t)}{\sqrt{t}} \ . \label{realtolapm1}
\end{eqnarray}
Taking the Laplace transform of Eq.~\eqref{realtolapm1} one obtains $\widehat \Phi(p) \widehat m_{sp}=m_0\sqrt{\pi}/p^{3/2}$ with $\widehat \Phi(p)=\sqrt{\pi t^\dagger}+\sqrt{\pi/p}$,
which in turn yields Eq.~\eqref{m1p}.}
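As a numerical consistency check on Eq.~\eqref{m1p}, the Volterra equation for $m_{\text{s}t}$ above can be time-stepped explicitly and compared with the two asymptotic regimes quoted in the main text; a Python sketch in units where $m_0=t^\dagger=1$ (the discretization below is our own):
\begin{verbatim}
import numpy as np

m0, tdag = 1.0, 1.0
n, dt = 2000, 0.02
tau = dt * np.arange(n + 1)
ms = np.zeros(n + 1)                      # m_s(0) = 0
for i in range(1, n + 1):
    # exact integral of 1/sqrt(pi (t_i - tau)) over each past interval
    w = (2 / np.sqrt(np.pi)) * (np.sqrt(tau[i] - tau[:i])
                                - np.sqrt(tau[i] - tau[1:i + 1]))
    ms[i] = (w * (m0 - ms[:i])).sum() / np.sqrt(tdag)
print(ms[5], 2 * m0 * np.sqrt(tau[5] / (np.pi * tdag)))       # small t
print(ms[-1], m0 * (1 - np.sqrt(tdag / (np.pi * tau[-1]))))   # large t
\end{verbatim}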
\clearpage
\bibliographystyle{iopart-num}
Optical chirality plays an important role in the optical sensing of biomolecules (see, e.g., [1]) and the interaction of light with chiral nanostructures or metamaterials [2]. The nonreciprocity in transmission or circular dichroism measures the chiral effects in the interaction of the optical field with the specimen. However, it is of both fundamental and practical interest to introduce a measure of the chirality of the optical field itself. Recently, Tang and Cohen [3] used a proposal by Lipkin [4] and introduced the local measure of the chirality of a nonparaxial monochromatic field. This will be called here optical \textit{chirality density} in order to emphasize its local nature. Together with the \textit{chirality flow density}, these quantities satisfy the continuity equation, akin to the Poynting theorem. A recent experiment [5] confirmed that this measure of optical chirality is meaningful.
One of the points of [4] was that the chirality density is not simply related to the local ellipticity (i.e., the degree of circular polarization of light in real space) but represents a more sophisticated characteristic which can take arbitrarily large values when divided by the electric field energy density. Here we show that the chirality density is directly related to the \textit{helicity} of light, i.e., the degree of circular polarization in the \textit{momentum} (plane-wave) representation. We derive local and integral values of the chirality and chirality flow of a nonparaxial free field and show that the operators of energy and momentum of light, multiplied by the helicity, correspond to these quantities. This unveils the actual relation of the optical chirality to the polarization and sheds light on the similarity with the Poynting theorem.
\section{Basic equations}
To begin with, we consider monochromatic light in free space, characterized by the real electric and magnetic fields: $\bm{\mathcal E}({\bf r},t)$, $\bm{\mathcal H}({\bf r},t)$, and their standard complex representations: ${\bf E}\left( {\bf r} \right)$, ${\bf H}\left( {\bf r} \right)$, so that $\bm{\mathcal E}({\bf r},t) = {\mathop{\rm Re}\nolimits} \left[ {{\bf E}\left( {\bf r} \right)e^{ - i\omega t} } \right]$, ${\bm {\mathcal H}}({\bf r},t) = {\mathop{\rm Re}\nolimits} \left[ {{\bf H}\left( {\bf r} \right)e^{ - i\omega t} } \right]$. Using Gaussian units, we write the energy density $w$ and Poynting energy flow ${\bf s}$ [6]:
\begin{equation}\label{eqn:1}
w = {g \over 2}\left( {\bm{\mathcal E}^2 + \bm{\mathcal H}^2 } \right)~,
\end{equation}
\begin{equation}\label{eqn:2}
{\bf s} = c\,g \left( \bm{\mathcal E} \times \bm{\mathcal H}\right)~,
\end{equation}
where $g=(4\pi)^{-1}$. The chirality density $\chi$ and the corresponding chirality flow $\bm{\varphi}$ introduced in [3,4] read:
\begin{equation}\label{eqn:3}
\chi = {c \over \omega }\,{g \over 2}\left[ {\bm{\mathcal E} \cdot \nabla \times \bm{\mathcal E} + \bm{\mathcal H} \cdot \nabla \times \bm{\mathcal H}} \right]~,
\end{equation}
\begin{equation}\label{eqn:4}
{\bm \varphi } = {{c^2 } \over \omega }\,{g \over 2}\left[ {\bm{\mathcal E} \times \left( {\nabla \times \bm{\mathcal H}} \right) - \bm{\mathcal H} \times \left( {\nabla \times \bm{\mathcal E}} \right)} \right]~.
\end{equation}
Compared to [3], we multiplied Eqs.~(3) and (4) by a constant $c/\omega$, in order to have the same dimensionality as the energy density and flow. The energy and chirality satisfy the continuity equations [3,6]:
\begin{equation}\label{eqn:5}
{{\partial w} \over {\partial t}} + \nabla \cdot {\bf s} = 0~,
\end{equation}
\begin{equation}\label{eqn:6}
{{\partial \chi} \over {\partial t}} + \nabla \cdot {\bm \varphi} = 0~.
\end{equation}
It is worth noticing that $\chi$ and ${\bm \varphi}$ are time-independent [4] (so that ${{\partial \chi} / {\partial t}} =0$), whereas $w$ and ${\bf s}$ possess oscillating terms [6]. Performing time averaging, we express these quantities via complex fields:
\begin{equation}\label{eqn:7}
\bar w = {g \over 4}\,{\mathop{\rm Re}\nolimits} \left( {{\bf E}^* \cdot {\bf E} + {\bf H}^* \cdot {\bf H}} \right)~,
\end{equation}
\begin{equation}\label{eqn:8}
{\bf \bar s} = c\,{g \over 2}\,{\mathop{\rm Re}\nolimits} \left( {{\bf E}^* \times {\bf H}} \right)~,
\end{equation}
\begin{equation}\label{eqn:9}
\bar \chi = \chi = - {g \over 2}\,{\mathop{\rm Im}\nolimits} \left( {{\bf E}^* \cdot {\bf H}} \right)~,
\end{equation}
\begin{equation}\label{eqn:10}
{\bar {\bm \varphi} } = {\bm \varphi } = c\,{g \over 4}\,{\mathop{\rm Im}\nolimits} \left( {{\bf E}^* \times {\bf E} + {\bf H}^* \times {\bf H}} \right)~.
\end{equation}
Here we used Maxwell equations and wrote Eqs.~(7)--(10) in a form exhibiting a notable symmetry between the energy and chirality. While $w$ and ${\bf s}$ are respectively a scalar and a vector, $\chi$ and ${\bm \varphi}$ are a pseudoscalar and a pseudovector, changing their signs upon mirror reflections. After performing the time-averaging, the continuity equations (5) and (6) reduce to $\nabla \cdot {\bf \bar s} = 0$ and $\nabla \cdot {\bm \varphi} = 0$.
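As an elementary illustration of Eqs.~(7)--(10), the Python sketch below evaluates $\bar w$ and $\chi$ for a single circularly polarized plane wave of unit amplitude propagating along $z$, using the helicity convention ${\bf H} = -i\sigma{\bf E}$ of Eq.~(13) below; in this simplest case $\chi/\bar w = \sigma$.
\begin{verbatim}
import numpy as np

g = 1 / (4 * np.pi)
sigma = +1                              # helicity
E = np.array([1.0, 1j * sigma, 0.0])    # complex E-field amplitude
H = -1j * sigma * E                     # helicity eigenstate
w = g / 4 * np.real(np.vdot(E, E) + np.vdot(H, H))   # Eq. (7)
chi = -g / 2 * np.imag(np.vdot(E, H))                # Eq. (9)
print(chi / w)                          # = sigma
\end{verbatim}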
\section{Helicity representation}
It is known that the chirality of an electromagnetic field is typically associated with the degree of circular polarization. At the same time, this quantity, known as helicity (i.e., the spin state of a photon), is well defined only for plane waves, i.e., in the momentum representation [7]. This is due to the transverse nature of the electromagnetic waves, for which the polarization is orthogonal to its wave vector. Therefore, one can naturally characterize the polarization of each plane wave in the spectrum, but it is difficult to characterize the local spatial polarization of a nonparaxial superposition of multiple plane waves -- in the generic case, the polarization has all three components and exhibits rather complicated features [8]. Recently, analyzing the energy flows in optical fields, we have shown that the dynamical characteristics (such as energy, momentum, and angular momentum) of a generic propagating electromagnetic field acquire a particularly clear and simple form in the helicity momentum representation [9]. Below we apply this representation to Eqs.~(7)--(10).
The Fourier plane-wave spectrum of the complex fields can be written as
\begin{equation}
\label{eqn:11}
\left\{ {\bf E}\left( {\bf r} \right), {\bf H}\left( {\bf r} \right) \right\} = {\alpha \over {2\pi }}\int {\left\{ {\bf \tilde E}\left( {\bf k} \right), {\bf \tilde H}\left( {\bf k} \right) \right\} e^{i{\bf k} \cdot {\bf r}} } d^2 {\bf k}~,
\end{equation}
where the integration is performed over the $(k_x, k_y)$ plane (for simplicity we assume a propagating field and neglect evanescent waves) and the normalization factor $\alpha=\sqrt{2\omega /g}$ is introduced for convenience below. Now each complex Fourier amplitude can be represented as a sum of two circularly-polarized plane waves with well-defined helicities:
\begin{equation}\label{eqn:12}
{\bf \tilde E} = {\bf \tilde E}^+ + {\bf \tilde E}^- ~,~~~{\bf \tilde H} = {\bf \tilde H}^+ + {\bf \tilde H}^-~,
\end{equation}
where the helicities $\sigma=\pm 1$ correspond to the right-hand and left-hand circularly-polarized waves. The basic properties of the circular polarizations yield [7]
\begin{equation}\label{eqn:13}
{\bf \tilde H}^\sigma = - i\sigma {\bf \tilde E}^\sigma~,~~~{\bf \tilde E}^{\sigma *} \times {\bf \tilde E}^\sigma = i\sigma {{\bf k} \over k}\left| {{\bf \tilde E}^\sigma } \right|^2~.
\end{equation}
Substituting the representation (11)--(13) and performing some calculations similar to those in [9,10] for the energy flows, we arrive at
\begin{equation}\label{eqn:14}
\bar w = {\omega \over {\left( {2\pi } \right)^2 }}\sum\limits_{\sigma = \pm 1} {{\mathop{\rm Re}\nolimits} \int {d^2 {\bf k'}\int {d^2 {\bf k}\,e^{i {\bm \kappa} \cdot {\bf r}} \left( {{\bf \tilde E}^{\sigma * \prime} \cdot {\bf \tilde E}^\sigma } \right)} } }~,
\end{equation}
\begin{equation}\label{eqn:15}
{\bf \bar s} = {{c\,\omega } \over {\left( {2\pi } \right)^2 }}\sum\limits_{\sigma = \pm 1} {{\mathop{\rm Im}\nolimits} \int {d^2 {\bf k'}\int {d^2 {\bf k}\,e^{i{\bm \kappa} \cdot {\bf r}} \sigma \left( {{\bf \tilde E}^{\sigma * \prime} \times {\bf \tilde E}^\sigma } \right)} } }~,
\end{equation}
\begin{equation}\label{eqn:16}
\chi = {\omega \over {\left( {2\pi } \right)^2 }}\sum\limits_{\sigma = \pm 1} {{\mathop{\rm Re}\nolimits} \int {d^2 {\bf k'}\int {d^2 {\bf k}\,e^{i{\bm \kappa} \cdot {\bf r}} \sigma \left( {{\bf \tilde E}^{\sigma * \prime} \cdot {\bf \tilde E}^\sigma } \right)} } }~,
\end{equation}
\begin{equation}\label{eqn:17}
{\bm \varphi } = {{c\,\omega } \over {\left( {2\pi } \right)^2 }}\sum\limits_{\sigma = \pm 1} {{\mathop{\rm Im}\nolimits} \int {d^2 {\bf k'}\int {d^2 {\bf k}\,e^{i{\bm \kappa} \cdot {\bf r}} \left( {{\bf \tilde E}^{\sigma * \prime} \times {\bf \tilde E}^\sigma } \right)} } }~,
\end{equation}
where ${\bm \kappa} = {{\bf k} - {\bf k'}}$ and ${\bf \tilde E}^{\sigma * \prime} \equiv {\bf \tilde E}^{\sigma *} \left( {{\bf k'}} \right)$. It is clear that in Eqs.~(14)--(17) the integrands of the energy and chirality quantities differ \textit{only} by the helicity $\sigma$, which flips upon mirror reflections and spatial inversion. Note also that Eqs.~(14)--(17) do \textit{not} contain interference cross-terms mixing different helicities -- this is a remarkable feature of the helicity representation diagonalizing quadratic field forms of Maxwell equations [9,10].
From Eqs.~(14) and (16) it immediately follows that the ratio of the chirality density $\chi$ to the energy density ${\bar w}$ cannot exceed 1 in absolute value:
\begin{equation}\label{eqn:18}
{\chi \over {\bar w}} \, = \, {{\bar w^+ - \bar w^{\,-}} \over {\bar w^+ + \bar w^{\,-}}} \, \in \, \left[ { - 1,1} \right]~.
\end{equation}
Here $\bar w^\sigma$ is the energy density of the $\sigma$-polarized part of the field and the maximal chirality $\chi/{\bar w}=\sigma$ is achieved for a field composed of plane waves with the same helicity $\sigma$. It should be emphasized that in this case the actual local polarization of the optical field in real space (which results from interference of multiple plane waves propagating in different directions) can be rather complicated and far from being circular [8,10].
Our results clarify the examples given in the Supporting Online Materials to [3], where a superposition of two counter-propagating plane waves with different polarization was considered. One can note that the chirality density obtained for such a field represents the sum of energy-weighted helicities of the partial waves. The enormously high response in the so-called superchiral fields was achieved in [3,5], because there the chirality efficiency was determined by the ratio of the chirality density to the \textit{electric} field energy density, which is only a part of the total energy density of the field. (Indeed, in the example [3] of counterpropagating orthogonally-polarized waves, the maxima of the electric energy density correspond to the minima of the magnetic energy density and vice versa.)
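This can be made explicit numerically. In the Python sketch below (our own reconstruction of a counter-propagating geometry: amplitudes $1$ and $a$ and opposite helicities, so that both waves share the same transverse polarization vector $\hat{\bf x}+i\hat{\bf y}$), the ratio $\chi/\bar w$ never exceeds $1$, in agreement with Eq.~(18), whereas the chirality normalized by the electric energy density alone becomes large near the nodes of the electric field.
\begin{verbatim}
import numpy as np

g, a, k = 1 / (4 * np.pi), 0.9, 1.0
z = np.linspace(0, np.pi / k, 1001)
fp = np.exp(1j*k*z) + a * np.exp(-1j*k*z)   # E = (x + iy) fp
fm = np.exp(1j*k*z) - a * np.exp(-1j*k*z)   # H = -i (x + iy) fm
wE, wH = g/2 * np.abs(fp)**2, g/2 * np.abs(fm)**2
chi = g * np.real(np.conj(fp) * fm)         # uniform: g (1 - a^2)
print(np.max(chi / (wE + wH)))              # <= 1, cf. Eq. (18)
print(np.max(chi / (2 * wE)))               # >> 1 near E-field nodes
\end{verbatim}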
Note also that in [5] the evanescent near-field modes were involved, which are beyond the scope of this work.
It is interesting to calculate the integral values of the energy, momentum, chirality, and corresponding `chiral momentum' of the field. The momentum density ${\bf p}$ differs from the energy flow ${\bf \bar s}$ by a factor of $c^2$ (${\bf p} = {\bf \bar s}/c^2$), and we introduce the corresponding \textit{chiral momentum} density: ${\bm \pi } = {\bm \varphi }/c^2$. For propagating beam-like fields the integral quantities per unit $z$-length are determined by the 2D spatial integration over the $(x,y)$ plane [9,11]:
\begin{eqnarray}\label{eqn:19}
\nonumber
W = \int {\bar w} \,d^2 {\bf r}~,~~~{\bf P} = \int {{\bf p}} \,d^2 {\bf r}~,\\
{\rm X} = \int \chi \,d^2 {\bf r}~,~~~
{\bm \Pi } = \int {\bm \pi } \,d^2 {\bf r}~.
\end{eqnarray}
Performing this integration of Eqs.~(14)--(17) and using $\int {e^{i{\bm \kappa} \cdot {\bf r}} } d^2 {\bf r} = \left( {2\pi } \right)^2 \delta ^2 \left( {\bm \kappa} \right)$ along with Eq.~(13), we obtain
\begin{equation}\label{eqn:20}
W = \sum\limits_{\sigma = \pm 1} {\int {\omega \left| {{\bf \tilde E}^\sigma } \right|^2 } }d^2 {\bf k}~,
\end{equation}
\begin{equation}\label{eqn:21}{\bf P} = \sum\limits_{\sigma = \pm 1} {\int {{\bf k} \left| {{\bf \tilde E}^\sigma } \right|^2 } }d^2 {\bf k}~,
\end{equation}
\begin{equation}\label{eqn:22}
{\rm X} = \sum\limits_{\sigma = \pm 1} {\int {\sigma\omega \left| {{\bf \tilde E}^\sigma } \right|^2 } }d^2 {\bf k}~,
\end{equation}
\begin{equation}\label{eqn:23}{\bm \Pi} = \sum\limits_{\sigma = \pm 1} {\int {\sigma{\bf k} \left| {{\bf \tilde E}^\sigma } \right|^2 }} d^2 {\bf k}~.
\end{equation}
These equations allow a clear interpretation in terms of the quantum-like operators of the corresponding dynamical quantities [7,9,12]. Introducing the field state vector $\left| {{\bf \tilde E}^\sigma } \right\rangle = {\bf \tilde E}^\sigma$ and assuming the convolution defined as $\left\langle ~~ \right|\left. ~ \right\rangle = \sum\limits_{\sigma = \pm 1} {\int {d^2 {\bf k}} }$, we see that Eqs.~(20)--(23) can be regarded as the expectation values ${\bf O} = \left\langle {{\bf \tilde E}^\sigma } \right|{\bf \hat O}\left| {{\bf \tilde E}^\sigma } \right\rangle$ of the following operators of energy, momentum, chirality, and chiral momentum:
\begin{eqnarray}\label{eqn:24}
\nonumber
\hat W = \omega~,~~~~{\bf \hat P} = {\bf k}~,\\
\hat {\rm X} = \sigma \omega~,~~~~{\bf \hat \Pi } = \sigma {\bf k}~.
\end{eqnarray}
Thus, using the operator formalism, the 4-pseudovector of the chirality and chiral momentum, $\left(\hat{\rm X},\hat{\bm \Pi}\right)$, represents just the usual energy-momentum $\left( {\omega ,{\bf k}} \right)$ multiplied by the pseudoscalar of helicity $\sigma$. This remarkably simple result unveils the actual physical meaning of the measure of chirality introduced in [3-5] and explains its connection with the polarization state and Poynting theorem. Importantly, the chiral momentum is closely related to the \textit{spin angular momentum} of light, which is represented by the operator ${\bf \hat \Sigma } = \sigma {\bf k}/k$ [9]. Hence, for monochromatic fields the chirality (22) and chiral momentum (23) represent, up to constant factors, the averaged helicity of the field and its spin angular momentum:
\begin{equation}\label{eqn:25}
{\rm X} = \omega \langle \sigma \rangle~,~~~~{\bm \Pi} = \frac{\omega}{c} {\bf \Sigma }~.
\end{equation}
\section{Conclusion}
To summarize, we have examined the measure of the optical chirality for a free monochromatic optical field. Using the momentum representation (Fourier decomposition), we uncovered the close connections of the chirality density and chirality flow to the polarization helicity, energy density, and Poynting energy flow. The ratio of the chirality density to the energy density reaches its maximum absolute value for the fields composed of plane waves with the same well-defined helicity. At the same time, the local spatial polarization structure resulting from the interference of partial plane waves can be rather complicated. Finally, we have determined the integral values of the chirality and the corresponding chiral momentum and have revealed that the usual energy-momentum operator multiplied by the helicity underlies these quantities.
\textit{Note added.--} After submission of this work, a related paper [13] came to our attention. Similar quantities and conservation laws are considered there, but in the context of the Riemann-Silberstein vectors, which makes it possible to deal with helicities in the coordinate representation [7b].
We acknowledge support from the European Commission (Marie Curie Action), Science Foundation Ireland (Grant No. 07/IN.1/I906),
LPS, NSA, ARO, DARPA, AFOSR, NSF grant No. 0726909, JSPSRFBR contract No. 09-02-92114, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the Funding Program for Innovative R\&D on Science and Technology (FIRST).
First-order methods in convex optimization have attracted significant attention in statistical learning in recent years. They are appealing for many learning problems, such as LASSO regression and matrix completion, which have diverse applications in analyzing large-scale biological systems and high-dimensional biomedical measurement profiles~\cite{Zaslavskiy09, Klau09}. These first-order optimization methods scale well with the current ``big'' data in many biomedical applications because they have a low computational burden per iteration and are easy to implement on parallel computational resources.
In this paper, we focus on the Frank-Wolfe algorithm, which is also known as the conditional gradient method. One of its advantages is that at each iteration step it decomposes the complex constrained optimization problem into sub-problems which are easier to solve. Additionally, it is a projection free algorithm, which avoids solving the projection problem for constrained optimization as done in many other algorithms. The original Frank-Wolfe algorithm, developed for smooth convex optimization on a polytope, dates back to Frank and Wolfe~\cite{FWold}. Dunn and Harshbarger~\cite{FWg1, FWg2} have generalized the algorithm to solve the optimization for more general smooth convex objective functions over bounded convex feasible regions. Recently, researchers~\cite{BCFW} have proposed stochastic optimization ideas to scale up the original Frank-Wolfe algorithm.
Based on these previous seminal efforts, our main contribution in this paper is that we generalize the stochastic block coordinate Frank-Wolfe algorithm proposed in~\cite{BCFW}, previously with \emph{block separable constraints}, to solve more general optimization problems with \emph{any convex compact constraints}, including the problems with block inseparable constraints. Such a generalized algorithm has a broader range of biomedical applications, including biological network alignment. We prove the convergence of our generalized stochastic block coordinate Frank-Wolfe algorithm and evaluate the algorithm performance for querying conserved functional protein complexes in real-world protein-protein interaction (PPI) networks.
In the following sections, we first describe the model formulation of the optimization problems that we are generally interested. Specifically, to address potential difficulty from more general convex compact constraints, we derive a new stochastic block coordinate Frank-Wolfe algorithm and provide the convergence proof. Then, we formulate the IsoRank problem for network alignment~\cite{isorank} as a convex programming problem and develop a SBCFW-IsoRank algorithm based on our new stochastic block coordinate Frank-Wolfe algorithm. At last, in our experiments, we show the efficiency and effectiveness of our algorithm for solving the PPI network query problem
\section{Stochastic Block Coordinate Descent Frank-Wolfe Algorithm} \label{sec2}
Consider the minimization problem:
\begin{equation}\label{eq1}
\begin{aligned}
\textup{min:}& \ \ f(\mathbf{x})\\
s.t. & \ \ \mathbf{x} \in \mathfrak{D},
\end{aligned}
\end{equation}
where the objective function $f(\mathbf{x})$ is convex and differentiable on $\mathbf{R}^N$, and the domain $\mathfrak{D}$ is a compact convex subset of a vector space. We assume, without loss of generality, that the set of optimal solutions $\mathbf{x}^*$ to the above problem is non-empty and bounded.
Assume that we can decompose the solution space $\mathbf{R}^N$ into $n$ equal-size subspaces:
\begin{equation}
\mathbf{R}^N = \overset{n}{\underset{i=1}{\oplus}} \mathbf{R}^{N_i}, \ \ N = \sum_{i=1}^{n} N_i,
\end{equation}
where $N_1=\cdots=N_i=\cdots=N_n$ and $\mathbf{R}^{N_i}$ denotes the $i$th equal-size subspace along the corresponding coordinates. This decomposition enables scalable stochastic optimization algorithms. Based on this decomposition, we introduce matrices $U_i$, which sum up to an identity matrix $I_N= \sum_{i=1}^n U_i$,
and $U_i$ is an $N\times N$ matrix with $U_i(t,t) = 1$ for coordinates $t\in \mathbf{R}^{N_i}$ on its diagonal and all other entries equal to zero. In typical stochastic optimization algorithms~\cite{sopt}, instead of computing the full gradient $\nabla f(\mathbf{x})$ at each iteration, the \emph{partial gradient} of $f(\mathbf{x})$ on a randomly selected subspace $\mathbf{R}^{N_i}$ is used:
\begin{equation}
\nabla_i f(\mathbf{x}) = U_i\nabla f(\mathbf{x}).
\end{equation}
Now we generalize the previous stochastic block coordinate Frank-Wolfe algorithm derived in~\cite{BCFW} to solve more general optimization problems with any compact convex constraints $\mathfrak{D}$. The new generalized stochastic block coordinate Frank-Wolfe (SBCFW) algorithm is illustrated in \textbf{Algorithm 1}. In the pseudocode, the operation $i = \mathfrak{C}_k$ randomly selects one of the $n$ equal-size subspaces to update the partial gradient at each iteration with the same probability. \textcolor{black}{In addition, $U_j \times \mathbf{s} = U_j \times \mathbf{x}^k$ denotes the condition that the elements of the $j$th block of $\mathbf{s}$ equal the elements of the $j$th block of $\mathbf{x}^k$.}
\begin{table}[ht]
\begin{center}
\begin{tabular}{ lc }
\hline
\textbf{Algorithm1: } Generalized SBCFW Algorithm\\
\hline
1 \ Let $\mathbf{x}^0 \in \mathfrak{D}$, $k=0$.\\
2 \ \textbf{While} Stopping criteria not satisfied \textbf{do}\\
3 \ \ \ \textcolor{black}{Randomly divide $\mathbf{R}^N$ into $n$ blocks $\mathbf{R}^N = \overset{n}{\underset{i=1}{\oplus}} \mathbf{R}^{N_i}$};\\
4 \ \ \ Choose $i = \mathfrak{C}_k$;\\
5 \ \ \ Find $\mathbf{s}^k_i$ such that\\
6 \ \ \ \ \ \ \ \textcolor{black}{$\mathbf{s}^k_i := arg \underset{\begin{subarray}{c} U_j \times \mathbf{s} = U_j \times \mathbf{x}^k, \forall j\ne i; \\ \mathbf{s}\in\mathfrak{D} \end{subarray}}{\textup{min}} \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}-\mathbf{x}^k)$};\\
7 \ \ \ Determine the step size $\gamma$\\
8 \ \ \ \ \ \ \ \textcolor{black}{$\gamma:= arg \underset{\gamma\ \in [0,1]}{\textup{min}} f((1- \gamma)\mathbf{x}^k + \gamma \mathbf{s}^k_i)$};\\
9 \ \ \ Update $\mathbf{x}^{k+1} := (1- \gamma)\mathbf{x}^k + \gamma \mathbf{s}^k_i$;\\
10\ \ \ $k=k+1$;\\
11\ \textbf{Endwhile} \\
\hline
\end{tabular}
\end{center}
\end{table}
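To make \textbf{Algorithm 1} concrete, the Python sketch below (our own toy example, not taken from~\cite{BCFW}) runs the generalized SBCFW iteration on a quadratic $f(\mathbf{x})=\frac12\mathbf{x}^TQ\mathbf{x}$ over the unit simplex, a constraint set that is \emph{not} block separable. The subproblem at line 6 keeps the mass outside block $i$ fixed and places the in-block mass on the coordinate with the smallest partial gradient, and the line search at line 8 is available in closed form for quadratics.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n_blocks = 60, 6
A = rng.standard_normal((N, N))
Q = A.T @ A                                  # f(x) = 0.5 x^T Q x
x = np.full(N, 1.0 / N)                      # feasible simplex point

for k in range(2000):
    blocks = np.array_split(rng.permutation(N), n_blocks)  # line 3
    blk = blocks[rng.integers(n_blocks)]                   # line 4
    grad_blk = Q[blk] @ x                                  # partial gradient
    s = x.copy()                                           # lines 5-6: keep
    s[blk] = 0.0                                           #  other blocks,
    s[blk[np.argmin(grad_blk)]] = x[blk].sum()             #  move block mass
    d = s - x
    dQd = d @ Q @ d                                        # lines 7-8:
    gamma = 0.0 if dQd <= 0 else min(1.0, max(0.0, -(x @ Q @ d) / dQd))
    x = x + gamma * d                                      # line 9

print(0.5 * x @ Q @ x)    # objective decreases toward the simplex minimum
\end{verbatim}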
Note that our generalized SBCFW algorithm is similar to the algorithm in~\cite{BCFW}, which aims to solve optimization problems with block separable constraints and has the sub-linear convergence property. However, our algorithm provides a more general framework, which can handle any convex and compact constraints, no matter whether they are block separable or not. Because the setup of our algorithm is more general, without any specific constraint structure, it is difficult to obtain theoretical convergence-rate guarantees. In this paper, we only provide the proof that our SBCFW algorithm converges to the global optimum. The convergence guarantee of the generalized SBCFW algorithm is provided by \textit{\textbf{Theorem 1}} below, which is based on \textit{\textbf{Lemma 1}}:
\noindent \textit{\textbf{Lemma 1: }} At each iteration of the SBCFW algorithm, the following inequality holds
\begin{equation}
\nabla f(\mathbf{x}^k)^T\left ( E_i[\mathbf{s}^k_i] - \mathbf{x}^k \right ) \leq 0,
\end{equation}
where $E_i[\mathbf{s}^k_i]$ is the expectation of $\mathbf{s}^k_i$ with respect to the random selection of the $i$th coordinate block to the corresponding subspace.
\begin{proof}
Assuming at the $k$th iteration, we solve the following optimization problem:
\begin{equation}\label{opt1}
\begin{aligned}
\textup{min:}& \ Z_k^i(\mathbf{s}):= \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}-\mathbf{x}^k)\\
s.t.& \ U_j \times \mathbf{s} = U_j \times \mathbf{x}^k, \ \forall j\ne i,\\
& \ \mathbf{s} \in \mathfrak{D}.
\end{aligned}
\end{equation}
The solution to~\eqref{opt1} is $\mathbf{s}^k_i$. With $\mathbf{s}^k_i$ achieving the minimum of~\eqref{opt1}, we have
\begin{equation}
Z_k^i(\mathbf{s}^k_i) \leq Z_k^i(\mathbf{x}^k) = \nabla_i f(\mathbf{x}^k)^T(\mathbf{x}^k-\mathbf{x}^k) = 0.
\end{equation}
Therefore,
\begin{equation}
Z_k^i(\mathbf{s}^k_i)=\nabla_i f(\mathbf{x}^k)^T(\mathbf{s}^k_i-\mathbf{x}^k) \leq 0.
\end{equation}
Taking expectation on both sides of the above inequality with respect to random blocks, we obtain
\begin{equation}
\begin{aligned}
&\quad \quad \quad \quad \quad E_i \left [ \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}^k_i-\mathbf{x}^k) \right ] & \leq 0 \\
\Rightarrow & \ \quad \quad \quad \quad \quad \dfrac{1}{n} \sum_i \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}^k_i-\mathbf{x}^k) & \leq 0 \\
\Rightarrow & \ \left ( \sum_i \nabla_i f(\mathbf{x}^k) \right )^T \dfrac{1}{n} \left ( \sum_i (\mathbf{s}^k_i-\mathbf{x}^k) \right ) &\leq 0 \\
\Rightarrow & \ \ \left ( \sum_i \nabla_i f(\mathbf{x}^k) \right )^T \left ( \dfrac{1}{n} \sum_i \mathbf{s}^k_i-\mathbf{x}^k \right ) &\leq 0 \\
\Rightarrow & \ \quad \quad \quad \quad \nabla f(\mathbf{x}^k)^T\left ( E_i[\mathbf{s}^k_i] - \mathbf{x}^k \right ) &\leq 0.
\end{aligned}
\end{equation}
The inequality in the third line can be derived based on the fact that $\mathbf{s}^k_i - \mathbf{x}^k$ is a vector with only its $i$th coordinate block having non-zero values and the other parts being all zeros. With that, the summation in the second line can be written as the inner product between vectors $\sum_i \nabla_i f(\mathbf{x}^k)$ and $\sum_i (\mathbf{s}^k_i-\mathbf{x}^k)$.
\end{proof}
We now analyze the convergence of the new SBCFW algorithm, based on \textit{\textbf{Lemma 1}}, by considering two cases. The first case is when
\begin{equation}\label{station}
\nabla f(\mathbf{x}^k)^T\left ( E_i[\mathbf{s}^k_i] - \mathbf{x}^k \right ) = 0.
\end{equation}
This simply means that $\mathbf{x}^k$ is a stationary point. Because the original objective function $f(\mathbf{x})$ is convex, we can conclude that $\mathbf{x}^k$ is the global minimum. Another case is when
\begin{equation}\label{station}
\nabla f(\mathbf{x}^k)^T\left ( E_i[\mathbf{s}^k_i] - \mathbf{x}^k \right ) < 0,
\end{equation}
indicating that $E_i[\mathbf{s}^k_i] - \mathbf{x}^k$ is a descent direction by definition~\cite{ddirect}. Hence, moving along this direction brings the iterate closer to the global minimum in expectation. Furthermore, we compute the optimal step size at each iteration, so the objective function values are guaranteed to be non-increasing. With that, we present \textit{\textbf{Theorem 1}} as follows:
\noindent \textit{\textbf{Theorem 1: }} The sequence $\left \{ f(\mathbf{x}^1), f(\mathbf{x}^2), ..., f(\mathbf{x}^k), ... \right \}$ generated by the SBCFW algorithm is non-increasing
\begin{equation}
f(\mathbf{x}^1) \geq f(\mathbf{x}^2) \geq ... \geq f(\mathbf{x}^k) \geq f(\mathbf{x}^{k+1}), \ \ k \rightarrow \infty.
\end{equation}
\section{Biological Network Alignment}
\subsection{Optimization Model Formulation}\label{3.a}
In this section, we re-formulate the involved optimization problem for the network alignment algorithm---IsoRank~\cite{isorank} to address the potential computational challenges of aligning multiple large-scale networks. The new formulation has the same mathematical programming structure as the problem~\eqref{eq1}.
Let $G_a$ and $G_b$ be two biological networks to align. The two networks have $N_a$ and $N_b$ vertices, respectively. We define $B\in \mathbf{R}^{(N_a\times N_b)\times(N_a\times N_b)}$ as the Cartesian product network from $G_a$ and $G_b$: $B=G_a\otimes G_b$. Denote the all one vector $\mathbf{1}\in \mathbf{R}^{N_a\times N_b}$ and
\begin{equation}
\bar{B}=B\times \textup{Diag}(B\mathbf{1})^{-1}, \label{eq:rdw}
\end{equation}
where $\textup{Diag}(B\mathbf{1})$ can be considered as a degree matrix with $B\mathbf{1}$ on its diagonal and all the other entries equal to zero. $\bar{B}$ contains the transition probabilities for the underlying Markov random walk in IsoRank~\cite{isorank}. It is well known that if $G_a$ and $G_b$ are connected networks and neither of them is a bipartite graph, then the corresponding Markov chain represented by $\bar{B}$ is irreducible and ergodic, and there exists a unique stationary distribution for the underlying state transition probability matrix $\bar{B}$. The goal of the IsoRank algorithm is to find the maximal right eigenvector of the matrix $\bar{B}$: $\bar{B}\mathbf{x}=\mathbf{x}$ and $\mathbf{1}^T \mathbf{x}= 1, \ \mathbf{x} \geq 0$, which corresponds to the best correspondence relationships between vertices across the two networks. When the two networks are of reasonable size, spectral methods as well as power methods can be implemented to solve the IsoRank problem~\cite{isorank}. However, with large-scale networks, the transition probability matrix $\bar{B}$ can be extremely large (quadratic in $N_a\times N_b$) and spectral methods can be computationally prohibitive. In this paper, we re-formulate this search for the maximal right eigenvector as a constrained optimization problem:
\begin{equation}\label{obj}
\begin{aligned}
\textup{min:}\ & f(\mathbf{x}) := \dfrac{1}{2}\left \| \bar{B}\mathbf{x} - \mathbf{x} \right \|^2 \\
s.t.\ & \mathbf{1}^T\mathbf{x} = 1, \ \mathbf{x} \geq 0. \quad\quad\quad (\mathfrak{H})
\end{aligned}
\end{equation}
After expanding the objective function, we obtain $f(\mathbf{x}) = \dfrac{1}{2}\mathbf{x}^TM\mathbf{x}$, where $M= \bar{B}^T\bar{B}-\bar{B}-\bar{B}^T+I$. Therefore the equivalent optimization problem is
\begin{equation}\label{obj1}
\begin{aligned}
\textup{min:}\ & f(\mathbf{x}) := \dfrac{1}{2}\mathbf{x}^TM\mathbf{x} \\
s.t.\ & \mathbf{1}^T\mathbf{x} = 1, \ \mathbf{x} \geq 0. \quad\quad\quad (\mathfrak{H})
\end{aligned}
\end{equation}
The gradient of $f(\mathbf{x})$ is easily computed: $\nabla f(\mathbf{x}) = M\mathbf{x}$. Furthermore, the Hessian matrix of $f(\mathbf{x})$ is $M$, which is positive semi-definite, as proven by \textit{\textbf{Lemma 2}}:
\noindent \textit{\textbf{Lemma 2}}: $M= \bar{B}^T\bar{B}-\bar{B}-\bar{B}^T+I$ is positive semi-definite.
\begin{proof}
$M$ can be written as $M=(\bar{B}-I)^T(\bar{B}-I)$, which proves the lemma.
\end{proof}
With \textit{\textbf{Lemma 2}}, it is obvious that the objective function $f(\mathbf{x})$ is convex. Also, the constraint set $\mathfrak{H} = \{\mathbf{x} | \mathbf{x}^T\textbf{1}=1, \mathbf{x}\geq 0\}$ is a unit simplex, which is convex and compact. Hence, the IsoRank problem~\eqref{obj} has the same problem structure as~\eqref{eq1}, and our generalized SBCFW algorithm can be used to solve~\eqref{obj1} with much better scalability and efficiency due to the randomized partial gradient computation at each iteration. As in~\cite{isorank}, in addition to network topology, we can incorporate other information in the formulation for more biologically significant alignment results by replacing $\bar{B}$ with $\hat{B} = \alpha\bar{B}+(1-\alpha)\bar{S}\mathbf{1}^T, \ \alpha\in[0,1]$. Here $\bar{S} = S/ | S|$ is a normalized similarity vector of size $N_a\times N_b$, concatenated from the doubly indexed similarity estimates $S([u,v])$ based on the sequence or function similarity between vertices $u$ in $G_a$ and $v$ in $G_b$.
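To make the construction concrete, the following is a minimal sketch (an illustration only, not an accompanying implementation) of \eqref{eq:rdw} and the objective of \eqref{obj}. It assumes SciPy is available, that the adjacency matrices are symmetric with no isolated vertices, and uses the Kronecker product for $G_a\otimes G_b$; all function names are ours.
\begin{verbatim}
# Illustrative sketch: build Bbar = B Diag(B 1)^{-1}, evaluate f(x).
import numpy as np
import scipy.sparse as sp

def build_Bbar(Ga, Gb):
    # Ga, Gb: sparse symmetric adjacency matrices of G_a and G_b
    B = sp.kron(Ga, Gb, format="csr")          # product network
    deg = np.asarray(B.sum(axis=1)).ravel()    # the vector B 1
    return B @ sp.diags(1.0 / deg)             # normalized transitions

# Bhat = alpha * Bbar + (1 - alpha) * outer(Sbar, ones) follows similarly.

def f(Bbar, x):
    r = Bbar @ x - x                           # E x, with E = Bbar - I
    return 0.5 * float(r @ r)                  # (1/2)||Bbar x - x||^2
\end{verbatim}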
\subsection{SBCFW-IsoRank Algorithm}
As shown in Section~\ref{3.a}, $f(\mathbf{x})$ in~\eqref{obj} is convex and the constraint set $\mathfrak{H}$ in~\eqref{obj1} is a convex compact set. Therefore, we can apply the generalized SBCFW algorithm proposed in Section~\ref{sec2} to solve the corresponding optimization problem~\eqref{obj1}. The detailed algorithm is illustrated in~\textbf{Algorithm~2}. Here we want to emphasize that, in each iteration of our SBCFW-IsoRank algorithm, both the time complexity and the space complexity are $O\left (\dfrac{N^2}{n} \right )$, which is achieved by tracking the vectors $\mathbf{p}_k=E\mathbf{x}^k$ and $\mathbf{q}_k=E\mathbf{s}^k_i$ at steps 7 and 10 of each iteration in~\textbf{Algorithm~2}, respectively, where $E=\bar{B}-I$ so that $M=E^TE$. The stopping criterion is $\left \| \bar{B}\mathbf{x} - \mathbf{x} \right \| \leq \xi \left \|\mathbf{x} \right \|$, which can be efficiently checked through
\begin{equation}
\left \| \bar{B}\mathbf{x} - \mathbf{x} \right \|^2 = \mathbf{x}^TM\mathbf{x} = (E\mathbf{x})^TE\mathbf{x} = \mathbf{p}_k^T\mathbf{p}_k;
\end{equation}
this check is performed in line 11 of the SBCFW-IsoRank algorithm.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ lc }
\hline
\textbf{Algorithm 2: } SBCFW-IsoRank Algorithm \\
\hline
\textbf{Input:} $\xi$, $n$ and $E$\\
1 \textbf{for} $k = 0, ..., \infty$ \textbf{do}\\
2 \ \ randomly divide $\mathbf{R}^N$ into $n$ equal-size parts\\
3 \ \ choose $i = \mathfrak{C}_k$\\
4 \ \ \textbf{if}($k==0$)\\
5 \ \ \ \ initialize the $i$th block of $\mathbf{x}^0$ with $\dfrac{n}{N}$\\
6 \ \ \textbf{endif}\\
7 \ \ compute $\mathbf{p}_k=E\mathbf{x}^k$ and $\nabla_i f(\mathbf{x}^k) = [E^T]_i\mathbf{p}_k$\\
8 \ \ solve the sub-problem: \\
9 \ \ \ \ \ \ \ $\mathbf{s}^k_i := arg \underset{\begin{subarray}{c} U_j\times\mathbf{s} = U_j\times\mathbf{x}^k, \ \forall j\ne i;\\ \mathbf{s}\in\mathfrak{H}, \end{subarray}}{\textup{min}} \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}-\mathbf{x}^k)$\\
10 \ compute $\mathbf{q}_k=E\mathbf{s}^k_i$\\%=E\mathbf{x}^{k-1}+\gamma^kE(\mathbf{s}_i^{k-1}-\mathbf{x}^{k-1})$
11 \ \textbf{if} $\mathbf{p}_k^T\mathbf{p}_k \leq \xi^2 \left \| \mathbf{x}^k \right \|^2$\\
12 \ \ \ \ \textbf{break};\\
13 \ \textbf{endif}\\
14 \ compute the step size $\gamma_k^*$:\\
15
\ \ \ \ \ \ \ $\gamma_k^*=\left\{\begin{matrix}
\textup{min}\left \{\hat{\gamma}, 1\right \} & \hat{\gamma} > 0, \hat{\gamma} = \dfrac{\mathbf{p}_k^T\mathbf{p}_k-\mathbf{p}_k^T\mathbf{q}_k}{\mathbf{p}_k^T\mathbf{p}_k-2\mathbf{p}_k^T\mathbf{q}_k+\mathbf{q}_k^T\mathbf{q}_k}\\
0 & o.w.
\end{matrix} \right.$\\
16 \ $\mathbf{x}^{k+1} = \mathbf{x}^k + \gamma_k^*(\mathbf{s}_i^k - \mathbf{x}^k)$\\
17 \textbf{endfor}\\
\textbf{Output:} $\mathbf{x}^k$\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Initialization}
In order to guarantee both the time and space complexity to be $O (\frac{N^2}{n} )$ at each iteration, we cannot initialize the algorithm with a randomly generated $\mathbf{x}^0$, since that would require a multiplication of a matrix of size $N\times N$ and a vector of size $N$, whose time and space complexity would be $O(N^2)$. We propose to initialize $\mathbf{x}^0$ in the following way: first, randomly divide $\mathbf{R}^N$ into $n$ parts with equal sizes and randomly pick the $i$th part. Then, we initialize every element in the $i$th part with $\frac{n}{N}$, which puts $\mathbf{x}^0$ in the feasible space defined by the constraint set $\mathfrak{H}$. Using the above initialization strategy, the time and space complexity for computing $\nabla_i f(\mathbf{x}^0)$, $\mathbf{p}_0=E\mathbf{x}^0$ and $\mathbf{q}_0=E\mathbf{s}^0_i$ are all under $O (\frac{N^2}{n})$, which is easy to verify.
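A minimal sketch of this initialization is given below; the NumPy bookkeeping (a random permutation split into $n$ parts) is our illustrative choice, assuming for simplicity that $n$ divides $N$.
\begin{verbatim}
# Illustrative sketch: a feasible, block-sparse x^0 on the unit simplex.
import numpy as np

def init_x0(N, n, rng=np.random.default_rng(0)):
    blocks = np.array_split(rng.permutation(N), n)  # random equal parts
    x0 = np.zeros(N)
    x0[blocks[rng.integers(n)]] = n / N             # i-th block gets n/N
    return x0                                       # entries sum to 1
\end{verbatim}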
\subsection{Algorithm to Solve the Sub-problem}
As shown in the SBCFW-IsoRank algorithm, at each iteration we need to solve a sub-problem. Fortunately, the sub-problem can be solved in a straightforward manner for the optimization problem~\eqref{obj1}. For the following sub-problem at iteration $k$:
\begin{equation}\label{submin}
\begin{aligned}
\textup{min:}\ & \nabla_i f(\mathbf{x}^k)^T(\mathbf{s}-\mathbf{x}^k)\\
s.t.\ & \mathbf{s}\in\mathfrak{H}, \\
&U_j\times \mathbf{s}= U_j\times \mathbf{x}^k, \ \forall j\ne i,
\end{aligned}
\end{equation}
the optimal solution is $\mathbf{s}^* = \mathbf{x}^k - U_i\mathbf{x}^k + L\mathbf{e}_j$, where $\mathbf{e}_j$ is an all-zero vector except that its $j$th element is 1, and $L = \sum_{l\in \mathbf{R}^{N_i}} \mathbf{x}^k(l)$ is the total mass of the $i$th block. Here, $j$ is the index of the coordinate with the smallest value in the $i$th block of $\nabla_i f(\mathbf{x}^k)$:
\begin{equation}
j = arg\underset{l\in \mathbf{R}^{N_i}}{\textup{min:}}\ [\nabla_i f(\mathbf{x}^k)](l).
\end{equation}
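In code, this closed-form solution amounts to moving all of the $i$th block's mass onto the block's coordinate of smallest partial gradient. The following is a minimal sketch (an illustration only; \texttt{idx} denotes the index set of the $i$th block and the function name is ours):
\begin{verbatim}
# Illustrative sketch of the sub-problem solution s* = x - U_i x + L e_j.
import numpy as np

def solve_subproblem(x, grad_i, idx):
    L = x[idx].sum()                 # L: total mass of the i-th block
    s = x.copy()
    s[idx] = 0.0                     # subtract U_i x
    s[idx[np.argmin(grad_i)]] = L    # add L e_j at smallest-gradient entry
    return s
\end{verbatim}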
\subsection{Optimal Step Size}
To obtain the optimal step size at each iteration, we need to solve the following optimization problem:
\begin{equation}\label{qf}
\begin{aligned}
\textup{min:}\ & \left (\mathbf{x}^k + \gamma (\mathbf{s}^k_i - \mathbf{x}^k ) \right )^TM\left (\mathbf{x}^k + \gamma (\mathbf{s}^k_i - \mathbf{x}^k ) \right )\\
s.t. \ & 0\leq \gamma \leq 1,
\end{aligned}
\end{equation}
which is a quadratic function of $\gamma$. If $\hat{\gamma}=\dfrac{\mathbf{p}_k^T\mathbf{p}_k-\mathbf{p}_k^T\mathbf{q}_k}{\mathbf{p}_k^T\mathbf{p}_k-2\mathbf{p}_k^T\mathbf{q}_k+\mathbf{q}_k^T\mathbf{q}_k}>0$, which is the unconstrained minimizer of~\eqref{qf}, then the optimal solution $\gamma^*$ is the minimum of 1 and $\hat{\gamma}$; otherwise $\gamma^*=0$. The definitions of $\mathbf{p}_k$ and $\mathbf{q}_k$ are given in lines 7 and 10 of~\textbf{Algorithm~2}.
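Given $\mathbf{p}_k$ and $\mathbf{q}_k$, the step size costs only a few inner products of length $N$. A minimal sketch (ours; it guards against the degenerate case $\mathbf{p}_k=\mathbf{q}_k$, for which any step leaves the objective unchanged):
\begin{verbatim}
# Illustrative optimal step size for the quadratic line search.
def step_size(p, q):                       # p, q: NumPy vectors
    den = p @ p - 2.0 * (p @ q) + q @ q    # ||q - p||^2 >= 0
    if den <= 0.0:
        return 0.0                         # p == q: objective is flat
    ghat = (p @ p - p @ q) / den           # unconstrained minimizer
    return min(ghat, 1.0) if ghat > 0.0 else 0.0
\end{verbatim}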
\subsection{Time and Space Complexity}\label{IIIC}
At each iteration, the most computationally expensive operations are the updates of $\mathbf{p}_k$ and $\mathbf{q}_k$ (lines 7 and 10 of SBCFW-IsoRank) and the calculation of the partial gradient $\nabla_i f(\mathbf{x}^k)$ (line 7 of SBCFW-IsoRank).
The calculations of $\mathbf{p}_k$ and $\mathbf{q}_k$ are similar. From lines 7 and 16 of \textbf{Algorithm 2}, we know
\begin{equation}
\begin{aligned}
\mathbf{p}_k&=E\mathbf{x}^k\\
&=E\left ( \mathbf{x}^{k-1} + \gamma^{k-1}(\mathbf{s}^{k-1}-\mathbf{x}^{k-1}) \right )\\
&=\mathbf{p}_{k-1} + \gamma^{k-1}E(\mathbf{s}^{k-1}-\mathbf{x}^{k-1}).
\end{aligned}
\end{equation}
The second equality follows by substituting for $\mathbf{x}^k$ the update in line 16 of our SBCFW-IsoRank algorithm. Because we keep track of $\mathbf{p}_k$ at each iteration, we do not need to recompute $\mathbf{p}_{k-1}$. Therefore, we only need to compute $E(\mathbf{s}^{k-1}-\mathbf{x}^{k-1})$, which takes $O\left (\dfrac{N^2}{n} \right )$ operations because $(\mathbf{s}^{k-1}-\mathbf{x}^{k-1})$ is a vector with only its $i$th block non-zero and all other parts zero. Additionally, the memory consumption is also $O\left (\dfrac{N^2}{n} \right )$ by a similar argument. Similarly, we can compute $\mathbf{q}_k$
\begin{equation}
\begin{aligned}
\mathbf{q}_k&=E\mathbf{s}^k\\
&=E\left ( \mathbf{x}^{k} + (L\mathbf{e}_j-U_i\mathbf{x}^{k}) \right )\\
&=\mathbf{p}_{k} + E(L\mathbf{e}_j-U_i\mathbf{x}^{k}),
\end{aligned}
\end{equation}
where $(L\mathbf{e}_j-U_i\mathbf{x}^{k})$ is also a vector with only the $i$th block having non-zero values. Therefore, the computation of $\mathbf{q}_k$ also takes $O\left (\dfrac{N^2}{n} \right )$ operations and consumes $O\left (\dfrac{N^2}{n} \right )$ memory.
The equation of calculating $\nabla_i f(\mathbf{x}^k)$ is as follows:
\begin{equation}
\nabla_i f(\mathbf{x}^k) = [E^T]_i \mathbf{p}_k,
\end{equation}
where the operator $[\cdot]_i$ extracts the rows of the matrix corresponding to the $i$th coordinate block. Hence, it is easy to verify that the time complexity and space complexity of computing $\nabla_i f(\mathbf{x}^k)$ are both $O\left (\dfrac{N^2}{n} \right )$.
In summary, based on the above analyses, both the time complexity and space complexity of our SBCFW-IsoRank at each iteration are $O\left (\dfrac{N^2}{n} \right )$.
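Putting the pieces together, one run of SBCFW-IsoRank with the incremental updates of $\mathbf{p}_k$ and $\mathbf{q}_k$ can be sketched as follows. This is an illustration rather than a reference implementation: $E=\bar{B}-I$ is assumed stored so that column slices are cheap (a dense array here, to make the $O(N^2/n)$ cost of each sliced product explicit), and all names are ours.
\begin{verbatim}
# Illustrative SBCFW-IsoRank main loop with O(N^2/n) per-iteration cost.
import numpy as np

def sbcfw_isorank(E, n, xi, max_iter=100000, seed=0):
    rng = np.random.default_rng(seed)
    N = E.shape[0]
    x = np.zeros(N)                                # block-sparse x^0
    idx0 = np.array_split(rng.permutation(N), n)[rng.integers(n)]
    x[idx0] = n / N
    p = E[:, idx0] @ x[idx0]                       # p_0 = E x^0, O(N^2/n)
    for _ in range(max_iter):
        idx = np.array_split(rng.permutation(N), n)[rng.integers(n)]
        grad_i = E[:, idx].T @ p                   # [E^T]_i p_k, O(N^2/n)
        if p @ p <= xi**2 * (x @ x):               # ||Bbar x - x||<=xi||x||
            break
        delta = -x[idx]                            # (s_i^k - x^k) on block i,
        delta[np.argmin(grad_i)] += x[idx].sum()   # i.e. L e_j - U_i x^k
        Ed = E[:, idx] @ delta                     # E(s - x), O(N^2/n)
        q = p + Ed                                 # q_k = E s_i^k
        den = p @ p - 2 * (p @ q) + q @ q          # ||q - p||^2
        ghat = (p @ p - p @ q) / den if den > 0 else 0.0
        gam = min(ghat, 1.0) if ghat > 0 else 0.0
        x[idx] += gam * delta                      # update confined to block i
        p += gam * Ed                              # p_{k+1} = p_k + gam E(s-x)
    return x
\end{verbatim}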
\section{Experiments}
In this section, we apply our SBCFW-IsoRank algorithm to two network query problems. For the first set of experiments, we take a known protein complex in an archived yeast protein-protein interaction (PPI) network in one database~\cite{Krogan06} as the query to search for the subnetwork in another yeast PPI network~\cite{hasty} with different archived interactions. We call this the yeast-yeast network query problem. The goal of this set of experiments is to check the correctness of our algorithm, as we have the ground truth for the target subnetwork. With that, we aim to test the convergence property of our algorithm under different partitions and the relationship between the number of iterations and the number of partitions. The second experiment is to query a large-scale yeast PPI network in IntAct~\cite{intact} to find subnetworks of proteins with cellular functionalities similar to those of a known protein complex in a human PPI network. The aim of this experiment is to show that our new algorithm can help transfer biological knowledge from model organisms to study potential functionalities of molecules in different organisms.
\subsection{Yeast-Yeast Network Query Problem}
We test our SBCFW-IsoRank algorithm on the yeast-yeast PPI network query problem by solving the optimization problem introduced in the previous section. We take a subnetwork with 6 proteins (Fig.~\ref{fig1}(a)) from the Krogan's yeast PPI network~\cite{Krogan06} as the query example to search for the conserved functional complex in a target network, which is the Collins network~\cite{hasty} with 1,622 proteins and 9,074 interactions. The query subnetwork is the transcription factor TFIIIC complex in the Krogan's network and we are interested in testing whether we can find the same subnetwork in the Collins network. The dimension of our optimization problem is $6\times 1,622=9,732$. We run this preliminary example so that we can compare our stochastic optimization results with the results from the power method, which is typically done in the original IsoRank algorithm~\cite{isorank}.
Theoretically, the time and space complexity of our SBCFW-IsoRank at each iteration are both $O(N^2/n)$ based on the analysis in Section~\ref{IIIC}. Compared to $O(N^2)$ time and space complexity for the power method by IsoRank~\cite{isorank}, our SBCFW-IsoRank algorithm can scale better with properly selected $n$.
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=12cm]{./figure1}}
\caption{Query subnetwork and its aligned result in the target network: (a) the query subnetwork in the Krogan's yeast PPI network; (b) the aligned result in Collins yeast PPI network.}
\label{fig1}
\end{figure}
As both the query example and the target network contain interactions among proteins from the same organism---yeast, we can easily check the correctness of the query result. We define the accuracy as the number of correctly aligned proteins divided by the total number of proteins in the query subnetwork. We implement the SBCFW-IsoRank algorithm for different numbers of partitions $n$ but use the same stopping criterion: $\left \| \hat{B}\mathbf{x} -\mathbf{x}\right \| \leq \xi \left \| \mathbf{x}\right \|, \xi=0.1$. In Table~I, we find that our stochastic optimization algorithm obtains the same biologically meaningful results as the power method.
\begin{figure}[tb]
\centering
\centerline{\includegraphics[height=8cm, width=12cm]{./converge}}
\caption{The change of the objective function values with the increasing number of iterations with different numbers of partitions.}
\label{fig2}
\end{figure}
Fig.~\ref{fig2} shows the changes of the objective function values with respect to the increasing number of iterations. As illustrated in~Fig.~\ref{fig2}, our algorithm converges for all different $n$. Additionally, we find that the larger the number of partitions $n$, the more iterations are needed for the algorithm to converge to the global optimum under the same stopping criterion. This clearly demonstrates the tradeoff between the efficiency and scalability of stochastic optimization algorithms. Interestingly, we notice that for $n=10, 30,$ and $50$, the number of iterations does not increase much, which indicates that we may achieve fast computation with reasonably large $n$ because our algorithm is more efficient for larger $n$ at each iteration.
\begin{table}[h]
\begin{center}
\caption{Comparison on different decompositions with $\xi=0.1$.} \label{Table1Label}
\begin{tabular}{|c|c|c|c|}
\hline
\#. of partitions ($n$) & Computational time (s) & \#. Iterations & Accuracy\\
\hline
2 & 11.60 & 648 & 100\%\\
\hline
5 & 8.53 & 1,045 & 100\%\\
\hline
10 & 7.44 & 1,742 & 100\%\\
\hline
30 & 5.05 & 1,880 & 100\%\\
\hline
50 & 4.93 & 2,070 & 100\%\\
\hline
100 & 7.06 & 2,942 &100\%\\
\hline
200 & 13.05 & 4,478 & 100\%\\
\hline
\end{tabular}
\end{center}
\end{table}
To further investigate the performance with different $n$, we run our algorithm 10 times for each $n$ and show the average computational time, the average number of iterations, and the average accuracy score in Table~I. From Table~I, we observe that for all different $n$, our algorithm obtains $100\%$ accuracy, which again demonstrates the effectiveness and convergence of our generalized SBCFW algorithm. Also, we notice that with increasing $n$, the number of iterations increases; however, the computational time first decreases and then increases. For example, when $n=2$, our algorithm converges with the smallest number of iterations, but the computational time is not the best because each iteration takes $O\left ( \dfrac{N^2}{2} \right )$ operations. In contrast, when $n=50$, though the number of iterations is larger, the algorithm reaches the global optimum with the least computation time, which is indeed more than twice as fast as $n=2$. The trend of the computational time implies that there may exist a best number of partitions $n^*$: empirically, the computational time decreases when $n<n^*$ and increases when $n>n^*$. However, it is difficult to provide a theoretical proof for this observed phenomenon. Finally, for the scalability of the algorithm, we always prefer larger $n$ to make the memory requirement as low as possible.
\subsection{Human-Yeast Network Query Problem}
We further study the biological significance of network query results by our SBCFW-IsoRank algorithm. We extract a subnetwork as a query example from a human PPI network archived in~IntAct~\cite{intact}. The query subnetwork is the proteasome core complex, with induced interactions among the corresponding proteins from~IntAct~\cite{intact}. The proteasome core complex in human consists of 14 proteins in total, as shown in~Fig.~\ref{fig3}(a). The target network is the yeast PPI network, also obtained from IntAct~\cite{intact}, which has 6,392 proteins and 77,065 interactions. Our goal is to find the most similar subnetwork to the human proteasome core complex in the target yeast PPI network, based on both the interaction topology and the protein sequence similarity, which is computed by BLAST~\cite{blast}.
We first construct the alignment network, which has $N=14\times 6,392=89,488$ vertices. With our SBCFW-IsoRank algorithm and $n=300$, instead of operating on a matrix of size $89,488\times 89,488$ as the power method would, we only need to handle a matrix of size $298\times 89,488$ at each iteration. The computational time as well as the memory requirement per iteration are thus reduced by a factor of about 300. Our Matlab implementation of SBCFW-IsoRank on a MacPro notebook with 8GB RAM takes only around 750 seconds to converge by reaching the stopping criterion ($\xi=0.1$).
The identified subnetwork in the target yeast PPI network by our algorithm is illustrated in~Fig.~\ref{fig3}(b). To evaluate the biological significance of the obtained subnetwork, we check the p-value based on GO (Gene Ontology) enrichment analysis using GOTerm~Finder~\cite{bauer}. The identified subnetwork is significantly enriched in GO term GO:0005839, which is in fact the same proteasome core complex, with p-value $9.552\times 10^{-36}$. This experiment demonstrates that our algorithm can find biologically consistent groups of proteins with the same cellular functionalities as the proteins in the query subnetwork, hence with the capability of transferring existing biology knowledge in model organisms (yeast, for example) to less studied organisms when the group of proteins in the query subnetwork requires better understanding of its cellular functionalities.
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=12cm]{./example222}}
\caption{Querying human protein complex in a yeast PPI network. The proteins are annotated by their gene names. The solid lines are protein interactions and the dash lines denote orthologous relationships based on protein sequence similarity by BLAST between the proteins in different organisms. (a) Human proteasome core complex; (b) The aligned proteasome core complex in yeast found by SBCFW-IsoRank.}
\label{fig3}
\end{figure}
\section{Conclusions}
In this paper, we generalize the block coordinate Frank-Wolfe algorithm to solve general convex optimization problems with any convex and compact constraint set. Our generalized SBCFW algorithm has a convergence guarantee. We re-formulate the IsoRank problem as such a convex programming problem and solve the biological network alignment problem by our SBCFW-IsoRank algorithm, which scales better with the size of the networks under study. The scalability, efficiency, and effectiveness of our algorithm in solving IsoRank are demonstrated on real-world PPI network query problems. In our future work, we will consider the derivation of the optimal partition number for a better tradeoff between computational efficiency and scalability.
\section{Acknowledgements}
The authors would like to thank Simon Lacoste-Julien for pointing out the error in the original conference paper. This work was partially supported by Awards \#1447235 and \#1244068 from the National Science Foundation; as well as Award R21DK092845 from the National Institute Of Diabetes And Digestive And Kidney Diseases, National Institutes of Health.
\bibliographystyle{acm}
In determining the number of non-trivial zeroes of the Riemann zeta-function $\zeta(s)$ in a given range, one proceeds in two stages. First, one can compute a number of zeroes along the critical line using Gram's Law\footnote{Briefly: The Gram points $\{g_{n}\}_{n\geq -1}$ are easily computed and have an average spacing equal to that of the non-trivial zeroes of $\zeta(\sigma +it)$, viz. $g_{n+1}-g_{n} \asymp (\log g_{n})^{-1}$. Gram's Law states that for $t\in(g_{n}, g_{n+1}]$ there is exactly one zero of $\zeta(\frac{1}{2}+it)$. Gram's Law was shown to fail infinitely often by Titchmarsh \cite{Titchmarsh3}, and shown to fail in a positive proportion of cases by the author \cite{Trudgian}.} or Rosser's Rule\footnote{For some $n$, one defines a Gram block of length $p$ as the interval $(g_{n}, g_{n+p}]$, wherein there is an even number of zeroes in each of the intervals $(g_{n}, g_{n+1}]$ and $(g_{n+p-1}, g_{n+p}]$, and an odd number of zeroes in each of the intervals $(g_{n+1}, g_{n+2}], \ldots, (g_{n+p-2}, g_{n+p-1}]$. Rosser's Rule then states that a Gram block of length $p$ contains exactly $p$ zeroes of $\zeta(\frac{1}{2}+it)$. Rosser's Rule holds more frequently than Gram's Law, but its infinite failure was first shown by Lehman \cite{Lehman} --- see also the work of the author [\textit{op.\ cit.}]} (see, e.g. \cite[Chs VI-VII]{Edwards}), which gives one a lower bound on the total number of zeroes in the critical strip in that range. To conclude that one has found the \textit{precise} number of zeroes in this range, one needs an additional argument.
The earliest method employed was due to Backlund \cite{Backlund} and relies on showing that $\Re\zeta(s) \neq 0$ along the lines connecting $2, 2+iT, \frac{1}{2}+iT$. This is very labour intensive: nevertheless Backlund was able to perform this procedure for $T=200$ and later Hutchinson extended this to $T=300.468$ (see \cite[\textit{loc.\ cit.}]{Edwards} for more details). In both cases the zeroes of $\zeta(s)$ located via Gram's Law were verified to be the only zeroes in the given ranges. Titchmarsh \cite{Titchmarsh3} continued to use this method to show that the Riemann hypothesis is valid for $|t|\leq 1468$.
Aside from its computational intricacies, this method of Backlund is bound to fail for sufficiently large $T$. To see this, it is convenient to introduce the function $S(T)$ defined as
\begin{equation}\label{sdef}
S(T) = \pi^{-1} \arg \zeta(\tfrac{1}{2}+iT),
\end{equation}
where if $T$ is not an ordinate of a zero of $\zeta(s)$, the argument is determined by continuous variation along the lines connecting $2, 2+iT, \frac{1}{2}+ iT.$ If $T$ coincides with a zero of $\zeta(s)$ then
\begin{equation*}
S(T) = \lim_{\delta\rightarrow 0^{+}} \frac{1}{2}\left\{S(T+\delta) + S(T-\delta)\right\}.
\end{equation*}
The interest in the properties of $S(T)$ is immediate once one considers its relation to the function $N(T)$, the number of non-trivial zeroes of $\zeta(\sigma+it)$ for $0 < t \leq T$. In the equation
\begin{equation}
N(T) = \frac{T}{2\pi} \log\frac{T}{2\pi} - \frac{T}{2\pi} + \frac{7}{8} + O(T^{-1}) + S(T),
\end{equation}
the error term is continuous in $T$, whence it follows that $S(T)$ increases by $+1$ whenever $T$ passes over a zero of the zeta-function. Concerning the behaviour of $S(T)$, one has the following estimates:
\begin{equation}\label{littlewood}
\int_{0}^{T} S(t)\, dt = O(\log T),
\end{equation}
due to Littlewood (see, e.g. \cite[pp.\ 221-222]{Titchmarsh}), and
\begin{equation}\label{Sgrowth}
S(T) = \Omega_{\pm}\left(\frac{(\log T)^{\frac{1}{3}}}{(\log\log T)^{\frac{7}{3}}}\right),
\end{equation}
due to Selberg \cite{Selberg1}.
Returning to Backlund's approach: if $\Re\zeta(s) \neq 0$ along the lines connecting $2, 2+iT, \frac{1}{2}+iT$ then, as $s$ varies along these same lines, $|\arg\zeta(s)| < \frac{\pi}{2}$ if one takes the principal argument. It therefore follows that $|S(T)|< \frac{1}{2}$, whence $S(T)$ is bounded, which contradicts (\ref{Sgrowth}).
\subsection{Turing's Method}
A more efficient procedure in producing an upper bound on the number of zeroes in a given range was proposed by Turing \cite{Turing} in 1953. This relies on a quantitative version of Littlewood's result (\ref{littlewood}), given below as
\begin{theorem}\label{TurCri}
Given $t_{0}>0$, there are positive constants $a$ and $b$ such that, for $t_{2}>t_{1}>t_{0}$, the following estimate holds
\begin{equation}\label{Tur82}
\bigg|\int_{t_{1}}^{t_{2}} S(t)\, dt \bigg| \leq a + b \log t_{2}.
\end{equation}
\end{theorem}
Since $S(t)$ increases by $+1$ whenever $t$ passes over a zero (on the line or not), the existence of too many zeroes in the range $t\in(t_{1}, t_{2})$ would cause the integral in (\ref{Tur82}) to be too large.
Turing's paper \cite{Turing} contains several errors, which are fortunately corrected by Lehman \cite{Lehman}. Furthermore, Lehman also improves the constants $a$ and $b$, thereby making Turing's Method more easily applicable. Here additional improvements on the constants in Turing's Method are given in \S \ref{TM}. Rumely \cite{Rumely} has adapted Turing's Method to Dirichlet $L$-functions and this is herewith improved in \S \ref{TMDLF}. Finally, in \S \ref{TMDZF} the analogous improvements to the argument of Dedekind zeta-functions is given, following the work of Tollis \cite{Tollis}.
It is interesting to note the motivation of Turing as he writes in \cite[p.\ 99]{Turing}
\begin{quote}
The calculations were done in an optimistic hope that a zero would be found off the critical line, and the calculations were directed more towards finding such zeros than proving that none existed.
\end{quote}
Indeed, Turing's Method has become the standard technique used in modern verification of the Riemann hypothesis.
\section{Turing's Method for the Riemann zeta-function}\label{TM}
\subsection{New results}\label{New Results}
In general let the triple of numbers ($a, b, t_{0}$) satisfy Theorem \ref{TurCri}. Turing showed that $(2.07, 0.128, 168\pi)$ satisfied (\ref{Tur82}) and Lehman showed that $(1.7, 0.114, 168\pi)$ does so as well. Brent \cite[Thm 2]{Brent} used the result of Lehman [\textit{op.\ cit.}, Thm 4] to prove the following
\begin{theorem}[Lehman--Brent]\label{LBrent}
If $N$ consecutive Gram blocks with union $[g_{n}, g_{p})$ satisfy Rosser's Rule, where
\begin{equation}\label{GB}
N\geq \frac{b}{6\pi}\log^{2} g_{p}+ \frac{(a-b\log 2\pi)}{6\pi}\log g_{p},
\end{equation}
then
\begin{equation*}
N(g_{n}) \leq n+1;\quad N(g_{p}) \geq p+1.
\end{equation*}
\end{theorem}
Since, by assumption, these $N$ Gram blocks together contain exactly $p-n$ zeroes, this shows that up to height $g_{p}$ there are at most $p+1$ zeroes, and this is precisely the upper bound one seeks. Using the constants of Lehman, viz.\ $(a=1.7, b=0.114)$, it is seen that one must find at least
\begin{equation}\label{BrentN}
N \geq 0.0061 \log^{2} g_{p} + 0.08 \log g_{p}
\end{equation}
consecutive Gram blocks to apply Theorem \ref{LBrent}. This constraint on $N$ has been used in the modern computational search for zeroes, and appears in early works, e.g.\ \cite{Brent}, right through to recent ones, e.g.\ \cite{XG}.
Turing makes the remark several times in his paper \cite{Turing} that the constant $b$ could be improved at the expense of the constant $a$. In (\ref{GB}) the first term dominates when $g_{p}$ is large, and therefore for computation at a large height it is desirable to choose $b$ to be small. Indeed, what is sought is the minimisation of
\begin{equation}\label{Ftimes}
F(a, b, g_{p}) = b\log\frac{g_{p}}{2\pi} +a.
\end{equation}
Current verification of the Riemann hypothesis has surpassed the height $T = 10^{12}$, see, e.g.\ \cite{XG} wherein (\ref{BrentN}) requires the location of at least 8 Gram blocks. In \S \ref{TuringCalc} the function $F(a, b, g_{p})$ is minimised at $g_{p} = 2\pi \cdot 10^{12}$ which leads to
\begin{theorem}\label{Thm1}
If $t_{2} >t_{1} >168\pi$, then
\begin{equation*}
\bigg|\int_{t_{1}}^{t_{2}} S(t)\, dt\bigg| \leq 2.067 + 0.059 \log t_{2}.
\end{equation*}
\end{theorem}
It should be noted that the constants achieved in Theorem \ref{Thm1} are valid\footnote{The constant $168\pi$ which occurs in the triples of Turing and Lehman seems to be a misprint. In the proof of the rate of growth of $\zeta(\frac{1}{2} +it)$, given here in Lemma \ref{l2}, Turing and Lehman require $t>128\pi$ so that the error terms in the Riemann--Siegel formula are small. A computational check shows that Lemma \ref{l2} in fact holds for all $t>1$. Choosing a moderately large value of $t_{0}$ ensures that the small errors accrued (i.e.\ the $\delta$ in Lemma \ref{l44} and the $\epsilon$ in Lemma \ref{l9}) are suitably small. At no point do Turing and Lehman require the imposition of a $t_{0}$ greater than $128\pi$. It is worthwhile to note that one could replace $168\pi$ in Theorem \ref{Thm1} by a smaller number, and although this has little application to the zeta-function, it may be useful for future applications to Dedekind zeta-functions --- cf.\ \S \ref{TMDZF}.} for all $t_{2}>t_{1}>168\pi$, and that at $t_{1}>2\pi \cdot 10^{12}$ these constants minimise the right-side of (\ref{Ftimes}). The above theorem and Theorem \ref{LBrent} immediately lead to
\begin{cor}\label{Thm2}
If $N$ consecutive Gram blocks with union $[g_{n}, g_{p})$ satisfy Rosser's Rule where
\begin{equation*}
N \geq 0.0031 \log^{2} g_{p} + 0.11 \log g_{p},
\end{equation*}
then
\begin{equation*}
N(g_{n}) \leq n+1; \qquad N(g_{p}) \geq p+1.
\end{equation*}
\end{cor}
The above corollary shows that, in order to apply Turing's Method at height $g_{p} = 2\pi\cdot 10^{12}$, one needs to find only 6 Gram blocks in which the Rosser Rule is valid.
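As a quick numerical check (ours, not part of the proof), the requirement of Corollary \ref{Thm2} can be compared with (\ref{BrentN}) at $g_{p}=2\pi\cdot 10^{12}$ in a few lines:
\begin{verbatim}
# Number of Gram blocks required at g_p = 2*pi*10^12.
import math

L = math.log(2 * math.pi * 1e12)
print(math.ceil(0.0031 * L**2 + 0.11 * L))  # new constants: 6
print(math.ceil(0.0061 * L**2 + 0.08 * L))  # Lehman's constants: 8
\end{verbatim}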
\subsection{Proof of Theorem \ref{Thm1}}\label{Proof of Theorem 1.2.2}
This section closely follows the structure of Lehman's refinement \cite{Lehman} of Turing's work \cite{Turing}. Some of the Lemmas are identical to those in these papers, and their proofs are deferred to \cite{Lehman}.
To begin, one rewrites the integral of the function $S(t)$ using the following
\begin{lemma}\label{Ll1}
If $t_{2} >t_{1} >0$, then
\begin{equation}\label{110}
\pi\int_{t_{1}}^{t_{2}} S(t)\, dt =\Re\int_{\frac{1}{2}+it_{2}}^{\infty +it_{2}} \log \zeta(s) \, ds - \Re\int_{\frac{1}{2}+it_{1}}^{\infty +it_{1}} \log \zeta(s) \, ds.
\end{equation}
\end{lemma}
\begin{proof}
This is Lemma 1 in \cite{Lehman}, and the proof is based on Littlewood's theorem for analytic functions, but more detail is supplied in \cite{LehmanOld}, or \cite[pp.\ 190-192]{Edwards}.
\end{proof}
Henceforth: Lemmas \ref{l2}--\ref{200} are used to bound the first integral on the right-hand side of (\ref{110}), and Lemmas \ref{l8}--\ref{l9} are needed to bound the second integral.
\begin{lemma}\label{l2}
If $t\geq 128\pi$, then
\begin{equation*}
|\zeta\left(\tfrac{1}{2} +it\right)| \leq 2.53\, t^{\frac{1}{4}}.
\end{equation*}
\end{lemma}
\begin{proof}
See the argument in \cite{Lehman} where some corrections are given to Titchmarsh's explicit calculation of the error in the Riemann--Siegel formula.
\end{proof}
This estimate can certainly be improved insofar as reducing the exponent of $t$ is concerned. Currently the best bound on the growth of the zeta-function is due to Huxley \cite{Huxley}, viz.\ $\zeta(\frac{1}{2} +it) \ll t^{\alpha +\epsilon},$ where $\alpha = \frac{32}{205} \approx 0.1561$. However the methods used to attain this bound are complicated and the calculation of the implied constant would prove lengthy. The coarser, but simpler proof (see \cite[{Ch.\ V} \S 5]{Titchmarsh}) due to van der Corput yields
\begin{equation}\label{vdc}
|\zeta\left(\tfrac{1}{2} +it\right)| \leq At^{\frac{1}{6}} \log t,
\end{equation}
where the calculation of the constant $A$ is not too time consuming. Indeed following the arguments in \cite[{Chs IV-V}]{Titchmarsh} and using a result of Karatsuba \cite[Lem.\ 1]{KKapprox}, one can take $A\leq 20$.
The logarithmic term in (\ref{vdc}) is relatively innocuous since, for a given $\eta>0$ one can then take $t_{0}$ so large that
\begin{equation*}
\log t \leq A' t^{\eta},
\end{equation*}
where $A' = A'(\eta, t_{0})$ can be easily computed, whence
\begin{equation*}
|\zeta\left(\tfrac{1}{2} +it\right)| \leq AA't^{\frac{1}{6} +\eta}.
\end{equation*}
Turing \cite[p.\ 108]{Turing} makes reference to the improvements made possible by these refined estimates on the growth of $\zeta(\frac{1}{2} +it)$. The following Lemmas will be written with
\begin{equation}\label{currentl}
|\zeta\left(\tfrac{1}{2} +it\right)| \leq K t^{\theta},
\end{equation}
so that the benefit of such a refinement as that in (\ref{vdc}) can be seen clearly.
The bound on $\zeta(s)$ on the line $\sigma=\frac{1}{2}$ can be combined with that on the line $\sigma = c >1$, whence the Phragm\'{e}n--Lindel\"{o}f theorem can be applied throughout the strip $\frac{1}{2}\leq \sigma \leq c$. The papers of Turing and Lehman use the value $c=\frac{5}{4}$ and some improvement will be given later by choosing an optimal value of $c$ at the end of the proof. A result needed is
\begin{lemma}\label{l5}
Let $a, b, Q$ and $k$ be real numbers, and let $ f(s)$ be regular analytic in the strip $-Q\leq a\leq \sigma \leq b$ and satisfy the growth condition
$$ |f(s)| <C\exp \left\{e^{k|t|}\right\},$$ for a certain $C>0$ and for $0<k<\pi/(b-a)$. Also assume that
\[|f(s)|\leq\left\{\begin{array}{ll}
A|Q+s|^{\alpha} & \mbox{for $\Re(s) = a,$}\\
B|Q+s|^{\beta} & \mbox{for $\Re(s) = b$}\\
\end{array}
\right.\]
with $\alpha \geq \beta$. Then throughout the strip $a \leq \sigma \leq b$ the following holds
$$ |f(s)| \leq A^{(b-\sigma)/(b-a)}B^{(\sigma - a)/(b-a)}|Q+s|^{\alpha(b-\sigma)/(b-a) + \beta(\sigma-a)/(b-a)}.$$
\end{lemma}
\begin{proof}
See \cite[pp.\ 66-67]{Rad}.
\end{proof}
Take $Q=0;\; a=\frac{1}{2};\; b=c;\; f(s) = (s-1)\zeta(s)$, whence all the conditions of Lemma \ref{l5} are satisfied. Then on the line $\sigma = \frac{1}{2}$ it follows that
\begin{equation*}
|f(s)| \leq Kt^{\theta} |s-1| \leq K|s|^{\theta+1},
\end{equation*}
by virtue of (\ref{currentl}). On the line $\sigma = c$,
\begin{equation*}
|f(s)| \leq |s-1|\zeta(c) \leq \zeta(c) |s|,
\end{equation*}
since $c>1$. So one can take $A= K; \; \alpha = \theta +1;\; B=\zeta(c);\; \beta = 1$ and then apply Lemma \ref{l5} to obtain,
\begin{equation}\label{1.16}
|(s-1)\zeta(s)| \leq\left[ K^{c-\sigma}\{\zeta(c)\}^{\sigma-\frac{1}{2}} |s|^{\theta(c-\sigma) + c-\frac{1}{2}}\right]^{1/(c-\frac{1}{2})}.
\end{equation}
For sufficiently large $t$, let $C_{1}$ and $C_{2}$ be numbers satisfying
\begin{equation*}
|s-1|\geq C_{1}|s|; \qquad |s| \leq C_{2} |t|.
\end{equation*}
When $t>168\pi$ one can take $C_{1}^{-1} \geq 1 +\delta$ and $C_{2} \leq 1+\delta$, where $\delta = 2\cdot 10^{-6}$. This gives an estimate on the growth of $\zeta(s)$ in terms of $t$ only, and, together with (\ref{1.16}) proves
\begin{lemma}\label{l44}
Let $K,\theta$ and $t_{0}$ satisfy the relation that $|\zeta(\frac{1}{2}+it)| \leq Kt^{\theta}$ whenever $t>t_{0}>168\pi$. Also, let $\delta = 2\cdot 10^{-6}$ and let $c$ be a parameter satisfying $1<c\leq \frac{5}{4}$. Then throughout the region $\frac{1}{2}\leq \sigma\leq c$ the following estimate holds
\begin{equation*}
|\zeta(s)| \leq (1+\delta)\left\{K^{c-\sigma} \{\zeta(c)\}^{\sigma-\frac{1}{2}} ((1+\delta)\, t)^{\theta(c-\sigma)}\right\}^{1/(c-\frac{1}{2})}.
\end{equation*}
\end{lemma}
Now, in the integral
\begin{equation*}\label{convo}
\int_{\frac{1}{2} +it}^{\infty+it} \log |\zeta(s)| \, ds
\end{equation*}
one seeks to apply the convexity bound of Lemma \ref{l44} over the range $\frac{1}{2}\leq\sigma\leq c$, and to trivially estimate $\zeta(s)$ for $\sigma >c$. To this end, write
\begin{equation*}
\int_{\frac{1}{2} +it}^{\infty+it} \log |\zeta(s)| \, ds = \int_{\frac{1}{2}+ i t}^{c + it} \log |\zeta(s)|\, ds + m(c),
\end{equation*}
where
\begin{equation*}
m(c) := \int_{c+ i t}^{\infty + it} \log |\zeta(s)|\, ds \leq \int_{c}^{\infty} \log |\zeta(\sigma)|\, d\sigma,
\end{equation*}
since $c>1$. The application of Lemma \ref{l44} proves
\begin{lemma}\label{200}
Under the same assumptions as Lemma \ref{l44}, the following estimate holds,
\begin{equation*}
\Re\int_{\frac{1}{2}+it}^{\infty+i t} \log\zeta(s)\, ds < a_{1} + b_{1}\log t,
\end{equation*}
where
\begin{equation*}
a_{1} = \int_{c}^{\infty}\log|\zeta(\sigma)|\, d\sigma + \frac{1}{2}\left(c-\tfrac{1}{2}\right)\log \left\{K\zeta(c)\right\} + \delta,
\end{equation*}
and
\begin{equation*}
b_{1} =\frac{\theta}{2}(c-\tfrac{1}{2}).
\end{equation*}
\end{lemma}
The improvements in the following lemmas come from writing $\zeta(s+d)$ in place of $\zeta(s+1)$ which is used in the methods of Turing and Lehman. One then seeks the optimal value of $d\leq1$ at the end of the proof. Write
\begin{equation}\label{213}
\begin{split}
\Re\int_{\frac{1}{2}+it}^{\infty+it} \log \zeta(s)\, ds &= \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{\zeta(s)}{\zeta(s+d)}\bigg|\, ds\\ &+ \int_{\frac{1}{2}+d+it}^{\infty +it} \log |\zeta(s)|\, ds + \int_{\frac{1}{2}+d+it}^{\frac{1}{2}+2d+it} \log|\zeta(s)|\, ds,
\end{split}
\end{equation}
where $\frac{1}{2}<d\leq 1$. Since $d>\frac{1}{2}$, one has $\Re (s) >1$ in the second and third integrals on the right side of the above equation. Thus, $|\zeta(s)|\geq \zeta(2\sigma)/\zeta(\sigma)$, so that, after suitable changes of variables, (\ref{213}) becomes
\begin{equation}\label{lattc}
\Re\int_{\frac{1}{2}+it}^{\infty+it}\log\zeta(s)\, ds \geq \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{\zeta(s)}{\zeta(s+d)}\bigg|\, ds + I(d),
\end{equation}
where
\begin{equation}\label{defineI}
\begin{split}
I(d) &= \frac{1}{2}\int_{1+2d}^{\infty}\log\zeta(\sigma)\, d\sigma - \int_{\frac{1}{2} +d}^{\infty}\log\zeta(\sigma)\, d\sigma\\
&+ \frac{1}{2}\int_{1+2d}^{1+4d}\log\zeta(\sigma)\, d\sigma - \int_{\frac{1}{2}+d}^{\frac{1}{2} +2d}\log\zeta(\sigma)\, d\sigma,
\end{split}
\end{equation}
and these integrals, all convergent, will be evaluated at the end of the proof.
The integrand on the right side of (\ref{lattc}) can be rewritten\footnote{This method of approach is slightly easier than that given in Turing's paper, as noted by Lehman \cite[p.\ 310]{Lehman}.} using the Weierstrass product formula cf.\ \cite[pp.\ 82-83]{Davenport}
\begin{equation}\label{riewei}
\zeta(s) = \frac{e^{bs}}{2(s-1)\Gamma(1+\frac{s}{2})}\prod_{\rho}\left(1-\frac{s}{\rho}\right)e^{s/\rho},
\end{equation}
where the product is taken over zeroes $\rho$ and $b$ is a constant such that
\begin{equation*}\label{Davestyle}
b =\frac{1}{2}\log \pi -\sum_{\rho}\frac{1}{\rho},
\end{equation*}
where the sum converges provided each zero is paired with its conjugate.
Thus
\begin{equation*}
\begin{split}
\log\bigg|\frac{\zeta(s)}{\zeta(s+d)}\bigg| &= \log\bigg|\frac{s+d-1}{s-1}\bigg| - \log\bigg|\frac{\Gamma(\frac{s}{2} +1)}{\Gamma(\frac{s}{2} +1+\frac{d}{2})}\bigg| \\
&+\sum_{\rho}\log\bigg|\frac{s-\rho}{s+d-\rho}\bigg| - \frac{d}{2}\log \pi,
\end{split}
\end{equation*}
and so it follows that
\begin{equation}\label{L9final}
\begin{split}
\Re\int_{\frac{1}{2}+it}^{\infty+it} \log \zeta(s)\, ds &\geq \sum_{\rho}\int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg |\frac{s-\rho}{s+d-\rho}\bigg |\, ds\\
&- \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log \bigg | \frac{\Gamma(\frac{s}{2}+1)}{\Gamma(\frac{s+d}{2} +1)}\bigg |\, ds\\ &+ \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it} \log \bigg |\frac{s+d-1}{s-1}\bigg|\, ds +I(d) - \frac{d^{2}}{2}\log\pi \\& =I_{1} - I_{2} + I_{3} + I(d) - \frac{d^{2}}{2}\log \pi.
\end{split}
\end{equation}
The following Lemmas are needed for the evaluation of $I_{1}$ and $I_{2}$. Since $\frac{1}{2}<d\leq 1$, it is easily seen that $I_{3}\geq 0$; but since the argument of the logarithm tends to one as $t \rightarrow\infty$, no further improvement is possible.
To estimate the integral $I_{2}$ the following result is required, which is a quantitative version of the classical estimate
\begin{equation*}
\frac{\Gamma'(z)}{\Gamma(z)} = \log z + O\left(\frac{1}{z}\right),
\end{equation*}
see, e.g.\ \cite[Ch.\ XII]{WW}.
\begin{lemma}\label{l8}
Define the symbol $\Theta$ in the following way: $f(x) = \Theta\{g(x)\}$ means that $|f(x)|\leq g(x)$. If $\Re z > 0$, then
\begin{equation*}
\frac{\Gamma '(z)}{\Gamma (z)} = \log z - \frac{1}{2 z} + \Theta\left(\frac{2}{\pi^{2} | (\Im z)^{2} - (\Re z)^{2}|}\right).
\end{equation*}
\end{lemma}
\begin{proof}
See \cite[Lem.\ 8]{Lehman}.
\end{proof}
Using the mean-value theorem for integrals, $I_{2}$ can be written as
\begin{equation*}
I_{2} = -\frac{1}{2}\int_{\frac{1}{2} +it}^{\frac{1}{2}+d+it}\left\{\int_{0}^{d} \Re \frac{\Gamma '\left(1+\frac{s+\xi}{2}\right)}{\Gamma\left(1+\frac{s+\xi}{2}\right)}\, d\xi\right\}\, ds = -\frac{d^{2}}{2} \Re \frac{\Gamma '\left(\sigma +\frac{it}{2}\right)}{\Gamma\left(\sigma +\frac{it}{2}\right)},
\end{equation*}
for some $\sigma: \frac{5}{4} <\sigma < d+ \frac{5}{4}$, whence by Lemma \ref{l8}
\begin{equation}\label{finalI2}
I_{2} = -\frac{d^{2}}{2}\log \frac{t}{2} + \epsilon,
\end{equation}
where $|\epsilon|$ is comfortably less than $9\cdot 10^{-5}$ when $t>168\pi$ and decreases rapidly with increasing $t$.
In \cite{Turing}, the integrand in $I_{1}$ is evaluated using an approximate solution to a differential equation. This is then summed over the zeroes $\rho$. Using the fact that if $\rho = \beta + i\gamma$ lies off the critical line, then so too does $1-\overline{\rho}$, Booker \cite{Booker} was able to sharpen the bound on $I_{1}$. His result is given in the $d=1$ case of the following
\begin{lemma}[Booker]\label{l7}
Given a complex number $w$ with $|\Re(w)|\leq \frac{1}{2}$ then for $\frac{1}{2}< d\leq 1$ the following holds
\begin{equation*}
\int_{0}^{d} \log \bigg |\frac{(x+d+w)(x+d-\overline{w})}{(x+w)(x-\overline{w})}\bigg|\, dx \leq d^2 (\log 4) \Re\left(\frac{1}{d+w} + \frac{1}{d-\overline{w}}\right).
\end{equation*}
\end{lemma}
\begin{proof}
The proof for $d=1$ is given as Lemma 4.4 in \cite{Booker}. The adaptation to values of $d$ such that $\frac{1}{2}<d\leq 1$ is straightforward.
\end{proof}
Write
\begin{equation*}
I_{1} = \sum_{\rho}\int_{0}^{d}\log\bigg|\frac{\sigma +\frac{1}{2}+it-\rho}{\sigma+d + \frac{1}{2} + it-\rho}\bigg|\, d\sigma,
\end{equation*}
and apply Lemma \ref{l7} with $w=\frac{1}{2}+it-\rho$, pairing together $\rho$ and $1-\overline{\rho}$, whence
\begin{equation*}
I_{1} \geq -d^{2}(\log 4)\sum_{\rho}\Re\left(\frac{1}{\frac{1}{2}+d+it-\rho}\right).
\end{equation*}
Here the improvement of Booker's result is seen, as Lehman [\textit{op.\ cit.}]\ has $1.48$ in the place of $\log 4 \approx 1.38$. Rather than appealing to the Mittag-Leffler series for $\zeta(\frac{1}{2}+it)$ as in \cite{Lehman}, here one proceeds directly by rewriting the sum over the zeroes using the Weierstrass product (\ref{riewei}). By logarithmically differentiating (\ref{riewei}) and taking real parts, it is seen that
\begin{equation*}
\begin{split}
-I_{1} \leq d^{2}(\log 4)\bigg\{&\Re\frac{\zeta' \left(\frac{1}{2}+d+it\right)}{\zeta\left(\frac{1}{2}+d+it\right)} + \frac{d-\frac{1}{2}}{(d-\frac{1}{2})^{2} + t^{2}}\\
&+\frac{1}{2}\Re\frac{\Gamma'\left(\frac{1}{2}+\frac{\frac{1}{2} +d+it}{2}\right)}{\Gamma\left(\frac{1}{2}+\frac{\frac{1}{2} +d+it}{2}\right)} - \frac{1}{2}\log\pi\bigg\},
\end{split}
\end{equation*}
and thus, using Lemma \ref{l8} with $t>168\pi$ one has
\begin{equation*}
-I_{1} \leq d^{2}(\log 4)\left\{\Re\frac{\zeta'\left(\frac{1}{2}+d+it\right)}{\zeta\left(\frac{1}{2}+d+it\right)} + \frac{1}{2}\log\frac{t}{2} -\frac{1}{2}\log\pi + \epsilon'\right\},
\end{equation*}
where $|\epsilon'| \leq 10^{-4}$.
Finally, since $d>\frac{1}{2}$ then
\begin{equation*}
\Re\frac{\zeta'\left(\frac{1}{2}+d+it\right)}{\zeta\left(\frac{1}{2}+d+it\right)} \leq \bigg|\frac{\zeta'\left(\frac{1}{2}+d+it\right)}{\zeta\left(\frac{1}{2}+d+it\right)}\bigg| \leq -\frac{\zeta'\left(\frac{1}{2}+d\right)}{\zeta\left(\frac{1}{2}+d\right)},
\end{equation*}
and so
\begin{equation}\label{finalI1}
-I_{1} \leq d^{2} (\log 4)\left\{-\frac{\zeta'\left(\frac{1}{2}+d\right)}{\zeta\left(\frac{1}{2}+d\right)} + \frac{1}{2}\log t -\frac{1}{2}\log{2\pi} +\epsilon'\right\}.
\end{equation}
The results for $I_{1}$ and $I_{2}$ contained in equations (\ref{finalI1}) and (\ref{finalI2}) respectively can be used in (\ref{L9final}) to give
\begin{equation*}
\begin{split}
-\Re\int_{\frac{1}{2}+it}^{\infty+it} \log \zeta(s)\, ds &\leq d^{2}(\log 4)\left\{-\frac{\zeta'\left(\frac{1}{2}+d\right)}{\zeta\left(\frac{1}{2}+d\right)} + \frac{1}{2}\log t -\frac{1}{2}\log 2\pi \right\}\\
& - \frac{d^{2}}{2}\log\frac{t}{2} - I(d) +\frac{d^{2}}{2}\log\pi +3\epsilon',
\end{split}
\end{equation*}
where $I(d)$ is defined by equation (\ref{defineI}). This then proves
\begin{lemma}\label{l9}
For $t>168\pi$, $d$ satisfying $\frac{1}{2}<d\leq 1$, and $\epsilon' =10^{-4}$, the following estimate holds
\begin{equation*}
-\Re\int_{\frac{1}{2}+it}^{\infty+it} \log \zeta(s)\, ds \leq a_{2} + b_{2}\log t,
\end{equation*}
where
\begin{equation*}
\begin{split}
a_{2}&= d^{2}(\log 4)\left\{-\frac{\zeta'(\frac{1}{2}+d)}{\zeta(\frac{1}{2}+d)} - \frac{1}{2}\log 2\pi +\frac{1}{4}\right\} + \frac{d^{2}}{2}\log\pi \\
&- \frac{1}{2}\int_{1+2d}^{\infty}\log\zeta(\sigma)\, d\sigma + \int_{\frac{1}{2} +d}^{\infty}\log\zeta(\sigma)\, d\sigma\\
&- \frac{1}{2}\int_{1+2d}^{1+4d}\log\zeta(\sigma)\, d\sigma + \int_{\frac{1}{2}+d}^{\frac{1}{2} +2d}\log\zeta(\sigma)\, d\sigma +3\epsilon',
\end{split}
\end{equation*}
and
\begin{equation*}
b_{2}= \frac{d^{2}}{2}\left(\log 4 -1\right).
\end{equation*}
\end{lemma}
Lemmas \ref{Ll1}, \ref{l44} and \ref{l9} prove at once
\begin{theorem}
Let $t_{2} >t_{1}>t_{0}>168\pi$ and let the pair of numbers $K,\theta$ satisfy the relation that $|\zeta(\frac{1}{2}+it)|\leq Kt^{\theta}$ for $t>t_{0}$. Also, let $\mu = 3 \cdot 10^{-6}$. If the parameters $c$ and $d$ are chosen such that $1<c\leq \frac{5}{4}$ and $\frac{1}{2}<d\leq 1$ then
\begin{equation*}
\bigg|\int_{t_{1}}^{t_{2}}S(t)\, dt\bigg| \leq a + b\log t_{2},
\end{equation*}
where
\begin{equation}\label{Turingzeta}
\begin{split}
\pi a&= d^{2}(\log 4)\left\{-\frac{\zeta'(\frac{1}{2}+d)}{\zeta(\frac{1}{2}+d)} - \frac{1}{2}\log 2\pi + \frac{1}{4}\right\} +\frac{d^{2}}{2}\log\pi\\
&- \frac{1}{2}\int_{1+2d}^{\infty}\log\zeta(\sigma)\, d\sigma
+ \int_{\frac{1}{2} +d}^{\infty}\log\zeta(\sigma)\, d\sigma
- \frac{1}{2}\int_{1+2d}^{1+4d}\log\zeta(\sigma)\, d\sigma \\
&+\int_{\frac{1}{2}+d}^{\frac{1}{2} +2d}\log\zeta(\sigma)\, d\sigma
+ \frac{1}{2}(c-\tfrac{1}{2})\log \left\{K\zeta(c)\right\} + \int_{c}^{\infty}\log\zeta(\sigma)\, d\sigma + \mu,
\end{split}
\end{equation}
and
\begin{equation}\label{Turingzetab}
2\pi b= \theta(c-\tfrac{1}{2}) + d^{2}(\log 4 -1).
\end{equation}
\end{theorem}
\subsection{Calculations}\label{TuringCalc}
Taking the parameters $c=\frac{5}{4}$ and $d=1$, $\theta =\frac{1}{4}$ and $K=2.53$ one has, from the pair of equations (\ref{Turingzeta}) and (\ref{Turingzetab}) that $a=1.61$ and $b=0.0914$. These can be compared with the constants of Lehman, viz.\ $(a=1.7, b=0.114)$. It can also be seen that the minimal value of $b$ attainable by this method is $0.0353$.
Since the application of Turing's Method involves Gram blocks one wishes to minimise the bound given in (\ref{GB}). That is, one wishes to minimise the quantity $F(a, b, g_{p})$ given in (\ref{Ftimes}). Here, the values of $a$ and $b$ have been chosen to be optimal, for the application to Gram blocks, at height $g_{p}=2\pi\cdot 10^{12}$. Since it has been shown above that $a$ and $b$ are themselves functions of $c$ and $d$, write $F(c, d)$ for $F(a, b, 2\pi\cdot 10^{12})$.
Since there are no terms in (\ref{Turingzeta}) and (\ref{Turingzetab}) which involve both $c$ and $d$, one can write $F(c, d) = F_{c}(c) + F_{d}(d)$ and optimise each of the functions $F_{c}$ and $F_{d}$ separately. The presence of integrals involving the zeta-function in equation (\ref{Turingzeta}) makes the optimisation process difficult, even for a computer programme. Therefore, small values of $F_{c}(c)$ and $F_{d}(d)$ were sought over the grid of points
\begin{equation*}\label{latticeTuring}
d= d(N)= 0.99- 2N\Delta; \quad c= c(N)= 1.24 - N\Delta,
\end{equation*}
where $\Delta = 0.02$ and $0\leq N\leq 12$. This showed that values of $F(c, d)\leq 3.72$ were clustered around $d=0.71$ and $c=1.08$. A further search for small values was conducted with
\begin{equation*}\label{latticeTuring2}
d= d(N)= 0.68+N\Delta; \quad c= c(N)= 1.05 + N\Delta,
\end{equation*}
where, this time, $\Delta = 0.01$ and $0\leq N\leq 20$. The smallest value found in this second search was $F(c, d) = 3.6805\ldots$, corresponding to $d= 0.74$ and $c=1.1$. For simplicity the choice of $d=\frac{3}{4}$ and $c=\frac{11}{10}$ gives $F(c, d)= 3.6812\ldots$ and obtains the constants in Theorem \ref{Thm1}, viz.\
\begin{equation*}
a(\tfrac{11}{10}, \tfrac{3}{4}) = 2.0666;\quad b(\tfrac{11}{10}, \tfrac{3}{4}) =0.0585.
\end{equation*}
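The search is easily reproduced numerically. The following sketch (ours, for illustration only) evaluates $F(c,d)$ from (\ref{Turingzeta}) and (\ref{Turingzetab}), using mpmath for $\zeta$, $\zeta'$ and the convergent integrals of $\log\zeta$; the negligible constant $\mu$ is omitted.
\begin{verbatim}
# Illustrative evaluation of F(c,d) = b(c,d) log(g_p/(2 pi)) + a(c,d).
from mpmath import mp, zeta, log, quad, inf, pi

mp.dps = 15
K, theta = 2.53, 0.25
J = lambda u, v: quad(lambda s: log(zeta(s)), [u, v])   # int of log zeta

def a_cd(c, d):
    return (d**2 * log(4) * (-zeta(0.5 + d, derivative=1) / zeta(0.5 + d)
            - log(2 * pi) / 2 + 0.25)
            + d**2 / 2 * log(pi)
            - J(1 + 2*d, inf) / 2 + J(0.5 + d, inf)
            - J(1 + 2*d, 1 + 4*d) / 2 + J(0.5 + d, 0.5 + 2*d)
            + (c - 0.5) / 2 * log(K * zeta(c)) + J(c, inf)) / pi

def b_cd(c, d):
    return (theta * (c - 0.5) + d**2 * (log(4) - 1)) / (2 * pi)

F = lambda c, d: b_cd(c, d) * log(10**12) + a_cd(c, d)  # g_p = 2 pi 10^12
print(F(1.1, 0.75))                                     # about 3.68
\end{verbatim}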
\section{Dirichlet $L$-functions}\label{TMDLF}
\subsection{Introduction}\label{Outlineforothers}
In the works of Rumely \cite{Rumely} and Tollis \cite{Tollis}, analogues for Turing's Method are developed for Dirichlet $L$-functions, and for Dedekind zeta-functions respectively. Each of these proofs is based on \cite{Lehman}, so it is fitting to apply the above adaptations to yield better constants in these analogous cases. Since many of the details in the proofs are identical to those in \S\ref{TM}, this section and \S\ref{TMDZF} are less ponderous than the previous one.
\subsection{Analogies to the functions $Z(t)$, $\theta(t)$ and $S(t)$}
Let $\chi$ be a primitive Dirichlet character with conductor $Q>1$, and let $L(s, \chi)$ be the Dirichlet $L$-series attached to $\chi$. Furthermore define $\delta = (1-\chi(-1))/2$ so that $\delta$ is $0$ or $1$ according to whether $\chi$ is an even or odd character. Then the function
\begin{equation}\label{Dirichletdef}
\xi(s, \chi) = \left(\tfrac{Q}{\pi}\right)^{\frac{s}{2}} \Gamma\left(\tfrac{s+\delta}{2}\right) L(s, \chi)
\end{equation}
is entire and satisfies the functional equation
\begin{equation*}\label{Dirichletfunctional}
\xi(s, \chi)= W_{\chi}\xi(1-s, \overline{\chi}),
\end{equation*}
where
\begin{equation*}
W_{\chi} = i^{-\delta} \tau(\chi)Q^{-\frac{1}{2}}; \quad \tau(\chi) = \sum_{n=1}^{Q} \chi(n) e^{\frac{2\pi n i}{Q}}.
\end{equation*}
It is easily seen that $|W_\chi| = 1$ and so one may write $W_{\chi}= e^{i\theta_{\chi}}$ and, for $s=\frac{1}{2} +it$,
\begin{equation*}\label{Dirichlettheta}
\theta(t, \chi):= \frac{t}{2}\log\frac{Q}{\pi} + \Im\log\Gamma\left(\tfrac{s+\delta}{2}\right) - \frac{\theta_{\chi}}{2}.
\end{equation*}
Then the functions $Z(t, \chi)$ and $\theta(t, \chi)$ are related by the equation
\begin{equation*}\label{DirichletRS}
Z(t, \chi) = e^{i\theta(t, \chi)}L(s, \chi),
\end{equation*}
where $Z(t, \chi)$ is real. This is analogous to the equation
\begin{equation*}
Z(t) = e^{i\theta(t)}\zeta(\frac{1}{2} +it),
\end{equation*}
which can be found in \cite[Ch.\ IV, \S 17]{Titchmarsh}.
One can now show that $\theta(t, \chi)$ is ultimately monotonically increasing. This means that the \textit{Gram points} $g_{n}$ can be defined for Dirichlet $L$-functions as those points at which $\theta(g_{n}, \chi) = n\pi$.
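For numerical work these Gram points can be located directly. The sketch below is an illustration under stated assumptions: $\chi$ is supplied as a callable on residues modulo $Q$, $\theta_{\chi}$ is taken as the principal phase of $W_{\chi}$ computed from the Gauss sum, and $\theta(t,\chi)$ is assumed increasing on the bisection bracket; all function names are ours.
\begin{verbatim}
# Illustrative computation of theta(t,chi) and a Gram point g_n.
import cmath, math
from mpmath import mpc, loggamma

def root_phase(chi, Q, delta):       # theta_chi, the phase of W_chi
    tau = sum(chi(n) * cmath.exp(2j * math.pi * n / Q)
              for n in range(1, Q + 1))
    return cmath.phase((1j) ** (-delta) * tau / math.sqrt(Q))

def theta(t, Q, delta, phase):
    s = mpc(0.5, t)
    return (t / 2) * math.log(Q / math.pi) \
         + float(loggamma((s + delta) / 2).imag) - phase / 2

def gram_point(n, Q, delta, phase, lo, hi, iters=100):
    # assumes theta(lo) < n*pi < theta(hi), theta increasing on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if theta(mid, Q, delta, phase) < n * math.pi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}
Note that $\theta_{\chi}$ is only determined modulo $2\pi$, so the numbering of the $g_{n}$ depends on the branch chosen.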
Similarly to (\ref{sdef}), define, whenever $t$ is not an ordinate of a zero of $L(s, \chi)$, the function
\begin{equation}\label{DirichletS}
S(t, \chi) = \frac{1}{\pi}\arg L(\tfrac{1}{2}+it, \chi),
\end{equation}
where, as before, the argument is determined via continuous variation along the straight lines connecting $2$, $2+it$ and $\frac{1}{2} +it$; with a continuity condition if $t$ coincides with a zero. It is known that
\begin{equation*}\label{LittlewoodS}
\int_{t_{1}}^{t_{2}} S(t, \chi)\, dt = O(\log Qt_{2}),
\end{equation*}
and Turing's Method for Dirichlet $L$-functions requires a quantitative version of this result.
\subsection{Theorem and new results}\label{DirLResults}
\begin{theorem}\label{422}
Given $t_{0}>0$, there are positive constants $a$ and $b$ such that, whenever $t_{2}>t_{1}>t_{0}$, the following estimate holds:
\begin{equation}\label{540}
\bigg| \int_{t_{1}}^{t_{2}} S(t, \chi)\, dt\bigg|\leq a+ b\log\frac{Qt_{2}}{2\pi}.
\end{equation}
\end{theorem}
Rumely \cite{Rumely} has shown that $(1.8397, 0.1242, 50)$ satisfies (\ref{540}).
Analogous to Theorem \ref{LBrent} is
\begin{theorem}[Rumely]\label{thmrum}
For $t_{2}>t_{1}>50$ the following estimate holds
\begin{equation}\label{Spidi}
\int_{t_{1}}^{t_{2}} \bigg|S(t, \chi) \frac{\theta'(t, \chi)}{\pi}\bigg|\, dt\leq 0.1592\log\frac{Qt_{2}}{2\pi}\left(a + b\log\frac{Qt_{2}}{2\pi}\right):= B(Q, t_{2}).
\end{equation}
\end{theorem}
The constant $0.1592$ comes from applying Stirling's formula to the function $\theta(t, \chi)$. It is this bound which is used in practical calculations. As in the case of the zeta-function, $a$ and $b$ are roughly inversely proportional, so one can choose these parameters in such a way that the quantity $B(Q, t_{2})$ is minimised for a given $Q$ and $t_{2}$.
At $Q=100$ and $t_{2} = 2500$, Rumely's constants $(a=1.8397, b=0.1242)$ give the value
\begin{equation*}
B(Q, t_{2}) \approx 5.32,
\end{equation*}
however, there is a misprint in \cite{Rumely} where this is quoted as 4.824; this does not appear to affect his numerical calculations. The values of $a$ and $b$ have been optimised in (\ref{Spidi}) for $Q=100$ and $t_{2}=2500$, which proves the following
\begin{theorem}\label{TurDir}
If $t_{2}>t_{1}>t_{0}$ then the following estimate holds
\begin{equation*}
\bigg| \int_{t_{1}}^{t_{2}} S(t, \chi)\, dt\bigg|\leq 1.975+ 0.084\log\left(\frac{Qt_{2}}{2\pi}\right).
\end{equation*}
\end{theorem}
It therefore follows that $B(100, 2500) \approx 4.82$. Further reductions in the size of $B(Q, t_{2})$ are possible if the quantity $Qt_{2}$ is taken much larger, which will certainly happen in future calculations.
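For reference, $B(Q,t_{2})$ from (\ref{Spidi}) can be evaluated as follows (an illustrative check of the figures quoted above):
\begin{verbatim}
# Evaluating B(Q, t2) for the two sets of constants.
import math

def B(Q, t2, a, b):
    L = math.log(Q * t2 / (2 * math.pi))
    return 0.1592 * L * (a + b * L)

print(B(100, 2500, 1.8397, 0.1242))  # ~5.32 (Rumely's constants)
print(B(100, 2500, 1.975, 0.084))    # ~4.8  (the theorem above)
\end{verbatim}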
\subsection{Proof of Theorem \ref{TurDir}}\label{Proof of Theorem 1.3.2}
Littlewood's lemma on the number of zeroes of an analytic function in a rectangle is used to prove
\begin{lemma}\label{L563}
If $t_{2}>t_{1}>0$, then
\begin{equation}\label{563}
\int_{t_{1}}^{t_{2}} S(t, \chi)\, dt = \frac{1}{\pi}\int_{\frac{1}{2} +it_{2}}^{\infty+it_{2}} \log|L(s, \chi)|\, d\sigma - \frac{1}{\pi}\int_{\frac{1}{2} +it_{1}}^{\infty+it_{1}} \log|L(s, \chi)|\, d\sigma.
\end{equation}
\end{lemma}
\begin{proof}
The proof of this is the same as for Lemma \ref{Ll1}.
\end{proof}
The following Lemma is a convexity estimate which will be used to give an upper bound on the first integral in (\ref{563}).
\begin{lemma}[Rademacher]\label{Rad567}
Suppose $1<c<\frac{3}{2}$. Then, for $1-c\leq \sigma \leq c$, for all moduli $Q>1$, and for all primitive characters $\chi$ with modulus $Q$,
\begin{equation}\label{Rademacher}
|L(s, \chi)| \leq \left(\frac{Q|1+s|}{2\pi}\right)^{\frac{c-\sigma}{2}} \, \zeta(c).
\end{equation}
\end{lemma}
\begin{proof}
See \cite[Thm 3]{Rademacher}.
\end{proof}
Rumely chooses $c= \frac{5}{4}$, but here the value of $c$ will be chosen optimally at the end of the argument. In preparation for taking the logarithm of both sides of (\ref{Rademacher}) note that for $\frac{1}{2}\leq \sigma \leq c$ and $t\geq t_{0}$, one can find an $\epsilon >0$ such that $\log(|1+s|/t) \leq \epsilon$. This will be used to express $|\log L(s, \chi)|$ as a function of $t$ rather than $s$. Indeed, if $\sigma \leq \frac{5}{4}$ and $t>t_{0}$ it is easy to show that
\begin{equation*}
\frac{|1+s|}{t} \leq 1 + \frac{81}{32t_{0}^{2}} = 1+\epsilon.
\end{equation*}
Write
\begin{equation}\label{5alive}
\int_{\frac{1}{2}+it}^{\infty +it}\log|L(s, \chi)|\, ds = \int_{\frac{1}{2}}^{c} \log|L(s, \chi)|\, d\sigma + \int_{c}^{\infty}\log|L(s, \chi)|\, d\sigma,
\end{equation}
where the convexity result will be applied to the first integral on the right-side. To estimate the second, note that for $\sigma \geq c>1$ one can write
\begin{equation*}
|L(s, \chi)| = \bigg|\sum_{n=1}^{\infty}\chi(n)n^{-s}\bigg| \leq \sum_{n=1}^{\infty} n^{-\sigma} = \zeta(\sigma).
\end{equation*}
With this estimation and the convexity estimate of (\ref{Rademacher}), equation (\ref{5alive}) becomes
\begin{equation*}
\begin{split}
\int_{\frac{1}{2}+it}^{\infty+it} \log|L(s, \chi)|\, d\sigma &\leq \frac{1}{4}\left(c-\tfrac{1}{2}\right)^{2} \left\{\log\frac{Qt}{2\pi} + \epsilon\right\} \\
&+ \left(c-\tfrac{1}{2}\right)\log\zeta(c) + \int_{c}^{\infty}\log|\zeta(\sigma)|\, d\sigma.
\end{split}
\end{equation*}
This then proves
\begin{lemma}\label{P1Di}
If $t>t_{0}>0$ and $c$ is a parameter satisfying $1<c\leq\frac{5}{4}$, then throughout the region $\frac{1}{2}\leq \sigma\leq c$ the following estimate holds
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty+it} \log|L(s, \chi)|\, d\sigma \leq a_{1} + b_{1} \log\frac{Qt}{2\pi},
\end{equation*}
where
\begin{equation}\label{a609}
a_{1} = \frac{729}{2048t_{0}^{2}} + \left(c-\tfrac{1}{2}\right)\log\zeta(c) + \int_{c}^{\infty}\log|\zeta(\sigma)|\, d\sigma,
\end{equation}
and
\begin{equation*}
b_{1} =\frac{1}{4}(c-\tfrac{1}{2})^{2}.
\end{equation*}
\end{lemma}
Rumely uses $t_{0}=50$, whence one can take the first term in (\ref{a609}) to be at most $1.5\cdot 10^{-4}$.
The improvements in the following Lemmas arise from taking $d$ to be in the range $\frac{1}{2}<d\leq 1$ and choosing the value of $d$ optimally at the end of the proof. One writes
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty+it} \log|L(s, \chi)|\, d\sigma
\end{equation*}
as a sum of integrals in the style of (\ref{213}). For $\sigma>1$ one can write
\begin{equation*}
\begin{split}
\log|L(s, \chi)| &= -\sum_{p}\log|1-\chi(p)p^{-s}| \geq-\sum_{p}\log(1+p^{-\sigma}) \\
&= \sum_{p}\left\{ -\log(1-p^{-2\sigma}) + \log(1-p^{-\sigma}) \right\}= \log \zeta(2\sigma) - \log\zeta(\sigma),
\end{split}
\end{equation*}
whence
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty+it} \log|L(s, \chi)|\, d\sigma \geq \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it} \log\bigg|\frac{L(s, \chi)}{L(s+d, \chi)}\bigg|\, d\sigma + I(d),
\end{equation*}
where $I(d)$ is the same function defined in (\ref{defineI}) in $\S \ref{Proof of Theorem 1.2.2}$.
Now the integrand on the right of the above equation can be expanded using the Weierstrass Product\footnote{Note that equation (18) of \cite{Rumely} has $(Q/\pi)^{s}$, rather than $(Q/\pi)^{s/2}$.}, see, e.g.\ \cite[pp.\ 84-85]{Davenport}
\begin{equation}\label{DWe}
\left(\tfrac{Q}{\pi}\right)^{\frac{s}{2}}\Gamma\left(\tfrac{s+\delta}{2}\right)L(s, \chi) = \xi(s, \chi) = e^{A+Bs}\prod_{\rho} (1-\tfrac{s}{\rho})e^{\frac{s}{\rho}},
\end{equation}
with
\begin{equation}\label{daveporto}
B = -\lim_{T\rightarrow\infty}\sum_{|\rho|<T}\frac{1}{\rho}.
\end{equation}
In the same manner as Turing's Method for the zeta-function, one arrives at
\begin{equation}\label{firstterm}
\begin{split}
\int_{\frac{1}{2}+it}^{\infty+it} \log|L(s, \chi)|\, d\sigma &\geq \frac{d^{2}}{2}\log\frac{Q}{\pi} + \int_{\frac{1}{2}+it}^{\frac{1}{2} +d+it} \log\bigg|\frac{\Gamma\left(\frac{s+d+\delta}{2}\right)}{\Gamma\left(\frac{s+\delta}{2}\right)}\bigg|\, d\sigma \\
&+\sum_{\rho}\int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{s-\rho}{s+d-\rho}\bigg|\, d\sigma +I(d)\\
&= \frac{d^{2}}{2}\log\frac{Q}{\pi} +I_{1} +I_{2} +I(d).
\end{split}
\end{equation}
As before, one uses the second mean-value theorem for integrals to address $I_{1}$, whence
\begin{equation*}
I_{1} = \int_{\frac{1}{2}+it}^{\frac{1}{2} +d+it} \log\bigg|\frac{\Gamma\left(\frac{s+d+\delta}{2}\right)}{\Gamma\left(\frac{s+\delta}{2}\right)}\bigg|\, d\sigma = \frac{d^{2}}{2}\Re\frac{\Gamma'\left(\tau+\frac{it}{2}\right)}{\Gamma\left(\tau+\frac{it}{2}\right)},
\end{equation*}
for
\begin{equation*}
\tau \in \left(\tfrac{1}{4}+\tfrac{\delta}{2},\, d +\tfrac{1}{4} + \tfrac{\delta}{2}\right) \subset \left(\tfrac{1}{4},\, d+\tfrac{3}{4}\right),
\end{equation*}
since $\delta$ is either $0$ or $1$. Using Lemma \ref{l8}, one has that
\begin{equation}\label{670}
I_{1} \geq \frac{d^{2}}{2}\left\{\log\frac{t}{2} - \epsilon'\right\},
\end{equation}
where
\begin{equation}\label{670b}
\epsilon' = \frac{11}{t_{0}^{2}},
\end{equation}
and, since $t_{0}>50$, it follows that $\epsilon'<5\cdot 10^{-3}$.
The application of Lemma \ref{l7} to $I_{2}$, with zeroes $\rho$ paired with $1-\overline{\rho}$ gives
\begin{equation*}\label{oncebooker}
I_{2} \geq -d^{2}(\log 4)\sum_{\rho}\Re\left(\frac{1}{d+\frac{1}{2} +it -\rho}\right).
\end{equation*}
Now logarithmically differentiate the Weierstrass product in (\ref{DWe}), take real parts, and use (\ref{daveporto}), to arrive at
\begin{equation}\label{almost}
\sum_{\rho}\Re\left(\frac{1}{s-\rho}\right) = \frac{1}{2}\log\frac{Q}{\pi} + \frac{1}{2} \Re\left(\frac{\Gamma'\left(\frac{s+\delta}{2}\right)}{\Gamma\left(\frac{s+\delta}{2}\right)}\right) + \Re\left(\frac{L'(s, \chi)}{L(s, \chi)}\right).
\end{equation}
For $\sigma=\Re(s) >1$, one can write
\begin{equation}\label{Rel}
\Re\left(\frac{L'(s, \chi)}{L(s, \chi)}\right) = \Re\left(\sum_{p}\frac{\chi(p)\log p}{p^{s}-\chi(p)}\right) \leq \sum_{p}\frac{\log p}{p^{\sigma} -1} = -\frac{\zeta'(\sigma)}{\zeta(\sigma)}.
\end{equation}
Thus when $s=d+\frac{1}{2}+it$, an application of Lemma \ref{l8} to (\ref{almost}) together with (\ref{Rel}) gives
\begin{equation}\label{suban}
I_{2} \geq -d^{2}(\log 4)\left(\frac{1}{2}\log\frac{Qt}{2\pi} + \frac{5}{t_{0}^{2}} - \frac{\zeta'(\frac{1}{2} +d)}{\zeta(\frac{1}{2} +d)}\right).
\end{equation}
The results for $I_{2}$, contained in (\ref{suban}), and for $I_{1}$, contained in (\ref{670}) and (\ref{670b}), can be combined with (\ref{firstterm}) to prove
\begin{lemma}\label{wolfb}
For $t>t_{0}>50$ and for a parameter $d$ satisfying the condition $\frac{1}{2}<d\leq 1$, the following estimate holds
\begin{equation*}
-\int_{\frac{1}{2}+it}^{\infty+it}\log|L(s, \chi)|\, d\sigma \leq a_{2} + b_{2}\log\frac{Qt}{2\pi},
\end{equation*}
where
\begin{equation*}
\begin{split}
a_{2} &= \frac{13d^{2}}{t_{0}^{2}} - d^{2}(\log 4)\frac{\zeta'\left(\tfrac{1}{2}+d\right)}{\zeta\left(\tfrac{1}{2}+d\right)} - \frac{1}{2}\int_{2d+1}^{\infty}\log \zeta(\sigma)\, d\sigma\\
&+\int_{\frac{1}{2} +d}^{\infty}\log \zeta(\sigma)\, d\sigma
- \frac{1}{2}\int_{2d+1}^{4d+1}\log \zeta(\sigma)\, d\sigma + \int_{\frac{1}{2}+d}^{\frac{1}{2}+2d}\log \zeta(\sigma)\, d\sigma,
\end{split}
\end{equation*}
and
\begin{equation*}
b_{2}= \frac{d^{2}}{2}(\log 4 -1).
\end{equation*}
\end{lemma}
Lemmas \ref{L563}, \ref{P1Di} and \ref{wolfb} prove at once
\begin{theorem}
If $t_{2}>t_{1}>t_{0}>50$ and $c$ and $d$ are parameters such that $1<c\leq \frac{5}{4}$ and $\frac{1}{2}<d\leq 1$, the following estimate holds
\begin{equation*}
\bigg|\int_{t_{1}}^{t_{2}}S(t, \chi)\, dt\bigg| \leq a + b \log\left(\frac{Qt_{2}}{2\pi}\right),
\end{equation*}
where
\begin{equation}\label{finala}
\begin{split}
a\pi &= (c-\tfrac{1}{2})\log\zeta(c) + \int_{c}^{\infty}\log\zeta(\sigma)\, d\sigma - d^{2}(\log 4)\frac{\zeta'\left(\tfrac{1}{2}+d\right)}{\zeta\left(\tfrac{1}{2}+d\right)} \\
&- \frac{1}{2}\int_{2d+1}^{\infty}\log \zeta(\sigma)\, d\sigma
+\int_{\frac{1}{2} +d}^{\infty}\log \zeta(\sigma)\, d\sigma\\
&- \frac{1}{2}\int_{2d+1}^{4d+1}\log \zeta(\sigma)\, d\sigma + \int_{\frac{1}{2}+d}^{\frac{1}{2}+2d}\log \zeta(\sigma)\, d\sigma +\frac{15d^{2}}{t_{0}^{2}}
\end{split}
\end{equation}
and
\begin{equation}\label{finalb}
2b\pi = \frac{1}{2}\left(c-\tfrac{1}{2}\right)^{2} + d^{2}(\log 4 -1).
\end{equation}
\end{theorem}
\subsection{Calculations and improvements}\label{DirCalc}
In (\ref{finala}) and (\ref{finalb}) Rumely has $c=\frac{5}{4}$ and $d=1$ as well as $1.48$ in place of $\log 4 \approx 1.38$, and thus he calculates\footnote{The number $0.1242$ quoted by Rumely in his Theorem 2 is a result of a rounding error from his Lemma 2.}
\begin{equation*}
a = 1.839; \quad b= 0.1212.
\end{equation*}
Even with the same values of $c$ and $d$, the inclusion of Lemma \ref{l7} gives the result here that
\begin{equation*}
a(\tfrac{5}{4}, 1) = 1.794, \quad b(\tfrac{5}{4}, 1) = 0.1063.
\end{equation*}
For the values of $Q=100$, $t_{2}=2500$ the quantity $B(Q, t_{2})$ --- defined in Theorem \ref{thmrum} --- was minimised over the two parameter intervals $1<c\leq\frac{5}{4}$ and $\frac{1}{2}<d\leq 1$ using a computer programme, similarly to \S\ref{TuringCalc}. This yielded the optimal value for $B(Q, t_{2})$ at $c=1.17$ and $d=0.88$, whence the constants
\begin{equation*}
a(1.17, 0.88) = 1.9744, \quad b(1.17, 0.88) = 0.0833,
\end{equation*}
which appear in Theorem \ref{TurDir}.
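The minimisation is elementary and can be reproduced with a short script along the following lines. This is a sketch only: it evaluates (\ref{finala}) and (\ref{finalb}) with $t_{0}=50$, and it assumes that $B(Q, t_{2})$ of Theorem \ref{thmrum} takes the form $\{a\log(Qt_{2}/2\pi) + b\log^{2}(Qt_{2}/2\pi)\}/(2\pi)$, which is consistent with the values $B(100, 2500)\approx 5.32$ and $4.82$ quoted above.
\begin{verbatim}
# Sketch only: minimise B(Q, t2) over 1 < c <= 5/4 and 1/2 < d <= 1,
# with a(c, d), b(c, d) computed from (finala), (finalb) at t0 = 50 and
# B assumed to be (a*L + b*L^2)/(2*pi), where L = log(Q*t2/(2*pi)).
import mpmath as mp

mp.mp.dps = 15
t0 = 50

def J(lo, hi):
    # \int_lo^hi log zeta(sigma) d sigma
    return mp.quad(lambda s: mp.log(mp.zeta(s)), [lo, hi])

def a_val(c, d):
    zpz = mp.diff(mp.zeta, 0.5 + d) / mp.zeta(0.5 + d)
    api = ((c - 0.5)*mp.log(mp.zeta(c)) + J(c, mp.inf)
           - d**2*mp.log(4)*zpz
           - 0.5*J(2*d + 1, mp.inf) + J(0.5 + d, mp.inf)
           - 0.5*J(2*d + 1, 4*d + 1) + J(0.5 + d, 0.5 + 2*d)
           + 15*d**2/t0**2)
    return api / mp.pi

def b_val(c, d):
    return (0.5*(c - 0.5)**2 + d**2*(mp.log(4) - 1)) / (2*mp.pi)

def B(Q, t2, c, d):
    L = mp.log(Q*t2/(2*mp.pi))
    return (a_val(c, d)*L + b_val(c, d)*L**2) / (2*mp.pi)

# coarse grid; takes a little while owing to the quadratures
best = min(((B(100, 2500, c, d), c, d)
            for c in mp.arange(1.01, 1.26, 0.01)
            for d in mp.arange(0.51, 1.01, 0.01)),
           key=lambda r: r[0])
print(best)   # approximately (4.82, 1.17, 0.88)
\end{verbatim}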
\section{Dedekind zeta-functions}\label{TMDZF}
Let $K$ be a number field of degree $N$ with discriminant $D$ and with ring of integers $\mathcal{O}_{K}$. Let the signature of the field be $(r_{1}, r_{2})$, by which it is meant that $K$ has $r_{1}$ real embeddings and $r_{2}$ pairs of complex embeddings, whence $N=r_{1} +2r_{2}$. Then for $\Re(s) >1$ the Dedekind zeta-function is defined as
\begin{equation*}\label{dedekind}
\zeta_{K}(s) = \sum_{\mathfrak{a}\in \mathcal{O}_{K}} (N\mathfrak{a})^{-s} = \sum_{n\geq 1}a_{n} n^{-s},
\end{equation*}
where $\mathfrak{a}$ ranges over the non-zero ideals of $\mathcal{O}_{K}$ and $a_{n}$ is the number of ideals with norm $n$. Like the Riemann zeta-function, the Dedekind zeta-function can be extended via analytic continuation to the entire complex plane where it is defined as a meromorphic function with a simple pole at $s=1$. If
\begin{equation*}\label{lamda}
\Lambda_{K}(s) = \Gamma(\tfrac{s}{2})^{r_{1}} \Gamma(s)^{r_{2}} \left(\frac{\sqrt{|D_{K}|}}{\pi^{\frac{N}{2}}2^{r_{2}}}\right)^{s} \zeta_{K}(s),
\end{equation*}
then the Dedekind zeta-function satisfies the functional equation
\begin{equation}\label{functional}
\Lambda_{K}(s) = \Lambda_{K} (1-s).
\end{equation}
One can define, see, e.g.\ \cite{Tollis}, the functions analogous to $Z(t)$ and $\theta(t)$ by
\begin{equation*}
Z_{K}(t) = e^{i\theta_{K}(t)}\zeta_{K}(\tfrac{1}{2}+it).
\end{equation*}
Analogous to the function $S(t)$ define,
\begin{equation*}\label{Sdef}
S_{K}(t) = \frac{1}{\pi} \arg \zeta_{K}(\tfrac{1}{2} +it); \quad S^{1}_{K}(t) = \int_{0}^{t}S_{K}(u)\, du,
\end{equation*}
where the value of the argument is determined, if $t$ is not an ordinate of a zero, by continuous variation along the line from $\infty +it$ to $\frac{1}{2} +it$, with $S_{K}(0) =0$. The modified Turing criterion for Dedekind zeta-functions relies on the following
\begin{theorem}\label{650}
Given $t_{0}>0$ there are positive constants $a, b$ and $g$ such that, whenever $t_{2}>t_{1}>t_{0}$ the following estimate holds
\begin{equation}\label{TuringTol}
\bigg|\int_{t_{1}}^{t_{2}} S_{K}(t)\, dt \bigg| \leq a+bN + g\log\left(|D_{K}| \left(\frac{t_{2}}{2\pi}\right)^{N}\right).
\end{equation}
\end{theorem}
If one denotes the quadruple $(a, b, g, t_{0})$ as those numbers satisfying (\ref{TuringTol}), then the work of Tollis \cite{Tollis} leads to the quadruple $(0.2627, 1.8392, 0.122, 40)$.
Analogous to Theorem \ref{thmrum}, is the following
\begin{theorem}[Tollis]
If $t_{2}>t_{1}>40$, then
\begin{equation*}
\begin{split}
\bigg|\int_{t_{1}}^{t_{2}} S_{K}(t) \frac{\theta'_{K}(t)}{\pi}\, dt\bigg| &\leq \left(\frac{b}{2\pi}N + \frac{a}{2\pi}\right)\log\left(|D_{K}|\left(\frac{t_{2}}{2\pi}\right)^{N}\right)\\
&+ \frac{g}{2\pi}\log^{2}\left(|D_{K}|\left(\frac{t_{2}}{2\pi}\right)^{N}\right) \\
&= B(D_{K}, t_{2}, N).
\end{split}
\end{equation*}
\end{theorem}
For a given $D_{K}, t_{2}, N$ one wishes to choose the constants $a, b$ and $g$ so as to minimise $B(D_{K}, t_{2}, N)$.
For the sample values $N=4$, $D_{K}=1000$ and $t_{2} = 80$ one finds that Tollis's constants give $B(D_{K}, t_{2}, N)\approx 26.44$. As will be shown in \S \ref{Dedcalc}, very little improvement can be given on the constants of Tollis. Nevertheless the inclusion of Lemma \ref{l7} is enough to prove
\begin{theorem}\label{thm657}
Given $t_{2}>t_{1}>40$ the following estimate holds
\begin{equation}\label{Turing}
\bigg|\int_{t_{1}}^{t_{2}} S_{K}(t)\, dt \bigg| \leq 0.264+1.843N + 0.105\log\left(|D_{K}| \left(\frac{t_{2}}{2\pi}\right)^{N}\right).
\end{equation}
\end{theorem}
The improvements to Tollis's work will most likely be of use in the search for zeroes of Dedekind zeta-functions of large discriminant or degree but at \textit{small} height. For this reason the constant $t_{0}$ has been retained in the following equations, and appears in Theorem \ref{L890} from which Theorem \ref{thm657} is derived.
\subsection{Proof of Theorem \ref{thm657}}
As before, one begins by proving
\begin{lemma}\label{lwtollis}
\begin{equation*}
\pi\int_{t_{1}}^{t_{2}}S_{K}(t)\, dt = \int_{\frac{1}{2} +it_{2}}^{\infty +it_{2}} \log|\zeta_{K}(s)|\, ds - \int_{\frac{1}{2} +it_{1}}^{\infty +it_{1}} \log|\zeta_{K}(s)|\, ds.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is the same as in Lemma \ref{Ll1}.
\end{proof}
The convexity estimate required is
\begin{lemma}[Rademacher]\label{6701}
For $1<c<\frac{3}{2}$ and $s= \sigma +it$, throughout the range $1-c\leq\sigma\leq c$ the following estimate holds
\begin{equation}\label{convexity}
|\zeta_{K}(s)| \leq 3\bigg|\frac{1+s}{1-s}\bigg| \zeta(c)^{N}\left(|D_{K}|\left\{\frac{|1+s|}{2\pi}\right\}^{N}\right)^{\frac{c-\sigma}{2}}.
\end{equation}
\end{lemma}
\begin{proof}
See \cite[Thm 4]{Rademacher}.
\end{proof}
Note that, for $\frac{1}{2}\leq \sigma \leq c\leq \frac{5}{4}$ and for $t>t_{0}$ one can write
\begin{equation*}\label{thp}
\log|1+s|\leq \log t + \frac{81}{32t_{0}^{2}}.
\end{equation*}
This then enables one to place an upper bound on (\ref{convexity}) in terms of $t$ rather than $s$.
Now write
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty +it}\log |\zeta_{K}(s)|\, ds = \int_{\frac{1}{2}+it}^{c +it}\log |\zeta_{K}(s)|\, ds + \int_{c+it}^{\infty+it} \log|\zeta_{K}(s)|\, ds,
\end{equation*}
where the second integral on the right-hand side is estimated trivially by the relation
\begin{equation}\label{pop}
\log|\zeta_{K}(\sigma+it)| \leq N\log\zeta(\sigma),
\end{equation}
since $\sigma >1$. The inequality in (\ref{pop}) can be seen by taking the prime ideal decomposition as in, e.g.\ \cite[p.\ 199]{Rademacher}. An application of the convexity estimates from Lemma \ref{6701} proves the following
\begin{lemma}\label{1stDed}
For $t>t_{0}>0$ and for a parameter $c$ satisfying $1<c\leq \frac{5}{4}$, the following estimate holds
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty+it} \log |\zeta_{K}(s)|\, ds \leq a_{1} + b_{1}N + g_{1}\log\left(|D_{K}|\left(\frac{t}{2\pi}\right)^{N}\right),
\end{equation*}
where
\begin{equation*}
a_{1}= \left(c-\tfrac{1}{2}\right)\left(\frac{81}{32t_{0}^{2}} + \log 3\right),
\end{equation*}
\begin{equation*}
b_{1}= \left(c-\tfrac{1}{2}\right)\left(\log\zeta(c) + \frac{81\left(c-\frac{1}{2}\right)}{128t_{0}^{2}}\right) + \int_{c}^{\infty}\log\zeta(\sigma)\, d\sigma,
\end{equation*}
and
\begin{equation*}
g_{1}=\frac{1}{4}\left(c-\tfrac{1}{2}\right)^{2}.
\end{equation*}
\end{lemma}
One writes
\begin{equation*}
\int_{\frac{1}{2}+it}^{\infty+it}\log|\zeta_{K}(s)|\, ds
\end{equation*}
as a sum of three integrals in the style of (\ref{213}). Thence, when $\sigma>1$ one can use the fact that
\begin{equation*}
\log|\zeta_{K}(s)| \geq N(\log\zeta(2\sigma) - \log\zeta(\sigma)),
\end{equation*}
to write
\begin{equation*}\label{start}
\int_{\frac{1}{2}+it}^{\infty+it}\log|\zeta_{K}(s)|\, ds \geq \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{\zeta_{K}(s)}{\zeta_{K}(s+d)}\bigg|\, ds +NI(d),
\end{equation*}
where $I(d)$ is the same function defined in (\ref{defineI}) in \S\ref{Proof of Theorem 1.2.2}. One aims at using the functional equation to estimate the integrand on the right-hand side. Using a result of Lang \cite[Ch.\ XIII]{Lang} one can write out the Weierstrass product viz.\
\begin{equation}\label{Lang}
s(s-1)\Lambda_{K} (s)= e^{a+bs}\prod_{\rho}\left(1-\frac{s}{\rho}\right)e^{s/\rho},
\end{equation}
where
\begin{equation*}
\Re b = -\sum_{\rho}\Re\frac{1}{\rho}.
\end{equation*}
This then gives
\begin{equation}\label{Dedspli}
\begin{split}
\int_{\frac{1}{2}+it}^{\infty+it} \log|\zeta_{K}(s)|\, ds &\geq d^{2}\log\left(\frac{\sqrt{|D_{K}|}}{\pi^{\frac{N}{2}}2^{r_{2}}}\right) + r_{1}\int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{\Gamma(\frac{s+d}{2})}{\Gamma(\frac{s}{2})}\bigg|\, ds \\
&+ r_{2}\int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it}\log\bigg|\frac{\Gamma(s+d)}{\Gamma(s)}\bigg|\, ds \\
&+ \int_{\frac{1}{2}+it}^{\frac{1}{2}+d+it} \sum_{\rho}\log\bigg|\frac{s-\rho}{s+d-\rho}\bigg| \, ds + NI(d)\\
&\geq d^{2}\log\left(\frac{\sqrt{|D_{K}|}}{\pi^{\frac{N}{2}}2^{r_{2}}}\right) + I_{1}+I_{2}+I_{3}+NI(d).
\end{split}
\end{equation}
Applying the second mean-value theorem for integrals gives
\begin{equation*}
I_{1}= \frac{r_{1}d^{2}}{2} \Re\frac{\Gamma'\left(\frac{it}{2} +\tau_{1}\right)}{\Gamma\left(\frac{it}{2} +\tau_{1}\right)}; \quad I_{2} = r_{2}d^{2} \Re\frac{\Gamma'\left(it +\tau_{2}\right)}{\Gamma\left(it +\tau_{2}\right)},
\end{equation*}
where $\frac{1}{4}<\tau_{1}<d+\frac{1}{4}$ and $\frac{1}{2} < \tau_{2} <2d+\frac{1}{2}$. Hence Lemma \ref{l8} gives
\begin{equation}\label{I1Ded}
I_{1} \geq \frac{r_{1}d^{2}}{2}\left(\log\frac{t}{2} - \frac{7}{2t_{0}^{2}}\right); \quad I_{2} \geq r_{2}d^{2}\left(\log t - \frac{11}{4t_{0}^{2}}\right).
\end{equation}
The integral $I_{3}$ is estimated using Lemma \ref{l7}; and logarithmic differentiation of (\ref{Lang}) gives
\begin{equation*}
\begin{split}
\sum_{\rho}\Re\left(\frac{1}{s-\rho}\right)&= \Re\left(\frac{1}{s} +\frac{1}{s-1}\right) + \frac{r_{1}}{2}\Re\frac{\Gamma'\left(\frac{s}{2}\right)}{\Gamma\left(\frac{s}{2}\right)}\\
&+ r_{2}\Re\frac{\Gamma'\left(s\right)}{\Gamma\left(s\right)} + \log\bigg|\frac{\sqrt{|D_{K}|}}{\pi^{\frac{N}{2}}2^{r_{2}}}\bigg| + \Re\left(\frac{\zeta'_{K}(s)}{\zeta_{K}(s)}\right).
\end{split}
\end{equation*}
Since
\begin{equation*}
\Re\frac{\zeta'_{K}(s)}{\zeta_{K}(s)} \leq -N\frac{\zeta'(\sigma)}{\zeta(\sigma)},
\end{equation*}
when $\sigma>1$, the expression for $I_{3}$ becomes
\begin{equation*}
\begin{split}
I_{3} \geq -d^{2}(\log 4)\bigg\{&\frac{2}{t_{0}^{2}} +\frac{r_{1}}{2}\left(\log\frac{t}{2} +\frac{2}{t_{0}^{2}}\right) + r_{2}\left(\log t + \frac{3}{2t_{0}^{2}}\right)\\
&+ \log\frac{\sqrt{|D_{K}|}}{\pi^{\frac{N}{2}}2^{r_{2}}} - N\frac{\zeta'(d+\tfrac{1}{2})}{\zeta(d+\tfrac{1}{2})}\bigg\}.
\end{split}
\end{equation*}
Thus the estimates for $I_{1}$ and $I_{2}$, which are contained in (\ref{I1Ded}) and the estimate of $I_{3}$ above prove, via (\ref{Dedspli})
\begin{lemma}\label{LastDed}
If $t>t_{0}>0$ and $d$ is a parameter that satisfies $\frac{1}{2}<d\leq 1$, then the following estimate holds
\begin{equation*}
-\int_{\frac{1}{2}+it}^{\infty +it} \log|\zeta_{K}(s)|\, ds \leq a_{2} + b_{2}N + g_{2}\log\left(|D_{K}|\left(\frac{t}{2\pi}\right)^{N}\right),
\end{equation*}
where
\begin{equation*}
a_{2}=\frac{4d^{2}\log 2}{t_{0}^{2}},
\end{equation*}
\begin{equation*}
b_{2}= d^{2}(\log 2)\left\{\log 2 - \frac{1}{2} - 2\frac{\zeta'\left(\tfrac{1}{2}+d\right)}{\zeta\left(\tfrac{1}{2}+d\right)} + \frac{8}{t_{0}^{2}}\right\} - I(d),
\end{equation*}
and
\begin{equation*}
g_{2}= \frac{d^{2}}{2}(\log 4 -1),
\end{equation*}
and $I(d)$ is defined by (\ref{defineI}) in \S\ref{Proof of Theorem 1.2.2}.
\end{lemma}
Lemmas \ref{lwtollis}, \ref{1stDed} and \ref{LastDed} prove at once
\begin{theorem}\label{L890}
If $t_{2}>t_{1}>t_{0}>0$ and the parameters $c$ and $d$ satisfy $1<c\leq \frac{5}{4}$ and $\frac{1}{2}<d\leq 1$, then the following estimate holds
\begin{equation*}\label{turded}
\bigg|\int_{t_{1}}^{t_{2}}S_{K}(t)\, dt\bigg| \leq a + bN + g\log\left(|D_{K}|\left(\frac{t_{2}}{2\pi}\right)^{N}\right),
\end{equation*}
where
\begin{equation*}
\pi a= \left(c-\tfrac{1}{2}\right)\left(\frac{81}{32t_{0}^{2}} + \log 3\right) + \frac{4d^{2}\log 2}{t_{0}^{2}},
\end{equation*}
\begin{equation*}
\begin{split}
\pi b= &\left(c-\tfrac{1}{2}\right)\left(\log\zeta(c) + \frac{81\left(c-\frac{1}{2}\right)}{128t_{0}^{2}}\right) + \int_{c}^{\infty}\log\zeta(\sigma)\, d\sigma \\
&+ d^{2}(\log 2)\left\{\log 2 - \frac{1}{2} - 2\frac{\zeta'\left(\tfrac{1}{2}+d\right)}{\zeta\left(\tfrac{1}{2}+d\right)} + \frac{8}{t_{0}^{2}}\right\} - I(d),
\end{split}
\end{equation*}
and
\begin{equation*}
\pi g= \frac{1}{4}\left(c-\tfrac{1}{2}\right)^{2} + \frac{d^{2}}{2}(\log 4 -1).
\end{equation*}
\end{theorem}
\subsection{Calculations}\label{Dedcalc}
Given the values $D_{K} = 1000, N=4, t_{0} = 40$ and $t_{2}=100$, the quantity to be minimised is
\begin{equation*}
F(a, b, g) = a + 4b + 18g,
\end{equation*}
with $a$, $b$ and $g$ defined in Theorem \ref{L890}. Proceeding with an optimisation programme similar to that in \S \ref{TuringCalc}, one finds that in fact the `trivial estimate', viz.\ the values $c=\frac{5}{4}$ and $d=1$ produce the minimum value of $F(a, b, g)$ and hence the minimum value of $B(D_{K}, t_{2}, N)$ as defined in (\ref{Turing}). The optimisation argument is only better than the trivial estimate when one of the parameters $D_{K}$, $t_{2}$ or $N$ is large, which will certainly occur in future calculations.
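The evaluation of $F$ can again be sketched in a few lines. Note that the script below assumes that $I(d)$ of (\ref{defineI}) equals the combination of zeta-integrals that appears, with opposite sign, in (\ref{finala}); this is an assumption made purely for the purpose of the illustration.
\begin{verbatim}
# Sketch only: evaluate F(a, b, g) = a + 4b + 18g with a, b, g taken from
# the formulas of the theorem above (t0 = 40), and check that the `trivial'
# choice c = 5/4, d = 1 is minimal.  The expression used for I(d) is an
# assumption: the zeta-integral combination appearing in (finala).
import mpmath as mp

mp.mp.dps = 15
t0 = 40

def J(lo, hi):
    return mp.quad(lambda s: mp.log(mp.zeta(s)), [lo, hi])

def I(d):
    return (0.5*J(2*d + 1, mp.inf) - J(d + 0.5, mp.inf)
            + 0.5*J(2*d + 1, 4*d + 1) - J(d + 0.5, 2*d + 0.5))

def F(c, d):
    zpz = mp.diff(mp.zeta, 0.5 + d) / mp.zeta(0.5 + d)
    a = ((c - 0.5)*(81/(32*t0**2) + mp.log(3))
         + 4*d**2*mp.log(2)/t0**2) / mp.pi
    b = ((c - 0.5)*(mp.log(mp.zeta(c)) + 81*(c - 0.5)/(128*t0**2))
         + J(c, mp.inf)
         + d**2*mp.log(2)*(mp.log(2) - 0.5 - 2*zpz + 8/t0**2)
         - I(d)) / mp.pi
    g = (0.25*(c - 0.5)**2 + 0.5*d**2*(mp.log(4) - 1)) / mp.pi
    return a + 4*b + 18*g

grid = [(F(c, d), c, d)
        for c in mp.arange(1.05, 1.26, 0.05)
        for d in mp.arange(0.55, 1.01, 0.05)]
print(min(grid, key=lambda r: r[0]))   # attained at c = 1.25, d = 1.0
\end{verbatim}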
\section*{Acknowledgements}
My sincere thanks to Richard Brent, Herman te Riele and Sebastian Wedeniwski for their advice on computations using Turing’s method; and to Andrew Booker who recommended the writing of \S\S \ref{TMDLF} and \ref{TMDZF}. I am grateful for the kind suggestions of the referee. Lastly, I wish to thank my supervisor Roger Heath-Brown for his continual guidance and support.
\section{Introduction}
Toroidal order is a peculiar magnetic state, which accompanies spontaneous breaking of both time-reversal and spatial-inversion symmetries.
The toroidal moment is defined by the sum of outer products of magnetic moments and polarization vectors;
its simplest form is given by spin ``loops" around the inversion centers.
Recently, toroidal order has gained interest because it gives rise to cross correlations between electric and magnetic responses, such as magnetoelectric effects, diamagnetic anomaly, and nonreciprocal directional dichroism~\cite{gorbatsevich1994toroidal,popov1998magnetoelectric,schmid2001ferrotoroidics,EdererPhysRevB.76.214404,Spaldin:0953-8984-20-43-434203,kopaev2009toroidal}.
Such experimental studies, however, are thus far restricted to insulating materials~\cite{FolenPhysRevLett.6.607,popov1999magnetic,arima2005resonant,MiyaharaJPSJ.81.023712,van2007observation,ToledanoPhysRevB.84.094421}.
Recent theoretical studies have cast new light on toroidal ordering from two viewpoints~\cite{Yanase:JPSJ.83.014703,Hayami_PhysRevB.90.024432}.
One is the possibility of the toroidal order in metallic systems despite the absence of macroscopic polarization.
The other is the effect of a site-dependent antisymmetric spin-orbit coupling, which appears in lattices that preserve the spatial-inversion (parity) symmetry globally but break it locally at each magnetic site.
The representative lattice structures with such local parity breaking are zigzag chain, honeycomb lattice, and diamond lattice.
In these systems, peculiar transport and magnetoelectric effects were predicted:
for instance, a new type of magnetotransport effect, that is, the intrinsic off-diagonal response without an external magnetic field~\cite{Hayami_PhysRevB.90.024432}.
In order to further stimulate experiments, it is desired to systematically study when and how such toroidal ordered metals are realized from the microscopic point of view.
In the present study, we discuss the possibility of a metallic toroidal order in $f$ electron systems.
We here focus on a uranium compound UNi$_4$B~\cite{Mentink1994,Oyamada2007,Haga,Oyamada2008-2} as a candidate material showing the toroidal nature.
UNi$_4$B has been studied for its peculiar magnetic ordering, that is, a partial disorder, which is given by a periodic array of magnetically-ordered and nonmagnetic sites~\cite{Mekata_JPSJ.42.76}.
The compound shows a second-order phase transition at 20~K to the partially disordered state.
The magnetic unit cell consists of nine U sites:
six out of the nine U sites develop magnetic moments and form a coplanar vortex-lattice-type order composed of hexagonal loops of the magnetic moments, while the rest three, each of which is surrounded by the magnetic U sites, remain nonmagnetic.
The peculiar magnetic state was theoretically studied on the basis of a classical pseudospin model by the mean-field approximation~\cite{LacroixUNi4B}.
However, the model was an effective localized spin model obtained by integrating out the interplay between conduction and localized electrons.
Hence, it is not appropriate for describing the nature specific to metallic systems that we are interested in here.
Moreover, despite the vortex-lattice-type order that consists of spin loops, the viewpoint of toroidal ordering has been lacking in previous studies.
In particular, the role of the antisymmetric spin-orbit coupling has been fully neglected.
In the following, we revisit this partial disorder with paying attention to the local parity breaking and toroidal ordering.
\section{Model}
\label{sec:Model}
In order to investigate the toroidal ordering in $f$ electron systems, we
begin with the periodic Anderson model, which is a standard microscopic model describing the hybridization between conduction and localized electrons.
We here consider an extension of the model to incorporate the effect of the antisymmetric spin-orbit coupling originating from the lattice symmetry.
Specifically, we consider a layered triangular lattice,
and take into account an antisymmetric hybridization between conduction and localized electrons, with keeping UNi$_4$B in mind.
The Hamiltonian for the extended periodic Anderson model is given by
\begin{align}
\label{eq:Ham_PAMSO_sec2}
\mathcal{H}
= &-t \sum_{\langle i, j \rangle,\sigma}
(c^{\dagger}_{i \sigma} c_{j \sigma} + {\rm H.c.} )- V \sum_{i ,\sigma}
( c^{\dagger}_{i \sigma}f_{i \sigma}+{\mathrm{H.c.}} )
+ E_0 \sum_{i, \sigma} n_{i \sigma}^f \nonumber \\
&
+ U \sum_i n_{i \uparrow}^f n_{i \downarrow}^f
+ \sum_i (\bm{s}^{cf}_i \times \bm{D}^{cf}_i)^z,
\end{align}
where $c^{\dagger}_{i \sigma}$($c_{i \sigma}$) and $f^{\dagger}_{i \sigma}$($f_{i \sigma}$)
are the creation (annihilation) operators of conduction and localized
electrons with spin $\sigma$ at site $i$, and $n_{i\sigma}^{f} = f_{i\sigma}^\dagger f_{i\sigma}$.
The first four terms in Eq.~(\ref{eq:Ham_PAMSO_sec2}) represent the standard periodic Anderson model: the kinetic energy of conduction electrons, the on-site hybridization between conduction and localized
electrons, the atomic energy of $f$ electrons, and the on-site Coulomb interaction for $f$ electrons.
The sum of $\langle i, j \rangle$ in the first term is taken over the nearest-neighbor sites on the layered triangular lattice.
For simplicity, we assume the same transfer integral $t$ for intra- and inter-layer bonds; the results are not qualitatively altered for small anisotropy.
Hereafter, we set $t$ as an energy unit and both the lattice constants in the intra- and inter-layer directions as a length unit.
The last term is the antisymmetric hybridization term between conduction and localized electrons; $\bm{s}_i^{cf} = \sum_{\sigma \sigma'} ( c^{\dagger}_{i \sigma} \bm{\sigma}_{\sigma \sigma'} f_{i\sigma'} + {\rm H.c.})$.
Here, we assume that the vector $\bm{D}^{cf}_i$ has site (sublattice) dependence, which characterizes the local parity breaking at each site, and
originates from the odd-parity crystalline electric field discussed below.
In the following calculations, we fix the electron density at half filling, $n=\sum_{i \sigma} \langle c_{i \sigma}^{\dagger}c_{i\sigma} + f_{i \sigma}^{\dagger}f_{i \sigma} \rangle/N=2$, where $N$ is the total number of sites.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=1.0 \hsize]{fig1.eps}
\caption{
\label{Fig:triangular_localinversion}
(a) Schematic picture of a triangular plane in the layered structure for the model in Eq.~(\ref{eq:Ham_PAMSO_sec2}).
The green arrows indicate the directions of $\bm{D}^{cf}_{l}$ in Eq.~(\ref{eq:D-UNi4B}).
The dashed diamond indicates the magnetic unit cell, and the numbers from $0$ to $8$ represent the sublattices.
(b) Schematic picture of a model for UNi$_4$B, including
two different types of nonmagnetic ions denoted by open blue and red circles at the centers of U triangular plaquettes.
The configuration of the nonmagnetic ions is a simplified model for the experimental lattice structure in Ref.~\cite{Haga}.
Filled blue circles and green triangles represent the crystallographically different lattice sites on the triangular plane, distinguished by the surrounding nonmagnetic ions.
}
\end{center}
\end{figure}
In the present model, we assume the direction of $\bm{D}^{cf}_i$ in a nine-sublattice form with the specific directions in each triangular plane ($xy$ plane).
The directions are assumed to be common among the triangular planes.
The nine-sublattice form is represented by
\begin{equation}
\label{eq:D-UNi4B}
\bm{D}^{cf}_l =
\begin{cases}
(\cos \frac{\pi}{3}l, \sin \frac{\pi}{3}l, 0) D\sin k_z & (l=0,1,\dots, 5), \\
0 & (l=6,7,8),
\end{cases}
\end{equation}
where $l$ is the sublattice index [see Fig.~\ref{Fig:triangular_localinversion}(a)], and $D$ is a parameter to control the magnitude of the antisymmetric hybridization.
The possible origins for $\bm{D}_i^{cf}$ in the nine-sublattice form are discussed in relation with UNi$_4$B in Sec.~\ref{sec:Discussion}.
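For concreteness, the nine vectors of Eq.~(\ref{eq:D-UNi4B}) can be tabulated as in the following short sketch (illustrative only; the sample $k_z$ is arbitrary):
\begin{verbatim}
# Sketch only: the sublattice-dependent vectors D^{cf}_l of Eq. (2),
# evaluated at a sample k_z (D = 3 as in the phase diagram below).
import numpy as np

def D_cf(l, kz, D=3.0):
    if l >= 6:                       # sublattices 6, 7, 8: no local asymmetry
        return np.zeros(3)
    phi = np.pi * l / 3.0            # in-plane direction, rotating by 60 deg
    return D * np.sin(kz) * np.array([np.cos(phi), np.sin(phi), 0.0])

for l in range(9):
    print(l, np.round(D_cf(l, kz=np.pi/2), 3))
\end{verbatim}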
\section{Result}
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=1.0 \hsize]{fig2.eps}
\caption{
\label{Fig:souzu_UNi4B}
Ground-state phase diagram of the model in Eq.~(\ref{eq:Ham_PAMSO_sec2}) with Eq.~(\ref{eq:D-UNi4B}) on a layered triangular lattice obtained by the mean-field calculations.
The data are taken at half filling for $D=3$ and $E_0 = -6$.
Schematic pictures of the ordering patterns are shown in the right panel.
The arrows and the size of circles represent magnetic moments and local electron densities of $f$ electrons, respectively.
The phases I, III, and IV are partially disordered states, while the phase V is a noncollinear antiferromagnetic phase.
``Other PD" represents a complicated magnetically ordered state with partial disorder.
See the text for details.
}
\end{center}
\end{figure}
We study the ground state of the model in Eq.~(\ref{eq:Ham_PAMSO_sec2}) with Eq.~(\ref{eq:D-UNi4B}) by the standard Hartree-Fock approximation to the Coulomb $U$ term.
We assume the same nine-sublattice form for the mean fields as that for $\bm{D}^{cf}_i$, but allow an arbitrary magnetic and charge pattern within the nine-site unit cell, similar to the method used in Ref.~\cite{Hayami_PhysRevB.90.024432}.
We also assume that the magnetic moments are within the triangular planes.
We calculate the mean fields by taking the sum over $12 \times 12 \times 36$ grid points in the folded Brillouin zone.
Figure~\ref{Fig:souzu_UNi4B} shows the ground-state phase diagram at half filling obtained by the mean-field calculations.
Schematic pictures of magnetic and charge states of $f$ electrons are shown in the right panel of Fig.~\ref{Fig:souzu_UNi4B}.
The result shows that the system exhibits a variety of magnetic orders while changing the Coulomb interaction $U$ and the hybridization $V$.
The most interesting phase is the partially disordered phase with a vortex-lattice-type magnetic structure, which is denoted by the phase I in Fig.~\ref{Fig:souzu_UNi4B}.
This magnetic pattern coincides with that observed in UNi$_4$B~\cite{Mentink1994}.
This phase I is stabilized in the region for intermediate $U$ and $V$.
The system is metallic and shows charge disproportionation: the local charge density is higher at the nonmagnetic sites than the magnetic sites, as shown in the schematic picture in Fig.~\ref{Fig:souzu_UNi4B}.
The tendency of charge disproportionation in the partially disordered state is the same as that in the partially disordered state in the periodic Anderson model without the antisymmetric hybridization, the last term in Eq.~(\ref{eq:Ham_PAMSO_sec2})~\cite{Hayami2011,Hayami2012} as well as in the Kondo lattice model~\cite{Motome2010}.
This partially disordered state accommodates a toroidal order: the magnetic moments are perpendicular to $\bm{D}^{cf}_i$ at each site with forming spin loops around the nonmagnetic sites (sublattice 7 in Fig.~\ref{Fig:triangular_localinversion}).
Indeed, this metallic phase exhibits a characteristic modulation of the band structure with a band bottom shift, anisotropic magnetotransport, and magnetoelectric effects, similar to the results in the toroidal ordered state discussed in Ref.~\cite{Hayami_PhysRevB.90.024432}.
Such interesting aspects of the partially disordered state, however, have not been studied in experiments thus far.
It is highly desired to examine this peculiar state experimentally from the viewpoint of toroidal ordering.
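The toroidal character of this spin texture can be made explicit by evaluating $\bm{T}=\sum_{i}\bm{r}_{i}\times\bm{S}_{i}$ for a single hexagonal loop; the following sketch, with schematic unit positions and tangential unit moments, yields a finite component along $z$:
\begin{verbatim}
# Sketch only: net toroidal moment of one hexagonal spin loop of phase I,
# with schematic positions r_l and tangential unit moments S_l.
import numpy as np

l = np.arange(6)
r = np.stack([np.cos(np.pi*l/3), np.sin(np.pi*l/3), np.zeros(6)], axis=1)
S = np.stack([-np.sin(np.pi*l/3), np.cos(np.pi*l/3), np.zeros(6)], axis=1)
print(np.cross(r, S).sum(axis=0))    # -> [0. 0. 6.]: T is finite along z
\end{verbatim}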
Let us mention other phases appearing in Fig.~\ref{Fig:souzu_UNi4B}.
The phase II in the small $U$ region is the nonmagnetic phase.
This is a band insulator, whose gap is opened by the hybridization $V$.
The phase III and IV are the partially disordered phases, whose magnetic structures
are similar to each other but different from the phase I, as shown in Fig.~\ref{Fig:souzu_UNi4B}.
The phase III is metallic with small charge disproportionation similar to that in the phase I, whereas the phase IV is insulating without any charge modulation.
The phase V in the large $U$ region is the metallic antiferromagnetic phase, in which all the sites retain nonzero magnetic moments forming a noncollinear 120$^\circ$ structure, as shown in Fig.~\ref{Fig:souzu_UNi4B}.
\section{Discussion}
\label{sec:Discussion}
In the previous section, we found the partially disordered phase showing the toroidal order in the model in Eq.~(\ref{eq:Ham_PAMSO_sec2}) with Eq.~(\ref{eq:D-UNi4B}).
Obviously, the form of $\bm{D}^{cf}_i$ plays an important role in stabilizing this peculiar state.
Let us here discuss the reason why we assume $\bm{D}^{cf}_i$ in the nine-sublattice form in Eq.~(\ref{eq:D-UNi4B}).
In the assumption, we refer to the recent crystal structure data for UNi$_4$B obtained by the neutron diffraction experiment~\cite{Haga}.
The results indicate that the nonmagnetic ions, Ni and B, are neither uniformly nor randomly distributed in the U triangular lattice layers, but comprise a periodic structure.
There, 1/3 of U sites are surrounded by the same ions (namely, all B or all Ni), whereas the rest 2/3 are surrounded by a mixture of B and Ni (two B and four Ni or four B and two Ni).
This implies that the 1/3 U sites feel a rather symmetric crystalline electric field from the neighboring nonmagnetic ions, whereas the rest 2/3 are subject to a rather asymmetric one.
Taking account of the characteristic of the local parity breaking, we mimic this experimental situation by the nine-sublattice form of $\bm{D}^{cf}_i$.
Although the actual structure is more complicated with a larger unit cell~\cite{Haga},
we simplify the situation by assuming that two kinds of nonmagnetic ions, corresponding to B and Ni, are aligned in an eighteen-site unit cell, as denoted by open blue and red circles in Fig.~\ref{Fig:triangular_localinversion}(b).
This assumption leads to the nine-site sublattice of the triangular-lattice sites corresponding to U by differentiating two crystallographically different sites, as shown in Fig.~\ref{Fig:triangular_localinversion}(b): the 1/3 sites denoted by the filled blue circles feel a symmetric crystalline electric field, while the rest 2/3 denoted by the green triangles are subject to the asymmetric field in the direction connecting the site with the sublattice site $l=7$.
The asymmetric crystalline electric field results in the local parity breaking at the sublattice sites $l=0, 1, \dots,$ and $5$.
The principal direction of the asymmetry determines the direction of $\bm{D}^{cf}_i$ as Eq.~(\ref{eq:D-UNi4B}).
Besides the structural origin from the nonmagnetic ions, the site-dependent anisotropic $\bm{D}^{cf}_i$ might also originate from the magnetostriction.
In general, the bond length depends on the relative angle between
the magnetic moments at both ends as well as on their magnitudes.
Hence, the U-U distances can be modulated according to the peculiar spin configuration in the partially disordered state below the transition temperature:
by the symmetry argument, the magnetostriction arising from the vortex-lattice-type magnetic pattern leads to the same form of $\bm{D}^{cf}_i$ as in Eq.~(\ref{eq:D-UNi4B}).
We note that this contribution appears only below the magnetic transition temperature.
Although we assumed the model-embedded (temperature-independent) $\bm{D}^{cf}_i$ in the present analysis, a rather different situation may occur if $\bm{D}^{cf}_i$ becomes nonzero only by the magnetostriction, i.e., only once the partial disorder sets in.
In this situation, the magnetoelectric effects appear only below the critical temperature.
\section{Summary}
We have examined the possibility of active toroidal moments in the partially disordered state, bearing UNi$_4$B in mind.
Analyzing an extended periodic Anderson model including the antisymmetric hybridization between conduction and localized electrons, we have shown that the model exhibits a partial disorder with vortex-lattice-type magnetic ordering, similar to that in UNi$_4$B.
Our results imply that the partially disordered state in UNi$_4$B may host a toroidal order.
Hence, we anticipate that UNi$_4$B shows the interesting band deformation, magnetotransport, magnetoelectric effects, and intrinsic off-diagonal magnetoelectric response without an external magnetic field, as pointed out in Ref.~\cite{Hayami_PhysRevB.90.024432}.
On the other hand, further analyses are desirable in order to test the validity of our assumption on the form of the antisymmetric hybridization.
We have discussed two scenarios: the surrounding nonmagnetic sites and the magnetostriction.
It might be possible to identify the origin of the antisymmetric hybridization experimentally, for instance, by the magnetoelectric response.
In the former scenario, the staggered (toroidal) response is nonzero above the critical temperature and shows a peak near the critical temperature~\cite{Hayami_PhysRevB.90.024432}, while it becomes nonzero only below the critical temperature in the latter~\cite{Hayami_PhysRevB.90.081115}.
\ack
The authors thank Y. Haga and H. Amitsuka for fruitful discussions.
S.H. is supported by Grant-in-Aid for JSPS Fellow.
This work was supported by Grants-in-Aid for Scientific Research (No.~24340076), the Strategic Programs for Innovative Research (SPIRE), MEXT, and the Computational Materials Science Initiative (CMSI), Japan.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}\label{intro}
Hardy martingales developed alongside Banach spaces of analytic functions and
played an important role in establishing their isomorphic invariants.
For instance those martingales were employed in the construction of subspaces in $L^1/H^1$
isomorphic to $L^1.$ An integrable Hardy martingale $F=(F_k)$ satisfies
the $L^1$ estimate
$$ \| \sup_k |F_k| \| _1 \le e \sup_k \|F_k \| _1 , $$
and it may be decomposed into the sum of Hardy martingales as $ F = G+B$ such that
$$ \| (\sum \bE_{k-1} |\Delta _k G|^2)^{1/2} \|_1 + \sum \|\Delta B_k\|_1 \le C\|F\|_1. $$
See Garling, Bourgain, Mueller. Equally peculiar for Hardy martingales
are the transform estimates
$$
\| (\sum \bE_{k-1} |\Delta _k G|^2)^{1/2} \|_1 \le C
\| (\sum \bE_{k-1} |\Im w_{k-1}\Delta _k G|^2)^{1/2} \|_1 ,$$
for every adapted sequence $(w_k) $ satisfying $ |w_k| \ge 1/C . $
A proof of Bourgain's theorem that $L^1$ embeds into $L^1/H^1$ may be obtained
in the following way:
\begin{enumerate}
\item Use as starting point the estimates of the Garnett Jones Theorem.
\item Prove stability under dyadic perturbation for the Davis and Garsia Inequalities.
\item Prove stability under dyadic perturbation of the martingale transform estimates.
\end{enumerate}
We determined the extent to which DGI are stable under dyadic perturbation, and we showed how
the above strategy actually gives an isomorphism from $L^1$ into a subspace of $L^1/H^1. $
In the present paper we turn to the martingale transform estimates and verify that they are
indeed stable under dyadic perturbations.
\section{Preliminaries}
\paragraph{Martingales and Transforms on $\bT^\bN$. }
Let $\bT= \{ e^{i\theta} : \theta \in
[0, 2\pi [ \} $ be the torus
equipped with the normalized angular measure.
Let $\bT^\bN $
be its countable product
equipped with the product Haar measure $\bP .$ We let
$\bE $ denote expectation with respect to $\bP .$
Fix $k \in \bN $; the
cylinder sets
$ \{(A_1, \dots, A_k , \bT^\bN )\},$
where $A_i,\, i \le k $ are measurable subsets
of $\bT$, form the $\s-$algebra $\cF_k $.
Thus we obtain a filtered probability space $(\bT^\bN, (\cF_k) , \bP) $.
We let $\bE_{k}$ denote the conditional expectation with
respect to the $\s-$algebra $\cF_k .$
Let $G = (G_k) $ be an $L^1(\bT^\bN)-$bounded martingale.
Conditioned on $\cF_{k-1}$ the martingale difference $\Delta G_k =
G_k- G_{k-1}$ defines an element in $L_0^1(\bT) ,$ the Lebesgue space of integrable
functions with vanishing mean.
We define the previsible norm as
\begin{equation}\label{15augmt2}
\| G\|_\cP =\| ( \sum_{k = 1 }^\infty \bE_{k-1} |\Delta G_k |^2 )^{1/2} \|_{L^1},
\end{equation}
and refer to $( \sum_{k = 1 }^\infty \bE_{k-1} |\Delta G_k |^2 )^{1/2}$
as the conditional square function of $G.$
For any bounded and
adapted sequence $W = ( w_k ) $ we define
the martingale transform operator $T_W $ by
\begin{equation}\label{18-10-19-4}
T_W (G ) = \Im \left [ \sum w_{k-1} \Delta_k G \right ] .
\end{equation}
Garsia \cite{sia} is our reference for martingale inequalities.
\paragraph{Sine-Cosine decomposition.}
Let $G = (G_k)$
be a martingale on $\bT^\bN$ with respect to the canonical product filtration $(\cF_k) $.
Let $U = (U_k)$
be the martingale defined by averaging
\begin{equation}\label{18-10-16-2}
U _k (x,y) =
\frac12 \left[ G_k(x,y)+ G _k(x,\overline{y})\right],
\end{equation}
where $x \in \bT^{k-1} , \, y \in \bT . $
The martingale $U $
is called the cosine part of $G$.
Putting $V_k = G_k - U_k $ we obtain the corresponding
sine-martingale $ V = ( V_k) $, and the sine-cosine decomposition of $G$ defined by
$$ G = U +V . $$
By construction we have
$\Delta V_k ( x , y ) = -\Delta V_k ( x , \overline{y} ), $
and $ U_k ( x , y ) = U_k ( x , \overline{y} ), $ for any $k\in \bN .$
\paragraph{The Hilbert transform.}
The Hilbert transform on $L^2 ( \bT )$ is defined as
Fourier multiplier by
$$ H( e^{in\theta}) = -i {\rm sign} (n) e^{in\theta} . $$
Let $1 \le p \le \infty. $ The Hardy space $H^p_0 (\bT ) \subset L^p_0 (\bT ) $
consists of those $p-$integrable
functions of vanishing mean, for which the harmonic extension to the
unit disk is analytic. See \cite{jg81}.
For $ h \in H^2_0( \bT )$ let $ y = \Im h . $
The Hilbert transform recovers $h $ from its imaginary part $y$:
we have $ h = -Hy +iy $ and
$ \| h \| _2 = \sqrt{2} \| y \|_2 . $ For
$ w \in \bC ,$ $ | w| = 1 $ we have therefore
$$ \| h \| _2 = \sqrt{2} \| y \| _2= \sqrt{2}\| \Im (w\cdot h)\|_2 . $$
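This recovery is easily checked numerically; the sketch below (illustrative only) implements $H$ as the Fourier multiplier above on an equispaced grid:
\begin{verbatim}
# Sketch only: check h = -Hy + iy for a sample h in H^2_0, with H realised
# as the Fourier multiplier -i sign(n).
import numpy as np

N = 256
t = 2*np.pi*np.arange(N)/N
h = 2*np.exp(1j*t) + (1 - 2j)*np.exp(3j*t)    # analytic, vanishing mean
y = h.imag
n = np.fft.fftfreq(N, d=1.0/N)                # integer frequencies
Hy = np.fft.ifft(-1j*np.sign(n)*np.fft.fft(y))
print(np.allclose(-Hy + 1j*y, h))             # True
\end{verbatim}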
\section{Martingale estimates}
\paragraph{Hardy martingales. }
An $L^1(\bT^\bN ) $ bounded $(\cF_k)$ martingale $G = (G_k) $
is called a Hardy martingale if conditioned on $\cF_{k-1}$
the martingale difference
$ \Delta G _k $
defines an element in $H^1_0 (\bT ). $ See \cite{g1}, \cite{gar2}, \cite{pfxm12, deco-2014, deco-2016}.
Since the Hilbert transform, applied to functions with vanishing mean, preserves the $ L^2 $ norm, we have
$\bE_{k-1} | \Delta U _k |^2
= \bE_{k-1} |\Im w_{k-1} \Delta G_k|^2 , $
for each adapted sequence $W = ( w_k ) $ with $|w_k | =1 ,$ and consequently,
\begin{equation}\label{24o1210}
\| ( \sum \bE_{k-1} | \Delta U_k |^2 )^{1/2} \|_1 =
\| ( \sum \bE_{k-1} |\Im w_{k-1} \Delta G_k|^2 )^{1/2} \|_1 .
\end{equation}
We restate \eqref{24o1210} as
$
\| U \| _{\cP } = \| T_ W ( G ) \|_\cP ,$
where
$
T_W (G ) = \Im \left [ \sum w_{k-1} \Delta_k (G )\right ] .
$
In this paper we show that the lower $\cP$ norm estimate $
\| U \| _{\cP } \le \| T_ W ( G ) \|_\cP $ is stable under dyadic perturbation.
\paragraph{Dyadic martingales. }
The dyadic sigma-algebra on $ \bT^\bN $ is defined with
Rademacher functions. For $ x = ( x _k ) \in \bT^\bN $
define $\cos_k ( x ) = \Re x_k $ and
$$
\s_ k ( x ) = {\rm sign} (\cos_k ( x )).
$$
We let $\cD$ be the sigma-algebra
generated by
$ \{\s_k , k \in \bN \} $ and call it the dyadic sigma-algebra
on $ \bT^\bN .$
Let $ G \in L^1 ( \bT^\bN ) $
with sine cosine decomposition $G = U + V $,
then $\bE ( U_k | \cD ) =\bE ( G _k | \cD ) $
for $k\in \bN , $ and hence
$$ U - \bE ( U | \cD ) + V = G - \bE ( G | \cD ). $$
Our principal result
asserts
stability for \eqref{24o1210} under dyadic perturbations as follows:
\begin{theor}\label{11aug60b}
Let $G = (G_k)_{k = 1 }^n $ be a Hardy martingale and let $U = (U_k)_{k = 1 }^n $
be its cosine martingale given by \eqref{18-10-16-2}.
Then, for any
adapted sequence $W = ( w_k ) $ satisfying $|w_k | =1 ,$ we have
\begin{equation}\label{11aug95aa}
\|U - \bE ( U |\cD ) \|_{\cP} \le
C \| T_ W ( G - \bE ( G | \cD )) \|_\cP^{1/2} \|G\|_{\cP}^{1/2} ,\end{equation}
where $T_W $ is the martingale transform operator defined by \eqref{18-10-19-4}.
\end{theor}
Define $\s \in L^2 (\bT )$ by
$ \s (\zeta) = {\rm sign} \Re \zeta . $ Note that $\s (\zeta) = \s(\overline{\zeta}) , $
for all $\zeta \in \bT . $
For $ f , g \in L^2(\bT) $ we put $ \la f , g \ra =\int_\bT f \overline {g} dm .$
\begin{lemma}\label{10sep121}
Let $h \in H_0^2 ( \bT) ,$ and
$ u(z)
= (h(z) +h( \overline{z} ))/2 . $
Then for $ w,b \in \bC , $ with $ |w| = 1 , $
$$
\Im ^2(w\cdot(\la u, \s \ra - b)) + \Re^2 (w\cdot\la u, \s \ra) +
\int_{\bT} |u - \la u, \s \ra \s |^2 d m
=\int_{\bT} \Im ^2(w\cdot( h - b \s )) d m
$$
\end{lemma}
\goodbreak\noindent{\sc Proof. }\nobreak
First put
$ w_0 = 1_\bT , $ $ w_1 = \s, $
and choose any orthonormal system $\{w_k: k \ge 2 \}$ in
$L_G^2 (\bT ) $ so that $\{w_k: k \ge 0\}$ is an orthonormal
basis for $L_G^2 (\bT ). $ Then
$\{w_k, H w_k : k \ge 0\},$
where $H$ denotes the Hilbert transform, is an orthonormal basis in $ L^2 ( \bT )$.
Moreover in the Hardy space $ H^2 ( \bT )$ the analytic system
$$
\{(w_k +i H w_k) : k \ge 0\}$$
is an orthogonal
basis with $\|w_k +i H w_k \|_2 = \sqrt{2} , \, k \ge 1 .$
Fix $ h \in H_0^2 ( \bT) $ and $ w,b \in \bC , $ with $ |w| =
1 .$ Clearly by replacing $ h $ by $w h $ and $ b $ by $ w b $
it suffices to prove the lemma with $w = 1 . $
Since $ \int u = 0 $ we have
that $$ u = \sum_{n = 1}^\infty c_n w_n .$$
We apply the Hilbert transform and rearrange terms to get
\begin{equation}\label{10sep122}
h - b\s= (c_1 - b)\s + ic_1 H\s + \sum_{n = 2}^\infty c_n (w_n +i H
w_n) .
\end{equation}
Then, taking imaginary parts gives
\begin{equation}\label{10sep123}
\Im (h-b\s) =
\Im (c_1 - b)\s + \Re c_1 H \s + \sum_{n = 2}^\infty \Im c_n w_n +
\Re c_n Hw_n .
\end{equation}
By orthogonality the identity \eqref{10sep123} yields
\begin{equation}\label{10sep124}
\int_{\bT} \Im ^2( h - b\s ) d m
= \Im ^2(c_1 - b) + \Re^2 c_1 + \sum_{n = 2}^\infty |c_n|^2 .
\end{equation}
On the other hand, since $ \int u = 0 $, $c_1 =\la u, \s \ra, $ and $ w_1 = \s $
we get
\begin{equation}\label{10sep125}
\int_{\bT} |u - \la u, \s \ra \s |^2 d m
= \sum_{n = 2}^\infty |c_n|^2 .
\end{equation}
Comparing the equations \eqref{10sep124} and \eqref{10sep125}
completes the proof.
\endproof
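The identity may also be verified numerically; in the sketch below (illustrative only) the integrals against $dm$ are approximated by averages over an equispaced grid:
\begin{verbatim}
# Sketch only: numerical check of the identity of the preceding lemma,
# with h a random analytic polynomial and sigma(e^{it}) = sign(cos t).
import numpy as np

rng = np.random.default_rng(1)
N = 1 << 16
t = 2*np.pi*np.arange(N)/N
c = rng.normal(size=5) + 1j*rng.normal(size=5)
h = sum(c[k]*np.exp(1j*(k + 1)*t) for k in range(5))
u = sum(c[k]*np.cos((k + 1)*t) for k in range(5))   # (h(z)+h(zbar))/2
sgm = np.sign(np.cos(t))
m = lambda f: f.mean()                # integration against dm
us = m(u*sgm)                         # <u, sigma>
w, b = np.exp(0.7j), 0.3 - 1.1j
lhs = ((w*(us - b)).imag**2 + (w*us).real**2
       + m(np.abs(u - us*sgm)**2))
rhs = m((w*(h - b*sgm)).imag**2)
print(abs(lhs - rhs))                 # small: discretisation error only
\end{verbatim}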
We will use some arithmetic below, which we isolate first.
\begin{lemma}\label{10sep126}
Let $ \mu , b \in \bC $ and
\begin{equation}\label{11aug65d}
|\mu| +\dfrac{|\mu-b|^2}{|\mu|+|b|} = a.
\end{equation}
Then for any $w \in \bT ,$
\begin{equation}\label{11aug60e}
( a - |b|)^2 \le 4( \Im^2 (w\cdot(\mu-b))
+ \Re^2 (w\cdot \mu) ) .
\end{equation}
and
\begin{equation}\label{11aug60d}
|\mu-b |^2 \le 2(a^2 - |\mu|^2).
\end{equation}
\end{lemma}
\goodbreak\noindent{\sc Proof. }\nobreak
By rotation invariance it suffices to prove
\eqref{11aug60e} for $w = 1 .$
Let $ \mu = m_1 + i m_2 $ and $ b = b_1 + i b_2 . $
By definition \eqref{11aug65d}, we have
$$
a - |b| = \dfrac{ |\mu| ^2 - |b|^2 + | \mu - b |^2}{| \mu| + |b |}.
$$
Expand and regroup the numerator
\begin{equation}\label{17sep121}
|\mu| ^2 - |b|^2 + | \mu - b |^2
= 2m_1 ( m_1 - b_1 ) + 2 m_2 ( m_2 - b_2 ) .
\end{equation}
By the Cauchy Schwarz inequality, the right hand side \eqref{17sep121}
is bounded by
$$
2 ( m_1^2 + ( m_2 - b_2 )^2 )^{1/2}( m_2^2 + ( m_1 -b_1)^2)^{1/2} .
$$
Note that $m_1 = \Re \mu $ and $m_2 - b_2 = \Im ( \mu - b ) . $
It remains to observe that
$$
( m_2^2 + ( m_1 -b_1)^2)^{1/2} \le |\mu | + |b| ,$$
or equivalently
$$
m_1^2+ m_2^2 - 2 m_1 b_1 + b_1^2 \le |\mu |^2 +2 |\mu| |b|+
|b|^2, $$
which is obviously true.
Next we turn to verifying \eqref{11aug60d}. We have
$a^2 -|\mu|^2 = (a+|\mu|)(a -|\mu|)$ hence
\begin{equation}\label{17sep124}
a^2 -|\mu|^2 =
\left[ 2|\mu| +\dfrac{|\mu-b|^2}{|\mu|+|b|}
\right]\dfrac{|\mu-b|^2}{|\mu|+|b|} .\end{equation}
In view of \eqref{17sep124} we get \eqref{11aug60d} by showing that
\begin{equation}\label{17sep123}
2|\mu|^2 +2|\mu||b| + |\mu-b|^2 \ge \frac12 ( |\mu|+|b| )^2 .
\end{equation}
The left hand side of \eqref{17sep123} is larger than
$|\mu|^2+|b|^2$, while the right hand side of \eqref{17sep123} is
smaller than $|\mu|^2+|b|^2 .$
\endproof
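Both inequalities are easily confirmed by random testing, as in the following sketch (illustrative only):
\begin{verbatim}
# Sketch only: random check of the two inequalities of the lemma.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    x = rng.normal(size=5)
    mu, b = x[0] + 1j*x[1], x[2] + 1j*x[3]
    w = np.exp(1j*np.pi*x[4])                  # |w| = 1
    a = abs(mu) + abs(mu - b)**2/(abs(mu) + abs(b))
    assert (a - abs(b))**2 <= 4*((w*(mu - b)).imag**2
                                 + (w*mu).real**2) + 1e-9
    assert abs(mu - b)**2 <= 2*(a**2 - abs(mu)**2) + 1e-9
print('ok')
\end{verbatim}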
We merge the inequalities of Lemma \ref{10sep126}
with the identity in Lemma \ref{10sep121}.
\begin{prop}\label{25a121}
Let $ b \in \bC $ and $h \in H_0^2 ( \bT) .$
If $ u(z)
= (h(z) +h( \overline{z} ))/2 $ and
$$ | \la u, \s \ra| +\frac{| \la u, \s \ra-b|^2}{| \la u,
\s \ra| + |b|} = a , $$
then
\begin{equation}\label{22aug12b}
\int_{\bT} |u - b \s |^2 dm
\le 8(a^2 -|\la u, \s \ra|^2)+ \int_{\bT} |u - \la u, \s \ra \s |^2
dm.
\end{equation}
and for all $ w \in \bC , $ with $|w| = 1 ,$
\begin{equation}\label{22aug12}
( a - |b| )^2 + \int_{\bT} |u - \la u, \s \ra \s |^2
dm \le 8
\int_{\bT} \Im ^2(w\cdot( h - b \s )) dm .
\end{equation}
\end{prop}
\goodbreak\noindent{\sc Proof. }\nobreak
Put
\begin{equation}\label{25a123}
J^2 = \int_{\bT} \Im ^2(w\cdot( h - b \s )) dm .
\end{equation}
The proof exploits the basic identities for the integral $J^2 $ and
$\int_{\bT} |u - b \s |^2 dm $ and intertwines them with the
arithmetic \eqref{11aug65d} -- \eqref{11aug60d}.
\paragraph{Step 1.} Use the straightforward identity,
\begin{equation}\label{22aug12a}
\int_{\bT} |u - b \s |^2 dm
= |\la u, \s \ra -b |^2 + \int_{\bT} |u - \la u, \s \ra \s |^2
dm.
\end{equation}
Apply \eqref{11aug60d}, so that
$$ |\la u, \s \ra -b |^2 \le 8(a^2 -|\la u, \s \ra|^2 ) ,$$
hence by \eqref{22aug12a} we get \eqref{22aug12b},
\begin{equation*}
\int_{\bT} |u - b \s |^2 dm
\le 8(a^2 -|\la u, \s \ra|^2)+ \int_{\bT} |u - \la u, \s \ra \s |^2
dm.
\end{equation*}
\paragraph{Step 2.}
The identity of Lemma \ref{10sep121} gives
\begin{equation}\label{22aug12c}
\Im ^2(w\cdot(\la u, \s \ra - b)) + \Re^2 (w\cdot\la u, \s \ra) +
\int_{\bT} |u - \la u, \s \ra \s |^2 dm = J^2 .
\end{equation}
Apply \eqref{11aug60e} with $ \mu = \la u, \s \ra $
to the left hand side in \eqref{22aug12c}, and
get \eqref{22aug12},
\begin{equation*}
( a - |b| )^2 + \int_{\bT} |u - \la u, \s \ra \s |^2
dm \le 8 J^2 .
\end{equation*}
\endproof
\subsubsection*{Proof of Theorem \ref{11aug60b}}
Let $ \{g_k\} $ be the martingale difference sequence of
the Hardy martingale $G = (G_k) ,$ and
let
$ \{u_k\} $ be the martingale difference sequence of
the associated cosine martingale
$U = (U_k) .$
By convexity we have
$$\bE ( \sum_{k = 1}^\infty |\bE_{k-1}( u_k \s_k )|^2)^{1/2} =
\bE \bE ( ( \sum_{k = 1}^\infty |\bE_{k-1}( u_k \s_k )|^2)^{1/2} | \cD ) \ge \bE ( \sum_{k = 1}^\infty |\bE (\bE_{k-1}( u_k \s_k ) | \cD) |^2
)^{1/2} .
$$
Put $ b_k = \bE (\bE_{k-1}( u_k \s_k ) | {\cD})$
and note that
$ \bE ( u_k | \cD ) =
b_k \s_k
.$
\paragraph{Step 1.}
Let $
Y^2 = \sum _{k = 1 }^\infty | \bE_{k-1} (u_k \s_k) |^2 $ and $Z^2 = \sum_{k = 1 }^\infty |b_k| ^2 $.
Then restating the above convexity estimate we have
\begin{equation}\label{28aug70j}
\bE ( Y ) \ge \bE ( Z) .
\end{equation}
\paragraph{Step 2.}
Since
$
\bE(g_k |{\cD}) = \bE(u_k | {\cD}) , $
the square of the conditioned square function of
$ T_W ( G - \bE ( G | {\cD})) $ coincides with
\begin{equation}\label{28a12x}
\sum \bE_{k-1} |\Im (w_{k-1}\cdot( g_k - b_k \s_k ))|^2 .
\end{equation}
\paragraph{Step 3.}
The sequence $ \{ u_k -b_k \s_k\} $ is the martingale difference sequence of
$ U - \bE_{\cD}( U)$.
The square of its conditioned square function is hence given by
\begin{equation}\label{28a1245}
\sum \bE_{k-1}| u_k - b_k \s_k |^2 .
\end{equation}
Following the pattern of \eqref{11aug65d} define
$$ a_k = |\bE_{k-1}( u_k \s_k )| +\frac{|\bE_{k-1}( u_k \s_k ) -b_k|^2}{
|\bE_{k-1}( u_k \s_k )| + |b_k|}
, $$
and
$$v_k =u_k - \bE_{k-1}( u_k \s_k )\s_k , \quad\quad r_k ^2 = \bE_{k-1}| v_k |^2 .$$
By \eqref{22aug12b}
\begin{equation}\label{11aug70a}
\begin{aligned}
&\bE_{k-1}| u_k - b_k \s_k |^2
\le 8 ( a_k^2 + r_k ^2 - |\bE_{k-1}( u_k \s_k ) |^2 ).
\end{aligned}
\end{equation}
\paragraph{Step 4.}
With
$ X^2 = \sum_{k = 1 }^\infty a_k ^2 + r_k^2 ,$
we have the obvious pointwise estimate, $ X \ge Y $. Taking into account \eqref{11aug70a}
gives
\begin{equation}\label{28aug70g}
\| U - \bE (U | {\cD}) \|_\cP \le \sqrt{8} \bE (X^2 -Y^2)^{1/2} \le \sqrt{8}( \bE( X - Y ))^{1/2} ( \bE( X + Y ))^{1/2} .
\end{equation}
The factor $\bE( X + Y )$
in \eqref{28aug70g} admits an upper bound by
\begin{equation}\label{28aug70h}
\bE (X+Y) \le C \|U \|_{\cP} \le C \|G \|_{\cP} .
\end{equation}
\paragraph{Step 5.}
Next we turn to estimates for $\bE( X - Y ).$
By \eqref{28aug70j},
$\bE( X - Y )
\le
\bE( X - Z ) ,$
and by triangle inequality
$$
X - Z \le (\sum_{k = 1 }^\infty (a_k - |b_k|)^2 + r_k^2)^{1/2}
.$$
By \eqref{22aug12}
\begin{equation*}
( a_k - |b_k|)^2 + r_k^2
\le 8 \bE_{k-1} |\Im (w_{k-1}\cdot( g_k - b_k \s_k ))|^2 ,
\end{equation*} and hence
$$ \bE( X - Z )
\le C \| T_W(G - \bE( G | {\cD}))\|_\cP .$$
Invoking
\eqref{28aug70g}
and \eqref{28aug70h} completes the proof.
\endproof
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
Recently, language models (LMs) trained on large datasets have achieved remarkable success in various Natural Language Processing (NLP) tasks (cf. \citealp{1905.00537, wang2018glue}). The literature of targeted syntactic evaluations has shown that these models implicitly learn syntactic structures of natural language, even though they do not receive explicit syntactic supervision \citep{warstadt-etal-2020-blimp-benchmark, hu-etal-2020-systematic}.
However, previous work has also shown that there is still a benefit for LMs to receive explicit syntactic supervision. Recurrent Neural Network Grammars (RNNGs; \citealp{dyer-etal-2016-recurrent}), the integration of Recurrent Neural Networks (RNNs; \citealp{Elman:1990}) with an explicit syntactic bias, have achieved better syntactic generalization performance than vanilla RNNs \citep{kuncoro-etal-2018-lstms, wilcox-etal-2019-structural, hu-etal-2020-systematic}. In addition, previous work has recommended RNNGs as a cognitively plausible architecture, showing that RNNGs can successfully predict human reading times \citep{yoshida-etal-2021-modeling} or brain activities \citep{hale-etal-2018-finding}. The key difference between RNNGs and RNNs is a \textbf{composition function}, which recursively composes subtrees into a single vector representation.
On the other hand, Transformer architectures \citep{Vaswani2017AttentionNeed} have been shown to outperform RNN architectures in various NLP tasks \citep{devlin-etal-2019-bert}. The key difference between Transformers and RNNs here is a \textbf{self-attention mechanism}, which selectively attends to previous vectors to obtain sentence representations. Recently, an attempt was made to investigate whether Transformer architectures with the self-attention mechanism also benefit from explicit syntactic supervision \citep{qian-etal-2021-structural}, but their ``Parsing as Language Modeling (PLM)'' approach \citep{choe-charniak-2016-parsing} does not employ the composition function, which is essential for RNNGs. Therefore, it is reasonable to hypothesize that their approach may not achieve the full benefit of explicit syntactic supervision.
In this paper, we propose a novel architecture called \textbf{Composition Attention Grammars} (CAGs) that recursively compose subtrees into a single vector representation with the composition function, and selectively attend to previous structural information with the self-attention mechanism. We investigate whether these components---the composition function and the self-attention mechanism---can both induce human-like syntactic generalization. Specifically, we train LMs with and without these two components, with the model sizes carefully controlled, and evaluate their syntactic generalization performance against six test circuits \citep{hu-etal-2020-systematic} on the SyntaxGym benchmark \citep{gauthier-etal-2020-syntaxgym}. The results demonstrated that the composition function and the self-attention mechanism both play an important role in making LMs more human-like, and closer inspection of individual linguistic phenomena implied that the composition function allowed syntactic features, but not semantic features, to percolate into subtree representations.
In addition, the methodological innovation of this paper is a strictly controlled experimental design, as practiced in cognitive sciences. In NLP research, evaluations are often conducted on models with different model sizes, leading to uncertainty regarding which component of these models affects the results. This paper conducts strictly controlled experiments in order to isolate the effects of individual components such as the composition function and the self-attention mechanism.
\section{Composition Attention Grammar}
\label{sec:CAG}
In this section, we introduce a novel architecture called Composition Attention Grammars (CAGs).
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{figs/CAG.pdf}
\caption{The architecture of Composition Attention Grammars (CAGs). CAGs utilize (i) the composition function to recursively compose subtrees into a single vector representation, and (ii) the self-attention mechanism to selectively attend to previous structural information.}
\label{fig:CAG}
\end{figure*}
\subsection{Syntactic language model}
\label{subsec:actions}
CAGs are a type of syntactic LM \citep{choe-charniak-2016-parsing, dyer-etal-2016-recurrent, qian-etal-2021-structural}, which estimates the following joint distribution of a sentence $X$ and its syntactic structure $Y$:
\begin{align}
\label{eq: joint_dist}
p(X,Y) = p(a_1, \cdots, a_n) = \prod_{t=1}^{n}p(a_t|a_{<t})
\end{align}
where $a_t$ is an action by which CAGs jointly generate the sentence and its syntactic structure in a top-down, left-to-right fashion. Each $a_t$ can be one of the three actions below:
\begin{itemize}
\item \texttt{GEN(x)}: Generate a terminal symbol ``x''.
\item \texttt{NT(X)}: Open a nonterminal symbol ``X''.
\item \texttt{REDUCE}: Close a nonterminal symbol that was opened by \texttt{NT(X)}.
\end{itemize}
See Figure~\ref{fig:action} for an example of actions to jointly generate the sentence and its syntactic structure in a top-down, left-to-right fashion.
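To make the action encoding concrete, the following minimal Python sketch (ours, for illustration only; it is not part of any released implementation) linearizes a bracketed tree into the corresponding top-down, left-to-right action sequence. The example tree and the whitespace-based tokenization are simplifying assumptions.
\begin{verbatim}
def linearize(tree):
    """Convert a bracketed tree, e.g. "(S (NP the dog) (VP barks))",
    into a sequence of NT(X), GEN(x), and REDUCE actions."""
    tokens = tree.replace("(", " ( ").replace(")", " ) ").split()
    actions, i = [], 0
    while i < len(tokens):
        if tokens[i] == "(":
            actions.append("NT(" + tokens[i + 1] + ")")   # open nonterminal
            i += 2
        elif tokens[i] == ")":
            actions.append("REDUCE")                      # close nonterminal
            i += 1
        else:
            actions.append("GEN(" + tokens[i] + ")")      # generate terminal
            i += 1
    return actions

print(linearize("(S (NP the dog) (VP barks))"))
# ['NT(S)', 'NT(NP)', 'GEN(the)', 'GEN(dog)', 'REDUCE',
#  'NT(VP)', 'GEN(barks)', 'REDUCE', 'REDUCE']
\end{verbatim}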
\subsection{Architecture}
To estimate the joint distribution in Equation~\ref{eq: joint_dist}, CAGs utilize (i) the composition function to recursively compose subtrees into a single vector representation, and (ii) the self-attention mechanism to selectively attend to previous structural information. The architecture of CAGs is summarized in Figure~\ref{fig:CAG}. Following previous work \citep{kuncoro-etal-2017-recurrent, noji-oseki-2021-effective}, CAGs rely on a stack data structure, and each action in Section~\ref{subsec:actions} changes the stack state as follows:
\begin{itemize}
\item \texttt{GEN(x)}: Push a terminal embedding $\mathbf{e}_x$ onto the stack.
\item \texttt{NT(X)}: Push a nonterminal embedding $\mathbf{e}_X$ onto the stack.
\item \texttt{REDUCE}: First, repeatedly pop vectors from the stack until a nonterminal embedding is popped. Then, apply the composition function based on bidirectional LSTMs \citep{650093} to these popped vectors $\mathbf{e}_l, \dots, \mathbf{e}_m$, to compose subtrees into a single vector representation $\mathbf{e}_s$:
\begin{align}
\mathbf{e}_s = \comp([\mathbf{e}_l, \dots, \mathbf{e}_m]).
\end{align}
$\mathbf{e}_s$ is then pushed onto the stack (a code sketch of this composition step follows the list).
\end{itemize}
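The \texttt{REDUCE} step above can be sketched in PyTorch as follows. This is a minimal illustration under our own simplifying assumptions (in particular, the \texttt{tanh} projection back to the stack dimension is our choice); it is not the implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class Composition(nn.Module):
    """Bidirectional-LSTM composition: encode the popped vectors
    e_l, ..., e_m in both directions and combine the two final
    hidden states into a single subtree representation e_s."""
    def __init__(self, dim):
        super().__init__()
        self.bilstm = nn.LSTM(dim, dim, bidirectional=True,
                              batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)   # back to the stack dimension

    def forward(self, popped):                # popped: (span_len, dim)
        _, (h, _) = self.bilstm(popped.unsqueeze(0))
        h = torch.cat([h[0, 0], h[1, 0]], dim=-1)  # fwd/bwd final states
        return torch.tanh(self.proj(h))       # e_s: (dim,)

comp = Composition(dim=256)
e_s = comp(torch.randn(4, 256))               # compose four popped vectors
print(e_s.shape)                              # torch.Size([256])
\end{verbatim}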
After each action, CAGs employ the self-attention mechanism, which selectively attends to previous vectors in the stack $\mathbf{e}_1, \dots, \mathbf{e}_k$ by calculating the weight of attention to each vector with the query, key, and value vectors generated from $\mathbf{e}_1, \dots, \mathbf{e}_k$, in order to represent a partial parse at each time step $t$:
\begin{align}
\mathbf{h}_t = \selfattn([\mathbf{e}_1, \dots, \mathbf{e}_k]).
\end{align}
Then, $\mathbf{h}_t$ defines the next action distribution:
\begin{align}
a_{t+1} \sim \softmax(\mathbf{W}_a\mathbf{h}_t+\mathbf{b}_a)
\end{align}
where $\mathbf{W}_a$ and $\mathbf{b}_a$ are the weights and biases of a fully connected layer that projects $\mathbf{h}_t$ to logits for each action $a$, and $\softmax$ is a softmax function that projects the logits to the next action distribution.
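The attention step and the action head admit a similarly compact sketch. Again, this is one plausible realization under our own assumptions (a single \texttt{nn.MultiheadAttention} layer with $\mathbf{h}_t$ read off at the top of the stack; the action inventory size is made up), not the released code.
\begin{verbatim}
import torch
import torch.nn as nn

class CAGStep(nn.Module):
    """Self-attention over the current stack, followed by the action
    head W_a, b_a; needs a recent PyTorch (batch_first support)."""
    def __init__(self, dim, n_heads, n_actions):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.action_head = nn.Linear(dim, n_actions)

    def forward(self, stack):        # stack: (k, dim) = e_1, ..., e_k
        x = stack.unsqueeze(0)       # add a batch axis
        h, _ = self.attn(x, x, x)    # queries/keys/values from the stack
        h_t = h[0, -1]               # read h_t off the top of the stack
        return torch.softmax(self.action_head(h_t), dim=-1)

step = CAGStep(dim=256, n_heads=4, n_actions=10003)  # made-up inventory
probs = step(torch.randn(7, 256))   # a stack currently holding 7 vectors
print(probs.shape)                  # torch.Size([10003])
\end{verbatim}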
\subsection{Differences from other syntactic LMs}
In this subsection, we focus on the differences between CAGs and other syntactic LMs.
\paragraph{Difference from RNNGs}
CAGs and RNNGs both utilize the composition function to recursively compose subtrees into a single vector representation. CAGs differ from RNNGs in that, in order to represent the partial parse at each time step, CAGs utilize the self-attention mechanism which selectively attends to previous structural information, whereas RNNGs utilize stack-LSTMs \cite{dyer-etal-2015-transition}. We hypothesize that CAGs have the advantage of selective attention to previous structural information over RNNGs.
\paragraph{Difference from PLMs}
CAGs and PLMs both utilize the self-attention mechanism which selectively attends to previous structural information. CAGs differ from PLMs in that CAGs utilize the composition function to recursively compose subtrees into a single vector representation, whereas PLMs treat actions $a_1, \dots, a_n$ flatly as vanilla Transformers treat words $w_1, \dots, w_n$. We hypothesize that CAGs have the advantage of recursive composition of subtrees over PLMs.
In order to incorporate composition-like characteristics, \citet{qian-etal-2021-structural} proposed PLM-masks, namely, PLMs with a dynamic masking mechanism, which specializes two attention heads: one to attend to the inside of the most recently opened nonterminal symbol, and another to attend to the outside. We will perform a comparison between CAGs and PLM-masks in order to investigate whether recursive composition of subtrees has additional advantages over the dynamic masking mechanism in inducing human-like syntactic generalization.
\begin{table*}[t]
\centering
\scalebox{0.78}{
\begin{NiceTabular}{l||c|c|c}
\toprule
\multicolumn{1}{c}{}&\multicolumn{1}{c}{$-\syn$}&\multicolumn{2}{c}{$+\syn$}\\
\cmidrule{3-4}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$-\comp$}&\multicolumn{1}{c}{$+\comp$}\\
\hline\hline
$-\selfattn$ & \begin{tabular}{c}LSTM\\ \citep{hochreiterLongShorttermMemory1997}\end{tabular} & \begin{tabular}{c}ActionLSTM\\ \citep{choe-charniak-2016-parsing}\end{tabular} & \begin{tabular}{c}RNNG\\ \citep{dyer-etal-2016-recurrent}\end{tabular} \\
\hline
$+\selfattn$ & \begin{tabular}{c}Transformer\\ \citep{radford2018improving}\end{tabular} & \begin{tabular}{c}PLM\\ \citep{qian-etal-2021-structural}\end{tabular} & \begin{tabular}{c}PLM-mask\\($(+)\comp$; \citealp{qian-etal-2021-structural})\\ \rowcolor[rgb]{0.83, 0.83, 0.83} CAG\\ (This work) \end{tabular} \\
\bottomrule
\end{NiceTabular}}
\caption{LMs investigated in this paper. $\pm\syn$ means whether LMs receive explicit syntactic supervision. $\pm\comp$ means whether LMs utilize the composition function, and $\pm\selfattn$ means whether LMs are based on Transformer architectures with the self-attention mechanism. PLM-masks do not utilize the composition function, but use the local subtree information with the dynamic masking mechanism ($(+)\comp$).}
\label{tab:models}
\end{table*}
\begin{table*}[t]
\centering
\scalebox{0.90}{
\begin{tabular}{lccccc}
\toprule
& \#Layer & \#Hidden dimension & \#Input dimension & \#Head & \#Model size\\
\hline
LSTM & 2 & 301 & 301 & N/A & \textbf{16.59M} \\
ActionLSTM & 2 & 301 & 301 & N/A & \textbf{16.58M} \\
RNNG & 2 & 276 & 276 & N/A & \textbf{16.61M} \\
Transformer & 3 & 272 & 272 & 4 & \textbf{16.62M} \\
PLM & 3 & 272 & 272 & 4 & \textbf{16.63M} \\
PLM-mask & 3 & 272 & 272 & 4 & \textbf{16.63M} \\
CAG & 3 & 256 & 256 & 4 & \textbf{16.57M} \\
\bottomrule
\end{tabular}}
\caption{Hyperparameters of LMs investigated in this paper. We controlled the hyperparameters in order to make model sizes maximally comparable.}
\label{tab:model_size}
\end{table*}
\section{Experiment}
We designed a strictly controlled experiment for testing whether the two components---the composition function and the self-attention mechanism---can both induce human-like syntactic generalization. Specifically, we train LMs with and without these two components with the model sizes carefully controlled, and evaluate their syntactic generalization performance against six test circuits on the SyntaxGym benchmark. We also train and evaluate two vanilla LMs, with and without the self-attention mechanism, as baselines. The following subsections describe the experimental settings in further detail.
\subsection{Language models}
\label{subsec:LM}
This subsection describes LMs investigated in this paper (Table~\ref{tab:models}). We controlled the hyperparameters in order to make model sizes maximally comparable (Table~\ref{tab:model_size}).
\paragraph{LSTM}
LSTMs \citep{hochreiterLongShorttermMemory1997} are a vanilla LM ($-\syn$) based on RNN architectures ($-\selfattn$). LSTMs were adopted as a baseline for syntactic LMs without the self-attention mechanism. Our LSTMs were implemented with the PyTorch package.\footnote{\url{https://github.com/pytorch/pytorch}}
\paragraph{ActionLSTM}
ActionLSTMs \citep{choe-charniak-2016-parsing} are a syntactic LM ($+\syn$) based on RNN architectures ($-\selfattn$). ActionLSTMs treat actions flatly without the composition function ($-\comp$). Our ActionLSTMs were implemented with the PyTorch package.
\paragraph{RNNG}
RNNGs are a syntactic LM ($+\syn$) based on RNN architectures ($-\selfattn$). RNNGs recursively compose subtrees into a single vector representation with the composition function ($+\comp$). The implementation with the PyTorch package by \citet{noji-oseki-2021-effective} was employed.\footnote{\url{https://github.com/aistairc/rnng-pytorch}}
\paragraph{Transformer}
Transformers \citep{radford2018improving} are a vanilla LM ($-\syn$) based on Transformer architectures ($+\selfattn$). Transformers were adopted as a baseline for syntactic LMs with the self-attention mechanism. Our Transformers were implemented with Huggingface's Transformer package \citep{wolf-etal-2020-transformers}.\footnote{\url{https://github.com/huggingface/transformers}}
\paragraph{PLM}
PLMs are a syntactic LM ($+\syn$) based on Transformer architectures ($+\selfattn$). PLMs treat actions flatly without the composition function ($-\comp$). The implementation with Huggingface's Transformer package by \citet{qian-etal-2021-structural} was employed.\footnote{\url{https://github.com/IBM/transformers-struct-guidance}}
\paragraph{PLM-mask}
PLM-masks are a syntactic LM ($+\syn$) based on Transformer architectures ($+\selfattn$). PLM-masks do not utilize the composition function, but use the local subtree information with the dynamic masking mechanism ($(+)\comp$). The implementation with Huggingface’s Transformer package by \citet{qian-etal-2021-structural} was employed.
\paragraph{CAG}
CAGs are a syntactic LM ($+\syn$) based on Transformer architectures ($+\selfattn$). CAGs recursively compose subtrees into a single vector representation with the composition function ($+\comp$). Our CAGs were implemented with the PyTorch and Huggingface's Transformer packages.\footnote{We will make our code publicly available at the time of the conference.}
\subsection{Training}
All LMs were trained on the BLLIP-\textsc{lg} dataset, which comprises 1.8M sentences and 42M tokens sampled from the Brown Laboratory for Linguistic Information Processing 1987-89 Corpus Release 1 (BLLIP; \citealp{charniak2000bllip}). We followed the train-dev-test split of \citet{hu-etal-2020-systematic}. Following \citet{qian-etal-2021-structural}, we split the sentences into subwords using a Byte Pair Encoding tokenizer \citep{sennrich-etal-2016-neural} from Huggingface's Transformer package. The baseline vanilla LMs used only terminal subwords, whereas the syntactic LMs used terminal subwords and syntactic structures. We utilized syntactic structures re-parsed by \citet{hu-etal-2020-systematic} with a state-of-the-art constituency parser \citep{kitaev-klein-2018-constituency}. All LMs were trained at the sentence level with a learning rate of $10^{-3}$, a dropout rate of $0.1$, the Adam optimizer, and a minibatch size of 256 for 15 epochs. We selected the checkpoint with the lowest loss on the development set for evaluation. The experiment was conducted three times with different random seeds.
\subsection{Targeted syntactic evaluation}
In order to evaluate whether LMs learn human-like syntactic generalization, we employed six test circuits \citep{hu-etal-2020-systematic} on the SyntaxGym benchmark \citep{gauthier-etal-2020-syntaxgym}. Specifically, each test circuit deals with the following grammatical phenomenon: Agreement, Licensing, Garden-Path Effects, Gross Syntactic State, Center Embedding, and Long-Distance Dependencies. Each circuit is further subcategorized into suites; for example, the Agreement circuit contains a suite on a specific type of Agreement, such as ``subject-verb number agreement with prepositional phrase''. Each test suite consists of items designed to probe the specific grammatical phenomenon, and LMs succeed when they meet a success criterion, which defines inequalities among conditional probabilities on a grammatically critical position that should hold if they have learned the appropriate syntactic generalization. For example, to succeed on an item of the ``subject-verb number agreement with prepositional phrase'' suite, LMs should assign a higher probability to the underlined critical position of (\ex{1}a) than (\ex{1}b):
\eenumsentence[1]{
\item \hspace*{\astspace} The author next to the senators \underline{is} good.
\item * The author next to the senators \underline{are} good.
}
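Operationally, the success criterion for such an item amounts to a comparison of conditional probabilities at the critical word, as in the following Python sketch; the scores below are made up purely for illustration.
\begin{verbatim}
# Toy log-probabilities at the critical word (made up for illustration).
toy_logp = {("The author next to the senators", "is"): -2.1,
            ("The author next to the senators", "are"): -3.7}

def item_correct(logp, prefix, good, bad):
    """Success iff the grammatical continuation is more probable."""
    return logp[(prefix, good)] > logp[(prefix, bad)]

print(item_correct(toy_logp, "The author next to the senators",
                   "is", "are"))   # True: the item counts as a success
\end{verbatim}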
Following \citet{qian-etal-2021-structural}, we employed word-synchronous beam search \citep{stern-etal-2017-effective} to derive the probability of a grammatically critical position from syntactic LMs. Word-synchronous beam search retains a collection of the most likely syntactic structures that are predicted given an observed partial sentence $w_1, \cdots, w_{i}$ and marginalizes their probabilities to approximate $p(w_{i}|w_{<i})$:
\begin{align}
\nonumber
p(w_{i}|w_{<i}) &= \frac{p(w_1, \cdots, w_{i})}{p(w_1, \cdots, w_{i-1})}\\
&\approx \frac{\sum_{Y_{i} \in \mathcal{Y}_{i}}{p(w_1, \cdots, w_{i}, Y_{i})}}{\sum_{Y_{i-1} \in \mathcal{Y}_{i-1}}{p(w_1, \cdots, w_{i-1}, Y_{i-1})}}
\end{align}
where $\mathcal{Y}_{i}$ denotes the collection of syntactic structures given $w_1, \cdots, w_{i}$. Following \citet{qian-etal-2021-structural}, we set the action beam size to 100, word beam size to 10, and fast-track to 5.
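Once the beams are available, the marginalization itself is straightforward; the following Python sketch (ours, operating on toy joint log-probabilities) performs the computation in log space.
\begin{verbatim}
from math import exp, log

def logsumexp(xs):
    m = max(xs)
    return m + log(sum(exp(x - m) for x in xs))

def next_word_logprob(beam_prev, beam_cur):
    """Approximate log p(w_i | w_{<i}) from the joint log-probabilities
    log p(w_1..w_{i-1}, Y) and log p(w_1..w_i, Y) of the analyses
    retained by word-synchronous beam search."""
    return logsumexp(beam_cur) - logsumexp(beam_prev)

# toy beams of joint log-probabilities
print(next_word_logprob([-10.2, -11.0, -12.5], [-13.1, -13.8]))
\end{verbatim}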
\section{Results and discussion}
\subsection{Overall accuracies}
\label{subsec:overall}
\begin{figure}[t]
\centering
\includegraphics[width=\hsize]{figs/sg_overall_lg.pdf}
\caption{Overall accuracies of our controlled experiment. The average accuracies across the SyntaxGym test suites and different random seeds (the vertical axis) are plotted against the LMs investigated in this paper (the horizontal axis). The accuracies of PLM-masks and GPT-2 are taken from \citet{qian-etal-2021-structural} and serve only as reference points, as their model sizes are significantly larger than those of the other models investigated in this paper. Each dot denotes the accuracy of a specific seed.}
\label{fig:overall}
\end{figure}
\begin{table*}[t]
\centering
\scalebox{0.66}{
\begin{NiceTabular}{l||c|c|c|cc}
\toprule
\multicolumn{1}{c}{}&\multicolumn{1}{c}{$-\syn$}&\multicolumn{2}{c}{$+\syn$} & \multicolumn{2}{c}{}\\
\cmidrule{3-4}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$-\comp$}&\multicolumn{1}{c|}{$+\comp$}&\multicolumn{1}{c}{$[+\sy]-[-\sy]$}&\multicolumn{1}{c}{$[+\com]-[-\com]$}\\
\hline\hline
$-\selfattn$ & 56.6 $\pm$ 3.3 (LSTM) & 72.5 $\pm$ 1.8 (ActionLSTM) & 81.1 $\pm$ 2.8 (RNNG) & 20.2 $\pm$ 4.7 & 8.6 $\pm$ 3.3\\
\hline
$+\selfattn$ & 48.1 $\pm$ 1.5 (Transformer) & 75.4 $\pm$ 0.2 (PLM) & \begin{tabular}{c}
69.6 $\pm$ 0.9 (PLM-mask) \\
\textbf{83.8 $\pm$ 1.4 (CAG)}
\end{tabular} & \begin{tabular}{c}
24.4 $\pm$ 1.8 \\
31.5 $\pm$ 2.1
\end{tabular} & \begin{tabular}{c}
-5.8 $\pm$ 0.9 \\
8.4 $\pm$ 1.4
\end{tabular}\\
\hline
$[+\sa] - [-\sa]$ & -8.5 $\pm$ 3.7 & 2.9 $\pm$ 1.8 & \begin{tabular}{c}
-11.5 $\pm$ 2.9 \\
2.7 $\pm$ 3.1
\end{tabular} & & \\
\bottomrule
\end{NiceTabular}}
\caption{Overall accuracy of each LM and the difference in the accuracy between minimally different LMs. $[+\sa] - [-\sa]$ denotes the difference in the accuracy between LMs with $+\selfattn$ and $-\selfattn$. $[+\sy]-[-\sy]$ and $[+\com]-[-\com]$ denote the differences in the accuracy between LMs with $+\syn$ and $-\syn$, and between LMs with $+\comp$ and $-\comp$, respectively. The standard deviations of the differences were calculated, assuming that the accuracies were normally distributed.}
\label{tab:overall}
\end{table*}
Overall accuracies of our controlled experiment are summarized in Figure~\ref{fig:overall}. The average accuracies across the SyntaxGym test suites and different random seeds (the vertical axis) are plotted against the LMs investigated in this paper (the horizontal axis), together with the accuracies of PLM-masks and GPT-2 \citep{radford2019language} from \citet{qian-etal-2021-structural}. These accuracies serve only as reference points, as the model sizes of PLM-masks and GPT-2 in \citet{qian-etal-2021-structural} are significantly larger than those of the other models investigated in this paper. Each dot denotes the accuracy of a specific seed. The results demonstrate that CAGs achieved the highest overall accuracy, suggesting that the composition function and the self-attention mechanism both play an important role in making LMs more human-like. Note, importantly, that CAGs (83.8\%) outperformed GPT-2 (80.8\%), which was trained on 250$\times$ more data with a 7$\times$ larger model size.
In the rest of this subsection, we discuss the effects of model components on the overall accuracy. In order to isolate the effects of individual components, Table~\ref{tab:overall} shows the overall accuracy of each LM and the difference in the accuracy between minimally different LMs.
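The differences and standard deviations reported in Table~\ref{tab:overall} follow the usual propagation rule for the difference of normally distributed quantities; the Python sketch below (illustration only) additionally assumes that the two accuracies are independent.
\begin{verbatim}
from math import sqrt

def diff(m1, s1, m2, s2):
    """Mean and std of the difference of two (assumed independent)
    normally distributed accuracies."""
    return m1 - m2, sqrt(s1 ** 2 + s2 ** 2)

print(diff(81.1, 2.8, 72.5, 1.8))  # RNNG minus ActionLSTM: (8.6, ~3.3)
\end{verbatim}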
\paragraph{$+$Syntax vs. $-$Syntax}
The LMs with explicit syntactic supervision outperformed the LMs without it, both without the self-attention mechanism (LSTM: 56.6\%, the average accuracy of ActionLSTM and RNNG: 76.8\%; +20.2\%) and with the self-attention mechanism (Transformer: 48.1\%, the average accuracy of PLM and PLM-mask: 72.5\%; +24.4\%, and the average accuracy of PLM and CAG: 79.6\%; +31.5\%). This result corroborates previous work \citep{kuncoro-etal-2017-recurrent, wilcox-etal-2019-structural, hu-etal-2020-systematic}, suggesting that explicit syntactic supervision plays an important role in making LMs more human-like.
\paragraph{$+$Composition vs. $-$Composition}
The LMs with the composition function outperformed the LMs without it, both without the self-attention mechanism (ActionLSTM: 72.5\%, RNNG: 81.1\%; +8.6\%) and with the self-attention mechanism (PLM: 75.4\%, CAG: 83.8\%; +8.4\%), suggesting that the composition function induces human-like syntactic generalization \citep{kuncoro-etal-2017-recurrent, wilcox-etal-2019-structural}.
\paragraph{$+$SelfAttn vs. $-$SelfAttn}
Without explicit syntactic supervision, the LMs with the self-attention mechanism underperformed the LMs without it (LSTM: 56.6\%, Transformer: 48.1\%; -8.5\%).
In contrast, with explicit syntactic supervision, the LMs with the self-attention mechanism outperformed the LMs without it, both without the composition function (ActionLSTM: 72.5\%, PLM: 75.4\%; +2.9\%) and with the composition function (RNNG: 81.1\%, CAG: 83.8\%; +2.7\%). This result suggests that it is important to selectively attend to previous structural information, not just words.
\begin{figure*}[t]
\centering
\includegraphics[width=\hsize]{figs/sg_circuit_lg.pdf}
\caption{Circuit accuracies of our controlled experiment. The average accuracies across the SyntaxGym test suites and different random seeds on each test circuit (the vertical axis) are plotted against the LMs investigated in this paper (the horizontal axis). Each dot denotes the accuracy of a specific seed.}
\label{fig:circuit}
\end{figure*}
\paragraph{$+$Composition vs. $(+)$Composition}
CAGs with the composition function outperformed PLM-masks with the dynamic masking mechanism (PLM-mask: 69.6\%, CAG: 83.8\%; +14.2\%). This result suggests that recursive composition of subtrees has additional advantages over the local subtree information in inducing human-like syntactic generalization. Note incidentally that our PLM-masks achieved a lower accuracy (69.6\%) than the PLM-masks from \citet{qian-etal-2021-structural} (74.8\%), which may be caused by the difference in balance between specialized and vanilla attention heads: we specialized two out of four attention heads, whereas \citet{qian-etal-2021-structural} specialized two out of twelve. Nevertheless, given that CAGs (83.8\%) outperformed the PLM-masks from \citet{qian-etal-2021-structural} (74.8\%) by a large margin, it is safe to conclude that recursive composition of subtrees has additional advantages over the local subtree information.
\subsection{Circuit accuracies}
Circuit accuracies of our controlled experiment are summarized in Figure~\ref{fig:circuit}. The average accuracies across the SyntaxGym test suites and different random seeds on each test circuit (the vertical axis) are plotted against the LMs investigated in this paper (the horizontal axis). Each dot denotes the accuracy of a specific seed. The results demonstrate that with explicit syntactic supervision, the LMs with the self-attention mechanism marginally outperformed the LMs without it on most of the test circuits, but the LMs with the composition function outperformed or underperformed the LMs without it depending on the test circuits.
In the rest of this subsection, we investigate the pros and cons of the composition function through closer inspection of syntactic phenomena.
\paragraph{Syntactic features may percolate into the subtree representations.}
The LMs with the composition function outperformed the comparable LMs without it on three out of six circuits (Licensing, Garden-Path Effects, and Gross Syntactic State). Specifically, RNNGs and CAGs both outperformed ActionLSTMs and PLMs by a large margin (+23.0\% and +26.0\%, respectively) on Licensing, which includes items like (\ex{3}):
\eenumsentence[3]{
\item \hspace*{\astspace} The author next to the senators hurt \\\hspace*{\astspace} \underline{herself}.
\item * The authors next to the senator hurt \\\hspace*{\astspace} \underline{herself}.
}
To successfully assign a higher probability to (\ex{3}a) than (\ex{3}b), LMs should understand that the reflexive pronoun must agree with the subject of the sentence in number. The subject NP ``The author/authors next to the senators/senator'' is composed into a single NP vector, as confirmed by the fact that RNNGs and CAGs both correctly assigned the following structure ``(NP The author/authors (ADVP next (PP to (NP the senators/senator))))'' to the subject NP.\footnote{RNNGs and CAGs both achieved considerably high bracketing F1 (RNNG: 96.0, CAG: 98.2) against acceptable test sentences parsed with the state-of-the-art constituency parser \citep{kitaev-klein-2018-constituency} on the Licensing circuit.} Given that RNNGs and CAGs successfully assigned a higher probability to an acceptable sentence through this subject NP vector, we can hypothesize that the syntactic features such as number may properly percolate into the subject NP vector.
\paragraph{Semantic features may not percolate into the subtree representations.}
In contrast, the LMs with the composition function underperformed the comparable LMs without it on the other circuits (Agreement, Center Embedding, and Long-Distance Dependencies). Specifically, RNNGs and CAGs both underperformed ActionLSTMs and PLMs most significantly on Center Embedding (-4.76\% and -1.79\%, respectively), which includes items like (\ex{4}):
\eenumsentence[4]{
\item \hspace*{\astspace} The shirt that the man \underline{bought ripped}.
\item * The shirt that the man \underline{ripped bought}.
}
To successfully assign a higher probability to (\ex{4}a) than (\ex{4}b), LMs should understand that the verb that can take the inanimate subject ``shirt'' should appear at the end of the sentence. The subject NP ``The shirt that the man bought/ripped'' is composed into a single NP vector, as confirmed by the fact that RNNGs and CAGs both correctly assigned the following structure ``(NP The shirt (SBAR (WHNP that)(S (NP the man)(VP bought/ripped))))'' to the subject NP.\footnote{RNNGs and CAGs both achieved high bracketing F1 (RNNG: 96.7, CAG: 95.2) on the Center Embedding circuit. In addition, these scores are higher than ActionLSTMs and PLMs (ActionLSTM: 96.1, PLM: 94.2), respectively, indicating that the lower accuracy of RNNGs and CAGs than ActionLSTMs and PLMs on this circuit is not due to failure in parsing.} Given that RNNGs and CAGs failed to assign a higher probability to an acceptable sentence through this subject NP vector, we can hypothesize that the semantic features such as animacy may not properly percolate into the subject NP vector.
\paragraph{What kind of features percolates?}
The important implication here is that, with the composition function, the syntactic features may percolate into the subtree representations, but the semantic features may not. The detailed analysis of this implication (e.g., an analysis of the inner mechanics of feature percolation at the single neuron level; \citealp{lakretz-etal-2019-emergence}) will remain for future work.
\subsection{Overall accuracy and perplexity}
In this subsection, we compare the SyntaxGym overall accuracy against perplexity, the standard evaluation metric for LMs. The relationship between the overall accuracy and perplexity is summarized in Figure~\ref{fig:ppl_acc}: the overall accuracy (vertical axis) is plotted against perplexity (horizontal axis; lower is better). Following \citet{qian-etal-2021-structural}, we calculated the perplexity on the BLLIP held-out test set and derived the perplexity from the syntactic LMs by conditioning on the gold syntactic structures of the test sentences. Figure~\ref{fig:ppl_acc} demonstrates that explicit syntactic supervision generally improves both the overall accuracy and perplexity, but among the syntactic LMs, the overall accuracy is not linearly correlated with perplexity: PLMs and PLM-masks achieved worse overall accuracy, but better perplexity than RNNGs and CAGs. This result corroborates \citet{hu-etal-2020-systematic}, which suggests a dissociation between perplexity and human-like syntactic generalization performance.
Recently, the relationship between perplexity and LMs' cognitive plausibility has attracted considerable attention. Besides LMs' human-like syntactic generalization performance, previous work on the correlation between perplexity and LMs' psychometric predictive power has typically reported that LMs with the better perplexity are more cognitively plausible \citep{fossum-levy-2012-sequential, goodkind-bicknell-2018-predictive, DBLP:conf/cogsci/WilcoxGHQL20}, but more recently, the counter-argument that lower perplexity is not always human-like has been widely discussed \citep{hao-etal-2020-probabilistic, oh-etal-2021-surprisal, kuribayashi-etal-2021-lower}. Given these recent trends, it is possible that the evaluation solely on perplexity may be orthogonal to the goal of human-like LMs (cf. \citealp{linzen-2020-accelerate}).
\begin{figure}[t]
\centering
\includegraphics[width=\hsize]{figs/ppl_acc.pdf}
\caption{The relationship between the overall accuracy and perplexity: the overall accuracy (vertical axis) is plotted against perplexity (horizontal axis; lower is better).}
\label{fig:ppl_acc}
\end{figure}
\section{Related work}
\label{sec:releted}
While writing this paper, we noticed that \citet{https://doi.org/10.48550/arxiv.2203.00633}, which is similar in spirit to our work, was submitted to the arXiv: they proposed Transformer Grammars (TGs) that incorporate recursive syntactic composition. TGs obtain a single vector representation of subtrees with the self-attention mechanism via an attention mask; in contrast, CAGs obtain the representation with the composition function based on bidirectional LSTMs. While TGs are superior to CAGs in computational efficiency (see the \hyperref[sec:lim]{Limitations} section), CAGs achieved better syntactic generalization performance on SyntaxGym (83.8\%) than TGs (82.5\%), which were trained with a 12$\times$ larger model size, suggesting that the composition function based on bidirectional LSTMs is advantageous in obtaining a vector representation of subtrees. Thorough comparisons between CAGs and TGs will remain for future work.
\section{Conclusion}
In this paper, we proposed a novel architecture called \textbf{Composition Attention Grammars} (CAGs) that recursively compose subtrees into a single vector representation with the composition function, and selectively attend to previous structural information with the self-attention mechanism. We investigated whether these components---the composition function and the self-attention mechanism---can both induce human-like syntactic generalization. Specifically, we trained LMs with and without these two components with the model sizes carefully controlled, and evaluated their syntactic generalization performance against six test circuits on the SyntaxGym benchmark. The results demonstrated that the composition function and the self-attention mechanism both play an important role in making LMs more human-like, and closer inspection of individual linguistic phenomena implied that the composition function allowed syntactic features, but not semantic features, to percolate into subtree representations.
\section*{Limitations}
\label{sec:lim}
Although it is not a central research question in this paper, a limitation with CAGs is their computational cost. While TGs \cite{https://doi.org/10.48550/arxiv.2203.00633} process all inputs simultaneously during training as in vanilla Transformers, CAGs must be trained recursively because the internal state of the stack changes dynamically due to the composition function. In fact, although we utilized effective batching for LMs with the composition function \citep{noji-oseki-2021-effective} and prevented CAGs from re-computing pre-computed attention keys and values, training of CAGs on the BLLIP-\textsc{lg} dataset (1.8M sentences and 42M tokens) for 15 epochs took two weeks on eight GPUs (NVIDIA V100). In addition, the self-attention mechanism consumes a large amount of memory, making it difficult to train CAGs with larger model sizes. The model size in this paper is the maximum that can be trained on V100 with 32GB memory. In order to address these limitations, we plan to introduce a computationally efficient self-attention mechanism (cf. \citealp{https://doi.org/10.48550/arxiv.2009.06732}) to CAGs in future work.
\section*{Acknowledgement}
\label{sec:ack}
We would like to thank Peng Qian for sharing the re-parsed BLLIP-\textsc{lg} dataset, which is used to train syntactic LMs in \citet{hu-etal-2020-systematic}, and for answering various questions. We are also grateful to three anonymous reviewers for valuable comments and suggestions. This work was supported by JST PRESTO Grant Number JPMJPR21C2.
\section{Introduction}
An active line of research studies the relations between structural properties of groups and sets of invariants. There are many examples of this; we name only four, which might be considered the genesis of this type of investigation and which should also clarify our interest in this paper. Given a finite group $G$, we may associate the prime graph based on the conjugacy class sizes: the vertices are the prime numbers dividing the cardinality of some conjugacy class of $G$ and the prime numbers $p$ and $q$ are declared to be adjacent if and only if $pq$ divides the cardinality of some conjugacy class of $G$. On the set of conjugacy class sizes, we may also associate the divisor graph: the vertices are the cardinalities of the non-central conjugacy classes of $G$ and the numbers $m$ and $n$ are declared to be adjacent if and only if they are not relatively prime. It is well known that a great deal of information on $G$ is encoded in both of these graphs and this establishes a beautiful flow of information between the algebraic structure of $G$ and the combinatorial properties of the graphs. Two entirely similar constructions can be done replacing the set of conjugacy class sizes with the set of irreducible complex character degrees. Again, the prime graph and the divisor graph on the character degrees encode interesting information about the group. Considering the ``duality'' between conjugacy classes and irreducible characters, there are also some remarkable connections among all four of these graphs.
In~\cite{L}, Mark L. Lewis has generalized these graphs in a very natural and useful way. Given a subset $X\subseteq \mathbb{N}\setminus\{0,1\}$, Lewis has considered the {\em prime graph} $\Delta(X)$ and the {\em divisor graph} $\Gamma(X)$. The vertices of $\Delta(X)$ are the prime numbers dividing some element of $X$ and two distinct prime numbers are declared to be adjacent if and only if their product divides some member of $X$. The vertex set of $\Gamma(X)$ is $X$ and two distinct elements of $X$ are declared to be adjacent if and only if they are not relatively prime. Then, Lewis has shown (for arbitrary sets $X$) some remarkable general connections between $\Delta(X)$ and $\Gamma(X)$. By taking $X$ to be the set of conjugacy class sizes or the set of irreducible complex character degrees, one recovers the graphs introduced in the previous paragraph and rediscovers some of their basic relations.
There is a gadget that can be used to study simultaneously $\Delta(X)$ and $\Gamma(X)$. Inspired by the remarkable connections between the common divisor graph $\Gamma(X)$ and the prime degree graph $\Delta(X)$ discussed by Lewis in~\cite{L}, Iranmanesh and Praeger~\cite{IP} introduced the notion of {\em bipartite divisor graph} $B(X)$, and proved that most of these connections follow immediately from $B(X)$. The vertex set of $B(X)$ is the disjoint union of the set of prime numbers dividing some element of $X$ and the set $X$ itself, where a prime number $p$ is declared to be adjacent to an element $x$ of $X$ if and only if $p$ divides $x$. For instance, with this new tool, Iranmanesh and Praeger (re)established the links between the number of connected components and the diameters of $\Delta(X)$ and $\Gamma(X)$ simply by working with $B(X)$. They were also able to classify the graphs $\Gamma$ with $\Gamma\cong B(X)$, for some set $X$.
Before continuing our discussion, it is very important to observe that $B(X)$ brings more information than $\Delta(X)$ and $\Gamma(X)$. In other words, the graph $B(X)$ cannot be recovered only from $\Delta(X)$ and $\Gamma(X)$. For instance, when $\Delta(X)\cong \Gamma(X)$ is the complete graph $K_3$ on three vertices, the graph $B(X)$ can be isomorphic to one of the two graphs in Figure~\ref{fig1fig1}.
\begin{figure}[!ht]
\begin{tikzpicture}
[scale=1,auto=left,every node/.style={circle,fill=blue!20}]
\node[fill=blue] (m1) at (-3,1) {};
\node[fill=blue] (m2) at (-4,1) {};
\node[fill=blue] (m3) at (-5,1) {};
\node (m4) at (-3,-1) {};
\node (m5) at (-4,-1) {};
\node (m6) at (-5,-1) {};
\node[fill=blue] (n1) at (3,1) {};
\node[fill=blue] (n2) at (4,1) {};
\node[fill=blue] (n3) at (2,1) {};
\node (n4) at (3,-1) {};
\node (n5) at (4,-1) {};
\node (n6) at (2,-1) {};
\draw (m1) to (m4);
\draw (m1) to (m5);
\draw (m2) to (m4);
\draw (m2) to (m6);
\draw (m3) to (m6);
\draw (m3) to (m5);
\draw (n1) to (n6);
\draw (n1) to (n5);
\draw (n1) to (n4);
\draw (n3) to (n4);
\draw (n2) to (n4);
\end{tikzpicture}
\caption{Examples of two bipartite divisor graphs $B(X)$ giving rise to $\Delta(X)\cong \Gamma(X)\cong K_3$}
\label{fig1fig1}
\end{figure} These graphs arise by taking $p,q$ and $r$ three distinct primes and by taking, for instance, $X=\{pq,pr,qr\}$ or $X=\{pqr,p,p^2\}$. (There are other isomorphism classes of $B(X)$ yielding $\Delta(X)\cong \Gamma(X)\cong K_3$, here we just presented two.)
It goes without saying that the extra information brought by $B(X)$ asks for a finer investigation.
The first application of the bipartite divisor graph in group theory is for the set of conjugacy class sizes, that is, for a given finite group $G$, $X:=\{m\in\mathbb{N}\mid m \textrm{ is the conjugacy class size of some non-central element of }G\}$. (There is no standard notation for this graph and, in this section, we denote it by $B(Cl(G))$.) In~\cite{BDIP}, the authors considered this graph and studied various properties. Among other things, they proved that the diameter is at most $6$ and they classified the groups attaining the upper bound. Moreover, when the graph has no cycles, the diameter is actually at most $5$ and they classified the groups for which the graph is a path of length $5$.
The classification of Dolfi and Jabara~\cite{DJ} of the finite groups with only two non-trivial conjugacy class sizes has spurred more interest in $B(Cl(G))$ and has proved useful in studying $B(Cl(G))$. For instance, it follows immediately from~\cite{DJ} that there is no group $G$ with $B(Cl(G))\cong C_{4}$, where $C_4$ is the {\em cycle} of length $4$. Therefore, it is interesting to see, if one minded so, whether there exists a group $G$ with $B(Cl(G))$ isomorphic to a cycle. Taeri in~\cite{T} has answered this question and has proved that $B(Cl(G))$ is a cycle if and only if it is the cycle $C_6$ of length six. Moreover, Taeri has classified the groups $G$ with $B(Cl(G))\cong C_6$; indeed, $G\cong A\times \mathrm{SL}_2(q)$ where $A$ is an abelian group and $q\in \{4,8\}$. Since $C_4$ is also the {\em complete bipartite} graph $K_{2,2}$ and since there is no finite group $G$ with $B(Cl(G))\cong K_{2,2}$, Taeri~\cite[Question~1]{T} has asked whether $B(Cl(G))$ can be isomorphic to some complete bipartite graph. In~\cite{HSpiga}, we answered this question and we constructed infinitely many groups $G$ with $B(Cl(G))\cong K_{2,5}$. However, as far as we are aware, it is not known for which positive integers $n$ and $m$ there exists a finite group $G$ with $B(Cl(G))\cong K_{n,m}$ (let alone a meaningful classification of the groups $G$ with $B(Cl(G))\cong K_{n,m}$).
We conclude this brief discussion on $B(Cl(G))$ recalling that the first author and Iranmanesh~\cite[Theorem~$4.1$]{HI} have classified the groups $G$ where $B(Cl(G))$ is isomorphic to a {\em path}. This classification was obtained by investigating the combinatorial properties of the bipartite divisor graphs constructed from the product of subsets of positive integers~\cite{HI}.
\smallskip
In this paper we are concerned with the {\em bipartite divisor graph for the set of irreducible complex character degrees}. Given a finite group $G$, we let $\Irr(G)$ be the set of the irreducible complex characters of $G$, we let $\cd(G):=\{\chi(1)\mid \chi\in \Irr(G)\}$ and we let $\cd(G)^*:=\cd(G)\setminus\{1\}$. Finally, we let $B(G)$ denote the bipartite divisor graph for the set of integers $\cd(G)^*$. We recall that the vertex set is the disjoint union of the set of prime numbers dividing some element of $\cd(G)^*$ and $\cd(G)^*$ itself, where we declare the prime $p$ to be adjacent to the character degree $m$ if and only if $p$ divides $m$.
The scope of this paper is to outline some main results on $B(G)$. We feel that future research on $B(G)$ might benefit from this because these results are scattered over a number of papers (cf.~\cite{H,Hregular, mus, moo4, moo5}). During this process, we are able to improve some of these results. Moreover, along the way, we leave some problems and questions.
In Section~\ref{sec:special graphical shapes}, we investigate the groups $G$ where $B(G)$ is in a certain class of graphs (paths, unions of paths, cycles, and complete bipartite graphs). In Section~\ref{sec:reg}, we study the groups $G$ where $B(G)$ is a regular graph, that is, all vertices of $B(G)$ have the same valency. Finally, in Section~\ref{sec:bounded}, we study the groups $G$ with $B(G)$ having at most $6$ vertices. We proceed by discussing the main results that have been proved already and (on several occasions) by improving some of this work.
\subsection{Notation}
All groups and graphs in our paper are finite.
We denote by $\gcd(m,n)$ the {\em greatest common divisor} of the integers $m$ and $n$. Given a prime number $p$, we let
$n_p$ be the {\em $p$-part} of $n$, that is, the largest power of $p$ dividing the integer $n$. Similarly, we denote by $n_{p'}$ the {\em $p'$-part} of $n$, that is, $n_{p'}:=n/n_p$. We let $\pi(n)$ denote the set of all prime divisors of the natural number $n$.
Given a graph $\mathcal{G}$, we let $V(\mathcal{G})$ denote the {\em vertex set}, we let $E(\mathcal{G})$ denote the {\em edge set}, we let $n(\mathcal{G})$ denote the number of {\em connected components} and we let $o(\mathcal{G})$ denote the cardinality of $V(\mathcal{G})$. The diameter of $\mathcal{G}$, denoted by $\diam(\mathcal{G})$, is the maximum of the diameters of the connected components of $\mathcal{G}$. If $\mathcal{G}$ is disconnected and $\mathcal{G}_{1},\ldots,\mathcal{G}_{n}$ are the connected components of $\mathcal{G}$, then we write $\mathcal{G}:=\mathcal{G}_{1}+\cdots +\mathcal{G}_{n}$. By {\em length of a path} or {\em a cycle}, we mean the number of edges in the path or in the cycle. Also, by $P_{n}$ and $C_{n}$, we mean a path of length $n$ and a cycle of length $n$, respectively. A complete graph on $n$ vertices and a complete bipartite graph on $(m,n)$ vertices are denoted by $K_{n}$ and $K_{m,n}$, respectively.
Given a finite group $G$, we let $\pi(G)$ be the set of all {\em prime divisors of the order} of $G$. As usual, we write $dl(G)$ and $h(G)$ to denote the {\em derived length} and the {\em Fitting height} of $G$, respectively. We denote the {\em first and second Fitting subgroups} of $G$ by $\F G$ and ${\bf F}_2(G)$, respectively. Other notations throughout the paper are standard and should cause no confusion.
We let $\rho(G)$ be the set of all {\em prime numbers dividing some element of} $\cd(G)^*$. The graphs that we use in this paper are the following (a small computational sketch is given after the definitions):
\begin{description}
\item[Prime graph $\Delta(G)$]
\begin{align*}
V(\Delta(G))&:=\rho(G),\\
E(\Delta(G))&:=\{\{p,q\}\mid p,q\in \rho(G), p\ne q, pq \textrm{ divides some element of }\cd(G)\};
\end{align*}
\item[Common divisor graph $\Gamma(G)$]
\begin{align*}
V(\Gamma(G))&:=\cd(G)^*,\\
E(\Gamma(G))&:=\{\{m,k\}\mid m,k\in \cd(G)^{*}, m\ne k, \gcd(m,k)\neq 1\};
\end{align*}
\item[Bipartite divisor graph $B(G)$]
\begin{align*}
V(B(G))&:=\rho(G)\amalg \cd(G)^{*}\, (\textrm{disjoint union}),\\
E(B(G))&:=\{\{p,m\}\mid p\in\rho(G),m\in \cd(G)^{*}, p\textrm{ divides }m\}.
\end{align*}
\end{description}
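All three graphs can be computed mechanically from a finite set of integers; the following Python sketch (illustration only, using plain trial division) builds $\rho$, $\Delta(X)$, $\Gamma(X)$, and $B(X)$ as edge sets, and recovers the six-cycle on the left of Figure~\ref{fig1fig1} from $X=\{6,10,15\}$, that is, $p,q,r=2,3,5$.
\begin{verbatim}
from math import gcd

def primes_of(n):
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def graphs(X):
    rho = set().union(*(primes_of(x) for x in X))
    delta = {(p, q) for p in rho for q in rho
             if p < q and any(x % (p * q) == 0 for x in X)}
    gamma = {(m, k) for m in X for k in X if m < k and gcd(m, k) > 1}
    B = {(p, x) for p in rho for x in X if x % p == 0}
    return rho, delta, gamma, B

# X = {pq, pr, qr} with p, q, r = 2, 3, 5: B(X) is a cycle of length six
print(graphs({6, 10, 15}))
\end{verbatim}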
\subsection{Notation in the figures of this paper}
We have consistently drawn all figures of this paper so that the vertices in the lower part of the picture are light blue and represent the elements of $\rho(G)$, while the vertices in the upper part of the picture are blue and represent the elements of $\cd(G)^{*}$.
\section{Groups whose bipartite divisor graphs have special shapes}
\label{sec:special graphical shapes}
One of the main questions that naturally arises in this area of research is classifying those groups whose bipartite divisor graphs have special shapes. In~\cite{H}, the first author of this paper discussed the cases where the bipartite divisor graph for the set of irreducible character degrees is
\begin{itemize}
\item a path (see Theorem~\ref{thm:990}),
\item a union of paths for non-solvable groups (see Theorem~\ref{thm:99}), or
\item a cycle (see Theorems~\ref{thm:55}).
\end{itemize}
In this section, on the one hand, we review and improve the results in~\cite{H}; on the other hand, we discuss the algebraic structure of a solvable group whose bipartite divisor graph is a union of paths.
In our analysis we use the classification, due to Mark Lewis~\cite{ML2}, of the solvable groups whose degree graph is disconnected into {\em six types}. Lewis has named these classes Type~1--6 and, for each of these types, he has given a detailed description in~\cite[Lemmas~$3.1$--$3.6$]{ML2}. Except for the proof of Theorem~\ref{thm:P4}, we do not need this full classification here. We assume that the reader is broadly familiar with these types; however, we highlight below the following properties tailored to our needs.
\begin{remark}~\label{rem:20}{\rm
Let $X$ be a solvable group with $\Delta(X)$ disconnected. Then $\Delta(X)$ has two connected components. Moreover, the following hold.
\begin{itemize}
\item If $X$ is of type $1$, $2$, $3$, or $5$, then at least one of the connected components of $\Delta(X)$ has cardinality $1$. Thus, if each connected component of $\Delta(X)$ has at least two vertices, then $X$ is a group of type $4$ or $6$. The converse is not true: there are some groups of type $4$ having a prime graph consisting of two isolated vertices. For example, the group
\[\texttt{SmallGroup(168,43)}\cong \mathrm{A}\Gamma\mathrm{L}(1,8)\]
in the ``SmallGroup'' library in GAP~\cite{GAP4} is of type $4$ and its prime graph has vertex set $\{3,7\}$. Moreover, if $X$ is a group of type $6$, then $X$ has a normal Sylow $p$-subgroup and $\Delta(X)$ has a connected component consisting of $\pi([{\bf F}_{2}(X):\F X])\cup\{p\}$ and this set has cardinality greater than $1$.
\item If $X$ is a group of type $1$, then $h(X)=2$, while for all the other types $h(X)\ge 3$, see~\cite[Lemma~4.1]{ML2}.
\item The group $X$ has a normal non-abelian Sylow subgroup if and only if $X$ is of type $1$ or $6$.
\item If $X$ is a group of type $5$, then $\{1,2,2^{a}+1\}\subseteq \cd(X)$, for some positive integer $a$.
\item If $X$ is a group of type $2$, then $\cd(X)=\{1,2,3,8\}$ and if $X$ is of type $3$, then $\cd(X)=\{1,2,3,4,8,16\}$.
\end{itemize}}
\end{remark}
\subsection{The case where the bipartite divisor graph is a path}
Let $G$ be a finite group. In~\cite{H} it is proved that $B(G)$ has diameter at most seven and this upper bound is the best possible. In the special case that $B(G)$ is a path of length $n$,~\cite[Proposition 2]{H} improves this bound by showing that $n\leq 6$; moreover, $G$ is solvable and $dl(G)\leq 5$. The following theorem gives a more detailed description of $G$.
\begin{theorem}[{{See,~\cite{H}}}]~\label{thm:990}
Let $G$ be a finite group with $B(G)$ a path of length $n$. Then, one of the following occurs:
\begin{itemize}
\item[(i)] $G$ has an abelian normal subgroup $N$ such that $\cd(G)=\{1,[G : N]\}$ and $G/N$ is abelian. Furthermore, $n\in\{1,2\}$.
\item[(ii)] There exist normal subgroups $N$ and $K$ of $G$ and a prime number $p$ with the following properties:
\begin{itemize}
\item[(iia)] $G/N$ is abelian;
\item[(iib)] $\pi(G/K)\subseteq\rho(G)$;
\item[(iic)] either $p$ divides all the non-trivial irreducible character degrees of $N$ (this implies that $N$ has a normal $p$-complement), or $\cd(N)=\{1, l, k,h/m\}$, where $\cd(G)=\{1, m, h, l, k\}$.
\end{itemize}
Furthermore, $n\in\{4,5,6\}$.
\item[(iii)] $\cd(G)=\{1,p^{\alpha},q^{\beta},p^{\alpha}q^{\beta}\}$, where $p$ and $q$ are distinct primes and $\alpha,\beta$ are positive integers. Thus $n=4$.
\item[(iv)] There exists a prime $s$ such that $G$ has a normal $s$-complement $H$. Either $H$ is abelian and $n\in\{1,2\}$ or $H$ is non-abelian and
\begin{itemize}
\item[(iva)] $\cd(G)=\{1,h,hl\}$, for some positive integers $h$ and $l$, and $n=3$;
\item[(ivb)] $n=4$ and $G/H$ is abelian. Either $\cd(H)=\{[H:\F H]\}\cup \cd(\F H)$ or $\cd(H)=\{1, [{\bf F}_{2}(H):\F H], [H:\F H]\}$. Also $[G:\F G]\in \cd(G)$ and $\cd(\F G)=\{1,h_{s^{'}}\}$, where $[G:\F G]\neq h\in \cd(G)$;
\item[(ivc)] $n=3$, $G/H$ is abelian, $h:=[G:\F G]\in \cd(G)$, $\F G=P\times A$, where $P$ is a $p$-group for some prime number $p$, $A\leq \Z G$, $\cd(G)=\cd(G/A)$, and $\cd(P)=\{1, m_{s^{'}}\}$ for $h\neq m\in \cd(G)$.
\end{itemize}
\end{itemize}
\end{theorem}
In Theorem~\ref{thm:990}, the description of the groups $G$ with $B(G)\cong P_n$ and $n\le 3$ is rather good. (For instance, when $n=1$, or $n=2$ and $|\rho(G)|=2$, the group $G$ has a unique non-trivial character degree. These groups are classified by the work in~\cite[Chapter~12]{IS} and~\cite{BH}. Similarly, when $|\cd(G)^*|=2$, a great deal of information is in~\cite{N}.) However, when $n\ge 4$, the information on $G$ is not yet very satisfactory.
For the time being, we focus on the case $n=4$, that is, $B(G)\cong P_{4}$. This situation may arise from two different cases: either $|\rho(G)|=2$ or $|\rho(G)|=3$. Both cases are possible, as is shown in Table~\ref{tab:p4}. Examples of the first case are rather elementary: it suffices to take the direct product $G:=A\times B$, where $A$ and $B$ are groups having unique non-trivial character degrees $p_a^\alpha$ and $p_b^\beta$, respectively, where $p_a$ and $p_b$ are distinct prime numbers. It is rather more intriguing to construct examples of the second kind, as witnessed by the fact that the smallest finite group $G$ with $B(G)\cong P_4$ and $|\rho(G)|=3$ has cardinality $960$. Therefore, we now look more closely at this case.
\begin{table}[ht]
\caption{Examples of $B(G)=P_{4}$}
\centering
\begin{tabular}{|c|c|c|}
\hline $G$ & $|\rho(G)|$ & $\cd(G)$ \\
\hline $\mathrm{Sym}(3)\times \mathrm{Alt}(4)$ & $2$ & $\{1,2,3,6\}$ \\ \hline $\texttt{SmallGroup(960,5748)}$ & $3$ & $\{1,12,15\}$ \\
\hline
\end{tabular}
\label{tab:p4}
\end{table}
Assume that $G$ is a finite group with $B(G)\cong P_{4}$ and $|\rho(G)|=3$. Write $\rho(G)=\{p,q,r\}$ and let $\alpha,\beta,\gamma,\delta$ be positive integers such that
$\cd(G)=\{1,p^\alpha q^\beta,r^\gamma q^\delta\}$. Since every non-linear character degree of $G$ is divisible by the prime $q$, we deduce (from a celebrated theorem of Thompson~\cite[(12.2)]{IS}) that $G$ has a normal $q$-complement $L$. Let $Q$ be a Sylow $q$-subgroup of $G$. Thus $G$ equals the semidirect product $L\rtimes Q$.
Let $\theta\in \Irr(L)$ and let $\chi\in\Irr(G)$ with $\langle \chi_L,\theta\rangle\ne 0$. Then, from Clifford theory, $\chi_L=e(\theta_1+\cdots+\theta_t)$, where $\theta_1,\ldots,\theta_t$ are the conjugates of $\theta$ under $G$. As $\chi(1)\in \{1,p^\alpha q^\beta,r^\gamma q^\delta\}$ and $e,t$ are divisors of $|G:L|=|Q|$ by~\cite[(11.29)]{IS}, we deduce
\begin{equation}\label{eq:2}
\cd(L)=\{1,p^\alpha,r^\gamma\}.
\end{equation}
Therefore $\Delta(L)$ is a disconnected graph with two isolated vertices. As $L$ is solvable, Remark~\ref{rem:20} implies that $L$ is a group of type $1$, $4$ or $5$ in the sense of Lewis. In particular, if $L$ has a non-abelian normal Sylow subgroup, then $L$ is of type $1$. An example of this case is in Table~\ref{tab:p4}, which we now discuss.
\begin{example}~\label{exam: P4}{\em
Let $G=\texttt{SmallGroup}(960,5748)$. Then $\cd(G)=\{1,12,15\}$ and
$$G\cong (\mathbb{Z}_{2}\times\mathbb{Z}_{2}).(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{15}$$ Indeed, $G$ has a normal Sylow $2$-subgroup $P$ with $|P|=2^6=64$, $\Z P=P'=\Phi(P)$ and $|\Z P|=4$.
Using the notation that we have established above, $q=3$, $L$ is the normal $3$-complement of $G$ and $\cd(L)=\{1,4,5\}$. As $\Delta(L)\cong K_{1}+K_{1}$ and $L$ has a normal non-abelian Sylow subgroup, Remark~\ref{rem:20} implies that $L$ is of type $1$.}
\end{example}
Motivated by Example~\ref{exam: P4}, the following theorem verifies that $L$ is always a group of type $1$ and explains in part the elusiveness of the groups $G$ with $B(G)\cong P_4$ and $|\rho(G)|=3$.
\begin{theorem}~\label{thm:P4}
Suppose that $\cd(G_0)=\{1,p^\alpha q^\beta,r^\gamma q^\delta\}$. Then there exists $A\le \Z {G_0}$ such that for the factor group $G:=G_0/A$ the following holds (replacing $r$ with $p$ if necessary):
\begin{itemize}
\item[(i)] $G$ contains a normal Sylow $p$-subgroup $P$;
\item[(ii)]$P=G'=\F G$;
\item[(iii)]$P$ is semiextraspecial and $G/P$ is cyclic ($P$ is called semiextraspecial if, for all maximal subgroups $N$ of $\Z P$, the factor group $P/N$ is extraspecial);
\item[(iv)]$G/P'$ is a Frobenius group with Frobenius kernel $P/P'$;
\item[(v)]$P<\cent G{P'}<G$;
\item[(vi)]$G/\cent G{P'}$ acts as a Frobenius group on $P'$;
\item[(vii)]$\cd(G)=\cd(G_0)$, where $p^{2\alpha}=|P:P'|$ and $q^{\beta}=|G:\cent G {P'}|$;
\item[(viii)]if $L$ is the $q$-complement of $G$, then $L$ is of type $1$ in Lewis sense.
\end{itemize}
\end{theorem}
\begin{proof}
The proof follows by applying Theorem~$5.6$ in~\cite{N} in our context.
\end{proof}
Given a finite group $G$, it is easy to verify that, if $B(G)$ is a path, then both $\Delta(G)$ and $\Gamma(G)$ are paths. In particular, if $B(G)\cong P_{6}$, then $\Gamma(G)\cong P_{3}$ and hence $\Gamma(G)$ has diameter three. The converse is not always true, as we show in the following example.
\begin{example}{\em
We recall a construction from~\cite{shafiei}.
Let $p$, $q$, and $r$ be three, not necessarily distinct, primes with $q\neq r$ such that $q$ and $r$ do not divide $p^{qr}-1$, let $V$ be the additive group of the field $\mathbb{F}_{p^{qr}}$ of order $p^{qr}$, let $S$ be the Galois group of the field extension $\mathbb{F}_{p^{qr}}/\mathbb{F}_p$, let $C$ be the cyclic subgroup of order $\frac{p^{qr}-1}{p^{r}-1}$ of the multiplicative group $\mathbb{F}_{p^{qr}}^*$ and let $G:=(V\rtimes C)\rtimes S$. Then $$\cd(G)=\left\{1,q,qr,\frac{p^{qr}-1}{p^{r}-1},r\frac{p^{qr}-1}{p^{r}-1}\right\}.$$ Thus $B(G)$ is a path if and only if $\frac{p^{qr}-1}{p^{r}-1}$ is a prime power. Lemma~$3.1$ in~\cite{shafiei} yields that, if $\frac{p^{qr}-1}{p^{r}-1}$ is a prime power, then either $r$ is a power of $q$, or $p=q=2$ and $r=3$.
The first case is not possible because by hypothesis $r$ and $q$ are distinct primes. In the second case, $r=3$ is a divisor of $p^{qr}-1=63$, which contradicts our hypothesis. Thus $\frac{p^{qr}-1}{p^{r}-1}$ is not a prime power and $B(G)$ is not a path.}
\end{example}
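For concreteness, the failure of $\frac{p^{qr}-1}{p^{r}-1}$ to be a prime power can also be checked numerically for small admissible parameters; the Python sketch below (illustration only) takes $p=2$, $q=3$, $r=5$, for which $q$ and $r$ indeed do not divide $p^{qr}-1=32767$.
\begin{verbatim}
def is_prime_power(n):
    """True iff n = s^k for a prime s and k >= 1 (trial division)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            return n == 1   # prime power iff no other factor remains
        d += 1
    return True             # n itself is prime

p, q, r = 2, 3, 5           # q, r do not divide p^(q*r) - 1 = 32767
m = (p**(q*r) - 1) // (p**r - 1)
print(m, is_prime_power(m))    # 1057 False (1057 = 7 * 151)
\end{verbatim}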
Next, we prove the existence of groups $G$ with $B(G)\cong P_{n}$, for $n\in\{5,6\}$. This answers Question~1 in~\cite{H}.
\begin{example}\label{ex:5}{\rm
When $n=5$, we give infinitely many groups $G$ with $B(G)\cong P_5$. Our example is due to P\'{e}ter P\'{a}l P\'{a}lfy, and we gratefully acknowledge his contribution.
Let $r$ be an odd prime, let $p$ be a prime with $p\equiv 1\pmod {2r}$ and let $P$ be an extraspecial group of order $p^{3}$ with exponent $p$. (Observe that the existence of infinitely many primes $p$ follows, for instance, from Dirichlet's theorem on primes in arithmetic progressions.) The group $P$ has the following presentation:
$$P=\langle x_{1},x_{2}\mid x_{1}^{p}=x_{2}^{p}=[x_1,x_2]^p=[x_1,[x_1,x_2]]=[x_2,[x_1,x_2]]=1\rangle.$$
Since $2r$ divides $p-1$, there exists $\alpha\in \mathbb{Z}$ such that $\alpha $ has order $2r$ in the multiplicative group of $(\mathbb{Z}/p\mathbb{Z})^*$. Set $\beta:=\alpha^{r-1}$ and observe that $\beta$ has order $r$ in the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^*$ because $\gcd(2r,r-1)=2$. From the presentation of $P$, we see that the mapping
$$x_{1}\mapsto x_{1}^{\alpha},\quad x_{2}\mapsto x_{2}^{\beta}$$
defined on the generators of $P$ extends to an automorphism of $P$ of order $2r$, which we denote by $y$.
Set $G:=P\rtimes \langle y\rangle$ with respect to the above action. We have
$$[x_1,x_2]^y=[x_1^y,x_2^y]=[x_1^\alpha,x_2^\beta]=[x_1,x_2]^{\alpha\beta}=[x_1,x_2]^{\alpha\alpha^{r-1}}=[x_1,x_2]^{\alpha^r}.$$ Since $\alpha^r$ has order $2$ modulo $p$, we obtain $[x_1,x_2]^y=[x_1,x_2]^{-1}$ and hence $y$ acts on $\Z P$ by inverting its elements. From this, it is easy to see that $\cd(G)=\{1,r,2r,2p\}$, so $B(G)$ is a path of length five.
}
\end{example}
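The arithmetic behind the choice of $\alpha$ and $\beta$ in Example~\ref{ex:5} is easy to check numerically; the Python sketch below (illustration only) takes $r=3$ and $p=7\equiv 1\pmod 6$ and confirms that $\beta=\alpha^{r-1}$ has order $r$ and that $\alpha^{r}\equiv -1\pmod p$.
\begin{verbatim}
def order(a, p):
    """Multiplicative order of a modulo the prime p."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

r, p = 3, 7   # p = 7 is congruent to 1 modulo 2r = 6
alpha = next(a for a in range(2, p) if order(a, p) == 2 * r)
beta = pow(alpha, r - 1, p)
print(alpha, order(beta, p) == r, pow(alpha, r, p) == p - 1)
# 3 True True
\end{verbatim}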
\begin{example}\label{ex:6}{\rm
When $n=6$, our construction is based on some preliminary theoretical work and then its implementation on a computer. Here, we report only the outcome of our computations because we are not able to give a general construction.
Let $P$ be the group $\texttt{SmallGroup}(256,3679)$.
One can check that the group $P$ has nilpotency class $3$, $|P:\gamma_2(P)|=|\gamma_2(P):\gamma_3(P)|=2^3=8$ and $|\gamma_3(P)|=4$. Moreover, each section of the lower central series has exponent $2$. Let $T$ be a Hall $2'$-subgroup in the automorphism group of $P$. A simple computation yields that $T$ is non-abelian and has cardinality $21$. Let $G$ be the semidirect product $P\rtimes T$. Then $G$ is a solvable group having cardinality $5\,376=2^8\cdot 3\cdot 7$ and a computation yields $\cd(G)=\{1,3,7,14,24\}.$ Therefore $B(G)$ is a path of length $6$. Furthermore, considering $G$ as a permutation group of degree $32$, it is generated by $\alpha_{1}$, $\alpha_{2}$, and $\alpha_{3}$, where
\begin{align*}
&\text{\small$\alpha_{1}:=(1,2)(3,14,9,20)(4,15,19,27)(5,7)(6,8)(10,21,18,26)(11,22,12,23)(13,16)(17,31,25,29)(24,32,28,30)$},\\
&\text{\small$\alpha_{2}:=(2,27,25,19,21,10,31)(3,29,8,22,17,11,14)(4,20,9,30,16,15,24)(7,23,28,12,26,18,32)$},\\
&\text{\small$\alpha_{3}:=(3,12,30)(4,29,18)(5,13,6)(7,16,8)(9,11,32)(10,19,31)(14,23,24)(15,17,26)(20,22,28)(21,27,25)$}.
\end{align*}
This is the smallest group we managed to construct with $B(G)$ a path of length $6$.}
\end{example}
Now that we have established the existence of a group $G$ with $B(G)\cong P_n$ for $n\in\{5,6\}$, it would be interesting to give a classification of this family of groups.
\begin{problem}Give structural information on the finite groups $G$ with $B(G)\cong P_{n}$ for $n\in\{5,6\}$.
\end{problem}
\vskip 0.4 true cm
\subsection{Union of paths}\label{unionpaths}
As explained in~\cite[Example 3.4]{N}, given a prime $p$ and two positive integers $k$ and $m$ with $k$ dividing $p^m\pm 1$, there exists a solvable group $PH$ such that $\cd(PH)=\{1,k,p^{m}\}$. (More information on the structure of $PH$ is in~\cite{N}, but this is of no concern here.) In particular, when $|\pi(k)|=2$, $B(PH)$ is the first graph in Figure~$\ref{fig: 5}$, and hence $B(PH)$ is a union of paths.
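(For instance, taking $p=2$, $m=4$ and $k=15=2^4-1$, we obtain a solvable group $PH$ with $\cd(PH)=\{1,15,16\}$, so that $B(PH)$ is the union of the single edge joining $2$ and $16$ and the path with consecutive vertices $3,\,15,\,5$.)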
On the other hand, $\cd(M_{10})=\{1,9,10,16\}$ and $\cd(\mathrm{PSL}_2(25))=\{1,13,24,25,26\}$, and hence $B(M_{10})$ and $B(\mathrm{PSL}_2(25))$ have two connected components which are both paths: $B(M_{10})=P_1+P_3$ and $B(\mathrm{PSL}_2(25))=P_1+P_5$.
These examples motivate the investigation of the finite groups $G$ such that each connected component of $B(G)$ is a path. To this end, we first let $G$ be a finite non-solvable group. In the following theorem, we refine the main result of~\cite{H}, cf.~\cite[Theorem~6]{H}.
\begin{theorem}~\label{thm:99}
Let $G$ be a finite non-solvable group with $B(G)$ a union of paths. Then $B(G)$ is disconnected, $B(G)$ has $2$ or $3$ connected components and $|\cd(G)|\in \{4,5\}$.
Moreover, we have one of the following cases:
\begin{itemize}
\item[(i)]$n(B(G))=2$, $|\cd(G)|=4$, $G$ has a normal subgroup $U$ such that $U\cong \mathrm{PSL}_2(q)$ or $U\cong \mathrm{SL}_2(q)$ for some odd $q\ge 5$ and, if $C:=\cent G U$, then $C\le \Z G$ and $G/C\cong \mathrm{PGL}_2(q)$. Thus $\cd(G)=\{1,q,q-1,q+1\}$.
\item[(ii)]$n(B(G))=2$, $|\cd(G)|=4$, $G$ has a normal subgroup of index $2$ that is a direct product of $\mathrm{PSL}_2(9)$ and a central subgroup $C$. Furthermore, $G/C\cong M_{10}$ and $\cd(G)=\{1,9,10,16\}$.
\item[(iii)]$n(B(G))=2$, $|\cd(G)|=5$, $G$ has a solvable normal subgroup $V$ such that $G/V$ is almost simple and isomorphic to one of $\mathrm{PSL}_2(q)$, $\mathrm{PGL}_2(q)$, $\mathrm{P}\Gamma\mathrm{L}_2(2^r)$ (for some prime number $r$), $M_{10}$,
$\mathrm{P}\Sigma\mathrm{L}_2(9)$, or $\mathrm{PGL}_2(3^{2f'}).2$ (for some $f'\ge 1$).
\item[(iv)]$n(B(G))=3$, $\cd(G)=\{1,2^n,2^n-1,2^n+1\}$ and $G\cong \mathrm{PSL}_2(2^{n})\times A$, where $A$ is an abelian group and $n\geq 2$.
\end{itemize}
\end{theorem}
\begin{proof}
From Theorem~\ref{thm:990}, the non-solvability of $G$ implies that $B(G)$ is disconnected. By~\cite[Theorem 6.4]{L}, we have $n(\Delta(G))\le 3$ and, by~\cite{IP}, we have $n(B(G))=n(\Delta(G))$. Thus $B(G)$ is disconnected with at most three connected components.
When $n(B(G))=3$, the result follows from~\cite{H} and we obtain part~(iv). Suppose then $n(B(G))=2$. From~\cite{H}, we deduce $|\cd(G)|\in \{4,5\}$. When $|\cd(G)|=4$, the result follows from~\cite[Theorem~A]{GA} and we obtain parts~(i) and~(ii). Finally, suppose that $|\cd(G)|=5$. The finite groups having $5$ (or $6$) character degrees are classified in~\cite[Corollary~C]{LZ}. From this classification, we see that $G$ contains a normal solvable subgroup $V$ with $G/V$ almost simple with socle $\mathrm{PSL}_2(p^f)$ (with $p^f\ge 4$), or $\mathrm{PSL}_3(4)$, or $^{2}B_2(2^{2m+1})$ (with $m\ge 1$). The cases $\mathrm{PSL}_3(4)$ and $^{2}B_2(2^{2m+1})$ do not arise here because any almost simple group having socle one of these two groups has $6$ irreducible complex character degrees. In particular, $G/V$ is almost simple with socle $\mathrm{PSL}_2(p^f)$. Now, the actual structure of $G/V$ can be inferred from the work of White~\cite[Theorem~A]{white}. Indeed, White computes explicitly the character degrees of each almost simple group $X$ having socle $\mathrm{PSL}_2(p^f)$. In part~(iii), we have selected the groups $X$ with $B(X)$ a union of paths.
\end{proof}
Observe that the converse of Theorem~\ref{thm:99} does not hold; for instance, $B(\textrm{PSL}_2(2^n))$ consists of three connected components for each $n\ge 2$, however these connected components are not necessarily paths (this depends on number-theoretic questions concerning the factorization of $2^n-1$ and $2^n+1$).
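For instance, $2^{12}-1=3^2\cdot 5\cdot 7\cdot 13$ and $2^{12}+1=17\cdot 241$, so the connected component of $B(\mathrm{PSL}_2(2^{12}))$ containing the degree $2^{12}-1$ is a star with four leaves and, in particular, $B(\mathrm{PSL}_2(2^{12}))$ is not a union of paths.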
\vskip 0.3 true cm
Now, we let $G$ be a {\em solvable} group with $B(G)$ disconnected and a union of paths (the connected case was discussed in the previous section). As $G$ is solvable, we have $n(B(G))=2$ and $|\rho(G)|\geq 2$. Here we describe the structure of $G$ using the types introduced by Lewis. By Remark~\ref{rem:20}, type $3$ does not occur. As $\Delta(G)$ is triangle-free, \cite[Lemma~$2.2$]{Tong-viet} yields $|\rho(G)|\leq 4$. Using $n(B(G))=2$ and $2\le |\rho(G)|\le 4$, a simple case-by-case analysis gives that $B(G)$ is one of the graphs drawn in Figures~\ref{fig: 4},~\ref{fig: 5} or~\ref{fig: 6}. We have tabulated some information on these graphs and the groups yielding them in Table~\ref{tab:TTCR}. This table consists of three columns: the first column contains one of the graphs $\Gamma$ in Figures~\ref{fig: 4},~\ref{fig: 5} or~\ref{fig: 6}, the second column lists the Lewis types $X\subseteq\{1,2,3,4,5,6\}$ of the groups $G$ with $\Gamma\cong B(G)$ and in the third column we exhibit (for each $x\in X$) a group of Lewis type $x$ with $\Gamma\cong B(G)$. The task in the rest of this section is proving the correctness of Table~\ref{tab:TTCR}.
\begin{lemma}\label{lemma:new}
Let $G$ be a group with $B(G)$ isomorphic to the second graph in Figure~$\ref{fig: 4}$. Then $G$ is not of Lewis type $4$.
\end{lemma}
\begin{proof}
We argue by contradiction and suppose that $G$ is of type $4$. We use the notation established in~\cite{ML2} for the groups of type $4$ and we suppose that the reader is familiar with basic properties of groups in this class (in particular with Example~$2.4$ and Lemma~$3.4$ in~\cite{ML2}). Since $\cd(G)=\cd(G/\Z G)$ by~\cite[Lemma~$3.4$]{ML2}, we may suppose $\Z G=1$. In particular, $G=V\rtimes H$ and $H$ acts irreducibly as a linear group on the elementary abelian $p$-group $V$.
Recall, from~\cite{ML2}, that $K:=\F H$, $m:=|E:K|>1$, $K$ is cyclic, $|V|=q^m$ where $q$ is a power of the prime $p$, and $(q^m-1)/(q-1)$ divides $|K|$. Now, $\cd(G|V)=\{|K|\}$ and $\cd(G/V)$ consists of $1,m$ and eventually some other divisors of $m$, see~\cite[Lemma~$3.4$]{ML2}.
As $B(G)$ is isomorphic to the second graph in Figure~\ref{fig: 4}, we deduce that $|K|$ is a prime power (say $|K|=r^\ell$ for some prime number $r$ and some positive integer $\ell$), $m$ is a prime power (say $m=s^t$ for some prime number $s$ and some positive integer $t$) and $\cd(G/V)=\{1,s^t,s^{t'}\}$ for some $0<t'<t$. In particular, $t\ge 2$.
Recall that, given two positive integers $x$ and $y$, the prime number $z$ is said to be a primitive prime divisor of $x^y-1$ if $z$ divides $x^y-1$, but (for every $i\in \{1,\ldots,y-1\}$) $z$ does not divide $x^i-1$.
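(For instance, $7$ is a primitive prime divisor of $2^{3}-1$, whereas $2^{6}-1=63=3^{2}\cdot 7$ has no primitive prime divisor, since $3$ divides $2^{2}-1$ and $7$ divides $2^{3}-1$.)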
Let $z$ be a primitive prime divisor of $q^{m}-1=q^{s^t}-1$. As $(q^m-1)/(q-1)$ divides $|K|=r^\ell$, we deduce $z=r$. As $(q^{s^{t-1}}-1)/(q-1)$ divides $(q^{s^t}-1)/(q-1)$ and $t\ge 2$, we deduce that $r$ divides $q^{s^{t-1}}-1$, contradicting the fact that $r$ is a primitive prime divisor of $q^m-1=q^{s^t}-1$. Therefore $q^{s^t}-1$ has no primitive prime divisors. From a celebrated theorem of Zsigmondy~\cite{zsigmondy}, we deduce that either $q=2$ and $s^t=6$, or $s^t=2$ and $q$ is a Mersenne prime. However, $6$ is not a prime power and $s^t=2$ contradicts $t\ge 2$, so in both cases we obtain a contradiction.
\end{proof}
\begin{lemma}\label{lemma:new1}
Let $G$ be a group with $B(G)$ isomorphic to the second graph in Figure~$\ref{fig: 5}$, to the first or to the third graph in Figure~$\ref{fig: 6}$, or to the third graph in Figure~\ref{fig: 2}. Then $G$ is not of Lewis type $6$.
\end{lemma}
\begin{proof}
We argue by contradiction and suppose that $G$ is of type $6$. We use the notation established in~\cite{ML2} for the groups of type $6$ and we suppose that the reader is familiar with basic properties of groups in this class (in particular with Example~$2.6$ and Lemma~$3.6$ in~\cite{ML2}). From~\cite[Lemma~$3.6$~(v),(vi)]{ML2}, $\cd(G)=\cd(G/A')\cup\cd(G|A')$, where $G/A'$ is a group of Lewis type $4$ and $\cd(G|A')$ consists of degrees that divide $|P||E:F|$ and are divisible by $p|B|$ (observe that $p$ and $|B|$ are relatively prime). These facts together imply that $B(G/A')$ is a union of paths and is obtained by deleting some blue vertices of $B(G)$ having at least two light blue neighbors. If $B(G)$ is the third graph in Figure~\ref{fig: 6}, then we can delete only one vertex from $B(G)$ in order to obtain $B(G/A')$. However, the resulting graph has three connected components, contradicting the fact that $G/A'$ is solvable. If $B(G)$ is the first graph in Figure~\ref{fig: 6}, then again we can delete only one vertex from $B(G)$, and the resulting graph is the second graph in Figure~\ref{fig: 4}. However, Lemma~\ref{lemma:new} excludes this possibility. If $B(G)$ is the second graph in Figure~\ref{fig: 5}, then once more we can delete only one vertex from $B(G)$, and the resulting graph is connected, contradicting the fact that $B(G/A')$ arises from a group of Lewis type $4$ and hence is disconnected. Finally, if $B(G)$ is the third graph in Figure~\ref{fig: 2}, then deleting one vertex yields a connected graph, which is impossible.
\end{proof}
\begin{example}\label{exa:new}
{\rm Let $P$ be the group $\mathtt{SmallGroup}(2\,187,9\,308)$. A computation shows that $\mathrm{Aut}(P)$ contains a cyclic subgroup $H$ of order $10$ such that $\cd(P\rtimes H)=\{1,2,10,27\}$ and $P\rtimes H$ has Lewis type $1$. In particular, $G:=P\rtimes H$ is a finite solvable group of order $21\,870$ of Lewis type $1$ with $\cd(G)=\{1,2,10,27\}$ and with $B(G)$ isomorphic to the third graph in Figure~\ref{fig: 5}.
This example was constructed with the help of a computer after deducing some preliminary theoretical properties. Incidentally, this is the smallest example we managed to find, but we were not able to prove that it is indeed the example of smallest cardinality with $B(G)$ isomorphic to the third graph in Figure~\ref{fig: 5}.}
\end{example}
\begin{example}\label{exa:newnew}
{\rm Let $G$ be the polycyclic group with presentation $G:=\langle x_1,\ldots,x_{15}\mid R\rangle$, where the set $R$ of polycyclic relations is given by
\begin{align*}
&x_1^2 = x_{15},\,
x_2^2 = x_{15},\,
x_3^3 = 1,\,
x_4^{11} = 1,\,
x_5^2 = x_{15},\,
x_6^2 = 1,\,
x_7^2 = 1,\,
x_8^2 = 1,\,
x_9^2 = 1,\,
x_{10}^2 = x_{15},\,
x_{11}^2 = 1,\\
& x_{12}^2 = 1,\,
x_{13}^2 = 1,\,
x_{14}^2 = 1,\,
x_{15}^2 = 1,\,
x_2^{x_1} = x_2 \cdot x_{15},\,
x_3^{x_2} = x_3^2,\,
x_4^{x_2} = x_4^{10},\,
x_5^{x_2} = x_5 \cdot x_{15},\\
&x_5^{x_3} = x_5 \cdot x_7 \cdot x_{10} \cdot x_{11} \cdot x_{12},\,
x_5^{x_4} = x_{10},\,
x_6^{x_3} = x_6 \cdot x_8 \cdot x_{11} \cdot x_{12} \cdot x_{13},
x_6^{x_4} = x_{11},\,
x_7^{x_3} = x_7 \cdot x_9 \cdot x_{12} \cdot x_{13} \cdot x_{14},\\
& x_7^{x_4} = x_{12},\,
x_8^{x_3} = x_5 \cdot x_7 \cdot x_8 \cdot x_{10} \cdot x_{12} \cdot x_{13} \cdot x_{14},\,
x_8^{x_4} = x_{13},\,
x_9^{x_3} = x_6 \cdot x_8 \cdot x_9 \cdot x_{10} \cdot x_{11} \cdot x_{12} \cdot x_{13} \cdot x_{14},\\
& x_9^{x_4} = x_{14},\,
x_{10}^{x_2} = x_7 \cdot x_8 \cdot x_{10} \cdot x_{15},\,
x_{10}^{x_3} = x_5 \cdot x_6 \cdot x_7 \cdot x_{12} \cdot x_{15},\,
x_{10}^{x_4} = x_5 \cdot x_{12} \cdot x_{13},\,
x_{10}^{x_5} = x_{10} \cdot x_{15},\,\\
& x_{10}^{x_6} = x_{10} \cdot x_{15}, \,
x_{10}^{x_7} = x_{10} \cdot x_{15}, \,
x_{10}^{x_8} = x_{10} \cdot x_{15}, \,
x_{11}^{x_2} = x_8 \cdot x_9 \cdot x_{11}, \,
x_{11}^{x_3} = x_6 \cdot x_7 \cdot x_8 \cdot x_{13} \cdot x_{15},\,\\
& x_{11}^{x_4} = x_6 \cdot x_{13} \cdot x_{14}, \,
x_{11}^{x_5} = x_{11} \cdot x_{15}, \,
x_{11}^{x_6} = x_{11} \cdot x_{15}, \,
x_{11}^{x_7} = x_{11} \cdot x_{15}, \,
x_{12}^{x_2} = x_5 \cdot x_7 \cdot x_9 \cdot x_{12}, \,\\
& x_{12}^{x_3} = x_7 \cdot x_8 \cdot x_9 \cdot x_{14} \cdot x_{15}, \,
x_{12}^{x_4} = x_7 \cdot x_{10} \cdot x_{12} \cdot x_{14}, \,
x_{12}^{x_5} = x_{12} \cdot x_{15}, \,
x_{12}^{x_6} = x_{12} \cdot x_{15}, \,\\
& x_{13}^{x_2} = x_5 \cdot x_6 \cdot x_7 \cdot x_8 \cdot x_{13} \cdot x_{15}, \,
x_{13}^{x_3} = x_5 \cdot x_7 \cdot x_8 \cdot x_9 \cdot x_{10} \cdot x_{12}, \,
x_{13}^{x_4} = x_8 \cdot x_{10} \cdot x_{11} \cdot x_{12} \cdot x_{13} \cdot x_{15}, \,\\
& x_{13}^{x_5} = x_{13} \cdot x_{15}, \,
x_{13}^{x_9} = x_{13} \cdot x_{15}, \,
x_{14}^{x_2} = x_6 \cdot x_7 \cdot x_8 \cdot x_9 \cdot x_{14} \cdot x_{15}, \,
x_{14}^{x_3} = x_5 \cdot x_6 \cdot x_7 \cdot x_8 \cdot x_9 \cdot x_{11} \cdot x_{13} \cdot x_{15}, \,\\
& x_{14}^{x_4} = x_9 \cdot x_{11} \cdot x_{12} \cdot x_{13} \cdot x_{14} \cdot x_{15}, \,
x_{14}^{x_8} = x_{14} \cdot x_{15}, \,
x_{14}^{x_9} = x_{14} \cdot x_{15}.
\end{align*}
The group $G$ has order $270\,336=2^{13}\cdot 3\cdot 11$; moreover, $G$ contains a cyclic Hall $2'$-subgroup $K$ and a normal $2$-subgroup $Q$ with $|G:QK|=2$. Furthermore, $Q/\gamma_2(Q)$ is elementary abelian of order $2^{10}=1\,024$, $\gamma_2(Q)$ is elementary abelian of order $2^2=4$ and $K$ centralizes $\gamma_2(Q)$. One may check that $G$ has Lewis type $5$. Finally, $\cd(G)=\{1,2,33,64\}$ and hence $B(G)$ is isomorphic to the second graph in Figure~\ref{fig: 5}.
As in Example~\ref{exa:new}, this example was constructed (with some luck) with the help of a computer after deducing some preliminary theoretical properties.}
\end{example}
\begin{example}\label{exa:newnewnew}
{\rm Let $G$ be the polycyclic group with presentation $G:=\langle x_1,\ldots,x_{16}\mid R\rangle$, where the set $R$ of polycyclic relations is given by
\begin{align*}
&x_1^3 = 1,
x_2^{11} = 1,
x_3^2 = x_{13},
x_4^2 = x_{13} \cdot x_{16},
x_5^2 = x_{13},
x_6^2 = x_{13},
x_7^2 = 1,
x_8^2 = 1,
x_9^2 = x_{16},
x_{10}^2 = x_{13},
x_{11}^2 = x_{13},\\
& x_{12}^2 = x_{16},
x_{13}^2 = 1,
x_{14}^2 = 1,
x_{15}^2 = 1,
x_{16}^2 = 1,
x_3^{x_1} = x_4 \cdot x_6 \cdot x_8,
x_3^{x_2} = x_4 \cdot x_6 \cdot x_7 \cdot x_8 \cdot x_{11} \cdot x_{12},\\
& x_4^{x_1} = x_5 \cdot x_7 \cdot x_9,
x_4^{x_2} = x_3 \cdot x_4 \cdot x_6 \cdot x_7 \cdot x_{12},
x_5^{x_1} = x_6 \cdot x_8 \cdot x_{10},
x_5^{x_2} = x_3 \cdot x_6 \cdot x_7 \cdot x_9,
x_5^{x_3} = x_5 \cdot x_{13},\\
& x_6^{x_1} = x_7 \cdot x_9 \cdot x_{11},
x_6^{x_2} = x_4 \cdot x_7 \cdot x_8 \cdot x_{10},
x_6^{x_3} = x_6 \cdot x_{13} \cdot x_{16},
x_6^{x_4} = x_6 \cdot x_{16},
x_6^{x_5} = x_6 \cdot x_{16},
x_7^{x_1} = x_8 \cdot x_{10} \cdot x_{12},\\
& x_7^{x_2} = x_5 \cdot x_8 \cdot x_9 \cdot x_{11} \cdot x_{13} \cdot x_{16},
x_7^{x_3} = x_7 \cdot x_{13} \cdot x_{16},
x_7^{x_4} = x_7 \cdot x_{16},
x_7^{x_5} = x_7 \cdot x_{13},
x_7^{x_6} = x_7 \cdot x_{13}, \\
& x_8^{x_1} = x_3 \cdot x_4 \cdot x_5 \cdot x_6 \cdot x_8 \cdot x_{11},
x_8^{x_2} = x_6 \cdot x_9 \cdot x_{10} \cdot x_{12} \cdot x_{16},
x_8^{x_3} = x_8 \cdot x_{16},
x_8^{x_4} = x_8 \cdot x_{16},
x_8^{x_6} = x_8 \cdot x_{13} \cdot x_{16},\\
& x_8^{x_7} = x_8 \cdot x_{13},
x_9^{x_1} = x_4 \cdot x_5 \cdot x_6 \cdot x_7 \cdot x_9 \cdot x_{12},
x_9^{x_2} = x_3 \cdot x_4 \cdot x_5 \cdot x_6 \cdot x_7 \cdot x_8 \cdot x_9 \cdot x_{10} \cdot x_{11} \cdot x_{13},
x_9^{x_3} = x_9 \cdot x_{16}, \\
& x_9^{x_4} = x_9 \cdot x_{13},
x_9^{x_5} = x_9 \cdot x_{16},
x_9^{x_6} = x_9 \cdot x_{13} \cdot x_{16},
x_9^{x_7} = x_9 \cdot x_{13} \cdot x_{16},
x_9^{x_8} = x_9 \cdot x_{16},
x_{10}^{x_1} = x_3 \cdot x_4 \cdot x_7 \cdot x_9 \cdot x_{10},\\
& x_{10}^{x_2} = x_4 \cdot x_5 \cdot x_6 \cdot x_7 \cdot x_8 \cdot x_9 \cdot x_{10} \cdot x_{11} \cdot x_{12} \cdot x_{16},
x_{10}^{x_3} = x_{10} \cdot x_{16},
x_{10}^{x_4} = x_{10} \cdot x_{13} \cdot x_{16},
x_{10}^{x_5} = x_{10} \cdot x_{13} \cdot x_{16},\\
& x_{10}^{x_6} = x_{10} \cdot x_{16},
x_{10}^{x_7} = x_{10} \cdot x_{13},
x_{10}^{x_9} = x_{10} \cdot x_{13},
x_{11}^{x_1} = x_4 \cdot x_5 \cdot x_8 \cdot x_{10} \cdot x_{11},
x_{11}^{x_2} = x_3 \cdot x_4 \cdot x_7 \cdot x_{10} \cdot x_{11} \cdot x_{12} \cdot x_{16}, \\
& x_{11}^{x_3} = x_{11} \cdot x_{13} \cdot x_{16},
x_{11}^{x_4} = x_{11} \cdot x_{13},
x_{11}^{x_6} = x_{11} \cdot x_{13} \cdot x_{16},
x_{11}^{x_8} = x_{11} \cdot x_{13} \cdot x_{16},
x_{11}^{x_9} = x_{11} \cdot x_{13},
x_{11}^{x_{10}} = x_{11} \cdot x_{13} \cdot x_{16},\\
& x_{12}^{x_1} = x_5 \cdot x_6 \cdot x_9 \cdot x_{11} \cdot x_{12},
x_{12}^{x_2} = x_3 \cdot x_6 \cdot x_9 \cdot x_{11} \cdot x_{12} \cdot x_{13},
x_{12}^{x_3} = x_{12} \cdot x_{13},
x_{12}^{x_4} = x_{12} \cdot x_{13},
x_{12}^{x_5} = x_{12} \cdot x_{13} \cdot x_{16},\\
& x_{12}^{x_6} = x_{12} \cdot x_{13} \cdot x_{16},
x_{12}^{x_8} = x_{12} \cdot x_{13},
x_{12}^{x_9} = x_{12} \cdot x_{13},
x_{12}^{x_{10}} = x_{12} \cdot x_{16},
x_{12}^{x_{11}} = x_{12} \cdot x_{13} \cdot x_{16},
x_{15}^{x_{14}} = x_{15} \cdot x_{16}.
\end{align*}
The group $G$ has order $540\,672=2^{14}\cdot 3\cdot 11$; moreover, $G$ contains a cyclic Hall $2'$-subgroup $K$ and a normal Sylow $2$-subgroup $P$. Furthermore, $P/\gamma_2(P)$ is elementary abelian of order $2^{12}=4\,096$, $\gamma_2(P)$ is elementary abelian of order $2^2=4$ and $K$ centralizes $\gamma_2(P)$. One may check that $G$ has Lewis type $1$. Finally, $\cd(G)=\{1,32,33,64\}$ and hence $B(G)$ is isomorphic to the second graph in Figure~\ref{fig: 5}.
}
\end{example}
\begin{remark}{\rm
If $|\cd(G)^{*}|=4$, then $B(G)$ is either the first or the second graph in Figure~\ref{fig: 6} or the third graph in Figure~\ref{fig: 4}. With the exception of the second graph in Figure~\ref{fig: 6}, the graph $\Gamma(G)$ has no isolated vertices and hence, by~\cite[Theorem 5.2]{ML2}, we deduce that $G$ has a normal non-abelian Sylow subgroup. Now Remark~\ref{rem:20} implies that $G$ is a group of type $1$ or $6$ in the sense of Lewis. For the first graph in Figure~\ref{fig: 6} the case of Lewis type $6$ is excluded by Lemma~\ref{lemma:new1}.
If $B(G)$ is the last graph in Figure~\ref{fig: 4}, then both connected components of $\Delta(G)$ are isolated vertices; so by Remark~\ref{rem:20} and the previous results we conclude that $G$ is a group of type $1$ (see also~\cite[Theorem~$3.1$]{LL}).
If $B(G)$ is the second graph in Figure~\ref{fig: 6}, then $\Gamma(G)\cong K_{1}+P_{2}$ consists of one isolated vertex and one path of length two and hence, by~\cite[Theorem 3.3]{LL}, we deduce that $G$ is either a group of type $1$ or $4$ in the sense of Lewis. If $G$ had no non-abelian normal Sylow subgroup, then~\cite[Theorem 5.2]{ML2} would imply that the prime divisors of the isolated vertex of $\Gamma(G)$ lie in the larger component of $\Delta(G)$, which is not the case here. Thus $G$ has a non-abelian normal Sylow subgroup. This implies that $G$ is not a group of type $4$ and so it is a group of type $1$.
\vskip 0.3 true cm
Suppose $B(G)$ is one of the first two graphs in Figure~\ref{fig: 4}. As $\Delta(G)$ has two isolated vertices, from Remark~\ref{rem:20} we conclude that $G$ is neither a group of type $3$ nor of type $6$ in the sense of Lewis. If $B(G)$ is the first graph in Figure~\ref{fig: 4}, then it is a $1$-regular bipartite graph. The structure and the Lewis type of such a group are explicitly described in Theorem~\ref{thm: reg} below (and we refer the reader to this theorem for a detailed description). Finally, if $B(G)$ is the second graph in Figure~\ref{fig: 4}, then $G$ is a group of type $1$, $2$ or $5$ in the sense of Lewis (type $4$ does not arise because of Lemma~\ref{lemma:new}).
\vskip 0.3 true cm
Suppose $B(G)$ is either the first or the third graph in Figure~\ref{fig: 5}. By Remark~\ref{rem:20}, $G$ is not a group of type $2$ or $3$. If $B(G)$ is the third graph in this figure and $G$ has no non-abelian normal Sylow subgroup, then by~\cite[Theorem 5.2]{ML2} we conclude that the prime divisors of the isolated vertex of $\Gamma(G)$ lie in the larger component of $\Delta(G)$, which is not the case for this graph. Hence $G$ has a non-abelian normal Sylow subgroup, which implies that $G$ is of Lewis type $1$ or $6$.
Suppose now that $B(G)$ is the first graph in Figure~\ref{fig: 5}. As $\cd(G)^{*}$ consists of two coprime degrees, $G$ has Fitting height $2$ or $3$ and, depending on its Fitting height, one of the structures explained in~\cite[Lemma 4.1]{L1998}. By~\cite[Lemma 4.1, Theorem 4.5]{ML2}, we have $h(G)=2$ if and only if $G$ is a group of type $1$. When $h(G)=3$, \cite[Lemma 4.1(a-iii)]{L1998} implies that $\F G$ is abelian and, in particular, $G$ has no non-abelian normal Sylow subgroup. Hence, by Remark~\ref{rem:20}, $G$ is either of Lewis type $4$ or $5$. Suppose $G$ is of Lewis type $5$. Using the notation of~\cite[Lemma 3.5]{ML2}, we deduce that $\{1,2,2^{a}+1\}\subseteq\cd(G)$, that $\cd(G|Q')\neq\emptyset$ as $Q$ is non-abelian, and that $\cd(G|Q')$ contains powers of $2$ that are divisible by $2^a$. Hence $a=1$, $\cd(G|Q')=\{2\}$ and $\cd(G)=\{1,2,3\}$, which is not the case. Thus $G$ is not of type $5$, so it is of type $4$.
If $B(G)$ is the last graph in Figure~\ref{fig: 6}, then $\Gamma(G)$ has no isolated vertices and hence~\cite[Theorem 5.2]{ML2} implies that $G$ has a non-abelian normal Sylow subgroup. Now Remark~\ref{rem:20} shows that $G$ is either a group of type $1$ or $6$. The case of Lewis type $6$ is excluded by Lemma~\ref{lemma:new1}.
Finally, if $B(G)$ is the second graph in Figure~\ref{fig: 5}, then $G$ is either a group of type $1$, $4$, $5$ or $6$. The case of Lewis type $6$ is excluded by Lemma~\ref{lemma:new1}.
We have summarized this remark in Table~\ref{tab:TTCR}.
}
\end{remark}
\begin{table}[ht]
\caption{Lewis types when $B(G)$ is a union of paths}
\centering
\begin{tabular}{|c|c|l|}\hline
Graph & Types & Examples \\
\hline
nr~1 Figure~\ref{fig: 4}&1,4&$\mathtt{SmallGroup}(24,3)$ has type $1$\\
&&$\mathtt{PrimitiveSolvablePermGroup}(2,2,1,1)$ has type $4$\\
\hline
nr~2 Figure~\ref{fig: 4}&1,2,5&$\mathtt{SmallGroup}(288,860)$ has type $1$\\
&&$\mathtt{PrimitiveGroup}(9,6)$ has type $2$\\
&&$\mathtt{SmallGroup}(48,28)$ has type $5$\\
\hline
nr~3 Figure~\ref{fig: 4}&1&A family of examples is constructed in~\cite[Theorem 2.3]{LL}; the smallest arises by taking\\
&&(using the notation in~\cite[Theorem~$2.3$]{LL}) $p=3$, $a=2$ and $b=4$\\
\hline
nr~1 Figure~\ref{fig: 5}&1,4&The group $PH$ in~\cite[Example 3.4]{N} where $|\pi(k)|= 2$ is of Lewis type $1$\\
&&$\mathtt{PrimitiveSolvablePermGroup}(4,2,2,2)$ has type $4$\\
\hline
nr~2 Figure~\ref{fig: 5}&1,4,5&See Example~\ref{exa:newnewnew} for a group of Lewis type $1$\\
&&$\mathtt{PrimitiveSolvablePermGroup}(4,2,1,4)$ has type $4$\\
&&See Example~\ref{exa:newnew} for a group of Lewis type $5$\\
\hline
nr~3 Figure~\ref{fig: 5}&1,6&See Example~\ref{exa:new} for a group of Lewis type $1$\\
&&$\mathtt{SmallGroup}(1344,816)$ has type $6$\\
\hline
nr~1 Figure~\ref{fig: 6}&1&A family of examples of type $1$ is constructed in~\cite[Theorem 2.3]{LL}; the smallest arises\\
&&by taking (using the notation in~\cite[Theorem~$2.3$]{LL}) $p=5$, $a=12$ and $b=24$\\
\hline
nr~2 Figure~\ref{fig: 6}&1&$\mathtt{SmallGroup}(1920,240059)$ has type $1$ and $\cd(G)^*=\{3,5,8,15\}$\\
\hline
nr~3 Figure~\ref{fig: 6}&1& For any three distinct primes $q$, $r$ and $s$ with $q\equiv 3\pmod 4$ and $q\equiv 1 \pmod{rs}$, \\&&there exists a solvable group $G$ with $\cd(G)=\{1,r,s,rs,q^{4},q^{5}\}$, see~\cite[Section~4]{Benjamin}.\\
&&This group has cardinality $q^{12}rs$.\\
\hline
\end{tabular}
\label{tab:TTCR}
\end{table}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node (m3) at (3,1) {};
\node[fill=blue] (m4) at (2,3) {};
\node[fill=blue] (m5) at (3,3) {};
\foreach \from/\to in {m2/m4,m3/m5}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (3,3) {};
\node[fill=blue] (m4) at (1,3) {};
\node (m5) at (3,1) {};
\node[fill=blue] (m6) at (4,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m6}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (3,3) {};
\node[fill=blue] (m4) at (1,3) {};
\node (m5) at (5,1) {};
\node[fill=blue] (m6) at (4,3) {};
\node[fill=blue] (m7) at (6,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m6,m5/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{$B(G)$ is a union of paths with $|\rho(G)|=2$}
\label{fig: 4}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node (m3) at (3,1) {};
\node[fill=blue] (m4) at (3,3) {};
\node[fill=blue] (m5) at (4,3) {};
\node (m6) at (5,1) {};
\foreach \from/\to in {m2/m4,m3/m5,m5/m6}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (3,3) {};
\node (m6) at (3,1) {};
\node[fill=blue] (m7) at (4,3) {};
\node (m8) at (5,1) {};
\foreach \from/\to in {m2/m3,m2/m4,m6/m7,m7/m8}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (8,1) {};
\node[fill=blue] (m3) at (7,3) {};
\node[fill=blue] (m4) at (9,3) {};
\node (m5) at (10,1) {};
\node (m6) at (11,1) {};
\node[fill=blue] (m7) at (10,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m4,m6/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{$B(G)$ is a union of paths with $|\rho(G)|=3$, and $|\cd(G)^{*}|\leq 3$}
\label{fig: 5}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (8,1) {};
\node[fill=blue] (m3) at (7,3) {};
\node[fill=blue] (m4) at (9,3) {};
\node[fill=blue] (m5) at (4,3) {};
\node (m6) at (5,1) {};
\node[fill=blue] (m7) at (6,3) {};
\node (m8) at (7,1) {};
\foreach \from/\to in {m2/m3,m2/m4,m6/m5,m6/m7,m7/m8}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node[fill=blue] (m2) at (6,3) {};
\node[fill=blue] (m3) at (7,3) {};
\node[fill=blue] (m4) at (9,3) {};
\node[fill=blue] (m5) at (11,3) {};
\node (m6) at (7,1) {};
\node (m7) at (8,1) {};
\node (m8) at (10,1) {};
\foreach \from/\to in {m2/m6,m7/m4,m7/m3,m8/m4,m5/m8}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node[fill=blue] (m2) at (6,3) {};
\node (m6) at (7,1) {};
\node[fill=blue] (m9) at (8,3) {};
\node[fill=blue] (m4) at (9,3) {};
\node[fill=blue] (m5) at (11,3) {};
\node (m7) at (10,1) {};
\node (m8) at (12,1) {};
\node[fill=blue] (m10) at (13,3) {};
\foreach \from/\to in {m2/m6,m6/m9,m7/m4,m8/m5,m5/m7,m8/m10}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{$B(G)$ is a union of paths with $|\rho(G)|=3$, and $|\cd(G)^{*}|\geq 4$}
\label{fig: 6}
\end{figure}
In analyzing the graphs in this section, the reader should observe how the investigation of $B(G)$ requires the techniques developed for studying the graphs $\Gamma(G)$ and $\Delta(G)$ and {\em also} some number-theoretic considerations.
We conclude this section by proposing the following problem, which generalizes Question~1 in~\cite{H}. (Recall that we have solved~\cite[Question~1]{H} in Examples~\ref{ex:5} and~\ref{ex:6}.)
\begin{problem}
{\rm Determine all graphs $X$ with no cycles such that there exists a group $G$ with $X\cong B(G)$.}
\end{problem}
\subsection{Cycles and complete bipartite graphs}
Recall (from the introductory section) that Taeri~\cite{T} has proved that the bipartite divisor graph for the set of conjugacy class sizes of a finite group $G$ is a cycle if and only if it is a cycle of length six; moreover, this happens if and only if $G\cong A\times \mathrm{SL}_2(q)$, for some abelian group $A$ and some $q\in \{4,8\}$. The situation is very different and much richer for irreducible character degrees. From~\cite[Section 6]{LM} we know that, for every pair of odd primes $p$ and $q$ such that $p$ is congruent to $1$ modulo $3$ and $q$ is a divisor of $p+1$, there exists a solvable group $G$ such that $\cd(G)=\{1,3q,p^{2}q,3p^{3}\}$.
This gives an example of a solvable group $G$ with $B(G)$ a cycle of length $6$.
On the other hand, among groups of order $588$, there are exactly two groups $G$ with $B(G)$ a cycle of length four. These groups have $\cd(G)=\{1,6,12\}$.
In~\cite{H}, it is shown that if $G$ is a finite group with $B(G)$ a cycle of length $n\geq 6$, then $\Delta(G)$ and $\Gamma(G)$ are cycles. This fact yields the following theorem.
\begin{theorem}[{{See~\cite[Theorem~4.5]{H}}}]~\label{thm:55}
Let $G$ be a finite group with $B(G)$ a cycle of length $n$. Then $n\in\{4,6\}$, $G$ is solvable, and $dl(G)\leq |\cd(G)|\leq 4$. In particular, if $B(G)$ is a cycle of length $4$, then there exists a normal abelian Hall subgroup $N$ of $G$ such that $\cd(G)=\{[G:I_{G}(\lambda)] : \lambda\in \Irr(N)\}$.
\end{theorem}
Since the cycle of length four is also the complete bipartite graph $K_{2,2}$, it seems natural to discuss here also the case $B(G)\cong K_{m,n}$, for some positive integers $m\geq 2$ and $n\geq 2$. When $B(G)$ is complete bipartite, the graphs $\Delta(G)$ and $\Gamma(G)$ are both complete. Therefore, by~\cite[Theorem 7.3]{L} or~\cite[Main Theorem]{BCLP}, we deduce that $G$ is solvable. The best structural result on $G$ is given by Moosavi~\cite{mus}.
\begin{theorem}[{{See~\cite{mus}}}]
Let $G$ be a finite group with $B(G)$ complete bipartite. Then $G=AH$, where $A$ is an abelian normal Hall subgroup of $G$ and $H$ is either abelian or a non-abelian $p$-group for some prime $p$.
\end{theorem}
Here we observe that every complete bipartite graph occurs as $B(G)$ for some group $G$. The analogous problem for the bipartite divisor graph for the set of conjugacy class sizes seems considerably harder; it is widely open and it is stated in~\cite{HSpiga}.
\begin{proposition}\label{prop:2^m-1}For all positive integers $m$ and $k$, there exists a group $G$ with $B(G)\cong K_{m,k}$.
\end{proposition}
\begin{proof}
Let $m$ be a positive integer and let $p_1,\ldots,p_m$ be $m$ distinct prime numbers. Set $n:=p_1\cdots p_m$. From Dirichlet's theorem on primes in arithmetic progression, there exists a prime $p$ with $p\equiv 1\pmod n$; fix one such prime $p$ and let $P$ be a cyclic group of order $p$. Next, let $\alpha$ be an automorphism of $P$ of order $n$ and set $H:=\langle P,\alpha\rangle$. Clearly, $H$ is a Frobenius group of order $np$, with cyclic Frobenius complement $\langle \alpha\rangle$, with cyclic Frobenius kernel $P$ and with $\cd(H)=\{1,n\}$.
Let $k$ be a positive integer and let $G:=H^k$ be the direct product of $k$ copies of $H$. Clearly, $$\cd(G)=\{1,n,n^2,\ldots,n^k\}$$
and hence $B(G)$ is the complete bipartite graph $K_{m,k}$.
\end{proof}
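To illustrate the construction, take $m=k=2$ and $p_1=2$, $p_2=3$, so that $n=6$, and choose $p=7$. Then $H$ is the Frobenius group of order $42$ with $\cd(H)=\{1,6\}$, the group $G=H\times H$ has order $1\,764$ and $\cd(G)=\{1,6,36\}$, and $B(G)\cong K_{2,2}$ is a cycle of length four.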
\section{Regular Bipartite Divisor Graph}\label{sec:reg}
A graph is said to be $k$-regular if each of its vertices has valency $k$.
Since cycles are $2$-regular connected graphs, the investigation of groups $G$ with $B(G)$ a cycle has inspired the study~\cite{Hregular} of groups $G$ where $B(G)$ is $k$-regular. It is clear that $0$-regular graphs (that is, empty graphs) play no role in the study of bipartite divisor graphs. So we start by discussing the influence of the $1$-regularity of $B(G)$ (that is, of $B(G)$ being a complete matching) on the group structure of $G$. (Theorem~\ref{thm: reg} is a refinement of~\cite[Theorem~$2.1$]{Hregular}, where we have improved its statement by taking into account~\cite{BH}.)
\begin{theorem}[{{See~\cite{Hregular}}}]~\label{thm: reg}
Let $G$ be a finite group with $B(G)$ $1$-regular. Then one of the following occurs:
\begin{itemize}
\item[(1)]$G$ is non-solvable, $B(G)=K_2+K_2+K_2$, $G\cong A\times \mathrm{PSL}_2(2^{n})$, where $A$ is abelian and $n\in\{2,3\}$;
\item[(2)]$G$ is solvable and one of the following cases holds:
\begin{itemize}
\item[(i)]$B(G)\cong K_2$ and $\cd(G)=\{1,p^\alpha\}$, for some prime $p$ and some positive integer $\alpha$. Moreover, either
\begin{itemize}
\item[(a)]$G\cong P\times A$, where $P$ is a non-abelian $p$-group and $A$ is abelian, or
\item[(b)]$\alpha=1$, $\F G$ is abelian and $|G:\F G|=p$, or
\item[(c)]$G'\cap \Z G=1$ and $G/\Z G$ is a Frobenius group with kernel $(G'\times \Z G)/\Z G$ and cyclic complement of order $p^\alpha=|G:G'\times \Z G|$.
\end{itemize}
\item[(ii)]$B(G)\cong K_2+K_2$, $h(G)\in\{2,3\}$ and $G$, depending on its Fitting height, has one of the two structures described in~\cite[Lemma 4.1]{L1998}. In particular:
\begin{itemize}
\item[(a)] If $h(G)=3$, then $\cd(G)=\{1,[G:{\bf F}_{2}(G)],[{\bf F}_{2}(G):\F G]\}$, where $[G:{\bf F}_{2}(G)]$ is a prime $s$ and ${\bf F}_{2}(G)/\F G$ is a cyclic $t$-group for some prime $t\neq s$. Moreover, $G$ has Lewis type $4$.
\item[(b)] If $h(G)=2$, then $\cd(G)=\{[G:\F G]\}\cup \cd(\F G)$, where $G/\F G$ is a cyclic $t$-group for some prime $t$ and $|\cd(\F G)|=2$. Moreover, $G$ has Lewis type $1$.
\end{itemize}
\end{itemize}
\end{itemize}
\end{theorem}
\begin{proof}
Except for the fact that the groups in (2iia) are of Lewis type $4$ and the groups in (2iib) are of Lewis type $1$, the result follows immediately from~\cite[Theorem~$2.1$]{Hregular}, together with the main result of~\cite{BH} (when $n(B(G))=1$).
Suppose then that $G$ satisfies $B(G)=K_2+K_2$, $h(G)=3$ and $\cd(G)=\{1,[G:{\bf F}_{2}(G)],[{\bf F}_{2}(G):\F G]\}$, where $[G:{\bf F}_{2}(G)]$ is a prime $s$ and ${\bf F}_{2}(G)/\F G$ is a cyclic $t$-group for some prime $t\neq s$. It follows readily from the description of the Lewis types and Remark~\ref{rem:20} that $G$ has type $4$ or $5$. Suppose that $G$ has type $5$. (We use the notation in~\cite[Lemma~$3.5$]{ML2}.) From~\cite[Lemma~$3.5$~(iii)]{ML2}, we deduce $2,2^a+1\in\cd(G)$. Moreover, from~\cite[Lemma~$3.5$~(iv)]{ML2}, we deduce that either $\cd(G|Q')=\emptyset$ or $\cd(G|Q')$ contains powers of $2$ that are divisible by $2^a$. Assume first that $\cd(G|Q')=\emptyset$. This means that every irreducible character of $G$ contains $Q'$ in its kernel, which is clearly a contradiction because $Q'\ne 1$. Assume now that $\cd(G|Q')\ne\emptyset$. As $|\rho(G)|=2$, we must have $\rho(G)=\{2,2^a+1\}$ and hence $\cd(G|Q')=\{2\}$ and $a=1$. Therefore, $\cd(G)=\{1,2,3\}$. At this point, to conclude we invoke~\cite[Theorem~$3.5$]{N}, which classifies the groups $X$ with $\cd(X)=\{1,m,n\}$ and $\gcd(m,n)=1$. Since $|G:\F G|=2\cdot 3=6$, we deduce that part~(1) of~\cite[Theorem~$3.5$]{N} holds. We infer that $\F G$ is abelian and hence so is $Q$, but this contradicts the description of the groups of type $5$.
Finally suppose that $G$ satisfies $B(G)=K_2+K_2$ and $h(G)=2$. Then $G$ is of Lewis type $1$ by Remark~\ref{rem:20}.
\end{proof}
The groups described in (1) and in (2i) are clear and, for each of these cases, there exists a group $G$ with $B(G)$ a complete matching. Now, $\mathtt{SmallGroup}(320,1012)$ provides an example satisfying (2iib). The groups in~(2iia) must be of type $4$ in Lewis' sense and examples abound ($\mathrm{Sym}(4)$ has type $4$ and $B(\mathrm{Sym}(4))=K_2+K_2$).
Let $G$ be a finite group with $B(G)$ a connected $2$-regular graph. As a connected $2$-regular graph is a cycle, by Theorem~\ref{thm:55}, $G$ is solvable with $dl(G)\leq 4$ and $B(G)$ is a cycle of length four or six. The following theorem shows that $B(G)$ cannot be a disconnected $2$-regular graph.
\begin{theorem}[{{See~\cite[Theorem~$3.2$ and Corollary~$3.3$]{Hregular}}}]\label{thm: 51}
Suppose that $G$ is a group with $B(G)$ $2$-regular. Then $G$ is solvable, $B(G)$ is connected and $B(G)$ is a cycle of length four or six. In particular, if $\diam(B(G))=2$, then there exists a normal abelian Hall subgroup $N$ of $G$ such that $\cd(G)=\{[G:I_{G}(\lambda)] : \lambda\in \mathrm{Irr}(N)\}$.
\end{theorem}
Theorem~\ref{thm: 51} shows that a union of two cycles is not the bipartite divisor graph of any finite group.
Finally, in the following two theorems, we consider the case where $B(G)$ is $3$-regular.
\begin{theorem}[{{See~\cite[Theorems~$3.4$ and~$3.5$]{Hregular}}}]~\label{thm: 4}
Let $G$ be a group with $B(G)$ $3$-regular. Then $B(G)$ is connected. Moreover, if $\Delta(G)$ is $n$-regular for $n\in\{2,3\}$, then $G$ is solvable and $\Delta(G)\cong K_{n+1}\cong \Gamma(G)$.
\end{theorem}
\begin{theorem}[{{See~\cite{Hregular}}}]~\label{cor: 1}
Let $G$ be a solvable group with $B(G)$ $3$-regular. Then:
\begin{itemize}
\item[(i)] If at least one of $\Delta(G)$ or $\Gamma(G)$ is not complete, then $\Delta(G)$ is neither $2$-regular, nor $3$-regular.
\item[(ii)] If $\Delta(G)$ is regular, then it is a complete graph. Furthermore, if $\Gamma(G)$ is not complete, then $\Delta(G)$ is isomorphic to $K_{n}$, for some $n\geq 5$.
\end{itemize}
\end{theorem}
As the complete bipartite graph $K_{m,m}$ is $m$-regular, Proposition~\ref{prop:2^m-1} applied with $k=m$ yields infinitely many solvable groups whose bipartite divisor graph is $K_{m,m}$. In particular, we obtain examples of groups whose bipartite divisor graph is a $3$-regular graph.
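Concretely, applying the construction in the proof of Proposition~\ref{prop:2^m-1} with $m=k=3$ and $p_1=2$, $p_2=3$, $p_3=5$, we have $n=30$ and we may choose $p=31$; then $H$ is the Frobenius group of order $930$, and $G=H^3$ satisfies $\cd(G)=\{1,30,900,27\,000\}$ and $B(G)\cong K_{3,3}$.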
In Table~\ref{tab:Tcr} we give some examples of $n$-regular bipartite divisor graphs for $n\in\{1,2,3\}$.
\begin{table}[ht]
\caption{Examples of $n$-regular $B(G)$}
\centering
\begin{tabular}{|c|c|c|}
\hline & connected & disconnected \\
\hline $1$-regular & $\mathrm{Sym}(3)$ & $\mathrm{PSL}_2(8)$ \\ \hline
$2$-regular& $\mathtt{SmallGroup}(588,41)$ & Does Not Exist \\ \hline
$3$-regular & $G$ as in Proposition~\ref{prop:2^m-1} with $m=k=3$ & Does Not Exist \\
\hline
\end{tabular}
\label{tab:Tcr}
\end{table}
As the reader can see, we know very little about groups $G$ with $B(G)$ a regular graph and about the possible regular bipartite graphs that can arise.
\begin{problem}
{\rm Construct (if possible) groups $G$ with $B(G)$ an $n$-regular graph with $n\ge 3$ and with $B(G)\ncong K_{n,n}$.}
\end{problem}
\section{Bounded order bipartite divisor graph of a finite group}\label{sec:bounded}
One of the questions that has been largely discussed by different authors is the classification of the graphs that can occur as $\Delta(G)$ for some finite group $G$. To gain a first foothold on this problem, researchers considered graphs of bounded order. The first family of graphs that cannot occur as $\Delta(G)$ was discovered in~\cite{BL}; later, this family was generalized in~\cite{BJL}. These families contain graphs with arbitrarily many vertices; nevertheless, they are of great help for the problem of classifying the graphs that do occur as $\Delta(G)$ when $\Delta(G)$ has at most six vertices. For instance, in~\cite{BJLL,L3}, the authors undertake a systematic investigation of the prime degree graphs of solvable groups with six vertices and classify the disconnected graphs with six vertices.
Following these footsteps, in this section we study bipartite divisor graphs having at most $6$ vertices. When $B(G)$ has only two vertices, $B(G)=K_2$ and the group $G$ has only two character degrees; a great deal is known about these groups, see~\cite{BH} and the references therein (see also Theorem~\ref{thm: reg}~(2i)). When $B(G)$ has three vertices, the classification of $G$ boils down to the understanding of groups having only two character degrees, or of groups with $\cd(G)=\{1,p^\alpha,p^\beta\}$ (which in turn is a problem on $p$-groups).
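(For instance, $\cd(Q_8\times Q_8)=\{1,2,4\}$, so $B(Q_8\times Q_8)$ is a path on three vertices with the prime $2$ as its middle vertex, while any group $G$ with $\cd(G)=\{1,pq\}$ for distinct primes $p$ and $q$, such as the Frobenius group of order $42$, yields a path on three vertices with the degree $pq$ in the middle.)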
\begin{theorem}[{{See~\cite{moo4}}}]
Let $G$ be a finite group with $B(G)$ connected and having at most four vertices. Then $G$ is solvable, $B(G)$ is one of the graphs in Figure~$\ref{fig: 11}$, and we have the following properties:
\begin{itemize}
\item[(i)] if $B(G)$ has two vertices, then $G'$ is abelian and $G=AP$, where $P\in \mathrm{Syl}_{p}(G)$ and $A$ is an abelian normal $p$-complement;
\item[(ii)] if $B(G)$ has three vertices, then
\begin{itemize}
\item[(a)]$G=AP$, where $P\in \mathrm{Syl}_{p}(G)$ and $A$ is an abelian normal $p$-complement, or
\item[(b)]$G'$ is abelian, $G'\cap \Z G=1$ and $\frac{G}{\Z G}$ is a Frobenius group with cyclic complement;
\end{itemize}
\item[(iii)] if $B(G)$ has four vertices, then
\begin{itemize}
\item[(c)] $G=AH$ is the semidirect product of an abelian normal subgroup $A$ and a
Hall subgroup $H$ which is either a Sylow $p$-subgroup of $G$ or an abelian $\{p,q\}$-subgroup, or
\item[(d)]$G'$ is abelian, $G'\cap \Z G=1$ and $\frac{G}{\Z G}$ is a Frobenius group with cyclic complement.
\end{itemize}
\end{itemize}
\end{theorem}
This theorem shows that, when $B(G)$ has at most four vertices, the structure of the graph $B(G)$ and {\em also} the structure of the group $G$ are well understood. (If $B(G)$ is disconnected, then $B(G)=K_2+K_2$ and this case was dealt with in the previous section.) The same behavior occurs when $B(G)$ has five vertices.
\begin{theorem}[{See~\cite{moo5}}]
Let $G$ be a finite group with $B(G)$ connected and having five vertices. Then $B(G)$ is one of the graphs in Figure~$\ref{fig: 8}$,~$\ref{fig: 9}$ or~$\ref{fig: 10}$. Furthermore, we have the following properties:
\begin{itemize}
\item[(i)] If $|\rho(G)|=1$, then $G=AP$, where $P$ is a Sylow $p$-subgroup for some prime $p$ and $A$ is a normal abelian $p$-complement.
\item[(ii)] If $|\rho(G)|=2$, then $G$ is solvable and $G=HN$, where $H$ is either a Sylow $p$-subgroup or a Hall $\{p,q\}$-subgroup of $G$ and $N$ is a normal complement.
\item[(iii)] If $|\rho(G)|=3$, then $G$ is solvable and one of the following cases occurs:
\begin{itemize}
\item[(a)] $G=HN$, where $H$ is a Sylow $p$-subgroup, a Hall $\{p,q\}$-subgroup or an abelian Hall $\{p,q,r\}$-subgroup of $G$ and $N$ is its normal complement.
\item[(b)] $G=QN$, where $Q$ is an abelian Sylow $q$-subgroup of $G$ and $N$ is its normal complement.
\end{itemize}
\item[(iv)] If $|\rho(G)|=4$, then $G'$ is abelian, $G'\cap \Z G=1$ and $\frac{G}{\Z G}$ is a Frobenius group with cyclic complement.
\end{itemize}
\end{theorem}
The following example will be a useful tool to construct most of the groups in Table~\ref{tab:BOB}.
\begin{example}~\label{exam: mi}{\rm
Let $1<m_1<m_2<\cdots<m_r$ be positive integers such that $m_i$ divides $m_{i+1}$ for all $i\in\{1,\ldots,r-1\}$. Then, by~\cite[Theorem 4.1]{N}, there exists a group $G$ such that $\cd(G)=\{1,m_1,\ldots,m_r\}$; we denote this group by $G_{m_1,\ldots,m_r}$. Indeed, our Proposition~\ref{prop:2^m-1} is a very special case of this general result.}
\end{example}
\vskip 0.3 true cm
\begin{table}[ht]
\caption{Examples of groups $G$ with $B(G)$ connected and $o(B(G))\leq 5$}
\centering
\begin{tabular}{|c|c|c|l|}\hline
Graph & Examples & Graph & Examples\\
\hline
nr~1 Figure~\ref{fig: 8}&$Q_{8}\times Q_{8}\times Q_{8}\times Q_{8}$&nr~1 Figure~\ref{fig: 11}&$\mathrm{Sym}(3)$\\
\hline
nr~2 Figure~\ref{fig: 8}&$G_{210}$ as in Example~\ref{exam: mi}&nr~2 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(32,6)$\\
&&& with normal Sylow $2$-subgroup\\
\hline
nr~1 Figure~\ref{fig: 9}&$\mathtt{SmallGroup}(108,17)$&nr~2 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(96,13)$\\
&&& with non-normal Sylow $2$-subgroup\\
\hline
nr~2 Figure~\ref{fig: 9}&$G_{2,6,12}$ as in Example~\ref{exam: mi}&nr~3 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(42,1)$\\
\hline
nr~3 Figure~\ref{fig: 9}&$G_{6,12,24}$ as in Example~\ref{exam: mi}&nr~4 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(930,1)$\\
\hline
nr~4 Figure~\ref{fig: 9}&$\mathtt{SmallGroup}(72,15)$&nr~5 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(384,20)$\\
&&& with non-normal Sylow $2$-subgroup\\
\hline
nr~1 Figure~\ref{fig: 10}&$G_{5,30}$ as in Example~\ref{exam: mi}&nr~5 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(128,71)$\\
&&& with normal Sylow $2$-subgroup\\
\hline
nr~2 Figure~\ref{fig: 10}&$G_{15,30}$ as in Example~\ref{exam: mi}&nr~6 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(588,38)$\\
\hline
nr~3 Figure~\ref{fig: 10}&$G_{30,60}$ as in Example~\ref{exam: mi}&nr~7 Figure~\ref{fig: 11}&$\mathtt{SmallGroup}(96,70)$\\
\hline
nr~4 Figure~\ref{fig: 10}&$\mathtt{SmallGroup}(960,5748)$&&\\
\hline
\end{tabular}
\label{tab:BOB}
\end{table}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node[fill=blue] (m3) at (1,3) {};
\foreach \from/\to in {m2/m3}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (3,3) {};
\foreach \from/\to in {m2/m3,m2/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (3,1) {};
\node[fill=blue] (m4) at (2,3) {};
\foreach \from/\to in {m2/m4,m3/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m6) at (2,3) {};
\foreach \from/\to in {m2/m6,m3/m6,m4/m6}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (2,3) {};
\node[fill=blue] (m5) at (3,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m2}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node[fill=blue] (m4) at (1,3) {};
\node[fill=blue] (m6) at (2,3) {};
\foreach \from/\to in {m6/m3,m6/m2,m3/m4,m2/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node[fill=blue] (m4) at (1,3) {};
\node[fill=blue] (m6) at (2,3) {};
\foreach \from/\to in {m6/m3,m3/m4,m2/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Connected bipartite divisor graphs of order at most four}
\label{fig: 11}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2.5,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (2,3) {};
\node[fill=blue] (m5) at (3,3) {};
\node[fill=blue] (m6) at (4,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m2,m2/m6}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node (m5) at (4,1) {};
\node[fill=blue] (m6) at (2.5,3) {};
\foreach \from/\to in {m6/m3,m6/m2,m6/m4,m6/m5}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Connected bipartite divisor graphs of order five with one or four primes}
\label{fig: 8}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m3,m2/m5,m2/m7,m4/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m3,m2/m5,m2/m7,m4/m7,m4/m5}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2.5,1) {};
\node[fill=blue] (m3) at (2,3) {};
\node[fill=blue] (m4) at (3,3) {};
\node (m6) at (3.5,1) {};
\node[fill=blue] (m7) at (4,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m2/m7,m7/m6,m6/m3,m6/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1.5,1) {};
\node (m3) at (2.5,1) {};
\node[fill=blue] (m5) at (1,3) {};
\node[fill=blue] (m6) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m5,m2/m6,m6/m3,m3/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Connected bipartite divisor graphs of order five and two primes}
\label{fig: 9}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m5,m3/m5,m5/m4,m4/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m5,m3/m5,m5/m4,m4/m7,m3/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (1.5,3) {};
\node[fill=blue] (m7) at (2.5,3) {};
\foreach \from/\to in {m2/m5,m3/m5,m5/m4,m4/m7,m3/m7,m7/m2}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (1.5,3) {};
\node[fill=blue] (m7) at (2.5,3) {};
\foreach \from/\to in {m2/m5,m3/m5,m4/m7,m3/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Connected bipartite divisor graphs of order five and three primes}
\label{fig: 10}
\end{figure}
\begin{remark}{\rm
Let $G$ be a finite group with $B(G)$ disconnected and having five vertices. A case-by-case analysis yields that $B(G)$ is one of the graphs in Figure~\ref{fig: 1}. In particular, $B(G)$ is a union of two paths and hence the structure of $G$ is described in Section~\ref{unionpaths}.
}
\end{remark}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (3,3) {};
\node (m5) at (3,1) {};
\node[fill=blue] (m6) at (4,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m5/m6}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (3,1) {};
\node (m4) at (4,1) {};
\node[fill=blue] (m5) at (3,3) {};
\node[fill=blue] (m6) at (2,3) {};
\foreach \from/\to in {m6/m3,m6/m2,m5/m4}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Disconnected bipartite divisor graphs of order five}
\label{fig: 1}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (3,3) {};
\node (m5) at (3,1) {};
\node[fill=blue] (m6) at (2,3) {};
\node[fill=blue] (m7) at (4,3) {};
\foreach \from/\to in {m2/m3,m2/m4,m2/m6,m5/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (5,1) {};
\node[fill=blue] (m3) at (3,3) {};
\node (m4) at (2,1) {};
\node[fill=blue] (m5) at (1,3) {};
\node[fill=blue] (m6) at (4,3) {};
\node[fill=blue] (m7) at (6,3) {};
\foreach \from/\to in {m4/m3,m2/m6,m5/m4,m2/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node (m6) at (4,1) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m5,m3/m5,m4/m5,m6/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Disconnected bipartite divisor graphs of order six (part 1)}
\label{fig: 2}
\end{figure}
\begin{figure}
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node (m4) at (2,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node (m6) at (3,1) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m3,m4/m5,m5/m6,m6/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node (m4) at (2,1) {};
\node[fill=blue] (m5) at (2,3) {};
\node (m6) at (3,1) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m3,m4/m5,m6/m7,m5/m6,m4/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (2,1) {};
\node[fill=blue] (m3) at (1,3) {};
\node[fill=blue] (m4) at (3,3) {};
\node (m6) at (3,1) {};
\node[fill=blue] (m7) at (4,3) {};
\node (m8) at (5,1) {};
\foreach \from/\to in {m2/m3,m2/m4,m6/m7,m7/m8}
\draw (\from) -- (\to);
\end{tikzpicture}
\quad\quad\quad
\begin{tikzpicture}
[scale=.8,auto=left,every node/.style={circle,fill=blue!20}]
\node (m2) at (1,1) {};
\node (m3) at (2,1) {};
\node (m4) at (3,1) {};
\node[fill=blue] (m5) at (1,3) {};
\node[fill=blue] (m6) at (2,3) {};
\node[fill=blue] (m7) at (3,3) {};
\foreach \from/\to in {m2/m5,m3/m6,m4/m7}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Disconnected bipartite divisor graphs of order six and three primes (part 2)}
\label{fig: 3}
\end{figure}
\vskip 0.4 true cm
Finally, in the last theorem of this paper, we look at disconnected bipartite graphs with six vertices, and we attempt to determine whether each such graph can or cannot occur as the bipartite divisor graph of a solvable group.
\begin{theorem}~\label{thm: six}
Let $G$ be a finite group with $B(G)$ disconnected and with six vertices. Then $B(G)$ is one of the graphs in Figure~$\ref{fig: 2}$ or~$\ref{fig: 3}$. Furthermore we have the following properties:
\begin{itemize}
\item[(i)] $n(B(G))=3$ if and only if $G\cong A\times \mathrm{PSL}_2(2^{n})$, where $n\in\{2,3\}$ and $A$ is an abelian group.
\item[(ii)] If $|\rho(G)|=4$, then $B(G)$ is the third graph in Figure~$\ref{fig: 2}$, $G$ is solvable and it is of Lewis type $1$ or $4$ with the structure explained in~\cite[Lemma 4.1]{L1998}.
\item[(iii)]If $|\rho(G)|=2$, then $G$ is solvable and $B(G)$ is one of the first two graphs in Figure~$\ref{fig: 2}$. If $B(G)$ is the second graph in Figure~$\ref{fig: 2}$, then $G$ is a group of Lewis type $1$. If $B(G)$ is the first graph in Figure~$\ref{fig: 2}$, then $G$ is of Lewis type $1$ or $5$.
\item[(iv)] If $|\rho(G)|=3$ and $n(B(G))=2$, then $B(G)$ is one of the first three graphs in Figure~$\ref{fig: 3}$. If $G$ is non-solvable, then one of the following cases holds:
\begin{itemize}
\item[(a)] $G$ has a normal subgroup $U$ such that $U\cong \mathrm{PSL}_{2}(q)$ or $\mathrm{SL}_{2}(q)$ for some odd $q\geq 5$ and if $C=\cent G U$, then $C\leq \Z G$ and $G/C\cong \mathrm{PGL}_{2}(q)$; or
\item[(b)] $G$ has a normal subgroup of index $2$ that is a direct product of $\mathrm{PSL}_{2}(9)$ and a central subgroup $C$. Furthermore, $G/C\cong M_{10}$.
\end{itemize}
If $G$ is solvable, then it is of Lewis type $1$ or $6$ when $B(G)$ is one of the first two graphs in Figure~$\ref{fig: 3}$, and of Lewis type $1$, $4$ or $5$ when $B(G)$ is the third graph in Figure~$\ref{fig: 3}$.
\end{itemize}
\end{theorem}
\begin{proof}
As $n(B(G))>1$ and $B(G)$ has six vertices, we have $|\rho(G)|+|\cd(G)^{*}|=6$, where $|\rho(G)|\in\{2,3,4\}$. So we consider three cases with respect to $|\rho(G)|$.
If $|\rho(G)|=4$, then $|\cd(G)|=3$. Now~\cite[Theorem 12.15]{IS} and~\cite[Corollary 4.2]{L} imply that $G$ is solvable of derived length at most $3$ and that each connected component of $\Delta(G)$ is a complete graph. On the other hand, \cite[Theorem 4.3]{L} shows that $K_{2}+K_{2}$ is not the prime degree graph of a solvable group, so we conclude that $B(G)$ is the third graph in Figure~\ref{fig: 2}. Now by Remark~\ref{rem:20} we conclude that $G$ is of Lewis type $1$, $4$, $5$ or $6$. Arguing as in the proof of Lemma~\ref{lemma:new1}, we see that $G$ is not a group of Lewis type $6$. If $G$ is of Lewis type $5$, then, using the notation of~\cite[Lemma 3.5]{ML2}, we conclude that either $2^{a}=2$ and $\cd(G)=\{1,2,3\}$, which contradicts the structure of $B(G)$, or $\cd(G|Q')=\emptyset$, which is not possible as $Q$ is non-abelian. Therefore, $G$ is of Lewis type $1$ or $4$.
If $|\rho(G)|=2$, then $B(G)$ has two connected components. It is easy to see that $B(G)$ is one of the first two graphs in Figure ~\ref{fig: 2}. First suppose that $B(G)$ is the second graph in Figure ~\ref{fig: 2}. By ~\cite[Theorem 7.1]{L}, the case $\Gamma(G)\cong K_{2}+K_{2}$ is impossible for a non-solvable group, therefore $G$ is solvable and, by Table~\ref{tab:TTCR}, $G$ is of Lewis type one. Now consider the first graph in Figure ~\ref{fig: 2}. As every character degree of $G$ is a power of some prime and $|\rho(G)|=2$, by ~\cite[Theorem 30.3]{Hu1}, we conclude that $G$ is solvable with $\Delta(G)\cong K_{1}+K_{1}$ and, by Remark~\ref{rem:20}, we deduce that $G$ is of Lewis type one, four, or five. Similar to the proof of Lemma~\ref{lemma:new}, we see that the case where $G$ has Lewis type four does not occur. Hence $G$ is either of Lewis type one or five.
Finally consider the case where $|\rho(G)|=|\cd(G)^{*}|=3$. If $n(B(G))=3$, then $B(G)$ is $1$-regular and, by Theorem~\ref{thm: reg} we can observe that $G\cong A\times \mathrm{PSL}_{2}(2^{n})$, where $n\in\{2,3\}$ and $A$ is an abelian group. Assume that $n(B(G))=2$ and $G$ is non-solvable. Since $|\cd(G)|=4$, ~\cite[Theorem A]{GA} implies that $G$ has one of the following structures:
\begin{itemize}
\item[(a)] $G$ has a normal subgroup $U$ such that $U\cong \mathrm{PSL}_{2}(q)$ or $\mathrm{SL}_{2}(q)$ for some odd $q\geq 5$ and if $C=\cent G U$, then $C\leq \Z G$ and $G/C\cong \mathrm{PGL}_2(q)$; or
\item[(b)] $G$ has a normal subgroup of index $2$ that is a direct product of $\mathrm{PSL}_{2}(9)$ and a central subgroup $C$. Furthermore, $G/C\cong M_{10}$.
\end{itemize}
In particular by ~\cite[Corollary B]{GA}, $\cd(G)=\{1,q-1,q,q+1\}$ for some odd prime power $q>3$ or $\cd(G)=\{1,9,10,16\}$. Consequently, $\Delta(G)$ has an isolated vertex and $B(G)$ is one of the first two graphs in Figure~\ref{fig: 3}. When $G$ is solvable, by Remark~\ref{rem:20} we can see that $G$ is a group of Lewis type one, four, five, or six. Suppose $G$ is a group of Lewis type four or five. As $G$ has no non-abelian normal Sylow subgroup, ~\cite[Theorem 5.2]{ML2} implies that one connected component of $\Gamma(G)$ contains only one degree $a$ where the prime divisors of $a$ lie in the larger connected component of $\Delta(G)$. None of the first two graphs in Figure~\ref{fig: 3} satisfies this property, so in these cases $G$ is not a group of Lewis type four or five. Hence it is of Lewis type one or six.
If $B(G)$ is the third graph in Figure~\ref{fig: 3}, then, by Table~\ref{tab:TTCR}, it is a group of Lewis type one, four, or five.
\end{proof}
\begin{example}{\rm
Here we give some examples of solvable groups whose bipartite divisor graphs have six vertices, are disconnected, but are not a union of paths.
\begin{itemize}
\item For $G=\mathtt{SmallGroup}(320,1581)$, we have $\cd(G)=\{1,2,4,8,5\}$, $B(G)$ is the first graph in Figure~\ref{fig: 2}, and $G$ is of Lewis type five.
\item Let $p$ be an odd prime and let $a$ be a positive integer with $|\pi(p^{a}-1)|\geq 3$ (the smallest case is $p=31$ and $a=1$). Let $b$ be a divisor of $p^{a}-1$ with exactly three prime divisors. Let $E$ be the extraspecial group of order $p^{2a+1}$ with exponent $p$. Then $E$ has an automorphism $\varphi$ of order $b$ which centralizes $\Z E$. Let $G:=E\rtimes \langle\varphi\rangle$. Then $\cd(G)=\{1,b,p^{a}\}$ and $B(G)$ is the third graph in Figure~\ref{fig: 2}, and $G$ is of Lewis type one.
\end{itemize}}
\end{example}
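For concreteness, and only as an illustration of the second construction above, consider its smallest instance: with $p=31$ and $a=1$, the unique divisor of $p^{a}-1=30$ with exactly three prime divisors is $b=30=2\cdot 3\cdot 5$, so that
\[
\cd(G)=\{1,\,30,\,31\},\qquad \rho(G)=\{2,3,5,31\},
\]
and $B(G)$ has six vertices with the shape of the third graph in Figure~\ref{fig: 2}, in accordance with part (ii) of Theorem~\ref{thm: six}.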
We were not able to find a group $G$ with $B(G)$ isomorphic to the second graph in Figure~\ref{fig: 3}. We leave this as an open question.
\begin{question}\label{question:11111}
Is there a finite group $G$ with $B(G)$ isomorphic to the second graph in Figure $\ref{fig: 3}$?
\end{question}
\section{BENCHMARK PROBLEMS}
Most works describing infrared emission from astronomical sources
present various computational results. Yet, it is difficult to use
those results for code verification. While many of the model
parameters are usually explicitly listed, it is often not easy to
recognize all the underlying assumptions, or to reproduce the grain
optical properties used in the calculations. To avoid these problems,
all input properties for the benchmark problems presented here are
defined analytically. Such an approach eases the comparison of results
obtained by other codes with those presented here.
\subsection{Scaling Properties of the Radiative Transfer Problem}
Traditionally, detailed modeling of IR radiation involved numerous
input quantities, necessitating a large number of calculations to
obtain successful fits, and diluting the value of the resulting
success. However, as pointed out by RR, the number of relevant
parameters can be drastically reduced by employing the scaling
properties of the radiative transfer problem. For example, the
luminosity of the central source is irrelevant, a fact by and large
ignored in modeling astronomical sources.
The importance of scaling was recently emphasized by Ivezi\' c \&
Elitzur (1995) who demonstrated that for a given dust type, the IR
emission from late-type stars can be successfully described by a single
parameter --- the overall optical depth $\tau$\footnote{Assuming a
steady-state radiatively driven wind.}. All other quantities
(luminosity, mass, mass-loss rate, etc.) enter only indirectly through
their effect in determining $\tau$, and thus are irrelevant in the
modeling of IR emission. It was subsequently recognized that this
powerful scaling is a general property of radiative transfer and that
it can be extended to arbitrary geometries (Ivezi\' c \& Elitzur 1997,
hereafter IE97). IE97 point out that the spectral shape is the only
relevant property of the heating radiation when the inner boundary of
the dusty region is controlled by dust sublimation. Similarly, the
absolute scales of densities and distances are irrelevant; the geometry
enters only through angles, relative thicknesses and aspect ratios. The
actual magnitudes of densities and distances enter only through one
independent parameter, the overall optical depth. Dust properties
enter only through dimensionless, normalized distributions that
describe the spatial variation of density and the wavelength dependence
of scattering and absorption efficiencies. We now proceed to define the
benchmark problems in terms of these fully scaled quantities.
\subsection{Definition of the Benchmark Problems}
A central point source is embedded in a spherically symmetric
dusty envelope with an inner cavity free of dust. The source radiates as
a black body at a given temperature \symbol{T_\ast}. The dust is in radiative
equilibrium with the local radiation field, and this uniquely determines the
dust temperature, \symbol{T_{\rm d}}, at every point in the envelope. The scale of \symbol{T_{\rm d}}\
is determined by \Tin, the dust temperature at the inner boundary, \symbol{r_1}.
All radial positions can be scaled by this radius defining a new,
dimensionless variable $y=r/\symbol{r_1}$. The dimensionless outer radius of the
envelope, $Y = \symbol{r_2}/\symbol{r_1}$, is a free parameter. The dust density variation
with $y$ is assumed to be a power law $\propto y^{-p}$. The actual dust
density, envelope size and opacity scales are all combined into a value for
overall optical depth specified at some fiducial wavelength. We choose 1
\symbol{\umu{\rm m}}\ for this wavelength and denote the corresponding optical depth by
$\tau_1$.
Dust optical properties are specified as scale-free wavelength
dependent absorption and scattering opacities normalized to unity at 1
\symbol{\umu{\rm m}}, $q_{\rm abs}=\kappa_{\lambda,{\rm abs}}/\kappa_{1,{\rm abs}}$ and
$q_{\rm sca}=\kappa_{\lambda,{\rm sca}}/\kappa_{1,{\rm sca}}$,
respectively. For spherical amorphous grains with radius $a$, $q_{\rm abs}$
and $q_{\rm sca}$ are roughly constant for $\lambda < 2\pi a$, and fall off
as $\lambda^{-1}$ and $\lambda^{-4}$ for $\lambda > 2\pi a$,
respectively. To mimic this behavior\footnote{We assume isotropic
scattering.} with analytical functions, we choose
\begin{equation}
q_{\rm abs} = q_{\rm sca} = 1
\end{equation}
for $\lambda < 1 \symbol{\umu{\rm m}}$, and
\begin{equation}
q_{\rm abs} = {1 \over \lambda }
\end{equation}
\begin{equation}
q_{\rm sca} = {1 \over \lambda^4 }
\end{equation}
for $\lambda > 1 \symbol{\umu{\rm m}}$. The arbitrary choice of 1 \symbol{\umu{\rm m}}\ as the transitional
wavelength is motivated both by typical astronomical grain sizes and by the desire
to have such a transition at short wavelengths where its effects are most
easily discernible. The chosen forms correspond to the maximal theoretically
possible efficiencies for spherical amorphous grains with radius 0.16 \symbol{\umu{\rm m}}\
(Greenberg 1971).
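Since these benchmarks are intended to be reproduced by other codes, it may help implementers to note that the opacity law above transcribes directly into a few lines of code; the following Python sketch simply evaluates the normalized efficiencies defined by the piecewise expressions above:
\begin{verbatim}
import numpy as np

def q_abs(lam):
    """Normalized absorption efficiency; lam in microns."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam < 1.0, 1.0, 1.0 / lam)

def q_sca(lam):
    """Normalized (isotropic) scattering efficiency; lam in microns."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam < 1.0, 1.0, 1.0 / lam**4)

print(q_abs([0.5, 1.0, 10.0, 100.0]))  # [1.  1.  0.1   0.01 ]
print(q_sca([0.5, 1.0, 10.0, 100.0]))  # [1.  1.  1e-04 1e-08]
\end{verbatim}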
The above listed quantities fully specify the benchmark problems.
We perform calculations for \symbol{T_\ast} = 2500 K, \Tin = 800 K and $Y$ = 1000,
and produce eight different models: for two different density distributions,
$p=2$ and $p=0$, and four values of optical depth, $\tau_1$ = 1, 10, 100, 1000.
The selection of different density distributions and a range of optical depths
minimizes the chance that any inconsistencies among the different numerical
schemes go unnoticed.
\subsection{Code Description}
The three codes used in this work take a similar approach to the solution
of the radiative transfer problem, and none of them introduces any
approximations (e.g., closure relation in the moment methods, Auer
1984). Since the problem cannot be solved analytically, the solutions
are obtained on discrete spatial and wavelength grids. Within arbitrary
numerical accuracy which determines the grid sizes, the solutions can
be considered exact.
The spatial grids include the radial position and impact parameter grids,
which also directly determine the angular grid needed for the integrations
over solid angle. Sizes of the spatial grids range from \symbol{\sim} 10 to several
hundred points, depending on the method and overall optical depth for a given
model. The wavelength grid has typically \symbol{\sim} 100 points. To solve
the radiative transfer problem means to determine the radiation intensity
at each of the grid points, that is, to determine the wavelength
dependent intensity as a function of angle for every radial position.
The radiative transfer equation is an integro-differential equation; in
other words, the intensity at any grid point depends on the intensities at
all other grid points. Thus the problem cannot be solved by straightforward
techniques, and various numerical methods rely on iterations of
different types. These iterations can be performed either for the intensity,
or for its moments (energy density, flux, pressure, etc.). The codes
used in this work implement different numerical schemes to perform
the iterations, and we proceed with their brief descriptions.
\bigskip
{\bf Code 1} This code was originally developed by Yorke (1980) and
generalized by Men'shchikov \& Henning (1997) and Szczerba et al.
(1996, 1997). The original version solves differential moment equations
obtained by analytically integrating the radiative transfer equation
multiplied by the powers of the direction cosine, over the solid angle
(e.g. Auer 1984). Such an approach always yields one fewer
differential equation than the number of unknown
quantities (the moments of the radiation intensity). The closure
relation can be expressed in terms of the variable Eddington factor,
and this code calculates it directly from its defining equation after
every iteration. Later versions of the code were extended to improve
explicit ray tracing, with iterations repeated until convergence in
dust temperature and mean intensity is achieved.
\bigskip
{\bf Code 2} Groenewegen (1993) has developed a code which solves the
radiative transfer equation in spherical geometry from first
principles, assuming isotropic scattering. The intensity is evaluated
explicitly on sufficiently fine grids to obtain the desired accuracy, and
iterations are repeated until convergence is achieved. This code
was developed to allow for an explicit mass-loss rate dependence on
time and velocity law, rather than a power-law density distribution.
The model was tested against the results obtained with the model of
Rogers \& Martin (1984, 1986), for silicate grains up to an optical
depth at 9.5 $\mu$m of 50; differences were 1\% at most. For the
purpose of the present paper, models with $p$ = 2 (constant expansion
velocity and mass loss rate) and ${\tau}_1$ = 1, 10, 100 were
calculated (${\tau}_1$ = 1000 was not included for this
code because of unsatisfactory convergence).
\bigskip
{\bf Code 3} The third code, DUSTY, was developed by Ivezi\' c, Nenkova
\& Elitzur (1997) and is publicly available. It solves the integral
equation for the energy density obtained by analytically integrating
the radiative transfer equation. The subsequent numerical integration
is transformed into multiplication with a matrix of weight factors
determined purely by the geometry. The energy density at every point is
then determined by matrix inversion, obviating the need to iterate over
the energy density itself. That is, unlike other codes, DUSTY can
directly solve the pure scattering problem. The intensity is not
explicitly evaluated, and can be easily recovered from the source
function by using the weight factor matrix.
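The essence of this scheme can be conveyed by a toy discretisation (a sketch only, not DUSTY's actual internals): once the scattering integral is replaced by a matrix $W$ of geometric weight factors acting on the vector of energy densities at the grid points, the pure-scattering problem reduces to a single linear solve,
\begin{verbatim}
import numpy as np

def solve_energy_density(W, J_star):
    # Solve  J = J_star + W J  for the energy-density vector J,
    # i.e. (I - W) J = J_star; no iteration over J is needed.
    # W and the directly attenuated stellar term J_star are
    # treated here as precomputed placeholders.
    n = len(J_star)
    return np.linalg.solve(np.eye(n) - W, J_star)
\end{verbatim}
For non-zero absorption the same weight matrix enters the iteration over the dust temperature, but, as noted above, the energy density itself never needs to be iterated.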
\begin{table}
\begin{center}
\caption{Dimensionless parameter $\Psi$ for eight benchmark models}
\smallskip
\begin{tabular}{@{}ccccc}
\hline
$p^{\rm (a)}$ & $ \tau_1 = 1 $ & $\tau_1 = 10 $ & $\tau_1 = 100 $ & $\tau_1 = 1000 $ \\
\noalign{\smallskip}
2 & 3.48 & 5.42 & 13.1 & 84.1 \\
0 & 2.99 & 3.00 & 3.10 & 3.75 \\ \hline
\end{tabular}
\end{center}
\smallskip
(a) Exponent of the power law describing the dust density distribution
\end{table}
\begin{figure}
\centering \leavevmode \ifSFB@referee\epsfxsize=0.5\hsize\else\epsfxsize=\hsize\fi \epsfbox[70 80 570 780]{Temp.ps}
\caption{Dust temperature distribution through the envelope for two
density distributions and optical depths at 1 \symbol{\umu{\rm m}}\ as marked.}
\end{figure}
\section{RESULTS}
The full solution of the radiative transfer problem is contained in the
radiation intensity. However, even in the spherically symmetric systems
it depends on three variables (position, angle, and wavelength) and its
presentation would be quite involved. When isotropic scattering
is assumed as here, the solution is also fully described by the energy
density since it fully defines the source function. Equivalently,
the emerging spectrum, the dust temperature distribution, and
dimensionless parameter
\eq{}{
\Psi = {4\sigma \Tin^4 \over F_1},
}
where $\sigma$ is the \v Stefan-Boltzmann constant and $F_1$ is the
bolometric flux at $y$=1, also fully specify the solution\footnote{The
absolute size of the inner boundary can be determined using $\Psi$,
see eq. (27) of IE97} (IE97). That is, if these three quantities
obtained by different codes agree, then the intensity distributions
at every grid point agree, too.
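To illustrate how $\Psi$ fixes the absolute scale, note that if the bolometric flux at the inner boundary is written as $F_1 = L/4\pi r_1^2$ for a source of luminosity $L$, the definition above yields $r_1$ directly. The following Python sketch is purely illustrative (it assumes this form of $F_1$ and should not be mistaken for eq. 27 of IE97):
\begin{verbatim}
import numpy as np

SIGMA = 5.67e-8            # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN = 3.846e26           # solar luminosity [W]

def r1(psi, L, T_in):
    # r1 [m] from Psi = 4 sigma T_in^4 / F_1 and F_1 = L/(4 pi r1^2)
    return np.sqrt(L * psi / (16.0 * np.pi * SIGMA * T_in**4))

# Example: the p = 2, tau_1 = 10 model (Psi = 5.42, Table 1) around
# a hypothetical L = 1e4 L_sun source with T_in = 800 K:
print(r1(5.42, 1e4 * L_SUN, 800.0))    # ~4.2e12 m (about 28 au)
\end{verbatim}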
\begin{figure}
\centering \leavevmode \ifSFB@referee\epsfxsize=0.5\hsize\else\epsfxsize=\hsize\fi \epsfbox[70 80 570 780] {Spec.ps}
\caption{Spectral energy distribution of the emerging radiation
for two density distributions and optical depths at 1 \symbol{\umu{\rm m}}\ as marked.}
\end{figure}
The values for $\Psi$ obtained for eight models calculated in this
work are given in Table 1. All three codes agree within significant
digits listed. Temperature distributions are presented in Figure 1.
Again, the agreement is better that 0.1\%. Emerging spectra are
shown in Figure 2. They are presented as dimensionless, distance
and luminosity independent spectral shape
$\lambda \symbol{F_\lambda} / \int{\symbol{F_\lambda} d\lambda}$. Differences between the results
obtained by different codes are smaller than the line thickness
($<$ 0.1\%).
The detailed numerical values for the results presented in
Figures 1 and 2, and for all other relevant quantities, can be
obtained in computer readable form from \v Z. Ivezi\' c\footnote{E-mail
address: [email protected].}. These
results can be used for the verification of wavelength-dependent
radiative transfer codes, especially the new multi-dimensional
ones. Also, as the numerical radiative-hydrodynamical codes for
modeling star and planet formation begin to include wavelength-dependent
radiation transfer, the benchmark problems presented here
might prove valuable for establishing confidence in the accuracy of
their results (Boss 1996).
\section*{Acknowledgments}
We thank M. Elitzur and G. Knapp for their careful reading, A. Boss for his
encouragement, and the referee C. Skinner for useful comments which helped
improve the manuscript.
\section{Introduction}
\label{sec:intro}
Mira A ($o$ Ceti; Mira) is an oxygen-rich, long-period variable star on the asymptotic giant branch (AGB). Together with Mira B (VZ Ceti), possibly a white dwarf \citep{mirab_wd}, they form the symbiotic binary system Mira AB. Mira A is the prototype of Mira variables. Its period of visual brightness variation is about 332 days and the visual $V$-band magnitude of the star varies by up to about 8.1 mag (a factor of $>1700$) in each cycle (based on the data in the American Association of Variable Star Observers, AAVSO, International Database). The large variation in the visual magnitude is caused by a combined effect of stellar pulsation and variable opacity of metal oxides whose abundance changes with the effective temperature of the star \citep{reid2002}. The distance of the Mira AB system was estimated to be $110{\pm}9$\,pc \citep{haniff1995}, which is based on the period-luminosity relation derived by \citet{feast1989}, the infrared $K$-band magnitude from \citet{robertson1981} and the period of the visual $V$ variation from the GCVS \citep{kholopov1987}. We adopt this distance, which is roughly consistent with the revised Hipparcos value of $92 \pm 10\,{\rm pc}$ \citep{hipparcos}, throughout this article.
Traditionally, AGB star atmospheres have been probed by molecular absorption spectroscopy, which delivers spatially unresolved line-of-sight information. Examples include the detection of the near-infrared H$_2$O absorption band from the warm molecular forming layer (known as the MOLsphere) around M giant stars and Mira variables with the Infrared Space Observatory (ISO) \citep[e.g.][]{tsuji1997,woitke1999,tsuji2000}. In addition, mid-infrared interferometry with the Very Large Telescope Interferometer (VLTI) can also probe the molecular layers and dust shells around these stars \citep[e.g.][]{ohnaka2005,karovicova2011}. SiO and/or H$_2$O maser emission in the extended atmospheres of Mira variables has been imaged, see, for examples, \citet{cotton2004} and \citet{perrin2015} with the Very Long Baseline Array (VLBA), and \citet{rm2007} with the Very Large Array (VLA).
In order to test the predictions of existing hydrodynamical models for the extended atmospheres of Mira variables, which typically have radii of only a few $R_{\star}$ (a few tens of milli-arcseconds for Mira A), high angular and spectral resolution observations of the molecular emission and absorption from these regions are mandatory. The Atacama Large Millimeter/submillimeter Array (ALMA) with long baselines thus allows us to reach the required angular resolution at high sensitivity and to study the detailed kinematics of the innermost envelope of Mira A. Observations of radio and (sub)millimetre wavelength molecular line emission/absorption, in particular the rotational transitions not exhibiting strong masers, may be used to compare and test the structures of the extended atmospheres predicted by hydrodynamical models. Through modelling the radiative transfer of the transition lines with the predicted atmospheric structures as the inputs, synthesised spectra can be produced and compared to the observed ones.
In this article, we present the new ALMA observations of the Mira AB system, which was selected as one of the Science Verification (SV) targets in the 2014 ALMA Long Baseline Campaign to demonstrate the high angular resolution capability of ALMA \citep{lbc2014}. Based on the visual magnitude data reported by the AAVSO, the stellar phase of Mira A is ${\sim}0.45$ at the time of this observation, and we will adopt this phase throughout the article. In Section 2, we describe the SV observation of Mira AB and the data processing. In Section 3, we present the results including the radio continuum data of Mira A and B in the SV dataset, and the images and spectra of the SiO and H$_2$O lines from Mira A as covered in the observations. In Section 4, we present our radiative transfer modelling results of the SiO and H$_2$O spectra of Mira A. In Section 5, we discuss the implications of our modelling results for our understanding of Mira A's extended atmosphere, including the structures, dust condensation process, shock dissipation and the kinematics and compare with predictions from hydrodynamical models.
\section{Observations and data processing}
\label{sec:obs}
The Mira AB system was observed with ALMA on 2014 October 17 and 25 (ALMA Band 3) and on 2014 October 29 and November 1 (ALMA Band 6) as part of the 2014 ALMA Long Baseline Campaign Science Verification with the longest baseline of $15.24\,{\rm km}$ \citep{lbc2014}. By referring to the AAVSO visual data for Mira, we find that the ALMA observations took place between the visual phases 0.42 (2014 Oct 17) and 0.47 (2014 Nov 01)\footnote{The period of Mira changes by a few days on the time span of decades \citep{templeton2009} and the dates of maximum need to be determined observationally for each cycle to provide a reliable phase scaling. In recent cycles, however, the phases preceding and during the maxima cannot be observed in the optical because Mira is too close to the Sun. To obtain the phases of the ALMA observations in 2014, we first phased the AAVSO data for the cycles in 2000--2013 with a period of 333 days and the date of maximum on JD\,2\,452\,161.0 which had a bright and very well-defined maximum (Kami\'{n}ski et al. in prep.). Then, the data covering the cycle of the 2014 ALMA observations were phased with the same period but their date of maximum (or phase 0) was determined by a phase shift of about $+0.05$ with respect to JD\,2\,452\,161.0 (as adopted for the 2000--2013 cycles). In this way we are able to match the data in the 2014 cycle with the phased data from 2000--2013 and therefore we obtained slightly later phases for the ALMA observations than those calculated from a periodicity scale given by AAVSO and used in \citet{mrm2015}.}.
The shortest baselines (and the maximum number of antennae) in the observations of Bands 3 and 6 are $29.07\,{\rm m}$ (38) and $15.23\,{\rm m}$ (39), respectively. The maximum recoverable scales\footnote{\label{footnote:mrs}defined to be $0.6 \times ({\text{wavelength}}/{\text{shortest baseline}})$ \citep{almaprimer}.} of the SiO lines in Bands 3 and 6 are therefore ${\sim}14{\farcs}8$ and ${\sim}11{\farcs}3$, respectively, and that of the H$_2$O $v_2=1$ line in Band 6 is $10{\farcs}5$. In Band 3, three continuum windows at 88.2, 98.2 and 100.2\,GHz were observed, in addition to four spectral line windows of 58.6\,MHz bandwidth around the transitions of $^{28}$SiO ${\varv} = 0,1,2$ $J=2-1$ and $^{29}$SiO ${\varv} = 0$ $J=2-1$. The channel width of the spectral windows is 61.0\,kHz (${\sim}0.21\,\kms$). In Band 6, a continuum window at 229.6\,GHz together with six spectral line windows of 117.2\,MHz bandwidth around $^{28}$SiO ${\varv} = 0,1,2$ $J=5-4$, $^{29}$SiO ${\varv} = 0$ $J=5-4$, H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$, and the H30$\alpha$ recombination line were observed. The channel width of the four SiO windows is 122.1\,kHz (${\sim}0.17\,\kms$) and that of the H$_2$O and H recombination line windows is 61.0\,kHz (${\sim}0.08\,\kms$). Table \ref{tab:lines} summarises the observed spectral lines and their parameters.
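The quoted maximum recoverable scales follow directly from the definition in the footnote; a quick check in Python (small differences from the quoted values reflect which SiO transition frequency is adopted):
\begin{verbatim}
import numpy as np

C = 2.998e8                           # speed of light [m/s]
RAD2ARCSEC = 180.0 / np.pi * 3600.0

def mrs(freq_ghz, b_min):
    # maximum recoverable scale, 0.6 * lambda / B_min, in arcsec
    return 0.6 * (C / (freq_ghz * 1e9)) / b_min * RAD2ARCSEC

print(mrs( 86.243, 29.07))   # SiO v=1 J=2-1, Band 3: ~14.8"
print(mrs(217.105, 15.23))   # SiO v=0 J=5-4, Band 6: ~11.2"
print(mrs(232.687, 15.23))   # H2O v2=1,      Band 6: ~10.5"
\end{verbatim}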
\begin{table*}[!htbp]
\caption{Observed spectral lines in ALMA Band 6.}
\label{tab:lines}
\centering
\begin{tabular}{lclrc}
\hline\hline
Species & Spectral Line & Rest Frequency & $E_{\rm up}/k$ & $\Delta V$ \\
& & \hspace{4ex} GHz & K \hspace{1ex} & $\kms$ \\ \hline
SiO & ${\varv} = 0, J=5-4$ & 217.104919 & 31.26 & 0.169 \\
SiO & ${\varv} = 1, J=5-4$ & 215.596018 & 1800.17 & 0.170 \\
SiO & ${\varv} = 2, J=5-4$ & 214.088575 & 3551.97 & 0.169 \\
$^{29}$SiO & ${\varv} = 0, J=5-4$ & 214.385752 & 30.87 & 0.171 \\
H$_2$O & $v_2=1, J_{K_a,K_c}=5_{5,0}-6_{4,3}$ & 232.68670 & 3461.88 & 0.079 \\
\hline
\end{tabular}
\tablefoot{
Columns are, from left to right, molecule, quantum numbers of observed transition, rest frequency, energy level of the upper state, and channel width in velocity of the raw data. The SiO and $^{29}$SiO transition frequencies were taken from the Cologne Database for Molecular Spectroscopy (CDMS). They are based on laboratory data presented by \citet{mueller2013cdms} and have an accuracy of about 1\,kHz, except for the SiO ${\varv} = 0, J=5-4$ transition which has an accuracy of 2\,kHz. The H$_2$O transition frequency is taken from \citet{belov1987} and has an accuracy of 50\,kHz.}
\end{table*}
The SV data had been calibrated by staff members of the Joint ALMA Observatory (JAO) and the ALMA Regional Centres (ARCs), with the Common Astronomy Software Applications \citep[CASA;][]{casa} package\footnote{\url{http://casa.nrao.edu/}} version 4.2.2. Detailed calibration scripts, preliminarily calibrated data products (i.e., without self-calibration), and self-calibration solutions for both continuum and spectral line data are available at the ALMA Science Portal\footnote{\url{http://www.almascience.org/almadata/sciver/MiraBand6/}}. Self-calibration solutions were derived from the continuum data for the continuum data itself, and from the strongest spectral channels of the $^{28}$SiO ${\varv} = 1$ data, which exhibits strong maser emission, for the spectral line data. We downloaded the self-calibration solutions from the ALMA Science Portal and applied them to the preliminarily calibrated data, and then imaged the continuum and spectral line data. We use CASA version 4.2.2 for the self-calibration and imaging (except for the image binning task as mentioned below), and the Miriad package\footnote{\url{http://www.atnf.csiro.au/computing/software/miriad/}} \citep{miriad} for our continuum analysis (Sect. \ref{sec:result_cont} and Appendix \ref{sec:appendix_cont}).
We have determined the centre of Mira A's continuum emission to be at $(\text{RA}, \text{Dec}) = (02^{\rm h}19^{\rm m}20{\fs}795, -02{\degr}58{\arcmin}43{\farcs}05) = (34{\fdg}836\,646, -2{\fdg}978\,625)$ by fitting its image, produced from the visibility data before self-calibration, in the 229.6\,GHz continuum windows (i.e., spectral windows $\texttt{spw}=0, 7$ in the SV dataset). We adopt these coordinates as the absolute position of Mira A. The position and proper motion of Mira A in the Hipparcos Catalogue are $(34{\fdg}836\,611, -2{\fdg}977\,061)$ at the Julian epoch 1991.25 and $(9.33 \pm 1.99, -237.36 \pm 1.58)\,{\rm mas}\,{\rm yr}^{-1}$, respectively \citep{hipparcos}. At the epoch of the ALMA SV observation, ${\sim}2014.83$ (JD 2\,456\,959.6 and JD 2\,456\,962.7), the expected coordinates of Mira A due to proper motion should be $(34{\fdg}836\,667, -2{\fdg}978\,615) \pm (0{\fdg}000\,013, 0{\fdg}000\,010)$. So the observed absolute position of Mira A is within $2\sigma$ from the predicted position of the Hipparcos Catalogue.
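The predicted position quoted above is a simple linear extrapolation of the Hipparcos astrometry; a sketch of the arithmetic (assuming the catalogue proper motion in right ascension is $\mu_\alpha\cos\delta$, as is conventional for Hipparcos; the small residual with respect to the quoted RA is well within the stated uncertainty):
\begin{verbatim}
import numpy as np

ra0, dec0    = 34.836611, -2.977061   # Hipparcos, epoch J1991.25 [deg]
pm_ra, pm_de = 9.33, -237.36          # proper motion [mas/yr]
dt = 2014.83 - 1991.25                # [yr]
MAS2DEG = 1.0 / 3.6e6

ra  = ra0 + pm_ra * dt * MAS2DEG / np.cos(np.radians(dec0))
dec = dec0 + pm_de * dt * MAS2DEG
print(ra, dec)    # ~(34.836672, -2.978616) deg
\end{verbatim}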
We then produced two sets of spectral line images, with and without subtraction of the continuum, respectively. The continuum was subtracted, with the \texttt{uvcontsub} task in CASA, by fitting a linear polynomial to the real and imaginary parts of the visibility data of the line-free (i.e., continuum-only) channels in each spectral window. Our selection of the line-free channels was slightly different from that in the example imaging script provided along with the SV data\footnote{available from the CASA Guides: \url{http://casaguides.nrao.edu/index.php?title=ALMA2014_LBC_SVDATA&oldid=18077}}.
The spectral line image data cubes of the SiO and H$_2$O lines in ALMA Band 6 were created by the image deconvolution task, \texttt{clean}, in CASA. The task performs an inverse Fourier transform of the visibility data (``\uvd'') and creates a raw image data cube (the ``DIRTY'' image), then deconvolves the ALMA point-spread function from each frequency plane of the image with the Clark image deconvolution algorithm \citep{hogbomclean,clarkclean} (the ``CLEAN'' process). The product of image deconvolution for each frequency is a set of point sources (the CLEAN component models) which, in aggregate, reproduce the same input ``DIRTY'' image when convolved with the array's point-spread function. The task finally restores the CLEAN component models with a restoring beam (the CLEAN beam) whose parameters are either determined from fitting the point-spread function (taking its FWHM) or specified by the user. The local standard of rest (LSR) velocities covered by the image cubes range from $26.7\,\kms$ to $66.7\,\kms$, centred at the systemic (centre-of-mass) LSR velocity of $46.7\,\kms$, which corresponds to $57.0\,\kms$ in the heliocentric rest frame. We have determined the systemic velocity from the mid-point of the total velocity ranges of the entire line-emitting/absorbing region, assuming the global infall or expansion motions at the extreme velocities are symmetric about the stellar systemic velocity. We weighted the visibilities with a robust (Briggs) parameter of $\mathcal{R}_{\rm Briggs}=0.5$ and CLEANed the images down to a threshold of ${\sim}2$--$3\,{\rm mJy}\,{\rm beam}^{-1}$, which is about 1.5 times the rms noise level. We restored the images with a circular beam of FWHM $0{\farcs}032$ for the $^{28}$SiO and $^{29}$SiO lines, and of FWHM $0{\farcs}030$ for the H$_2$O $v_2=1$ line. The FWHMs of the beams are the geometric means of the major- and minor-axes of the elliptical point-spread functions fitted by the \texttt{clean} task.
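For reference, the imaging step can be summarised by a schematic call to the \texttt{clean} task; all parameter values other than those quoted in the text (cell size, robust parameter, restoring beam, threshold) are placeholders:
\begin{verbatim}
# Schematic CASA 4.x call for one SiO cube (file names, niter and
# the channelisation are placeholders, not the values actually used):
clean(vis='mira_band6_selfcal.ms', imagename='sio_v0_J5-4',
      mode='velocity', restfreq='217.104919GHz',
      start='26.7km/s', nchan=200, width='0.2km/s',
      imsize=[3000, 3000], cell='0.005arcsec',
      weighting='briggs', robust=0.5,
      niter=10000, threshold='2.5mJy',       # ~1.5 x rms
      restoringbeam=['0.032arcsec'])
\end{verbatim}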
At 220\,GHz, the primary beam FWHM of the 12-m array is about $28\arcsec$, which is much larger than the size of the line emission/absorption, and therefore no primary beam correction is needed. Figure \ref{fig:siov0_full} is the only primary beam-corrected image in this article, which shows remote emission in the vibrational ground state $^{28}$SiO up to a distance of ${\sim}3{\arcsec}$. There is no significant difference in the flux of detectable emission from the image without primary beam correction.
The spectral line channel maps presented in Sect. \ref{sec:result_maps} and Appendix \ref{sec:appendix_maps} are further binned, with the image binning task \texttt{imrebin} (new from version 4.3.0) in CASA version 4.4.0. We use Python to generate plots, with the aid of the \texttt{matplotlib} plotting library (version 1.4.3) \citep{matplotlib}, the PyFITS module (version 3.3), and the Kapteyn Package\footnote{\url{http://www.astro.rug.nl/software/kapteyn/}} (version 2.3) \citep{kapteyn}.
The cell size and total size of the images are $0{\farcs}005 \times 0{\farcs}005$ and $15{\arcsec} \times 15{\arcsec}$, respectively. Figure \ref{fig:siov0_full} shows the map of the SiO ${\varv} = 0$ $J=5-4$ line at the systemic velocity channel over a $7{\farcs}5 \times 7{\farcs}5$ region centred at Mira A. As shown in this figure, there is remote, arc-like emission extending up to about 3 arcsec from the star between the LSR velocities of $43.7\,\kms$ and $49.7\,\kms$.
Moreover, we use the images without continuum subtraction for our spectral line modelling (Sects. \ref{sec:result_spec} and later), instead of the continuum-subtracted images as in the reference images in the ALMA Science Portal. As a result, our images only contain emission from spectral lines and/or the radio continuum from Mira A and B throughout, without any real negative signals. As we will explain in further detail in Appendix \ref{sec:appendix_contsub}, we find spurious ``bumps'' in the absorption profiles of continuum-subtracted spectra. We believe that the image deconvolution of the strong line emission surrounding Mira's radio photosphere may have impaired the deconvolution of the region showing line absorption against the background continuum. The images (and hence the spectra) deconvolved without continuum subtraction should better represent the real emission and absorption of the SiO and H$_2$O lines.
\section{Results}
\label{sec:results}
\subsection{Continuum}
\label{sec:result_cont}
\citet{mrm2015} and \citet{vro2015} have independently analysed the continuum data of the Mira AB system in both ALMA Bands 3 (96\,GHz) and 6 (229\,GHz) of this SV dataset. Additionally, \citet{mrm2015} include the continuum data from their Q-band (46\,GHz) observation with the Karl G. Jansky Very Large Array (VLA) in 2014; \citet{vro2015} also include the continuum data from ALMA Band 7 (338\,GHz), of which some results have been reported by \citet{ramstedt2014}. From this SV dataset, both \citet{mrm2015} and \citet{vro2015} found that the visibilities of Mira A in the continuum of Band 6 can be better fitted with a two-component model, consisting of an elliptical uniform disk plus an additional Gaussian component, than a single-component model. Moreover, \citet{vro2015} found that the additional Gaussian component to be a compact, bright hotspot with a FWHM of ${\sim}4.7\,{\rm mas}$ and a brightness temperature of ${\sim}10\,000\,{\rm K}$.
We have conducted a similar analysis of the continuum data of Mira A and B as these authors for the Band 6 data. From the continuum map, the total flux of Mira A is $149.70 \pm 0.04\,{\rm mJy}$ and that of Mira B is $11.19 \pm 0.04\,{\rm mJy}$. In our model fitting, the elliptical uniform disk component for Mira A has a size of about $(51.2 \pm 0.1)\,{\rm mas} \times (41.0 \pm 0.1)\,{\rm mas}$, ${\rm PA} = -45.0\degr \pm 0.5\degr$ and a flux of $S_{229.6\,{\rm GHz}} = 102 \pm 9\,{\rm mJy}$. This corresponds to a brightness temperature of $1630 \pm 175\,{\rm K}$. For the additional Gaussian component of Mira A, the fitted flux is about $47 \pm 9\,{\rm mJy}$ and its FWHM is about $(26.4 \pm 0.2)\,{\rm mas} \times (22.4 \pm 0.2)\,{\rm mas}$, ${\rm PA} = 34.0\degr \pm 1.7\degr$, which is much larger than the size of the purported $4.7\,{\rm mas}$-hotspot. The brightness temperature of this Gaussian component corresponds to $1856 \pm 419\,{\rm K}$, which is much smaller than $10\,000\,{\rm K}$. Our elliptical Gaussian model for Mira B has a FWHM of about $(25.5 \pm 0.3)\,{\rm mas} \times (22.5 \pm 0.3)\,{\rm mas}$, ${\rm PA} = 72.7\degr \pm 3.6\degr$ and a flux of about $(11.3 \pm 0.5) \,{\rm mJy}$. Our results are in general consistent with those reported by \citet{mrm2015}. However, we did not find any evidence of the compact hotspot or reproduce the similar results of the visibility fitting as reported by \citet{vro2015}. We present our detailed continuum analysis of the visibility data in Appendix \ref{sec:appendix_cont}.
In this section, we only present our model fitting results using a single model component for Mira, i.e., an elliptical uniform disk or an elliptical Gaussian, but not both. Table \ref{tab:singlecontinuum} shows the results of our single-component fitting in the continuum window centred at 229.6\,GHz. The brightness temperature of the uniform disk model of Mira A is found to be $2611 \pm 51\,{\rm K}$.
\begin{table*}[!htbp]
\caption{Photospheric parameters from fitting the continuum visibility data, with the \texttt{uvfit} task in the Miriad software, of Mira A and B in the continuum window at 229.6\,GHz of ALMA Band 6.}
\label{tab:singlecontinuum}
\centering
\begin{tabular}{lccccc}
\hline\hline
Object & $S_{229.6\,{\rm GHz}}$ & $\theta_{\rm maj}$ & $\theta_{\rm min}$ & P.A. & $T_b$ \\
& (mJy) & (mas) & (mas) & ($\degr$) & (K) \\
\hline
\multicolumn{6}{c}{Mira A Uniform Disk Model + Mira B Gaussian Model} \\
\hline
Mira A (Disk) & $148.0 \pm 0.5$ & $45.96 \pm 0.03$ & $41.25 \pm 0.03$ & $-36.4 \pm 0.3$ & $2611 \pm 51$ \\
Mira B (Gaussian) & $ 11.4 \pm 0.5$ & $25.70 \pm 0.26$ & $21.38 \pm 0.30$ & $65.7 \pm 2.7$ & --- \\
\hline
\multicolumn{6}{c}{Mira A Gaussian Model + Mira B Gaussian Model} \\
\hline
Mira A (Gaussian) & $151.7 \pm 0.5$ & $30.50 \pm 0.02$ & $26.93 \pm 0.02$ & $-45.6 \pm 0.3$ & --- \\
Mira B (Gaussian) & $ 11.1 \pm 0.5$ & $25.73 \pm 0.28$ & $22.14 \pm 0.34$ & $78.2 \pm 2.9$ & --- \\
\hline
\end{tabular}
\tablefoot{The columns are (from left to right): the object being fitted and the type of model (uniform disk or Gaussian), total flux at 229.6\,GHz ($S_{229.6\,{\rm GHz}}$, mJy), major axis ($\theta_{\rm maj}$, mas), minor axis ($\theta_{\rm min}$, mas), the position angle (P.A., degrees) of each elliptical component, and the brightness temperature ($T_b$, K) of the uniform disk model for Mira A. The quoted uncertainties are all the formal quantities resulting from model fitting. The real uncertainty of the flux is dominated by the absolute flux calibration, which is estimated to be of order 20\%. The uncertainty of the brightness temperature including the contribution from the absolute flux calibration is about $565\,{\rm K}$. The total integrated fluxes of Mira A and B in the 229.6-GHz continuum map are $149.70 \pm 0.04\,{\rm mJy}$ and $11.19 \pm 0.04\,{\rm mJy}$, respectively.}
\end{table*}
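The brightness temperature quoted in Table \ref{tab:singlecontinuum} can be verified from the fitted flux and disk solid angle; in the Rayleigh--Jeans limit (which reproduces the tabulated value):
\begin{verbatim}
import numpy as np

K_B = 1.381e-23                  # Boltzmann constant [J/K]
C   = 2.998e8                    # speed of light [m/s]
MAS = np.radians(1.0 / 3.6e6)    # 1 mas in radians

def t_b_disk(flux_jy, maj_mas, min_mas, freq_ghz):
    # Rayleigh-Jeans T_b of an elliptical uniform disk:
    # T_b = (S_nu / Omega) * c^2 / (2 k nu^2)
    omega = np.pi / 4.0 * (maj_mas * MAS) * (min_mas * MAS)
    return (flux_jy * 1e-26 / omega) * C**2 \
           / (2.0 * K_B * (freq_ghz * 1e9)**2)

print(t_b_disk(0.148, 45.96, 41.25, 229.6))   # ~2611 K
\end{verbatim}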
In addition, we have also created continuum images by integrating all the line-free channels in each of the four SiO and one H$_2$O spectral line windows. By calculating the total flux, $S_{\nu}$, from Mira A within a $0{\farcs}25$-radius circle (which safely includes all possible continuum emission from Mira A, but does not contain any emission from Mira B) at respective frequencies, $\nu$, we derive an independent spectral index (using the spectral line windows in Band 6 only) of $1.82 \pm 0.33$, which is consistent with the value ($1.86$) derived by \citet{rm1997}.
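The spectral index was obtained from a straight-line fit to $\log S_\nu$ versus $\log\nu$; schematically (the window frequencies are real, but the flux values below are illustrative placeholders, not the measured ones):
\begin{verbatim}
import numpy as np

# Band 6 spectral-line-window frequencies and placeholder fluxes
# of Mira A within the 0.25"-radius circle:
nu_ghz = np.array([214.089, 214.386, 215.596, 217.105, 232.687])
s_mjy  = np.array([131.0, 131.0, 133.0, 135.0, 152.0])  # placeholders

alpha, _ = np.polyfit(np.log10(nu_ghz), np.log10(s_mjy), 1)
print(alpha)    # spectral index in S_nu ~ nu^alpha
\end{verbatim}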
\subsection{Images}
\label{sec:result_maps}
Figure \ref{fig:siov0_full} shows the map of the SiO ${\varv} = 0$ $J=5-4$ transition at the LSR velocity of $46.7\,\kms$, which is the systemic (centre-of-mass) velocity of Mira A, over a $7{\farcs}5 \times 7{\farcs}5$ box centred at Mira A. The position of Mira B, $02^{\rm h}19^{\rm m}20{\fs}826, -02{\degr}58{\arcmin}43{\farcs}12$, as determined by fitting its image produced from the {\uvd} before self-calibration, is also indicated on the map. To the west of Mira A, the SiO vibrational ground state emission extends to a larger projected radial distance than in other directions. This emission feature emerges from the west and north-west of Mira A and appears as an arc-like feature, which turns south at around $2{\arcsec}$ west of the star and reaches a maximum projected distance of ${\sim}3{\arcsec}$.
As we will explain in Appendix \ref{sec:appendix_contsub}, there are spurious ``bumps'' in the spectra extracted from the line-of-sight towards the continuum of Mira in the maps produced from the data continuum-subtracted before imaging. Since we are more confident in the quality of the image deconvolution without the subtraction of the continuum, we extract the spectra from the maps retaining the continuum (``full data maps'') for our radiative transfer modelling in Sects. \ref{sec:result_spec} and later. These full data maps are presented in Appendix \ref{sec:appendix_maps}. In this section, we only show the maps that are first imaged with the continuum, and \emph{then} continuum-subtracted with the CASA task \texttt{imcontsub}. Such post-imaging continuum subtraction can avoid the spurious features seen in pre-imaging continuum-subtracted images (and also the spectra).
Figure \ref{fig:siov0chan_csub} shows the continuum-subtracted channel maps of the SiO ${\varv} = 0$ $J=5-4$ transition in the LSR velocity range of $35.7$--$58.7\,\kms$ over a $1{\farcs}1 \times 1{\farcs}1$ box centred at Mira A (Mira hereafter). Contour lines at the $-36$, $-6$, $6$, $12$, $24$, $48$, and $72\sigma$ levels, where $\sigma = 0.80\,{\rm mJy}\,{\rm beam}^{-1}$, are drawn to indicate the region with significant line absorption (yellow contours; negative signals) or emission (white contours; positive signals).
Figure \ref{fig:siov0chanzoomed_csub} shows the same channel maps as Fig. \ref{fig:siov0chan_csub}, but zoomed in to show the inner $0{\farcs}22 \times 0{\farcs}22$ region around Mira. While, globally speaking, the emission of the vibrational ground state SiO line in the inner winds of Mira (${\lesssim}0{\farcs}2$) appears to be spherically symmetric, we find significant inhomogeneities with stronger emission from clumps that are localised in relatively small regions and which stretch over limited velocity ranges.
As shown in Fig. \ref{fig:29siochan_csub}, the absorption and emission in the $J=5-4$ transition of the vibrational ground state of the $^{29}$SiO isotopologue appear to have a very similar extent to that observed in the analogous line of the main isotopologue, $^{28}$SiO. On larger scales, the $^{29}$SiO emission also appears to extend to the west of Mira, while its intensity falls off much more rapidly with increasing radius and no significant emission is seen beyond ${\sim}0.5\arcsec$. This is expected because the isotopic ratio of $^{28}$Si/$^{29}$Si in oxygen-rich giants is ${\gtrsim}13$ \citep[e.g.][]{tsuji1994,decin2010,ohnaka2014}. The $^{29}$SiO emission within ${\sim}0{\farcs}2$ also exhibits (1) general spherical symmetry and (2) localised, clumpy structures with more intense emission. While the maps in both isotopologues share a similar overall morphology, the peaks in the $^{29}$SiO emission do not all coincide with the $^{28}$SiO peaks.
Figures \ref{fig:siov2chan_csub} and \ref{fig:h2ov1chan_csub} show the continuum-subtracted maps of the SiO ${\varv} = 2$ $J=5-4$ and H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$ lines, respectively. Since the emission of these two lines is more smoothly distributed than that in the vibrational ground state SiO and $^{29}$SiO lines, we can clearly see ring-like emission structures around the line absorption against Mira's continuum in the velocity channels around the systemic velocity ($46.7\,\kms$). In most velocity channels, the emission from both lines is confined well within $0{\farcs}1$ from the centre of the continuum, and there is no remote emission beyond ${\sim}0{\farcs}1$ as in the ground state SiO lines.
Close to the systemic velocity, there is a clump at about 0$\farcs$05 to the east of Mira which strongly emits in both the SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines. The brightness temperatures of the SiO ${\varv} = 2$ and H$_2$O $v_2=1$ emission are ${\sim}600$\,K and ${\sim}1000$\,K, respectively. This eastern clump is not prominent, however, in the vibrational ground state SiO and $^{29}$SiO lines which have very low excitation energies (i.e., the upper-state energy, $E_{\rm up}$). This clump therefore probably contains shock-heated gas at a high kinetic temperature ($T_{\rm kin} \gtrsim 1000$\,K). On the other hand, the intensely-emitting clumps in the ground state of SiO or $^{29}$SiO lines do not appear in the highly excited SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines, which have excitation energies of $E_{\rm up}/k \gtrsim 3500\,{\rm K}$.
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 0.2cm 0.0cm 0.2cm, clip, width=\singlemapwidth]{./fig/siov0_full.pdf}
\caption[]{The map of SiO ${\varv} = 0$ $J=5-4$ (with the continuum) at the channel of the systemic velocity ($46.7\,\kms$) with a channel width of $1.0\,\kms$. The positions of Mira A ($o$ Ceti; cyan cross) and Mira B (VZ Ceti; yellow cross) are indicated in the image. The horizontal and vertical axes are the relative offsets (arcsec) in the directions of right ascension ($X$) and declination ($Y$), respectively, with respect to the continuum centre of Mira A.
The white box centred at the fitted position of Mira A indicates the $0{\farcs}50 \times 0{\farcs}50$ region as shown in Fig. \ref{fig:array}, within which we extract the SiO and H$_2$O line spectra from an array of positions.
The light green contours represent 4, 8, 16, and $32\sigma$ of the SiO emission from the gas near Mira A, where the map rms noise is $\sigma = 0.80\,{\rm mJy}\,{\rm beam}^{-1}$.
The circular restoring beam of $0{\farcs}032$ FWHM for the SiO image is indicated in white at the bottom-left.}
\label{fig:siov0_full}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 1.1cm 0.0cm 0.9cm, clip, width=\firstchannelmapwidth]{./fig/chan_sio54v0_imcsub.pdf}
\caption[]{Channel maps of post-imaging continuum-subtracted SiO ${\varv} = 0$ $J=5-4$ from LSR velocity $35.7\,\kms$ to $58.7\,\kms$, with a channel width of $1.0\,\kms$. The systemic velocity is $46.7\,\kms$. The horizontal and vertical axes indicate the relative offsets (arcsec) in the directions of right ascension ($X$) and declination ($Y$), respectively, with respect to the fitted absolute position of Mira A.
The white contours represent 6, 12, 18, 24, 48, and $72\sigma$ and yellow contours represent $-60$, $-36$ and $-6\sigma$, where $\sigma = 0.80\,{\rm mJy}\,{\rm beam}^{-1}$ is the map rms noise.
The circular restoring beam of $0{\farcs}032$ FWHM for the SiO image is indicated in white at the bottom-left in each panel.
In the first panel of the top row, orange contours at 0.1, 0.3, 0.5, 0.7, and 0.9 times the peak flux density ($73.4\,{\rm mJy}\,{\rm beam}^{-1}$) of the 229-GHz continuum emission are also drawn and the corresponding restoring beam of $0{\farcs}028$ FWHM is indicated in orange at the bottom-right.
The white box centred at Mira A indicates the $0{\farcs}22 \times 0{\farcs}22$ region of the zoomed maps of SiO ${\varv} = 0$ (Fig. \ref{fig:siov0chanzoomed_csub}), ${\varv} = 2$ (Fig. \ref{fig:siov2chan_csub}) and H$_2$O $v_2=1$ (Fig. \ref{fig:h2ov1chan_csub}).
In the second and third panels of the top row, the sizes of the uniform disk and Gaussian models, respectively, in our continuum analysis are drawn in blue.}
\label{fig:siov0chan_csub}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 1.1cm 0.0cm 0.9cm, clip, width=\channelmapwidth]{./fig/chan_sio54v0zoom_imcsub.pdf}
\caption[]{Same as Fig. \ref{fig:siov0chan_csub} for the zoomed ($0{\farcs}22 \times 0{\farcs}22$) channel maps of post-imaging continuum-subtracted SiO ${\varv} = 0$ $J=5-4$.
The white contours represent 6, 12, 18, 24, 48, and $72\sigma$ and yellow contours represent $-72$, $-60$, $-48$, $-36$, $-24$, $-12$, and $-6\sigma$, where $\sigma = 0.80\,{\rm mJy}\,{\rm beam}^{-1}$ is the map rms noise.}
\label{fig:siov0chanzoomed_csub}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 1.1cm 0.0cm 0.9cm, clip, width=\channelmapwidth]{./fig/chan_29sio_imcsub.pdf}
\caption[]{Same as Fig. \ref{fig:siov0chan_csub} for the channel maps of post-imaging continuum-subtracted $^{29}$SiO ${\varv} = 0$ $J=5-4$.
The white contours represent 6, 12, 24, 48, 96, and $144\sigma$ and yellow contours represent $-72$, $-54$, $-36$, and $-6\sigma$, where $\sigma = 0.65\,{\rm mJy}\,{\rm beam}^{-1}$ is the map rms noise.}
\label{fig:29siochan_csub}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 1.1cm 0.0cm 0.9cm, clip, width=\channelmapwidth]{./fig/chan_sio54v2_imcsub.pdf}
\caption[]{Same as Fig. \ref{fig:siov0chan_csub} for the zoomed ($0{\farcs}22 \times 0{\farcs}22$) channel maps of post-imaging continuum-subtracted SiO ${\varv} = 2$ $J=5-4$.
The white contours represent 6, 12, 18, 24, and $30\sigma$ and yellow contours represent $-48$, $-36$, $-24$, $-18$, $-12$, and $-6\sigma$, where $\sigma = 0.72\,{\rm mJy}\,{\rm beam}^{-1}$ is the map rms noise.}
\label{fig:siov2chan_csub}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 1.1cm 0.0cm 0.9cm, clip, width=\channelmapwidth]{./fig/chan_h2ov1_imcsub.pdf}
\caption[]{Same as Fig. \ref{fig:siov0chan_csub} for the zoomed ($0{\farcs}22 \times 0{\farcs}22$) channel maps of post-imaging continuum-subtracted H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$.
The white contours represent 6, 12, 18, 30, and $42\sigma$ and yellow contours represent $-24$, $-18$, $-12$, and $-6\sigma$, where $\sigma = 0.85\,{\rm mJy}\,{\rm beam}^{-1}$ is the map rms noise.
The circular restoring beam of $0{\farcs}030$ FWHM for the H$_2$O images is indicated in white at the bottom-left in each panel. }
\label{fig:h2ov1chan_csub}
\end{figure*}
\subsection{Spectra}
\label{sec:result_spec}
We have extracted the SiO and H$_2$O spectra from the centre of Mira's continuum, and from an array of positions at radii 0{\farcs}032, 0{\farcs}064, 0{\farcs}096, 0{\farcs}128 and 0{\farcs}160 from the centre, along the legs at PA = {0\degr}, {90\degr}, {180\degr}, and {270\degr}. The positions are shown in Fig. \ref{fig:array}, which is the map of SiO ${\varv} = 0$ $J=5-4$, without subtraction of the continuum, in the channel of the stellar systemic velocity ($v_{\rm LSR}=46.7\,\kms$). The full set of the spectra will be presented along with the modelling results in Sect. \ref{sec:modelling}. Because the inner envelope around Mira is partially filled with intense clumpy emission, we did not compute azimuthally-averaged spectra, in order to prevent the averaged spectra from being contaminated by isolated intense emission and to obtain a more representative view of the general physical conditions of the envelope.
Figure \ref{fig:band6lines} shows the spectra of various lines in ALMA Band 6 extracted from the centre of the continuum. As we did not subtract the continuum from the data, the flat emission towards the low- and high-velocity ends of the spectra represents the flux from the radio continuum of Mira near the frequencies of the respective spectral lines. In Appendix \ref{sec:appendix_contsub}, we will show the spectra with the continuum subtracted (in the visibility data) before imaging.
The SiO ${\varv} = 1$ $J=5-4$ transition shows strong maser emission across a large range of LSR velocities, which introduces sharp spikes in its spectrum. For other lines which do not show strong maser emission (i.e., all except SiO ${\varv} = 1$), absorption against the continuum ranges between the offset velocity (relative to the stellar LSR velocity) of approximately $-4\,\kms$ and $+14\,\kms$. The absorption is in general redshifted relative to the systemic velocity. This indicates that the bulk of the material in the inner envelope is infalling towards Mira during the ALMA SV observation (near stellar phase 0.45). Infall motion at phase 0.45 is expected for another oxygen-rich Mira variable, W Hya, based on the detailed modelling of the CO $\Delta {\varv} = 3$ line profiles, as observed by \citet{lebzelter2005}, presented in the paper of \citet{nowotny2010}. The CO $\Delta {\varv} = 3$ lines probe the pulsation-dominated layers of the atmospheres of Mira variables, and therefore the radial velocity variation of these lines would indicate the infall or expansion velocities of the global motion of the extended atmospheres below the dust formation (and circumstellar wind acceleration) regions \citep[e.g.][]{hinkle1982,nowotny2005a}.
The spectra of $^{28}$SiO ${\varv} = 0$ $J=5-4$ and $^{29}$SiO ${\varv} = 0$ $J=5-4$ appear to be virtually identical. From the similarity of the line profiles and considering the high expected isotopic ratio of $^{28}$Si/$^{29}$Si ($\gtrsim 13$), the vibrational ground state $^{28}$SiO and $^{29}$SiO lines we see in Fig. \ref{fig:band6lines} are likely to be both very optically thick (saturated) and thermalised.
In Fig. \ref{fig:band6lines}, we can also see trends in the width and depth of the absorption profiles with excitation. The vibrationally excited SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines show narrower and shallower absorption than the two ground state SiO lines. This suggests that the vibrationally excited energy levels are less readily populated than the ground state levels, and hence the kinetic temperature of the bulk of the infalling material should be much lower than $3500$\,K, which corresponds to the excitation energies of SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines. This also explains the small radial extent of these two lines as shown in the channel maps because the kinetic temperature (and hence the excitation) in general falls off with the radial distance from the star. Because the SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines have very similar excitation energy ($E_{\rm up}/k \sim 3500$\,K), the difference in their line profiles is probably due to differences in the molecular abundance and molecular parameters such as the (de-)excitation rate coefficients.
There are two features in the spectra that strongly constrain our modelling in Sect. \ref{sec:modelling}. The first one is the small blueshifted emission feature at the offset velocities between $-10$ and $-3 \kms$. The size of the synthesised beam under robust weighting ($\mathcal{R}_{\rm Briggs}=0.5$) is about $0{\farcs}03$, which is comparable to that of the disk of the continuum emission (with minor axis about $0{\farcs}04$). Hence, some emission from the hottest inner layers of the envelope just outside the edge of the continuum disk is expected to ``leak'' into the beam. Since the innermost envelope shows global infall kinematics, the flux ``leakage'' should appear as excess blueshifted emission, i.e., an inverse P Cygni profile. We have also checked the spectra at different offset positions (some of which are modelled in Sect. \ref{sec:modelling}) and found that over the same blueshifted velocity range, the excess emission becomes more prominent as the continuum level decreases towards outer radial distances. For the H$_2$O transition, we also find a (much weaker) emission component near the offset velocity of $-3 \kms$, and a similar check at different offset positions also indicates that the component is likely to be real.
The other feature is presented by the redshifted wings in the offset velocity range between $+10$ and $+14 \kms$ of the $^{28}$SiO ${\varv} = 0$ and $2$, and the $^{29}$SiO ${\varv} = 0$ lines, which do not show strong maser emission. As shown in Fig. \ref{fig:band6lines}, the redshifted part of the absorption profiles of all these lines appears to be nearly identical. The lines could be in the optically thin regime only if the isotopic ratio of $^{28}$SiO/$^{29}$SiO is close to unity, which is not expected. So we believe that all the lines are in the optically thick regime in this velocity range. The brightness temperatures of the redshifted wings thus give an indication of the kinetic temperature of the coolest gas around the corresponding (infall) velocities.
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=0.0cm 0.2cm 0.0cm 0.2cm, clip, width=\singlemapwidth]{./fig/array.pdf}
\caption[]{The map of SiO ${\varv} = 0$ $J=5-4$ (with the continuum) at the channel of the systemic velocity ($46.7 \kms$) with a channel width of $1\,\kms$. The centre of Mira's continuum is marked as a black cross. Orange contours are drawn, representing 10\%, 30\%, 50\%, 70\%, and 90\% of the peak continuum flux ($73.4\,{\rm mJy}\,{\rm beam}^{-1}$). The black plus signs ($+$) indicate the positions at which SiO and H$_2$O spectra are sampled and modelled in Sect. \ref{sec:modelling}. Along each arm of this array of points, the sampling positions are separated by 32\,mas. The circular restoring beam of $0{\farcs}032$ FWHM for the SiO image is indicated in white at the bottom-left and that of $0{\farcs}028$ FWHM for the 229-GHz continuum contours is indicated in orange at the bottom-right.}
\label{fig:array}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[height=\spectraheight]{./fig/band6lines.pdf}
\caption{Spectral lines in ALMA Band 6 extracted from the line-of-sight towards the centre of Mira's continuum. The SiO ${\varv} = 1$ $J=5-4$ transition (red colour) shows intense maser emission around $+10\,\kms$, with the peak flux density of $1.73\,{\rm Jy}$ at $+8.8\,\kms$. The maser spectrum above $0.10\,{\rm Jy}$ is not shown in this figure.}
\label{fig:band6lines}
\end{figure*}
\section{Radiative transfer modelling}
\label{sec:modelling}
We have modelled the H$_2$O and SiO spectra with the radiative transfer code {\ratran}\footnote{\url{http://www.sron.rug.nl/~vdtak/ratran/}} \citep{ratran}. The public version of the code accepts one-dimensional input models only. Despite the clumpy structures of the inner envelope, we find that the line spectra exhibit general spherical symmetry within ${\sim}0{\farcs}16$ and therefore 1-D modelling is applicable. Since the ALMA SV observations only provide a snapshot of Mira's extended atmosphere in its highly variable pulsation cycles, and the hydrodynamical models that we will compare and discuss in Sect. \ref{sec:discuss-hydrodyn} are also one-dimensional, using a multi-dimensional radiative transfer code would probably not lead to a better understanding of the general physical conditions of Mira's extended atmosphere. {\ratran} solves the coupled level population and radiative transfer equations with the Monte Carlo method and generates an output image cube for each of the modelled lines. We then convolved the image cubes with the same restoring beam as in our image processing and extracted the modelled spectra from the same set of positions as the observed spectra (Figure \ref{fig:array}). In the following subsections, we describe the details of our modelling, including the molecular data of H$_2$O and SiO, and the input physical models for the inner envelope with the continuum.
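Our post-processing of the {\ratran} output cubes can be sketched as follows (a simplified outline only; the conversion of the convolved intensities to ${\rm Jy}\,{\rm beam}^{-1}$, i.e., multiplication by the beam area in pixels, is omitted):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM2SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def convolve_cube(cube, beam_fwhm, pix):
    # Convolve each channel with a circular Gaussian restoring beam;
    # beam_fwhm and pix in arcsec, cube indexed as [chan, y, x].
    s = beam_fwhm * FWHM2SIGMA / pix
    return np.array([gaussian_filter(ch, s) for ch in cube])

def spectrum(cube, ix, iy):
    # Spectrum towards one pixel of the (convolved) cube.
    return cube[:, iy, ix]
\end{verbatim}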
\subsection{H$_2$O molecular data}
The molecular data include information about all the energy levels considered in our radiative transfer model, and all possible transitions among these levels. A molecular datafile stores the energies and statistical weights of the levels, together with the Einstein $A$ coefficients (the rates of spontaneous emission), frequencies, upper-level energies, and collisional rate coefficients (at various kinetic temperatures) of the transitions. The molecular datafile of H$_2$O is retrieved from the Leiden Atomic and Molecular DAtabase\footnote{\url{http://home.strw.leidenuniv.nl/~moldata/}} \citep[LAMDA;][]{lamda}. The LAMDA H$_2$O datafile includes rovibrational levels up to about $E_{\rm up}/k = 7190\,{\rm K}$ \citep{h2olinelvs}. In our modelling, we only include 189 energy levels up to 5130\,K in order to speed up the calculation. The selection includes 1804 radiative transitions and 17\,766 downward collisional transitions. The numbers of energy levels and transitions were thus reduced by more than half and by more than three quarters, respectively, compared to the original LAMDA file. Experiments have shown that such truncation of the datafile has only minute effects on the modelled spectra. The Einstein $A$ coefficients were provided by the BT2 water line list\footnote{\url{http://www.exomol.com/data/molecules/H2O/1H2-16O}} \citep{h2olinelist}, and the collisional rate coefficients of H$_2$O with ortho-H$_2$ and para-H$_2$ were calculated by \citet{h2ocollrates}. The rates for ortho-H$_2$ and para-H$_2$ were weighted following the method described in \citet{lamda}.
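This weighting can be sketched as follows, under the common assumption of a thermalised ortho-to-para H$_2$ ratio that saturates at the high-temperature value of 3; the rate coefficient values in the example are placeholders.
\begin{verbatim}
import numpy as np

def thermal_opr(T):
    """LTE ortho-to-para H2 ratio in the two-level approximation;
    170.5 K is the energy difference between J=1 and J=0 of H2."""
    return np.minimum(3.0, 9.0 * np.exp(-170.5 / T))

def weighted_rate(k_ortho, k_para, T):
    """Effective H2O-H2 rate coefficient: weight the ortho- and
    para-H2 rates by their thermal fractions at temperature T."""
    f_ortho = thermal_opr(T) / (1.0 + thermal_opr(T))
    return f_ortho * k_ortho + (1.0 - f_ortho) * k_para

# Placeholder rate coefficients (cm^3 s^-1) at T = 1000 K:
print(weighted_rate(2.0e-11, 1.0e-11, 1000.0))   # -> 1.75e-11
\end{verbatim}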
\subsection{SiO molecular data}
Our radiative transfer modelling of the SiO lines considers the molecule's vibrational ground state and first two excited states (${\varv} = 0, 1$, and $2$) up to an upper-state energy, $E_{\rm up}/k$, of about 5120\,K, similar to that for our H$_2$O modelling. There are in total 167 rotational energy levels in these vibrational states, where $J({\varv} = 0) \le 69$, $J({\varv} = 1) \le 56$, and $J({\varv} = 2) \le 39$. Among these energy levels there are 435 radiative transitions (subject to the dipole selection rule $\Delta J = \pm 1$) and 13\,861 downward collisional transitions. The energies and statistical weights of the energy levels, and the line frequencies and Einstein $A$ coefficients of the radiative transitions, are obtained from the EBJT SiO line list\footnote{\url{http://www.exomol.com/data/molecules/SiO/28Si-16O}} \citep{exomol2013}. These values are similar to those in the Cologne Database for Molecular Spectroscopy (CDMS)\footnote{\url{http://www.astro.uni-koeln.de/cdms/}} (version Jan 2014) \citep{cdms2001,cdms2005,mueller2013cdms}.
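The bookkeeping of this truncated level scheme can be verified with a few lines of Python, enumerating the levels and the dipole-allowed ($\Delta J = \pm 1$, including the $\Delta {\varv} = 1$ bands and the $\Delta {\varv} = 2$ overtone) radiative transitions:
\begin{verbatim}
# Truncated SiO scheme: J(v=0) <= 69, J(v=1) <= 56, J(v=2) <= 39.
jmax = {0: 69, 1: 56, 2: 39}
levels = [(v, J) for v in jmax for J in range(jmax[v] + 1)]
print(len(levels))    # 167 rotational levels

# Dipole-allowed radiative transitions (Delta J = +/-1); each line is
# counted once, from the upper level (vu, Ju) to the lower (vl, Jl).
lines = [((vu, Ju), (vl, Jl))
         for (vu, Ju) in levels for (vl, Jl) in levels
         if vu >= vl and abs(Ju - Jl) == 1 and (vu > vl or Ju > Jl)]
print(len(lines))     # 435 radiative transitions
\end{verbatim}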
The rate coefficients for collisions between SiO and H$_2$ molecules in the vibrational ground state (${\varv} = 0 \rightarrow 0$) are extrapolated from the SiO--He rate coefficients derived by \citet{dayou2006}, scaled by a factor of 1.38. The SiO--He rate coefficients only include rotational levels up to $J({\varv} = 0) = 26$ and H$_2$ gas temperatures up to 300\,K \citep{dayou2006}. We extrapolate the ${\varv} = 0 \rightarrow 0$ rate coefficients to higher $J$ and $T$ with the methods presented in Appendix \ref{sec:appendix_purerotrates}. Our temperature-extrapolated rate coefficients are consistent, within the same order of magnitude, with the corresponding values in the LAMDA SiO datafile \citep{lamda}. Rate coefficients of the rotational transitions involving vibrationally excited states (i.e., ${\varv} = 1,2$, where $\Delta {\varv} = 0,1$) can be computed with the infinite-order sudden (IOS) approximation \citep[e.g.][]{goldflam1977}, of which the parameters are given by \citet{bg1983a,bg1983b} for $J({\varv}) \le 39$ and $1000\,\rm{K} \le T \le 3000\,\rm{K}$. We extrapolate the parameters of \citet{bg1983a,bg1983b} to higher $J$ (see Appendix \ref{sec:appendix_vibrotrates}) and assume the temperature dependence of the parameters for $T < 1000\,\rm{K}$ and $T > 3000\,\rm{K}$ to be the same as that for $1000\,\rm{K} \le T \le 3000\,\rm{K}$. For ${\varv} = 2 \rightarrow 0$ transitions, we simply assume rate coefficients equal to 10\% of those of the corresponding ${\varv} = 2 \rightarrow 1$ transitions. We note that these coefficients in general do not affect the radiative transfer significantly \citep[e.g.][]{lw1984,le1992}.
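A plausible reading of the scaling factor of 1.38 is the usual reduced-mass argument: for a fixed cross-section, the rate coefficient scales with the mean collision speed, $k \propto \mu^{-1/2}$, so that $k_{\rm SiO-H_2} \approx k_{\rm SiO-He}\,\sqrt{\mu_{\rm SiO-He}/\mu_{\rm SiO-H_2}}$, as the short check below shows.
\begin{verbatim}
import numpy as np

m_SiO, m_He, m_H2 = 44.0, 4.0, 2.0      # masses in atomic mass units
mu_He = m_SiO * m_He / (m_SiO + m_He)   # reduced mass of SiO-He (~3.67)
mu_H2 = m_SiO * m_H2 / (m_SiO + m_H2)   # reduced mass of SiO-H2 (~1.91)
print(np.sqrt(mu_He / mu_H2))           # -> ~1.38
\end{verbatim}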
Our extrapolation scheme of the SiO--H$_2$ collisional rate coefficients (Appendix \ref{sec:appendix_siorates}) is different from that described by \citet{doel1990}, on which the rate coefficients adopted by \citet{doel1995} and \citet{h96sio} are based. In particular, their extrapolation of the rate coefficients (including those for ${\varv} = 0 \rightarrow 0$ transitions) was based entirely on the set of parameters given by \citet{bg1983a,bg1983b}, which was the most complete and accurate one available at that time; they also refrained from further extrapolating the parameters beyond $J({\varv}) = 39$ for ${\varv} = 0, 1, \ldots, 4$ and beyond the temperature range considered by \citet{bg1983a,bg1983b} \citep[for detailed discussion, see Sect. 7.2 of][]{doel1990}.
We use the Python libraries, NumPy\footnote{\url{http://www.numpy.org}} (version 1.9.2) \citep{numpy} and SciPy\footnote{\url{http://www.scipy.org}} (version 0.15.1) \citep{scipy} in our extrapolation of the SiO collisional rate coefficients and compilation of the molecular datafile. Line overlapping between SiO and H$_2$O transitions, which may significantly affect the pumping of SiO masers \citep[e.g.][and references therein]{desmurs2014}, is neglected.
\subsection{Continuum emission}
We include the continuum emission in the modelling. In {\ratran}, however, the ray-tracing code (\mbox{\textsc{sky}}) assumes that the size of the continuum is much smaller than the pixel size, which is not true for this ALMA dataset. Hence we cannot include the continuum in a straightforward manner by setting the default {\ratran} \mbox{\textit{central}} parameter, which describes the radius and blackbody temperature of the central source. Instead, in our input physical model, we have created a \emph{pseudo}-continuum in the innermost three grid cells of the 1-D input model by setting (1) the outer radius of the third grid cell to be the physical radius of the radio continuum, (2) the ``kinetic temperature'' to be the brightness temperature of the continuum, (3) the outflow velocity to be zero, (4) the turbulence velocity to be $100\,\kms$ to get an effectively flat continuum spectrum within the velocity range of interest, and (5) the molecular abundance to be exceedingly high to get an optically thick core that blocks all the line emission from behind it. The exact number of grid cells representing the \emph{pseudo}-continuum does not affect the results. The velocity range of the {\ratran} image cubes was selected to be $\pm 25\,\kms$ from the systemic velocity, the same as for our ALMA image products.
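A minimal sketch of this setup is given below; the grid construction and variable names are illustrative only and do not reproduce the actual {\ratran} input format, while the numerical values anticipate the fits described in the next paragraph.
\begin{verbatim}
import numpy as np

R_cont = 3.60e13                                 # cm, pseudo-continuum radius
r_inner = R_cont * np.array([1/3, 2/3, 1.0])     # three pseudo-continuum cells
r_env = np.geomspace(1.05 * R_cont, 5.0e14, 37)  # envelope cells (hypothetical)
r_outer = np.concatenate([r_inner, r_env])       # outer radii of all cells

tk = np.empty_like(r_outer); vr = np.empty_like(r_outer)
db = np.empty_like(r_outer); abun = np.empty_like(r_outer)
# (envelope cells to be filled with the Model 3 profiles)

core = r_outer <= R_cont      # the innermost three cells
tk[core] = 2600.0             # continuum brightness temperature (K)
vr[core] = 0.0                # static
db[core] = 100.0              # km/s: flat spectrum across +/- 25 km/s
abun[core] = 1.0              # exceedingly high: optically thick core
\end{verbatim}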
In our modelling, the continuum level and the spectral line absorption/emission were fitted with independent sets of parameters. The radius and effective temperature of the radio continuum were determined by fitting the modelled continuum levels to those in the observed spectra extracted from the centre, from 32\,mas, and from 64\,mas. Beyond these distances the continuum level is effectively zero. The derived radius and effective temperature of the \emph{pseudo}-continuum are $R_{\rm continuum} = 3.60 \times 10^{13}\,{\rm cm}$ (21.8\,mas) and $2600\,{\rm K}$, respectively. These values are comparable to the mean radii and brightness temperatures of the elliptical disks fitted by us (Appendix \ref{sec:appendix_cont}), by \citet{mrm2015}, and by \citet{vro2015}.
\subsection{Modelling results}
\label{sec:model_results}
In the models of Mira's extended atmosphere and its inner wind, power laws are adopted for the H$_2$ gas density and kinetic temperature profiles such that the density and temperature attain their maximum values at the outer surface of the radio photosphere, $R_{\rm continuum}$. The profiles of the physical parameters are expressed as functions of the radial distance from the continuum centre, which is defined as ``radius'' in the following discussion and in the plots of the input physical models. In order to reproduce the intensity of the spectra extracted from the centre and from different projected distances, the SiO abundance (relative to molecular hydrogen) has to decrease with radius. We assume a simple two-step function for the SiO abundance, where the outer abundance is ${\sim}1\%$ of the inner abundance. The radius at which the SiO abundance drops significantly is assumed to be $r_{\rm cond} = 1.0 \times 10^{14}\,{\rm cm} \approx 5\,R_{\star}$ in our preferred model. As we will discuss in Sect. \ref{sec:discuss-dust}, the observed spectra can still be fitted if $r_{\rm cond} \gtrsim 4\,R_{\star}$ or if the outer SiO abundance is ${\sim}10\%$ of the inner value (i.e., a degree of condensation of 90\%). The depletion of SiO represents the dust condensation process in the transition zone between the inner dynamical atmosphere and the outer, fully accelerated circumstellar envelope. For the H$_2$O molecule, however, condensation onto dust grains or solid ice is not expected in the modelled region, where the gas temperature is at least a few hundred Kelvin. Furthermore, in the non-equilibrium chemical modelling of \citet{gobrecht2016}, the H$_2$O abundance in the inner winds of the oxygen-rich Mira variable IK Tau remains roughly constant with radius at a given stellar pulsation phase. So we assume the H$_2$O abundance (relative to H$_2$) near Mira to be constant at $5.0 \times 10^{-6}$ throughout the modelled region (out to $5.0 \times 10^{14}\,{\rm cm}\approx 25\,R_{\star} \approx 0{\farcs}3$). For the reason discussed in Sect. \ref{sec:model_preferred}, we have also considered an alternative H$_2$O abundance profile with a sharp increase in H$_2$O abundance near the radio photosphere.
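These input profiles can be summarised schematically as in the Python sketch below; the power-law exponents are free parameters of the fit (the values shown are placeholders, not our fitted values), while the abundance step follows the numbers quoted above.
\begin{verbatim}
import numpy as np

R_cont = 3.60e13    # cm, outer surface of the radio photosphere
r_cond = 1.0e14     # cm (~5 R_star), SiO depletion radius (preferred model)

def power_law(r, value_at_Rcont, exponent):
    """Power-law profile peaking at R_continuum; used for both n(H2)
    and T_kin with separately fitted exponents."""
    return value_at_Rcont * (r / R_cont) ** exponent

n_H2 = lambda r: power_law(r, 1.0e13, -4.0)   # cm^-3; placeholder exponent
T_kin = lambda r: power_law(r, 2100.0, -0.6)  # K; placeholder exponent

def x_SiO(r, x_in=1.0e-6):
    """Two-step SiO abundance: the outer value is ~1% of the inner."""
    return np.where(r < r_cond, x_in, 0.01 * x_in)

x_H2O = 5.0e-6      # constant H2O abundance out to 5e14 cm
\end{verbatim}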
In our radiative transfer modelling, the expansion/infall velocity, gas density, and gas kinetic temperature are the crucial parameters of the input physical model. We have empirically explored different types of profiles that are plausible in the inner winds and circumstellar envelopes of evolved stars. To improve the readability of the article, we present in Appendix \ref{sec:appendix_model} various plausible models that fail to reproduce the observed spectra. In this section, we only discuss our preferred model (Model 3), in which infall and outflow layers coexist in the extended atmosphere of Mira. In Sect. \ref{sec:discuss-hydrodyn}, we compare the velocity, density, and temperature profiles of our preferred model with those predicted by current hydrodynamical models of pulsating stellar atmospheres. We also model the line radiative transfer with the atmospheric structures derived from those hydrodynamical models.
\subsubsection{Preferred model: mixed infall and outflow}
\label{sec:model_preferred}
Our modelling shows that pure infall would produce more emission at blueshifted velocities than is observed (Appendix \ref{sec:appendix_model}). The excess emission component, as we have discussed in Sect. \ref{sec:result_spec}, originates from the far side of the innermost layer (beyond the radio photosphere) of Mira's extended atmosphere that is not blocked by the radio continuum disk. In our preferred model, we introduce a thin expanding layer (${\sim}5 \times 10^{11}\,{\rm cm} \approx 0.03\,R_{\star}$) at the innermost radii, between the radio photosphere and the globally infalling layer. Alternating outflow and infall velocity profiles have been calculated numerically by \citet{bowen1988a,bowen1988b} for Mira-like variables, and subsequently adopted by \citet{h96sio,h01h2o} to simulate the SiO and H$_2$O masers from a Mira-like M-type variable star at a single stellar phase. The infall velocity immediately above this expanding layer is about $7.3\,\kms$, and the expansion velocity below this layer is about $4.0\,\kms$. The outer infalling gas and the inner expanding layer produce a shocked region, with a shock velocity of $\Delta V \lesssim 12\,\kms$, near the radio photosphere of Mira. The maximum gas infall speed of ${\sim}7\,\kms$ is consistent with the proper motions of SiO maser spots around another oxygen-rich Mira variable, TX Cam, which lie in the velocity range of $5$--$10\,\kms$ \citep{diamond2003}. The emission from the far side of the expanding layer appears at redshifted velocities, and the absorption from the near side appears in the blueshifted part (i.e., the usual P Cygni profile). The excess emission of the pure infall models is therefore reduced to a level that fits the observed spectra.
To fit the line profiles properly, we adopt a radius of peak infall velocity of $3.75 \times 10^{13}\,{\rm cm}$, where the gas density is almost $10^{13}\,{\rm cm}^{-3}$. Figure \ref{fig:model3} shows the important input parameters of our model, including the molecular H$_2$ gas density (top-left), infall velocity (top-right), molecular SiO and H$_2$O abundances (middle), and the gas kinetic temperature (bottom). The bottom row of Fig. \ref{fig:model3} also shows the excitation temperatures of the SiO and H$_2$O transitions (in colour).
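The velocity field of the preferred model can be encoded schematically as follows, using only the anchor values quoted above: a thin expanding layer at $4.0\,\kms$, infall of $7.3\,\kms$ immediately above it, and a peak infall radius of $3.75 \times 10^{13}\,{\rm cm}$. The decline of the infall speed beyond the peak is a hypothetical power law, not our fitted profile.
\begin{verbatim}
import numpy as np

R_cont, R_peak = 3.60e13, 3.75e13   # cm
R_layer = R_cont + 5.0e11           # outer edge of the thin expanding layer

def v_radial(r):
    """Radial velocity in km/s (positive = infall) for radii r > R_cont
    given as a NumPy array."""
    r = np.asarray(r, dtype=float)
    v = np.full(r.shape, -4.0)      # thin expanding layer
    infalling = r >= R_layer
    v[infalling] = 7.3 * np.minimum(1.0, (R_peak / r[infalling]) ** 2)
    return v                        # hypothetical r^-2 decline beyond R_peak
\end{verbatim}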
Figures \ref{fig:m3siov0spec}, \ref{fig:m3siov2spec}, and \ref{fig:m3h2ov1spec} show the comparison of our modelled and observed spectra of SiO ${\varv} = 0$ $J=5-4$, SiO ${\varv} = 2$ $J=5-4$, and H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$, respectively. The top-left panel of each figure shows the spectra extracted from the line-of-sight towards the continuum centre.
The top-right panel of Fig. \ref{fig:m3siov0spec} shows the modelled and observed SiO spectra at 32\,mas. In our modelled spectrum, there is a small absorption feature near the redshifted velocity of $+10\,\kms$ that is not seen in the data. This spectral feature is part of the broad absorption seen along the line-of-sight towards the radio continuum, which appears in the spectra at 32\,mas because of beam convolution. Hence, we may have introduced too much absorption to the model, in particular near the peak infall velocities. Inhomogeneities in the images can also introduce additional emission features into the extracted spectra, although they do not explain this discrepancy. For example, there is a sharp spike in the observed spectrum extracted from the southern position (in blue), which is due to an intensely emitting SiO clump at ${\sim}26\,{\rm mas}$ to the south of the continuum centre. The maximum brightness temperature of this clump in the map is ${\sim}2300\,{\rm K}$. The intense emission from this clump is probably due to maser action because, if it were of thermal nature, one would also expect the corresponding $^{29}$SiO line to be detected with intense emission from this clump. However, this clump is too far away from the other sampled positions to contribute significant emission to their spectra. Another possible explanation of the spurious absorption feature is that the infall velocity in our model decreases too quickly with radius. For example, at the offset of 64\,mas, our modelled SiO ${\varv} = 0$ spectrum appears to be narrower than the observed spectra (middle-left panel of Fig. \ref{fig:m3siov0spec}). We have tried including a constant-velocity layer of $10^{13}\,{\rm cm}$ at the peak infall velocity, but we still could not eliminate the absorption feature near $+10\,\kms$. If we adopted a much higher temperature, up to about $2600\,{\rm K}$, in the immediate proximity of the radio photosphere, we would introduce too much blueshifted emission into the resultant spectra. Finally, our spherically symmetric and homogeneous model obviously fails to reproduce the features arising in individual clumps.
In Fig. \ref{fig:m3h2ov1spec}, we present two different models of the H$_2$O spectra using different input abundance profiles, which are plotted in the middle-right panel of Fig. \ref{fig:model3}. As shown in the top-left panel of Fig. \ref{fig:m3h2ov1spec}, the modelled spectra using the constant abundance profile (``Model 3 abundance''; in red) do not fit the observed H$_2$O absorption spectra (in black) along the line-of-sight towards the continuum centre well. In particular, the modelled spectrum does not show the strong observed absorption at the extreme redshifted velocities of $>10\,\kms$. Hence, we have to introduce a sharp rise in the input H$_2$O abundance, by about a factor of 10 to $5.0 \times 10^{-5}$, within the innermost region where the infall velocity peaks (``High H$_2$O abundance''; in blue) in order to reproduce the strong redshifted absorption feature in the spectrum.
Overall, considering the complexity of Mira's extended atmosphere and inner wind, we believe that Model 3 satisfactorily reproduces most of the features in the observed SiO and H$_2$O spectra in ALMA Band 6. We therefore adopt it as our preferred model and use it as the base model for our further tests in Sect. \ref{sec:discussion}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\modelwidth]{./fig_models/sio1195-lv-dens.pdf}
\includegraphics[width=\modelwidth]{./fig_models/sio1195-lv-vinf.pdf}\\
\includegraphics[width=\modelwidth]{./fig_models/sio1195-lv-abun.pdf}
\includegraphics[width=\modelwidth]{./extra_plots/h2o598-lv-abun.pdf}\\
\includegraphics[width=\modelwidth]{./fig_models/sio1195-lv-temp.pdf}
\includegraphics[width=\modelwidth]{./fig_models/h2o590-lv-temp-inv.pdf}
\caption{Inputs of our preferred model. Shown in the panels are the H$_2$ gas density (\textbf{top-left}), infall velocity (negative represents expansion) (\textbf{top-right}), $^{28}$SiO abundance (\textbf{middle-left}), H$_2$O abundance (\textbf{middle-right}), and the kinetic temperature (in black) and excitation temperatures (in colours) of the three $^{28}$SiO transitions (\textbf{bottom-left}) and the H$_2$O transition (\textbf{bottom-right}). In the bottom-right panel, the solid red line indicates positive excitation temperatures (i.e., non-maser emission) of the H$_2$O transition, and the dashed red line indicates the absolute values of the negative excitation temperatures (i.e., population inversion) between $1.7 \times 10^{14}$ and $2.4 \times 10^{14}\,{\rm cm}$. Negative excitation temperatures of small absolute value would give strong maser emission. Vertical dotted lines mark the radii at which the spectra were extracted; coloured horizontal dotted lines in the bottom panels indicate the upper-state energies ($E_{\rm up}/k$) of the respective transitions. The innermost layer within $R_{\rm continuum}$ represents the grid cells of the \emph{pseudo}-continuum, in which the input values of the H$_2$ gas density and molecular abundances are above the range of the plots.}
\label{fig:model3}
\end{figure*}
\clearpage
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, width=\multispecwidth]{./fig_models/sio1195-spec_sio54v0_all.pdf}
\caption{Preferred model: spectra of SiO ${\varv} = 0$ $J=5-4$ at various positions. The black histogram is the observed spectrum at the centre of the continuum; the green, blue, cyan, and magenta histograms are the observed spectra along the eastern, southern, western, and northern legs, respectively, at the offset radial distances indicated in each panel. The red curves are the modelled spectra predicted by {\ratran}. Our model does not produce the population inversion (i.e., negative excitation temperature) required for maser emission in this SiO transition, so the modelled spectra are not expected to reproduce maser features such as the spike seen in the upper-right panel (see text for a discussion of the spike).}
\label{fig:m3siov0spec}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\multispecwidth]{./fig_models/sio1195-spec_sio54v2_all.pdf}
\caption{Preferred model: spectra of SiO ${\varv} = 2$ $J=5-4$ at various positions. The black histogram is the observed spectrum at the centre of the continuum; the green, blue, cyan, and magenta histograms are the observed spectra along the eastern, southern, western, and northern legs, respectively, at the offset radial distances indicated in each panel. The red curves are the modelled spectra predicted by {\ratran}.}
\label{fig:m3siov2spec}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\multispecwidth]{./extra_plots/h2o598-spec_h2ov1_all.pdf}
\caption{Preferred model: spectra of H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$ at various positions. The black histogram is the observed spectrum at the centre of the continuum; the green, blue, cyan, and magenta histograms are the observed spectra along the eastern, southern, western, and northern legs, respectively, at the offset radial distances indicated in each panel. The red curves are the modelled spectra predicted by {\ratran}, and the blue dashed curves are from the same model adopting a high H$_2$O abundance (see the middle-right panel of Fig. \ref{fig:model3}).}
\label{fig:m3h2ov1spec}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
\subsection{Caveat in the interpretation of the gas density}
\label{sec:limitations}
In our modelling, we assume that the gas in Mira's extended atmosphere is composed purely of neutral, molecular hydrogen (H$_2$) in its rotational ground state, $J=0$. At radii close to the radio photospheres of evolved stars, atomic hydrogen \emph{could be} the dominant species in terms of number density \citep{glassgold1983,doel1990,rm1997}. \citet{glassgold1983} have demonstrated that the atmosphere of an evolved star with an effective temperature of about 3000\,K would be essentially atomic, while that of a star at about 2000\,K would be essentially molecular. Since the effective temperature of the star is expected to be higher than the brightness temperature of the radio photosphere \citep[e.g.][]{rm1997}, there should be a significant amount of atomic hydrogen present in the regions being modelled. In fact, intense hydrogen Balmer series emission lines have long been detected in the atmosphere of Mira \citep[e.g.][]{joy1926,joy1947,joy1954,gillet1983,fabas2011}. The hydrogen emission is thought to be the result of dissociation and recombination of the atom due to shock waves propagating through the partially ionized hydrogen gas in the atmospheres of Mira variables \citep[e.g.][]{fox1984,fadeyev2004}. In addition, molecular hydrogen could well be excited to higher rotational levels (see our discussion in Appendix \ref{sec:appendix_h2rotation}).
We note that the collisional rate coefficients between the SiO molecule and atomic hydrogen (H) and electrons (e$^{-}$) have already been computed by \citet{palov2006} and \citet{varambhia2009}, respectively. However, in this study, we did not attempt to calculate the fractional distribution of atomic/molecular hydrogen, or to consider the collisions between the SiO molecule and atomic hydrogen, helium, or electrons. Hence, the H$_2$ gas density derived from our {\ratran} modelling is just a proxy for the combined density of all possible collisional partners of SiO, including rotationally excited molecular hydrogen, atomic hydrogen, helium, and even electrons, in the extended atmosphere of Mira.
In order to examine how well the H$_2$ gas density in our preferred model is constrained, we have modelled the SiO and H$_2$O spectra with the gas density scaled by various factors. Figure \ref{fig:densitytest} shows the results of these sensitivity tests on the input gas density. We have found that the SiO spectra, extracted from the line-of-sight towards the centre of the continuum, do not vary much even if the gas density is changed by about an order of magnitude. On the other hand, the H$_2$O spectra extracted from the centre show a significant change in the absorption depth even when the gas density is changed by a factor of only ${\sim}2$. Hence, assuming the other input parameters (particularly the molecular abundances and gas temperature) of Model 3 are fixed, our derived gas density is tightly constrained. The gas density reaches $10^{12}$--$10^{13}\,{\rm cm}^{-3}$ just beyond the radio photosphere. This is consistent with other models that explain the radio continuum fluxes from Mira's radio photosphere \citep{rm1997} and the near-infrared H$_2$O spectrum \citep{yamamura1999}. On the other hand, the derived gas density is much higher (by 2--4 orders of magnitude) than those predicted by hydrodynamical models (see Sect. \ref{sec:discuss-codex}).
\begin{figure*}[!htbp]
\centering
\includegraphics[trim=1.0cm 12.6cm 2.0cm 1.5cm, clip, width=\multispecwidth]{./extra_plots/density-spec_all.pdf}
\caption{Spectra of SiO ${\varv} = 0$ $J=5-4$ (left) and H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$ (right) extracted from the centre of the continuum. The black histogram is the observed spectrum, and the red curves are the modelled spectra from our preferred model, Model 3. The blue dashed curves are the modelled spectra by reducing the input H$_2$ gas density by a factor of 5; and the green dotted curves are the modelled spectra by increasing the input gas density by a factor of 5 for the modelling of SiO, and a factor of 2 for H$_2$O.}
\label{fig:densitytest}
\end{figure*}
\subsection{Structure of the extended atmospheres}
\label{sec:discuss-atm}
Our modelling results of the molecular emission and absorption of SiO and H$_2$O gas allow us to compare the structure of Mira's extended atmosphere with that inferred from previous observations at various frequencies. We will first briefly summarise the relevant observations of Mira in Sect. \ref{sec:discuss-atm-intro}, then discuss our interpretation of Mira's molecular layer in Sect. \ref{sec:discuss-layer} and dust condensation zone in Sect. \ref{sec:discuss-dust}.
\subsubsection{Previous observations}
\label{sec:discuss-atm-intro}
Combining their centimetre-wavelength observations with millimetre/infrared fluxes in the literature, \citet{rm1997} have demonstrated that long-period variables have a ``radio photosphere'' with a radius about twice that of the optical/infrared photosphere. The latter is determined from the line-free regions of the optical or infrared spectrum and is defined as the stellar radius, $R_{\star}$. In the following discussion, we adopt the value of $R_{\star}$ to be $12.3\,{\rm mas}$ (or $292\,R_{\astrosun}$) as determined by \citet{perrin2004}. The spectral index at radio wavelengths is found to be $1.86 ({\approx} 2)$, close to the low-frequency Rayleigh-Jeans law of an optically thick blackbody \citep{rm1997}. \citet{mrm2015} and \citet{planesas2016} found that this spectral index also fits well the submillimetre flux densities of $o$ Cet at 338\,GHz in ALMA Band 7 and at 679\,GHz in ALMA Band 9, respectively.
The ``radio photosphere'' encloses a hot, optically thick molecular layer (${\sim} 2 \times 10^3$\,K) predominantly emitting in the infrared. Observations have revealed that this molecular layer lies between radii of ${\sim} 1$ and $2\,R_{\star}$. \citet{haniff1995} found that, for $o$ Cet, the radius of the strong TiO absorption near 710\,nm derived with a uniform disk model is about $1.2 \pm 0.2\,R_{\star}$. \citet{perrin2004} derived, by fitting a model consisting of an infrared photosphere and a thin, detached molecular (H$_2$O+CO) layer to infrared interferometric data, that the radius of the molecular layer around $o$ Cet is about $2.07 \pm 0.02\,R_{\star}$. Alternatively, \citet{yamamura1999} have modelled the H$_2$O spectral features in the near-infrared (${\sim}2\text{--}5\,\mu{\rm m}$) spectrum of $o$ Cet with a stack of superposed plane-parallel layers: the star, an assumed hot SiO (2000\,K) layer, a hot H$_2$O (2000\,K) layer, and a cool H$_2$O (1200\,K) layer. Assuming the hot SiO layer has a radius of $2.0\,R_{\star}$, they derived the radii of the hot and cool H$_2$O layers to be $2.0\,R_{\star}$ and $2.3\,R_{\star}$, respectively. \citet{ohnaka2004} employed a more realistic model for the extended molecular layer, with two contiguous spherical shells, a hotter and a cooler H$_2$O shell, above the mid-infrared photosphere. By fitting the $11\,\mu{\rm m}$ spectrum, \citet{ohnaka2004} derived radii of $1.5\,R_{\star}$ and $2.2\,R_{\star}$ for the hot (1800\,K) and cool (1400\,K) H$_2$O shells, respectively.
Beyond the molecular layer and the ``radio photosphere'', there is a ring-like region of SiO maser emission at the radius between $2\,R_{\star}$ and $3\,R_{\star}$. Maser emission naturally arises from a ring-like structure because the maser requires a sufficiently long path length of similar radial velocity in order to be tangentially amplified to a detectable brightness \citep{diamond1994}. Such a SiO maser ring has been imaged in detail at various stellar phases towards the oxygen-rich Mira variable TX Cam \citep[e.g.][]{diamond2003,yi2005}. For $o$ Cet, \citet{rm2007} have directly imaged the radio photosphere and the SiO $J=1$--$0$ maser emission at 43\,GHz and found that the radii of the radio photosphere and the SiO maser ring are about $2.1\,R_{\star}$ and $3.3\,R_{\star}$, respectively, with $R_{\star} = 12.29 \pm 0.02\,\rm mas$ being the radius of the infrared photosphere as model-fitted by \citet{perrin2004}.
Further out, beyond the SiO maser emission region, dust grains start to form. The major types of dust around oxygen-rich AGB stars are corundum (Al$_2$O$_3$) and silicate dust. Using the hydrodynamical model from \citet{ireland2004a} and \mbox{\citet{ireland2004b}}, \citet{gray2009} have modelled SiO maser emission in Mira variables and found that the presence of Al$_2$O$_3$ dust may either enhance or suppress SiO maser emission. From interferometric observations of various Mira variables at near-infrared ($2.2\,\mu$m), mid-infrared ($8$--$13\,\mu$m) and radio (43\,GHz, 7\,mm) wavelengths, \citet{perrin2015} fitted the visibility data with models similar to \citet{perrin2004} (stellar photosphere + detached shell of finite width) and found that Al$_2$O$_3$ dust predominantly forms between $3\,R_{\star}$ and $4.5\,R_{\star}$, while silicate dust forms in $12\,R_{\star}$--$16\,R_{\star}$, which is significantly beyond the radius of SiO maser emission and the silicate dust formation radius derived from previous observations \citep[e.g.][]{danchi1994}.
\subsubsection{Molecular layer}
\label{sec:discuss-layer}
From our visibility analysis (see Appendix \ref{sec:appendix_cont}), we determine the mean radius of the 1.3\,mm photosphere to be $R_{\rm 229\,GHz} = 22.90 \pm 0.05\,{\rm mas}$ ($543\,R_{\astrosun}$). This is about 1.9 times the size of the near-infrared photosphere ($R_{\star} = 12.3\,{\rm mas}$; $292\,R_{\astrosun}$) as determined by \citet{perrin2004}. As we have summarised, previous visibility modelling of near- (2--5\,$\mu$m) and mid-infrared (11\,$\mu$m) interferometric data has suggested the existence of an optically thick, hot molecular H$_2$O+SiO layer with a maximum radius of $2.3\,R_{\star}$ (${\sim}30\,{\rm mas}$) \citep{yamamura1999,ohnaka2004}. Thus, this ALMA SV observation has sufficient angular resolution to resolve the hot molecular layer in the millimetre-wavelength regime and, in addition, allows its velocity structure to be probed.
By modelling the spectral lines of the H$_2$O and SiO molecules at various projected radial distances from the star, we have determined that the kinetic gas temperature within the mid-infrared molecular layer ($30\,{\rm mas} \sim 5 \times 10^{13}$\,cm) has to be about 1400--2100\,K. This temperature range is consistent with the values previously modelled by \citet{rm1997} from their centimetre-wavelength observations of Mira's radio photosphere, and by \citet{yamamura1999} and \citet{ohnaka2004} from infrared observations using simple models of contiguous, uniform molecular H$_2$O+SiO layers.
In our maps, the emission from the vibrationally excited ($E_{\rm up}/k > 3500\,{\rm K}$) SiO ${\varv} = 2$ and H$_2$O $v_2=1$ lines has an extent of ${\lesssim} 100$\,mas (${\lesssim} 8\,R_{\star}$). The core emission region of the SiO ${\varv} = 0$ $J=5-4$ vibrational ground state line (i.e., excluding the extended filamentary or arc-like emission feature to the west/south-west) that is detected at $\ge 3\sigma$ has radii between 200\,mas ($3.3 \times 10^{14}\,{\rm cm}$; to the south-east) and 600\,mas ($9.9 \times 10^{14}\,{\rm cm}$; to the west), and the size of the half-maximum emission is roughly 100--150\,mas (see Fig. \ref{fig:siov0chan_csub}). Hence SiO produces rotational line emission out to a radius of ${\sim}50\,R_{\star}$, far beyond the radius of the molecular layer probed by infrared interferometers. \citet{perrin2015}, taking the mid-infrared $N$-band visibilities between 7.80\,$\mu$m and 9.70\,$\mu$m as the only signature of gas-phase SiO emission, concluded that SiO can only be found in the gas phase within $3\,R_{\star}$. We suggest that this discrepancy is due to the excitation of the SiO molecules. The ground state SiO line in the ALMA SV observation has an energy above the ground of only ${\sim}30\,{\rm K}$, and is therefore excited throughout the region within the silicate dust condensation zone. While the ALMA images indicate a significant amount of gas-phase SiO molecules, the gas temperature beyond the molecular layer (${\lesssim}1000\,{\rm K}$) is insufficient to collisionally excite the SiO molecules to higher vibrational states. Thus, the SiO molecule does not produce detectable infrared emission beyond ${\sim}3\,R_{\star}$ even if it is abundant there.
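This excitation argument can be made quantitative with a crude LTE estimate, ignoring degeneracies and radiative pumping: the relative population of a state with $E_{\rm up}/k \approx 3500\,{\rm K}$ scales with the Boltzmann factor,
\begin{equation}
\frac{n_{\rm up}}{n_{\rm ground}} \propto \exp\!\left(-\frac{E_{\rm up}}{k T_{\rm kin}}\right) \approx
\begin{cases}
0.17, & T_{\rm kin} = 2000\,{\rm K}, \\
0.03, & T_{\rm kin} = 1000\,{\rm K},
\end{cases}
\end{equation}
so the vibrationally excited populations, and hence the infrared SiO signatures, fade rapidly beyond the hot molecular layer, whereas the $E_{\rm up}/k \sim 30\,{\rm K}$ rotational levels of the vibrational ground state remain well populated.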
\subsubsection{Dust shells and the sequence of dust condensation}
\label{sec:discuss-dust}
The radii of dust shells around Mira have been measured with infrared interferometry at 11\,$\mu$m by \citet{danchi1994} and \citet{lopez1997}. A single silicate dust shell from 60 to 2500\,mas, with a dust temperature of 1063\,K at the inner radius, was adopted in the model of \citet{danchi1994}. \citet{lopez1997} used a two-shell model, composed purely of silicate grains, at radii of 50\,mas and 200\,mas. The dust temperature at the inner radius of the inner dust shell is about 1160--1380\,K. These results suggest that dust grains start to form around Mira at a temperature above 1000\,K and a radius of ${\sim}2$--$3\,R_{\star}$, where $R_{\star}$ was determined to be $19.3$--$23.6\,{\rm mas}$. If we use our adopted value of 12.3\,mas for $R_{\star}$, then the inner dust formation radius would be ${\sim}4$--$5\,R_{\star}$. Compared to the recent model of \citet{perrin2015}, this range is significantly smaller than their silicate formation radii of at least $12\,R_{\star}$, but is consistent with their radii of corundum formation.
Studies of silicate dust formation suggest that efficient condensation occurs only when the gas temperature drops below 600\,K \citep[e.g.][]{gail1998}. This allows the SiO gas to emit out to a much larger radius in the extended atmosphere of Mira than the radii of its dust shells derived previously and described above. The discussion of higher silicate dust condensation temperatures has recently been revived by \citet{gail2013}. Their new measurements of the vapour pressure of solid SiO suggest that gas-phase SiO molecules may first nucleate into SiO clusters, and then condense onto dust grains \citep{gail2013}. The gas temperature (assumed to be also the dust temperature at the inner boundary of the dust shell) at which SiO gas starts to deplete is estimated to be about 600\,K for a mass-loss rate, \ifmmode {\dot M} \else $\dot M$\fi, of ${\sim}10^{-6}\,M_{\astrosun}\,{\rm yr}^{-1}$, increasing to 800\,K for $\ifmmode {\dot M} \else $\dot M$\fi = 10^{-4}\,M_{\astrosun}\,{\rm yr}^{-1}$. The SiO nucleation process thus allows the depletion of SiO gas to begin at a higher gas temperature, i.e., at a smaller inner radius, than previously thought. However, the result of \citet{gail2013} still cannot explain the high dust temperature ($>1000\,{\rm K}$) derived from visibility fitting by \citet{danchi1994} and \citet{lopez1997}.
In our Model 3, the radius at which the $^{28}$SiO abundance decreases significantly is adopted to be ${\sim}60\,{\rm mas}$, which corresponds to $1.0 \times 10^{14}\,{\rm cm}$ or ${\sim}5\,R_{\star}$. The modelled gas temperature at this radius is ${\sim}600\,{\rm K}$. Besides this SiO abundance profile, we have also tested other two-step functions in order to constrain the maximum and minimum amounts of SiO molecules, and the possible range of SiO depletion radii, required to reproduce the observed ALMA spectra. Figure \ref{fig:test-abun} shows three examples of alternative SiO abundance models. All these models are as good as our Model 3 in reproducing the spectra. Our experiments show that, at the very least, the SiO abundance should be ${\sim}1 \times 10^{-6}$ within ${\sim}4\,R_{\star}$ and ${\sim}10^{-8}$--$10^{-7}$ beyond that. In other words, SiO molecules cannot deplete onto dust grains in a significant amount within ${\sim}4\,R_{\star}$. This radius is consistent with the inner radii of the silicate dust shells derived in the literature. Our tests, however, have shown that the synthesised SiO spectra at the outer radii are not sensitive to a higher value of the SiO abundance, or to the exact shape of the abundance profile. The actual radius where gas-phase SiO condenses onto dust grains may therefore be much further from the star than $4\,R_{\star}$. Moreover, the actual degree of SiO gas depletion, through silicate dust condensation, nucleation of molecular clusters, or other gas-phase chemical reactions \citep[e.g.][]{gail2013,gobrecht2016}, may not be as high as assumed in our preferred model.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.65\textwidth]{./extra_plots/compare-lv-abun.pdf}
\caption{Alternative input $^{28}$SiO abundance profiles for Model 3 that produce similar modelled spectra. The solid black curve is the same two-step abundance profile as in Model 3, which is close to the minimum possible abundance that fits the ALMA spectra. The three other coloured curves are alternative abundance profiles, also using two-step functions. Abundance profile 1 (blue, dashed) has an inner abundance of $1 \times 10^{-6}$ up to a radius of ${\sim}6\,R_{\star}$, and an outer abundance of $1 \times 10^{-7}$; profile 2 (green, dashed-dotted) has an inner abundance of $8 \times 10^{-7}$ up to ${\sim}10\,R_{\star}$, and an outer abundance of $1 \times 10^{-8}$; and profile 3 (red, dotted) has an inner abundance of $1 \times 10^{-6}$ up to ${\sim}4\,R_{\star}$, and an outer abundance of $1 \times 10^{-7}$.}
\label{fig:test-abun}
\end{figure*}
The gas temperature at which SiO starts to deplete in our models is about $490$--$600\,{\rm K}$, well below the dust temperatures at the inner dust shells derived observationally by \citet{danchi1994} and \citet{lopez1997}. This temperature is also somewhat lower, by about $100\,{\rm K}$, than the gas temperature at which SiO gas starts to nucleate into clusters \citep{gail2013}. However, we note that the visibility models of \citet{danchi1994} and \citet{lopez1997} assumed that the dust around Mira is composed of pure silicate grains. The derived parameters are therefore based on the adopted optical properties of silicate dust grains. Other possible compositions of the dust grains around oxygen-rich stars, such as corundum or a mixture of corundum and silicate, cannot be excluded but were not explored in those models.
Corundum, the crystalline form of aluminium oxide (Al$_2$O$_3$), has a high condensation temperature of ${\sim}1700\,{\rm K}$ \citep[e.g.][]{grossman1974,lorenzmartins2000} and is the most stable aluminium-containing species at temperatures below 1400\,K \citep{gail1998}. \citet{lml1990} have classified the circumstellar dust shells of oxygen-rich AGB stars into several groups according to the spectral features found in their mid-infrared spectral energy distributions (SEDs). These SED groups show (a) a broad emission feature from 9 to 15\,$\mu$m, (b) multiple components with peaks near 10, 11.3, and 13\,$\mu$m, and (c) strong, well-defined characteristic silicate peaks at 9.8 and 18\,$\mu$m. \citet{lml1990} have also suggested that the circumstellar dust shells follow an evolutionary sequence starting from the class showing the broad feature, then the multiple components, and finally the silicate features in the SED. From a survey of O-rich AGB stars, most of them Mira variables, \citet{lorenzmartins2000} have successfully fitted the SEDs showing (a) broad features, (b) multiple components (the intermediate class), and (c) silicate features with corundum grains, a mixture of corundum and silicate grains, and pure silicate grains, respectively. Their results show that the inner radius of the modelled dust shells increases from the broad class, through the intermediate class, to the silicate class. The fitted dust temperature of the hottest grains also follows the same sequence. In addition, they found that the optical depths of the corundum-dominated emission are much smaller than those of the silicate-dominated emission. \citet{lorenzmartins2000} thus concluded that their results were consistent with the evolutionary sequence suggested by \citet{lml1990}. Corundum grains are the first species to form in the circumstellar dust shells, at a small radius of ${\sim}2$--$3\,R_{\star}$ and a temperature of ${\sim}1400$--$1600\,{\rm K}$. At a later stage, silicate grains start to form and dominate the emission features in the SED. The inner radius of the silicate dust shells and the temperature of the hottest silicate grains are ${\sim}5$--$20\,R_{\star}$ and ${\sim}500$--$1000\,{\rm K}$, respectively.
Our modelling results have shown that gas-phase SiO starts to deplete at a radius of \emph{at least} $4\,R_{\star}$ and a gas temperature of ${\lesssim}600\,{\rm K}$. In addition, the observed spectra show that SiO molecules survive in the gas phase well below 1000\,K. This is apparently inconsistent with the fits of \citet{danchi1994} and \citet{lopez1997}, in which the silicate dust shells form at temperatures above 1000\,K. We therefore suggest that the inner hot dust shells around Mira may indeed be composed of other grain types, possibly corundum, instead of silicate grains as previously assumed. Although no prominent spectral features of corundum have been reported \citep[e.g.][]{lopez1997,lobel2000}, we note that corundum grains may be coated with silicates when the temperature becomes low further out in the dust shell \citep[e.g.][Sect. 6.1 and references therein]{karovicova2013}. The optical depth of pure corundum grains, which only exist close to the star, may also be much lower than that of silicate grains \citep[e.g.][]{lorenzmartins2000}, and therefore the corundum features may not be easily distinguished from the silicate features in the SED.
\subsubsection{Maser emission}
\label{sec:discuss-maser}
Among all the spectral lines covered in this ALMA SV observation, only SiO ${\varv} = 1$ $J=5-4$ and ${\varv} = 1$ $J=2-1$ (in ALMA Band 3; not included in this article) exhibit strong maser emission (Fig. \ref{fig:band6lines}). In the images, isolated SiO ${\varv} = 1$ $J=5-4$ maser spots are seen outside Mira's radio photosphere, primarily at radial distances between ${\sim}30$ and $120\,{\rm mas}$ from the fitted position of the radio continuum. The (relative) spatial distribution of the SiO ${\varv} = 1$ maser is consistent with those previously reported by \citet{boboltz2004} and \citet{cotton2008} (and references therein) in other, lower-$J$ maser transitions. The presence of maser emission indicates that the SiO gas in those maser-emitting spots \emph{cannot} be in local thermodynamic equilibrium (LTE). In our preferred (1-D and smooth) model, the gas density is uniformly high throughout the maser-emitting region, and hence all possible maser action is quenched. The predicted excitation temperature does not show any negative values throughout the modelled region (Fig. \ref{fig:model3}), and therefore population inversion of the SiO ${\varv} = 1$ transition, a prerequisite for maser emission, is not predicted by our simple model. We note that our model does not include any external infrared radiation field, in particular the infrared pumping bands of the SiO molecule near 8.1\,$\mu$m ($\Delta{\varv} = 1$, fundamental) and 4.0\,$\mu$m ($\Delta{\varv} = 2$, first overtone). The only input radiation field in our model is a blackbody of $2600\,{\rm K}$, representing the radio photosphere of Mira. This temperature is lower than the typical infrared effective temperature of Mira, which is about 3200\,K \citep{woodruff2004}, so the radiation field does not realistically approximate the radiative excitation of the SiO molecule to higher vibrational states. Because our modelling aims to explain the \emph{general} physical conditions of Mira's extended atmosphere, we did not attempt to construct a sophisticated model that explains both the maser and non-maser emission.
The water line covered in this SV observation, H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$ near 232.7\,GHz, does not show any maser emission. \citet{gray2016} have conducted extensive radiative transfer modelling to explore the physical conditions under which the modelled H$_2$O lines (including all possible lines covered by ALMA) would exhibit maser emission in the envelopes of evolved stars. Slab geometry and silicate dust, which is optically thin at millimetre wavelengths and optically thick in the radiative pumping bands of H$_2$O's vibrational states (e.g., the $v_2$ band at $6.27\,\mu{\rm m}$), were assumed in their modelling \citep{gray2016}. The 232.7-GHz H$_2$O emission is seen from the radio photosphere out to about 80\,mas (Fig. \ref{fig:h2ov1chan_csub}). Hence, the H$_2$O-emitting region corresponds to kinetic temperatures of ${\sim}550$--$2100\,{\rm K}$ and gas densities of $n_{{\rm H}_2} {\sim} 4 \times 10^{10}$--$1 \times 10^{13}\,{\rm cm}^{-3}$, and hence H$_2$O molecular densities of $n_{{\rm H}_2{\rm O}} {\sim} 2 \times 10^{5}$--$5 \times 10^{7}\,{\rm cm}^{-3}$, in our preferred model. Our derived H$_2$O molecular density lies well within the range in the model of \citet{gray2016} that is predicted to exhibit strong maser emission in the 232.7-GHz transition \emph{if} the dust temperature is high enough. The absence of the 232.7-GHz H$_2$O maser is consistent with a (silicate-type) dust temperature lower than approximately $900$, $1000$, and $1600\,{\rm K}$ for respective gas kinetic temperatures of about $500$, $1000$, and $1500\,{\rm K}$ \citep[see Fig. 10 of][]{gray2016}. These comparisons, although not conclusive proof, suggest that hot dust grains that are optically thick in the $v_2$ band ($6.27\,\mu{\rm m}$) did not exist in Mira's extended atmosphere during the ALMA SV observation.
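The quoted H$_2$O number densities follow directly from the constant abundance of the preferred model:
\begin{equation}
n_{{\rm H}_2{\rm O}} = x_{{\rm H}_2{\rm O}}\, n_{{\rm H}_2} = 5.0 \times 10^{-6} \times \left(4 \times 10^{10}\text{--}1 \times 10^{13}\,{\rm cm}^{-3}\right) \approx 2 \times 10^{5}\text{--}5 \times 10^{7}\,{\rm cm}^{-3}.
\end{equation}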
\subsection{Comparison with current hydrodynamic models}
\label{sec:discuss-hydrodyn}
\subsubsection{Early hydrodynamic models for stellar pulsation}
\label{sec:discuss-old-hydro-models}
There are many numerical hydrodynamical calculations of Mira variables that simulate the variation of the pulsation velocity, number density, and kinetic temperature as functions of the stellar phase and/or the radial distance from the star. Pioneering work includes the studies of \citet{willsonhill1979}, \citet{hillwillson1979}, \citet{wood1979}, \citet{willson1987}, \citet{beach1988}, and \citet{bowen1988a,bowen1988b,bowen1989} (hereafter, Bowen's models). \citet{wood1979}, \citet{willson1987}, and \citet{bowen1988a,bowen1988b,bowen1989} have compared the effect of radiation pressure on dust on the mass-loss rate and the velocities of the stellar outflows. The outflow/infall velocity profiles as a function of radius derived from these models are qualitatively similar. These authors all predict alternating outflow and infall layers in close proximity to the star, within about $4$--$6 \times 10^{13}\,{\rm cm}$. Beyond that radius, the dust-driven winds exhibit accelerating outflow. In the region where material expands and falls back, large-scale shocks are produced at the interface between the outer, infalling layer and the inner outflow.
Infall motions in the extended atmosphere of the star can be observed as inverse P Cygni profiles in the spectra. The material at the near side of the star will show redshifted absorption and the material at the far side that is not blocked by the continuum will show blueshifted emission. These emission features are present even if there is no hot material (perhaps shock-heated) with a temperature higher than the continuum brightness temperature, and are also known as the ``nebular'' effect \citep[e.g.][]{bessell1996,scholz2000}, in which the large volume of the highly extended atmosphere, albeit only weakly emitting per unit volume, adds up to produce significant emission. For example, the redshifted absorption in the CO second overtone ($\Delta {\varv} = 3$) lines from $o$ Cet indicates infall motions in the deep photospheric layers of the star \citep{hinkle1984}. Results from the early spectroscopy by \citet{joy1926,joy1954} also suggest infall motion in the extended atmosphere of $o$ Cet, based on modern information on the systemic (centre-of-mass) velocity of the star \citep[see also the interpretation by][]{gabovits1936}.
Bowen's models have been adopted by \citet{h96sio} and \citet{h01h2o} to simulate the SiO (${\varv} = 1$ and $2$) and H$_2$O (ground state and $v_2=1$) masers from a template M-type Mira (parameters were based on $o$ Cet) at a single stellar phase, and by \citet{gh00} and \citet{h02} to simulate the variability of SiO (${\varv} = 1$) masers at various epochs of a stellar cycle of the model Mira. \citet{gray2009} have comprehensively reviewed the success and limitations of these precursor models for SiO maser simulations. One major drawback of Bowen's hydrodynamical solutions is that the pulsation phase in the model (with phase 0 defined as the moment when the inner boundary of the model atmosphere, or the ``piston'', is moving outwards at the maximum speed) is disconnected from the stellar phase as determined from the optical or infrared brightness variations \citep{h96sio}. In addition, the assumption of a constant infrared radiation field by dust and the stellar photosphere was also too simplistic for Mira variables \citep{gray2009}.
By solving the hydrodynamical equations as presented in \citet[Chap. 12]{rm1967}, who adopt the von Neumann-Richtmyer pseudo-viscosity method as the artificial shock dissipation mechanism \citep{vNR1950}, the authors of the early hydrodynamic models derived the dynamical structures around Mira-like variables. Bowen's models also considered the thermal relaxation of shocks via radiative cooling from neutral hydrogen atoms at high temperatures (${\gtrsim}6000\,{\rm K}$) and from other species (represented by an assumed cooling coefficient) at low temperatures \citep[see Sects. II(b) and II(c)(iii) of][]{bowen1988a}. Bowen's models predicted the existence of an extended (${\sim}10^{14}\,{\rm cm}$) post-shock region of elevated gas temperatures (${\sim}10\,000\,{\rm K}$) near the optical/infrared photosphere.
On the other hand, using the hydrogen Balmer series emission lines as signatures of the shock-heated region, \citet{fox1984} and \citet{fox1985} developed a theoretical model of shock waves in the atmospheres of Mira variables and predicted that these pulsation-driven shocks dissipate within a very thin region of the extended atmosphere, several orders of magnitude smaller in extent than the stellar radius (i.e., $\ll 10^{12}\,{\rm cm}$). The circumstellar shock models of \citet{willacy1998} and \citet{gobrecht2016} predict that the cooling length of H$_2$ dissociation, depending on the shock velocities, is typically $<10^8\,{\rm cm}$. The very narrow post-shock region suggests that the relaxation of the shocked material towards radiative equilibrium and local thermodynamic equilibrium is essentially instantaneous, and therefore the post-shock heating might be neglected. \citet{woitke1996} argued that Bowen's cooling rate was underestimated by a few orders of magnitude, thus resulting in an atmosphere with a highly elevated gas temperature. Furthermore, based on the observational constraints from Mira's radio continuum emission at 8\,GHz, \citet{rm1997,rm1997b} have shown that the amplitude of the gas temperature disturbance, probably due to shocks or pulsations, can only be about $300\,{\rm K}$ (assuming a shock propagation speed of $7.3\,\kms$). These results are in contrast with Bowen's non-equilibrium models, which expect a rather extended shock-heated region with a highly elevated gas temperature. \citet{bessell1989} have compared the synthesised spectra from Bowen's non-equilibrium model with the observed spectrum between 600\,nm and 4\,$\mu$m, and found that they did not match at all. Moreover, the hydrogen spectra (e.g., H$\alpha$) predicted by \citet{luttermoser1990,luttermoser1992} using Bowen's model were much broader, and had different line profiles, than those actually observed \citep{woodsworth1995}. Hence, non-equilibrium models with extended high-temperature regions are not suitable for the extended atmospheres of Mira variables \citep[see also the discussions in][]{woitke1998,willson2000}.
For the purpose of verification, we have conducted tests similar to those of \citet{bessell1989} by introducing into our preferred model (Model 3) an arbitrary extended layer (${\gtrsim}10^{12}\,{\rm cm}$) of elevated gas temperature (${\sim}4000\,{\rm K}$) at various radial distances from Mira. The elevated temperature is significantly higher than the brightness temperature of the stellar radio continuum (${\sim}2600\,{\rm K}$). Figure \ref{fig:test-hightemp} shows an example of our tests. Even though the gas temperature is elevated within just a relatively thin layer ($2 \times 10^{13}\,{\rm cm} \sim R_{\star}$) compared to Bowen's non-equilibrium model (${\sim}10^{14}\,{\rm cm}$), the synthesised spectra exhibit very strong emission features that are absent from the data. In fact, if we further extend the zone of elevated gas temperature, the emission spikes only become more prominent. Our tests therefore suggest that, during this particular ALMA SV observation, the extended atmosphere of Mira did not contain any extended (${\gtrsim}10^{12}\,{\rm cm}$) region shock-heated above the brightness temperature of the radio continuum, as predicted by the non-equilibrium model. This is consistent with the shock wave models that suggest that shock relaxation in Mira's atmosphere takes place within a very thin zone compared to the stellar radius \citep[e.g.][]{fox1984,fox1985}. In addition, we do not expect the existence of a stellar chromosphere, as suggested to explain the H$\alpha$ absorption lines of some semi-regular variables \citep{luttermoser1994,wood2004}, beyond the radio photosphere of Mira during this ALMA SV observation. Indeed, a stellar chromosphere may not exist around Mira at all, because the H$\alpha$ line of the star has only been seen in emission, not absorption \citep[e.g.][]{joy1947,gillet1983,gillet1985}.
\begin{figure*}[!htbp]
\centering
\includegraphics[height=0.21\textheight]{./fig_models/sio1193-lv-temp.pdf}
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, height=0.21\textheight]{./fig_models/sio1193-spec_sio54v2_all.pdf} \\
\caption{The preferred model with a layer of an elevated gas temperature. The layer has a width of $2.0 \times 10^{13}\,{\rm cm}$ and an arbitrarily chosen temperature of $4000\,{\rm K}$. The left panel shows the elevated gas temperature profiles and the right panel shows the observed and modelled SiO ${\varv} = 2$ $J=5-4$ spectra as examples. All other parameters are the same as our preferred model (Model 3).}
\label{fig:test-hightemp}
\end{figure*}
\subsubsection{More recent hydrodynamic models}
\label{sec:discuss-new-hydro-models}
Based on the assumption of negligible post-shock heating, \citet{bessell1996} and \citet{hofmann1998} have constructed series of pulsation models to calculate theoretically the photospheric structure (e.g., density, temperature, outflow/infall velocities) of Mira variables. Applying the predictions of these model series, \citet{bessell1996} and \citet{scholz2000} have predicted the near-infrared spectra of selected atomic and molecular lines, which exhibit both normal and inverse P Cygni profiles and show significant variations with the stellar phase.
New hydrodynamical solutions have also been derived by \citet{ireland2004a,ireland2004b} and adopted by \citet{gray2009} to simulate SiO maser emission in Mira variables. These model series are based on self-excited pulsation, as in \citet{hofmann1998}, instead of the ``piston''-generated pulsation of other hydrodynamical models. Simulations have shown that SiO masers would form a ring of radius typically ${\sim}2.2\,R_{\star}$ around the star \citep{gray2009}, which is close to but beyond the radius of the radio photosphere. The results are consistent with previous observations of the SiO maser shells around Mira by \citet{rm1997,rm2007}. Hence, strong SiO masers within the radio photosphere are prohibited \citep{rm1997,gray2009}. A comparison of the hydrodynamical solutions of \citet{ireland2004a,ireland2004b} with observations shows that the strongest inner shocks (with $\Delta V > 20 \kms$) are enclosed within the radio photosphere, and the shocks beyond it only cause a velocity change of ${\sim}7 \kms$. The low shock velocity beyond the radio continuum is also consistent with the proper-motion velocities of the SiO masers observed in TX Cam \citep{diamond2003} and the nearly constant radio light curves observed in several Mira variables \citep{rm1997,rm1997b}. In addition, \citet{gray2009} have found that the presence of corundum (Al$_2$O$_3$) dust grains can either enhance or suppress SiO maser emission.
More recently, \citet{ireland2008,ireland2011} and \citet{scholz2014} have developed the Cool Opacity-sampling Dynamic EXtended ({\codex}) atmosphere models, based on an improved self-excited pulsation code by \citet{keller2006} to determine the pressure and luminosity, and an opacity-sampling method to derive the gas and dust temperatures \citep[see, also, the review by][]{ireland2011b}. The mass density and velocity profiles in the extended atmosphere can also be computed. Thus far, the {\codex} models have only been compared to the spectro-interferometric and spectro-photometric observations of Mira variables in the infrared wavelengths \citep[e.g.][]{woodruff2009,wittkowski2011,hillen2012}. Our radiative transfer modelling of the submillimetre molecular transitions from Mira as observed by the ALMA long baselines, including the vibrational ground state lines that are less affected by excitation effects, can provide an alternative probe to the detailed physical conditions in Mira's extended atmosphere at a spatial scale of a few $R_{\star}$. We will present in Sects. \ref{sec:discuss-codex} and \ref{sec:discuss-acceleration} our modelling results using the predicted atmospheric structures from {\codex} and alternative velocity profiles in Mira's extended atmosphere.
\subsubsection{Kinematics}
\label{sec:discuss-codex}
As described above, our high angular-resolution ALMA long baseline data clearly resolve the line emission/absorption from the extended atmosphere of Mira. Employing radiative transfer modelling we can test the atmospheric structures predicted by the latest hydrodynamical models. In particular, we consider the hydrodynamical model series, {\codex}, developed by \citet{ireland2008,ireland2011} and \citet{scholz2014}. We use the \texttt{o54} $5400\,L_{\astrosun}$ model series, which is based on the stellar parameters of our target source, Mira ($o$ Cet). From the {\codex} developers we have obtained the atmospheric structure information of the \texttt{o54} model series, including the total mass density, kinetic temperature \citep[which is not the computed non-grey equilibrium temperature in the final model atmospheres, see][]{ireland2008}, and the radial expansion or infall velocity of the gas at different radii. Among the 6 complete stellar pulsation cycles computed by the developers, the models at the stellar phases closest to this 2014 ALMA SV dataset (near phase 0.45) are models 250420\footnote{These model numbers correspond to various phases of the models in the \texttt{o54} series in chronological order. Models 248480--251160 are the models in the compact atmospheric cycles, during which most of the mass in the model atmosphere is contained within a relatively small radial extent; models 260820--263740 are those in the extended atmospheric cycles; and models 285180--291860 cover almost four consecutive pulsation cycles of the model atmosphere, with intermediate extents of the mass-zones \citep[see Fig. 1 and Tables 2, 3, and 4 of][]{ireland2011}.} (phase 0.38), 261740 (0.40), 286060 (0.41), 287880 (0.40), 289440 (0.40), and 291820 (0.41) \citep{ireland2008,ireland2011}.
Because of the assumption of instantaneous shock dissipation, there is no extended zone of elevated gas kinetic temperature above the stellar brightness temperature in the extended atmosphere. The gas temperature profiles of the {\codex} models are, respectively, lower (by $<500\,{\rm K}$) and higher (by $<700\,{\rm K}$) than what we have empirically derived in Sect. \ref{sec:model_results} for the inner radii (${\lesssim}5 \times 10^{13}\,{\rm cm} \approx 2.5\,R_{\star}$) and the outer radii. The temperature in Mira's extended atmosphere is always $<2000\,{\rm K}$, except at radii very close to the radio continuum.
The {\codex} code computes the mass density of all particles in the atmosphere. In our {\ratran} input models, we convert the mass density to number density by division with the average mass of the particles in the wind, $m_{\rm part} = 0.7 m_{{\rm H}_2} + 0.3 m_{{\rm He}} = 4.3372 \times 10^{-24}\,{\rm g}$, assuming the typical helium mass fraction of $Y \approx 0.3$ \citep[e.g.][]{wood1977,lattanzio2003,ireland2008,ventura2013}. In our modelling, however, we only consider the collisions of H$_2$O and SiO with rotational ground-state H$_2$ molecules.
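As an illustration of this conversion (not part of our modelling pipeline; the variable names are ours), the following minimal Python sketch reproduces the adopted average particle mass and the division:
\begin{verbatim}
# Minimal sketch of the mass-to-number density conversion.
# Assumes 70% H2 and 30% He by mass (helium mass fraction Y ~ 0.3).
M_H  = 1.6735575e-24           # mass of a hydrogen atom [g]
M_H2 = 2.0 * M_H               # mass of an H2 molecule [g]
M_HE = 6.6464764e-24           # mass of a helium atom [g]

m_part = 0.7 * M_H2 + 0.3 * M_HE   # ~4.337e-24 g, as adopted above

def number_density(rho_mass):
    """Convert mass density [g cm^-3] to number density [cm^-3]."""
    return rho_mass / m_part
\end{verbatim}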
However, if we apply this conversion to the input models, the number density in Mira's extended atmosphere will be about $10^{8}$--$10^{10}\,{\rm cm}^{-3}$, which is too low to excite the molecules and to produce significant emission or absorption in the synthesised spectra. The deep absorption features observed towards the continuum disk and the emission profiles towards various radial distances from the star can only be reproduced by arbitrarily increasing the converted number density by a large factor. In our modelling, therefore, we scale the number density of all {\codex} models by a factor of $10^4$ such that the density outside the radio photosphere reaches at least $10^{12}\,{\rm cm}^{-3}$. This density is similar to the one adopted in our empirical modelling (Sect. \ref{sec:modelling}), and is also consistent with that derived by \citet{rm1997}, who modelled the centimetre-wavelength radio fluxes from the radio photospheres of Mira variables (including Mira). Furthermore, the density is also compatible with the lower limit of $10^{11}\,{\rm cm}^{-3}$, which is estimated by modelling the near-infrared H$_2$O spectrum and assuming a relatively high H$_2$O abundance of ${\sim}3 \times 10^{-4}$ \citep{yamamura1999}. If the adopted H$_2$O abundance is similar to our assumed value of ${\sim}10^{-5}$, then the lower limit of the gas density derived by \citet{yamamura1999} would also be close to the values in our modelling. We, however, note that the gas density should be interpreted with caution -- see our discussion in Sect. \ref{sec:limitations}.
In our initial test, we have replaced the gas number density (scaled by $10^4$), kinetic temperature, and radial velocity profiles of our preferred model (Model 3) with the {\codex}-predicted ones. All other parameters, namely the SiO abundance profile, the local velocity dispersion, and the radio continuum, are the same as in Model 3. The outer radius of our model is greater than the outer boundaries of the {\codex} models, which depend on the individual models. For each model, we extrapolate the number density and kinetic temperature in power laws near the outer boundary, i.e., by linear extrapolation in the log-log relation. We also assume the infall or expansion velocity at and beyond the {\codex} model boundary to be constant because we have no information on the kinematics beyond that. Since the gas density and molecular SiO abundance in the outer regions are usually too low to cause significant excitation, the precise extrapolation method has little effect on the synthesised spectra.
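For concreteness, the log-log linear extrapolation can be sketched as follows (a schematic Python snippet with hypothetical array names \texttt{r} and \texttt{y}; the actual {\codex} grids are not reproduced here):
\begin{verbatim}
import numpy as np

def powerlaw_extrapolate(r, y, r_new):
    """Extrapolate y(r) beyond the last grid point, assuming a
    power law, i.e. a straight line in log-log space through the
    two outermost grid points."""
    slope = ((np.log(y[-1]) - np.log(y[-2])) /
             (np.log(r[-1]) - np.log(r[-2])))
    return y[-1] * (r_new / r[-1])**slope
\end{verbatim}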
Figure \ref{fig:test-codexvel-a} shows the results of our initial test of the velocity profiles from these 6 {\codex} models. Remarkably, all the 6 models near the observed stellar phase are able to qualitatively reproduce the general spectral features, including (1) the strong absorption profile in the line-of-sight towards the radio continuum of Mira, and (2) the gradually decreasing emission flux and line width in the lines-of-sight towards increasing radial offsets from the star. Closer inspection reveals that all the 6 {\codex} models produce extra high-velocity absorption wings in either the redshifted or blueshifted parts of the spectra towards the centre of the continuum. The velocities of the absorption wings correspond to the high-velocity gas at $10$--$20\,\kms$ near the radio continuum of Mira, i.e., $3.6 \times 10^{13} \,{\rm cm}$ ($1.8\,R_{\star}$). Because there is no sign of any absorption wings broader than ${\pm} 10\,\kms$ in the observed spectra, the strong velocity variation as seen in these models, in particular models 261740, 286060, 287880, and 289440, cannot explain the specific atmospheric structure of Mira during the time of the 2014 ALMA SV observation.
The velocity profiles of {\codex} models 250420 and 291820 are qualitatively similar to our Model 3 (see Fig. \ref{fig:model3}, top-right panel). First, the extended atmosphere exhibits slowly varying infall motion over a large range of radii. Second, there is a sharp change in velocity, representing a strong shock front, with $\Delta V \gtrsim 10\,\kms$ just outside the radio continuum ($3.60 \times 10^{13} {\rm cm}$). The strong shock front in model 250420 is located at $3.64 \times 10^{13} {\rm cm}$ and that in model 291820 at $3.83 \times 10^{13} {\rm cm}$ \citep{ireland2011}. In our second test, we increase the radius of our radio \emph{pseudo}-continuum to engulf the strong shock fronts in models 250420 and 291820. Figure \ref{fig:test-codexvel-c} shows the modelled spectra as a result of hiding the strong shock fronts. The high-velocity absorption wings in the blueshifted part of the SiO spectra extracted from the continuum centre have disappeared, and hence the synthesised spectra can now better fit the observed ALMA spectra. We therefore conclude that, at the time of the 2014 ALMA SV observation (stellar phase ${\sim}0.45$), there did not exist any strong shock with a high velocity of $\Delta V \gtrsim 20\,\kms$ in the extended atmosphere of Mira \emph{beyond} the radius of its 229-GHz radio photosphere. Furthermore, the infall and outflow velocities of the gas beyond the radio photosphere of Mira are bounded by ${\sim}7\,\kms$ and ${\sim}4\,\kms$, respectively.
Our finding is consistent with previous observations of SiO maser emission from other oxygen-rich Mira variables. For example, \citet{wittkowski2007} found that the expansion velocity of the SiO maser-emitting shell around S Ori is about $10\,\kms$ by fitting the projected radii of the maser spots and their line-of-sight velocities; \citet{diamond2003} also found that the infall velocities of the SiO maser emission around TX Cam range from $5$ to $10\,\kms$ by tracing the proper motions of the maser spots. In fact, based on their shock damping model, \citet{rm1997b} have excluded shock propagation velocities significantly (by a few $\kms$) higher than $7\,\kms$ in order to explain the multi-epoch radio flux variation of Mira at 8\,GHz, assuming that the amplitude of the temperature disturbance due to shocks is ${\gtrsim}300\,{\rm K}$ or that the factor of density compression is ${\gtrsim}2$. Hence, any gas infall or outflow motion with speed above $10\,\kms$ is unlikely beyond the radio photosphere of Mira, and therefore the very high-velocity shocks of $\Delta V \gtrsim 20\,\kms$ as predicted by the {\codex} models, in particular models 285180 (phase 0.80) and 287980 (phase 0.61) in the same \texttt{o54} model series \citep{ireland2011}, are not expected.
The above exercises have demonstrated that the atmospheric structures predicted by the {\codex} models can qualitatively reproduce the general spectral features of the molecular transitions originating from the extended atmospheres and the inner wind of Mira. As already suggested by the authors of {\codex}, the derived structures of the model atmosphere, such as the stellar radius, mass density, gas temperature, and velocity, exhibit significant cycle-to-cycle variations and appear to be chaotic \citep{ireland2011,ireland2011b}. The combination of the expansion/infall velocity profile and the locations of shock fronts are different in each cycle. It may take tens of stellar cycles (over a decade) for similar atmospheric structures to reappear \citep[e.g. Fig. 1 of][]{ireland2011}. We therefore expect that the radio and (sub)millimetre spectra of the molecular transitions would also exhibit significant cycle-to-cycle variation, in addition to rapid variation within a single cycle. Long-term (multi-cycle and multi-epoch) monitoring of Mira variables with the ALMA long baseline is therefore necessary to fully test the predictions from hydrodynamical models, especially the amplitudes of pulsation-driven shocks above the radio photospheres of these stars.
\begin{figure*}[!htbp]
\centering
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1186-250420-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 250420}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1186-250420-spec_sio54v0_all.pdf} \\
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1187-261740-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 261740}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1187-261740-spec_sio54v0_all.pdf} \\
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1188-286060-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 286060}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1188-286060-spec_sio54v0_all.pdf}
\caption{Modified Model 3 with input gas density, kinetic temperature and velocity profiles from {\codex} models 250420 (top), 261740 (middle) and 286060 (bottom). Special scaling has been applied to the input gas density (see text). The infall velocity profiles are plotted on the left and the selected resultant spectra are shown on the right. We applied constant extrapolation of the infall velocity beyond the outer boundary of {\codex} models.}
\label{fig:test-codexvel-a}
\end{figure*}
\begin{figure*}[!htbp]
\ContinuedFloat
\centering
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1189-287880-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 287880}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1189-287880-spec_sio54v0_all.pdf} \\
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1190-289440-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 289440}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1190-289440-spec_sio54v0_all.pdf} \\
\raisebox{0.1\codexspecheight}{\includegraphics[height=\codexmodelheight]{./fig_codex/sio1191-291820-lv-vinf.pdf}}
\put(-190,160){{\parbox{1.3cm}{{\codexplt} \\ 291820}}}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, height=\codexspecheight]{./fig_codex/sio1191-291820-spec_sio54v0_all.pdf}
\caption[]{Continued for {\codex} models 287880 (top), 289440 (middle) and 291820 (bottom).}
\label{fig:test-codexvel-b}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\modelwidth]{./fig_codex/sio1184-250420-lv-vinf.pdf}
\put(-220,155){{\parbox{1.5cm}{{\codexplt} \\ 250420 \\ (modified)}}}
\includegraphics[width=\modelwidth]{./fig_codex/sio1185-291820-lv-vinf.pdf}
\put(-220,155){{\parbox{1.5cm}{{\codexplt} \\ 291820 \\ (modified)}}} \\
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_codex/sio1184-250420-spec_sio54v0_all.pdf}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_codex/sio1185-291820-spec_sio54v0_all.pdf} \\
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_codex/sio1184-250420-spec_sio54v2_all.pdf}
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_codex/sio1185-291820-spec_sio54v2_all.pdf}
\caption{Similar to Fig. \ref{fig:test-codexvel-a}, except that the radius of the \emph{pseudo}-continuum in the model, $R_{\rm continuum}$ is changed to $3.65 \times 10^{13} {\rm cm}$ for model 250420 (left column) and $3.85 \times 10^{13} {\rm cm}$ for model 291820 (right column). The modelled continuum levels are therefore slightly higher than the ones adopting $R_{\rm continuum} = 3.60 \times 10^{13} {\rm cm}$. The strong shock fronts predicted in these two models are hidden inside the radio continuum. The middle and bottom rows show the selected resultant SiO spectra from the corresponding models.}
\label{fig:test-codexvel-c}
\end{figure*}
\subsubsection{Wind acceleration}
\label{sec:discuss-acceleration}
\citet{hofner2003} also developed models of the dynamical atmospheres by solving time-dependent dynamic equations and frequency-dependent radiative transfer equations (i.e. non-grey dynamical models), including a time-dependent description of dust formation for carbon-rich atmospheres \citep[see, also, the review by][]{hofner2015}. Based on H\"{o}fner's hydrodynamical model, \citet{gautschyloidl2004} were able to reproduce the infrared spectra of carbon-rich stars in the wavelength range of 0.5--25\,$\mu$m, observed at various stellar phases, with a single consistent model atmosphere for each star. \citet{nowotny2005a,nowotny2005b,nowotny2010} have also compared the synthesised spectra of CO and CN in the infrared wavelengths with the observed spectra of carbon-rich stars and found that the general line profiles and radial velocities of the observed photospheric lines can be explained by the model atmospheres derived from the hydrodynamical code of \citet{hofner2003}. On the other hand, in the case of oxygen-rich model atmospheres, \citet{hofner2003} adopted a simple parametrised description for the dust opacity as in Eq. (5) of \citet{bowen1988a}. If non-grey radiative transfer were considered in the dynamical models, however, there would be too little radiative pressure to drive the winds in oxygen-rich stars \citep[e.g.][]{woitke2006,hofner2007a}. Both \citet{woitke2006} and \citet{hofner2007b} have concluded that the iron content of the wind-driving dust grains must be low, otherwise the dust condensation radius would be too far away from the star ($>10\,R_{\star}$) and hence dust-driven winds could not form. \citet{hofner2007a} have considered the possibility that the winds are driven by a small amount of carbon-bearing grains in the oxygen-rich atmospheres. By varying the grain properties in their dynamical models, \citet{hofner2008} has predicted that the size of the wind-driving (iron-free) grains must be in the narrow range between 0.1 and 10\,$\mu$m. \citet{scicluna2015} have reported the detection of large grains with an average size of ${\sim}0.5\,\mu$m in the oxygen-rich circumstellar envelope of the red supergiant (RSG) VY Canis Majoris based on optical polarimetric imaging. However, the implication of their results for AGB stars, which are the low-mass counterparts of RSGs, is unclear. In contrast, \citet{ireland2011} and \citet{ireland2011b} have found that both large iron-poor grains and small ($<70\,{\rm nm}$) iron-rich grains may drive the winds in oxygen-rich atmospheres, although the material still has to be lifted to a radius of at least $3$--$5\,R_{\star}$, where dust grains start to condense efficiently.
One notable difference between the results of the {\codex} code and that developed by H\"{o}fner is that {\codex} models exhibit large-scale velocity variations at radii up to ${\sim}10\,R_{\star}$, while the models based on H\"{o}fner's code only show large-scale velocity variation of $\Delta V > 10\,\kms$ within ${\sim}2$--$3\,R_{\star}$. In H\"{o}fner's model atmospheres, the winds are efficiently accelerated by (Fe-free) dust grains that condense at ${\sim}3\,R_{\star}$ \citep[e.g.][]{hofner2009,hofner2015} and therefore the velocity profile becomes a generally continuous outflow with small-amplitude pulsation ($\Delta V \lesssim 2\,\kms$) due to previous shock episodes \citep[e.g.][]{hofner2003,nowotny2005a,nowotny2010}.
We have conducted another test to examine whether large-scale velocity variations of $\Delta V {\sim} 5\,\kms$ at a large radial distance of ${\sim}10\,R_{\star}$ from the star are possible. Instead of employing H\"{o}fner's model atmospheres directly, we have constructed a model nearly identical to our preferred Model 3 but with a modified input velocity profile. Figure \ref{fig:test-velovariation} shows the results of this test. The left column presents the input velocity profile (top panel) and the modelled and observed SiO ${\varv} = 0$ (middle) and SiO ${\varv} = 2$ (bottom) spectra of Model 3 for comparison. The right column shows the input and results of the alternative velocity variation. This alternative model exhibits a significant increase in the infall velocity, with $\Delta V \approx 6\,\kms$ at ${\sim}9\,R_{\star}$, which is seen in Ireland's {\codex} atmospheres but not in H\"{o}fner's. As long as the infall velocity does not exceed ${\sim} 5\,\kms$, the modelled spectra do not show a significant difference regardless of the radial distance where the gas from the extra shock episode is located. Based on the molecular spectra of Mira at this particular stellar phase alone, we can neither distinguish whether such an extra episode of shocked gas exists at all nor determine its possible distance from the star. On the other hand, our test concerning the SiO molecular abundance in Sect. \ref{sec:discuss-dust} shows that the radius at which SiO starts to condense onto dust grains is at least $4\,R_{\star}$. This suggests that the actual wind acceleration may occur beyond ${\sim}4\,R_{\star}$ and hence some velocity variations could still be possible beyond ${\sim}2$--$3\,R_{\star}$.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\modelwidth]{./fig_models/sio1195-lv-vinf.pdf}
\includegraphics[width=\modelwidth]{./fig_models/sio1179-lv-vinf.pdf} \\
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_models/sio1195-spec_sio54v0_all.pdf}
\includegraphics[trim=1.0cm 2.0cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_models/sio1179-spec_sio54v0_all.pdf} \\
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_models/sio1195-spec_sio54v2_all.pdf}
\includegraphics[trim=1.0cm 7.3cm 2.0cm 1.5cm, clip, width=\modelwidth]{./fig_models/sio1179-spec_sio54v2_all.pdf}
\caption{The input radial velocity profile (top row) and the modelled SiO (middle and bottom rows) spectra of two nearly identical models, except that for the model on the left, there is only one large-scale velocity variation close to the radio continuum, while for the model on the right, there is an additional strong velocity variation near $1.8 \times 10^{14}\,{\rm cm} = 109\,{\rm mas} \sim 9\,R_{\star}$.}
\label{fig:test-velovariation}
\end{figure*}
\section{Conclusions}
\label{sec:concl}
\begin{enumerate}
\item With the long ALMA baselines of ${\sim}15\,{\rm km}$, we are now able to probe the physical conditions in the extended atmospheres and the inner winds (within the dust condensation zone) of AGB stars in unprecedented detail. Mira ($o$ Cet) has been observed as a Science Verification target in the 2014 ALMA Long Baseline Campaign. The angular resolution of the long baseline is ${\sim}30\,{\rm mas}$ at 220\,GHz, which is high enough to resolve the radio continuum of Mira. For the first time, spectral line absorption against the stellar radio continuum has been clearly imaged in the millimetre wavelengths (1.3\,mm) in the transitions of SiO ${\varv} = 0, 1, 2$ $J=5-4$ and H$_2$O $v_2=1$ $J_{K_a,K_c}=5_{5,0}-6_{4,3}$.
\item Through radiative transfer modelling, we are able to reconstruct the detailed physical conditions of Mira's extended atmosphere, namely the gas density, kinetic temperature, abundance of SiO and H$_2$O molecules, and the expansion/infall velocity as functions of radial distance from the star. We fit the SiO and H$_2$O spectra along the lines-of-sight towards the stellar continuum, and towards positions at various sky-projected radii and position angles from the star. In our preferred model, which successfully reproduces the spectra of SiO ${\varv} = 0, 2$ and H$_2$O $v_2=1$, the extended atmosphere shows infall motion in general. A shock of velocity $\Delta V \sim 12\,\kms$ is found above Mira's 229\,GHz radio photosphere. The SiO abundance drops significantly from $1 \times 10^{-6}$ to $1 \times 10^{-8}$--$1 \times 10^{-7}$ at a radius of about $1.0 \times 10^{14}\,{\rm cm} = 5\,R_{\star}$, where $R_{\star} = 12.3\,{\rm mas} = 292\,R_{\astrosun}$ is our adopted radius of Mira's infrared photosphere. However, we have also shown that the SiO depletion radius may indeed be anywhere from $4\,R_{\star}$ outwards. In addition, the H$_2$O spectra may be better fitted by adopting an abundance distribution that shows a sharp rise in abundance (by about 10 times) near the radio photosphere.
\item We have also tested the predictions from current hydrodynamical models, in particular the {\codex} model series that are tailored for Mira. We have used the predicted atmospheric structures as the inputs of our line radiative transfer modelling. The models successfully reproduce, qualitatively, the absorption features against the continuum. After fine-tuning the radial distances and the magnitudes of the major shock front(s), which are chaotic in nature, the synthesised spectra from the {\codex} models can fit the observed spectra in this SV observation reasonably well. Considering the chaotic nature of Mira's extended atmosphere, the modelled spectra from {\codex}'s atmospheres fit the observed ALMA spectra remarkably well. In addition, we have also demonstrated that some other models of Mira's circumstellar environment (e.g. the presence of a chromosphere) are not supported by these ALMA data.
\item We have carried out model fitting of Mira's radio continuum emission at 229.6\,GHz and compared our results with two other independent results published in mid-2015. Our continuum models for Mira A and B are consistent with those fitted by \citet{mrm2015}. The single uniform disk model for Mira A is consistent with a radio photosphere with a brightness temperature of $2611 \pm 51\,{\rm K}$. On the other hand, we have not found any sign of a compact (${\sim}4.7\,{\rm mas}$) hotspot ($T_b \sim 10\,000\,{\rm K}$) in Mira A's continuum as suggested by \citet{vro2015}, even though we have adopted essentially the same fitting procedure as theirs and cross-checked with another visibility fitting software.
\item The long ALMA baselines have demonstrated their capability to produce high-angular resolution and high-sensitivity spectral line images of the extended atmospheres of evolved stars. In order to test the validity of current hydrodynamical models and address the long-standing puzzle of dust condensation and the wind-driving mechanism in oxygen-rich evolved stars, long-term (multi-cycle and multi-epoch) monitoring of Mira variables and AGB stars is necessary.
\end{enumerate}
\begin{acknowledgements}
This paper makes use of the following ALMA data: ADS/JAO.ALMA{\#}2011.0.00014.SV. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank M. Scholz, M.~J. Ireland and P.~R. Wood for kindly providing their model atmospheres from the \texttt{o54} series of the Cool Opacity-sampling Dynamic EXtended ({\codex}) atmosphere model. We thank M.~J. Reid for a careful reading of the manuscript and for his highly useful comments. We also thank the anonymous referee, L.~D. Matthews, and E.~W. Greisen for their insightful comments and suggestions, which have inspired some of our tests to address the imaging issues. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. PyFITS is a product of the Space Telescope Science Institute, which is operated by AURA for NASA. K.~T. Wong was supported for this research through a stipend from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne and also by the Bonn-Cologne Graduate School of Physics and Astronomy (BCGS). K.~T. Wong also acknowledges the BCGS Honors Budget which covers the digitisation cost of the Ph.D. thesis of R.~C. Doel.
\end{acknowledgements}
\bibliographystyle{aa}
The spanning tree is a well-studied and fundamental structure in graph theory and combinatorics. The well-known minimum spanning tree (Min-ST) problem asks for a spanning tree of a weighted graph with minimum total edge weight. In contrast, the maximum spanning tree (Max-ST) problem asks for a spanning tree with maximum total edge weight.
In the context of abstract graphs, the two problems are algorithmically equivalent in the sense that an algorithm for finding a Min-ST can also find a Max-ST within the same time bound (by simply negating the edge weights), and vice versa. The situation is quite different in the context of geometric graphs, where vertices are points in the plane and edge weights are Euclidean distances between pairs of points. An algorithm that uses the geometry of the Euclidean plane for finding a Min-ST may not be useful for computing a Max-ST because there is no known geometric transformation for setting up a duality between the ``nearest'' and the ``farthest'' relations between points \cite{Monma1990}. In fact, the existing geometric algorithms for the Min-ST and Max-ST problems exploit different sets of techniques.
\begin{figure}[htb]
\centering
\setlength{\tabcolsep}{0in}
$\begin{tabular}{cccc}
\multicolumn{1}{m{.25\columnwidth}}{\centering\includegraphics[width=.17\columnwidth]{fig/Min-ST.pdf}}
&\multicolumn{1}{m{.25\columnwidth}}{\centering\vspace{0pt}\includegraphics[width=.17\columnwidth]{fig/Max-ST.pdf}}
&\multicolumn{1}{m{.25\columnwidth}}{\centering\includegraphics[width=.17\columnwidth]{fig/Max-NC-ST.pdf}}
&\multicolumn{1}{m{.25\columnwidth}}{\centering\vspace{0pt}\includegraphics[width=.19\columnwidth]{fig/Max-ST-NB.pdf}}
\\
(a) Min-ST&(b) Max-ST &(c) Max-NC-ST&(d) Max-ST-NB
\end{tabular}$
\caption{(a) minimum spanning tree, (b) maximum spanning tree, (c) longest noncrossing spanning tree, and (d) longest spanning tree with four neighborhoods colored red, green, blue, and purple.}
\label{setting-fig}
\end{figure}
The problems of computing spanning trees with enforced properties (such as having minimum weight, maximum weight, bounded degree, or being noncrossing) have been well-studied in the last decades for both abstract graphs and geometric graphs.
We study two problems related to maximum spanning trees in geometric graphs. The maximum spanning tree and related problems, in addition to their fundamental nature, find applications in the worst-case analysis of heuristics for various problems in combinatorial optimization \cite{Alon1995}, and in approximating maximum triangulations \cite[pp.~338]{Bern1996}. They also have applications in cluster analysis, where one needs to partition a set of entities into well-separated and homogeneous clusters \cite{Asano1988, Monma1990}. Maximum spanning trees are directly related to computing the diameter and farthest neighbors, which are fundamental problems in computational geometry with many applications \cite{Agarwal1991}.
In the classical Euclidean Max-ST problem we are given a set of $n$ points in the plane (as vertices) and we want to find a spanning tree of maximum total edge length, where the length of every edge is the Euclidean distance between its two endpoints; see Figure~\ref{setting-fig}. This problem can be solved in $O(n^2)$ time by Prim's algorithm \cite{Fredman1987} for abstract graphs, and in $O(n\log n)$ time by an algorithm that uses the geometry \cite{Monma1990}. In contrast to the Euclidean Min-ST, which is always noncrossing (because of the triangle inequality), the Euclidean Max-ST is almost always self-crossing.
One problem that we study in this paper is the {\em longest noncrossing spanning tree} (Max-NC-ST) problem which is to compute a noncrossing spanning tree of maximum length, as depicted in Figure~\ref{setting-fig}. It is not known whether or not this problem is NP-hard.
Another problem that we study is the {\em longest spanning tree with neighborhoods} (Max-ST-NB): Given a collection of $n$ regions (neighborhoods) in the plane, we want to find a maximum-length tree that connects $n$ representative points, one point from each polygon, as in Figure~\ref{setting-fig}. We emphasize that the tree should contain exactly one point from each neighborhood.
Each {\em neighborhood} is the union of simple polygons, and the neighborhoods are not necessarily disjoint. The neighborhoods are assumed to be colored by $n$ different colors.
The hardness of the Max-ST-NB problem is open. The difficulty lies in choosing the representative points; once these points are selected, the problem is reduced to the Euclidean Max-ST problem.
\subsection{Related work on the longest noncrossing spanning tree}
Inspired by the seminal work of Alon, Rajagopalan, and Suri in SoCG 1993 \cite{Alon1995}, the study of long noncrossing configurations in the plane has received considerable attention in recent years. Alon~{et~al.}~show how to compute constant-factor approximations of longest noncrossing spanning trees, perfect matchings, and Hamiltonian paths.
They show that the longest {\em star}, a tree in which one vertex is connected to all others, always gives a $0.5$-approximation of the longest spanning tree (a short proof of this claim is given in \cite{Dumitrescu2010}). As pointed out by Alon {et~al.}, the ratio $0.5$ between the lengths of a longest star and a longest (possibly crossing) spanning tree is the best possible (in the limit); this can be verified by placing $n/2$ points in an arbitrarily small neighborhood around $(0,0)$ and $n/2$ points in an arbitrarily small neighborhood around $(1,0)$. Therefore, to obtain a better approximation ratio one should take into account spanning trees other than stars. The ratio $0.5$ remained the best known for almost seventeen years until Dumitrescu and
T{\'{o}}th (STACS 2010) \cite{Dumitrescu2010} slightly improved it to $0.502$, which was then improved to $0.503$ by Biniaz {et~al.}~\cite{Biniaz2019}. The ratios $0.5$, $0.502$, and $0.503$ are obtained by considering the length of a longest (possibly crossing) spanning tree as the upper bound. Although such a tree provides a safe upper bound, it is not a valid solution for the Max-NC-ST problem. Alon~{et~al.}~show that if the longest crossing spanning tree were used as the upper bound then the approximation ratio could not be improved beyond $2/\pi<0.637$ because such a tree can be $\pi/2$ times longer than a longest noncrossing spanning tree.
Recently, Cabello~{et~al.}~\cite{Cabello2020} employed a longest noncrossing spanning tree as the upper bound and obtained a relatively significant improved ratio $0.512$.
The survey article by Eppstein \cite[pp.~439]{Eppstein2000} lists the hardness of Max-NC-ST as an open problem in the context of geometric network optimization. This problem has also been studied for other structures. Alon {et~al.}~show approximation ratios $2/\pi$ and $1/\pi$ for the longest noncrossing perfect matching and Hamiltonian path, respectively. The ratio for the Hamiltonian path was improved to $2/(1+\pi)$ by Dumitrescu and
T{\'{o}}th, who also gave the first (though not constant-factor) approximation algorithm for the longest noncrossing Hamiltonian cycle. The longest noncrossing spanning tree has also been studied in multipartite geometric graphs \cite{Biniaz2019}.
\subsection{Related work on the longest spanning tree with neighborhoods}
The Max-ST-NB problem has the same flavor as the Euclidean group Steiner tree problem in which we are given $n$ groups of points in the plane and the goal is to find a shortest tree that contains ``at least'' one point from each group. The general group Steiner tree problem is NP-hard and cannot be approximated by a factor $O(\log^{2-\epsilon} n)$ for any $\epsilon>0$ \cite{Halperin2003}.
The Max-ST-NB problem also falls within the concept of imprecision in computational geometry, where each input point is provided as a region of uncertainty and the exact position of the point may be anywhere in the region; see e.g. \cite{Dorrigiv2015, Loffler2010}.
Similar to Max-NC-ST, one can show that the longest star, in which one vertex of a polygon is connected to one vertex in every other polygon, achieves a $0.5$-approximate solution for the Max-ST-NB problem. Recently, Chen and Dumitrescu \cite{Chen2018} presented an approximation algorithm with the improved ratio $0.511$. Although their algorithm is simple, the analysis of its ratio is rather involved. They also show that the approximation ratio of an algorithm that always includes a bichromatic diametral pair as an edge in the solution cannot be better than $\sqrt{2-\sqrt{3}}\approx 0.517$.
Analogous problems have been studied for other structures with neighborhoods, e.g., the minimum spanning tree \cite{Blanco2017, Dorrigiv2015, Yang2007}, the traveling salesman tour \cite{Arkin1994, Mitchell2007, Mitchell2010}, and the convex hull \cite{Loffler2010, Kreveld2008}, to name a few.
\subsection{Our contributions and approach}
We report improved approximation ratios for the Max-NC-ST and the Max-ST-NB problems. Our results are summarized in the following theorems.
\begin{theorem}
\label{neighborhood-thr}
A $0.524$-approximation for the longest spanning tree with neighborhoods can be computed in linear time after computing a bichromatic diameter.
\end{theorem}
\begin{theorem}
\label{noncrossing-thr}
A $0.519$-approximation for the longest noncrossing spanning tree can be computed in polynomial time.
\end{theorem}
The new approximation ratios are obtained mainly by employing the Euclidean Steiner ratio, which has not been used in this context before. The employment is not straightforward. We use the Steiner ratio to handle a situation where some points lie far from the diameter of the input. This situation is a bottleneck of previous algorithms for both problems. To handle this situation we first obtain a lower bound on the length of the minimum spanning tree of a small subset of the input. We use this lower bound, along with the Steiner ratio, to obtain a lower bound on the length of the Steiner minimal tree of the subset. Then we use this new lower bound to construct a long spanning tree on the entire input. The employment of the Steiner ratio not only improves the approximation ratios but also simplifies the analysis. To see why this employment is nontrivial, one may think of the following counterintuitive question: how could a lower bound on the length of the ``minimum'' spanning tree lead to a constant-ratio approximation of the ``maximum'' spanning tree?
For the Max-NC-ST problem we give a polynomial-time approximation algorithm with the improved ratio $0.519$. The succession of improved ratios, from $0.5$ \cite{Alon1995} to $0.502$ \cite{Dumitrescu2010}, $0.503$ \cite{Biniaz2019}, and $0.512$ \cite{Cabello2020}, shows that even a small improvement requires a significant effort. To obtain the new ratio we borrow some ideas from previous works and combine them with some new ideas (including the use of the Steiner ratio) along with a more refined analysis. The ratios obtained by our algorithm and that of \cite{Cabello2020} are with respect to the longest ``noncrossing'' spanning tree, while the ratios of \cite{Alon1995, Biniaz2019, Dumitrescu2010} are with respect to the longest spanning tree.
For the Max-ST-NB problem we give an approximation algorithm with improved ratio $0.524$.
The algorithm is not complicated: we find a bichromatic diameter (a farthest pair of input vertices with different colors) and use it to compute three stars and one {\em double-star} (a tree of diameter 3), and then report the longest one. After computing a bichromatic diameter (which is a well-studied problem) the rest of the algorithm takes linear time. Our analysis of the ratio $0.524$ is relatively short, compared to that of Chen and Dumitrescu \cite{Chen2018} for the ratio $0.511$. The shortness comes again from the use of the Steiner ratio, which takes care of a bottleneck situation (described above)---Chen and Dumitrescu devoted a thorough analysis to handling this situation.
As a secondary result, for the Max-ST-NB problem we show that the approximation ratio of an algorithm that always includes a bichromatic diameter in the solution cannot be better than $0.5$, thereby improving the previous upper bound $0.517$ by Chen and Dumitrescu.
\section{Preliminaries for the algorithms}
\label{preliminaries}
Both our algorithms make extensive use of trees with low diameter, such as stars and double-stars. In fact our solutions for the Max-ST-NB and the Max-NC-ST problems have diameters at most three and six, respectively. A {\em star}, centered at a vertex $p$, is a tree in which every edge is incident to $p$. A {\em double-star}, centered at two vertices $p$ and $q$, is a tree that contains the edge $pq$ and in which every other edge is incident to either $p$ or $q$.
We denote the Euclidean distance between two points $p$ and $q$ in the plane by $|pq|$. A {\em geometric graph} is a graph whose vertices are points in the plane and whose edges are straight line segments. The length of a geometric graph $G$, denoted by $\len{G}$, is the total length of its edges. We denote by $D(p,r)$ the disk of radius $r$ that is centered at point $p$.
For a point set $P$, a {\em diametral pair} is a pair of points that attain the maximum distance. If the points in $P$ are colored, then a {\em bichromatic diametral pair} is defined as a pair of points with different colors that attain the maximum distance.
The {\em Euclidean Steiner tree} problem asks for the shortest connected geometric graph spanning a given set $P$ of points in the plane. The solution takes the form of a tree, called a Steiner minimal tree (SMT), that includes all points of $P$ along with zero or more extra vertices called {\em Steiner points} \cite{Bern1996}. The {\em Steiner ratio} is defined to be the infimum of the length of the Steiner minimal tree divided by the length of the minimum spanning tree, over all point sets in the plane:
\begin{linenomath*}
\[\rho=\inf_{P\subset \mathbb{R}^2}\left\{\frac{\len{\textrm{SMT}(P)}}{\len{\textrm{Min-ST}(P)}}\right\}.\]
\end{linenomath*}
An old conjecture of Gilbert and Pollak \cite{Gilbert1968} states that $\rho=\sqrt{3}/{2}\approx 0.866$; this is achieved when $P$ is the set of vertices of an equilateral triangle. Although the conjecture seems to still be open \cite{Innami2010, Ivanov2012}, it has been verified when $P$ has $3$ \cite{Gilbert1968}, $4$ \cite{Pollak1978}, $5$ \cite{Du1985}, or $6$ \cite{Rubinstein1991} points. For the purpose of our algorithms the original proof of Gilbert and Pollak for $|P|=3$ is enough.
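As a quick sanity check of this value, the ratio for the extremal configuration can be computed directly; the following illustrative Python snippet (not part of our algorithms) uses the fact that the Fermat point of an equilateral triangle is its centroid:
\begin{verbatim}
import math

# Vertices of a unit equilateral triangle.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

# The Min-ST consists of two sides of the triangle.
min_st = 2.0

# The Steiner minimal tree joins the three vertices to the Fermat
# point, which for an equilateral triangle is its centroid.
c = (0.5, math.sqrt(3) / 6)
smt = sum(math.hypot(x - c[0], y - c[1]) for (x, y) in pts)

print(smt / min_st)  # ~0.8660 = sqrt(3)/2
\end{verbatim}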
\paragraph{A simple $0.5$-approximation algorithm.} Chen and Dumitrescu~\cite{Chen2018} gave the following simple $0.5$-approximation algorithm for the Max-ST-NB (a similar approach was previously used in \cite{Dumitrescu2010}).
Take a bichromatic diametral pair $(a,b)$ from the given $n$ neighborhoods; $a$ and $b$ belong to two different neighborhoods. Each edge of every optimal solution $T^*$ has length at most $|ab|$, and thus $\len{T^*}\leqslant (n-1)|ab|$. Pick an arbitrary point $p$ from each of the other $n-2$ neighborhoods. Let $S_a$ be the star obtained by connecting $a$ to $b$ and to all points $p$. Define $S_b$ analogously. By the triangle inequality, $\len{S_a}+\len{S_b}\geqslant n |ab|\geqslant \len{T^*}$. Therefore the longer of $S_a$ and $S_b$ is a $0.5$-approximate solution for the Max-ST-NB problem. This idea also achieves a $0.5$-approximate solution for the Max-NC-ST problem (for which every input point can be viewed as a neighborhood).
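The argument translates into the following illustrative Python sketch (with each neighborhood given as a list of points; the bichromatic diameter is computed naively here, not with the efficient algorithms discussed later):
\begin{verbatim}
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def half_approx_tree(neighborhoods):
    """0.5-approximation for Max-ST-NB: the longer of the two stars
    S_a, S_b centered at a bichromatic diametral pair (a, b)."""
    # Naive bichromatic diameter: farthest pair of points that
    # belong to two different neighborhoods.
    best = (-1.0, None, None, None, None)
    for i, X in enumerate(neighborhoods):
        for j, Y in enumerate(neighborhoods):
            if i >= j:
                continue
            for p in X:
                for q in Y:
                    if dist(p, q) > best[0]:
                        best = (dist(p, q), i, p, j, q)
    _, i, a, j, b = best
    # Arbitrary representatives of the remaining neighborhoods.
    reps = [X[0] for k, X in enumerate(neighborhoods) if k not in (i, j)]
    star_a = [(a, b)] + [(a, p) for p in reps]
    star_b = [(a, b)] + [(b, p) for p in reps]
    length = lambda T: sum(dist(u, v) for u, v in T)
    return max(star_a, star_b, key=length)
\end{verbatim}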
\section{Maximum spanning tree with neighborhoods}
\label{neighborhood-section}
In this section we prove Theorem~\ref{neighborhood-thr}. Put $\delta = 0.524$.
We describe our algorithm for the Max-ST-NB first, as it is easier to understand; it also gives insight into our algorithm for the Max-NC-ST problem.
To facilitate comparisons we use the same notation as of Chen and Dumitrescu \cite{Chen2018}. Let $\mathcal{X}=\{X_1,X_2,\dots, X_n\}$ be the given collection of $n$ polygonal neighborhoods of $N$ total vertices. We may assume that each $X_i$ is colored by a unique color. We present a $\delta$-approximation algorithm for computing a longest spanning tree with neighborhoods in $\mathcal{X}$.
Our algorithm selects representative points only from boundary vertices of polygonal neighborhoods. Thus, in the algorithm (but not in the analysis) we consider each polygonal neighborhood $X_i$ as the set of its boundary vertices, and consequently we consider $\mathcal{X}$ as a collection of $N$ points colored by $n$ colors.
Define the {\em longest spanning star} centered at a vertex $p\in X_i$ as the star connecting $p$ to its farthest vertex in every other neighborhood.
\paragraph{Algorithm.}The main idea of the algorithm is simple: we compute a spanning double-star $D$ and three spanning stars $S_1, S_2, S_3$, and then report the longest one.
Let $(a,b)$ be a bichromatic diametral pair of $\mathcal{X}$.
After a suitable relabeling assume that $a \in X_1$ and $b\in X_2$. We compute $D$ as follows.
Add the edge $ab$ to $D$. For each $X_i$, with $i\in\{3,\dots,n\}$, find a vertex $p_i\in X_i$ that is farthest from $a$ and find a vertex $q_i\in X_i$ that is farthest from $b$ (it might be the case that $p_i=q_i$). If $|ap_i|\geqslant |bq_i|$ then add $ap_i$ to $D$ otherwise add $bq_i$ to $D$. Observe that $D$ spans all neighborhoods in $\mathcal{X}$, and each edge of $D$ has length at least $|ab|/2$. Now we introduce the three stars.
Let $a'$ be a vertex of $X_1$ that is farthest from $a$, and let $b'$ be a vertex of $X_2$ that is farthest from $b$. Notice that $a'$ has the same color as $a$, and $b'$ has the same color as $b$. We compute $S_1$ as the longest spanning star that is centered at $a'$, and we compute $S_2$ as the longest spanning star that is centered at $b'$.
Now let $c$ be a vertex in $\mathcal{X}$ that maximizes $|ac|+|bc|$. The vertex $c$ can be in any of the neighborhoods $X_1,\dots,X_n$, and it might be the case that $c=a'$ or $c=b'$. We compute $S_3$ as the longest spanning star that is centered at $c$.
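The following illustrative Python sketch (reusing the \texttt{dist} helper from the sketch in Sect.~\ref{preliminaries} and treating each neighborhood as its list of boundary vertices) assembles the four candidate trees; it makes no attempt at achieving the running time stated below:
\begin{verbatim}
def longest_star(center, neighborhoods, skip):
    """Longest spanning star: connect `center` to its farthest
    vertex in every neighborhood except the one with index `skip`."""
    return [(center, max(X, key=lambda v: dist(center, v)))
            for k, X in enumerate(neighborhoods) if k != skip]

def approx_max_st_nb(neighborhoods, i, a, j, b):
    """(a, b) is a bichromatic diametral pair, a in X_i, b in X_j."""
    # Double-star D: the edge ab plus, for every other neighborhood,
    # the longer of the farthest-from-a and farthest-from-b edges.
    D = [(a, b)]
    for k, X in enumerate(neighborhoods):
        if k in (i, j):
            continue
        p = max(X, key=lambda v: dist(a, v))
        q = max(X, key=lambda v: dist(b, v))
        D.append((a, p) if dist(a, p) >= dist(b, q) else (b, q))
    # Stars S1, S2 centered at a', b' (farthest same-color vertices).
    a2 = max(neighborhoods[i], key=lambda v: dist(a, v))
    b2 = max(neighborhoods[j], key=lambda v: dist(b, v))
    S1 = longest_star(a2, neighborhoods, i)
    S2 = longest_star(b2, neighborhoods, j)
    # Star S3 centered at the vertex c maximizing |ac| + |bc|.
    c, kc = max(((v, k) for k, X in enumerate(neighborhoods)
                 for v in X),
                key=lambda t: dist(a, t[0]) + dist(b, t[0]))
    S3 = longest_star(c, neighborhoods, kc)
    length = lambda T: sum(dist(u, v) for u, v in T)
    return max(D, S1, S2, S3, key=length)
\end{verbatim}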
\paragraph{Running time.} It is implied by a result of Biniaz~{et~al.}~\cite{Biniaz2018} that a bichromatic diametral pair of $\mathcal{X}$ can be found in $O(N\log N\log n)$ time (the algorithm of Bhattacharya and
Toussaint \cite{Bhattacharya1983} also computes a bichromatic diameter, but only for two-colored points). After finding $(a,b)$, the rest of the algorithm (finding $a', b', c$, and finding farthest points from $a$, $b$, $a'$, $b'$, $c$) takes $O(N)$ time.
\subsection{Analysis of the approximation ratio}
For the analysis we consider $\mathcal{X}$ as the initial collection of polygonal neighborhoods. Let $T^*$ denote a longest spanning tree with neighborhoods in $\mathcal{X}$.
It is not hard to see that for any point in the plane, its farthest point in a polygon $P$ must be a vertex of $P$ (see e.g. \cite[Chapter 7]{deBerg2008}). Thus, any bichromatic diameter of $\mathcal{X}$ is introduced by two vertices of polygons in $\mathcal{X}$. Hence the pair $(a,b)$, selected in the algorithm, is a bichromatic diametral pair of the initial collection $\mathcal{X}$. Therefore, $|ab|$ is an upper bound for the length of edges in $T^*$. Recall our assumption from the algorithm that $a\in X_1$ and $b\in X_2$.
After a suitable rotation, translation, and scaling assume that $ab$ is horizontal, $a=(0,0)$, and $b=(1,0)$. Since $|ab|=1$ and $T^*$ has $n-1$ edges,
\begin{linenomath*}
\begin{equation}
\label{eq-upperbound-1}
\len{T^*}\leqslant (n-1)|ab|\leqslant n-1.
\end{equation}
\end{linenomath*}
\begin{lemma}
\label{Sa-Sb}
The double-star $D$ is at least as long as any star that is centered at $a$ or at $b$.
\end{lemma}
\begin{proof}
Due to symmetry, we only prove this lemma for any star $S_a$ centered at $a$.
Recall that $D$ contains the edge $ab$ where $b\in X_2$, and for each $i\in\{3,\dots, n\}$ it contains the longer of $ap_i$ and $bq_i$, where $p_i$ and $q_i$ are the farthest vertices of $X_i$ from $a$ and $b$, respectively.
For any $i\in\{2,\dots, n\}$ let $r_i$ be the vertex of $X_i$ that is connected to $a$ in $S_a$. If $i=2$ then $|ar_i|\leqslant |ab|$. If $i>2$ then $|ar_i|\leqslant \max\{|ap_i|,|bq_i|\}$. Therefore $\len{D}\geqslant \len{S_a}$.
\end{proof}
Recall $a'$ and $b'$ as vertices of $X_1$ and $X_2$ that are farthest from $a$ and $b$, respectively.
\begin{lemma}
\label{aa-bb-large}
If $|aa'|\geqslant 2\delta$ or $|bb'|\geqslant 2\delta$ then $\max\{\len{S_1},\len{S_2},\len{D}\}\geqslant \delta\cdot \len{T^*}$.
\end{lemma}
\begin{proof}
First assume that $|aa'|\geqslant 2\delta$.
Choose a vertex $p_i$ in each neighborhood $X_i$, with $i\neq1$, and connect it to $a$ and to $a'$. We obtain two stars $S_a$ and $S_{a'}$ that are centered at $a$ and $a'$. The total length of these stars is at least
\begin{linenomath*} \[\sum_{i\in\{2,\dots,n\}}(|ap_i|+|p_ia'|)\geqslant \sum_{i\in\{2,\dots,n\}}|aa'|\geqslant 2\delta (n-1),\]
\end{linenomath*} and thus the longer star has length at least $\delta(n - 1)$. If $S_{a'}$ is longer then $\len{S_1}\geqslant \len{S_{a'}}\geqslant \delta(n - 1)\geqslant \delta\cdot\len{T^*}$, where the first inequality is implied by the fact that $S_1$ is the longest spanning star centered at $a'$ and the last inequality is implied by \eqref{eq-upperbound-1}. If $S_{a}$ is longer then $\len{D}\geqslant \len{S_{a}}\geqslant \delta(n - 1)\geqslant \delta\cdot\len{T^*}$, where the first inequality is implied by Lemma~\ref{Sa-Sb}.
If $|bb'|\geqslant 2\delta$, an analogous argument shows that the length of $S_2$ or $D$ is at least $\delta\cdot\len{T^*}$.
\end{proof}
Having Lemma~\ref{aa-bb-large} in hand, in the rest of this section we turn our attention to the case where $|aa'|<2\delta$ and $|bb'|<2\delta$.
The intersection of two disks is called a {\em lens}. Define the lens $L= D(a,1)\cap D(b,1)$ to be the region of distance at most $1$ from $a$ and $b$. Since $(a,b)$ is a bichromatic diametral pair and $|ab|=1$, all vertices of $X_3,\dots, X_n$ lie in $L$, all vertices of $X_1$ lie in $D(b,1)$, and all vertices of $X_2$ lie in $D(a,1)$. Moreover, since $|aa'|<2\delta$ all vertices of $X_1$ lie in the lens $L_1=D(b,1)\cap D(a,2\delta)$, and since $|bb'|<2\delta$ all vertices of $X_2$ lie in the lens $L_2=D(a,1)\cap D(b,2\delta)$. See Figure~\ref{lens}(a) for an illustration. In this setting, all vertices of $\mathcal{X}$ lie in $L_1\cup L_2$.
\begin{figure}[htb]
\centering
\setlength{\tabcolsep}{0in}
$\begin{tabular}{cc}
\multicolumn{1}{m{.5\columnwidth}}{\centering\includegraphics[width=.34\columnwidth]{fig/Lune-L-E.pdf}}
&\multicolumn{1}{m{.5\columnwidth}}{\centering\vspace{0pt}\includegraphics[width=.4\columnwidth]{fig/Lune-L-E-2.pdf}}
\\
(a)&(b)
\end{tabular}$
\caption{Illustration of (a) lenses $L$, $L_1$, $L_2$, and (b) the ellipse $\partial E$ and the region $Q$.}
\label{lens}
\end{figure}
\iffalse
\begin{observation}
\label{MST-obs}
Let $q$ be a point in $L_{1}\cup L_{2}$. Consider the set $P=\{a,b,q\}$. If $q\in L$ then the MST of $P$ contains edges $aq$ and $bq$. If $q\in L_{2}\!\setminus\! L_1$ then the MST of $P$ contains edges $aq$ and $ab$. If $q\in L_{1}\!\setminus\! L_2$ then the MST of $P$ contains edges $bq$ and $ab$.
\end{observation}
\fi
We fix a parameter $\omega=\frac{6\delta}{\sqrt{3}}-1\approx 0.815$.
Let $E$ be the set of all points whose total distance from $a$ and $b$ is at most $\omega+2\delta$, i.e., $E=\{x\in\mathbb{R}^2: |xa|+|xb|\leqslant \omega+2\delta\}$. The boundary $\partial E$ of $E$ is an ellipse with foci $a$ and $b$. Put $Q=(L_1
\cup L_2)\!\setminus\! E$, as depicted in Figure~\ref{lens}(b).
\begin{lemma}
\label{qa-qb}
For any point $q\in Q$ it holds that $|aq|>\omega$ and $|bq|>\omega$.
\end{lemma}
\begin{proof}
The point $q$ lies in $L_1\cup L_2$, and thus $|aq|\leqslant 2\delta$ and $|bq|\leqslant 2\delta$. The point $q$ does not lie in $E$, and hence $|aq|+|bq|>\omega+2\delta$. Combining these inequalities yields $|aq|>\omega$ and $|bq|>\omega$.
\end{proof}
The next ``helper lemma'' is the place where we use the Steiner ratio to obtain a lower bound on the length of the Steiner minimal tree of a subset of input vertices.
\begin{lemma}
\label{Steiner-lemma}
For any two points $q\in Q$ and $p\in\mathbb{R}^2$ it holds that $|pa|+|pb|+|pq|> 3\delta$.
\end{lemma}
\begin{proof}
Put $P=\{a,b,q\}$. We are going to obtain a lower bound for the Steiner minimal tree of $P$ via the minimum spanning tree of $P$. The point $q$ lies in $L_2\!\setminus\! L\!\setminus\! E$ or in $L_1\!\setminus\! L\!\setminus\! E$ or in
$L\!\setminus\! E$.
If $q\in L_2\!\setminus\! L\!\setminus\! E$, then $bq$ is the longest side of the triangle $\bigtriangleup(a,b,q)$; this case is depicted in Figure~\ref{lens}(b). In this case Min-ST($P$) has edges $aq$ and $ab$. Thus, $\len{\textrm{Min-ST}(P)}=|aq|+|ab|> \omega + 1$ where the inequality holds by Lemma~\ref{qa-qb} and the fact that $|ab|=1$. If $q\in L_1\!\setminus\! L\!\setminus\! E$, then analogously we get $\len{\textrm{Min-ST}(P)}> \omega+1$. If
$q\in L\!\setminus\! E$, then $ab$ is the longest side of the triangle $\bigtriangleup(a,b,q)$, and hence Min-ST($P$) has edges $aq$ and $bq$. Thus, $\len{\textrm{Min-ST}(P)}=|aq|+|bq|> \omega+2\delta>\omega+1$ where the first inequality holds because $q\notin E$. Therefore, in all cases we have $\len{\textrm{Min-ST}(P)}> \omega+1$.
The union of the three segments $pa$, $pb$, and $pq$ forms a tree that connects the points of $P$, as in Figure~\ref{lens}(b). The length of this tree cannot be smaller than the length of the Steiner minimal tree of $P$, and thus $|pa|+|pb|+|pq|\geqslant \len{\textrm{SMT}(P)}$.
By using the Steiner ratio for three points (proved by Gilbert and Pollak \cite{Gilbert1968}) we get
\begin{linenomath*}
\[|pa|+|pb|+|pq|\geqslant \len{\textrm{SMT}(P)}\geqslant \frac{\sqrt{3}}{2}\cdot\len{\textrm{Min-ST}(P)}> \frac{\sqrt{3}}{2}\cdot (\omega+1)=3\delta.\qedhere
\]\end{linenomath*}
\end{proof}
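The closing equality can be verified numerically (an illustrative Python check):
\begin{verbatim}
import math

delta = 0.524
omega = 6 * delta / math.sqrt(3) - 1            # ~0.8152

# sqrt(3)/2 * (omega + 1) equals 3 * delta (both ~1.572).
print(math.sqrt(3) / 2 * (omega + 1), 3 * delta)
\end{verbatim}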
The following two lemmas consider two cases depending on whether or not a vertex of $\mathcal{X}$ lies in $Q$. Both lemmas benefit from our helper Lemma~\ref{Steiner-lemma}. In Lemma~\ref{Q-not-empty} we use the helper lemma directly to obtain a lower bound on the maximum length of $S_3$ and $D$. In Lemma~\ref{Q-empty} we use the helper lemma indirectly to obtain a better upper bound on the length of $T^*$.
\begin{lemma}
\label{Q-not-empty}
If at least one vertex of $\mathcal{X}$ lies in $Q$ then $\max\{\len{S_3},\len{D}\}\geqslant \delta\cdot \len{T^*}$.
\end{lemma}
\begin{proof}
Let $q$ be any vertex of $\mathcal{X}$ in $Q$.
We have three cases: (i) $q\notin X_1$ and $q\notin X_2$, (ii) $q\in X_1$, (iii) $q\in X_2$. First consider case (i). After a suitable relabeling assume that $q\in X_3$. Consider an arbitrary representative vertex $p_i$ from each $X_i$ with $i\in\{4,\dots,n\}$. It is implied by Lemma~\ref{Steiner-lemma} that
\begin{linenomath*}
\[\sum_{i=4}^{n}|p_ia|+|p_ib|+|p_iq|>3\delta(n-3).\]
\end{linenomath*}
Let $x$ denote the point in $\{a,b,q\}$ that has the largest total distance to all points $p_i$. By bounding the maximum with the average, the total distance of the $p_i$'s to $x$ is at least $\delta(n-3)$. If $x=q$, then the star that connects $q$ to $p_4, \dots, p_n$, $a$, and $b$ has length \begin{linenomath*}
\[|qp_4|+\dots+|qp_n|+|qa|+|qb|>\delta(n-3)+ \omega + \omega> \delta (n-1)\geqslant \delta\cdot\len{T^*},\]
\end{linenomath*} where the inequalities hold by Lemma~\ref{qa-qb}, the fact that $\omega>\delta$, and \eqref{eq-upperbound-1}. In this case $\len{S_3}\geqslant \delta\cdot\len{T^*}$, where the vertex $c$ in the algorithm plays the role of $q$ in the analysis. If $x=a$, then the star that connects $a$ to $p_4, \dots, p_n$, $q$, and $b$ has length
\begin{linenomath*} \[|ap_4|+\dots+|ap_n|+|aq|+|ab|>\delta(n-3)+ \omega + 1>\delta(n-1)\geqslant \delta\cdot\len{T^*}.\]
\end{linenomath*}The length of this star is not larger than the length of $D$ (by Lemma~\ref{Sa-Sb}), and thus $\len{D}\geqslant \delta\cdot\len{T^*}$. If $x=b$, an analogous argument implies that $D$ is a desired tree, proving the lemma for case (i).
Now consider case (ii) where $q\in X_1$. Our proof of this case is somewhat similar to that of case (i). Consider an arbitrary representative vertex $p_i$ from each $X_i$ with $i\in\{3,\dots,n\}$. It is implied by Lemma~\ref{Steiner-lemma} that
$\sum_{i=3}^{n}|p_ia|+|p_ib|+|p_iq|>3\delta(n-2).$
Let $x$ denote the point in $\{a,b, q\}$ that has the largest total distance to all points $p_i$. The total distance of $p_i$'s to $x$ is at least $\delta(n-2)$. If $x=q$, then the star connecting $q$ to $p_3, \dots, p_n$, and $b$ has length at least $\delta(n-2)+ \omega> \delta (n-1)\geqslant \delta\cdot\len{T^*}$. In this case $\len{S_3}\geqslant \delta\cdot\len{T^*}$, where $c$ plays the role of $q$. If $x=a$, then the star connecting $a$ to $p_3, \dots, p_n$, and $b$ has length at least $\delta(n-2)+1> \delta\cdot\len{T^*}$. The length of this star is not larger than that of $D$, and thus $\len{D}\geqslant \delta\cdot\len{T^*}$. If $x=b$, by an analogous argument $D$ is a desired tree, proving the lemma for case (ii).
Our proof of case (iii) is analogous to that of case (ii).
\end{proof}
\begin{lemma}
\label{Q-empty}
If no vertex of $\mathcal{X}$ lies in $Q$ then $\len{D}\geqslant \delta\cdot \len{T^*}$.
\end{lemma}
\begin{proof}
In this case all vertices of $\mathcal{X}$ lie in the region $R=(L_{1} \cup L_{2}) \!\setminus\! Q$, as illustrated in Figure~\ref{double-lens}.
Define the lens $L'=D(a,\delta)\cap D(b,\delta)$. Denote by $l$ the lowest point of $L'$. Disregarding symmetry, denote by $f$ the topmost intersection point of boundaries of $D(b,2\delta)$ and $E$ as in Figure~\ref{double-lens}. Consider the smallest disk $D_l$ with center $l$ that contains the entire region $R$. By basic geometry of circle-circle intersection (that boundaries of $D_l$ and $D(b,2\delta)$ intersect at two points) and circle-ellipse intersection (that boundaries of $D_l$ and $E$ intersect at four points---these four points are marked in Figure~\ref{double-lens}) we can verify that $D_l$ passes through $f$. Thus $f$ is the farthest point in $R$ from $l$. Conversely, it follows from circle-circle intersection that $l$ is the farthest point of $L'$ from $f$. Therefore $|lf|$ is an upper bound for the distance between any point in $L'$ and any point in $R$, and so is an upper bound on the length of any edge of $T^*$ with an endpoint in $L'$.
\begin{figure}[htb]
\centering
\includegraphics[width=.5\columnwidth]{fig/Lune-L-E-3.pdf}
\caption{The length $|lf|$ is the largest possible for any edge having an endpoint in $L'$.}
\label{double-lens}
\end{figure}
Since $f$ lies on $\partial E$ we have $|af|+|bf|=\omega+2\delta$, and since it lies on the boundary of $D(b,2\delta)$ we have $|bf|=2\delta$. Thus, $|af|=\omega$.
Therefore $f$ is an intersection point of boundaries of $D(a,\omega)$ and $D(b,2\delta)$ that are centered at $(0,0)$ and $(1,0)$, respectively. The point $l$ is an intersection point of boundaries of $D(a,\delta)$ and $D(b,\delta)$. By using the Pythagorean theorem and the circle equation, we can obtain the coordinates of $f$ and $l$, and give the following expression for $|lf|$:
\begin{linenomath*}
\[|lf|=\sqrt{\left(\frac{\omega^2-4\delta^2}{2}\right)^2 + \left(\sqrt{\omega^2-\left(\frac{1+\omega^2-4\delta^2}{2}\right)^2}+\sqrt{\delta^2-\frac{1}{4}}\right)^2}\approx0.9464<0.95.\]
\end{linenomath*}
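As a sanity check, this expression can be evaluated numerically. The following Python sketch is ours and not part of the algorithm; it uses $\delta=0.524$ and the value $\omega=2\sqrt{3}\,\delta-1$ implied by the equality $\frac{\sqrt{3}}{2}(\omega+1)=3\delta$ appearing in the proof of Lemma~\ref{Steiner-lemma}:
\begin{verbatim}
from math import sqrt

delta = 0.524                   # the value of delta in this section
omega = 2*sqrt(3)*delta - 1     # from (sqrt(3)/2)*(omega+1) = 3*delta

x = (omega**2 - 4*delta**2) / 2     # horizontal offset between l and f
y = sqrt(omega**2 - ((1 + omega**2 - 4*delta**2)/2)**2) \
    + sqrt(delta**2 - 0.25)         # vertical offset between l and f
print(sqrt(x**2 + y**2))            # ~0.946402, hence |lf| < 0.95
\end{verbatim}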
We use $|lf|$ to obtain a better upper bound on the length of $T^*$.
Let $m$ be the number of neighborhoods $X_i$ that lie entirely in the interior of $L'$. Notice that $0\leqslant m\leqslant n-2$ ($X_1$ and $X_2$ do not lie in $L'$). The number of edges of $T^*$ that are incident to these neighborhoods is at least $m$. Since each such edge is of length at most $|lf|$ ($<0.95$), we get
\begin{linenomath*}
\begin{equation}
\label{better-up}
\len{T^*}\leqslant (n-m-1)
+0.95 m=n -0.05m-1.
\end{equation}
\end{linenomath*}
Now we compute a lower bound on the length of $D$ in terms of $m$. One vertex of each neighborhood in $L'$ is connected to $a$ or $b$ (whichever is the farthest) in $D$. Each such connection has length at least $0.5$. There are $n-m-2$ neighborhoods that have vertices outside $L'$ (excluding $X_1$ and $X_2$). One vertex of each such neighborhood is connected to $a$ or $b$ (whichever is the farthest) in $D$. Each such connection has length at least $\delta$. The points $a$ and $b$ are connected to each other in $D$. Thus
\begin{linenomath*}
\begin{align}
\notag\len{D}&\geqslant \delta(n-m-2)+0.5m+1\\ \notag &=0.524n-0.024m-0.048\\\notag & >0.524(n
-0.05m-1)\\\notag &\geqslant 0.524\cdot \len{T^*},
\end{align}
\end{linenomath*}
where the equality holds by plugging $\delta=0.524$, and the last inequality holds by \eqref{better-up}.
\end{proof}
The cases considered in Lemmas~\ref{aa-bb-large}, \ref{Q-not-empty}, and \ref{Q-empty} ensure that the length of one of $S_1$, $S_2$, $S_3$, and $D$ is at least $\delta\cdot\len{T^*}$.
This concludes our analysis and proof of Theorem~\ref{neighborhood-thr}.
\paragraph{Remark.} Although the stars $S_1$, $S_2$, $S_3$ are noncrossing, the double-star $D$ can have crossing edges. Thus the algorithm of this section cannot be used for the Max-NC-ST problem.
\paragraph{Inclusion of bichromatic diameter.} The simple $0.5$-approximation algorithm of Chen and Dumitrescu \cite{Chen2018} (described in Section~\ref{preliminaries}) always includes the bichromatic diametral pair $(a,b)$ in the solution. Chen and Dumitrescu argue that the approximation ratio of an algorithm that always includes a bichromatic diametral pair in the solution cannot be larger than $\sqrt{2-\sqrt{3}}\approx 0.517$. We present an input instance that improves this upper bound to $0.5$, thereby showing that the ratio of the simple algorithm is tight, in this sense.
\begin{figure}[htb]
\centering
\includegraphics[width=.48\columnwidth]{fig/upper-bound.pdf}
\caption{Illustration of the upper bound $0.5$ for inclusion of a bichromatic diametral pair.}
\label{diametral-pair}
\end{figure}
Consider four points $p_0=(0,0)$, $p_1=(1,0)$, $p_2=(2,0)$, and $p_3=(3-2\varepsilon,0)$ for arbitrary small $\varepsilon > 0$, e.g. $\varepsilon=1/n$. Our input instance consists of neighborhoods $X_1,\dots, X_n$ where $X_1=\{p_0, p_3\}$, $X_2=\{p_2\}$, and each of $X_3,\dots,X_n$ has exactly one point that is placed at distance at most $\varepsilon$ from $p_1$; see Figure~\ref{diametral-pair}. In this setting, $(p_0,p_2)$ is the unique bichromatic diametral pair. Consider any tree $T$ that contains the bichromatic diameter $p_0p_2$ (this means that $p_3$ is not in $T$). Any edge of $T$ incident to $X_3,\dots,X_n$ has length at most $1+\varepsilon$. Therefore $\len{T}\leqslant 2+(1+\varepsilon)(n-2)<n+1$. Now consider the tree $T^*$ that does not contain $p_0p_2$ but connects each of $X_2,\dots,X_n$ to $p_3$. The length of $T^*$ is at least $(1-2\varepsilon)+(2-3\varepsilon)(n-2)> 2n-6$. This establishes the upper bound $0.5$ on the approximation ratio because $\frac{\len{T}}{\len{T^*}}\leqslant \frac{n+1}{2n-6}$, which tends to $1/2$ in the limit.
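The two bounds above can be checked numerically; here is a small Python sketch (ours) of the ratio $\frac{\len{T}}{\len{T^*}}$ for this instance:
\begin{verbatim}
def ratio(n):
    eps = 1.0 / n
    len_T      = 2 + (1 + eps)*(n - 2)              # tree containing p0p2
    len_T_star = (1 - 2*eps) + (2 - 3*eps)*(n - 2)  # tree through p3
    return len_T / len_T_star

for n in (10, 100, 10**4, 10**6):
    print(n, ratio(n))              # approaches 0.5 as n grows
\end{verbatim}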
\section{Maximum noncrossing spanning tree}
In this section we prove Theorem~\ref{noncrossing-thr}. Put $\delta = 0.519$.
Our $\delta$-approximation algorithm for the Max-NC-ST problem borrows some ideas from previous works \cite{Alon1995, Biniaz2018, Cabello2020, Dumitrescu2010} and combines them with some new ideas (including the use of the Steiner ratio) along with a more refined analysis. For such a well-studied problem, neither coming up with new ideas nor their combination with previous ideas is an easy task.
We try to keep the proof short, but self-contained; we give a detailed description of our new ideas and a short description of borrowed ideas. To facilitate comparisons, we give a proper citation for each part of the proof that overlaps with previous works, and we use the same notation as in the most recent related work \cite{Cabello2020}.
Let $P\!\subset\! \mathbb{R}^2$ be the given point set of size $n$ and let $(u,v)$ be a diametral pair of $P$, throughout this section. After a suitable scaling assume that $|uv|=1$. Let $T^*$ be a longest noncrossing spanning tree of $P$. If $ab$ is a longest edge of $T^*$ then $|ab|\leqslant|uv|=1$, and thus
\begin{linenomath*}
\begin{equation}
\label{T-star-bound}\len{T^*}\leqslant (n-1)|ab|\leqslant (n-1)|uv|\leqslant n-1.
\end{equation}
\end{linenomath*}
Our plan is to construct a noncrossing spanning tree for $P$ of length at least $\delta\cdot\len{T^*}$. For a point $p\in P$, we denote by $S_p$ the star that connects $p$ to all other points of $P$. We start with the following simple lemma, proved in \cite{Dumitrescu2010}, which comes in handy in our construction.
\begin{lemma}
\label{two-star-lemma}
For any two points $p$ and $q$ in $P$ it holds that $\max\{\len{S_p},\len{S_q}\}\geqslant \frac{n}{2}|pq|$.
\end{lemma}
\begin{proof}
By bounding the maximum with the average and then using the triangle inequality we get:
\begin{linenomath*}
\[\max\left\{\len{S_p},\len{S_q}\right\}\geqslant \frac{1}{2}\left(\len{S_p}+\len{S_q}\right)= \frac{1}{2}\sum_{r\in P}(|pr|+|rq|)\geqslant \frac{1}{2}\sum_{r\in P}|pq|= \frac{n}{2}|pq|.\quad\qedhere
\]
\end{linenomath*}
\end{proof}
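Lemma~\ref{two-star-lemma} can also be probed empirically; the following Python sketch is a randomized sanity check of ours (the point set is arbitrary) and is not part of the algorithm:
\begin{verbatim}
import random
from math import dist               # Python >= 3.8

random.seed(1)
P = [(random.random(), random.random()) for _ in range(50)]
star = lambda c: sum(dist(c, r) for r in P)   # length of the star S_c
p, q = P[0], P[1]
assert max(star(p), star(q)) >= len(P)/2 * dist(p, q)
\end{verbatim}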
From Lemma~\ref{two-star-lemma} and \eqref{T-star-bound} we have $\max\{\len{S_u},\len{S_v}\}\geqslant\frac{n}{2}|uv|=\frac{n}{2}>\frac{1}{2}\cdot\len{T^*}$. Therefore, the longer of $S_u$ and $S_v$ is a $0.5$ approximation of $T^*$. As pointed out by Alon {et~al.}~\cite{Alon1995} the longest star may not give an approximation ratio better than $0.5$. To establish better ratios we need to consider trees that are not necessarily stars, and we need to incorporate more powerful ingredients.
Now we describe our $\delta$-approximation algorithm. It uses the noncrossing property of the optimal tree $T^*$ (similar to that of Cabello~{et~al.}~\cite{Cabello2020}).
\paragraph{Algorithm approach.}Guess a longest edge $ab$ of $T^*$, say by trying all pairs of points in $P$. For each guess $ab$, construct seven noncrossing spanning trees as described in the rest of this section. Then report the longest tree, over all guesses for $ab$.
\vspace{15pt}
From now on we assume that $ab$ is a longest edge of $T^*$. In the rest of this section we describe how to construct the seven noncrossing spanning trees in such a way that the length of the longest tree is at least $\delta\cdot\len{T^*}$. Fix a parameter $d=\frac{1}{2\delta}$.
\begin{lemma}
\label{ab-small}
If $|ab|\leqslant d$ then $ \max\left\{\len{S_u},\len{S_v}\right\}\geqslant \delta\cdot\len{T^*}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{two-star-lemma}, the fact that $|uv|=1=2\delta\cdot d$, our assumption that $|ab|\leqslant d$, and \eqref{T-star-bound} we get
\begin{linenomath*} \[\max\left\{\len{S_u},\len{S_v}\right\}\geqslant \frac{n}{2}|uv|=\delta \cdot nd\geqslant \delta \cdot n|ab|>\delta\cdot\len{T^*}.\qedhere\]
\end{linenomath*}\end{proof}
Having Lemma \ref{ab-small} in hand, in the rest of this section we assume that $d\leqslant |ab|\leqslant 1$.
After a suitable rotation and a translation assume that $a=(0,0)$ and $b=(|ab|,0)$. Define the lens $L=D(a,1)\cap D(b,1)$; see Figure~\ref{noncrossing-fig1}(a). Since the diameter of $P$ is $1$, all points of $P$ lie in $L$. Fix parameters $\omega=0.16$ and $\hat{\beta}=0.44$ and define
\begin{linenomath*}\[\hat{\alpha}=\frac{2\delta+3\omega-2}{\omega-1}-\hat{\beta}\quad\quad\quad\lambda=\frac{6\delta}{\sqrt{3}}+1-|ab|\quad\quad\quad\gamma=\frac{(2\delta+\hat{\alpha}-1)|ab|}{\hat{\alpha}}.\]\end{linenomath*}
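For concreteness, the chosen constants and the identities they are designed to satisfy (used in Lemmas~\ref{old-helper}, \ref{alpha-lemma}, and \ref{alpha-beta-lemma} below) can be checked with a few lines of Python (a sketch of ours; the variable names are not from the algorithm):
\begin{verbatim}
from math import sqrt

delta, omega, beta_hat = 0.519, 0.16, 0.44
alpha_hat = (2*delta + 3*omega - 2)/(omega - 1) - beta_hat
print(alpha_hat)                            # ~0.133810
g = (2*delta + alpha_hat - 1)/alpha_hat     # gamma = g*|ab| ~ 1.284*|ab|
print((1 + alpha_hat*(g - 1))/2)            # = delta (identity used later)
print((2 - 3*omega + (omega - 1)*(beta_hat + alpha_hat))/2)   # = delta
print(min(1/(2*delta), 6*delta/sqrt(3) - 1) > delta)          # True
\end{verbatim}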
We use the parameter $\lambda$ along with the Steiner ratio to take care of the situation where some input points lie far from $a$ and $b$---this situation is a bottleneck for previous algorithms.
Define $E_1=\{x\in\mathbb{R}^2: |xa|+|xb|\leqslant \lambda\}$. The boundary $\partial E_1$ is an ellipse with foci $a$ and $b$. Put $Q=L\!\setminus\! E_1$, as depicted in Figure~\ref{noncrossing-fig1}(a). Lemma~\ref{Steiner-lemma-2}, below, plays an important role in improving the approximation ratio. We keep its proof short as it is somewhat similar to our proof of Lemma~\ref{Steiner-lemma}.
\begin{lemma}
\label{qa-qb-2}
For any point $q\in Q$ it holds that $|aq|>\lambda-1$ and $|bq|>\lambda-1$.
\end{lemma}
\begin{proof}
Since $q \in L$, $|aq|\leqslant 1$ and $|bq|\leqslant 1$. Since $q\notin E_1$, $|aq|+|bq|>\lambda$. Combining these inequalities yields $|aq|>\lambda - 1$ and $|bq|>\lambda - 1$.
\end{proof}
The next ``helper lemma'' is the place where we use the Steiner ratio.
\begin{lemma}
\label{Steiner-lemma-2}
For any two points $q\in Q$ and $p\in\mathbb{R}^2$ it holds that $|pa|+|pb|+|pq|> 3\delta$.
\end{lemma}
\begin{proof}
Put $P'=\{a,b,q\}$, and define $L'=D(a,|ab|)\cap D(b, |ab|)$ as in Figure~\ref{noncrossing-fig1}(a). If $q\in L'\!\setminus\! E_1$ then $\textrm{Min-ST}(P')$ has edges $aq$ and $bq$, and thus $\len{\textrm{Min-ST}(P')}=|aq|+|bq|>\lambda\geqslant 6\delta/\sqrt{3}$. If $q\in L\!\setminus\!L'\!\setminus\! E_1$ then $\textrm{Min-ST}(P')$ has the edge $ab$ together with the shorter of $aq$ and $bq$, which we may assume is $aq$ by symmetry; this case is depicted in Figure~\ref{noncrossing-fig1}(a). Thus $\len{\textrm{Min-ST}(P')}=|aq|+|ab|>\lambda-1+|ab|=6\delta/\sqrt{3}$, where the inequality is implied by Lemma~\ref{qa-qb-2}. Therefore in all cases we have $\len{\textrm{Min-ST}(P')}>6\delta/\sqrt{3}$.
It follows from the Steiner ratio (for three points) that
\begin{linenomath*}
\[|pa|+|pb|+|pq|\geqslant \len{\textrm{SMT}(P')}\geqslant \frac{\sqrt{3}}{2}\cdot\len{\textrm{Min-ST}(P')}> \frac{\sqrt{3}}{2}\cdot \frac{6\delta}{\sqrt{3}}=3\delta.\qedhere
\]\end{linenomath*}
\end{proof}
\begin{figure}[htb]
\centering
\setlength{\tabcolsep}{0in}
$\begin{tabular}{cc}
\multicolumn{1}{m{.5\columnwidth}}{\centering\includegraphics[width=.39\columnwidth]{fig/Noncrossing-0.pdf}}
&\multicolumn{1}{m{.5\columnwidth}}{\centering\vspace{0pt}\includegraphics[width=.4\columnwidth]{fig/Noncrossing-1.pdf}}
\\
(a)&(b)
\end{tabular}$
\caption{(a) Illustration of the case where $P\cap Q\neq\emptyset$ and $q\in L\!\setminus\!L'\!\setminus\! E_1$. (b) Illustration of the longest edges starting from $c_1$ and $c_2$ when $P\cap Q=\emptyset$.}
\label{noncrossing-fig1}
\end{figure}
Lemmas \ref{old-helper} and \ref{beta-lemma} distinguish between two cases where $P\cap Q\neq \emptyset$ and $P\cap Q= \emptyset$. Both lemmas benefit from our helper Lemma~\ref{Steiner-lemma-2}. A combination of these three lemmas leads to a significant improvement on the approximation ratio. In Lemma~\ref{old-helper} we use the helper lemma directly to obtain a long tree for the case $P\cap Q\neq \emptyset$, which is a bottleneck case for previous algorithms. In Lemma \ref{beta-lemma} we use the helper lemma indirectly to obtain a better upper bound for the length of $T^*$.
\begin{lemma}
\label{old-helper}
If $P\cap Q\neq \emptyset$ then there is a noncrossing spanning tree for $P$ of length at least $\delta\cdot\len{T^*}$.
\end{lemma}
\begin{proof}
Consider any point $q\in P\cap Q$. By Lemma~\ref{qa-qb-2}, $|aq|\geqslant \lambda-1=6\delta/\sqrt{3}-|ab|$ and $|bq|\geqslant \lambda-1=6\delta/\sqrt{3}-|ab|$. Recall that $\frac{1}{2\delta}=d\leqslant|ab|\leqslant 1$ and $\delta=0.519$. Then
\begin{linenomath*}
\begin{equation}
\label{Sx-eq-1}
\min\left\{|ab|,|aq|,|bq|\right\}\geqslant \min\left\{\frac{1}{2\delta},\frac{6\delta}{\sqrt{3}}-|ab| ,\frac{6\delta}{\sqrt{3}}-|ab|\right\}\geqslant \min\left\{\frac{1}{2\delta},\frac{6\delta}{\sqrt{3}}-1 ,\frac{6\delta}{\sqrt{3}}-1\right\}>\delta.
\end{equation}
\end{linenomath*}
Let $p_4,\dots,p_n$ denote the points in $P\setminus\{a,b,q\}$. It is implied by Lemma~\ref{Steiner-lemma-2} that
\begin{linenomath*}
\begin{equation}
\label{Sx-eq-2}
\sum_{i=4}^{n}|p_ia|+|p_ib|+|p_iq|>3\delta(n-3).
\end{equation}
\end{linenomath*}
Denote by $x$ the point in $\{a,b,q\}$ that has the largest total distance to $p_4,\dots,p_n$, and denote by $y$ and $z$ the other two points. It is implied from \eqref{Sx-eq-2} that the total distance of $p_i$'s to $x$ is at least $\delta(n-3)$. In this setting, the star $S_x$ is a desired noncrossing tree because
\begin{linenomath*}
\[\len{S_x}=|xp_4|+\dots+|xp_n|+|xy|+|xz|> \delta(n-3)+\delta+\delta=\delta(n-1)\geqslant \delta\cdot\len{T^*},\]
\end{linenomath*} where the first inequality is implied by \eqref{Sx-eq-1} and the second inequality is implied by \eqref{T-star-bound}.
\end{proof}
Subdivide $L$ into three parts by two vertical lines $\ell_1$ and $\ell_2$ at $\omega|ab|$ and $(1-\omega)|ab|$, respectively (a similar subdivision is used in \cite{Biniaz2019, Cabello2020, Dumitrescu2010}). Define $E_2=\{x\in\mathbb{R}^2: |xa|+|xb|\leqslant \gamma\}$.
Let $M$ be the part of $L\cap E_2$ between $\ell_1$ and $\ell_2$. See Figure~\ref{noncrossing-fig1}(b). Let $\alpha$ be the fraction of points in $L\!\setminus\!E_2$, and let $\beta$ be the fraction of points in $M$. Observe that $1-(\alpha+\beta)$ fraction of points lie in parts of $L\cap E_2$ that are to the left of $\ell_1$ and to the right of $\ell_2$.
The next lemma is the place where we use the noncrossing property of $T^*$, similar to that of \cite[Lemma 3.2]{Cabello2020}.
Since Lemma \ref{old-helper} takes care of points in $Q$, we turn our attention to the case where no point of $P$ lies in $Q$. This constraint helps us to obtain a better upper bound on the length of $T^*$, which in turn leads to a better lower bound on the maximum length of the stars $S_u$ and $S_v$. Recall that $(u,v)$ is a diametral pair of $P$ and that $|uv|=1$.
\begin{lemma}
\label{beta-lemma}
If $P\cap Q=\emptyset$ and $\beta\geqslant\hat{\beta}$ then $\max\{\len{S_u},\len{S_v}\}\geqslant \delta\cdot\len{T^*}$.
\end{lemma}
\begin{proof}
Recall that $ab$ is a longest edge of $T^*$. Since $P\cap Q=\emptyset$, all points of $P$ lie in $L\cap E_1$. Let $c_1$ be the top intersection point of $\ell_1$ and $\partial E_2$, and let $c_2$ be the intersection point of $\ell_1$ and $ab$, as in Figure~\ref{noncrossing-fig1}(b). If we ignore symmetry then it follows from convexity that the longest possible edge starting in $M$ and not intersecting $ab$ has either $c_1$ or $c_2$ as an endpoint. The longest edge starting from $c_1$ and not intersecting $ab$ ends at the intersection point $z_1$ of the line through $c_1$ and $b$ with the boundary of $L$. The longest edge starting from $c_2$ ends at the intersection point $z_2$ of the boundaries of $L$ and $E_1$. See Figure~\ref{noncrossing-fig1}(b) for an illustration of these edges. Observe that $z_2$ is also an intersection point of the boundaries of $D(a,1)$ and $D(b, \lambda-1)$. By using the Pythagorean theorem and the ellipse and circle equations, we can obtain the coordinates of $c_1,c_2,z_1,z_2$, and give the following expressions for $|c_1z_1|$ and $|c_2z_2|$:
\begin{linenomath*}
\[
|c_2z_2|=\sqrt{\left(\frac{1+|ab|^2-\left(\lambda-1\right)^2}{2|ab|}-\omega|ab|\right)^2 + 1-\left(\frac{1+|ab|^2-\left(\lambda-1\right)^2}{2|ab|}\right)^2}
\]
\end{linenomath*}
\begin{linenomath*}
\[
|c_1b|=\sqrt{\left(1-\omega\right)^2|ab|^2+\frac{\left(\gamma^2-|ab|^2\right)}{\gamma^2}\left(\left(\frac{\gamma}{2}\right)^2-\left(\frac{|ab|}{2}-\omega|ab|\right)^2\right)}
\]
\end{linenomath*}
\begin{linenomath*}
\[|c_1z_1|\leqslant |c_1b|+\frac{\left(1-|ab|\right)|c_1b|}{\left(1-\omega\right)|ab|}
\]
\end{linenomath*}
where the upper bound on $|c_1z_1|$ is obtained by extending $c_1z_1$ to intersect the vertical line through the rightmost point of $L$; this line is depicted in Figure~\ref{noncrossing-fig1}(b).
\begin{figure}[htb]
\centering
\includegraphics[width=.9\columnwidth]{fig/plot2.jpg}
\caption{Plots of $|c_1z_1|$ (in red) and $|c_2z_2|$ (in blue) over $|ab|$ in interval $[d,1]$ (in green).}
\label{plot}
\end{figure}
Both $|c_1z_1|$ and $|c_2z_2|$ depend only on $|ab|$, and thus we denote them by functions $f_1(|ab|)$ and $f_2(|ab|)$, respectively. By considering the plots of these functions (Figure~\ref{plot}) it follows that for any $|ab|$ in interval $[d,1]$ we have $f_1(d)\geqslant f_1(|ab|)$ and $f_1(|ab|)>f_2(|ab|)$. Therefore, the longest possible edge starting in $M$ has length at most $f_1(d)$. By plugging the chosen constants in the above formula we get $f_1(d)\approx 0.913117<0.914$. Hence we obtain the following upper bound for the length of $T^*$:
\begin{linenomath*}
\[\len{T^*}\leqslant (1-\hat{\beta})n+0.914\hat{\beta}n=(1-0.086\hat{\beta})n<0.963n.\]
\end{linenomath*}
Recall that $\max\{\len{S_u},\len{S_v}\}\geqslant n/2$. Therefore,
\begin{linenomath*}
\[\frac{\max\{\len{S_u},\len{S_v}\}}{\len{T^*}}\geqslant\frac{0.5n}{0.963n}>0.519=\delta.\qedhere\]
\end{linenomath*}
\end{proof}
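The numeric claims in the proof of Lemma~\ref{beta-lemma} can be reproduced directly; below is a Python sketch (ours) that evaluates $f_1(d)$ via the formulas for $|c_1b|$ and the upper bound on $|c_1z_1|$, together with the two subsequent bounds:
\begin{verbatim}
from math import sqrt

delta, omega, beta_hat = 0.519, 0.16, 0.44
alpha_hat = (2*delta + 3*omega - 2)/(omega - 1) - beta_hat
ab = 1/(2*delta)                    # f1 attains its maximum at |ab| = d
gamma = (2*delta + alpha_hat - 1)*ab/alpha_hat
c1b = sqrt((1 - omega)**2 * ab**2
           + (gamma**2 - ab**2)/gamma**2
             * ((gamma/2)**2 - (ab/2 - omega*ab)**2))
f1 = c1b + (1 - ab)*c1b/((1 - omega)*ab)
print(f1)                           # ~0.913117 < 0.914
print(1 - 0.086*beta_hat)           # ~0.96216  < 0.963
print(0.5/0.963)                    # ~0.51921  > 0.519 = delta
\end{verbatim}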
The following two lemmas do not use the constraint $P\cap Q=\emptyset$.
The next lemma, presented in \cite{Cabello2020}, is adapted to work for our definition of $\hat{\alpha}$ and $\gamma$.
\begin{lemma}
\label{alpha-lemma}
If $\alpha\geqslant\hat{\alpha}$ then $\max\{\len{S_a},\len{S_b}\}\geqslant \delta\cdot\len{T^*}$.
\end{lemma}
\begin{proof}
The proof is somewhat similar to that of Lemma~\ref{two-star-lemma}, except now we have a better lower bound for points in $L\!\setminus \! E_2$, which is $\gamma$. Thus
\begin{linenomath*}
\[\max\{\len{S_a},\len{S_b}\}\geqslant \frac{\len{S_a}+\len{S_b}}{2}\geqslant \frac{\alpha n\cdot\gamma+(1-\alpha)n\cdot|ab|}{2}=\frac{n(|ab|+\alpha(\gamma-|ab|))}{2}.\]
\end{linenomath*}
This, together with the fact that $\len{T^*}<n|ab|$ by \eqref{T-star-bound}, implies that
\begin{linenomath*}
\[\frac{\max\{\len{S_a},\len{S_b}\}}{\len{T^*}}> \frac{n(|ab|+\alpha(\gamma-|ab|))}{2n|ab|}\geqslant \frac{|ab|+\hat{\alpha}(\gamma-|ab|)}{2|ab|}=\delta.\qedhere\]
\end{linenomath*}
\end{proof}
The construction in the following lemma is adapted from one by Biniaz {et~al.}~\cite{Biniaz2019}. We refine the construction according to our choice of points $a$, $b$, and refine the analysis according to our definition of $\omega$, $\hat{\alpha}$, and $\hat{\beta}$.
\begin{lemma}
\label{alpha-beta-lemma}
If $\alpha\leqslant \hat{\alpha}$ and $\beta\leqslant \hat{\beta}$ then there is a noncrossing spanning tree for $P$ of length at least $\delta\cdot\len{T^*}$.
\end{lemma}
\begin{proof}
We keep the description short. We construct two trees $T_a$ and $T_b$ such that the longer one is a desired tree. We describe the construction for $T_a$; the construction of $T_b$ is analogous.
\let\qed\relax\end{proof}
\vspace{-8.5pt}
\begin{wrapfigure}{r}{1.75in}
\centering
\vspace{+1pt}
\includegraphics[width=1.6in]{fig/Noncrossing-2.pdf}
\vspace{-5pt}
\end{wrapfigure}\noindent
\indent See the figure to the right for an illustration. Start by connecting all points to the right of $\ell_2$ to $a$ (red edges). Each such edge has length at least $(1-\omega)|ab|$. Let $ap_1,\dots, ap_m$ be these edges in radial order around $a$ as in the figure; notice that $m\geqslant 1$ because $b$ is to the right of $\ell_2$. Now we connect the points to the left of $\ell_1$: connect all points below $ap_1$ (that are not above $ab$) to $p_1$, connect all points above $ap_m$ (that are not below $ab$) to $p_m$, and connect all points between two consecutive edges $ap_i$ and $ap_{i+1}$ to $p_i$ for $1\leqslant i\leqslant m-1$ (blue edges). Each new edge has length at least $(1-2\omega)|ab|$.
Finally we connect the points in the region between $\ell_1$ and $\ell_2$. Let $\beta'$ be the fraction of points in this region. This region is subdivided into subregions by the current (red and blue) edges of $T_a$. Each subregion is bounded by one or two edges of $T_a$ in such a way that at least one edge is fully visible from the interior of the subregion. Connect all points in each subregion to the endpoint (of the visible edge) that gives a larger total distance; these new edges are shown in green. By an argument similar to the proof of Lemma~\ref{two-star-lemma} one can show that the total length of the new edges is at least $\beta'n(1-2\omega)|ab|/2$.
By the definition of $\alpha$, $\beta$, and $\beta'$ it holds that $\beta'\leqslant \alpha+\beta$. Since $\alpha\leqslant \hat{\alpha}$ and $\beta\leqslant \hat{\beta}$, we have $\beta'\leqslant \hat{\alpha}+\hat{\beta}$. The total fraction of points to the left of $\ell_1$ and to the right of $\ell_2$ is $1-\beta'$. Thus
\begin{linenomath*}
\begin{align}
\notag \len{T_a}+\len{T_b}&\geqslant (1-\beta')n(2-3\omega)|ab|+\beta'n(1-2\omega)|ab|\\ \notag
&=(2-3\omega+(\omega-1)\beta')n|ab|\\ \notag &\geqslant (2-3\omega+(\omega-1)(\hat{\beta}+\hat{\alpha}))n|ab|,
\end{align}
\end{linenomath*}
where the last inequality holds because $\omega-1<0$. Therefore
\begin{linenomath*}
\[
\pushQED{\qed}
\max\left\{\len{T_a},\len{T_b}\right\}\geqslant\frac{2-3\omega+(\omega-1)(\hat{\beta}+\hat{\alpha})}{2}\cdot n|ab|=\delta \cdot n|ab|> \delta\cdot\len{T^*}.\qedhere
\popQED\]\end{linenomath*}
The cases considered in Lemmas \ref{ab-small}, \ref{old-helper}, \ref{beta-lemma}, \ref{alpha-lemma}, and \ref{alpha-beta-lemma} ensure that at least one of $S_u$, $S_v$, $S_a$, $S_b$, $T_a$, $T_b$, and $S_q$ (introduced in Lemma~\ref{old-helper}) has length at least $\delta\cdot\len{T^*}$. This concludes our analysis and proof of Theorem~\ref{noncrossing-thr}.
\section{Conclusions}
A natural open problem is to improve the presented bounds ($0.524$ for the Max-ST-NB problem and $0.519$ for the Max-NC-ST problem) further by employing more new ideas.
We believe there is a possibility to slightly improve both bounds by discretization. For example, in our Max-ST-NB (resp. Max-NC-ST) algorithm, if the points in the lens $L'$ (resp. the region $M$) are close to $ab$, then one could obtain a better upper bound on the length of $T^*$; otherwise, one could obtain a better lower bound on the length of the double-star $D$ (resp. the star $S_a$ or $S_b$).
However, the improvement would be minor, and the required case analysis may hide the impact and beauty of the main techniques.
The improved approximation ratios are obtained mainly by employing the Steiner ratio, which has not been used in this context earlier. It would be interesting to see if the Steiner ratio can be used to design better algorithms for other related problems.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro}
Recent research \cite{dixit2013towards, tootoonchian2010hyperflow, levin2012logically, koponen2010onix, berde2014onos} in Software-Defined Networking (SDN) employs multiple distributed controllers for scalability and reliability reasons.
Using distributed controllers allows the network to scale out without introducing bottlenecks or a single point of failure.
It also provides the network with redundancy and fault-tolerance.
Distributed SDN controllers need to communicate (via east/west interfaces) in order to synchronize their state information (we call this process \emph{controller state distribution}). Hence, they are subjected to issues similar to those affecting distributed datastores \cite{panda2013cap}.
A major issue is the trade-off between consistency and availability in the case of network partitioning, which was identified by Eric Brewer in the CAP (Consistency, Availability and Partitioning) conjecture \cite{brewer2000towards, brewer2012cap}.
The CAP conjecture states that in the case of network partitioning, a distributed system will have to choose between the consistency of the data and the availability of the system. Systems that prefer consistency over availability are labeled as \emph{strongly-consistent} systems, while systems that have the ability to change their behavior (degree of consistency) are known as \emph{tunably-consistent} systems \cite{cassandraconfig, yu2000building}.
In SDN, the consistency level of state information exchanged among the distributed controllers can negatively affect the network application performance \cite{levin2012logically, Guo201495, aslan2016impact}, depending on the performance indicators being considered.
There exists a multitude of SDN applications, having different performance indicators. As such, some of these applications can tolerate state information inconsistency for the sake of higher availability. Therefore, applications could be built on top of tunably-consistent distributed controllers, which could be tuned differently for each application. Earlier~\cite{aslan2016adaptive}, we proposed the use of adaptive controllers running on top of tunably-consistent controllers in order to autonomously handle setting the parameters of the tunably-consistent distributed controllers based on application-specific performance indicators.
In this paper, we investigate the feasibility of using adaptive controllers running on top of tunable consistency models similar to those of Apache Cassandra \cite{lakshman2009cassandra, lakshman2010cassandra} or Amazon DynamoDB \cite{sivasubramanian2012amazon}.
We present a controller adaptation strategy that, given an application-specific indicator ($\chi$), autonomously tunes the consistency level ($\Phi$) of the distributed controllers in order to maintain a certain value for that indicator. In presenting such a strategy, we make the following contributions:
(1) we show how to quantize the level of consistency (subsection \ref{sec:proposed:tcm}) and how to use it in selecting appropriate values for the tunable consistency model parameters, and
(2) we show how online clustering techniques can be employed (subsection \ref{sec:proposed:am}) in order to map the application-specific performance indicators ($\chi$) into various consistency levels ($\Phi$).
The rest of this paper is organized as follows:
In \S\ref{sec:discussion}, we discuss the need for adaptive controllers in distributed SDN deployments.
We provide an overview on the topics of eventual and tunable consistency models in \S\ref{sec:background}.
\S\ref{sec:proposed} is the proposed realization of adaptive SDN controllers.
The evaluation is presented in \S\ref{sec:results}.
Finally, \S\ref{sec:conclusion} concludes the paper and outlines foreseeable future work.
\section{The Case for SDN Adaptive Controllers}\label{sec:discussion}
As aforementioned, the use of distributed controllers in large-scale SDN deployments is crucial. First, it can reduce control delays, as the control load is handled by multiple controllers as opposed to a single one. Second, it extricates the network from having a single point of failure embodied by the controller, and hence increases the reliability and fault-tolerance of the network. Finally, employing distributed controllers allows the network to scale out (horizontally) by adding more controllers. Dixit \emph{et al.} \cite{dixit2013towards} suggested dynamically growing and shrinking the pool of controllers based on the traffic conditions, and getting rid of the static controller/switch mapping, which can lead to an uneven distribution of the control load.
Managing distributed controllers in large-scale SDN environments can be a daunting task. First, those controllers are subjected to issues that affect distributed datastores \cite{panda2013cap}, including the trade-off between consistency and availability of data during network partitioning. Next, there is a great number of SDN applications that differ in their requirements and employ different performance indicators. As such, some SDN applications may prefer different consistency and availability configurations \cite{koponen2010onix}. Finally, two or more applications with different (possibly conflicting) requirements might be running on the controllers at the same time.
An example of an application that might prefer to lower its consistency for higher availability, as long as it maintains a certain level of performance, would be a load-balancer. The load-balancer would need to maintain information about the current load distribution in the network. However, as long as it is not creating routing loops (more in \cite{Guo201495}), it can tolerate some inconsistency in order to achieve a higher degree of availability. On the other hand, a firewall might represent an application that would not tolerate inconsistency and would prefer to be strongly consistent at the expense of being available.
We believe that distributed controllers that employ a tunable consistency model (similar to that of Apache Cassandra; see subsection \ref{sec:background:model}) are more suitable for large-scale SDN deployments that simultaneously run a myriad of heterogeneous network applications.
Onix \cite{koponen2010onix} lets the applications make their own trade-off between consistency and availability by providing them with two data-stores: (1) a strongly consistent transactional data-store, and (2) an eventually consistent (more in the next section) in-memory distribute hash table (DHT).
Furthermore, we believe that an effective strategy to handle the case of heterogeneous applications is to extend tunable consistency with an adaptive mode.
In such a mode, the controllers, given a per-application performance indicator, monitor the network behavior and adapt to the current conditions by autonomously tuning their consistency levels \cite{aslan2016adaptive}.
Adaptive distributed controllers can reduce the SDN applications development and maintenance cost by shifting the complexity of handling distribution issues out of the applications, reducing the application complexity. In addition, they can reduce the overhead of state distribution among the controllers.
\section{Background on Consistency}\label{sec:background}
In this section, we explain the consistency model used in a number of modern data-stores such as Apache Cassandra \cite{lakshman2009cassandra, lakshman2010cassandra} and Amazon DynamoDB \cite{sivasubramanian2012amazon}.
\subsection{Notations}
In distributed controllers, data are copied and stored at different controllers, such copies are known as \emph{replicas}. In this paper, we assume that no more than a copy of a certain data item will be stored at the same controller. We also use the term \emph{replicas} when referring to the machines storing the data copies.
Table \ref{tab:notation} shows the notations used throughout this paper.
\begin{table}[h!]
\caption{Notations used in this paper.}
\begin{tabularx}{.5\textwidth}{c | X}
\toprule
Symbol & Definition\\
\midrule
$M$ & the total number of nodes in a controllers cluster.\\
$N$ & the number of replicas ($N \leq M$), assumed to be set based on network policy and hence constant.\\
$R$ & the number of replicas that must confirm the read operation in order to be successful ($1 \leq R \leq N$).\\
$W$ & the number of replicas that must confirm the write operation in order to be successful ($1 \leq W \leq N$).\\
$\Phi$ & the consistency level indicator at the controller.\\
$\chi$ & the application-specific performance indicator.\\
\bottomrule
\end{tabularx}
\label{tab:notation}
\end{table}
\subsection{The Tunable Consistency Model}\label{sec:background:model}
The consistency model employed by Apache Cassandra is both an eventual and a tunable consistency model. Eventual consistency \cite{bailis2012probabilistically} is a consistency model where all replicas eventually receive the most up-to-date values after some time if no further updates occur. With tunable consistency, we refer to a property of a consistency model where the level of consistency can be manually tuned.
Cassandra allows the application to select between a number of predefined consistency levels, the most relevant ones being: (1) ONE, (2) QUORUM, and (3) ALL \cite{cassandraconfig}.
The first level `ONE' indicates that an operation is considered successful if one replica ($R = 1$) returned the most recent version in case of a read operation, or a confirmation is received from one replica ($W = 1$) in case of a write operation. This level provides a low latency and a high availability.
The second level `QUORUM' ($R + W > N$) indicates that an operation is considered a success if a quorum of replicas returned the most up-to-date version in case of a read operation, or a confirmation is received from quorum of replicas in case of a write operation. This level ensures strong consistency.
Finally, the `ALL' level indicates that an operation is considered a success if all of the replicas ($R = N$) responded and the most up-to-date version is calculated in case of a read operation, or a confirmation is received from all of the replicas ($W = N$) in case of a write operation. This level provides highest possible consistency level but the lowest availability.
For example (Fig. \ref{tunable}), for a write (or update) operation to succeed, it must be written successfully on $W$ different nodes, and for a subsequent read operation to succeed, $R$ nodes must respond and return some value.
In the first case (Fig. \ref{tunable:a}), $N=5$, $W=3$, and $R=3$.
At $t_{1}$, a write operation was requested and confirmed by three nodes (at random): $c_{1}$, $c_{2}$, and $c_{3}$, while the operation might have failed at $c_{4}$ and $c_{5}$, the overall operation is marked a success (recall $W=3$).
At $t_{2}$, a read operation was initiated and only three nodes (at random): $c_{3}$, $c_{4}$, and $c_{5}$ returned an answer.
And since $R + W > N$ then for sure one node from those that answered the read operation will hold the most up-to-date value, in this example it is node $c_{3}$.
On the other hand, in the second case (Fig. \ref{tunable:b}), $N=5$, $W=3$, and $R=2$.
At $t_{2}$, only two nodes (at random): $c_{4}$ and $c_{5}$ returned an answer to the read operation, yet the overall read operation is marked a success (recall $R=2$). Those nodes may not have the most up-to-date value. Thus, if $R + W \le N$, there is no guarantee that a read operation will return the most up-to-date value. However, after some time, if no further updates occur, all nodes will \emph{eventually} receive the most up-to-date values.
\begin{figure}
\centering
\subfloat[Case I: $W=3$, $R=3$ (strong consistency)]{
\label{tunable:a}
\resizebox{0.5\textwidth}{!}{
\input{tunable1}
}
}\\[5pt]
\subfloat[Case II: $W=3$, $R=2$ (eventual consistency)]{
\label{tunable:b}
\resizebox{0.5\textwidth}{!}{
\input{tunable2}
}
}
\caption{Tunable Consistency Model in $\mathcal{O} (1)$ P2P Distributed Datastores}
\label{tunable}
\end{figure}
As aforesaid, controllers might simultaneously be running multiple network applications, each having its own requirements.
The number of replicas ($N$) could also be application-specific, \emph{e.g.,} an application dealing with more important information would choose a higher number of replicas, whereas an application dealing with less important information would choose a smaller value for $N$.
Even though the number of replicas ($N$) is application-specific, the nodes (controllers) themselves that are responsible for maintaining such replicas are decided by a consistent hashing function \cite{lakshman2010cassandra}.
\section{Proposed Adaptation Strategy}\label{sec:proposed}
The adaptation strategy requires the collaboration of different modules of an adaptive controller (proposed in \cite{aslan2016adaptive}).
In this section, we describe some of the modules needed for realizing the adaptive controllers architecture, namely: (1) the stored procedure compiler module, (2) the tunable consistency module, and (3) the adaptation module.
\subsection{Stored Procedure Compiler Module}
Applications often require different performance indicators ($\chi$). This module is needed to allow applications to instruct the controllers how to calculate their performance indicators (e.g., the standard deviation between the loads in the case of a load-balancing application). Moreover, in case the applications run on machines physically separate from the controllers, the task of calculating the performance indicators is shifted to the controllers to reduce delays caused by the application-controller communication. An application installs at the controller a stored procedure, similar to those used in database systems \cite{iso:sql}, which can be executed by the stored procedure compiler module in order to calculate the value of the application-specific performance indicator ($\chi$) when needed.
We assume that security measures are taken to prevent exploiting the use of the stored procedure compiler module and to ensure safe execution of the stored procedures at the controllers. The security aspects of the controllers are outside the scope of this paper.
\subsection{Tunable Consistency Module}\label{sec:proposed:tcm}
This module provides the adaptation module with a configurable consistency level parameter ($\Phi$) that can be tuned in order to change the level of consistency.
\noindent \textbf{Consistency Level Parameter.}
As aforesaid, the adaptation module requires a parameter that can be tuned in order to change the consistency level.
In the proposed strategy, we adopt the tunable consistency model discussed in section \ref{sec:background:model} as a base for our tunable consistency module.
Such model provides $R$, $W$, and $N$ as configurable parameters. However, mapping those parameters to a performance indicator ($\chi$) could be complex for the adaptation module.
$R$, $W$, and $N$ are parameters specific to this particular (Cassandra-like) consistency model. Therefore, exposing $R$, $W$, and $N$ to the adaptation module would lower the modularity of the system, \emph{i.e.,} it would be harder to replace the tunable consistency module with another one without having to modify the adaptation module.
Hence, the tunable consistency module provides the adaptation module with a single tunable parameter ($\Phi$) that directly relates to the consistency level, and the tunable consistency module is responsible for mapping that parameter ($\Phi$) into its internal specific parameters (\emph{e.g.,} $R$, $W$, and $N$).
\noindent \textbf{Measuring the Consistency Level.}
We chose the probability that a read returns the most recent version as the consistency level indicator ($\Phi$) (shown in (\ref{eqn:Phi})). In the case of strong consistency ($R + W > N$), $\Phi = 1$; otherwise, $\Phi = 1 - p_{s}$, where $p_{s}$ (shown in (\ref{eqn:ps})) is the probability that the read quorum does not include the last up-to-date version \cite{bailis2012probabilistically}. Figure \ref{fig:Phi} shows $\Phi$ versus $R$ and $W$ in the case of $N = 20$. $R$, $W$ and $N$ are positive integer values ($\in Z^{+}$), hence $\Phi(R, W, N)$ is a discrete function.
\begin{align}
p_{s} = \frac{\dbinom{N - W}{R}}{\dbinom{N}{R}} && \cite{bailis2012probabilistically}
\label{eqn:ps}
\end{align}
\begin{equation}
\Phi (R, W, N) =
\begin{cases}
1 - p_{s} & R + W \le N\\
1 & R + W > N
\end{cases}
\label{eqn:Phi}
\end{equation}
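For concreteness, the consistency level indicator can be computed in a few lines of Python (a sketch of ours; \texttt{math.comb} requires Python~3.8+):
\begin{verbatim}
from math import comb

def phi(R, W, N):
    """Probability that a read returns the most recent version."""
    if R + W > N:            # overlapping quorums: strong consistency
        return 1.0
    return 1.0 - comb(N - W, R) / comb(N, R)

print(phi(3, 3, 5))          # 1.0  (Case I of the example above)
print(phi(2, 3, 5))          # 0.9  (Case II: a read may miss the write)
\end{verbatim}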
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[%
colorbar,
colorbar style={
width=0.15cm
},
width=4.5cm, height=4.5cm,
scale only axis,
xlabel={$R$}, ylabel={$W$}, zlabel={$\Phi$},
xmin=1, xmax=20, xmajorgrids,
ymin=1, ymax=20, ymajorgrids,
zmin=0, zmax=1, zmajorgrids,
axis lines=left,
grid=major
]
\addplot3+[only marks,scatter] file {consistency_n_20.dat};
\end{axis}
\end{tikzpicture}
\caption{Consistency Level $\Phi$ at $N=20$.}
\label{fig:Phi}
\end{figure}
\noindent \textbf{Controlling the Consistency Level.}
Once the adaptation module chooses a certain value ($\phi$) for ($\Phi$) that supposedly satisfies the application-specific performance indicator ($\chi$), the tunable consistency module needs to find values for $R$ and $W$ that give ($\phi^{'} = \Phi(R, W, N)$), where ($\phi^{'}$) is as close as possible to the given $\phi$ (recall that \{$R$, $W$, $N$\} $\in Z^{+}$, and $\Phi(R, W, N)$ is a discrete function). We assume $N$ is constant per application and is set as a system-wide policy by the network administrator. In (\ref{eqn:prop-proof}), we prove that swapping the values of $R$ and $W$ yields the same value for $\Phi$. This property helps in reducing the search space.
\begin{align*}
\Phi(R, W, N) &= 1 - \frac{\frac{(N - W)!}{(N - W - R)! \times R!}}{\frac{N!}{(N - R)! \times R!}} \nonumber && {\scriptstyle(Case: R + W \le N)} \\
&= 1 -\frac{(N - W)! \times (N - R)!}{(N - W - R)! \times N!} \nonumber
\end{align*}
\begin{align}
\Phi(W, R, N) &= 1 - \frac{(N - R)! \times (N - W)!}{(N - R - W)! \times N!} \nonumber \\
&= \Phi(R, W, N) && \blacksquare
\label{eqn:prop-proof}
\end{align}
The values for $R$ and $W$ that gives the nearest value to a certain value $\phi$ for $\Phi(R, W, N)$ could be found using (\ref{eqn:phi2rw}). A simple algorithm is shown in Algorithm \ref{algo:phi2rw} for finding the values of $R$ and $W$.
\begin{equation}
<R, W> = \argmin_{i,j} \Vert \Phi(i, j, N) - \phi \Vert
\label{eqn:phi2rw}
\end{equation}
\begin{algorithm}[h]
\small
\DontPrintSemicolon
\KwData{$N$, number of replicas}
\Begin{
min $\leftarrow$ $\infty$\;
\For{$i \in [1, N)$}{
\For{$j \in [i, N - i]$}{
dist $\leftarrow$ $\Vert\Phi(i, j, N) - \phi\Vert$\;
\If{dist < min}{
min $\leftarrow$ dist\;
R $\leftarrow$ i\;
W $\leftarrow$ j\;
}
}
}
}
\caption{Given a certain value ($\phi$) for ($\Phi$), find appropriate values for $R$ and $W$ in $\mathcal{O}(N^{2})$ time.}
\label{algo:phi2rw}
\end{algorithm}
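A runnable Python rendering of Algorithm~\ref{algo:phi2rw} (a sketch of ours, restating the \texttt{phi} function from the sketch following (\ref{eqn:Phi})) looks as follows:
\begin{verbatim}
from math import comb

def phi(R, W, N):
    return 1.0 if R + W > N else 1.0 - comb(N - W, R)/comb(N, R)

def nearest_rw(phi_target, N):
    # Brute-force O(N^2) scan; by the symmetry phi(R,W,N) = phi(W,R,N)
    # proved earlier, it suffices to consider pairs with j >= i.
    best, rw = float("inf"), None
    for i in range(1, N):
        for j in range(i, N - i + 1):
            d = abs(phi(i, j, N) - phi_target)
            if d < best:
                best, rw = d, (i, j)
    return rw

print(nearest_rw(0.9, 20))   # an (R, W) pair whose phi is closest to 0.9
\end{verbatim}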
\subsection{Adaptation Module}\label{sec:proposed:am}
The adaptation module is responsible for selecting an appropriate configuration (\emph{i.e.,} consistency level ($\Phi$)) for the tunable consistency module given a certain performance level ($\chi$) which is calculated with the help of the stored procedure compiler module. In this section, we show how clustering can be used by the adaptation module in order to map a certain performance level ($\chi$) into a corresponding consistency level ($\Phi$).
\noindent \textbf{Monitoring.}
In order for the adaptation module to function properly, it needs to continuously collect sample data about the application performance and configuration of the tunable consistency module. In particular, it collects different values for the consistency level indicator ($\Phi$) and notes the corresponding performance level ($\chi$), then uses these values to update the clustering technique.
\noindent \textbf{Clustering.}
We use clustering in order to find a mapping between the application performance indicator ($\chi$) and the consistency level ($\Phi$).
First, the collected data is clustered by the application performance indicator ($\chi$) and each center will be a consistency level ($\Phi$).
Next, when a specific level of application performance is needed, the nearest cluster to the required performance level is located and the value of the associated consistency level ($\Phi$) is used to select appropriate values for $R$ and $W$. We opt for online incremental clustering techniques \cite{beringer2006online, haveliwala2000scalable}. Although such techniques can yield less accurate results than offline techniques, they scale better in terms of storage and do not require re-clustering with every new measurement. We tested two online techniques: (1) Sequential K-means, and (2) Incremental K-means.
\noindent \textbf{Re-Clustering.}
Oftentimes, the adaptation module may need to recalculate the cluster heads.
This is needed when there is a change in the network that affects the accuracy of the adaptation module in finding the closest configuration for a given performance level.
\noindent \textbf{Sequential K-means Clustering.}
The first technique that we tested was sequential K-means clustering (shown in Algorithm \ref{algo:kmeans-seq}). Algorithm \ref{algo:kmeans-seq} is our adaptation of the ``sequential K-means'' algorithm presented in \cite{ackerman2014incremental}. This technique requires the number of clusters to be specified up front. The first $N_{c}$ measurements are assigned to the $N_{c}$ clusters; every subsequent measurement is assigned to the nearest cluster, whose mean is then updated.
\begin{algorithm}[h]
\small
\DontPrintSemicolon
\SetKw{KwGoTo}{goto}
\KwData{$\chi_{k}$, $k^{th}$ application's specific performance indicator}
\KwData{$\Phi_{k}$, $k^{th}$ consistency level indicator}
\KwData{$N_{c}$, number of clusters}
\KwData{$N_{p}$, number of data points per cluster}
\KwData{$N_{t}$, total number of data points}
\SetKwFunction{nearest}{nearest}
\Begin{
$N_{t}$ $\leftarrow$ $0$\;
\If {$N_{t}$ $<$ $N_{c}$} {
$C_{k}.\chi$ $\leftarrow$ $\chi_{k}$\;
$C_{k}.\Phi$ $\leftarrow$ $\Phi_{k}$\;
$C_{k}.N_{p}$ $\leftarrow$ $C_{k}.N_{p}$ + $1$\;
}
\Else {
$i_{c}$ = \nearest($\chi_{k}$, $C$)\;
$C_{i_{c}}.\chi$ $\leftarrow$ $(C_{i_{c}}.\chi$ * $C_{i_{c}}.N_{p}) + \chi_{k}$\;
$C_{i_{c}}.\Phi$ $\leftarrow$ $(C_{i_{c}}.\Phi$ * $C_{i_{c}}.N_{p}) + \Phi_{k}$\;
$C_{i_{c}}.N_{p}$ $\leftarrow$ $C_{i_{c}}.N_{p}$ + 1\;
$C_{i_{c}}.\chi$ $\leftarrow$ $C_{i_{c}}.\chi$ / $C_{i_{c}}.N_{p}$\;
$C_{i_{c}}.\Phi$ $\leftarrow$ $C_{i_{c}}.\Phi$ / $C_{i_{c}}.N_{p}$\;
}
$N_{t}$ $\leftarrow$ $N_{t} + 1$\;
\SetKwProg{nearest}{Function}{}{}
\nearest{nearest}{
\KwData{$d$, datapoint}
\KwData{C$_{N}$, set of $N$ clusters}
\Begin{
idx $\leftarrow$ $\emptyset$; min $\leftarrow$ $\infty$\;
\For{$i \in N$}{
dist $\leftarrow$ $\Vert d.\chi - C_{i}.\chi \Vert$\;
\If{dist $<$ min}{
min $\leftarrow$ dist\;
idx $\leftarrow$ i\;
}
}
\KwRet idx\;
}
}
}
\caption{Using Sequential K-means Clustering at the Adaptation Module.}
\label{algo:kmeans-seq}
\end{algorithm}
\noindent \textbf{Incremental K-means Clustering.}
The second technique that we tested was incremental K-means clustering (shown in Algorithm \ref{algo:kmeans-incr}). We adopt the ``incremental clustering'' algorithm presented in \cite{rokach2005clustering} as a base for Algorithm \ref{algo:kmeans-incr}. This technique does not require the number of clusters to be specified, as it uses a dynamic number of clusters. Every new measurement is assigned to the nearest cluster if it is close enough (based on a threshold); if no such cluster is found, a new cluster is created that includes this measurement. An absolute threshold would depend on the scale of the performance indicator $\chi$; thus, we use the relative error as the distance measure to allow the use of a single threshold value for different performance indicators.
\begin{algorithm}[h]
\small
\DontPrintSemicolon
\SetKw{KwGoTo}{goto}
\KwData{$\chi_{k}$, $k^{th}$ application's specific performance indicator}
\KwData{$\Phi_{k}$, $k^{th}$ consistency level indicator}
\KwData{$N_{c}$, number of clusters}
\KwData{$N_{p}$, number of data points per cluster}
\KwData{$\tau$, threshold}
\SetKwFunction{nearest}{nearest}
\Begin{
\If {$N_{c}$ $>$ $0$} {
$i_{c}$ = \nearest($\chi_{k}$, $C$)
\If {$\Vert C_{i_{c}}.\chi - \chi_{k} \Vert / C_{i_{c}}.\chi < \tau$} {
$C_{i_{c}}.\chi$ $\leftarrow$ $(C_{i_{c}}.\chi$ * $C_{i_{c}}.N_{p}) + \chi_{k}$\;
$C_{i_{c}}.\Phi$ $\leftarrow$ $(C_{i_{c}}.\Phi$ * $C_{i_{c}}.N_{p}) + \Phi_{k}$\;
$C_{i_{c}}.N_{p}$ $\leftarrow$ $C_{i_{c}}.N_{p}$ + 1\;
$C_{i_{c}}.\chi$ $\leftarrow$ $C_{i_{c}}.\chi$ / $C_{i_{c}}.N_{p}$\;
$C_{i_{c}}.\Phi$ $\leftarrow$ $C_{i_{c}}.\Phi$ / $C_{i_{c}}.N_{p}$\;
}
\Else {
$C$.create\_new\_cluster($\chi_{k}$, $\Phi_{k}$)\;
}
}
\Else {
$C$.create\_new\_cluster($\chi_{k}$, $\Phi_{k}$)\;
}
\SetKwProg{nearest}{Function}{}{}
\nearest{nearest}{
\KwData{$d$, datapoint}
\KwData{C$_{N}$, set of $N$ clusters}
\Begin{
idx $\leftarrow$ $\emptyset$; min $\leftarrow$ $\infty$\;
\For{$i \in N$}{
dist $\leftarrow$ $\Vert d.\chi - C_{i}.\chi \Vert$\;
\If{dist < min}{
min $\leftarrow$ dist\;
idx $\leftarrow$ i\;
}
}
\KwRet idx\;
}
}
}
\caption{Using Incremental K-means Clustering at the Adaptation Module.}
\label{algo:kmeans-incr}
\end{algorithm}
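Similarly, a Python sketch (ours) of the update step of Algorithm~\ref{algo:kmeans-incr}, using the relative error as the distance measure:
\begin{verbatim}
def incr_kmeans_update(clusters, chi, phi, tau):
    # clusters: list of [mean_chi, mean_phi, count]
    if clusters:
        c = min(clusters, key=lambda cl: abs(cl[0] - chi))
        if c[0] != 0 and abs(c[0] - chi)/abs(c[0]) < tau:
            c[0] = (c[0]*c[2] + chi)/(c[2] + 1)
            c[1] = (c[1]*c[2] + phi)/(c[2] + 1)
            c[2] += 1
            return
    clusters.append([chi, phi, 1])      # otherwise open a new cluster
\end{verbatim}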
\noindent \textbf{Latency.}
In some cases, a given performance level can be satisfied by a set of different $R$ and $W$ pairs.
Even though selecting any of them has no impact on the performance, it can have an impact on the latency.
The tunable consistency module monitors the frequency of reads and writes for each application and exploits the property ($\Phi(W, R, N) = \Phi(R, W, N)$) proved in (\ref{eqn:prop-proof}).
If the application tends to do more reads, then the tunable consistency module sets $R$ to be the smaller of the two values in order to reduce the read latency; if the application tends to do more writes, then it sets $W$ to be the smaller value in order to reduce the write latency.
\subsection{Application-Controller Interaction}
\begin{figure}[h]
\centering
\resizebox{0.45\textwidth}{0.5\textheight}{
\begin{sequencediagram}
\newinst{app}{:App}
\newinst{spcm}{:SPCM}
\newinst{am}{:AM}
\newinst{tcm}{:TCM}
\begin{messcall}{app}{$send(proc_{\chi})$}{spcm}{}
\end{messcall}
\begin{sdblock}{Monitoring}{}
\begin{call}{am}{$eval \chi$}{spcm}{$x = proc_{\chi}()$}
\end{call}
\begin{call}{am}{$eval \Phi$}{tcm}{$\phi = \Phi(R, W, N)$}
\end{call}
\begin{callself}{am}{$learn(x, \phi)$}{}
\end{callself}
\end{sdblock}
\begin{messcall}{app}{$request(\bar{\chi})$}{am}{}
\end{messcall}
\begin{callself}{am}{$\phi = lookup(\bar{\chi})$}{}
\end{callself}
\begin{messcall}{am}{$tune(\phi)$}{tcm}{}
\end{messcall}
\begin{callself}{tcm}{$<R, W> = calc(\phi)$}{}
\end{callself}
\end{sequencediagram}
}
\caption{The Sequence Diagram}
\label{fig:seq-diag}
\end{figure}
Figure \ref{fig:seq-diag} shows a sequence diagram for the proposed adaptation strategy. It shows the interaction between the application (App) and the various controller modules: stored-procedure compiler module (SPCM), adaptation module (AM), and the tunable consistency module (TCM).
Initially, the application creates a stored procedure ($proc_{\chi}$) that is responsible for calculating the application-specific performance indicator ($\chi$), and then sends that procedure to the controller where it gets executed by the stored-procedure compiler module when needed.
Next, the controller monitors and gathers samples ($x$ and $\phi$) for the application-specific performance indicator ($\chi$) and the corresponding consistency level ($\Phi$), respectively. Then for each sample, the adaptation module invokes the clustering algorithm ($learn(x, \phi)$).
Finally, when the application notifies ($request(\bar{\chi})$) the controller with a desired value ($\bar{\chi}$) for the performance indicator ($\chi$), the adaptation module uses the clustering algorithm to find an estimate ($lookup(\bar{\chi})$) for a corresponding value ($\phi$) for the consistency level indicator ($\Phi$). Then, the adaptation module notifies ($tune(\phi)$) the tunable consistency module with this value, which in-turn calculates ($calc(\phi)$) the module internal parameters ($R$ and $W$) and applies such configuration.
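Putting the pieces together, the interaction above can be emulated end-to-end by combining the sketches given earlier (\texttt{phi}, \texttt{nearest\_rw}, and \texttt{seq\_kmeans\_update}); the linear relation between $\chi$ and $\Phi$ below is a synthetic assumption of ours, in the spirit of the evaluation in the next section:
\begin{verbatim}
import random
random.seed(0)

clusters, N = [], 20
for _ in range(1000):               # monitoring/learning phase
    phi_val = random.random()
    chi_val = 2*phi_val + 1         # assumed linear chi(phi)
    seq_kmeans_update(clusters, chi_val, phi_val, 50)

def on_request(chi_bar):            # request(chi_bar) from the app
    c = min(clusters, key=lambda cl: abs(cl[0] - chi_bar))  # lookup
    return nearest_rw(c[1], N)      # tune(phi): derive R and W

print(on_request(2.0))              # expects a phi near 0.5
\end{verbatim}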
\section{Evaluation}\label{sec:results}
In order to evaluate the validity of the proposed adaptation strategy, we evaluated the effectiveness of the clustering techniques (sequential and incremental) in mapping performance indicators ($\chi$) to consistency levels ($\Phi$). In our evaluation, we assumed that the relationship between the application-specific performance indicator ($\chi$) and the consistency level indicator ($\Phi$) is one of the following relations: (1) linear ($\chi = A\Phi + C$), (2) quadratic ($\chi = A\Phi^2 + B\Phi + C$), (3) cubic ($\chi = A\Phi^3 + B\Phi^2 + C\Phi + D$), or (4) logarithmic ($\chi = A.log_{10}(\Phi) + C$). $A$, $B$, $C$, and $D$ are constants. We used a sample of 1000 uniform random numbers to bootstrap the algorithms. Then, we chose 100 arbitrary uniform random test values for $\chi$, let the adaptation module figure out appropriate values for $\Phi$ that satisfy the given values for $\chi$, and calculated the RMSE between the given $\chi$ values and the ones calculated using the values of $\Phi$ returned by the adaptation strategy.
Figure \ref{fig:seq-rmse-vs-clusters} shows the RMSE of the sequential K-means technique (Algorithm \ref{algo:kmeans-seq}) versus the number of clusters. The results show that, in the cases we tested, with a reasonable number of clusters ($\geq$ 50), a plausible mapping (low RMSE) can be estimated between the tested application performance indicators ($\chi$) and the consistency level indicator ($\Phi$).
Figure \ref{fig:incr-rmse-vs-threshold} shows the RMSE of the incremental K-means technique (Algorithm \ref{algo:kmeans-incr}) versus the threshold. The results show that, in the cases we tested, a plausible mapping (low RMSE) can be estimated with a reasonable number of clusters ($\geq$ 50) by using a relatively small threshold ($\simeq$ 0.01).
The results also indicate that, even though online clustering techniques can yield less accurate results than offline techniques, they were sufficient in the cases we tested once a reasonable number of clusters was used.
\begin{figure}[h!]
\centering
\resizebox{!}{0.24\textheight}{
\begin{tikzpicture}
\begin{axis}[
xlabel=Number of Clusters Heads,
ylabel=RMSE,
cycle list name=exotic
]
\legend{Linear, Quadratic, Cubic, Logarithmic}
\addplot+[mark=star] file {linear.dat};
\addplot+[mark=o] file {quadratic.dat};
\addplot+[mark=square] file {cubic.dat};
\addplot+[mark=diamond] file {log.dat};
\end{axis}
\end{tikzpicture}
}
\caption{Root Mean Square Error (RMSE) vs Number of Cluster Heads.}
\label{fig:seq-rmse-vs-clusters}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{!}{0.15\textheight}{
\subfloat[RMSE vs Threshold]{
\begin{tikzpicture}
\begin{axis}[
xlabel=Threshold,
ylabel=RMSE,
cycle list name=exotic,
legend style={at={(0.02,0.98)},anchor=north west}
]
\legend{Linear, Quadratic, Cubic, Logarithmic}
\addplot+[mark=star] table[x index=0, y index=2] {incr_linear.dat};
\addplot+[mark=o] table[x index=0, y index=2] {incr_quadratic.dat};
\addplot+[mark=square] table[x index=0, y index=2] {incr_cubic.dat};
\addplot+[mark=diamond] table[x index=0, y index=2] {incr_log.dat};
\end{axis}
\end{tikzpicture}
}
\subfloat[Number of Cluster Heads vs Threshold]{
\begin{tikzpicture}
\begin{axis}[
xlabel=Threshold,
ylabel=Number of Cluster Heads,
cycle list name=exotic
]
\legend{Linear, Quadratic, Cubic, Logarithmic}
\addplot+[mark=star] table[x index=0, y index=1] {incr_linear.dat};
\addplot+[mark=o] table[x index=0, y index=1] {incr_quadratic.dat};
\addplot+[mark=square] table[x index=0, y index=1] {incr_cubic.dat};
\addplot+[mark=diamond] table[x index=0, y index=1] {incr_log.dat};
\end{axis}
\end{tikzpicture}
}
}
\caption{Root Mean Square Error (RMSE) vs Threshold.}
\label{fig:incr-rmse-vs-threshold}
\end{figure}
\section{Conclusion and Future Work}\label{sec:conclusion}
In this paper, we examined the feasibility of using adaptive controllers that are built on top of tunable consistency models similar to that of Apache Cassandra.
We presented an adaptation strategy that selects feasible values for the consistency level indicator ($\Phi$) that satisfies a given application performance indicator ($\chi$).
We employed two online clustering techniques (sequential and incremental K-means) in order to find suitable mapping between $\chi$ and $\Phi$.
In the cases that we tested, our results showed that for sequential K-means, with a reasonable number of clusters ($\geq$ 50), a plausible mapping (low RMSE) could be estimated between the application performance indicators ($\chi$) and the consistency level indicator ($\Phi$). For incremental K-means, the results also showed that a plausible mapping (low RMSE) could be estimated with a small threshold ($\simeq$ 0.01), which yields a similar number of clusters ($\geq$ 50).
In the future, we plan to evaluate the validity and effectiveness of the proposed consistency adaptation strategy using an implementation of an SDN application running on top of a cluster of a distributed controllers.
\section{Acknowledgments}
The second author acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through the NSERC Discovery Grant program.
\section{Introduction: Convexity and comparative convexity}
\subsection{Convexity: Basic definitions}
To start with, let us recall some elementary definitions of convexity, and then introduce the broader notion of {\em comparative convexity}~\cite{ConvexFunction-2006,Hormander2007}.
A set $\calX\subset\bbR^d$ is {\em convex} if and only if (iff):
\begin{equation}
\forall p,q\in\calX, \quad \forall\lambda\in [0,1],\quad (1-\lambda) p+\lambda q\in\calX.
\end{equation}
Geometrically speaking, this means that the line segment $[pq]$ is fully contained inside $\calX$.
A real-valued continuous function $F$ on a convex domain $\calX$ is said {\em convex} iff:
\begin{equation}\label{eq:Jineq}
\forall p,q\in\calX,\quad \forall\lambda\in [0,1],\quad F((1-\lambda) p+\lambda q) \leq (1-\lambda) F(p)+\lambda F(q).
\end{equation}
A convex function is necessarily continuous~\cite{ConvexFunction-2006}.
It is {\em strictly convex} iff:
\begin{equation}\label{eq:Jsineq}
\forall p,q\in\calX,\quad p\not=q,\quad \forall\lambda\in (0,1),\quad F(\lambda p+(1-\lambda)q) < \lambda F(p)+(1-\lambda)F(q).
\end{equation}
Historically, Jensen~\cite{Jensen-1905,Jensen-1906} defined in 1905 the notion of convexity using the
{\em midpoint convexity} property (see (1) of~\cite{Jensen-1906}, page 176):
\begin{equation}\label{eq:Jmidineq}
F(p)+F(q)\geq 2 F\left(\frac{p+q}{2}\right).
\end{equation}
A function satisfying this Jensen convexity inequality property may not be continuous~\cite{Maksa-2015}.
But it turns out that for a {\em continuous function} $F$, the midpoint convexity implies the general convexity definition of Eq.~\ref{eq:Jineq}, see~\cite{ConvexFunction-2006}. A continuous and twice differentiable real-valued function $F$ is strictly convex if $F''(x)>0$ (a sufficient but not necessary condition: for example, $F(x)=x^4$ is strictly convex although $F''(0)=0$).
The well-known generalized criterion for multivariate convex functions consists in checking the positive-definiteness of the Hessian of the function, $\nabla^2 F\succ 0$.
This characterization is due to Alexandrov~\cite{Alexandrov-1939} in 1939.
Let $\calC$ denote the class of strictly convex real-valued functions.
When function $F$ is convex, its {\em epigraph} $\calF=\{(x,y)\ :\ x\in\calX, y\in\bbR,\ F(x)\leq y\}$ is a {\em convex object} of $\calX\times \bbR$.
We can interpret geometrically the convexity of Eq.~\ref{eq:Jineq} by noticing that the {\em chord line segment} linking $(p,F(p))$ to $(q,F(q))$ is above the function plot $(x,F(x))$.
Thus Inequality~\ref{eq:Jineq} expresses that, at the intermediate point $(1-\lambda)p+\lambda q$, the function value lies below the chord value taken with the {\em same} weight:
\begin{equation}\label{eq:Jineqg}
\forall p,q\in\calX,\quad \forall\lambda\in [0,1],\quad F((1-\lambda) p+\lambda q) \leq (1-\lambda) F(p)+\lambda F(q).
\end{equation}
Observe that both sides report a weighted arithmetic mean: one in the domain and one in the codomain of $F$.
\subsection{Comparative convexity}
The notion of convexity can be generalized by observing that in Eq.~\ref{eq:Jmidineq}, rewritten as $\frac{F(p)+F(q)}{2}\geq F\left(\frac{p+q}{2}\right)$, {\em two arithmetic means} are used: One in the {\em domain} of the function ({\it ie.}, $\frac{p+q}{2}$), and one in the {\em codomain} of the function ({\it ie.}, $\frac{F(p)+F(q)}{2}$).
The branch of {\em comparative convexity}~\cite{ConvexFunction-2006} studies classes of $(M,N)$-convex functions $F$ that satisfies the following generalized midpoint convexity inequality:
\begin{equation}\label{eq:JD}
F(M(p,q))\leq N(F(p),F(q)),\quad \forall p,q\in\calX,
\end{equation}
where $M$ and $N$ are two abstract mean functions defined on the domain $\calX$ and codomain $\bbR$, respectively.
That is, the field of convexity can be defined informally as the study of function behaviors under the actions of means.
This generalization of convexity was first studied by Aumann~\cite{Aumann-1933} in 1933.
Let $\calC_{M,N}$ denote the class of strictly $(M,N)$-convex functions.
There are many kinds of means~\cite{Bullen-2013}.
For example, the well-known {\em Pythagorean means} for $p,q\in\bbR_{++}=(0,\infty)$ are:
\begin{itemize}
\item the {\em arithmetic mean} (A): $A(p,q)=\frac{p+q}{2}$,
\item the {\em geometric mean} (G): $G(p,q)=\sqrt{pq}$, and
\item the {\em harmonic mean} (H): $H(p,q)=\frac{2}{\frac{1}{p}+\frac{1}{q}}=\frac{2pq}{p+q}$.
\end{itemize}
Thus comparative convexity generalizes the notion of {\em ordinary} convexity that is obtained by choosing $M(x,y)=N(x,y)=A(x,y)$, the arithmetic mean.
Notice that it follows from the Arithmetic Mean-Geometric Mean (AM-GM) inequality:
\begin{equation}
\forall p,q\in (0,\infty), \lambda \in [0,1], p^{1-\lambda}q^\lambda \leq (1-\lambda)p + \lambda q,
\end{equation}
that $(A,G)$-convexity (commonly called, log-convexity) implies $(A,A)$-convexity, but not the converse.
Indeed, by definition, $F\in \calC_{A,G}$ satisfies the inequality $F(\frac{p+q}{2})\leq \sqrt{F(p)F(q)}$,
and the AM-GM inequality yields $\sqrt{F(p)F(q)}\leq \frac{F(p)+F(q)}{2}$.
Thus we have by transitivity $F\left(\frac{p+q}{2}\right)\leq \frac{F(p)+F(q)}{2}$;
that is, $F$ is midpoint convex, hence (being continuous) ordinary convex: $F\in\calC$.
Therefore the $(A,G)$-convex functions are a proper subset of the ordinary convex functions: $\calC_{A,G}\subset \calC$.
Similarly, using the Arithmetic-Geometric-Harmonic (AGH) inequalities $A(x,y)\geq G(x,y)\geq H(x,y)$ (with equality iff $x=y$), we have the following function class inclusion relationship: $\calC_{A,H}\subset \calC_{A,G} \subset \calC$.
\subsection{Abstract means and barycenters}
An {\em abstract mean} $M(p,q)$ aggregates two values to produce an intermediate quantity that satisfies the {\em innerness property}~\cite{Bullen-2013}:
\begin{equation}
\min\{p,q\} \leq M(p,q) \leq \max\{p,q\}.
\end{equation}
To illustrate the richness of abstract bivariate means, let us describe two generic constructions of mean families:
\begin{description}
\item[Quasi-arithmetic means.]
The {\em quasi-arithmetic mean} is defined for a continuous strictly increasing function $f:I\subset\bbR \rightarrow J\subset\bbR$ as:
\begin{equation}
M_f(p,q) = f^{-1}\left( \frac{f(p)+f(q)}{2} \right).
\end{equation}
These means are also called Kolmogorov-Nagumo-de Finetti means~\cite{Kolmogorov-1930,Nagumo-1930,deFinetti-1931}.
Without loss of generality, we assume strictly increasing functions instead of monotonic functions since $M_{-f}=M_f$.
Indeed, $M_{-f}(p,q)=(-f)^{-1}(-f(M_f(p,q)))$ and $(-f)^{-1} \circ (-f)=\idf$, the identity function.
By choosing $f(x)=x$, $f(x)=\log x$ or $f(x)=\frac{1}{x}$, we obtain the Pythagorean arithmetic, geometric, and harmonic means, respectively.
Another family of quasi-arithmetic means are the {\em power means} also called {\em H\"older means}~\cite{Holder-1889}:
\begin{equation}
P_\delta(x,y)=\left(\frac{x^\delta+y^\delta}{2}\right)^{\frac{1}{\delta}},
\end{equation}
They are obtained for $f(x)=x^\delta$ for $\delta\not =0$ with $I=J=(0,\infty)$, and include in the limit cases the maximum and minimum values:
$\lim_{\delta\rightarrow\infty} P_\delta(a,b)=\max\{a,b\}$ and $\lim_{\delta\rightarrow -\infty} P_\delta(a,b)=\min\{a,b\}$.
The harmonic mean is obtained for $\delta=-1$: $H=P_{-1}$, and the quadratic mean $Q(p,q)=\sqrt{\frac{p^2+q^2}{2}}=P_2$ for $\delta=2$.
To get a smooth family of H\"older means, we define $P_0(x,y)=\sqrt{xy}$, the geometric mean, for $\delta=0$.
The power means are provably the only {\em homogeneous} quasi-arithmetic means: $M_\delta(\lambda a,\lambda b)=\lambda M_\delta(a,b)$ for any $\lambda\geq 0$, see Proposition 3 of~\cite{Pasteczka-2015}.
We refer the Reader to Appendix~\ref{sec:aqam} for an axiomatization of these quasi-arithmetic means due to Kolmogorov~\cite{Kolmogorov-1930} in 1930, and an extension to define {\em quasi-arithmetic expected values} of a random variable.
\item[Lagrange means.]
Lagrange means~\cite{LagrangianMeans-1998} (also termed Lagrangean means) are mean values derived from the mean value theorem.
Assume without loss of generality that $p < q$ so that the mean $m\in [p,q]$.
From the mean value theorem, we have for a differentiable function $f$:
\begin{equation}
\exists \lambda \in [p,q] : f'(\lambda) = \frac{f(q)-f(p)}{q-p}.
\end{equation}
Thus when $f'$ is a strictly monotonic function, its inverse function $(f')^{-1}$ is well-defined, and the unique mean value $\lambda\in [p,q]$ can be defined as:
\begin{equation}
L_f(p,q)= \lambda = (f')^{-1}\left( \frac{f(q)-f(p)}{q-p} \right).
\end{equation}
For example, letting $f(x)=\log(x)$ and $f'(x)=(f')^{-1}(x)=\frac{1}{x}$, we recover the {\em logarithmic mean} (L), that is {\em not} a quasi-arithmetic mean:
$$
L(p,q)
=
\begin{cases}
0 & \text{if } p=0 \text{ or } q=0 ,\\
p & \text{if } p=q ,\\
\frac{q - p}{\log q - \log p} & \text{otherwise.}
\end{cases}
$$
The logarithmic mean is bounded below by the geometric mean and above by the arithmetic mean:
$G(p,q)\leq L(p,q)\leq A(p,q)$.
\end{description}
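As a small numerical illustration of the two constructions just described (a minimal Python sketch with names of our own choosing), the following snippet evaluates a few quasi-arithmetic means and the Lagrangean logarithmic mean, and checks the innerness property together with the bounds $G\leq L\leq A$:
\begin{verbatim}
import math

def quasi_arithmetic(f, finv, p, q, alpha=0.5):
    # M_f(p,q) = f^{-1}((1-alpha) f(p) + alpha f(q))
    return finv((1 - alpha) * f(p) + alpha * f(q))

def logarithmic_mean(p, q):
    # Lagrange mean L_f for f = log
    return p if p == q else (q - p) / (math.log(q) - math.log(p))

p, q = 2.0, 8.0
A = quasi_arithmetic(lambda x: x, lambda x: x, p, q)          # arithmetic
G = quasi_arithmetic(math.log, math.exp, p, q)                # geometric
H = quasi_arithmetic(lambda x: 1 / x, lambda x: 1 / x, p, q)  # harmonic
L = logarithmic_mean(p, q)
assert min(p, q) <= H <= G <= L <= A <= max(p, q)
print(H, G, L, A)    # 3.2  4.0  4.328...  5.0
\end{verbatim}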
Both quasi-arithmetic and Lagrange mean generators are defined up to an affine term $ax+b$ with $a\not =0$.
Moreover, the intersection of the class of quasi-arithmetic means with the Lagrangean means has been fully characterized in~\cite{Pales-2011}, and include the arithmetic mean $A$.
In general, a mean is {\em strict} when $M(p,q)\in (p,q)$ for $p\not =q$, and {\em symmetric} when $M(p,q)=M(q,p)$.
Yet another interesting family of means are the {\em Stolarsky means} (S).
The Stolarsky means are neither quasi-arithmetic means nor Lagrange mean-value means, and are defined as follows:
\begin{equation}
S_p(x,y)= \left(\frac{x^p-y^p}{p(x-y)}\right)^{\frac{1}{p-1}},\quad p\not\in \{0,1\}.
\end{equation}
In limit cases, the Stolarsky family of means yields the logarithmic mean (L) when $p\rightarrow 0$ and the {\em identric mean} (I) when $p\rightarrow 1$:
\begin{equation}
I(x,y) = \left( \frac{y^y}{x^x} \right)^{\frac{1}{y-x}}.
\end{equation}
The Stolarsky means belong to the family of {\em Cauchy mean-value means}~\cite{Bullen-2013} defined for two positive differentiable and strictly monotonic functions $f$ and $g$ such that $\frac{f'}{g'}$ has an inverse function.
The Cauchy mean-value mean is defined by:
\begin{equation}
C_{f,g}(p,q) = \left(\frac{f'}{g'}\right)^{-1} \left( \frac{f(q)-f(p)}{g(q)-g(p)} \right), \quad q\not= p,
\end{equation}
with $C_{f,g}(p,p) = p$.
The Cauchy means can be reinterpreted as Lagrange means~\cite{weightedCauchy-2006} via the following identity:
$C_{f,g}(p,q) = g^{-1}\left(L_{f\circ g^{-1}}(g(p),g(q))\right)$.
Indeed, since $((f\circ g^{-1})(x))'=\frac{f'(g^{-1}(x))}{g'(g^{-1}(x))}$, we have $\left((f\circ g^{-1})'\right)^{-1}=g\circ \left(\frac{f'}{g'}\right)^{-1}$, and therefore:
\begin{eqnarray}
L_{f\circ g^{-1}}(g(p),g(q)) &=& \left((f\circ g^{-1})'\right)^{-1}\left( \frac{f(q)- f(p)}{g(q)-g(p)}\right),\\
&=& g\left(\left(\frac{f'}{g'}\right)^{-1}\left( \frac{f(q)-f(p)}{g(q)-g(p)} \right)\right) = g(C_{f,g}(p,q)).
\end{eqnarray}
More generally, we may weight the values and consider {\em barycentric means} $M(p,q;1-\alpha,\alpha)=M_\alpha(p,q)$ for $\alpha\in [0,1]$.
Those {\em weighted means} further satisfy the following smooth interpolation property:
\begin{equation}
M_0(p,q)=p, \quad M_1(p,q)=q, \quad M_{1-\alpha}(p,q)=M_{\alpha}(q,p).
\end{equation}
For example, a {\em quasi-arithmetic barycentric mean} is defined for a monotone function $f$ by:
\begin{equation}
M_f(p,q;1-\alpha,\alpha)= M_{f,\alpha}(p,q) = f^{-1}\left( (1-\alpha)f(p)+ \alpha f(q) \right).
\end{equation}
\begin{definition}[Regular mean]
A mean is said {\em regular} if it is:
\begin{enumerate}
\item homogeneous,
\item symmetric,
\item continuous, and
\item increasing in each variable.
\end{enumerate}
\end{definition}
In this paper, we shall consider regular means and weighted means.
The Pythagorean means are regular means that can be extended to {\em Pythagorean barycenters} (weighted means) for $p,q\in\bbR_{++}$ as follows:
\begin{itemize}
\item the {\em arithmetic barycenter} (A): $A(p,q;1-\alpha,\alpha)=(1-\alpha)p+\alpha q$,
\item the {\em geometric barycenter} (G): $G(p,q;1-\alpha,\alpha)= p^{1-\alpha}q^\alpha$, and
\item the {\em harmonic barycenter} (H): $H(p,q;1-\alpha,\alpha)= \frac{1}{(1-\alpha)\frac{1}{p}+\alpha\frac{1}{q}}=\frac{pq}{\alpha p+(1-\alpha)q}$.
\end{itemize}
The power barycenters (P) are defined by $P_\delta(p,q;1-\alpha,\alpha)=\left((1-\alpha)p^\delta+\alpha q^\delta\right)^{\frac{1}{\delta}}$ for $\delta\not =0$.
Those power barycenters generalize the arithmetic and harmonic barycenters, and can be extended into a smooth family of barycenters
by setting $P_0(p,q;1-\alpha,\alpha)=G(p,q;1-\alpha,\alpha)$.
Let us give two families of means that are not quasi-arithmetic means:
The weighted {\em Lehmer mean}~\cite{Beliakov-2015} of order $\delta$ is defined for $\delta\in\bbR$ as:
\begin{equation}
L_\delta(x_1,\ldots,x_n;w_1,\ldots,w_n) = \frac{\sum_{i=1}^n w_i x_i^{\delta+1}}{\sum_{i=1}^n w_i x_i^\delta}.
\end{equation}
Notice that we have $L_{-\frac{1}{2}}=G$ (the geometric mean) since the denominator of $L_{-\frac{1}{2}}(p,q)$ rewrites as
$p^{-\frac{1}{2}}+q^{-\frac{1}{2}}=\frac{\sqrt{p}+\sqrt{q}}{\sqrt{pq}}$, so that $L_{-\frac{1}{2}}(p,q)=\sqrt{pq}$.
The Lehmer means intersect with the H\"older means only for the arithmetic, geometric and harmonic means.
However the Lehmer mean $L_1(x,y)=\frac{x^2+y^2}{x+y}=C(x,y)$, the contraharmonic mean, is not a regular mean since it is {\em not} increasing in each variable.
The family of Lehmer barycentric means can further be encapsulated into the family of {\em Gini means}:
\begin{equation}
G_{\delta_1,\delta_2}(x_1,\ldots,x_n;w_1,\ldots,w_n) =
\left\{
\begin{array}{ll}
\left( \frac{\sum_{i=1}^n w_i x_i^{\delta_1}}{\sum_{i=1}^n w_i x_i^{\delta_2}}\right)^{\frac{1}{\delta_1-\delta_2}} & \delta_1\not=\delta_2,\\
\left(\prod_{i=1}^n x_i^{w_i x_i^\delta} \right)^{\frac{1}{\sum_{i=1}^n w_i x_i^\delta} } & \delta_1=\delta_2=\delta.
\end{array}
\right.
\end{equation}
Those families of Gini and Lehmer means are homogeneous means:
$G_{\delta_1,\delta_2}(\lambda x_1,\ldots,\lambda x_n;w_1,\ldots,w_n)= \lambda G_{\delta_1,\delta_2}(x_1,\ldots,x_n;w_1,\ldots,w_n)$ for any $\lambda>0$.
The family of Gini means includes the power means: $G_{0,\delta}=P_\delta$ for $\delta\leq 0$ and $G_{\delta,0}=P_\delta$ for $\delta\geq 0$.
The Bajraktarevic means~\cite{Beliakov-2014} are also not regular.
Given a symmetric and homogeneous mean $M(x,y)$, we can associate a dual mean $M^*(x,y)=\frac{1}{M(\frac{1}{x},\frac{1}{y})}=\frac{xy}{M(x,y)}$ that is symmetric, homogeneous, and satisfies $(M^*)^*=M$. We write concisely $M^*=\frac{G^2}{M}$ (the geometric mean $G$ is self-dual), and we have $\min^*=\max$ and $\max^*=\min$.
\subsection{Paper outline}
The goal of this paper is to generalize the ordinary Jensen, Bregman and Bhattacharyya distances~\cite{BR-2011} using an extended notion of convexity.
In particular, the classes of generalized convex functions $\calC_{M,N}$ generalize the ordinary convex functions (the standard $(A,A)$-convexity), and include the following classes:
\begin{itemize}
\item the class of {\em log-convex functions} ($M$ the arithmetic mean and $N$ the geometric mean),
\item the class of {\em multiplicatively convex functions} ($M$ and $N$ both geometric
means),
\item the class of {\em $M_p$-convex functions} ($M$ the arithmetic mean and $N$ the $p$-th power mean).
\end{itemize}
The paper is organized as follows:
Section~\ref{sec:GenCCD} defines the generalized Jensen and skew Jensen divergences from generalized convexity inequalities, extends the definitions to Jensen diversity, and
introduces the generalized Bregman divergences as a limit case of skew Jensen divergences.
Section~\ref{sec:QAB} considers the class of quasi-arithmetic means to report explicit formulas for these generalized Bregman divergences.
In Section~\ref{sec:GB}, we introduce a generalization of the statistical Bhattacharyya divergence and of the Bhattacharyya coefficient using the concept of comparable means, and show how to obtain closed-form expressions by adapting the means to the structure of the input distributions.
Finally, Section~\ref{sec:concl} concludes and hints at further perspectives.
For sake of completeness, the axiomatization of quasi-arithmetic means are reported in Appendix~\ref{sec:aqam}.
\section{Generalized Jensen, skewed Jensen and Bregman divergences}\label{sec:GenCCD}
The Jensen midpoint inequality of Eq.~\ref{eq:Jmidineq} can be used to build a symmetric dissimilarity measure $J_F(p,q)=J_F(q,p)$ for $F\in\calC$, originally called the Jensen difference in~\cite{JensenDiversity-1982}:
\begin{equation}
J_F(p,q) = \frac{F(p)+F(q)}{2} - F\left(\frac{p+q}{2}\right) \geq 0.
\end{equation}
Nowadays, that distance is called a {\em Jensen divergence} (or a Burbea-Rao divergence~\cite{BR-2011}).
The term ``divergence'' is traditionally used in information geometry~\cite{IG-2016} instead of distance to emphasize the fact that the dissimilarity may not be a metric.
A divergence $D(p,q)$ only requires:
\begin{enumerate}
\item to satisfy the law of the indiscernibles
\begin{equation}
D(p,q) = 0 \Leftrightarrow p=q,
\end{equation}
and
\item to be thrice differentiable in order to define a differential-geometric structure involving a metric tensor and a cubic tensor~\cite{IG-2016}.
\end{enumerate}
It follows by construction from the Jensen inequality that $J_F(p,q) \geq 0$, and that $J_F(p,q)=0$ iff $p=q$ for a {\em strictly convex} function $F$.
\subsection{Jensen Comparative Convexity Divergences}
Let us extend the definitions of Jensen, skewed Jensen divergences to the setting of comparative convexity as follows:
\begin{definition}[Jensen Comparative Convexity Divergence, JCCD]
The Jensen Comparative Convexity Divergence (JCCD) is defined for a strictly $(M,N)$-convex function $F\in\calC_{M,N}:I\rightarrow \bbR$ by:
\begin{equation}\label{eq:JCCD}
\boxed{J_F^{M,N}(p,q) = N(F(p),F(q))-F(M(p,q))}
\end{equation}
\end{definition}
For symmetric means $M$ and $N$, the JCCD is a symmetric divergence: $J_F^{M,N}(p,q)=J_F^{M,N}(q,p)$.
It follows from the strict $(M,N)$-convexity property of $F$ that $J_F^{M,N}(p,q)=0$ iff $p=q$.
The definition of the JCCD can be extended to skew JCCDs by taking the barycentric {\em regular} means:
\begin{definition}[Skew Jensen Comparative Convexity Divergence]\label{def:jccd}
The skew $\alpha$-Jensen Comparative Convexity Divergence (sJCCD) is defined for a strictly
$(M,N)$-convex function $F\in\calC_{M,N}:I\rightarrow \bbR$ where $M$ and $N$ are regular means and $\alpha\in (0,1)$ by:
\begin{equation}\label{eq:sJCCD}
\boxed{J_{F,\alpha}^{M,N}(p:q) = N_\alpha(F(p),F(q))-F(M_\alpha(p,q)).}
\end{equation}
\end{definition}
It follows that $J_{F,1-\alpha}^{M,N}(p:q)=J_{F,\alpha}^{M,N}(q,p)$.
The fact that $J_{F,\alpha}^{M,N}(p:q)\geq 0$ follows from the midpoint $(M,N)$-convexity property of function $F$
(see Theorem~A page 4 and Section 2.6 page 88 of~\cite{ConvexFunction-2006}).
In fact, the generalized midpoint convexity inequality plus the continuity assumption yields an exact characterization of $(M,N)$-convex functions, see~\cite{ConvexFunction-2006}.
The power means (including the harmonic mean, the arithmetic mean and the geometric mean by extension) are examples of regular means.
Note that the exponential function $\exp(x)$ is both $(L,L)$-convex and $(I,I)$-convex, where $L$ and $I$ denote the logarithmic and identric means, two regular Stolarsky means.
In some cases, when the barycentric means are well-defined for $\alpha\in\bbR$ ({\it ie.}, extrapolating values when $\alpha<0$ or $\alpha>1$), we can extend the skew Jensen divergences to $\alpha\in \bbR\backslash\{0,1\}$.
For example, using the arithmetic means $M=N=A$, we may define for $\alpha\in \bbR\backslash\{0,1\}$:
\begin{equation}\label{eq:sJCCDA}
J_{F,\alpha}(p:q) = \sign(\alpha(1-\alpha)) \left( A_\alpha(F(p),F(q))-F(A_\alpha(p,q))\right),
\end{equation}
where $A_\alpha(p,q)=(1-\alpha)p+\alpha q$.
\begin{example}[Jensen divergence for multiplicatively convex functions]
The class $\calC_{G,G}$ of strictly $(G,G)$-convex functions is called the class of {\em multiplicatively convex functions}.
Let $F\in\calC_{G,G}$. We get the $(G,G)$-Jensen divergence:
\begin{equation}
J^{G,G}_{F}(p,q) = \sqrt{F(p)F(q)}-F(\sqrt{pq}) \geq 0.
\end{equation}
We check that $J^{G,G}_{F}(x,x)=0$.
It turns out that $F\in\calC_{G,G}$ when $\log F (x)$
is a convex function of $\log x$ (see Lemma~\ref{lemma:cvx}).
Some examples of multiplicatively convex functions are $F(x)=\exp(x)$, $F(x)=\sinh(x)$, $F(x)=\Gamma(x)$ (the Gamma function generalizing the factorial), $F(x)=\exp(\log^2 x)$.
For example, take $F(x)=\exp(x)$.
Then the corresponding $(G,G)$-JCCD is:
\begin{equation}
J^{G,G}_{\exp}(p,q) = \exp\left(\frac{p+q}{2}\right) - \exp(\sqrt{pq})\geq 0.
\end{equation}
\end{example}
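For instance, at $p=1$ and $q=4$ (so that $\frac{p+q}{2}=\frac{5}{2}$ and $\sqrt{pq}=2$), this divergence evaluates to:
\begin{equation*}
J^{G,G}_{\exp}(1,4) = e^{\frac{5}{2}} - e^{2} \simeq 12.182 - 7.389 = 4.793 > 0.
\end{equation*}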
\subsection{Jensen Comparative Convexity Diversity Indices}
The $2$-point divergences ({\it ie.}, dissimilarity measure between two points) can be extended to a positively weighted set of values by defining a notion of {\em diversity}~\cite{JensenDiversity-1982} as:
\begin{definition}[Jensen Comparative Convexity Diversity Index, JCCDI]
Let $\{(w_i,x_i)\}_{i=1}^n$ be a set of $n$ positive weighted values so that $\sum w_i=1$.
Then the {\em Jensen diversity index} with respect to the strict $(M,N)$-convexity of a function $F$ is:
\begin{equation}\label{eq:JDI}
\boxed{J_F^{M,N}(x_1,\ldots, x_n; w_1, \ldots, w_n) = N(F(x_1),\ldots, F(x_n); w_1, \ldots, w_n) - F (M(x_1,\ldots, x_n; w_1, \ldots, w_n))}
\end{equation}
\end{definition}
It is proved in~\cite{Niculescu-2003} that $N(F(x_1),\ldots, F(x_n); w_1, \ldots, w_n) \geq F (M(x_1,\ldots, x_n; w_1, \ldots, w_n))$ for a continuous $(M,N)$-convex function. Therefore, we have $J_F^{M,N}(x_1,\ldots, x_n; w_1, \ldots, w_n) \geq 0$.
See also Theorem~A page 4 of~\cite{ConvexFunction-2006}.
When both means $M$ and $N$ are set to the arithmetic mean, this diversity index has been called the {\em Bregman information}~\cite{BD-2005} in the context of clustering with Bregman divergences. The Bregman information generalizes the notion of variance of a cluster obtained for the generator $F(x)=x^\top x$.
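As a quick check (a minimal Python sketch of ours), the $(A,A)$-Jensen diversity index with generator $F(x)=x^2$ indeed coincides with the variance of a weighted point set:
\begin{verbatim}
# The (A,A)-Jensen diversity index (Bregman information) of a weighted
# point set reduces to the variance for F(x) = x^2.
F = lambda x: x * x
xs = [1.0, 2.0, 4.0]
ws = [0.2, 0.3, 0.5]          # weights summing to 1
mean = sum(w * x for w, x in zip(ws, xs))
diversity = sum(w * F(x) for w, x in zip(ws, xs)) - F(mean)
variance = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs))
assert abs(diversity - variance) < 1e-12   # both equal 1.56 here
\end{verbatim}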
\subsection{Bregman Comparative Convexity Divergences}
Let us define the {\em Bregman Comparative Convexity Divergence} (BCCD), also called generalized $(M,N)$-Bregman divergence, as the limit case of skew JCCDs:
\begin{definition}[Bregman Comparative Convexity Divergence, BCCD]\label{def:bccd}
The Bregman Comparative Convexity Divergence (BCCD) is defined for a strictly $(M,N)$-convex function $F:I\rightarrow \bbR$ by
\begin{equation}\label{eq:BCCD}
\boxed{
B_{F}^{M,N}(p:q) = \lim_{\alpha\rightarrow 1^-} \frac{1}{\alpha(1-\alpha)}J_{F,\alpha}^{M,N}(p:q) = \lim_{\alpha\rightarrow 1^-} \frac{1}{\alpha(1-\alpha)} \left( N_\alpha(F(p),F(q))-F(M_\alpha(p,q)) \right)
}
\end{equation}
\end{definition}
It follows from the symmetry $J_{F,\alpha}(p:q)=J_{F,1-\alpha}(q:p)$ (for symmetric means) that when the limits exists, we get the {\em reverse Bregman divergence}:
\begin{equation}
B_{F}^{M,N}(q:p)= \lim_{\alpha\rightarrow 0^+} \frac{1}{\alpha(1-\alpha)}J_{F,\alpha}^{M,N}(p:q) = \lim_{\alpha\rightarrow 0^+} \frac{1}{\alpha(1-\alpha)} \left( N_\alpha(F(p),F(q))-F(M_\alpha(p,q)) \right).
\end{equation}
Note that the limits are one-sided limits.
Notice that when both means $M$ and $N$ are chosen as the arithmetic mean, we recover the ordinary Jensen, skew Jensen and Bregman divergences described and studied in~\cite{BR-2011}.
This generalization of Bregman divergences has also been studied by Petz~\cite{Petz-2007} to get generalized quantum relative entropies.
Petz defined the Bregman divergence between two points $p$ and $q$ of a convex set $C$ in a Banach space for a
given function $F:C\rightarrow \mathcal{B}(\mathcal{H})$ (Banach space induced by a Hilbert space $\mathcal{H}$) as:
\begin{equation}
B_F(p:q)=F(p)-F(q)-\lim_{\alpha\rightarrow 0^+} \frac{1}{\alpha}(F(q+\alpha(p-q)) -F(q)).
\end{equation}
Indeed, this last equation can be rewritten as:
\begin{eqnarray}
B_F(p:q) &=& \lim_{\alpha\rightarrow 0^+} \frac{1}{\alpha} \left(\alpha F(p)+(1-\alpha)F(q) - F(q+\alpha(p-q))\right),\\
&=& \lim_{\alpha\rightarrow 1^-} \frac{1}{1-\alpha} (A_{\alpha}(F(p),F(q)) -F(A_{\alpha}(p,q))),\\
&=& \lim_{\alpha\rightarrow 1^-} \frac{1}{1-\alpha} J_{F,\alpha}^{A,A}(p,q).
\end{eqnarray}
When $C$ is the set of positive semi-definite matrices of unit trace and $F(x)=x\log x$, then the induced Bregman divergence
is Umegaki's relative entropy~\cite{Petz-2007}: $B_F(p:q)=\tr p(\log p -\log q)$.
Thus we have a general recipe to get generalized Bregman divergences:
Study the asymptotic barycentric symmetric mean expansions of $M(p,q;1-\alpha,\alpha)$ and $N(F(p),F(q);1-\alpha,\alpha)$ when $\alpha\rightarrow 0$, and deduce the generalized $(M,N)$-Bregman divergence provided the limits of $\frac{1}{\alpha}\left(M(p,q;1-\alpha,\alpha)-p\right)$ and $\frac{1}{\alpha}\left(N(F(p),F(q);1-\alpha,\alpha)-F(p)\right)$
exist when $\alpha\rightarrow 0$.
Letting $\omega=2\alpha-1\in(-1,1)$ (or $\alpha=\frac{1+\omega}{2}\in(0,1)$) and using barycentric means, we can define the following divergence:
\begin{equation}
\boxed{D_{F,\omega}^{M,N}(p:q) = \frac{1}{1-\omega^2}\left( N\left(F(p),F(q);\frac{1-\omega}{2},\frac{1+\omega}{2}\right) -
F\left( M\left(p,q;\frac{1-\omega}{2},\frac{1+\omega}{2}\right)\right) \right)}
\end{equation}
Then the generalized Bregman divergences are obtained in the limit cases when $\omega\rightarrow\pm 1$.
Notice that in~\cite{Zhang-2004} (Sec. 3.5), Zhang defined a divergence functional from generalized (quasi-arithmetic) means:
for $f$ a strictly convex and strictly monotone increasing function, the divergence is defined for $\rho=f^{-1}$ as follows:
\begin{equation}
\calD^{(\alpha)}_\rho(p,q)=\frac{4}{1-\alpha^2}\int_\calX \left(
\frac{1-\alpha}{2}p + \frac{1+\alpha}{2}q - M_\rho(p,q)
\right) \dnu(x).
\end{equation}
We shall investigate such generalized $(M,N)$-Bregman divergences when both means are weighted quasi-arithmetic means in Section~\ref{sec:QAB}.
\subsection{From univariate to multivariate separable divergences}
Multivariate divergences can be built from univariate divergences component-wise.
For example, let $P=(P_1,\ldots, P_d)$ and $Q=(Q_1,\ldots, Q_d)$ be two vectors of $\bbR^d$, and consider the following multivariate generalized Bregman divergence:
\begin{equation}
B_F(P:Q) = \sum_{i=1}^d B_{F_i}^{M_i,N_i}(P_i:Q_i),
\end{equation}
where $F_i\in\calC_{M_i,N_i}$ is a $(M_i,N_i)$-convex function.
These divergences can be decomposed as a sum of univariate divergences, and are thus called {\em separable divergences} in the literature~\cite{LearningDivergence-2015}.
\begin{remark}
Observe that the BCCD can be approximated in practice from the JCCD by taking small values for $\alpha>0$:
For example, the ordinary Bregman divergence can be approximated from the ordinary skew Jensen divergence as follows:
\begin{equation}
B_F(q:p) \simeq \frac{1}{\alpha(1-\alpha)} \left ((1-\alpha)F(p)+\alpha F(q) -F((1-\alpha) p + \alpha q)\right),\quad \mbox{$\alpha>0$ small}.
\end{equation}
This is all the more interesting in practice for approximating the Bregman divergence by skipping the calculation of the gradient $\nabla F$.
\end{remark}
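For instance, the following minimal Python sketch (with names of our own choosing) checks this approximation for the ordinary convex generator $F(x)=x\log x$:
\begin{verbatim}
import math

F = lambda x: x * math.log(x)
def bregman(q, p):                        # exact: B_F(q:p)
    return F(q) - F(p) - (q - p) * (math.log(p) + 1.0)

p, q, alpha = 1.5, 3.0, 1e-6
skew = ((1 - alpha) * F(p) + alpha * F(q) - F((1 - alpha) * p + alpha * q))
approx = skew / (alpha * (1 - alpha))
print(bregman(q, p), approx)              # both close to 0.5794
\end{verbatim}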
We shall now report explicit formulas for the generalized Bregman divergences when using quasi-arithmetic means.
\section{Quasi-arithmetic Bregman divergences}\label{sec:QAB}
Let us report direct formulas for the generalized Bregman divergences defined with respect to quasi-arithmetic comparative convexity.
Let $\rho$ and $\tau$ be two continuous differentiable functions defining the quasi-arithmetic means $M_\rho$ and $M_\tau$, respectively.
\subsection{A direct formula}
By definition, a function $F\in\calC_{\rho,\tau}$ is $(M_\rho,M_\tau)$-convex iff:
\begin{equation}
M_\tau(F(p),F(q)) \geq F(M_\rho(p,q)).
\end{equation}
This $(M_\rho,M_\tau)$-midpoint convexity property with the continuity of $F$ yields the more general definition of $(M_\rho,M_\tau)$-convexity:
\begin{equation}
M_{\tau,\alpha}(F(p),F(q)) \geq F(M_{\rho,\alpha}(p,q)),\quad \alpha\in [0,1].
\end{equation}
Let us study those quasi-arithmetic Bregman divergences $B^{\rho,\tau}_F$ obtained when taking the limit:
\begin{equation}
B^{\rho,\tau}_F(q:p) = \lim_{\alpha\rightarrow 0} \frac{1}{\alpha(1-\alpha)} \left( M_{\tau,\alpha}(F(p),F(q))- F\left(M_{\rho,\alpha}(p,q)\right) \right),
\end{equation}
for $M_{\rho,\alpha}$ and $M_{\tau,\alpha}$ two quasi-arithmetic barycentric means obtained for continuous and monotonic functions $\rho$ and $\tau$, respectively.
Recall that a quasi-arithmetic barycentric mean for a monotone function $\tau$ is defined by:
\begin{equation}
M_{\tau,\alpha}(p,q)= \tau^{-1}\left(\tau(p)+\alpha(\tau(q)-\tau(p))\right), \quad \alpha\in [0,1],\quad M_{\tau,0}(p,q)=p,\quad M_{\tau,1}(p,q)=q.
\end{equation}
We state the generalized Bregman divergence formula obtained with respect to quasi-arithmetic comparative convexity:
\begin{theorem}[Quasi-arithmetic Bregman divergences, QABD\label{theo:qabd}]
Let $F:I\subset \bbR \rightarrow \bbR$ be a real-valued $(M_\rho,M_\tau)$-convex function defined on an interval $I$ for two strictly monotone and differentiable functions $\rho$ and $\tau$.
The quasi-arithmetic Bregman divergence (QABD) induced by the comparative convexity is:
\begin{equation}
\boxed{
B^{\rho,\tau}_F(p:q)
=
\frac{\tau(F(p))-\tau(F(q))}{\tau'(F(q))} - \frac{\rho(p)-\rho(q)}{\rho'(q)} F'(q).
}
\end{equation}
\end{theorem}
\begin{proof}
By taking the first-order Taylor expansion of $\tau^{-1}(x)$ at $x_0$, we get:
\begin{equation}
\tau^{-1}(x)\simeq_{x_0} \tau^{-1}(x_0) + (x-x_0) (\tau^{-1})'(x_0).
\end{equation}
Using the property of the derivative of an inverse function:
\begin{equation}
(\tau^{-1})'(x)=\frac{1}{\tau'(\tau^{-1}(x))},
\end{equation}
it follows that the first-order Taylor expansion of $\tau^{-1}(x)$ is:
\begin{equation}
\tau^{-1}(x)\simeq \tau^{-1}(x_0)+ (x-x_0) \frac{1}{\tau'(\tau^{-1}(x_0))}.
\end{equation}
Plugging $x_0=\tau(p)$ and $x=\tau(p)+\alpha(\tau(q)-\tau(p))$, we get a first-order approximation of the barycentric quasi-arithmetic mean $M_\tau$ when $\alpha\rightarrow 0$:
\begin{equation}
M_{\tau,\alpha}(p,q) \simeq p + \frac{\alpha(\tau(q)-\tau(p))}{\tau'(p)}.
\end{equation}
For example, when $\tau(x)=x$ ({\it ie.}, arithmetic mean), we have $A_\alpha(p,q) \simeq p+\alpha (q-p)$,
when $\tau(x)=\log x$ ({\it ie.}, geometric mean), we obtain $G_\alpha(p,q) \simeq p+\alpha p \log\frac{q}{p}$, and when $\tau(x)=\frac{1}{x}$ ({\it ie.}, harmonic mean) we get $H_\alpha(p,q) \simeq p+\alpha(p-\frac{p^2}{q})$.
For the regular power means, we have $P_\alpha(p,q) \simeq p+\alpha \frac{q^\delta-p^\delta}{\delta p^{\delta-1}}$.
These are first-order weighted mean approximations obtained for small values of $\alpha$.
Suppose $p < q$. Since $\tau$ is a monotone function: When $\tau$ is strictly increasing, $\tau'>0$ and $\tau(q)>\tau(p)$.
Therefore $\frac{\tau(q)-\tau(p)}{\tau'(p)}>0$. Similarly, when $\tau$ is strictly decreasing, $\tau'<0$ and $\tau(q)<\tau(p)$ so that $\frac{\tau(q)-\tau(p)}{\tau'(p)}>0$.
Now, consider the skewed Jensen Comparative Convexity Distance defined by:
\begin{equation}
J^{\rho,\tau}_{F,\alpha}(p:q)= M_{\tau,\alpha}(F(p),F(q)) - F(M_{\rho,\alpha}(p,q)),
\end{equation}
and apply a first-order Taylor expansion to get:
\begin{equation}
F(M_{\rho,\alpha}(p,q))\simeq F\left (p + \frac{\alpha(\rho(q)-\rho(p))}{\rho'(p)} \right)
\simeq F(p)+ \frac{\alpha(\rho(q)-\rho(p))}{\rho'(p)} F'(p).
\end{equation}
Thus it follows that the Bregman divergence for quasi-arithmetic comparative convexity is:
\begin{equation}
B^{\rho,\tau}_F(q:p) = \lim_{\alpha\rightarrow 0} \frac{1}{\alpha(1-\alpha)}J^{\rho,\tau}_{F,\alpha}(p:q) =
\frac{\tau(F(q))-\tau(F(p))}{\tau'(F(p))} - \frac{\rho(q)-\rho(p)}{\rho'(p)} F'(p),
\end{equation}
and the reverse Bregman divergence is:
\begin{equation}
B^{\rho,\tau}_F(p:q) = \lim_{\alpha\rightarrow 1} \frac{1}{\alpha(1-\alpha)} J^{\rho,\tau}_{F,\alpha}(p:q) = \lim_{\alpha\rightarrow 0} \frac{1}{\alpha(1-\alpha)} J^{\rho,\tau}_{F,\alpha}(q:p).
\end{equation}
For notational convenience, let us define the following {\em auxiliary function}:
\begin{equation}\label{eq:BDkappa}
\kappa_\gamma(x:y) = \frac{\gamma(y)-\gamma(x)}{\gamma'(x)}.
\end{equation}
Then the generalized Bregman divergence is written compactly as:
\begin{equation}
\boxed{B^{\rho,\tau}_F(p:q) = \kappa_\tau(F(q):F(p)) -\kappa_\rho(q:p) F'(q).}
\end{equation}
Table~\ref{tab:kappa} reports the auxiliary function instantiated for usual quasi-arithmetic generator functions.
Since power means are regular means, we get the following family of
{\em power mean Bregman divergences} for $\delta_1,\delta_2\in\bbR\backslash\{0\}$ with $F\in\calC_{P_{\delta_1},P_{\delta_2}}$:
\begin{equation}
\boxed{B^{\delta_1,\delta_2}_F(p:q) = \frac{F^{\delta_2}(p)-F^{\delta_2}(q)}{\delta_2 F^{\delta_2-1}(q)} - \frac{p^{\delta_1}-q^{\delta_1}}{\delta_1 q^{\delta_1-1}}F'(q) }
\end{equation}
\begin{table}
\centering
$$
\begin{array}{l|l|l}
\mbox{Type} & \gamma & \kappa_\gamma(x:y)= \frac{\gamma(y)-\gamma(x)}{\gamma'(x)} \\ \hline
A & \gamma(x)=x & y-x\\
G &\gamma(x)=\log x & x\log\frac{y}{x} \\
H & \gamma(x)=\frac{1}{x} & x^2\left(\frac{1}{x}-\frac{1}{y}\right)\\
P_\delta, \delta\not =0 &\gamma_\delta(x)=x^\delta & \frac{y^\delta-x^\delta}{\delta x^{\delta-1}} \\
\end{array}
$$
\caption{The auxiliary function $\kappa$ instantiated for the arithmetic, geometric and harmonic generators.
The generalized Bregman divergences write $B^{\rho,\tau}_F(p:q) = \kappa_\tau(F(q):F(p)) -\kappa_\rho(q:p) F'(q)$ for $F$ a real-valued $(M_\rho,M_\tau)$-convex generator. \label{tab:kappa}}
\end{table}
Note that when $\rho(x)=\tau(x)=x$ ({\it ie.}, quasi-arithmetic means yielding arithmetic means), we recover the fact that the skew Jensen difference tends to a Bregman divergence~\cite{BR-2011}:
\begin{equation}
\lim_{\alpha\rightarrow 0} \frac{1}{\alpha}J_{F,\alpha}(p:q) = B_F(q:p) = F(q)-F(p)-(q-p)F'(p),
\end{equation}
and
\begin{equation}
\lim_{\alpha\rightarrow 1} \frac{1}{1-\alpha} J_{F,\alpha}(p:q) = B_F(p:q) = F(p)-F(q)-(p-q)F'(q).
\end{equation}
Notice that we required function generator $F$ to be strictly $(M_\rho,M_\tau)$-convex and functions $\rho, \tau$ and $F$ to be differentiable in order to perform the various Taylor first-order expansions.
\end{proof}
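As a numerical sanity check of Theorem~\ref{theo:qabd} (a minimal Python sketch of ours), take $\rho=\idf$, $\tau=\log$ and the $(A,G)$-convex generator $F(x)=\exp(x^2)$ (so that $\log F$ is convex); the theorem's formula then reduces to $e^{q^2}(p-q)^2$ and agrees with the scaled skew Jensen divergence for $\alpha$ close to $1$:
\begin{verbatim}
import math

F  = lambda x: math.exp(x * x)
dF = lambda x: 2 * x * math.exp(x * x)

def qabd(p, q):                    # theorem formula for rho = id, tau = log
    return F(q) * (math.log(F(p)) - math.log(F(q))) - (p - q) * dF(q)

def skew_jensen(p, q, a):          # N: geometric, M: arithmetic barycenter
    N = F(p) ** (1 - a) * F(q) ** a
    M = (1 - a) * p + a * q
    return (N - F(M)) / (a * (1 - a))

p, q = 1.0, 2.0
print(qabd(p, q))                  # exp(4)*(p-q)^2 = 54.598...
print(skew_jensen(p, q, 1 - 1e-6)) # alpha -> 1^- recovers B(p:q)
\end{verbatim}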
In~\cite{BR-2011}, the Jensen divergence was interpreted as a {\em Jensen-Bregman divergence} defined by:
\begin{equation}
\JB_F(p,q)=\frac{B_F\left(p:\frac{p+q}{2}\right)+B_F\left(q:\frac{p+q}{2}\right)}{2}=\JB_F(q,p).
\end{equation}
The (discrete) {\em Jensen-Shannon divergence}~\cite{Lin-1991} is a Jensen-Bregman divergence for the Shannon information function $F(x)=\sum_{i=1}^d x_i \log x_i$, the negative Shannon entropy: $F(x)=-H(x)$.
It turns out that $\JB_F(p,q)=J_F(p,q)$.
This identity comes from the fact that the terms $p-\frac{p+q}{2}=\frac{p-q}{2}$ and $q-\frac{p+q}{2}=\frac{q-p}{2}=-\frac{p-q}{2}$ being multiplied by $F'(\frac{p+q}{2})$ cancel out.
Similarly, we can define the generalized {\em quasi-arithmetic Jensen-Bregman divergences} as:
\begin{equation}
\JB_F^{\rho,\tau}(p,q)=\frac{B_F^{\rho,\tau}\left(p:M_\rho(p,q)\right)+B_F^{\rho,\tau}\left(q:M_\rho(p,q)\right)}{2}.
\end{equation}
Consider $\tau=\idf$. Since $\rho(M_\rho(p,q))=\frac{\rho(p)+\rho(q)}{2}$, and $\rho(p)-\rho(M_\rho(p,q))=\frac{\rho(p)-\rho(q)}{2} =-(\rho(q)-\rho(M_\rho(p,q)))$, we get the following identity:
\begin{equation}
\JB_F^{\rho,\idf}(p,q)= \frac{F(p)+F(q)}{2} - F(M_\rho(p,q)) = J_F^{\rho,\idf}(p,q).
\end{equation}
\begin{lemma}[Generalized equivalence of Jensen-Bregman divergences with Jensen divergences]
The $(M_\rho,M_\tau)$-Jensen-Bregman divergence amounts to a $(M_\rho,M_\tau)$-Jensen divergence when $\tau=\idf$ ({\it ie.}, $M_\tau=A$, the arithmetic barycentric mean): $\JB_F^{\rho,\idf}(p,q)= J_F^{\rho,\idf}(p,q)$.
\end{lemma}
\subsection{Case study: Pythagorean-convex Bregman divergences}
Let us report the Bregman divergence with respect to a multiplicatively convex function:
\begin{example}
For the geometric mean $\rho(x)=\tau(x)=\log x$, we get the following geometric Bregman divergence ($(G,G)$-Bregman divergence or multiplicative Bregman divergence) for a $(G,G)$-convex generator function $F$:
\begin{equation}
B_F^{G,G}(q:p) =\lim_{\alpha\rightarrow 0} \frac{1}{\alpha(1-\alpha)} J^{G,G}_{F,\alpha}(p:q) = F'(p)\KL(p:q)-\KL(F(p):F(q)),
\end{equation}
where $\KL(p:q)=p\log\frac{p}{q}$ is the renowned Kullback-Leibler univariate function~\cite{BD-2005}.
\end{example}
\begin{corollary}[Pythagorean-convex Bregman divergences]
The Bregman divergences with respect to Pythagorean convexity are:
\begin{eqnarray}
B_F^{A,A}(p:q) &=& B_F(p:q) = F(p)-F(q)-(p-q)F'(q), \quad F\in\calC \\
B_F^{G,G}(p:q) &=& F(q)\log\frac{F(p)}{F(q)} + \left(q\log\frac{q}{p}\right)F'(q), \quad F\in\calC_{G,G}\\
B_F^{H,H}(p:q) &=& F^2(q) \left( \frac{1}{F(q)} -\frac{1}{F(p)} \right) + q^2 \left( \frac{1}{p} -\frac{1}{q} \right)F'(q), \quad F\in\calC_{H,H} \\
\end{eqnarray}
\end{corollary}
Similarly, the six other Pythagorean-convexity Bregman divergences can be uncovered.
Let us introduce a notion of dominance between means as follows:
\begin{definition}[Dominance]
A mean $M$ is said to dominate a mean $N$ iff $M_\alpha(x,y)\geq N_\alpha(x,y)$ for all $x,y\in I$ and $\alpha\in [0,1]$.
\end{definition}
We write $M\geq N$ or $N\leq M$ when $M$ dominates $N$.
\begin{definition}[Comparable means]
Two means $M$ and $N$ are said comparable if either $M$ dominates $N$ ({\em ie.}, $M\geq N$) or $N$ dominates $M$ ({\em ie.}, $M\leq N$).
\end{definition}
The power means are comparable means, and $P_{\delta_1}\leq P_{\delta_2}$ when $\delta_1<\delta_2$.
This explains the fundamental AGH inequality: $A=P_1\geq G=P_0 \geq H=P_{-1}$.
Lehmer means are comparable: $L_\delta\leq L_{\delta'}, \forall \delta\leq \delta'$.
The dual mean operator reverses the comparison order: If $M_1\leq M_2$ then $M_2^*\leq M_1^*$.
From the dominance relationships of means, we can get inequalities between these generalized Jensen/Bregman divergences.
For example, for an {\em increasing} function $F(x)$ that is both $(M_1,N_1)$-convex and $(M_2,N_2)$-convex, we have
$J_F^{M_1,N_1}(p:q)\geq J_F^{M_2,N_2}(p:q)$ when $N_1\geq N_2$ and $M_1 \leq M_2$.
Indeed, we check that in order to have:
\begin{equation}
N_1(F(p),F(q))-F(M_1(p,q)) \geq N_2(F(p),F(q))-F(M_2(p,q)),
\end{equation}
it is sufficient to have $N_1\geq N_2$ and $M_1 \leq M_2$ for an increasing function $F\in \calC_{M_1,N_1}\cap \calC_{M_2,N_2}$.
Let $\rho,\tau: I\rightarrow (0,\infty)$ be two synchronous (meaning both increasing or both decreasing) continuous bijective functions with $\tau/\rho$ nonincreasing.
Then the quasi-arithmetic mean $M_\rho$ is dominated by a quasi-arithmetic mean $M_\tau$: $M_\rho\leq M_\tau$.
\subsection{Checking the quasi-arithmetic convexity of functions}
To check whether a function $F$ is $(M,N)$-convex or not when using quasi-arithmetic means, we can use a reduction to standard convexity as follows:
\begin{lemma}[$(M_\rho,M_\tau)$-convexity to ordinary convexity~\cite{Aczel-1947}]\label{lemma:cvx}
Let $\rho:I\rightarrow\bbR$ and $\tau:J\rightarrow\bbR$ be two continuous and strictly monotonic real-valued functions with $\tau$ increasing,
then function $F:I\rightarrow J$ is $(M_\rho,M_\tau)$-convex iff function $G=F_{\rho,\tau} = \tau\circ F\circ\rho^{-1}$ is (ordinary) convex on $\rho(I)$.
\end{lemma}
\begin{proof}
Let us rewrite the $(M_\rho,M_\tau)$-convexity midpoint inequality as follows:
\begin{eqnarray}
F(M_\rho(x,y)) &\leq& M_\tau(F(x),F(y)),\\
F\left(\rho^{-1}\left(\frac{\rho(x)+\rho(y)}{2}\right)\right) &\leq& \tau^{-1}\left(\frac{\tau(F(x))+\tau(F(y))}{2}\right),\\
\end{eqnarray}
Since $\tau$ is strictly increasing, we have:
\begin{equation}
(\tau\circ F\circ\rho^{-1}) \left(\frac{\rho(x)+\rho(y)}{2}\right) \leq \frac{(\tau\circ F)(x)+(\tau\circ F)(y)}{2}.
\end{equation}
Let $u=\rho(x)$ and $v=\rho(y)$ so that $x=\rho^{-1}(u)$ and $y=\rho^{-1}(v)$ (with $u,v\in\rho(I)$).
Then it comes that:
\begin{equation}
(\tau\circ F\circ\rho^{-1})\left(\frac{u+v}{2}\right) \leq \frac{(\tau\circ F\circ\rho^{-1})(u)+(\tau\circ F\circ\rho^{-1})(v)}{2}.
\end{equation}
This last inequality is precisely the ordinary midpoint convexity inequality for function $G=F_{\rho,\tau}=\tau\circ F\circ\rho^{-1}$.
Thus $F$ is $(M_\rho,M_\tau)$-convex iff $G$ is ordinary convex.
\end{proof}
For example, a function $F$ is $M_p$-convex (a shortcut for $(A,M_p)$-convex) iff $F^p$ is convex when $p>0$.
Moreover, every $M_p$ convex function belongs to the class of $M_q$ convex functions when $q\geq p$.
When the functions are twice differentiable, this lemma allows one to check whether a function is $(M_\rho,M_\tau)$-convex by checking whether $(\tau\circ F\circ\rho^{-1})''>0$ or not.
For example, a positive function $F$ is $(H,H)$-convex iff $x\mapsto\frac{1}{F(1/x)}$ is concave (the generator $\tau(x)=\frac{1}{x}$ being decreasing, the orientation of the convexity reduction flips).
Recall that the $(H,H)$-convexity of $F$ implies the following generalized Jensen midpoint inequality:
\begin{equation}
\frac{2F(p)F(q)}{F(p)+F(q)} \geq F\left( \frac{2pq}{p+q} \right).
\end{equation}
Another example is to check the $(G,A)$-strict convexity of twice-differentiable $F$ by checking that $x^2F''(x)+xF'(x)>0$ for $x>0$, etc.
Notice that we can also graphically check the $(M_\rho,M_\tau)$-convexity of a univariate function $F$ by plotting the function
$y=F_\tau(x_\rho)=(\tau \circ F)(x_\rho)$ with abscissa $x_\rho=\rho^{-1}(x)$.
We can thus give a complete $(P_{\delta_1},P_{\delta_2})$-convex characterization of functions $f:I\subset \bbR_{++}\rightarrow \bbR_{++}$, see~\cite{Maksa-2015}.
Define function $f_{\delta_1,\delta_2}:I_{\delta_1}\rightarrow\bbR$ with $I_{\delta}=\{ x^\delta\ : \ x\in I\}$ for $\delta\not=0$ (and $I_0=\{ \log x\ : \ x\in I\}$) as:
\begin{equation}
f_{\delta_1,\delta_2}(x) = \left\{
\begin{array}{ll}
\sign(\delta_2)f^{\delta_2}(x^{\frac{1}{\delta_1}}) & \delta_1\not=0,\delta_2\not =0\\
\sign(\delta_2)(f^{\delta_2}(\exp(x))) & \delta_1=0,\delta_2\not =0\\
\log(f(x^{\frac{1}{\delta_1}})) & \delta_1\not=0,\delta_2 =0\\
\log(f(\exp(x))) & \delta_1=0,\delta_2=0
\end{array}
\right.
\end{equation}
Then $f$ is $(P_{\delta_1},P_{\delta_2})$-convex on $I\subset \bbR_{++}$ iff $f_{\delta_1,\delta_2}$ is convex on $I_{\delta_1}$.
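In practice, one can also probe $(P_{\delta_1},P_{\delta_2})$-midpoint convexity by random sampling, as in the following minimal Python sketch of ours (a necessary check only, not a proof):
\begin{verbatim}
import random

def power_mean(x, y, d):
    return ((x ** d + y ** d) / 2.0) ** (1.0 / d)

def looks_convex(f, d1, d2, lo, hi, trials=10000):
    for _ in range(trials):
        p, q = random.uniform(lo, hi), random.uniform(lo, hi)
        if f(power_mean(p, q, d1)) > power_mean(f(p), f(q), d2) + 1e-12:
            return False
    return True

# x -> x^2 is (P_1, P_2)-convex on (0, 10): P_1 = A and (x^2)^2 = x^4
# is convex, matching the (A, M_delta) criterion.
print(looks_convex(lambda x: x * x, 1, 2, 0.1, 10.0))   # True
\end{verbatim}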
\subsection{Proper quasi-arithmetic Bregman divergences}
Applying the ordinary Bregman divergence on the ordinary convex generator $G(x)=\tau (F(\rho^{-1}(x)))$ for a $(M_\rho,M_\tau)$-convex function
with:
\begin{eqnarray}
G'(x) &=& \tau(F(\rho^{-1}(x)))'\\
&=& (F(\rho^{-1}(x)))' \tau'(F(\rho^{-1}(x)))\\
&=& (\rho^{-1}(x))' F'(\rho^{-1}(x)) \tau'(F(\rho^{-1}(x)))\\
&=& \frac{1}{\rho'(\rho^{-1}(x))} F'(\rho^{-1}(x)) \tau'(F(\rho^{-1}(x))),
\end{eqnarray}
we get an ordinary Bregman divergence that is, in general, {\em different} from the generalized quasi-arithmetic Bregman divergence $B^{\rho,\tau}_F$:
\begin{eqnarray}
B_G(p:q) &=& G(p)-G(q)-(p-q)G'(q),\\
B_G(p:q) &=& \tau(F(\rho^{-1}(p)))-\tau(F(\rho^{-1}(q)))-(p-q) \frac{1}{\rho'(\rho^{-1}(q))} F'(\rho^{-1}(q)) \tau'(F(\rho^{-1}(q)))
\end{eqnarray}
This is in general a different generalized Bregman divergence: $B_G(p:q)\not = B_F^{\rho,\tau}(p:q)$.
But we check that $B_G(p:q) = B_F^{\rho,\tau}(p:q)$ when $\rho(x)=\tau(x)=x$ (since we have the derivatives $\rho'(x)=\tau'(x)=1$).
Let us notice the following remarkable identity:
\begin{equation}\label{eq:rkid}
\boxed{B_F^{\rho,\tau}(p:q)=\frac{1}{\tau'(F(q))} B_G(\rho(p):\rho(q))}
\end{equation}
Since $\tau(x)$ is a strictly increasing function, we have $\tau'(x)>0$, and since $B_G$ is an ordinary Bregman divergence we have $B_G(p':q')\geq 0$
(and $B_G(p':q')=0$ iff $p'=q'$) for any pair of $p'$ and $q'$ of values. It follows that $B_F^{\rho,\tau}$ is a {\em proper generalized Bregman divergence}:
$B_F^{\rho,\tau}(p:q)\geq 0$ with equality iff $p=q$.
Notice that $B_G(\rho(p):\rho(q))$ coincides with $B_{H}(p:q)$ for $H=\tau\circ F$ only when $\rho$ is affine.
Moreover, function $H$ may not be strictly ordinary convex
since $H'(x)=F'(x)\tau'(F(x))$ and $H''(x)=F''(x)\tau'(F(x))+(F'(x))^2\tau''(F(x))$, and therefore $B_H$ may not even be a Bregman divergence.
Function $H$ is strictly convex when $H''(x)>0$.
\begin{theorem}[Proper generalized $(M_\rho,M_\tau)$-Quasi-Arithmetic Bregman divergence]
Let $F:I\subset \bbR \rightarrow \bbR$ be a real-valued $(M_\rho,M_\tau)$-convex function defined on an interval $I$ for two strictly monotone and differentiable functions $\rho$ and $\tau$, with $\tau$ strictly increasing.
The quasi-arithmetic Bregman divergence induced by the comparative convexity is a {\em proper} divergence:
\begin{eqnarray}
B^{\rho,\tau}_F(p:q)
&=&
\frac{\tau(F(p))-\tau(F(q))}{\tau'(F(q))} - \frac{\rho(p)-\rho(q)}{\rho'(q)} F'(q),\\
&=& \frac{1}{\tau'(F(q))} B_{\tau\circ F\circ \rho^{-1}}(\rho(p):\rho(q)) \geq 0,
\end{eqnarray}
with $B^{\rho,\tau}_F(p:q)=0$ iff $p=q$.
\end{theorem}
Using Taylor's expansion of $G$ with the exact Lagrange remainder, we get:
$$
B^{\rho,\tau}_F(p:q) = \frac{1}{2\tau'(F(q))} (\rho(p)-\rho(q))^2 G''(\rho(\xi)),
$$
for some $\xi\in [pq]$.
\subsection{Conformal Bregman divergences in embedded space}
The generalized Bregman divergences $B_F^{\rho,\tau}(p:q)$ can also be interpreted as Bregman conformal divergences~\cite{Conformal-2016} on the $\rho$-embedding of parameters:
$B_F^{\rho,\tau}(p:q)= \kappa(\rho(q)) B_G(\rho(p):\rho(q))$
with positive conformal factor $\kappa(x)=\frac{1}{\tau'(F(\rho^{-1}(x)))}$ for a strictly increasing monotone function $\tau$.
\begin{corollary}[Generalized quasi-arithmetic Bregman divergences as conformal Bregman divergences]
The generalized quasi-arithmetic Bregman divergence $B^{\rho,\tau}_F(p:q)$ amounts to compute an ordinary Bregman conformal divergence in the $\rho$-embedded space:
$B_F^{\rho,\tau}(p:q)= \kappa(\rho(q)) B_G(\rho(p):\rho(q))$ with conformal factor $\kappa(x)=\frac{1}{\tau'(F(\rho^{-1}(x)))}>0$.
\end{corollary}
\subsection{Generalized Bregman centroids}
The identity of Eq.~\ref{eq:rkid} allows one to compute generalized Bregman centroids easily.
For a positively weighted set of $n$ scalars $(w_1,p_1), \ldots, (w_n,p_n)$,
define the generalized Bregman centroid as the minimiser of $\sum_{i=1}^n w_i B_F^{\rho,\tau}(c:p_i)$.
We have:
$$
\sum_{i=1}^n w_i B_F^{\rho,\tau}(c:p_i)=\sum_{i=1}^n \frac{w_i}{\tau'(F(p_i))} B_G(\rho(c):\rho(p_i)).
$$
Let $w_i'=\frac{w_i}{\tau'(F(p_i))}>0$, $W'=\sum_{i=1}^n w_i'$, $c'=\rho(c)$ and $p_i'=\rho(p_i)$.
Then it follows that the generalized Bregman centroid is unique and available in closed-form:
\begin{equation}
G'(c')= \sum_{i=1}^n \frac{w_i'}{W'} G'(p_i'),
\end{equation}
with $G'(x')=G'(\rho(x))=\frac{F'(x)\tau'(F(x))}{\rho'(x)}$.
Thus the generalized Bregman centroid can be interpreted as a regularized Bregman centroid (see the total Bregman centroid~\cite{tBD-2011}).
We can also extend the $k$-means++ seeding~\cite{kmeanspp-2007,kvariatepp-2016} to these generalized Bregman divergences.
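For concreteness, here is a minimal Python sketch of this closed-form centroid for $\rho=\idf$, $\tau=\log$ and $F(x)=\exp(x^2)$, so that $G(x)=\log F(x)=x^2$ and $G'(x)=2x$ is invertible:
\begin{verbatim}
import math

F = lambda x: math.exp(x * x)
points  = [0.5, 1.0, 2.0]
weights = [0.3, 0.3, 0.4]

# conformal re-weighting: w_i' = w_i / tau'(F(p_i)) = w_i * F(p_i)
wp = [w * F(p) for w, p in zip(weights, points)]
W  = sum(wp)
# solve G'(c) = sum_i (w_i'/W) G'(p_i), i.e. 2c = sum_i (w_i'/W) 2 p_i
c = sum(wi * p for wi, p in zip(wp, points)) / W
print(c)   # the unique minimizer of sum_i w_i B_F^{id,log}(c : p_i)
\end{verbatim}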
\subsection{Examples of quasi-arithmetic Bregman divergences}
Using Table~\ref{tab:kappa} and Eq~\ref{eq:BDkappa}, we can easily instantiate the various comparative-convexity divergences.
For example, considering $\rho(x)=x$ (arithmetic mean $A$), we get the following families of divergences:
\begin{description}
\item[$(A,A)$-divergences.] Ordinary case with $\rho=\tau=\mathrm{id}$, the identity function.
\begin{eqnarray}
J_F^{A,A}(p,q) &=& \frac{F(p)+F(q)}{2}-F\left(\frac{p+q}{2}\right),\\
J_{F,\alpha}^{A,A}(p:q) &=& (1-\alpha)F(p)+\alpha F(q)- F( (1-\alpha)p+\alpha q),\\
B_F^{A,A}(p:q) &=& F(p)-F(q)-(p-q)F'(q).
\end{eqnarray}
$F\in\calC_{A,A}$ iff $F\in\calC$.
\item[$(A,G)$-divergences.] $\rho=\mathrm{id}$, $\tau=\log$ the logarithmic function.
\begin{eqnarray}
J_F^{A,G}(p,q) &=& \sqrt{F(p)F(q)}-F\left(\frac{p+q}{2}\right),\\
J_{F,\alpha}^{A,G}(p:q) &=& F^{1-\alpha}(p)F^\alpha(q)- F( (1-\alpha)p+\alpha q),\\
B_F^{A,G}(p:q) &=& F(q)\log \frac{F(p)}{F(q)} -(p-q)F'(q).
\end{eqnarray}
$F\in\calC_{A,G}$ iff $\log\circ F$ is convex ({\it ie.}, a log-convex function).
\item[$(A,H)$-divergences.] $\rho=\mathrm{id}$, $\tau=\frac{1}{x}$ (with $\tau'(x)=-\frac{1}{x^2}$).
\begin{eqnarray}
J_F^{A,H}(p,q) &=& \frac{2F(p)F(q)}{F(p)+F(q)} -F\left(\frac{p+q}{2}\right),\\
J_{F,\alpha}^{A,H}(p:q) &=& \frac{1}{(1-\alpha)\frac{1}{F(p)}+ \alpha\frac{1}{F(q)} }- F( (1-\alpha)p+\alpha q),\\
&=& \frac{F(p)F(q)}{\alpha F(p)+(1-\alpha)F(q)}- F( (1-\alpha)p+\alpha q),\\
B_F^{A,H}(p:q) &=& F^2(q)\left(\frac{1}{F(q)} - \frac{1}{F(p)} \right)-(p-q)F'(q).
\end{eqnarray}
Since $\tau(x)=\frac{1}{x}$ is decreasing, $F\in\calC_{A,H}$ iff $\frac{1}{F}$ is {\em concave} (for a positive function $F$).
For example, $F(x)=\frac{1}{\log x}$ is $(A,H)$-convex on $x>1$.
\item[$(A,M_\delta)$-divergences.] $\rho=\mathrm{id}$, $\tau=x^\delta$ for $\delta >0$.
\begin{eqnarray}
J_F^{A,M_\delta}(p,q) &=& \left(\frac{F^\delta(p)+F^\delta(q)}{2}\right)^{\frac{1}{\delta}} -F\left(\frac{p+q}{2}\right),\\
J_{F,\alpha}^{A,M_\delta}(p:q) &=& \left((1-\alpha)F^\delta(p)+\alpha F^\delta(q)\right)^{\frac{1}{\delta}} - F( (1-\alpha)p+\alpha q),\\
B_F^{A,M_\delta}(p:q) &=& \frac{F^\delta(p)-F^\delta(q)}{\delta F^{\delta-1}(q)}-(p-q)F'(q).
\end{eqnarray}
$F\in\calC_{A,M_\delta}$ iff $(x^\delta) \circ F=F^\delta$ is convex for $\delta>0$.
\end{description}
\section{Generalized statistical Bhattacharyya distances with comparative means}\label{sec:GB}
The Bhattacharyya distance~\cite{Bhattachayya-1943} (1943) is a {\em statistical distance} defined between two probability measures dominated by a measure $\nu$ (often, the Lebesgue measure or the counting measure).
Let $p(x)$ and $q(x)$ be the densities defined on the support $\calX$.
Then the Bhattacharyya distance is defined by:
\begin{equation}
\Bhat(p(x):q(x)) = -\log \int_\calX \sqrt{p(x)q(x)} \dnu(x).
\end{equation}
The skewed Bhattacharyya distance for $\alpha\in(0,1)$ is defined by
\begin{equation}
\Bhat_\alpha(p(x):q(x)) = -\log \int_\calX p^{\alpha}(x) q^{1-\alpha}(x) \dnu(x),
\end{equation}
with $\Bhat(p(x):q(x))=\Bhat_{\frac{1}{2}}(p(x):q(x))$.
The term $c_\alpha(p(x):q(x))=\int_\calX p^{\alpha}(x) q^{1-\alpha}(x) \dnu(x)$ is interpreted as a coefficient of similarity, also called the {\em Bhattacharyya affinity} coefficient~\cite{GenBhat-2014}.
This term plays an important role in information geometry~\cite{IG-2016} within the family of $\alpha$-divergences:
\begin{equation}
I_\alpha(p(x):q(x)) = \frac{1-\int_\calX p^{\alpha}(x) q^{1-\alpha}(x) \dnu(x)}{\alpha(1-\alpha)} = \frac{1-c_\alpha(p(x):q(x))}{\alpha(1-\alpha)}.
\end{equation}
Thus we can plug the Bhattacharyya distance in the $\alpha$-divergence by using the identity $c_\alpha(p(x):q(x))=\exp(-\Bhat_\alpha(p(x):q(x)))$.
The $\alpha$-divergences tend to the Kullback-Leibler divergence when $\alpha\rightarrow 1$ and to the reverse Kullback-Leibler divergence when $\alpha\rightarrow 0$, see~\cite{IG-2016}.
The standard Bhattacharyya distance is very well suited to the computation of the distance between members of the same exponential family~\cite{BR-2011}.
Indeed, let $p(x)=p(x;\theta_p)$ and $q(x)=p(x;\theta_q)$ be two distributions belonging to the same exponential family
$\{p(x;\theta) =\exp(\theta^\top x-F(\theta)) \ :\ \theta\in\Theta \}$, where $\Theta$ denotes the natural parameter space~\cite{EF-2009}.
Then we have:
\begin{equation}
\Bhat_\alpha(p(x;\theta_p):p(x;\theta_q)) = J_{F,1-\alpha}(\theta_p:\theta_q).
\end{equation}
Here, the term $1-\alpha$ comes from the fact that the coefficient has been historically defined for the geometric mean $p^\alpha(x) q^{1-\alpha}(x)$.
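As a numerical illustration (a minimal Python sketch using crude Riemann quadrature), for two unit-variance Gaussians, which form an exponential family with natural parameter $\theta=\mu$ and log-normalizer $F(\theta)=\frac{\theta^2}{2}$, the skewed Bhattacharyya distance indeed matches the skew Jensen divergence $J_{F,1-\alpha}$:
\begin{verbatim}
import math

def gauss(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def bhat(mu1, mu2, alpha, n=20000, lo=-20.0, hi=20.0):
    h = (hi - lo) / n                     # crude Riemann quadrature
    s = sum(gauss(lo + i * h, mu1) ** alpha *
            gauss(lo + i * h, mu2) ** (1 - alpha) for i in range(n)) * h
    return -math.log(s)

F = lambda t: 0.5 * t * t                 # log-normalizer
def skew_jensen(t1, t2, b):
    return (1 - b) * F(t1) + b * F(t2) - F((1 - b) * t1 + b * t2)

mu1, mu2, alpha = 0.0, 2.0, 0.3
print(bhat(mu1, mu2, alpha))              # ~ 0.42
print(skew_jensen(mu1, mu2, 1 - alpha))   # alpha(1-alpha)(mu1-mu2)^2/2 = 0.42
\end{verbatim}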
In~\cite{HolderDiv-2017}, the Bhattacharyya distance was extended to positive measures by defining a projective divergence relying on the H\"older inequality.
By definition, any {\em projective divergence} $D(p,q)$ satisfies $D(\lambda p,\lambda' q)=D(p,q)$ for any $\lambda,\lambda'>0$.
Here, we consider yet another rich generalization of the Bhattacharyya distance by noticing that $p^{\alpha}(x) q^{1-\alpha}(x)=G(p(x),q(x);\alpha,1-\alpha)$ is a geometric barycenter and that
$\int_\calX (\alpha p(x)+ (1-\alpha) q(x)) \dnu(x)=\int_\calX A(p(x),q(x);\alpha,1-\alpha) \dnu(x)= 1$ can be interpreted as a (hidden) unit denominator.
Thus consider two {\em comparable means}~\cite{Cargo-1965} $M$ and $N$ that guarantees by definition that $M(a,b;\alpha,1-\alpha) \leq N(a,b;\alpha,1-\alpha)$ for any value of $a,b$ and $\alpha\in [0,1]$ (written for short as $M\leq N$), and
define the generalized Bhattacharyya distance as follows:
\begin{definition}[Comparative-Mean Bhattacharyya Distance, CMBD]\label{def:GB}
For two distinct comparable means $M$ and $N$ such that $M \leq N$, the comparative-mean skewed Bhattacharyya distance is defined by:
\begin{equation}\label{eq:GenBhat}
\boxed{
\Bhat_\alpha^{M,N}(p(x):q(x)) = -\log \frac{\int_\calX M(p(x),q(x);1-\alpha,\alpha) \dnu(x)}{\int_\calX N(p(x),q(x);1-\alpha,\alpha) \dnu(x)}.
}
\end{equation}
\end{definition}
We have $\Bhat_\alpha^{M,N}(q(x):p(x)) = \Bhat_{1-\alpha}^{M,N}(p(x):q(x))$.
It follows from the property of abstract barycentric means that $\Bhat_\alpha^{M,N}(q(x):p(x))=0$ iff $M(p(x),q(x);1-\alpha,\alpha)=N(p(x),q(x);1-\alpha,\alpha)$ for strict, distinct means, that is, iff $p(x)=q(x)$ ($\nu$-almost everywhere).
When $M=G$ is chosen as the geometric mean and $N=A$ is taken as the arithmetic mean, we recover the ordinary skewed Bhattacharyya distance, modulo the fact that we swap $\alpha\leftrightarrow 1-\alpha$: $\Bhat_\alpha^{M,N}(p(x):q(x)) = \Bhat_{1-\alpha}(p(x):q(x))$.
When $M$ and $N$ are both {\em homogeneous} means, we end up with a {\em homogeneous} comparative-mean Bhattacharyya distance.
That is, the divergence is invariant for the {\em same} scaling factor $\lambda$:
$\Bhat_\alpha^{M,N}(\lambda p(x): \lambda q(x)) = \Bhat_\alpha^{M,N}(p(x):q(x))$ for any $\lambda>0$.
See~\cite{Zhang-2013} for the definition of the homogeneous $(\alpha,\beta)$-divergences.
\begin{corollary}
The comparative-mean Bhattacharyya distance for comparable homogeneous means yields a homogeneous statistical distance.
\end{corollary}
Since distinct power means are always comparable (ie., $P_{\delta_1} \leq P_{\delta_2}$ when $\delta_1<\delta_2$), we define the
{\em power-mean Bhattacharyya divergence} for $\delta_1, \delta_2\in\bbR\backslash\{0\}$ with $\delta_1\not=\delta_2$ as follows:
\begin{eqnarray}\label{eq:powermeanbhat}
\Bhat_\alpha^{\delta_1,\delta_2}(p(x):q(x)) &=& \frac{1}{\delta_1-\delta_2} \log\left( \frac{\int_\calX P_{\delta_1}(p(x),q(x);1-\alpha,\alpha) \dnu(x)}{\int_\calX P_{\delta_2}(p(x),q(x);1-\alpha,\alpha) \dnu(x)}\right),\\
&=& \frac{1}{\delta_1-\delta_2} \log \left( \frac{\int_\calX \left((1-\alpha)p^{\delta_1}(x)+\alpha q^{\delta_1}(x)\right)^{\frac{1}{\delta_1}} \dnu(x)}{
\int_\calX \left((1-\alpha)p^{\delta_2}(x)+\alpha q^{\delta_2}(x)\right)^{\frac{1}{\delta_2}} \dnu(x)
}\right).
\end{eqnarray}
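As an illustration, the following minimal numerical sketch (ours, not part of the analysis; the densities, the value of $\alpha$ and the orders $\delta_1=1$, $\delta_2=-1$ are arbitrary choices) evaluates Eq.~\eqref{eq:powermeanbhat} by quadrature for two univariate Gaussian densities:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def power_mean(a, b, w, delta):
    # weighted Hoelder power mean P_delta(a, b; w, 1 - w)
    return (w * a**delta + (1 - w) * b**delta)**(1.0 / delta)

def power_mean_bhat(p, q, alpha, d1, d2, lo=-40, hi=40):
    # weights (1 - alpha, alpha), as in the displayed formula
    num = quad(lambda x: power_mean(p(x), q(x), 1 - alpha, d1), lo, hi)[0]
    den = quad(lambda x: power_mean(p(x), q(x), 1 - alpha, d2), lo, hi)[0]
    return np.log(num / den) / (d1 - d2)

p = norm(0.0, 1.0).pdf
q = norm(1.0, 2.0).pdf
# d1 > d2 gives P_{d1} >= P_{d2} pointwise, hence a nonnegative value
print(power_mean_bhat(p, q, alpha=0.5, d1=1.0, d2=-1.0))
\end{verbatim}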
\begin{remark}
Yet another type of divergences are {\em conformal divergences}~\cite{Conformal-2016}.
A conformal divergence $D_{h}(x:y)$ can be factorized as $D_h(x:y)=h(x:y)D(x:y)$ where $h(x:y)$ is a positive {\em conformal factor}
function~\cite{tJ-2015} and $D$ a base divergence.
Conformal divergences such as the total Jensen divergences~\cite{tJ-2015} or the total Bregman divergences~\cite{tB-2012} proved useful in practice to regularize the base divergence and to guarantee invariance by rotation of the coordinate system.
\end{remark}
When considering quasi-arithmetic means for $M=M_f$ and $N=M_g$ for two continuous and increasing functions $f$ and $g$ on an interval domain $I=[a,b]$, a necessary and sufficient condition~\cite{Cargo-1965}
for $M_f\leq M_g$ is that $g\circ f^{-1}$ is convex on interval $[f(a),f(b)]$.
Function $g$ is then said convex with respect to $f$.
Thus two quasi-arithmetic means $M_f$ and $M_g$ are comparable when either $g\circ f^{-1}$ is convex ($M_f\leq M_g$) or
$f\circ g^{-1}$ is convex ($M_f\geq M_g$).
{\em Relative convexity} studies the concept of convexity of a function $g$ with respect to
another function $f$: It is denoted by $f \triangleleft g$, with the notation $\triangleleft$ borrowed from~\cite{ConvexFunction-2006}.
A general characterization of the relative convexity $f\triangleleft g$ is as follows:
\begin{equation}
\forall x,y,z\in\calX, f(x)\leq f(y)\leq f(z) \Rightarrow \left| \begin{array}{ccc}
1 & f(x) & g(x)\cr
1 & f(y) & g(y)\cr
1 & f(z) & g(z)\cr
\end{array} \right|\geq 0,
\end{equation}
for $f$ a non-constant function.
When the domain $\calX=I$ is an interval and $f$ is a strictly increasing and continuous function, then
$M_f\leq M_g$ iff $f\triangleleft g$ ($g\circ f^{-1}$ is convex).
For example, $f$ is multiplicatively convex (type $(G,G)$-convexity) iff:
\begin{equation}
\forall x,y,z\in\calX, x \leq y \leq z
\Rightarrow \left| \begin{array}{ccc}
1 & \log x & \log f(x)\cr
1 & \log y & \log f(y)\cr
1 & \log z & \log f(z)\cr
\end{array} \right|\geq 0.
\end{equation}
Relative convexity is a sub-area of comparative
convexity.
For example, we have the following correspondences of comparative convexity classes of functions:
\begin{itemize}
\item $f\in\calC$ iff $\idf \triangleleft f$,
\item $f\in\calC_{A,G}$ iff $\idf \triangleleft \log f$,
\item $f\in\calC_{G,A}$ iff $\log \triangleleft f$,
\item $f\in\calC_{G,G}$ iff $\log \triangleleft \log f$.
\end{itemize}
The criterion of relative convexity can be used to recover the celebrated Arithmetic Mean--Geometric Mean--Harmonic Mean (AM--GM--HM) inequality:
When $M=M_{\log}$ is chosen as the geometric mean and $N=M_{\idf}$ is taken as the arithmetic mean,
we check that we have $\log \triangleleft \idf$ (and $\idf\circ \exp=\exp$, $M_{\log} \leq M_{\idf}$), and we recover the ordinary skewed Bhattacharyya distance.
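This can also be probed numerically through the determinant criterion above; the following minimal sketch (ours; the sampling choices are arbitrary) checks $\log\triangleleft\idf$, i.e., $M_{\log}\leq M_{\idf}$, on random triples:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
f = np.log                       # f = log
g = lambda u: u                  # g = identity, so f <| g means GM <= AM

for _ in range(10000):
    x, y, z = np.sort(rng.uniform(0.1, 10.0, size=3))  # f(x)<=f(y)<=f(z)
    det = np.linalg.det(np.array([[1.0, f(x), g(x)],
                                  [1.0, f(y), g(y)],
                                  [1.0, f(z), g(z)]]))
    assert det >= -1e-9          # determinant criterion for log <| id
print("relative convexity log <| id confirmed on random triples")
\end{verbatim}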
An interesting subfamily of Bhattacharyya distances is obtained for $N=A=M_\idf$. In that case, we have:
\begin{equation}\label{eq:GenBhatOne2}
\boxed{
\Bhat_\alpha^{M,N}(p(x):q(x)) = -\log \int_\calX M(p(x),q(x);1-\alpha,\alpha) \dnu(x),
}
\end{equation}
for $M_f\leq M_\idf$ with $f^{-1}$ convex in the ordinary sense.
We compare the $\delta$-th H\"older power mean with the $\delta'$-th H\"older power mean on $\bbR_{++}=(0,\infty)$ as follows:
$P_\delta\leq P_{\delta'}$ when $\delta\leq\delta'$ and $P_\delta\geq P_{\delta'}$ when $\delta\geq\delta'$.
Notice that in~\cite{GenBhat-2014}, the generalized Bhattacharyya coefficients $c_\alpha^f(p(x):q(x))=\int_\calX M_f(p(x),q(x);1-\alpha,\alpha) \dnu(x)$ were introduced to upper bound the Bayes' probability of error.
Here, we further extend this generalization by considering comparable means, and we rewrite the comparative-mean Bhattacharyya distance as:
\begin{equation}\label{eq:GenBhat2}
\boxed{\Bhat_\alpha^{M,N}(p(x):q(x)) = -\log \frac{ c_\alpha^M(p(x):q(x)) }{c_\alpha^N(p(x):q(x))}}
\end{equation}
where $c_\alpha^M(p(x):q(x))=\int_\calX M(p(x),q(x);1-\alpha,\alpha) \dnu(x)$ is a generalized Bhattacharyya affinity coefficient.
Notice that $c_\alpha^A(p(x):q(x))=1$ when choosing the arithmetic mean.
Those generalized Bhattacharyya distances are handy for getting closed-form formulas depending on the structure of the probability densities:
For example, consider the harmonic-arithmetic comparative-mean Bhattacharyya distance $\Bhat_\alpha^{H,A}(p(x):q(x))$ between two Cauchy distributions
$p(x)=p(x;s_1)$ and $q(x)=p(x;s_2)$ with $p(x;s)=\frac{s}{\pi(x^2+s^2)}$ for a scale parameter $s>0$.
It was shown in~\cite{GenBhat-2014} that:
\begin{equation}
c_\alpha^H(p(x;s_1):p(x;s_2))= \frac{s_1s_2}{((1-\alpha)s_1+\alpha s_2 )s_\alpha}.
\end{equation}
Therefore it comes that:
\begin{equation}
\Bhat_\alpha^{H,A}(p(x;s_1):p(x;s_2)) = -\log \frac{s_1s_2}{((1-\alpha)s_1+\alpha s_2 )s_\alpha}.
\end{equation}
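This closed form is easily checked by quadrature. In the following minimal sketch (ours, not from~\cite{GenBhat-2014}), the harmonic barycenter uses the weighting $(1-\alpha,\alpha)$ of Definition~\ref{def:GB}, and the quantity \texttt{closed} spells out the right-hand side under this weighting with the factor $s_\alpha$ made explicit; this reading of $s_\alpha$ is our assumption:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def cauchy(x, s):
    return s / (np.pi * (x * x + s * s))

def c_H(s1, s2, alpha):
    # harmonic barycenter of the two densities, weights (1-alpha, alpha)
    h = lambda x: 1.0 / ((1 - alpha) / cauchy(x, s1)
                         + alpha / cauchy(x, s2))
    return quad(h, -np.inf, np.inf)[0]

s1, s2, alpha = 1.0, 3.0, 0.3
num = c_H(s1, s2, alpha)
# closed form with s_alpha spelled out under this weighting (assumed):
# s_alpha = sqrt(s1*s2*Bp/B), so that c = s1*s2/(B*s_alpha)
B, Bp = (1 - alpha)*s1 + alpha*s2, (1 - alpha)*s2 + alpha*s1
closed = np.sqrt(s1 * s2 / (B * Bp))
print(num, closed, -np.log(num))   # last value: Bhat_alpha^{H,A}
\end{verbatim}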
The original $(G,A)$-Bhattacharyya distance does not yield a simple closed-form expression when dealing with Cauchy distributions.
In practice, it is thus a useful mathematical trick to tailor the generalized Bhattacharyya distance to the structure of the family of distributions in order to get efficient algorithms.
Note that the Cauchy family of distributions does not form an exponential family, but can be interpreted as a deformed exponential family~\cite{Naudts-2014} by defining corresponding deformed logarithm and exponential functions.
Other examples of generalized Bhattacharyya coefficients with closed-form expressions are reported in~\cite{GenBhat-2014} using the power means for the Pearson type VII and multivariate $t$-distributions.
Notice that in the discrete case, we always get a closed-form expression since the integrals transform into finite sums:
\begin{equation}
\Bhat_\alpha^{M,N}(p(x):q(x)) = -\log \frac{\sum_{i=1}^d M(p_i,q_i;1-\alpha,\alpha)}{\sum_{i=1}^d N(p_i,q_i;1-\alpha,\alpha)}.
\end{equation}
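As a minimal illustration (ours; the distributions and the choice of means are arbitrary), the harmonic--arithmetic instance of this discrete formula can be computed directly:
\begin{verbatim}
import numpy as np

def cmbd(p, q, alpha, M, N):
    num = sum(M(pi, qi, 1 - alpha, alpha) for pi, qi in zip(p, q))
    den = sum(N(pi, qi, 1 - alpha, alpha) for pi, qi in zip(p, q))
    return -np.log(num / den)

harmonic   = lambda a, b, w1, w2: 1.0 / (w1 / a + w2 / b)
arithmetic = lambda a, b, w1, w2: w1 * a + w2 * b

p = [0.1, 0.3, 0.6]
q = [0.3, 0.3, 0.4]
print(cmbd(p, q, 0.5, harmonic, arithmetic))   # 0 iff p == q
\end{verbatim}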
There are many statistical distances available that prove useful depending on application context~\cite{Basseville-2013}.
Comparable means allow one to define yet another one as follows:
\begin{remark}
Note that comparable means~\cite{Cargo-1965} satisfying $M_f\leq M_g$ allow one to define a symmetric distance gap:
\begin{equation}
D_{f,g}(p(x),q(x)) = \int \left(M_g(p(x),q(x))-M_f(p(x),q(x))\right)\dnu(x)\geq 0\,.
\end{equation}
A necessary and sufficient condition on $f,g:I=[a,b]\rightarrow\bbR$ for $M_f\leq M_g$ is that $g\circ f^{-1}$ be convex on the interval $[f(a),f(b)]$, see~\cite{Cargo-1965}.
\end{remark}
\section{Conclusion and discussion}\label{sec:concl}
We defined generalized Jensen divergences (Definition~\ref{def:jccd}) and generalized Bregman divergences (Definition~\ref{def:bccd}) using the framework of comparative convexity based on abstract means.
In particular, we reported a closed-form formula for the generalized Bregman divergences (Theorem~\ref{theo:qabd}) when considering quasi-arithmetic means.
We proved that those generalized quasi-arithmetic Bregman divergences are proper divergences that can be interpreted as conformal ordinary Bregman divergences on an embedded representation of input space.
Those generalized Bregman divergences can be fruitfully used in machine learning: Not only can we learn~\cite{LearningDivergence-2015} the separable convex generators $F_i$ component-wise, but we can also learn the increasing functions $\rho_i$ and $\tau_i$ that induce the quasi-arithmetic means $M_{\rho_i}$ and $M_{\tau_i}$.
Finally, we introduced a generalization of the Bhattacharyya statistical distance (Definition~\ref{def:GB}) and of the Bhattacharyya coefficient for comparable means, and showed that, depending on the structure of the distributions, we may or may not obtain handy closed-form expressions.
In particular, the generalized Bhattacharyya distances yield homogeneous divergences when homogeneous comparative means are used.
Since minimizing skewed Bhattacharyya distances allows one to bound the probability of error, we may similarly define generalized Chernoff information~\cite{Chernoff-2013}, etc.
This work emphasizes that the theory of means is at the very heart of distances.
In general, a family of means can be investigated by studying comparison and equality, homogeneity, and exact characterization by functional equations or inequalities. There are many means~\cite{Bullen-2013} ({\it e.g.}, Heinz means~\cite{Heinz-2006}, Gini means, Lehmer means, counter-harmonic means, Whitely means, Muirhead means) to consider in order to build and study novel families of distances and statistical distances. To derive generalized Bregman divergences, we need to study the asymptotic expansion of barycentric means. The generalization of means to weighted means is an interesting topic in itself~\cite{Witkowski-2006}.
We may also consider asymmetric weighted means~\cite{Qi-2000} to define corresponding Bregman divergences.
Properties of a family of means may also be considered.
For example, the power means form a {\em scale}~\cite{Pasteczka-2015}:
That means that there exists a bijection between $\delta\in\bbR$ and $P_\delta(u,v)$ for $u\not =v$.
Informally speaking, the family of power means allows one to interpolate between the minimum and the maximum.
Although the power means are the only homogeneous quasi-arithmetic means that form a scale, there exist other families of quasi-arithmetic means
that form a scale~\cite{Pasteczka-2015}. Means can also be defined for various types of data like matrices~\cite{Petz-2005}.
As a final remark, let us notice that we have defined several divergences using (extrinsic) means.
But divergences $D(\cdot:\cdot)$ induce a geometry which can also be used to define (intrinsic) means $m$ (commonly called centers) by minimizing the loss function $\frac{1}{n} \sum_{i=1}^n D(p_i:m)$.
\section*{Acknowledgments}
The authors would like to thank Gautier Marti and Ga\"etan Hadjeres for reading a preliminary draft.
I give a further calculation of R\'enyi entropy as a partial extension of earlier work,
[\putref{dowgjmsren}], to which I have to refer for basic explanation and definitions. For
this reason I proceed somewhat rapidly.
The field under consideration is a $p$--form propagated by the de Rham Laplacian,
${\cal O}=-d\delta-\delta d$. I am not aware of any simple general higher derivative product
operator like the GJMS one that involves the de Rham Laplacian.\footnotecheck\defaultoption[]\@footnote{ W\"unsch,
[\putref{Wunsch}], gives a useful treatment of standard conformally invariant operators
referring to $\delta d$ as the Maxwell operator.} Specific cases do factorise (see
[\putref{BandT}] and references there) but on a rather {\it ad hoc} basis. Osborn and
Stegiou, [\putref{OandS}], exhibit a four--derivative conformal form theory.
The conically deformed manifold used to define the R\'enyi entropy is the
$d$--dimensional periodic spherical $q$--lune described in [\putref{dowgjmsren}].
One motivation for the calculation is that it can be carried out in fair generality and that it
reinforces some general points appearing in [\putref{dowgjmsren}]. It also lends concrete
credence to the suggestions of Donnelly, Michel and Wall, [\putref{DMW}], on the role of
edge modes in entanglement entropy. (See also, [\putref{Huang}]).
The hyperbolic cylinder technique was applied by Nian and Zhou, [\putref{NandZ}], in order
to determine $p$--form R\'enyi entropies. A general procedure was outlined and the case
of $p=2$ treated in detail. (See later in section 7.)
\section{\bf 2. R\'enyi entropy}
The R\'enyi entropy, $S_n$, is defined by,
$$
S_n={nW(1)-W(1/n)\over1-n}\,,
\eql{renyi}
$$
where $W(q)$ here is the effective action on the periodic $q$--lune. $n=1/q$ is the
R\'enyi, or replica, index. $S_1$ is the entanglement entropy and $S'_1$ determines the
central charge, $C_T$, [\putref{Perlmutter}].
In even dimensions, the universal component of $S_n$, denoted by $\mbox{{\goth\char83}}_q$, is obtained
by substituting the value $\zeta(0)$ for the effective action, $W$, in (\puteqn{renyi}). $\zeta(s)$
is the spectral $\zeta$--function\ of the propagating operator, ${\cal O}$, on the conically deformed
manifold. I will refer to $\zeta(0)$ as the (universal part of the) `free energy' and
sometimes, imprecisely, as the conformal anomaly, even for any $p$ and $d$. For
conformal gauge fields, $p=d/2-1$. Their quantisation is well known. Because of $p$--form
isomorphisms, it is sufficient to consider coexact forms and make the full reconstruction
later.
\section{\bf 3. The coexact $\zeta$--function\ at 0}
The $\zeta$--function\ is determined by the coexact spectrum of the de Rham Laplacian on the
$q$--lune, S$_q$. The eigenvalues are the same as on the full sphere and standard. They
are $\lambda(p,m,a)$ where
$$
\lambda(p,m,a)=(a+1+m)^2-\alpha^2(a,p),\quad m=0,1,2,\ldots,
\eql{eigs}
$$
setting $a=(d-1)/2$ and $\alpha(a,p)\equiv a-p$. Note that there is a zero mode at $m=0$
when $p=d$.
The degeneracies are not the same as on the round sphere, of course. They are given
below.
According to (\puteqn{eigs}) I can formally factorise the coexact de Rham Laplacian as
follows
$$
-d\delta-\delta d\equiv -\delta d=\big(\widehat B-\alpha\big)\big(\widehat B+\alpha\big)
$$
where
$$
\widehat B(a,p)=\sqrt{-\delta d+\alpha^2(a,p)}\,,
$$
has eigenvalues
$$
a+1+m\,,\quad m=0,1,\ldots\,,
$$
and $\zeta$--function,
$$
\zeta(s,a)=\sum_{m=0}^\infty{d(m)\over (a+1+m)^s}\,.
\eql{szeta}
$$
$d(m)$ is the level degeneracy, still to be determined.
This `simple' $\zeta$--function\ can be found in terms of the Barnes $\zeta$--function\ as I show below. This is
computationally advantageous because it can be shown, [\putref{Dowcmp}], that $\zeta(0)$
of $-\delta d$ ({\it i.e. } the universal part of the free energy) is the average of the $\zeta$--functions\ evaluated
at 0 of each linear factor, weitten,
$$
\mbox{{\goth\char70}}(p,d,q)={1\over2}\big(\zeta(0,a
+\alpha)+\zeta(0,a-\alpha)\big)\,.
\eql{pca}
$$
\section{\bf 4. The degeneracy }
To compute the coexact free energy one needs the degeneracy, $d(m)$. The solution to
the $p$--form spectral problem on regular sphere orbifolds, S$^d/\Gamma$, has been given in
[\putref{dowpform1,dowpform2}]. In the case when the deck group $\Gamma$ is the extended
dihedral action, the fundamental domain (the orbifold) is a single lune of angle $\pi/q$.
Geometrically, the periodic lune, referred to above, is obtained by combining such a single
lune with its contiguous reflection. The spectrum on the periodic lune is (or can be)
obtained by uniting the absolute (a) and relative (r) $p$--spectra on a single lune. I find
this convenient and adopt it here.
I should mention that the vector, $p=1$, spectrum and $\zeta(0)$ on the $d$--dimensional
$q$--sphere were determined directly, early on, by De Nardo, Fursaev and Miele, [\putref{NFM}].
The degeneracies are of course altered by the factoring. As often, it is best to organise
them into a generating function,\footnotecheck\defaultoption[]\@footnote{ A finite, `fermionic' Poincar\'e series on the
form orders, $p$, can also be introduced, but I will not use this here. $\sigma$ is often written
as $q$ but here this stands for the orbifold order.}
$$
d_b(p,\sigma)=\sum_{m=0}^\infty d_b(p,m)\,\sigma^m\,,
$$
where I have now explicitly indicated the dependence on the form order. The suffix $b$ is
the condition that the $p$--form satisfies on the boundary of the fundamental domain of
the action of $\Gamma$, {\it i.e. } either $b=a$ (absolute) or $b=r$ (relative). These are dual in the
sense that, for coexact forms,
$$
d_b(p,\sigma)=d_{*b}(d-1-p,\sigma)\,,\quad *a=r\,\,,**=id\,.
$$
Note that there are no coexact $d$--forms, $d_b(d,\sigma)=0$, although there is a zero mode.
Molien's theorem and invariant theory produce an expression for $d_b(p,\sigma)$ in terms of
the algebraic (integer) {\it degrees}, $\omega_i$, $i=1,\ldots,d$, which define the polytope
symmetry group, $\Gamma$. In the simple case of a dihedral action all the $\omega_i$ are unity
except for one, which equals $q$. (It is helpful to note that if $q=1$, the fundamental
domain is a hemisphere).
The corresponding coexact generating functions are functions of $q$ and are given,
[\putref{dowpform2}], by (in even dimensions),
$$
d_a(p,\sigma,q)={(-1)^{p+1}\over \sigma^{p+1}(1-\sigma)^{d-1}(1-\sigma^q)}\sum_{r=p+1}^d
(-1)^r\,e_r(\sigma^q,\sigma,\ldots,\sigma)\,,
\eql{deea}
$$
in terms of elementary symmetric functions, $e_r$, on $d$ arguments. Explicitly
$$
e_r(\sigma^q,\sigma,\ldots,\sigma)=\comb{d-1}r\,\sigma^r+\comb{d-1}{r-1}\,\sigma^{r-1+q}\,,
\eql{esf}
$$
which allows the simple $\zeta$--function, (\puteqn{szeta}), to be obtained. At this point, for the
expressions to make sense, $q$ has to be an integer. After explicit calculation it can be continued into the reals.
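For concreteness, here is a short sketch (not part of the calculation proper; the values of $d$, $p$ and $q$ are samples) that reads off the degeneracies $d_a(p,m)$ by expanding (\puteqn{deea}) with (\puteqn{esf}) in a computer algebra system:
\begin{verbatim}
from sympy import symbols, binomial, series, simplify

s = symbols('s')
d, p, q = 4, 1, 3        # sample dimension, form order and lune order

def e_r(r):
    # elementary symmetric function e_r(s**q, s, ..., s) of (esf)
    return (binomial(d - 1, r) * s**r
            + binomial(d - 1, r - 1) * s**(r - 1 + q))

gen = (-1)**(p + 1) * sum((-1)**r * e_r(r)
                          for r in range(p + 1, d + 1)) \
      / (s**(p + 1) * (1 - s)**(d - 1) * (1 - s**q))
print(series(simplify(gen), s, 0, 8))   # coefficients are d_a(p, m)
\end{verbatim}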
\section{\bf 5. The cylinder kernel and the $\zeta$--function\ by Mellin transform}
The form of the eigenvalues suggests the construction of the quantity,
$$
T_b(p,\sigma,q,a)=\sigma^{a+1}\,d_b(p,\sigma,q)\,,
\eql{ck}
$$
which can be interpreted as the (traced) cylinder kernel\footnotecheck\defaultoption[]\@footnote{ Other names are `wave
kernel', `Poisson kernel',`single particle partition function', depending on the
interpretation of the parameter, $\tau$. } for the pseudo operator $\widehat B$, on putting
$\sigma=e^{-\tau}$, where $\tau$ is a propagation `time'.
The simple coexact spectral $\zeta$--function, (\puteqn{szeta}), then follows immediately as the Mellin
transform\footnotecheck\defaultoption[]\@footnote{ This approach has been around for a considerable time. See
[\putref{ChandD}] where the inversion behaviour under $\sigma\to1/\sigma$ was also brought into
play.} (I give the absolute expression),
$$\eqalign{
\zeta_a(s,a,p,q)&=i{\Gamma(1-s)\over2\pi}\int_{C_0}d\tau\,(-\tau)^{s-1}\,
T_a(p,e^{-\tau},q,a)\cr
&=(-1)^{p+1}{i\Gamma(1-s)\over2\pi}\int_{C_0}\!\!d\tau\,(-\tau)^{s-1}
{e^{-(a-p)\,\tau}\,\over(1-e^{-\tau})^{d-1}(1-e^{-q\tau})}\times\cr
&\hspace{********}\sum_{r=p+1}^{d}
(-1)^r\bigg[ \comb{d-1}r\,e^{-r\tau}+\comb{d-1}{r-1}\,e^{-(r-1+q)\tau}\bigg]\cr
&=(-1)^{p+1}\!\!\sum_{r=p+1}^d(-1)^r\bigg[ \comb{d-1}r\zeta_{\cal B}(s,a-p+r\mid\bom)\cr
&\hspace{****************}+
\comb{d-1}{r-1}\,\zeta_{\cal B}(s,a-p+r+q-1\mid\bom)\bigg]\,.
}
\eql{zet4}
$$
The $\zeta_{\cal B}$ are Barnes $\zeta$--functions\ and the vector, $\bom$, stands for the $d$--dimensional
set $\bom=(q,1,\ldots,1)=(q,{\bf1}_{d-1})$.
Duality gives the relative $\zeta_r(s,a,p,q)=\zeta_a(s,a,d-1-p,q)$ which has to be added to
the absolute expression in order to get the coexact $p$--form value on the {\it periodic}
(double) lune.
This $\zeta$--function\ is now substituted into (\puteqn{pca}) to give the required (coexact) free energy.
One sees that the four arguments of the pair of Barnes functions in (\puteqn{zet4}) have the
values
$
( \alpha+r\pm\alpha\,,\,\alpha+r+q-1\pm\alpha)\,.
$
The Barnes function at $s=0$ is a generalised Bernoulli polynomial and the square bracket
in (\puteqn{zet4}) equals,
$$\eqalign{
&{1\over d!q}\bigg[ \comb{d-1}rB^{(d)}_d(\alpha+r\pm\alpha\mid\bom)+
\comb{d-1}{r-1}\,B^{(d)}_d(\alpha+r+q-1\pm\alpha\mid\bom)\bigg]\,.\cr
}
\eql{sqa}
$$
The $q$-dependence can be simplified by using a symmetry property of the Bernoulli
functions. This produces for (\puteqn{sqa}),
$$\eqalign{
&{1\over d!q}\bigg[ \comb{d-1}rB^{(d)}_d(\alpha+r\pm\alpha\mid{\bom})+
(-1)^d\comb{d-1}{r-1}\,B^{(d)}_d(d-\alpha-r\mp\alpha\mid{\bom})\bigg]\,.\cr
}
\eql{sqa2}
$$
Written out, this equals
$$\eqalign{
&{1\over d!q}\bigg[ \comb{d-1}rB^{(d)}_d(2\alpha+r\mid{\bom})+
(-1)^d\comb{d-1}{r-1}\,B^{(d)}_d(d-2\alpha-r\mid{\bom})\cr
&+\comb{d-1}rB^{(d)}_d(r\mid{\bom})+
(-1)^d\comb{d-1}{r-1}\,B^{(d)}_d(d-r\mid{\bom})\bigg]\,.\cr
}
\eql{sqa3}
$$
For given form order, $p$, and dimension, $d$, all quantities can be evaluated easily by
machine and lead to an expression for the (absolute) coexact free energy, $\mbox{{\goth\char70}}_a(p,d,q)$,
as a rational function of the lune parameter, $q$, which can now be taken as a real
number.
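To indicate what this machine evaluation looks like, the following sketch (a sample computation, not the full assembly) obtains the N\"orlund polynomial $B^{(d)}_d(x\mid\bom)$ appearing in (\puteqn{sqa3}) from the standard generating function $e^{xt}\prod_i \omega_i t/(e^{\omega_i t}-1)$, keeping $q$ symbolic; the choice $d=4$ is illustrative only:
\begin{verbatim}
from sympy import symbols, exp, series, factorial, simplify

t, x, q = symbols('t x q')

def noerlund_B(n, arg, omega):
    # generating function: prod_i w_i*t/(exp(w_i*t)-1) * exp(arg*t)
    gf = exp(arg * t)
    for w in omega:
        gf *= w * t / (exp(w * t) - 1)
    ser = series(gf, t, 0, n + 1).removeO()
    return simplify(ser.coeff(t, n) * factorial(n))

d = 4
bom = [q] + [1] * (d - 1)        # bom = (q, 1, ..., 1)
print(noerlund_B(d, x, bom))     # B^{(4)}_4(x | q, 1, 1, 1)
\end{verbatim}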
\section{ \bf 6. The complete field theory $\zeta(0)$. Free energy and R\'enyi entropy}
Having the single (coexact) $p$--form quantity, it is necessary next to assemble the
ghosts--for--ghosts sum to get the complete $p$--form free energy, ${\cal F}(p,d,q)$. This is
a standard construction ({\it e.g.}\ [\putref{CandA}]) and yields, in particular, on the single lune,
$$
{\cal F}_b(p,d,q)=\sum_{l=0}^p(-1)^{p+l}\mbox{{\goth\char70}}_b(l,d,q)+(-1)^p\,(p+1)\delta_{br}\,.
\eql{gfg}
$$
The last term is a zero mode effect which exists only for relative conditions. The end
result is again a rational function of $q$ for given $p$ and $d$. I give a few absolute
examples,
$$\eqalign{
{\cal F}_a(1,4,q)&=-\frac{{q}^{4}+30\,{q}^{2}-660\,q+33}{360\,q}\cr
{\cal F}_a(2,4,q)&=-\frac{{q}^{4}-60\,{q}^{2}+1440\,q-57}{720\,q}\cr
{\cal F}_a(3,6,q)&=\frac{2\,{q}^{6}
-35\,{q}^{4}-1260\,{q}^{2}+42924\,q-1355}{15120\,q}\,.\cr
}
$$
The relative expressions can be most easily found from the difference,
$$
{\cal F}_a(p,d,q)-{\cal F}_r(p,d,q)=(-1)^p2(p+1)\,,
$$
or from duality.
As a final step, adding the absolute and relative quantities gives those on the {\it
periodic} lune (the $q$--deformed sphere). A few examples are,
$$\eqalign{
{\cal F}(0,2,q)&={q\over6}+{1\over6q}\cr
{\cal F}(1,4,q)&=-{q(q^2+30)\over180}-{1\over3}-{11\over60q}\cr
{\cal F}(2,4,q)&=-{q(q^2-60)\over360}+2+{57\over360q}\cr
{\cal F}(1,6,q)&=\frac{q\,(2{q}^{4}-35\,{q}^{2}-1260)}{7560}-{29\over90}
-{271\over1512q}\cr
{\cal F}(2,6,q)&=\frac{q(2\,{q}^{4}+35\,{q}^{2}+840)}{5040}+{31\over45}+{191\over1008q}\cr
{\cal F}(3,6,q)&=\frac{q(2\,{q}^{4}-35\,{q}^{2}-1260)}{7560}-{209\over90}-{271\over1512q}\cr
{\cal F}(3,8,q)&=-\frac{q(3\,{q}^{6}+56\,{q}^{4}+686\,{q}^{2}
+15120)}{90720}-{221\over210}-{2497\over12960q}\,.\cr
}
\eql{freen}
$$
As check, evaluation at $q=1$, the round sphere, provides agreement with the values
obtained by Raj, [\putref{Raj}].\footnotecheck\defaultoption[]\@footnote{ When calculating the heat--kernel coefficient,
relevant for any conformal anomaly, one should either use the $\zeta$--functions\ with zero modes
included, or add these in separately. This point arises for the $p=d$ form for which there
is a coexact zero mode. Excluding this, the coexact $d$--form contribution vanishes.}
Samples of the resulting R\'enyi entropies are,
$$\eqalign{
\mbox{{\goth\char83}}_q(1,4)&={(q+1)(q^2+31)\over180}+{1\over3}\cr
\mbox{{\goth\char83}}_q(2,4)&={(q+1)(q^2-59)\over360}-2\cr
\mbox{{\goth\char83}}_q(2,6)&=-{(q+1)(2q^4+37q^2+877)\over5040}-{31\over45}\cr
\mbox{{\goth\char83}}_q(3,6)&=-{(q+1)(2q^4-33q^2-1293)\over7560}+{209\over90}\cr
\mbox{{\goth\char83}}_q(3,8)&=\frac{(q+1)(3q^6+59q^4+745q^2+15865)}{90720}+{221\over210}\,.\cr
}
\eql{renent}
$$
I have arranged the expressions in a way helpful for the remarks in the next section.
The significance of the non--conformal entropy is not clear to me.
Evaluation at $q=1$ gives the entanglement entropy and one finds
$\mbox{{\goth\char83}}_1(p,d)=-{\cal F}(p,d,1)$ essentially as a consequence of the fact that the free energy is
an extremum at the round sphere, $q=1$.\footnotecheck\defaultoption[]\@footnote{ Unfortunately, I have not been able
to prove this in general, but only case by case, the derivatives all having a factor of
$(q^2-1)$, even for any $p$ and $d$.} In the conformal case, this says that the
entanglement entropy is minus the conformal anomaly (in my sign conventions) on the
round sphere. For example, for one--forms in four--space this equals $31/45$, the
standard value. The higher--dimensional values were calculated by Capelli and D'Apollonio,
[\putref{CandA}], long ago using the same spectral data as here but organised differently.
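These statements are easily verified by machine. For instance, the following sketch substitutes ${\cal F}(1,4,q)$ from (\puteqn{freen}) into (\puteqn{renyi}) (with $W$ replaced by $\zeta(0)$ and $n=1/q$) and recovers $\mbox{{\goth\char83}}_q(1,4)$ of (\puteqn{renent}) together with the entanglement entropy $31/45$:
\begin{verbatim}
from sympy import symbols, Rational, simplify, limit

q = symbols('q', positive=True)
F = -q*(q**2 + 30)/180 - Rational(1, 3) - Rational(11, 60)/q  # F(1,4,q)
n = 1/q                                                       # replica index
S = (n*F.subs(q, 1) - F)/(1 - n)                              # eq. (renyi)

S_quoted = (q + 1)*(q**2 + 31)/180 + Rational(1, 3)           # (renent)
print(simplify(S - S_quoted))   # -> 0
print(limit(S, q, 1))           # -> 31/45, equal to -F(1,4,1)
\end{verbatim}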
\section{\bf 7. The shift and edge modes. Comparison with hyperbolic}
In [\putref{BandT}], [\putref{NandZ}] and elsewhere, in order to regain the standard
conformal anomaly on the round sphere, a `shift' was made to the value obtained from the
R\'enyi entropy found there on a hyperbolic cylinder approach.
Here, this is not necessary and I have written the obtained entropies in (\puteqn{renent}) so
as to make comparisons easier. The first part on the right--hand side is the hyperbolic
result, agreeing with the expressions in [\putref{NandZ}] for $p=1,2$. The second part is
the shift, applied in [\putref{NandZ}] to get the `correct' entropy. See also
[\putref{Huang,BandT}] for $p=1$. The expression for the gauge boson ($p=1$) free energy
is given by Fursaev, [\putref{Fursaev}].
The numerics\footnotecheck\defaultoption[]\@footnote{ An analytical proof may be provided later.} thus reveal that the
constant term in the conformal $p$--form R\'enyi entropy is just minus the entanglement
entropy of a conformal $p-1$ form. These are the shifts that would have to be applied
when computing the entanglement entropy for a conformal gauge theory by any hyperbolic
method. It confirms the expectation that the shift is an entangling surface, ghost edge
mode effect, in particular the specific $p$--form gauge theory suggestion by Donnelly,
Michel and Wall, [\putref{DMW}]. The $p=1$ case was earlier considered by Huang,
[\putref{Huang}].
I note that the conformal R\'enyi entropy evaluated at $q=-1$ equals the constant part (the
shift) of the entropy itself. This is expressed as,
$$
\mbox{{\goth\char83}}_{-1}(p,2p+2)=-\mbox{{\goth\char83}}_1(p-1,2p)\,,
$$
which is in accord with the hyperbolic expressions for the entropy as they vanish when
$q=-1$.
A comparison of the free energies here and those in [\putref{NandZ}] and [\putref{BandT}]
shows that they differ by the shift constant, and also by a term proportional to $1/q$ which
numerically equals minus twice the (Maxwell) Casimir energy on R$\times$S$^{d-1}$,
[\putref{GKT}], [\putref{dowqretspin}], as was the case also in
[\putref{dowgjmsren}].\footnotecheck\defaultoption[]\@footnote{This can probably be proved analytically but the calculation
is complicated by the ghost sum. This actually collapses, up to zero modes, on the
Einstein cylinder, [\putref{dowzero}]. A similar collapse should occur on the $q$--sphere as
$q\to0$.} The hyperbolic expressions are given as the first terms on the right--hand side
of (\puteqn{freen}).
\section{\bf 8. Derivatives and central charge}
Beccaria and Tseytlin, [\putref{BandT}], employ Perlmutter's relation, [\putref{Perlmutter}],
to find the central charge, $C_T$, from the R\'enyi entropy. I followed the same route, in a
different, compact geometry, in [\putref{dowgjmsren}] for scalars and spinors and now
extend these to standard gauge $p$--forms ({\it cf }\ Nian and Zhou, [\putref{NandZ}]).
I can, at the moment, proceed only dimension by dimension and form by form. Also, to
give the central charge meaning, I have to limit myself to the conformal results,
($p=d/2-1$). Then $C_T(d)$ computes to $[2,16,108,640,3500,\ldots]$ for
$d=2,4,\ldots,10,\ldots$
In fact I have added nothing numerically to the general formula of Buchel {\it et al},
[\putref{BEMPSS}],
$$
C_T(d)=\frac{{d}^{2}\,\left( d-2\right) !}{2(\left( {d}/{2}
-1\right) !)^2}
\eql{BEM}
$$
derived, after lengthy calculation, directly from the two point function of the
energy--momentum tensor in flat space. I have thus tested their formula by a quite
different, spectral method.
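For reference, reproducing the quoted values from (\puteqn{BEM}) is a one--line machine computation (a sketch):
\begin{verbatim}
from math import factorial

for d in range(2, 12, 2):
    C_T = d**2 * factorial(d - 2) // (2 * factorial(d//2 - 1)**2)
    print(d, C_T)            # 2, 16, 108, 640, 3500
\end{verbatim}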
\section{\bf9. Comments}
In [\putref{dowgjmsren}] I derived general formulae for $C_T$ like (\puteqn{BEM}). Without
further serious simplification of the expressions used here, it is difficult to see how one
might obtain this simple result.
Although the organisation of the spectral data by using Barnes $\zeta$--functions\ is very efficient at
producing the answers, it generally gives no indication of any underlying reason for a
particular result which is often a consequence of machine evaluation.
I suggest that a GJMS--type product for the Maxwell operator, $-\delta d$, exists.
A more systematic hyperbolic calculation should be undertaken for higher $p$--forms in
order to confirm the nature of the shifts.
The $q\to0$ limit should be carefully investigated.
\vskip15truept
\noindent{\bf References.} \vskip5truept
\begin{putreferences}
\reference{BandT}{Beccaria,M. and Tseytlin,A.A. {\it $C_T$ for higher derivative conformal
fields and anomalies of (1,0) superconformal 6d theories}, ArXiv:1705.00305.}
\reference{NandZ}{Nian,J. and Zhou,Y. {\it R\'enyi entropy of free (2,0) tensor multiplet
and its supersymmetric counterpart}, \prD{93}{2016}{125010}, ArXiv:1511.00313}
\reference{Huang}{Huang,K--W., {\it Central Charge and Entangled Gauge Fields},
\prD {92}{2015}{025010}, ArXiv:1412.2730.}
\reference{NFM}{De Nardo,L., Fursaev,D.V. and Miele,G. \cqg{14}{1997}{1059}, ArXiv:
hep-th/9610011.}
\reference{Fursaev}{Fursaev,D.V.,{\it Entanglement R\'enyi Entropies in Conformal
Field Theories and Holography},{\it JHEP} 0609:018,2006. ArXiv:1201.1702.}
\reference{Raj}{Raj,H. {\it A note on sphere free energy of $p$--form gauge theory and
Hodge duality} ArXiv:1611.02507.}
\reference{DMW}{Donnelly.W, Michel,B. and Wall,A.C.{\it Electromagnetic
duality and entanglement entropy}, ArXiv:1611.0592.}
\reference{GKT}{Giombi,S. Klebanov,I.R. and Tan, Z.M. {\it The ABC of Higher--Spin AdS/CFT}.
ArXiv:1608.07611.}
\reference{dowzero}{Dowker,J.S. {\it Zero modes, entropy bounds and partition functions},
\cqg{20}{2003}{L105}, ArXiv:hep-th/0203026.}
\reference{dowqretspin}{Dowker,J.S. {\it Revivals and Casimir energy for a free Maxwell field
(spin-1 singleton) on $R\times S^d$ for odd $d$}, ArXiv:1605.01633.}
\reference{Wunsch}{W\"unsch,V. {\it On Conformally Invariant Differential Operators}, {\it Math.
Nachr.} {\bf 129} (1989) 269.}
\reference{BEMPSS}{Buchel,A.,Escobedo,J.,Myers,R.C.,Paulos,M.F.,Sinha,A. and Smolkin,M.
{\it Holographic GB gravity in arbitrary dimensions}, {\it JHEP} 1003:111,2010,
ArXiv: 0911.4257.}
\reference{dowpform1}{Dowker,J.S. {\it $p$--forms on$d$--spherical tessellations}, {\it J. Geom. and
Phys.} ({\bf 57}) (2007) 1505, ArXiv:math/0601334.}
\reference{dowpform2}{Dowker,J.S. {\it $p$--form spectra and Casimir energies on spherical
tessellations}, \cqg{23}{2006}{1}, ArXiv:hep-th/0510248.}
\reference{dowgjmsren}{Dowker,J.S. {\it R\'enyi entropy and $C_T$ for higher derivative
scalars and spinors on even spheres},ArXiv:1706.01369.}
\reference{GPW}{Guerrieri, A.L., Petkou, A. C. and Wen, C. {\it The free $\sigma$CFTs},
ArXiv:1604.07310.}
\reference{GGPW}{Gliozzi,F., Guerrieri, A.L., Petkou, A.C. and Wen,C.
{\it The analytic structure of conformal blocks and the
generalized Wilson--Fisher fixed points}, {\it JHEP }1704 (2017) 056, ArXiv:1702.03938.}
\reference{YandZ}{Yankielowicz, S. and Zhou,Y. {\it Supersymmetric R\'enyi Entropy and
Anomalies in Six--Dimensional (1,0) Superconformal Theories}, ArXiv:1702.03518.}
\reference{OandS}{Osborn.H. and Stegiou,A. {\it $C_T$ for Non--unitary CFTs in higher dimensions},
{\it JHEP} {\bf06} (2016) 079, ArXiv:1603.07307.}
\reference{Perlmutter}{Perlmutter,E. {\it A universal feature of CFT R\'enyi entropy}
{\it JHEP} {\bf03} (2014) 117. ArXiv:1308.1083.}
\reference{Norlund}{N\"orlund,N.E. {\it M\'emoire sur les polynomes de Bernoulli}, \am{43}{1922}{121}.}
\reference{Dowpiston}{Dowker,J.S. {\it Spherical Casimir pistons}, \cqg{28}{2011}{155018},
ArXiv:1102.1946.}
\reference{Dowchem}{Dowker,J.S. {\it Charged R\'enyi entropy for free scalar fields}, \jpa{50}
{2017}{165401}, ArXiv:1512.01135.}
\reference{Dowconfspins}{Dowker,J.S. {\it Effective action of conformal spins on spheres
with multiplicative and conformal anomalies}, \jpa{48}{2015}{225402}, ArXiv:1501.04881.}
\reference{Dowhyp}{Dowker,J.S. {\it Hyperspherical entanglement entropy},
\jpa{43}{2010}{445402}, ArXiv:1007.3865.}
\reference{dowrenexp}{Dowker,J.S.{\it Expansion of R\'enyi entropy for free scalar fields},
ArXiv:1412.0549.}
\reference{CaandH}{Casini,H. and Huerta,M. {\it Entanglement entropy for the $n$-sphere},
\plb{694}{2010}{167}.}
\reference{Apps}{Apps,J.S. {\it The effective action on a curved space and its conformal
properties} PhD thesis (University of Manchester, 1996).}
\reference{Dowcen}{Dowker,J.S., {\it Central differences, Euler numbers and symbolic methods},
\break ArXiv:1305.0500.}
\reference{KPSS}{Klebanov,I.R., Pufu,S.S., Sachdev,S. and Safdi,B.R.
{\it JHEP} 1204 (2012) 074.}
\reference{moller}{M{\o}ller,N.M. \ma {343}{2009}{35}.}
\reference{BandO}{Branson,T., and Oersted,B \jgp {56}{2006}{2261}.}
\reference{BaandS}{B\"ar,C. and Schopka,S. {\it The Dirac determinant of spherical
space forms},\break {\it Geom.Anal. and Nonlinear PDEs} (Springer, Berlin, 2003).}
\reference{EMOT2}{Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G. {
\it Higher Transcendental Functions} Vol.2 (McGraw-Hill, N.Y. 1953).}
\reference{Graham}{Graham,C.R. SIGMA {\bf 3} (2007) 121.}
\reference{Morpurgo}{Morpurgo,C. \dmj{114}{2002}{477}.}
\reference{DandP2}{Dowker,J.S. and Pettengill,D.F. \jpa{7}{1974}{1527}}
\reference{Diaz}{Diaz,D.E. {\it Polyakov formulas for GJMS operators from AdS/CFT},
{\it JHEP} {\bf 0807} (2008) 103.}
\reference{DandD}{Diaz,D.E. and Dorn,H. {\it Partition functions and double trace
deformations in AdS/CFT}, {\it JHEP} {\bf 0705} (2007) 46.}
\reference{AaandD}{Aros,R. and Diaz,D.E. {\it Determinant and Weyl anomaly of
Dirac operator: a holographic derivation}, ArXiv:1111.1463.}
\reference{CandA}{Cappelli,A. and D'Appollonio, G. \pl{487B}{2000}{87}.}
\reference{CandT2}{Copeland,E. and Toms,D.J. \cqg {3}{1986}{431}.}
\reference{Allais}{Allais, A. {\it JHEP} {\bf 1011} (2010) 040.}
\reference{Tseytlin}{Tseytlin,A.A. {\it On Partition function and Weyl anomaly of
conformal higher spin fields} ArXiv:1309.0785.}
\reference{KPS2}{Klebanov,I.R., Pufu,S.S. and Safdi,B.R. {\it JHEP} {\bf 1110} (2011) 038.}
\reference{CaandWe}{Candelas,P. and Weinberg,S. \np{237}{1984}{397}.}
\reference{ChandD}{Chang,P. and Dowker,J.S. {\it Vacuum energy on orbifold
factors of spheres}, \np{395}{1993}{407}, ArXiv:hep-th/9210013.}
\reference{Steffensen}{Steffensen,J.F. {\it Interpolation}, (Williams and Wilkins,
Baltimore, 1927).}
\reference{Barnesa}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.} {\bf 19} (1903) 374.}
\reference{DowGJMS}{Dowker,J.S. {\it Determinants and conformal anomalies of
GJMS operators on spheres}, \jpa{44}{2011}{115402}.}
\reference{Dowren}{Dowker,J.S. {\it R\'enyi entropy on spheres}, \jpamt {46}{2013}{2254}.}
\reference{MandD}{Mansour,T. and Dowker,J.S. {\it Evaluation of spherical GJMS determinants},
2014, Submitted for publication.}
\reference{GandK}{Gubser,S.S and Klebanov,I.R. \np{656}{2003}{23}.}
\reference{Dow30}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{Dowcmp}{Dowker,J.S. {\it Effective action on spherical domains},
\cmp{162}{1994}{633}, ArXiv:hep-th/9306154.}
\reference{DowGJMSE}{Dowker,J.S. {\it Numerical evaluation of spherical GJMS operators
for even dimensions} ArXiv:1310.0759.}
\reference{Tseytlin2}{Tseytlin,A.A. \np{877}{2013}{632}.}
\reference{Tseytlin}{Tseytlin,A.A. \np{877}{2013}{598}.}
\reference{Dowma}{Dowker,J.S. {\it Calculation of the multiplicative anomaly} ArXiv: 1412.0549.}
\reference{CandH}{Camporesi,R. and Higuchi,A. {\it J.Geom. and Physics}
{\bf 15} (1994) 57.}
\reference{Allen}{Allen,B. \np{226}{1983}{228}.}
\reference{Dowdgjms}{Dowker,J.S. \jpamt{48}{2015}{125401}.}
\reference{Dowsphgjms}{Dowker,J.S. {\it Numerical evaluation of spherical GJMS determinants
for even dimensions}, ArXiv:1310.0759.}
\end{putreferences}
\bye
This paper deals with digital sequences modulo $m$.
Such sequences are ``simple'' in the sense that they are deterministic and uniformly recurrent sequences.
We show that the situation changes completely when we consider the subsequence along squares, i.e., we show that this subsequence is normal.\\
Thus, we describe a new class of normal numbers that can be efficiently generated, i.e., the first $n$ digits of the normal number can be generated
by using $O(n \log(n))$ elementary operations.
In this paper we let $\mathbb{N}$ denote the set of positive integers and we let $\mathbb{P}$ denote the set of prime numbers.
We let $\mathbb{U}$ denote the set of complex numbers of modulus $1$ and we use
the abbreviation $\e(x) = \exp(2\pi i x)$ for any real number $x$.\\
For two functions $f$ and $g$ that take only strictly positive real values we write $f = O(g)$ or $f\ll g$ if $f/g$ is bounded.\\
We let $\floor{x}$ denote the floor function and $\{x\}$ denote the fractional part of $x$.
Furthermore, we let $\chi_{\alpha}(x)$ denote the indicator function for $\{x\}$ in $[0,\alpha)$.\\
Moreover we let $\tau(n)$ denote the number of divisors of $n$, $\omega(n)$ denote the number of distinct prime factors of $n$ and
$\varphi(n)$ denote the number of positive integers smaller than $n$ that are co-prime to $n$.\\
Furthermore, let $\varepsilon_j^{(q)}(n) \in \{0,\ldots,q-1\}$ denote the $j$-th digit in the base $q$ expansion of a non-negative integer $n$, i.e.,
$n = \sum_{j=0}^{r} \varepsilon_j^{(q)}(n) q^j$, where $r = \floor{\log_q(n)}$.
We usually omit the superscript, as we work with arbitrary but fixed base $q \geq 2$.
\subsection{Digital Sequences}
The main topic of this paper are digital sequences modulo $m'$.
We use a slightly different definition of digital functions than the one found in~\cite{AlloucheShallit}.
\begin{definition}
We call a function $b: \mathbb{N} \to \mathbb{N}$ a \emph{strongly block-additive $q$-ary function} or \emph{digital function} if there exist
$m\in \mathbb{N}_{>0}$ and $F: \{0,\ldots,q-1\}^{m} \to \mathbb{N}$ such that $F(0,\ldots,0) = 0$ and
\begin{align*}
b(n) = \sum_{j\in\mathbb{Z}} F(\varepsilon_{j+m-1}^{(q)}(n),\ldots,\varepsilon_{j}^{(q)}(n)),
\end{align*}
where we define $\varepsilon_{-j}(n) = 0$ for all $j\geq1$.
\end{definition}
The difference from the usual definition is the range of the sum ($\mathbb{N}_0$ versus $\mathbb{Z}$), which does not matter for any of the examples appearing here.
\begin{remark}
The name strongly block-additive $q$-ary function was inspired by (strongly) $q$-additive functions.
Bellman and Shapiro~\cite{bellmanShapiro} and Gelfond~\cite{gelfond1968} called a function $f$ $q$-additive if
\begin{align*}
f(a q^r + b) = f(a q^r) + f(b)
\end{align*}
holds for all $r \geq 1$, $1\leq a < q$ and $0\leq b < q^r$.
Mend\`es France~\cite{france} called a function $f$ strongly $q$-additive if
\begin{align*}
f(a q^r + b) = f(a) + f(b)
\end{align*}
holds for all $r \geq 1$, $1 \leq a < q$ and $0\leq b < q^r$.
Thus, we can write for a strongly $q$-additive function $f$,
\begin{align*}
f(n) = \sum_{j\in \mathbb{Z}} f(\varepsilon_j^{(q)}(n)).
\end{align*}
\end{remark}
A quite prominent example of a strongly block-additive function is the sum of digits function $s_q(n)$ in base $q$.
This is a strongly block-additive function with $m=1$ and $F(x) = x$.
In particular, $(s_2(n) \bmod 2)_{n\in\mathbb{N}}$ gives the well-known Thue--Morse sequence.\\
Another prominent example is the Rudin-Shapiro sequence $\mathbf{r} = (r_n)_{n\geq 0}$ which is given by the parity of the blocks of the form ``$11$'' in the digital expansion in base $2$.
Let $b$ be the digital function corresponding to
$q=2$, $m=2$ and $F(x,y) = x\cdot y$; then we find $r_n = (b(n) \bmod 2)$.
This can be generalized to functions that are given by the parity of blocks of the form ``$111\ldots11$'' of a fixed length;
these functions have, for example, been mentioned and studied in~\cite{mauduit_rivat_rs}, and a sketch follows below.
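To illustrate the definition, the following minimal sketch (our own; the helper names are ad hoc) evaluates a digital function by sliding a window of $m$ digits over the base-$q$ expansion; with $q=2$, $m=2$ and $F(x,y)=x\cdot y$ it returns the Rudin-Shapiro sequence:
\begin{verbatim}
def digits(n, q):
    ds = []
    while n:
        n, r = divmod(n, q)
        ds.append(r)
    return ds                     # least significant digit first

def b(n, q=2, m=2, F=lambda w: w[0] * w[1]):
    ds = digits(n, q) + [0] * (m - 1)      # pad, as in the definition
    return sum(F(ds[j:j + m]) for j in range(len(ds) - m + 1))

print([b(n) % 2 for n in range(16)])
# Rudin-Shapiro: [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1]
\end{verbatim}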
Digital sequences are regular sequences (see for example~\cite{Cateland}).
Consequently we find that digital sequences modulo $m'$ are automatic sequences (see~\cite[Corollary 16.1.6]{AlloucheShallit})
which implies some interesting properties.
For a detailed treatment of automatic sequences see~\cite{AlloucheShallit}.
We define the subword complexity of a sequence $\mathbf{a}$, that takes only finitely many different values, as
\begin{align*}
p_{\mathbf{a}}(n) = \# \{(a_i,\ldots,a_{i+n-1}): i\geq0\}.
\end{align*}
It is well known that the subword complexity of automatic sequences is sub-linear (see \cite[Corollary 10.3.2]{AlloucheShallit}), i.e. for every automatic sequence $\mathbf{a}$ we have
\begin{align*}
p_{\mathbf{a}}(n) = O(n).
\end{align*}
For a random sequence $\mathbf{u} \in \{0,1\}^{\mathbb{N}}$ one finds that $p_{\mathbf{u}}(n) = 2^{n}$ with probability one.
Thus, automatic sequences are far from being random.
\subsection{Main Result}
It is well known that these properties are preserved when considering arithmetic subsequences of automatic sequences and, therefore, digital sequences modulo $m'$.
However, the situation changes completely when one considers the subsequence along squares.
\begin{definition}
A sequence $\mathbf{u} \in \{0,\ldots,m'-1\}^{\mathbb{N}}$ is normal if, for any $k \in \mathbb{N}$ and any $(c_0,\ldots,c_{k-1}) \in \{0,\ldots,m'-1\}^k$, we have
\begin{align*}
\lim_{N\to \infty} \frac{1}{N} \#\{i<N: u(i) = c_0,\ldots,u(i+k-1) = c_{k-1}\} = (m')^{-k}.
\end{align*}
\end{definition}
Drmota, Mauduit and Rivat gave a first example of this phenomenon~\cite{drmotaMauduitRivat2014}.
They considered the classical Thue--Morse sequence $(t_n)_{n\geq0}$ and showed not only that $p_{(t_{n^2})_{n\geq0}}(k) = 2^k$,
but that $(t_{n^2})_{n\geq 0}$ is in fact normal.
The fact that $p_{(t_{n^2})_{n\geq 0}}(k) = 2^k$ had already been proven by Moshe~\cite{moshe}, who was able to give exponentially growing lower bounds for extractions of the
Thue--Morse sequence along polynomials of degree at least $2$.
In this paper we go one step further than Drmota, Mauduit and Rivat and show a similar result for general digital sequences.
\begin{theorem}\label{maintheorem}
Let $b$ be a digital function and $m'\in \mathbb{N}$ with $\gcd(q-1,m') = 1$ and\\
$\gcd(m',\gcd(\{b(n): n\in \mathbb{N}\})) = 1$.
Then $(b(n^2)\bmod m')_{n\in \mathbb{N}}$ is normal.
\end{theorem}
There are only a few known explicit constructions of normal numbers in a given base (see for example \cite[Chapters 4 and 5]{normal}).
This result provides us with a whole class of normal sequences for any given base that can be generated efficiently,
i.e., it takes $O(n \log n)$ elementary operations to produce the first $n$ elements.\\
The easiest construction for normal sequences is the Champernowne construction that is given by concatenating the base $b$ expansion of successive integers.
This gives for example for base $10$: $123456789101112131415\ldots$.
Using the first $n'$ integers takes $O(n' \log(n'))$ elementary operations and gives a sequence of length $\Theta(n' \log(n'))$.\\
Scheerer~\cite{scheerer_normal} analyzed the runtime of some algorithms that produce absolutely normal numbers,
i.e., real numbers in $[0,1]$ whose expansion in base $b$ is normal for every base $b$.
Algorithms by Sierpinski~\cite{Sierpinski_normal} and Turing~\cite{Turing_normal} use double exponentially many operations
and algorithms by Levin~\cite{Levin_normal} and Schmidt~\cite{Schmidt_normal} use exponentially many operations.
Moreover, Becher, Heiber and Slaman~\cite{normal_polynomial} gave an algorithm that takes just above $n^2$ operations to produce the first $n$ digits.
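To illustrate both the efficiency claim and Theorem~\ref{maintheorem}, the following sketch (ours; the parameters $N$ and $k$ are arbitrary) generates the Thue--Morse sequence along squares, i.e., the case $q=2$, $b=s_2$, $m'=2$, and counts the empirical $k$-block frequencies, which approach $2^{-k}$:
\begin{verbatim}
from collections import Counter

N, k = 1 << 18, 3
t = [bin(n * n).count('1') & 1 for n in range(N + k)]   # t_{n^2}
blocks = Counter(tuple(t[i:i + k]) for i in range(N))
for block, c in sorted(blocks.items()):
    print(block, c / N)       # each frequency is close to 2**-k = 1/8
\end{verbatim}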
Digital sequences modulo $m'$ have interesting (dynamical) properties.
Firstly, they are primitive and, therefore, uniformly recurrent (\cite[Theorem 10.9.5]{AlloucheShallit}), i.e., every block that occurs in the sequence at least once, occurs infinitely often with bounded gaps.
There is a natural way to associate a dynamical system - the symbolic dynamical system - to a sequence that takes only finitely many values.
\begin{definition}
The symbolic dynamical system associated to a sequence $\mathbf{u} \in \{0,\ldots,m'-1\}^{\mathbb{N}}$ is the system $(X(\mathbf{u}),T)$,
where $T$ is the shift on $\{0,\ldots,m'-1\}^{\mathbb{N}}$ and $X(\mathbf{u})$ the closure
of the orbit of $\mathbf{u}$ under the action of $T$ for the product topology of $\{0,\ldots, m'-1\}^{\mathbb{N}}$.
\end{definition}
Some of the mentioned properties of automatic sequences also imply important properties for the associated symbolic dynamical system.
The fact that every digital sequence modulo $m'$, denoted by $\mathbf{u}$, is uniformly recurrent implies that the associated symbolic dynamical system is minimal;
i.e., the only closed $T$ invariant sets in $X(\mathbf{u})$ are $\emptyset$ and $X(\mathbf{u})$ - see for example~\cite{subst} or \cite{queffelec}.
Furthermore, the entropy of the symbolic dynamical system associated to a sequence $\mathbf{u}$, that takes only finitely many values, is equal to
\begin{align*}
\lim_{n\to\infty} \frac{\log(p_{\mathbf{u}}(n))}{n},
\end{align*}
(see for example~\cite{dynamics} or \cite{seq_complex}).
Consequently, we know that the entropy of the symbolic dynamical system associated to a digital sequence modulo $m'$ equals $0$,
and, therefore, the dynamical system is deterministic.
\subsection{Outline of the proof}
In order to prove our main result, we will work with exponential sums.
We present here the main theorem on exponential sums
and further show its connection to Theorem \ref{maintheorem}.
\begin{theorem}\label{Thexponentialsums}
For any integer $k\ge 1$ and $(\alpha_0,\ldots, \alpha_{k-1}) \in \{\frac0{m'},\ldots,\frac{m'-1}{m'}\}^k$
such that $(\alpha_0,\ldots,\alpha_{k-1}) \ne (0,\ldots, 0)$, there exists $\eta > 0$ such that
\begin{align}\label{eqThexponentialsums}
S_0 = \sum_{n<N} \e\rb{\sum_{\ell=0}^{k-1} \alpha_\ell b((n+\ell)^2) } \ll N^{1-\eta}.
\end{align}
\end{theorem}
\begin{lemma}
Theorem~\ref{Thexponentialsums} implies Theorem \ref{maintheorem}.
\end{lemma}
\begin{proof}
Let $(c_0,\ldots,c_{k-1}) \in \{0,\ldots,m'-1\}^k$ be an arbitrary sequence of length $k$.
We count the number of occurrences of this sequence in
$(b(n^2)\bmod m')_{n\leq N}$. Assuming that (\ref{eqThexponentialsums}) holds, we obtain by using the well known identity
$\sum_{n=0}^{m'-1} \e(\frac{n}{m'} \ell) = m'$ for $\ell \equiv 0 \bmod m'$ and $0$ otherwise
\begin{align*}
&\abs{ \{ n< N : (b(n^2)\bmod m', \ldots, b((n+k-1)^2)\bmod m') = (c_0,\ldots,c_{k-1}) \}} \\
&= \sum_{n< N} \ind_{[b(n^2) \equiv c_0\bmod m']} \cdots \ind_{[b((n+k-1)^2) \equiv c_{k-1}\bmod m']} \\
&= \sum_{n< N} \prod_{\ell = 0}^{k-1}\frac 1{m'} \sum_{\alpha_{\ell}' = 0}^{m'-1}
\e\rb{ \frac{\alpha_{\ell}'}{m'} \rb{ b((n+\ell)^2) - c_{\ell} } }
\end{align*}
\begin{align*}
&= \frac 1{(m')^k} \sum_{\substack{ (\alpha_0',\ldots,\alpha_{k-1}')\\ \in \{0,\ldots,m'-1\}^k }}
\e\rb{ -\frac {\alpha_0' c_0 + \cdots + \alpha_{k-1}'c_{k-1}}{m'}}
\sum_{n < N} \e\rb{ \sum_{\ell=0}^{k-1} \underbrace{\frac {\alpha_\ell'}{m'}}_{=:\alpha_{\ell}} b((n+\ell)^2) }\\
&= \frac{N}{(m')^k} + \mathcal{O}\rb{ N^{1-\eta} }
\end{align*}
with the same $\eta > 0$ as in Theorem \ref{Thexponentialsums}. \\
To obtain the last equality we separate the term with
$(\alpha_0', \ldots, \alpha_{k-1}') = (0,\ldots,0)$.
\end{proof}
The structure of the
rest of the paper is presented below.
In \Cref{sec:digital} we discuss some properties of digital sequences.
These properties will be very important for the estimates of the Fourier terms.
In Section \ref{cha:bounds}, we derive the main ingredients of the proof of Theorem~\ref{Thexponentialsums} which are upper bounds on the Fourier terms
\begin{displaymath}
H_\lambda^I(h,d) = \frac 1{q^\lambda} \sum_{0\le u < q^\lambda}
\e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell b_\lambda(u + \ell d + i_\ell) - h q^{-\lambda} },
\end{displaymath}
where $I = (i_0,\ldots, i_{k-1})\in\mathbb{N}^k$ with some special properties defined in \Cref{sec:Fourier_squares} and $b_{\lambda}$ is
a truncated version of $b$ which is properly defined in \Cref{def:truncated_function}.
The main results of Section \ref{cha:bounds} are Propositions \ref{Pro1} and \ref{Pro2}.
Proposition~\ref{Pro1} yields a bound on averages of Fourier transforms and
Proposition~\ref{Pro2} yields a uniform bound on Fourier transforms.
In Section \ref{cha:proof}, we discuss how Proposition~\ref{Pro1} and Proposition~\ref{Pro2}
are used to prove Theorem \ref{Thexponentialsums}.
The approach is very similar to \cite{drmotaMauduitRivat2014} and we will mainly describe how it has to be adapted.
We use Van-der-Corput-like inequalities in order to reduce our problem to sums depending only on few digits of $n^2, (n+1)^2, \ldots, (n+k-1)^2$.
By detecting these few digits, we are able to remove the quadratic terms, which allows a proper
Fourier analytic treatment. After the Fourier analysis, the remaining sum is split into two sums.
The first sum involves quadratic exponential sums which are dealt with using the results from \Cref{sec:gauss}.
The Fourier terms $H_\lambda^I(h,d)$ appear in the second sum and Propositions \ref{Pro1} and \ref{Pro2} provide the necessary bounds.
We have to distinguish the cases
$K = \alpha_0 + \cdots + \alpha_{k-1} \in \mathbb{Z}$ and $K \notin \mathbb{Z}$.
Sections~\ref{sec:equiv0} and \ref{sec:nequiv0} tackle one of these cases each.
In Section~\ref{sec:equiv0}, we prove that~-- if $K \in \mathbb{Z}$~-- we deduce
Theorem~\ref{Thexponentialsums} from Proposition~\ref{Pro1}.
For $K \notin \mathbb{Z}$, Section~\ref{sec:nequiv0} shows that we can deduce
Theorem~\ref{Thexponentialsums} from Proposition~\ref{Pro2}.
In Section \ref{cha:auxiliary}, we present some auxiliary results also used in \cite{drmotaMauduitRivat2014}.
\section{Digital Functions}\label{sec:digital}
In this section we discuss some important properties of digital functions.
We start with some basic definitions.
\begin{definition}\label{def:truncated_function}
We define for $0 \leq \mu \leq \lambda$ the truncated function $b_\lambda$ and the two-fold restricted function $b_{\mu,\lambda}$ by
\begin{displaymath}
b_\lambda(n) = \sum_{j < \lambda} F(\varepsilon_{j+m-1}(n),\ldots,\varepsilon_{j}(n)) \text{ and } b_{\mu,\lambda}(n) = b_{\lambda}(n)-b_{\mu}(n).
\end{displaymath}
\end{definition}
We see directly that $b_{\lambda}\colon \mathbb{N}\to\mathbb{N}$ is a $q^{\lambda+m-1}$-periodic function and we extend it to a ($q^{\lambda+m-1}$-periodic) function $\mathbb{Z}\to\mathbb{N}$,
which we again denote by $b_{\lambda}$.
For any $n\in \mathbb{N}$, we define $F(n):=F(\varepsilon_{m-1}(n),\ldots,\varepsilon_{0}(n))$.
Since $F(0) = 0$, we can rewrite $b(n)$ and $b_{\lambda}(n)$ for $\lambda \geq 1$ as follows
\begin{align*}
b(n) &= \sum_{j \geq 0} F\rb{\floor{\frac{q^{m-1}n}{q^j}}}\\
b_{\lambda}(n) &= \sum_{j=0}^{\lambda+m-2} F\rb{\floor{\frac{q^{m-1}n}{q^j}}}.
\end{align*}
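As an illustration, consider the simplest case $m = 1$, $q = 10$ and $F(\varepsilon) = \varepsilon$, so that $b$ is the decimal digit-sum function; then $b_{\lambda}$ only sees the $\lambda$ lowest digits, e.g.
\begin{align*}
b(5734) = 5+7+3+4 = 19 \quad \text{and} \quad b_{2}(5734) = 3+4 = 7.
\end{align*}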
We show that for any block-additive function, we can choose $F$ without loss of generality such that it fulfills a nice property.
\begin{lemma}
Let $b: \mathbb{N} \to \mathbb{N}$ be a strongly block-additive function corresponding to $F'$.
Then, there exists another function $F$ such that $b$ also corresponds to $F$ and
\begin{align}
\label{property_b'2} \sum_{j = 1}^{m-1} F(nq^j) = 0
\end{align}
holds for all $n\in \mathbb{N}$.
\end{lemma}
\begin{proof}
We start by defining a new function
\begin{align*}
G(n) := \sum_{j=1}^{m-1} F'(n q^j).
\end{align*}
This already allows us to define the function $F$:
\begin{align*}
F(n) := F'(n) + G(n) - G(\floor{n/q}).
\end{align*}
We find directly that $G(0) = F(0) = 0$.
It remains to show that $b$ corresponds to $F$ and that \eqref{property_b'2} holds, which are simple computations,
\begin{align*}
\sum_{j \geq 0} F\rb{\floor{\frac{q^{m-1}n}{q^j}}}&= \sum_{j \geq 0} F'\rb{\floor{\frac{q^{m-1}n}{q^j}}} \\
&\qquad + \sum_{j \geq 0} G\rb{\floor{\frac{q^{m-1}n}{q^j}}} - \sum_{j \geq 0} G\rb{\floor{\frac{q^{m-1}n}{q^{j+1}}}}\\
&= b(n) + G\rb{q^{m-1}n} = b(n),
\end{align*}
where the telescoping sum equals $G(q^{m-1}n) = 0$, as the $m$ lowest digits of $q^{m-1+j}n$ vanish for $j\geq 1$.
Furthermore, we find
\begin{align*}
\sum_{j = 1}^{m-1} F(nq^j) &= \sum_{j = 1}^{m-1} F'(nq^j) + \sum_{j = 1}^{m-1} G(nq^j) - \sum_{j = 1}^{m-1} G(nq^{j-1})\\
&= \sum_{j = 1}^{m-1} F'(nq^j) + G(n q^{m-1}) - G(n)\\
&= \sum_{j = 1}^{m-1} F'(nq^j) + 0 - \sum_{j=1}^{m-1} F'(n q^j) = 0.
\end{align*}
\end{proof}
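For illustration, take $q = 2$, $m = 2$ and let $b$ count occurrences of the block $11$, i.e. $F'(3) = 1$ and $F'(0) = F'(1) = F'(2) = 0$. Then $G(n) = F'(2n)$ is determined by the two lowest binary digits of $2n$, which form either $00$ or $10$; hence $G$ vanishes identically, the construction returns $F = F'$, and \eqref{property_b'2} is satisfied from the start.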
We assume from now on that \eqref{property_b'2} holds for every strongly block-additive function $b$ under consideration, i.e. that $F$ has been chosen accordingly.
This allows us to find a simpler expression for $b$:
\begin{corollary}
Let $b(n)$ be a digital function fulfilling \eqref{property_b'2}.
Then
\begin{align*}
b(n) = \sum_{j \geq 0} F\rb{\floor{\frac{n}{q^j}}}
\end{align*}
and
\begin{align*}
b_{\lambda}(n) = \sum_{j = 0}^{\lambda-1} F\rb{\floor{\frac{n}{q^j}}}
\end{align*}
holds for all $n, \lambda \in \mathbb{N}$.
\end{corollary}
We easily find the following recursion.
\begin{lemma}\label{le:rec_b}
Let $\alpha \in \mathbb{N}, n_1\in \mathbb{N}$ and $0\leq n_2 < q^{\alpha}$. Then
\begin{align}\label{eq:recursion_b_lambda}
b_{\lambda}(n_1 q^{\alpha}+n_2) = b_{\lambda-\alpha}(n_1) + b_{\alpha}(n_1 q^{\alpha} + n_2)
\end{align}
holds for all $\lambda >\alpha$ and
\begin{align}\label{eq:recursion_b}
b(n_1 q^{\alpha}+n_2) = b(n_1) + b_{\alpha}(n_1 q^{\alpha} + n_2).
\end{align}
\end{lemma}
\begin{proof}
We compute $b_{\lambda}(n_1q^{\alpha} + n_2)$
\begin{align*}
b_{\lambda}(n_1q^{\alpha}+n_2) &= \sum_{j=0}^{\lambda-1} F\rb{\floor{\frac{n_1 q^{\alpha} + n_2}{q^j}}}\\
&= \sum_{j=\alpha}^{\lambda-1} F\rb{\floor{\frac{n_1 q^{\alpha} + n_2}{q^j}}} + \sum_{j=0}^{\alpha-1} F\rb{\floor{\frac{n_1 q^{\alpha} + n_2}{q^j}}}\\
&= \sum_{j=0}^{\lambda-\alpha-1} F\rb{\floor{\frac{n_1}{q^j}}} + \sum_{j=0}^{\alpha-1} F\rb{\floor{\frac{n_1 q^{\alpha} + n_2}{q^j}}}\\
&= b_{\lambda-\alpha}(n_1) + b_{\alpha}(n_1 q^{\alpha} + n_2).
\end{align*}
The second case can be treated analogously.
\end{proof}
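As a quick sanity check in the digit-sum example above ($q = 10$, $m = 1$), equation \eqref{eq:recursion_b} with $n_1 = 57$, $\alpha = 2$ and $n_2 = 34$ reads
\begin{align*}
b(5734) = b(57) + b_{2}(5734) = 12 + 7 = 19,
\end{align*}
i.e. the digit sum splits into the contribution of the two lowest digits and the rest.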
As we are dealing with the distribution of digital functions along a special subsequence, we start by discussing some distributional results for digital functions.
\begin{lemma}\label{le:b_mod_m'}
Let $b$ be a strongly block-additive function and $m'>1$.
Then the following three statements are equivalent.
\begin{enumerate}[$(i)$]
\item $\exists n \in \mathbb{N}: m'\, \nmid \, b(n)$
\item $\exists n<q^m: m' \, \nmid \, F(n)$
\item $\exists n<q^m: m' \, \nmid \, b(n)$
\end{enumerate}
\end{lemma}
\begin{proof}
Obviously $(iii) \implies (i)$.
Next we show $(i) \implies (ii)$:\\
Let $n_0$ be the smallest natural number $>0$ such that $m' \, \nmid \, b(n_0)$.
By Lemma~\ref{le:rec_b}, we have
\begin{align*}
b(n_0) = b(\floor{n_0/q}) + F(n_0).
\end{align*}
By the definition of $n_0$, we have $m' \, \mid \, b(\floor{n_0/q})$ and, therefore, $m' \, \nmid \, F(n_0) = F( n_0 \bmod q^m)$.
It remains to prove $(ii) \implies (iii)$:\\
Let $n_0$ be the smallest natural number $>0$ such that $m' \, \nmid \, F(n_0)$.
By $(ii)$, we have $n_0<q^m$.
We compute $b(n_0) \bmod m'$,
\begin{align*}
b(n_0) = \sum_{j\geq 0} F\rb{\floor{\frac{n_0}{q^j}}} \equiv F(n_0) \not \equiv 0 (\bmod m')
\end{align*}
as $\floor{\frac{n_0}{q^j}}<n_0$ for $j\geq 1$ implies that $F\rb{\floor{\frac{n_0}{q^j}}}\equiv 0 (\bmod m')$.
\end{proof}
\begin{remark}
The following example shows that we cannot replace $m' \, \nmid \, .$ by $\gcd(m',.) = 1$ in Lemma~\ref{le:b_mod_m'}:\\
Let $m = 1, q=3,m' = 6$ and $F(0)=0, F(1) = 2, F(2) = 3$.
We see that $\gcd(m',F(n)) >1$ for all $n <q^m=3$ and also $\gcd(m',b(n))>1$ for all $n<q^m=3$.
However, $b(5) = F(1) + F(2) = 5$ and $\gcd(m',b(5)) = 1$.
\end{remark}
Next, we show a technical result concerning block-additive functions, which will be useful later on.
\begin{lemma}\label{le:complicated_b_not_const}
Let $b$ be a strongly block-additive function in base $q$ and $k>1$ such that $\gcd(k,q-1) = 1$ and $\gcd(k, \gcd(\{b(n): n \in \mathbb{N}\})) = 1$.
Then there exist integers $\mathbf{e}_1,\mathbf{e}_2<q^{2m-1}$ such that
\begin{align}
\begin{split}
\label{eq:b_not_const}
&b(q^{m-1}(\mathbf{e}_1 + 1)-1) - b(q^{m-1}(\mathbf{e}_1+1)) \\
\not \equiv\quad & b(q^{m-1}(\mathbf{e}_2 + 1)-1) - b(q^{m-1}(\mathbf{e}_2+1)) (\bmod k)
\end{split}
\end{align}
holds.
\end{lemma}
\begin{proof}
Without loss of generality we may replace $k$ by a prime $p \in \P$ with $p\, \mid \, k$: if \eqref{eq:b_not_const} holds modulo such a $p$, then it also holds modulo $k$.
Let us assume on the contrary that there exists $c$ such that
\begin{align*}
b(q^{m-1}(\mathbf{e} +1)-1) - b(q^{m-1}(\mathbf{e}+1)) \equiv c (\bmod p)
\end{align*}
holds for all $\mathbf{e} < q^{2m-1}$.
Under this assumption, we find a new expression for $b(n)\bmod p$, where $n<q^{m}$:
\begin{align*}
n \cdot q^{m-1} c &\equiv \sum_{\mathbf{e}<n q^{m-1}} \rb{b(q^{m-1}(\mathbf{e}+1)-1) - b(q^{m-1}(\mathbf{e}+1))}\\
&\equiv \sum_{\mathbf{e}<n q^{m-1}} \rb{b(\mathbf{e}) + b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1) - b(\mathbf{e}+1)}\\
&\equiv -b(n q^{m-1}) + \sum_{\mathbf{e}<nq^{m-1}} b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1)\\
&\equiv -b(n q^{m-1}) + n \sum_{\mathbf{e}<q^{m-1}} b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1).
\end{align*}
The last equality holds since $b_{m-1}(q^{m-1}\mathbf{e} + q^{m-1}-1)$ is a $q^{m-1}$ periodic function in $\mathbf{e}$.
This gives
\begin{align}\label{eq:le_rest_b}
b(n) = b(nq^{m-1}) \equiv n \rb{\sum_{\mathbf{e}<q^{m-1}} b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1) - q^{m-1} c} (\bmod p).
\end{align}
By comparing this expression for $b(1)$ and $b(q)$ - note that $b(1) = b(q)$ - we find
\begin{align*}
(q-1) \rb{\sum_{\mathbf{e}<q^{m-1}} b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1) - q^{m-1} c} \equiv 0 (\bmod p)\\
\sum_{\mathbf{e}<q^{m-1}} b_{m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1) - q^{m-1} c \equiv 0 (\bmod p)
\end{align*}
as $\gcd(p,q-1) = 1$.
Together with \eqref{eq:le_rest_b}, this implies that $p\, \mid \, b(n)$ for all $n<q^{m}$.
This is a contradiction to $\gcd(p, \gcd(\{b(n): n \in \mathbb{N}\})) = 1$ by Lemma~\ref{le:b_mod_m'}.
\end{proof}
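For illustration, let $b$ be the decimal digit sum ($q = 10$, $m = 1$) and $k = 2$, so that $\gcd(k,q-1) = 1$ and $\gcd(k, \gcd(\{b(n): n \in \mathbb{N}\})) = \gcd(2,1) = 1$. Since $q^{m-1} = 1$, the expression in \eqref{eq:b_not_const} reduces to $b(\mathbf{e}) - b(\mathbf{e}+1)$, and indeed
\begin{align*}
b(0) - b(1) = -1 \not \equiv 8 = b(9) - b(10) (\bmod 2),
\end{align*}
so $\mathbf{e}_1 = 0$ and $\mathbf{e}_2 = 9$ are admissible choices.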
We will use this result in a different form.
\begin{corollary}\label{co:diff_b}
Let $b$ be a strongly block-additive function in base $q$ and $m'>1$ such that $\gcd(m',q-1) = 1$ and $\gcd(m', \gcd(\{b(n): n \in \mathbb{N}\})) = 1$.
For every $\alpha \in \{\frac{1}{m'},\ldots,\frac{m'-1}{m'}\}$ there exist $\mathbf{e}_1,\mathbf{e}_2<q^{2m-1}$ and $d\in \mathbb{N}$ such that $d\alpha \not \in \mathbb{Z}$ and
\begin{align*}
&b(q^{m-1}(\mathbf{e}_1 + 1)-1) - b(q^{m-1}(\mathbf{e}_1+1))\\
& \qquad - b(q^{m-1}(\mathbf{e}_2 + 1)-1) + b(q^{m-1}(\mathbf{e}_2+1)) = d.
\end{align*}
\end{corollary}
\begin{proof}
Let $\alpha = \frac{x}{y}$ where $\gcd(x,y) = 1$ and $1<y\, \mid \, m'$.
We apply Lemma~\ref{le:complicated_b_not_const} for $k = y$ and find $\mathbf{e}_1,\mathbf{e}_2$ such that
\begin{align*}
&b(q^{m-1}(\mathbf{e}_1 + 1)-1) - b(q^{m-1}(\mathbf{e}_1+1))\\
&\qquad - b(q^{m-1}(\mathbf{e}_2 + 1)-1) + b(q^{m-1}(\mathbf{e}_2+1)) = d,
\end{align*}
where
\begin{align*}
d \not \equiv 0 (\bmod y).
\end{align*}
This implies
\begin{align*}
d \alpha = \frac{dx}{y} \not \equiv 0 (\bmod 1).
\end{align*}
\end{proof}
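Continuing the digit-sum example ($q = 10$, $m = 1$) with $m' = 2$ and $\alpha = \frac{1}{2}$: choosing $\mathbf{e}_1 = 9$ and $\mathbf{e}_2 = 0$ gives
\begin{align*}
b(9) - b(10) - b(0) + b(1) = 9 - 1 - 0 + 1 = 9 = d
\end{align*}
and indeed $d\alpha = \frac{9}{2} \notin \mathbb{Z}$.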
\section{Bounds on Fourier Transforms} \label{cha:bounds}
The goal of this section is to prove Propositions~\ref{Pro1} and~\ref{Pro2}. To find the necessary bounds we first need to recall one important result
on the norm of matrix products which was first presented by Drmota, Mauduit and Rivat~\cite{drmotaMauduitRivat2014}.
Afterwards, we deal with Fourier estimates and formulate Proposition~\ref{Pro1} and Proposition~\ref{Pro2}.
The following Sections~\ref{sec:Pro1} and~\ref{sec:Pro2} give proofs of Proposition~\ref{Pro1} and Proposition~\ref{Pro2}, respectively.
\subsection{Auxiliary Results for the Bounds of the Fourier Transforms}
In this section we state necessary conditions under which the product of matrices decreases exponentially with respect to the matrix row-sum norm.
\begin{lemma}
\label{le:matrixnorm}
Let ${\mathbf M}_\ell$, $\ell \in \mathbb{N}$, be $N\times N$-matrices with complex entries $M_{\ell;i,j}$, for ${1\le i,j\le N}$,
and absolute row sums
\begin{displaymath}
\sum_{j=1}^N |M_{\ell;i,j}| \le 1 \text{ for } 1 \leq i \leq N.
\end{displaymath}
Furthermore, we assume that there exist integers $m_0\ge 1$ and $m_1\ge 1$ and constants $c_0> 0$ and $\eta > 0$ such that
\begin{enumerate}
\item every product ${\mathbf A} = (A_{i,j})_{(i,j)\in\{1,\ldots,N\}^2}$ of $m_0$ consecutive matrices ${\mathbf M}_\ell$ has the property that,
\begin{align} \label{matrixNorm1}
|A_{i,1}| \ge c_0 \quad \mbox{or}\quad \sum_{j=1}^N |A_{i,j}| \le 1-\eta \text{ for every row } i;
\end{align}
\item every product ${\mathbf B} = (B_{i,j})_{(i,j)\in\{1,\ldots,N\}^2}$ of $m_1$ consecutive matrices ${\mathbf M}_\ell$ has the property
\begin{align} \label{matrixNorm2}
\sum_{j=1}^N |B_{1,j}| \le 1-\eta.
\end{align}
\end{enumerate}
Then there exist constants $C> 0$ and $\delta> 0$ such that
\begin{align}\label{eqLe0001}
\norminf{\prod_{\ell = r}^{r+k-1} {\mathbf M}_\ell } \le C q^{-\delta k}
\end{align}
uniformly for all $r\ge 0$ and $k\ge 0$ (where $\norminf{\cdot}$ denotes the matrix row-sum norm).
\end{lemma}
\begin{proof}
See \cite{drmotaMauduitRivat2014}.
\end{proof}
\begin{lemma}\label{le:sum_4}
Let $x_1,x_2,\xi_1,\xi_2 \in \mathbb{R}$. Then
\begin{align*}
\abs{\e(x_1) + \e(x_1+\xi_1)} + \abs{\e(x_2) + \e(x_2+\xi_2)} \leq 4 - 8 \rb{\sin\rb{\frac{\pi \norm{\xi_1-\xi_2}}{4}}}^2.
\end{align*}
\end{lemma}
\begin{proof}
The proof is a straightforward computation and can be found for example at the end of the proof of \cite[Lemma 12]{mauduit_rivat_rs}.
\end{proof}
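The key observation behind Lemma~\ref{le:sum_4} is that each summand depends only on the corresponding $\xi_i$:
\begin{align*}
\abs{\e(x_i) + \e(x_i+\xi_i)} = \abs{1 + \e(\xi_i)} = 2\abs{\cos(\pi \xi_i)},
\end{align*}
so both terms can be close to $2$ simultaneously only if $\xi_1$ and $\xi_2$ are close modulo $1$.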
\subsection{Fourier estimates}\label{sec:Fourier_squares}
In this section, we discuss some general properties of the occurring Fourier terms.
For any $k\in\mathbb{N}$, we denote by $\mathcal{I}_k$ the set of integer vectors $I = (i_0,\ldots,i_{k-1})$ with $i_0 < q^{m-1}$ and
$i_{\ell-1} \leq i_{\ell} \leq i_{\ell-1} + q^{m-1}$ for $1\le \ell\le k-1$; this set obviously consists of $q^{m-1}(q^{m-1}+1)^{k-1}$ elements.\\
Furthermore, we denote by $\mathcal{I}'_k$ the set of integer vectors $I' = (i'_0,\ldots,i'_{k-1})$ with $i'_0 = 0$ and
$i'_{\ell-1} \leq i'_{\ell} \leq i'_{\ell-1} +1$ for $1\le \ell\le k-1$.\\
For any $I\in\mathcal{I}'_k$, $h\in\mathbb{Z}$ and $(d,\lambda)\in\mathbb{N}^2$, we define
\begin{align}\label{eq:def-H}
H_{\lambda}^{I}(h,d) = \frac 1{q^{\lambda+m-1}} \sum_{0\le u < q^{\lambda+m-1}}
\e \rb{ \sum_{\ell = 0}^{k-1} \alpha_\ell b_{\lambda}(u+\ell d + i_\ell) - hu q^{-\lambda-m+1} },
\end{align}
for fixed coefficients $\alpha_\ell \in \{\frac0{m'},\ldots,\frac{m'-1}{m'}\}$.
This sum $H_{\lambda}^{I}(\,.\,,d)$ can then be seen as the discrete Fourier transform
of the function
\begin{displaymath}
u \mapsto \e\rb{ \sum_{\ell = 0}^{k-1} \alpha_\ell b_{\lambda}(u+\ell d + i_\ell) },
\end{displaymath}
which is $q^{\lambda+m-1}$ periodic.
Furthermore, we define the important parameter
\begin{displaymath}
K := \alpha_0 + \cdots + \alpha_{k-1}.
\end{displaymath}
We would like to have a simple recursion for $H_{\lambda}$ in terms of $H_{\lambda-1}$.
Instead, we relate it to a different function for which such a recursion is much simpler:
\begin{align*}
G_{\lambda}^{I} (h,d) = \frac{1}{q^{\lambda}} \sum_{u<q^{\lambda}} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda}(q^{m-1}(u+\ell d) + i_{\ell})-huq^{-\lambda}}.
\end{align*}
This sum $G_{\lambda}^{I}(\,.\,,d)$ can then be seen as the discrete Fourier transform of the function
\begin{displaymath}
u \mapsto \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda}(q^{m-1}(u+\ell d) + i_{\ell})},
\end{displaymath}
which is $q^{\lambda}$ periodic.
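Since this function is $q^{\lambda}$-periodic, discrete Fourier inversion gives
\begin{align*}
\e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda}(q^{m-1}(u+\ell d) + i_{\ell})} = \sum_{0\le h < q^{\lambda}} G_{\lambda}^{I}(h,d) \e\rb{\frac{hu}{q^{\lambda}}}
\end{align*}
for all $u \in \mathbb{Z}$.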
We show now how $G$ and $H$ are related.
\begin{lemma}\label{Le0}
Let $I \in \mathcal{I}'_k, h \in \mathbb{Z}, (d,\lambda) \in \mathbb{N}^2$ and $\delta \in \{0,\ldots,q^{m-1}-1\}$. Then
\begin{small}
\begin{align} \label{eq:H-recursion}
H_{\lambda}^{I}(h,q^{m-1} d+\delta) = \frac{1}{q^{m-1}} \sum_{\varepsilon = 0}^{q^{m-1}-1} \e\rb{-\frac{h \varepsilon}{q^{\lambda+m-1}} }
\G_{\lambda}^{J_{\varepsilon, \delta}}(h,d),
\end{align}
\end{small}
where
\begin{align*}
J_{\varepsilon,\delta} = J_{\varepsilon,\delta}(I) = \rb{i_{\ell} + \ell \delta + \varepsilon}_{\ell \in \{0,\ldots,k-1\}} \in \mathcal{I}_k.
\end{align*}
\end{lemma}
\begin{proof}
One checks easily that $J_{\varepsilon,\delta}(I)\in \mathcal{I}_k$.
We evaluate $H_{\lambda}^{I}(h,q^{m-1} d + \delta)$:
\begin{align*}
&H_{\lambda}^{I}(h,q^{m-1} d + \delta)\\
=&\frac 1{q^{\lambda+m-1}}
\sum_{0\le u < q^{\lambda+m-1}} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_\ell b_{\lambda}(u+\ell (q^{m-1} d + \delta) + i_\ell) - h u q^{-\lambda-m+1} }\\
=& \frac{1}{q^{\lambda+m-1}} \sum_{\varepsilon <q^{m-1}} \sum_{0 \leq u < q^{\lambda}} \e\rb{-\frac{h (q^{m-1} u)}{q^{\lambda+m-1}}} \e\rb{-\frac{h \varepsilon}{q^{\lambda+m-1}}} \\
&\qquad \cdot \e\rb{\sum_{\ell=0}^{k-1} \alpha_{\ell} b_{\lambda}(q^{m-1} u + \varepsilon + \ell (q^{m-1} d + \delta) +i_{\ell})}\\
=& \frac{1}{q^{\lambda+m-1}} \sum_{\varepsilon <q^{m-1}} \sum_{ u < q^{\lambda}} \e\rb{-\frac{h u}{q^{\lambda}}} \e\rb{-\frac{h \varepsilon}{q^{\lambda+m-1}}}\\
&\qquad \qquad \cdot \e\rb{\sum_{\ell=0}^{k-1}\alpha_{\ell} b_{\lambda}\rb{(u + \ell d)q^{m-1} + (\ell \delta + i_{\ell} +\varepsilon)}}\\
=& \frac{1}{q^{m-1}} \sum_{\varepsilon <q^{m-1}} \e\rb{-\frac{h \varepsilon}{q^{\lambda+m-1}} }
\G_{\lambda}^{J_{\varepsilon, \delta}}(h,d).
\end{align*}
\end{proof}
Next we define a transformation on $\mathcal{I}_k$ and a weight function $v$.
\begin{definition}
Let $j\geq 1$ and $\varepsilon,\delta \in \{0,\ldots,q^{j}-1\}$.
Then, we define for $I \in \mathcal{I}_k$
\begin{align*}
&T_{\varepsilon,\delta}^{j}(I) := \rb{\floor{\frac{i_{\ell} + q^{m-1}(\varepsilon + \ell \delta)}{q^{j}}}}_{\ell \in \{0,\ldots,k-1\}}\\
&v^{j}(I,\varepsilon,\delta) := \e\rb{\sum_{\ell<k} \alpha_{\ell} \cdot b_{j}(i_{\ell} + q^{m-1}(\varepsilon + \ell \delta))}.
\end{align*}
\end{definition}
We see immediately that $\abs{v^{j}(I,\varepsilon,\delta)} = 1$ for all possible values of $j,I,\varepsilon$ and $\delta$.
Furthermore, we extend the definition of $T^{j}$ for arbitrary $\varepsilon,\delta$ by
\begin{align*}
T_{\varepsilon,\delta}^{j}(I) := T_{\varepsilon \bmod q^j,\delta \bmod q^j}^{j}(I).
\end{align*}
The next lemma shows some basic properties of these functions.
\begin{lemma}
Let $\lambda,j,j_1,j_2 \in \mathbb{N}$, $\varepsilon,\delta \in \{0,\ldots,q^{j}-1\}$ and $\varepsilon_i,\delta_i \in \{0,\ldots,q^{j_i}-1\}$. Then, the following facts hold.
\begin{itemize}
\item $T_{\varepsilon,\delta}^{j}(I) \in \mathcal{I}_{k}$\\
\item $T_{\varepsilon_2, \delta_2}^{j_2} \circ T_{\varepsilon_1, \delta_1}^{j_1} = T_{\varepsilon_2 q^{j_1} + \varepsilon_1,\delta_2 q^{j_1} + \delta_1}^{j_1+j_2}$\\
\item $G_{\lambda}^{I}(h,d) = \frac{1}{q^{\lambda}}\sum_{u<q^{\lambda}} v^{\lambda}(I,u,d) \e(-huq^{-\lambda})$.
\end{itemize}
\end{lemma}
\begin{proof}
The first two facts are direct consequences of basic properties of the floor function and the last fact is just a reformulation of the definition of $G$ in terms of $v$.
\end{proof}
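For the reader's convenience, the second fact follows coordinatewise from the elementary identity $\floor{\floor{x}/n} = \floor{x/n}$ (valid for real $x$ and integers $n \geq 1$):
\begin{align*}
\floor{\frac{\floor{\frac{i_{\ell} + q^{m-1}(\varepsilon_1 + \ell \delta_1)}{q^{j_1}}} + q^{m-1}(\varepsilon_2 + \ell \delta_2)}{q^{j_2}}}
= \floor{\frac{i_{\ell} + q^{m-1}\rb{(\varepsilon_2 q^{j_1} + \varepsilon_1) + \ell (\delta_2 q^{j_1} + \delta_1)}}{q^{j_1+j_2}}}.
\end{align*}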
Now we can find a nice recursion for the Fourier transform $G$.
\begin{lemma}\label{le:rec_G}
Let $I \in \mathcal{I}_k, h \in \mathbb{Z}, d,\lambda \in \mathbb{N}$ and $1\leq j\leq \lambda, \delta \in \{0,\ldots,q^{j}-1\}$. We have
\begin{align*}
G_{\lambda}^{I}(h,q^jd+\delta) = \frac{1}{q^{j}} \sum_{\varepsilon<q^j} \e(-h\varepsilon q^{-\lambda})v^j(I,\varepsilon,\delta)\cdot G_{\lambda-j}^{T_{\varepsilon,\delta}^{j}(I)}(h,d).
\end{align*}
\end{lemma}
\begin{proof}
We evaluate $G_{\lambda}^{I} (h,q^j d + \delta)$ and use \eqref{eq:recursion_b_lambda}:
\begin{small}
\begin{align*}
G_{\lambda}^{I} &(h,q^j d + \delta) = \frac{1}{q^{\lambda}} \sum_{u < q^{\lambda}} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda}(q^{m-1}(u+\ell (q^j d + \delta)) + i_{\ell}) - h u q^{-\lambda}}\\
&= \frac{1}{q^j} \sum_{\varepsilon<q^j} \frac{1}{q^{\lambda-j}} \sum_{u < q^{\lambda-j}} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda} (q^{m-1+j} (u+\ell d) + q^{m-1} (\varepsilon + \ell \delta) + i_{\ell})}\\
&\quad \cdot \e(-h(u q^j +\varepsilon)q^{-\lambda})\\
&= \frac{1}{q^j} \sum_{\varepsilon<q^j} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_j(q^{m-1}(\varepsilon + \ell \delta) + i_{\ell})} \e(-h\varepsilon q^{-\lambda})\\
&\quad \cdot \frac{1}{q^{\lambda-j}} \sum_{u<q^{\lambda-j}} \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{\lambda-j} \rb{q^{m-1}(u + \ell d) + \floor{\frac{\varepsilon q^{m-1} + \ell \delta q^{m-1} + i_{\ell}}{q^j}}} - h u q^{-\lambda+j}}\\
&= \frac{1}{q^j} \sum_{\varepsilon<q^j} v^j(I,\varepsilon, \delta) \e(-h \varepsilon q^{-\lambda})\cdot G_{\lambda-j}^{T_{\varepsilon, \delta}^j(I)}(h,d).
\end{align*}
\end{small}
\end{proof}
The following propositions are crucial for our proof of the main Theorem~\ref{Thexponentialsums}.
\begin{proposition}\label{Pro1}
If $K \equiv 0 (\bmod 1)$ and $\frac 12 \lambda \le \lambda' \le \lambda$, then there exists $\eta > 0$ such that
for any $I\in \mathcal{I}'_k$
\begin{displaymath}
\frac 1{q^{\lambda'}} \sum_{0\le d < q^{\lambda'}} \abs{H_\lambda^{I}(h,d)}^2 \ll q^{-\eta \lambda}
\end{displaymath}
holds uniformly for all integers $h$.
\end{proposition}
\begin{proposition}\label{Pro2}
If $K \not \equiv 0 (\bmod 1)$, then there exists $\eta > 0$ such that for any $I\in \mathcal{I}'_k$
\begin{displaymath}
\abs{ H_\lambda^{I}(h,d) } \ll q^{-\eta L} \max_{J \in \mathcal{I}_k} \abs{ G_{\lambda-L}^{J}(h,\floor{ d/q^L }) }
\end{displaymath}
holds uniformly for all non-negative integers $h,d$ and $L$.
\end{proposition}
Proofs for Proposition~\ref{Pro1} and~\ref{Pro2} are given in the following sections.
\subsection{Proof of Proposition~\ref{Pro1}}\label{sec:Pro1}
This section is dedicated to prove Proposition~\ref{Pro1}.
We start by reducing the problem from $H_{\lambda}^{I}(h,d)$ to $G_{\lambda}^{I}(h,d)$ for which we have found a nice recursion.
\begin{proposition}\label{pro1_new}
For $K \in \mathbb{Z}$ and $\frac{1}{2} \lambda \le \lambda' \le \lambda$, we find $\eta > 0$ such that for any $I \in \mathcal{I}_k$
\begin{align*}
\frac{1}{q^{\lambda'}} \sum_{0 \le d< q^{\lambda'}} \abs{G_{\lambda}^{I}(h,d)}^2 \ll q^{-\eta \lambda}
\end{align*}
holds uniformly for all integers $h$.
\end{proposition}
\begin{lemma}
Proposition \ref{pro1_new} implies Proposition \ref{Pro1}.
\end{lemma}
\begin{proof}
We see by \eqref{eq:H-recursion} that
\begin{align*}
\abs{H_{\lambda}^{I}(h,d)}^2 \leq \max_{J\in \mathcal{I}_k} \abs{G_{\lambda}^{J}(h,\floor{d/q^{m-1}})}^2
\leq \sum_{J\in \mathcal{I}_k} \abs{G_{\lambda}^{J}(h,\floor{d/q^{m-1}})}^2.
\end{align*}
Thus we find
\begin{align*}
\frac{1}{q^{\lambda'}} \sum_{0 \le d< q^{\lambda'}} \abs{H_{\lambda}^{I}(h,d)}^2 \leq
\sum_{J\in \mathcal{I}_k} \frac{1}{q^{\lambda'}} \sum_{0 \le d< q^{\lambda'}} \abs{G_{\lambda}^{J}(h,\floor{d/q^{m-1}})}^2
\ll q^{-\eta \lambda}.
\end{align*}
\end{proof}
Using Lemma~\ref{le:rec_G}, it is easy to establish a recursion for
\begin{align*}
\Phi_{\lambda, \lambda'}^{I, I'}(h) = \frac{1}{q^{\lambda'}} \sum_{0 \leq d < q^{\lambda'}} \G_{\lambda}^{I}(h,d) \overline{\G_{\lambda}^{I'}(h,d)},
\end{align*}
where $h\in\mathbb{Z}$, $(\lambda,\lambda')\in\mathbb{N}^2$ and $(I,I')\in\mathcal{I}_k^2$.
For $\lambda,\lambda'\geq 1$ and $1\leq j\leq \min(\lambda,\lambda')$ it yields for $\Phi_{\lambda, \lambda'}^{I,I'}(h)$ the following expression
\begin{align*}
\frac{1}{q^{3j}} \sum_{\delta <q^j} \sum_{\varepsilon_1 <q^j} \sum_{\varepsilon_2 <q^j} \e\rb{-\frac{(\varepsilon_1-\varepsilon_2)h}{q^{\lambda}}}
v^j(I,\varepsilon_1,\delta) \overline{v^j(I,\varepsilon_2,\delta)} \Phi_{\lambda-j,\lambda'-j}^{T^j_{\varepsilon_1, \delta}(I), T^j_{\varepsilon_2, \delta}(I')}(h).
\end{align*}
To find this recursion, one has to split up the sum over $0 \leq d < q^{\lambda'}$ into the equivalence classes modulo $q^j$.
This identity gives rise to a vector recursion for $\Psi_{\lambda, \lambda'}(h) = \rb{\Phi_{\lambda, \lambda'}^{I,I'}(h)}_{(I,I')\in \mathcal{I}_k^2}$.
We use the recursion for $j=1$:
\begin{align*}
\Psi_{\lambda, \lambda'}(h) = \mathbf{M}(h/q^{\lambda}) \cdot \Psi_{\lambda-1, \lambda'-1}(h)
\end{align*}
where the $\abs{\mathcal{I}_k}^2 \times \abs{\mathcal{I}_k}^2$-matrix $\mathbf{M}(\beta) = (M_{(I,I'),(J,J')}(\beta))_{((I,I'),(J,J')) \in \mathcal{I}_k^2 \times \mathcal{I}_k^2}$
is independent of $\lambda$ and $\lambda'$. By construction, all absolute row sums of $\textbf{M}(\beta)$ are bounded by $1$.
It is useful to interpret these matrices as weighted directed graphs.
The vertices are the pairs $(I,I') \in \mathcal{I}_k^2$ and, starting
from each vertex, there are $q^3$ directed edges to the vertices $(\T_{\varepsilon_1, \delta}(I),\T_{\varepsilon_2, \delta}(I'))$
- where $(\delta, \varepsilon_1,\varepsilon_2)\in\{0,\ldots,q-1\}^3$ - with corresponding weights
\begin{align*}
\frac{1}{q^3} \e\rb{-\frac{(\varepsilon_1-\varepsilon_2)h}{q^{\lambda}}} v^1(I,\varepsilon_1,\delta) \overline{v^1(I',\varepsilon_2,\delta)}.
\end{align*}
Products of $j$ such matrices correspond to oriented paths of length $j$ in these graphs, which are weighted with the corresponding products.
The entries at position $((I,I'),(J,J'))$ of such product matrices correspond to the sum of weights along paths from $(I,I')$ to $(J,J')$.
Lemma~\ref{le:rec_G} allows us to describe this product of matrices directly.
\begin{lemma}
The entry $((I,I'),(J,J'))$ of $\mathbf{M}(h/q^{\lambda}) \cdot \mathbf{M}(h/q^{\lambda-1}) \cdot \ldots \cdot \mathbf{M}(h/q^{\lambda-j+1})$ equals
\begin{align*}
\frac{1}{q^{3j}} \sum_{\delta<q^j} \sum_{\varepsilon_1,\varepsilon_2 < q^j} \ind_{[T_{\varepsilon_1, \delta}^{j}(I) = J]} \ind_{[T_{\varepsilon_2, \delta}^{j}(I') = J']}
v^j(I,\varepsilon_1,\delta) \overline{v^j(I',\varepsilon_2,\delta)} \e\rb{-\frac{(\varepsilon_1 - \varepsilon_2)h}{q^{\lambda}}}.
\end{align*}
\end{lemma}
\begin{proof}
Follows directly by Lemma~\ref{le:rec_G}.
\end{proof}
This product of matrices corresponds to oriented paths of length $j$.
They can be encoded by the triple $(\varepsilon_1, \varepsilon_2, \delta)$ and they correspond to a path from $(I,I')$ to
$(T_{\varepsilon_1, \delta}^{j}(I), T_{\varepsilon_2, \delta}^{j}(I'))$ with unimodular weight
$v^j(I,\varepsilon_1,\delta) \overline{v^j(I',\varepsilon_2,\delta)} \e\rb{-\frac{(\varepsilon_1 - \varepsilon_2)h}{q^{\lambda}}}$.
To simplify further computations we define
\begin{align*}
n_{(I,I'),(J,J')}^{(j)} := \sum_{\delta<q^j} \sum_{\varepsilon_1,\varepsilon_2 < q^j} \ind_{[T_{\varepsilon_1, \delta}^{j}(I) = J]} \ind_{[T_{\varepsilon_2, \delta}^{j}(I') = J']}
\end{align*}
and find directly that
\begin{align*}
\sum_{(J,J') \in \mathcal{I}_{k}^2} n_{(I,I'),(J,J')}^{(j)} = q^{3j}
\end{align*}
and the absolute value of the entry $((I,I'),(J,J'))$ of $\mathbf{M}(h/q^{\lambda}) \cdot \mathbf{M}(h/q^{\lambda-1}) \cdot \ldots \cdot \mathbf{M}(h/q^{\lambda-j+1})$
is bounded by $n_{(I,I'),(J,J')}^{(j)} q^{-3j}$.
In order to prove Proposition~\ref{Pro1}, we will use Lemma~\ref{le:matrixnorm} uniformly for $h$ with $\mathbf{M}_l = \mathbf{M}(h/q^l)$.
Therefore, we need to check Conditions~\eqref{matrixNorm1} and~\eqref{matrixNorm2}.
Note that, since $\frac{1}{2} \lambda \leq \lambda' \leq \lambda$, we have
\begin{align*}
\Psi_{\lambda, \lambda'}(h) = \mathbf{M}(h/q^{\lambda}) \cdots \mathbf{M}(h/q^{\lambda-\lambda'+1}) \Psi_{\lambda-\lambda',0}(h).
\end{align*}
\begin{lemma}
The matrices $\mathbf{M}_{l}$ defined above fulfill Condition~\eqref{matrixNorm1} of Lemma~\ref{le:matrixnorm}.
\end{lemma}
\begin{proof}
We need to show that there exists an integer $m_0 \geq 1$ such that every product
\begin{align*}
\mathbf{A} = (A_{(I,I'),(J,J')})_{((I,I'),(J,J')) \in \mathcal{I}_k^2 \times \mathcal{I}_k^2}
\end{align*}
of $m_0$ consecutive matrices $\mathbf{M}_l=\mathbf{M}(h/q^l)$ verifies Condition~\eqref{matrixNorm1} of Lemma~\ref{le:matrixnorm}.
We define $m_0 = m-1 + \ceil{\log_{q}(k+1)}$.
Since every $I \in \mathcal{I}_k$ satisfies $i_{\ell} \leq kq^{m-1}-1 < q^{m_0}$, it follows directly from the definition that $T_{0,0}^{m_0}(I) = \mathbf{0}$
for all $I \in \mathcal{I}_k$.
In the graph interpretation this means that for every vertex $(I,I')$ there is a path of length $m_0$ from $(I,I')$ to $(\mathbf{0},\mathbf{0})$.
Fix a row indexed by $(I,I')$ in the matrix $\mathbf{A}$.
We already showed that the entry $A_{(I,I'),({\mathbf 0},{\mathbf 0})}$ is the sum of at least one term of absolute value $q^{-3 m_0}$, i.e.,
$n_{(I,I'),(\mathbf{0},\mathbf{0})}^{(m_0)} \geq 1$.
There are two possible cases. If the absolute row sum is at most $1 - \eta$ with $\eta \leq q^{-3m_0}$, then we are done.
In case the absolute row sum is strictly greater than $1 - \eta$,
we show that $|A_{(I,I'),({\mathbf 0},{\mathbf 0})}| \ge q^{-3m_0}/2$:
The inequality $|A_{(I,I'),({\mathbf 0},{\mathbf 0})}| < q^{-3m_0}/2$ implies that $A_{(I,I'),({\mathbf 0},{\mathbf 0})}$
is the sum of at least two terms of absolute value $q^{-3m_0}$, i.e. $n_{(I,I'),(\mathbf{0},\mathbf{0})}^{(m_0)} \geq 2$.
Thus, we can use the triangle inequality to bound the absolute row sum by
\begin{displaymath}
\sum_{(J,J')} | A_{(I,I'),(J,J')} | \leq \abs{A_{(I,I'),(\mathbf{0},\mathbf{0})}} + q^{-3 m_0} \sum_{(J,J') \neq (\mathbf{0},\mathbf{0})} n_{(I,I'),(J,J')}^{(m_0)}.
\end{displaymath}
Since
\begin{align*}
\sum_{(J,J')} n_{(I,I'),(J,J')}^{(m_0)} = q^{3 m_0}
\end{align*}
we find
\begin{align*}
\sum_{(J,J')} | A_{(I,I'),(J,J')} | &\leq \abs{A_{(I,I'),(\mathbf{0},\mathbf{0})}} + 1 - q^{-3 m_0} n_{(I,I'),(\mathbf{0},\mathbf{0})}^{(m_0)}\\
&\leq q^{-3 m_0}/2 + 1 - 2 q^{-3 m_0} < 1 - q^{-3 m_0}.
\end{align*}
This contradicts the assumption that the absolute row sum is strictly greater than
\begin{align*}
1 - \eta \geq 1-q^{-3m_0}.
\end{align*}
Consequently, we find
\begin{align*}
|A_{(I,I'),(\mathbf{0},\mathbf{0})}| \geq c_0 \text{ for } c_0 = q^{-3m_0}/2.
\end{align*}
\end{proof}
\begin{lemma}
The matrices $\mathbf{M}_{l}$ fulfill Condition~\eqref{matrixNorm2} of Lemma~\ref{le:matrixnorm}.
\end{lemma}
\begin{proof}
We need to show that there exists an integer $m_1\geq 1$ such
that for every product
\begin{align*}
\mathbf{B}=(B_{(I,I'),(J,J')})_{((I,I'),(J,J'))\in\mathcal{I}_k^2\times\mathcal{I}_k^2}
\end{align*}
of $m_1$ consecutive matrices
${\mathbf M}_l = {\mathbf M}(h/q^l)$
the absolute row-sum of the first row is bounded by $1-\eta$.
We concentrate on the entry $B_{({\mathbf 0},{\mathbf 0}),({\mathbf 0},{\mathbf 0})}$,
i.e. we consider all possible paths from
$({\mathbf 0},{\mathbf 0})$ to $({\mathbf 0},{\mathbf 0})$
of length $m_1$ in the corresponding graph and show that a positive
saving for the absolute row sum is just due to the structure of this entry.
Since $T_{0,0}^{m+\floor{\log_q(k)}}({\mathbf 0}) = T_{1,0}^{m+\floor{\log_q(k)}} ({\mathbf 0}) = {\mathbf 0}$,
we have at least two paths from $(\mathbf{0},\mathbf{0})$ to $(\mathbf{0},\mathbf{0})$ and it follows
that the entry $B_{({\mathbf 0},{\mathbf 0}),({\mathbf 0},{\mathbf 0})}$
is certainly a sum of $k_0 = k_0(m_1)\ge 2$ terms of absolute value $q^{-3m_1}$
(for every $m_1 \ge m + \floor{\log_q(k)}$).
This means that there are $k_0\ge 2$ paths
from $({\mathbf 0},{\mathbf 0})$ to $({\mathbf 0},{\mathbf 0})$
of length $m_1$ in the corresponding graph, or in other words $n_{({\mathbf 0},{\mathbf 0}),({\mathbf 0},{\mathbf 0})}^{(m_1)} = k_0(m_1) \geq 2$.
Our goal is to construct two paths $(\varepsilon_1^i, \varepsilon_2^i,\delta^i)$ from $(\mathbf{0}, \mathbf{0})$ to $(\mathbf{0},\mathbf{0})$ such that
\begin{align*}
\abs{\sum_{i=1}^{2} v^{m_1}(\mathbf{0},\varepsilon_1^i,\delta^i) \overline{v^{m_1}(\mathbf{0},\varepsilon_2^i,\delta^i)}
\e\rb{-\frac{(\varepsilon_1^i - \varepsilon_2^i)h}{q^{\lambda}}}}
\leq 2 - \eta
\end{align*}
holds for all $h\in \mathbb{Z}$.
We construct a path from $\mathbf 0$ to $(q^{m-1}-1,\ldots,q^{m-1}-1,q^{m-1},\ldots,q^{m-1}) =: I_0 \in \mathcal{I}_{k}$, where exactly the first $n_0+1$ coordinates equal $q^{m-1}-1$
(and $n_0 = \min \{n \in \mathbb{N}: \alpha_n \neq 0\}$).
We set $n_1 = \floor{ \log_{q}(k)} + m$ and find the following lemma.
\begin{lemma}\label{le:pathI0}
Let $n_0,n_1$ and $I_0$ be as above.
Then
\begin{align*}
T^{n_1}_{q^{n_1}-n_0-1, 1}(\mathbf{0}) = I_0.
\end{align*}
\end{lemma}
\begin{proof}
This follows directly by the definitions and simple computations.
\end{proof}
By applying Lemma~\ref{le:pathI0} we find a transformation from $\mathbf 0$ to $I_0$.
This gives a path from $(\mathbf{0},\mathbf{0})$ to $(I_0,I_0)$ by applying this transformation component-wise.
We concatenate this path with another path $(\mathbf{e}_1, \mathbf{e}_2,0)$ of length $n_2 = 3m-1$ where $\mathbf{e}_i < q^{2m-1}$.
The weight of the concatenation of these two paths equals
\begin{align*}
&v^{n_1}(\mathbf{0},q^{n_1}-n_0-1,1) v^{n_2}(I_0,\mathbf{e}_1,0)\\
&\qquad \qquad \cdot \overline{v^{n_1}(\mathbf{0},q^{n_1}-n_0-1,1)} \overline{v^{n_2}(I_0,\mathbf{e}_2,0)} \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}\\
&\qquad = v^{n_2}(I_0,\mathbf{e}_1,0)\overline{v^{n_2}(I_0,\mathbf{e}_2,0)}\e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}.
\end{align*}
We denote by $I_{0|\ell}$ the $\ell$-th coordinate of $I_0$ and see that
\begin{align*}
T_{\mathbf{e}_i, 0}^{3m-1}(I_0) &= \rb{\floor{\frac{I_{0|\ell} + q^{m-1}\mathbf{e}_i}{q^{3m-1}}}}_{\ell \in \{0\ldots k-1\}}\\
&\leq \rb{\floor{\frac{q^{m-1} + q^{m-1}(q^{2m-1}-1)}{q^{3m-1}}}}_{\ell \in \{0\ldots k-1\}} \\
&= \rb{\floor{\frac{q^{m-1} \cdot q^{2m-1}}{q^{3m-1}}}}_{\ell \in \{0\ldots k-1\}}= \mathbf{0}.
\end{align*}
Thus, we have found for each $\mathbf{e}_1,\mathbf{e}_2 < q^{2m-1}$ a path from $(\mathbf{0},\mathbf{0})$ to $(\mathbf{0},\mathbf{0})$.
We can use the special structure of $I_0$ to make the weight of this path more explicit:
At first, we note that
\begin{align*}
\sum_{\ell = 0}^{n_0} \alpha_{\ell} = \alpha_{n_0}
\end{align*}
by the definition of $n_0$.
Furthermore, we use the condition $K = \sum_{\ell} \alpha_{\ell} \in \mathbb{Z}$ to find
\begin{align*}
\sum_{\ell = n_0+1}^{k-1} \alpha_{\ell} \equiv -\alpha_{n_0} (\bmod 1).
\end{align*}
We find by the definition of $v$ that for each $\mathbf{e}<q^{2m-1}$,
\begin{align*}
v^{3m-1}(I_0,\mathbf{e},0) &= \e\rb{\sum_{\ell = 0}^{k-1} \alpha_{\ell} b_{3m-1}(q^{m-1} \mathbf{e} + I_{0|\ell})}\\
&= \e\rb{\alpha_{n_0} \rb{b_{3m-1}(q^{m-1} \mathbf{e} + q^{m-1}-1) - b_{3m-1}(q^{m-1}\mathbf{e} + q^{m-1})}}\\
&= \e\rb{\alpha_{n_0} \rb{b(q^{m-1} \mathbf{e} + q^{m-1}-1) - b(q^{m-1}(\mathbf{e} + 1))}}.
\end{align*}
We find by Corollary~\ref{co:diff_b} that there exist $\mathbf{e}_1,\mathbf{e}_2<q^{2m-1}$ such that
\begin{align*}
&b(q^{m-1}(\mathbf{e}_1 + 1)-1) - b(q^{m-1}(\mathbf{e}_1+1))\\
&\qquad - b(q^{m-1}(\mathbf{e}_2 + 1)-1) + b(q^{m-1}(\mathbf{e}_2+1)) = d
\end{align*}
and $\alpha_{n_0} d \not \in \mathbb{Z}$.
We now compare the following two paths from $(\mathbf{0},\mathbf{0})$ to $(\mathbf{0},\mathbf{0})$ of length $m_1 = n_1+n_2 = \floor{\log_q(k)} + 4m-1$:\\
\begin{itemize}
\item $(\mathbf{e}_1q^{n_1} + q^{n_1}-n_0-1,\mathbf{e}_2q^{n_1} + q^{n_1}-n_0-1,1)$:
We split up this path into the path of length $n_1$ from $(\mathbf{0},\mathbf{0})$ to $(I_0,I_0)$ and the path of length $n_2$ from $(I_0,I_0)$ to $(\mathbf{0},\mathbf{0})$:
The first path can be described by the triple $(q^{n_1}-n_0-1, q^{n_1}-n_0-1, 1)$ and its weight is obviously $1$.\\
The second path - i.e. the path from $(I_0,I_0)$ to $(\mathbf{0},\mathbf{0})$ - can be described by the triple $(\mathbf{e}_1,\mathbf{e}_2,0)$ and its weight equals
\begin{align*}
&v^{n_2}(I_0,\mathbf{e}_1,0) \overline{v^{n_2}(I_0,\mathbf{e}_2,0)} \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}} \\
&\qquad= \e\rb{\alpha_{n_0} \rb{b(q^{m-1} (\mathbf{e}_1+1)-1) - b(q^{m-1}(\mathbf{e}_1 + 1))}} \\
&\qquad \qquad \overline{\e\rb{\alpha_{n_0} \rb{b(q^{m-1} (\mathbf{e}_2+1)-1) - b(q^{m-1}(\mathbf{e}_2 + 1))}}}\e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}\\
&\qquad = \e(\alpha_{n_0} d) \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}.
\end{align*}
Thus, the concatenated path from $(\mathbf{0},\mathbf{0})$ to $(\mathbf{0},\mathbf{0})$ has overall weight
\begin{align*}
\e(\alpha_{n_0} d) \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}.
\end{align*}
\item $(\mathbf{e}_1 q^{n_1}, \mathbf{e}_2 q^{n_1}, 0)$: we compute directly the weight of this path:
\begin{align*}
&v^{m_1}(\mathbf{0},\mathbf{e}_1 q^{n_1},0) \overline{v^{m_1}(\mathbf{0},\mathbf{e}_2 q^{n_1},0)} \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}\\
&\qquad = \e\rb{\sum_{\ell=0}^{k-1} \alpha_{\ell} b_{m_1}(q^{m-1}\mathbf{e}_1 q^{n_1}) - \sum_{\ell=0}^{k-1} \alpha_{\ell} b_{m_1}(q^{m-1}\mathbf{e}_2 q^{n_1})} \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}\\
&\qquad = \e\rb{K (b_{m_1}(q^{m-1}\mathbf{e}_1 q^{n_1}) - b_{m_1}(q^{m-1}\mathbf{e}_2 q^{n_1}))} \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}\\
&\qquad = \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}.
\end{align*}
\end{itemize}
We recall quickly that $\alpha_{\ell} \in \{\frac{0}{m'},\ldots,\frac{m'-1}{m'}\}$ for all $\ell \in \{0,\ldots,k-1\}$ and, therefore, also
$\alpha_{n_0} \in \{\frac{0}{m'},\ldots,\frac{m'-1}{m'}\}$.
We finally see that
\begin{align*}
|B_{({\mathbf 0},{\mathbf 0}),({\mathbf 0},{\mathbf 0})}| &\le \rb{k_0-2+\abs{\e(\alpha_{n_0} d) \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}} + \e\rb{-\frac{(\mathbf{e}_1-\mathbf{e}_2)h}{q^{\lambda-n_1}}}}} q^{-3m_1}\\
&= (k_0-2 +|1+\e(\alpha_{n_0} d)|)q^{-3m_1} \\
&= (k_0-2 +2\abs{\cos\rb{\pi \alpha_{n_0}d}})q^{-3m_1} \\
&= \rb{k_0-2 +2\abs{1-2\rb{\sin\rb{\frac{\pi \alpha_{n_0}d}{2}}}^2}}q^{-3m_1} \\
&\leq \rb{k_0 - 4 \rb{\sin\rb{\frac{\pi}{2m'}}}^2}q^{-3m_1}.
\end{align*}
Thus we have
\begin{align*}
\sum_{(J,J')} | B_{({\mathbf 0},{\mathbf 0}),(J,J')} |
&\leq
\rb{k_0-4 \rb{\sin\rb{\frac{\pi}{2m'}}}^2} q^{-3m_1} + (1-k_0 q^{-3m_1}) \\
&\leq
1 - 4 \rb{\sin\rb{\frac{\pi}{2m'}}}^2 \cdot q^{-3m_1} .
\end{align*}
Therefore condition~\eqref{matrixNorm2} of Lemma~\ref{le:matrixnorm}
is verified with $m_1 = \floor{\log_q(k)} + 4m-1$ and $\eta = 4 \rb{\sin\rb{\frac{\pi}{2m'}}}^2 q^{-3m_1}\geq 4\rb{\sin\rb{\frac{\pi}{2m'}}}^2 k^{-3} q^{-12m+3}>0$.
\end{proof}
At the end of this section, we want to recall the important steps of the proof of Proposition~\ref{Pro1}.
At first we observe that
\begin{align*}
\frac{1}{q^{\lambda'}} \sum_{0\leq d < q^{\lambda'}}|G_{\lambda}^{I}(h,d)|^2 = \Phi_{\lambda,\lambda'}^{I,I}(h).
\end{align*}
Thus Proposition~\ref{Pro1} is equivalent to $\Phi_{\lambda,\lambda'}^{I,I}(h) \ll q^{-\eta \lambda}$.
Next we considered the vector $\Psi_{\lambda,\lambda'}(h) = \rb{\Phi_{\lambda,\lambda'}^{I,I'}(h)}_{(I,I') \in \mathcal{I}_k^2}$ and found the recursion
\begin{align*}
\Psi_{\lambda,\lambda'}(h) = \mathbf{M}(h/q^{\lambda}) \cdots \mathbf{M}(h/q^{\lambda-\lambda'+1}) \Psi_{\lambda-\lambda',0}(h).
\end{align*}
Then we defined $\mathbf{M}_{\ell} := \mathbf{M}(h/q^{\ell})$ and showed that we can apply Lemma~\ref{le:matrixnorm}.
Therefore we know that~-- since $\abs{\Phi_{\lambda-\lambda',0}^{I,I'}(h)} \leq 1$~--
\begin{align*}
|\Phi_{\lambda,\lambda'}^{I,I'}(h)| \leq \norminf{\mathbf{M}_{\lambda}\cdots \mathbf{M}_{\lambda-\lambda'+1}} \leq C q^{-\delta \lambda'} \leq C q^{- \delta \lambda/2}
\end{align*}
with $C$ and $\delta$ obtained by Lemma~\ref{le:matrixnorm}. Thus we know that $\Phi_{\lambda,\lambda'}^{I,I'}(h) \ll q^{-\eta \lambda}$ with $\eta = \delta/2$ uniformly for all $h$.
This concludes the proof of Proposition~\ref{Pro1}.
\subsection{Proof of Proposition \ref{Pro2}}\label{sec:Pro2}
We start again by reducing the problem from $H_{\lambda'}^{I'}(h,d)$ to $G_{\lambda}^{I}(h,d)$, for possibly different values of $\lambda,\lambda'$ and $I,I'$.
\begin{proposition}\label{Pro2_new}
For $K \not \equiv 0 (\bmod 1)$ there exists $\eta>0$ such that for any $I \in \mathcal{I}_{k}$
\begin{align}
\abs{G_{\lambda}^{I}(h,d)} \ll q^{-\eta L} \max_{J \in \mathcal{I}_k} \abs{G_{\lambda-L}^{J}(h,\floor{d/q^{L}})}
\end{align}
holds uniformly for all non-negative integers $h,d$ and $L$.
\end{proposition}
\begin{lemma}
Proposition~\ref{Pro2_new} implies Proposition~\ref{Pro2}.
\end{lemma}
\begin{proof}
Follows directly by \eqref{eq:H-recursion}.
\end{proof}
We assume from now on that $K \notin \mathbb{Z}$ holds.
We formulate Lemma~\ref{le:rec_G} as a matrix vector multiplication:
\begin{align*}
G_{\lambda}(h,q^jd+\delta) = \frac{1}{q^j} M^{j}_{\delta}\rb{\e\rb{-\frac{h}{q^{\lambda}}}}G_{\lambda-j}\rb{h,d}
\end{align*}
where for any $\delta \in \{0,\ldots, q^j-1\}$ and $z \in \mathbb{U}$ we have
\begin{align*}
M^{j}_{\delta}(z) = \sum_{\varepsilon = 0}^{q^j-1}(\mathbf{1}_{[J = T^{j}_{\varepsilon, \delta}(I)]} v^j(I,\varepsilon,\delta) z^{\varepsilon})_{(I,J) \in \mathcal{I}_k^2}.
\end{align*}
To prove Proposition~\ref{Pro2_new} we aim to show that
\begin{align}\label{eq:M_saving}
\exists m_1\in \mathbb{N},\ \eta' > 0 \text{ such that } \norm{M^{m_1}_{\delta}(z)}_{\infty} \leq q^{m_1}-\eta' \text{ for all } \delta<q^{m_1} \text{ and } z\in \mathbb{U}.
\end{align}
Indeed, we find that this is already sufficient to show Proposition~\ref{Pro2_new}.
\begin{lemma}
\eqref{eq:M_saving} implies Proposition~\ref{Pro2_new}.
\end{lemma}
\begin{proof}
We first note that
\begin{align*}
\norm{M^{j}_{\delta}(z)}_{\infty} \leq q^{j}
\end{align*}
holds for all $z \in \mathbb{U}$, $j\in \mathbb{N}$ and $\delta <q^j$ by definition.\\
Next we split the digital expansion of $d\bmod q^{L}$ - read from left to right - into $\floor{L/m_1}$ parts of length $m_1$ and possibly one part of length $L\bmod m_1$.
We denote the first parts by $\delta_1,\ldots,\delta_{\floor{L/m_1}}$ and the last part by $\delta_0$, i.e.,
\begin{align*}
d \bmod q^{L} = q^{L \bmod m_1}\rb{\sum_{j=1}^{\floor{L/m_1}} \delta_j \cdot q^{m_1(\floor{L/m_1}-j)}} + \delta_0.
\end{align*}
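For illustration, with $q = 10$, $m_1 = 2$, $L = 5$ and $d = 12345$, this splitting reads $\delta_1 = 12$, $\delta_2 = 34$ and $\delta_0 = 5$, and indeed
\begin{align*}
12345 = 10^{1}\rb{12 \cdot 10^{2} + 34} + 5.
\end{align*}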
Thus we find
\begin{align*}
\max_{I\in \mathcal{I}_k} &\abs{G_{\lambda}^{I}(h,d)} = \norm{G_{\lambda}(h,d)}_{\infty}\\
&\leq \frac{1}{q^{L}}\max_{z\in \mathbb{U}}\norm{M^{L}_{d}(z)}_{\infty} \cdot \norm{G_{\lambda-L}(h,\floor{d/q^{L}})}_{\infty}\\
&\leq \frac{1}{q^{L}}\prod_{j=1}^{\floor{L/m_1}} \max_{z\in\mathbb{U}}\norm{M^{m_1}_{\delta_j}(z^{q^{m_1(j-1)}})}_{\infty} \cdot q^{(L\bmod m_1)} \cdot \norm{G_{\lambda-L}(h,\floor{d/q^{L}})}_{\infty}\\
&\leq \frac{1}{q^{L}}(q^{m_1}-\eta')^{\floor{L/m_1}} q^{(L\bmod m_1)} \cdot \norm{G_{\lambda-L}(h,\floor{d/q^{L}})}_{\infty}\\
&\ll q^{-L\eta}\cdot \norm{G_{\lambda-L}(h,\floor{d/q^{L}})}_{\infty}
\end{align*}
where $\eta = \frac{\eta'}{q^{m_1}\log(q^{m_1})}>0$.
\end{proof}
Throughout the rest of this section, we aim to prove \eqref{eq:M_saving}.
To this end, we try to find for each $I\in \mathcal{I}_{k}$ and $\delta<q^{m_1}$ a pair $(\varepsilon_1,\varepsilon_2)$ and $m_1'\leq m_1$ such that the following holds for all $z\in \mathbb{U}$:
\begin{align}\label{eq:goal_eps_12}
\begin{split}
&T_{\varepsilon_i,\delta}^{m_1'}(I) = T_{\varepsilon_i+1,\delta}^{m_1'}(I),\\
&\abs{v^{m_1'}(I,\varepsilon_1,\delta) + z v^{m_1'}(I,\varepsilon_1+1,\delta)} + \abs{v^{m_1'}(I,\varepsilon_2,\delta) + z v^{m_1'}(I,\varepsilon_2+1,\delta)} \leq 4-\eta'.
\end{split}
\end{align}
Let us assume for now that \eqref{eq:goal_eps_12} holds.
Indeed we find
\begin{align*}
\max_{z\in \mathbb{U}}\norm{M^{m_1'}_{\delta}(z)}_{\infty} &= \max_{I\in\mathcal{I}_{k}} \max_{z\in \mathbb{U}} \sum_{J\in\mathcal{I}_k} \abs{\sum_{\varepsilon<q^{m_1'}} \ind_{[T_{\varepsilon,\delta}^{m_1'}(I)=J]} z^{\varepsilon} v^{m_1'}(I,\varepsilon,\delta)}.
\end{align*}
Moreover, we find for each $I$ some $\varepsilon_1,\varepsilon_2$ fulfilling \eqref{eq:goal_eps_12}.
This gives
\begin{align*}
&\max_{z\in \mathbb{U}} \sum_{J\in\mathcal{I}_k} \abs{\sum_{\varepsilon<q^{m_1'}} \ind_{[T_{\varepsilon,\delta}^{m_1'}(I)=J]} z^{\varepsilon} v^{m_1'}(I,\varepsilon,\delta)}\\
&\qquad \leq \rb{q^{m_1'}-4} + \sum_{i=1}^{2} \abs{\sum_{j=0}^{1}z^{\varepsilon_i+j} v^{m'_1}(I,\varepsilon_i+j,\delta) }\\
&\qquad \leq q^{m'_1} - \eta'.
\end{align*}
Thus, we find in total
\begin{align*}
\norm{M^{m_1}_{\delta}(z)}_{\infty} \leq q^{m_1-m'_1} (q^{m_1'}-\eta') \leq q^{m_1} - \eta'.
\end{align*}
It just remains to find $\varepsilon_1,\varepsilon_2,m'_1$ fulfilling \eqref{eq:goal_eps_12} and this turns out to be a rather tricky task.
We now fix some arbitrary $I\in \mathcal{I}_{k}$ and $d = \delta \in \mathbb{N}$.
We start by defining for $0\leq x \leq (4m-2)k$ and $c \in \mathbb{N}$
\begin{align*}
M_{x,c} = M_{x,(c\bmod q^{x})}:= \{\ell <k : \floor{i_{\ell}/q^{m-1}} + d \ell \equiv c (\bmod q^{x})\}
\end{align*}
and show some basic properties of $M_{x,c}$.
\begin{lemma}\label{le:c_0}
For every $0\leq x\leq (4m-2)k$ there exists $c_0 < q^{x}$ such that
\begin{align*}
\sum_{\ell \in M_{x,c_0}} \alpha_{\ell} \not \in \mathbb{Z}.
\end{align*}
\end{lemma}
\begin{proof}
One finds easily that
\begin{align*}
\{0,\ldots,k-1\} = \bigcup_{c<q^{x}} M_{x,c},
\end{align*}
which means that $\{M_{x,c}: c<q^{x}\}$ is a partition of $\{0,\ldots,k-1\}$ for each $x$.
Thus, we find for every $x$
\begin{align*}
\sum_{c} \sum_{\ell \in M_{x,c}} \alpha_{\ell} = \sum_{\ell<k} \alpha_{\ell} = K \not \in \mathbb{Z}
\end{align*}
and the claim follows, as at least one of the inner sums can therefore not be an integer.
\end{proof}
\begin{lemma}\label{le:x_0}
Let $d<q^{(4m-2)k}$ and $I \in \mathcal{I}_k$.
Then, there exists $0\leq x_0\leq (4m-2)(k-1)$ such that for each $c<q^{x_0}$ there exists $c^{+} < q^{x_0+(4m-2)}$ such that
\begin{align*}
M_{x_0,c} = M_{x_0+(4m-2),c^{+}}.
\end{align*}
\end{lemma}
\begin{remark}
This is equivalent to the statement that
\begin{align*}
\floor{i_{\ell_1}/q^{m-1}} + d \ell_1 \equiv \floor{i_{\ell_2}/q^{m-1}} + d \ell_2 (\bmod q^{x_0})
\end{align*}
implies
\begin{align*}
\floor{i_{\ell_1}/q^{m-1}} + d \ell_1 \equiv \floor{i_{\ell_2}/q^{m-1}} + d \ell_2 (\bmod q^{x_0+4m-2})
\end{align*}
\end{remark}
\begin{proof}
We have already seen that $\{M_{x,c}: c<q^{x}\}$ is a partition of $\{0,\ldots,k-1\}$.
Furthermore, we find for $0\leq x\leq(4m-2)k$ and $c<q^{x}$ that
\begin{align*}
M_{x,c} = \bigcup_{c'<q^{4m-2}} M_{x+(4m-2),c+q^{x}c'}.
\end{align*}
This implies that $\{M_{x+4m-2,c}: c<q^{x+4m-2}\}$ is a refinement of $\{M_{x,c}: c<q^{x}\}$ and we find
\begin{align*}
&\{M_{(4m-2)\cdot 0,c}: c<1\} \geq \{M_{(4m-2)\cdot 1,c}: c<q^{4m-2}\}\\
&\qquad \geq \ldots \geq \{M_{(4m-2)k,c}: c<q^{(4m-2)k}\}.
\end{align*}
It is well known that the maximal length of a chain in the set of partitions of $\{0,\ldots,k-1\}$ is $k$.
This means that there exists $x'_0 \leq k-1$ such that $\{M_{(4m-2)x'_0,c}: c<q^{(4m-2)x'_0}\} = \{M_{(4m-2)(x'_0+1),c'}: c'<q^{(4m-2)(x'_0+1)}\}$, and the claim follows with $x_0 = (4m-2)x'_0$.
\end{proof}
Furthermore, we define
\begin{align*}
\beta_{x,c} := \sum_{\ell \in M_{x,c}} \alpha_{\ell}.
\end{align*}
We can now choose $m_1 := (4m-2)k$, $m'_1 := x_0+(4m-2)$ where $x_0$ is given by Lemma~\ref{le:x_0}.
We consider $c_0<q^{x_0}$ and $c^{+}_0$ provided by Lemma~\ref{le:c_0} and Lemma~\ref{le:x_0}
and know that $\beta_{x_0,c_0} \notin \mathbb{Z}$.
Therefore we apply Corollary~\ref{co:diff_b} and find $\mathbf{e}_1,\mathbf{e}_2<q^{2m-1}$ and $d_0\in\mathbb{N}$ such that
\begin{align*}
&b(q^{m-1}(\mathbf{e}_1 + 1)-1) - b(q^{m-1}(\mathbf{e}_1+1))\\
&\qquad - b(q^{m-1}(\mathbf{e}_2 + 1)-1) + b(q^{m-1}(\mathbf{e}_2+1)) = d_0
\end{align*}
and $d_0\beta_{x_0,c_0}\notin \mathbb{Z}$ (we write $d_0$ since $d$ already denotes the fixed shift).
We are now able to define
\begin{align*}
\varepsilon_1 = (q^{x_0+m-1}(\mathbf{e}_1+1)-c_0^{+}-1)\bmod q^{x_0+4m-2}\\
\varepsilon_2 = (q^{x_0+m-1}(\mathbf{e}_2+1)-c_0^{+}-1)\bmod q^{x_0+4m-2}.
\end{align*}
It just remains to check \eqref{eq:goal_eps_12} which we split up into the following two lemmata.
\begin{lemma}
Let $x_0,\varepsilon_i$ be defined as above.
Then
\begin{align*}
T_{\varepsilon_i,d}^{x_0+4m-2}(I) = T_{\varepsilon_i+1,d}^{x_0+4m-2}(I)
\end{align*}
holds.
\end{lemma}
\begin{proof}
We need to show that
\begin{align}\label{eq:equal_TI}
\floor{\frac{i_{\ell} + q^{m-1}(\ell d + \varepsilon_i)}{q^{x_0+4m-2}}} = \floor{\frac{i_{\ell} + q^{m-1}(\ell d + \varepsilon_i +1)}{q^{x_0+4m-2}}}
\end{align}
holds for all $\ell<k$ and $i = 1,2$. We know that $\ell$ belongs to $M_{x_0+4m-2,c^{+}}$ for some $c<q^{x_0}$.
Thus, we find for $j=0,1$
\begin{align*}
\floor{\frac{i_{\ell} + q^{m-1}(\ell d + \varepsilon_i +j)}{q^{x_0+4m-2}}} &= \floor{\frac{(i_{\ell} \bmod q^{m-1}) + q^{m-1}(c^{+} + \varepsilon_i +j)}{q^{x_0+4m-2}}}\\
&= \floor{\frac{c^{+} + \varepsilon_i+j}{q^{x_0+3m-1}}}
\end{align*}
Therefore, \eqref{eq:equal_TI} does hold, unless
\begin{align*}
c^{+} + \varepsilon_i +1 \equiv 0 (\bmod q^{x_0+3m-1}).
\end{align*}
We find
\begin{align*}
c^{+} + \varepsilon_i +1 \equiv c^{+} + q^{x_0+m-1}(\mathbf{e}_i+1)-c_0^{+} (\bmod q^{x_0+3m-1}).
\end{align*}
We first consider the case $c\neq c_0$:
\begin{align*}
c^{+} + \varepsilon_i +1 \equiv c - c_0 \not \equiv 0 (\bmod q^{x_0})
\end{align*}
For $c = c_0$:
\begin{align*}
c_0^{+} + \varepsilon_i +1 \equiv q^{x_0+m-1}(\mathbf{e}_i+1) (\bmod q^{x_0+3m-1})
\end{align*}
However
\begin{align*}
\mathbf{e}_i+1 \not \equiv 0 (\bmod q^{2m})
\end{align*}
as $\mathbf{e}_i <q^{2m-1}$.
Thus, \eqref{eq:equal_TI} holds.
\end{proof}
\begin{lemma}
There exists $\eta'>0$ depending only on $m'$ such that, for $x_0$ and $\varepsilon_i$ defined as above,
\begin{align}\label{eq:eps_saving}
\sum_{i=1}^{2}\abs{v^{x_0+4m-2}(I,\varepsilon_i,\delta) + z \cdot v^{x_0+4m-2}(I,\varepsilon_i+1,\delta)} \leq 4-\eta'
\end{align}
holds for all $z\in \mathbb{U}$.
\end{lemma}
\begin{proof}
We start by computing the weights $v^{x_0+4m-2}(I,\varepsilon_i+j,\delta)$.
For arbitrary $\varepsilon < q^{x_0+4m-2}$, we find:
\begin{align*}
v^{x_0+4m-2}&(I,\varepsilon,d) = \prod_{\ell<k} \e(\alpha_{\ell} b_{x_0+4m-2}(i_{\ell} + q^{m-1}(\varepsilon + \ell d)))\\
&= \prod_{\ell<k} \e(\alpha_{\ell} b_{m-1}(i_{\ell} + q^{m-1}(\varepsilon + \ell d))) \e\rb{\alpha_{\ell} b_{x_0+3m-1}\rb{\floor{i_{\ell}/q^{m-1}} + \varepsilon + \ell d}}\\
&= \e(g(\varepsilon)) \cdot \prod_{\ell<k} \e\rb{\alpha_{\ell} b_{x_0+3m-1}\rb{\floor{i_{\ell}/q^{m-1}} + \varepsilon + \ell d}}.
\end{align*}
where
\begin{align*}
g(\varepsilon) := \sum_{\ell<k} \alpha_{\ell} b_{m-1}(i_{\ell} + q^{m-1}(\varepsilon + \ell d)).
\end{align*}
Note that $g(\varepsilon)$ only depends on $\varepsilon \bmod q^{m-1}$.
We can describe this product by using the weights $\beta$ defined above.
\begin{align*}
v^{x_0+4m-2}(I,\varepsilon,d) &= \e(g(\varepsilon)) \cdot \prod_{c'<q^{x_0+4m-2}} \e\rb{\beta_{x_0+4m-2,c'} \cdot b_{x_0+3m-1} \rb{c' + \varepsilon}}.
\end{align*}
Furthermore, we can rewrite every $c'<q^{x_0+4m-2}$ for which $\beta_{x_0+4m-2,c'} \not = 0$ as some $c^{+}$ where $c<q^{x_0}$.
This gives then
\begin{align*}
v^{x_0+4m-2}&(I,\varepsilon,d) = \e(g(\varepsilon)) \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{x_0+3m-1} \rb{c^{+} + \varepsilon}}\\
&= \e(g(\varepsilon)) \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{x_0}(c^{+} + \varepsilon)} \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{3m-1}\rb{\floor{\frac{c^{+}+\varepsilon}{q^{x_0}}}}}
\end{align*}
Thus we find for $\varepsilon = \varepsilon_i+j$ that:
\begin{align*}
v^{x_0+4m-2}&(I,\varepsilon_i+j,d) = \e(g(\varepsilon_i+j)) \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{x_0}(c^{+} + \varepsilon_i+j)}\\
&\qquad \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{3m-1}\rb{\floor{\frac{c^{+}+\varepsilon_i+j}{q^{x_0}}}}}\\
&= \e(g(-c_0^{+}-1+j)) \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{x_0}(c^{+} -c_0^{+}-1+j)} \\
&\qquad \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{3m-1}\rb{q^{m-1}(\mathbf{e}_i+1)+\floor{\frac{c^{+}-c_0^{+}-1+j}{q^{x_0}}}}}\\
&= \e(g(-c_0^{+}-1+j)) \cdot \prod_{c<q^{x_0}} \e\rb{\beta_{x_0,c} \cdot b_{x_0}(c^{+} -c_0^{+}-1+j)} \\
&\qquad \cdot \prod_{\substack{c<q^{x_0}\\c\neq c_0}} \e\rb{\beta_{x_0,c} \cdot b_{3m-1}\rb{q^{m-1}(\mathbf{e}_i+1)+\floor{\frac{c^{+}-c_0^{+}-1+j}{q^{x_0}}}}}\\
&\qquad \cdot \e\rb{\beta_{x_0,c_0} \cdot b_{3m-1}(q^{m-1}(\mathbf{e}_i+1)-1+j)}.
\end{align*}
For $c \neq c_0$, we find
\begin{align*}
\floor{\frac{c^{+}-c_0^{+}-1}{q^{x_0}}} = \floor{\frac{c^{+}-c_0^{+}}{q^{x_0}}}
\end{align*}
as $c^{+} \equiv c \not \equiv c_0 \equiv c_0^{+} \mod q^{x_0}$.
Consequently, we find
\begin{align*}
&v^{x_0+4m-2}(I,\varepsilon_i,d) = \e(x_i)\\
&v^{x_0+4m-2}(I,\varepsilon_i+1,d) = \e(x_i+\xi_i)
\end{align*}
where
\begin{align*}
x_i &= g(-c_0^{+}-1) + \sum_{c<q^{x_0}} \beta_{x_0,c} \cdot b_{x_0}(c^{+} -c_0^{+}-1) \\
&\qquad + \sum_{\substack{c<q^{x_0}\\c\neq c_0}} \beta_{x_0,c} \cdot b_{3m-1}\rb{q^{m-1}(\mathbf{e}_i+1)+\floor{\frac{c^{+}-c_0^{+}}{q^{x_0}}}}\\
&\qquad + \beta_{x_0,c_0} \cdot b_{3m-1}(q^{m-1}(\mathbf{e}_i+1)-1)
\end{align*}
and
\begin{align*}
\xi_i &= g(-c_0^{+}) + \sum_{c<q^{x_0}} \beta_{x_0,c} \cdot b_{x_0}(c^{+} -c_0^{+}) + \beta_{x_0,c_0} \cdot b_{3m-1}(q^{m-1}(\mathbf{e}_i+1))\\
& - g(-c_0^{+}-1)- \sum_{c<q^{x_0}} \beta_{x_0,c} \cdot b_{x_0}(c^{+} -c_0^{+}-1) - \beta_{x_0,c_0} \cdot b_{3m-1}(q^{m-1}(\mathbf{e}_i+1)-1).
\end{align*}
Also, we find
\begin{align*}
\xi_1-\xi_2 = -\beta_{x_0,c_0}d_0 \notin \mathbb{Z},
\end{align*}
since
\begin{align*}
&b(q^{m-1}(\mathbf{e}_1+1)) - b(q^{m-1}(\mathbf{e}_1+1)-1)\\
&\qquad -b(q^{m-1}(\mathbf{e}_2+1)) + b(q^{m-1}(\mathbf{e}_2+1)-1)=-d_0.
\end{align*}
As $\beta_{x_0,c_0}d_0$ is a non-integral multiple of $\frac{1}{m'}$, this implies
\begin{align*}
\norm{\xi_1-\xi_2} \geq \frac{1}{m'}.
\end{align*}
It remains to apply Lemma~\ref{le:sum_4} to find that \eqref{eq:eps_saving} holds with $\eta' = 8 \rb{\sin\rb{\frac{\pi}{4m'}}}^2$.
\end{proof}
At the end of this section, we recall the important steps of the proof of Proposition~\ref{Pro2_new}.\\
We started to rewrite our recursion for $G_{\lambda}^{I}$ into a matrix vector multiplication
\begin{align*}
G_{\lambda}(h,q^Ld+\delta) = \frac{1}{q^L} M^{L}_{\delta}\rb{\e\rb{-\frac{h}{q^{\lambda}}}}G_{\lambda-L}\rb{h,d}.
\end{align*}
We then split up this matrix $M^{L}_{\delta}(.)$ into a product of many matrices $M^{m_1}_{\delta_j}(.)$, where $m_1 = (4m-2)k$.
Thereafter, we showed that $\norm{M^{m_1}_{\delta_j}(.)}\leq q^{m_1}-\eta$, where $\eta = 8 \rb{\sin\rb{\frac{\pi}{4m'}}}^2$.
This implies then Proposition~\ref{Pro2_new}.\\
To show that $\norm{M^{m_1}_{\delta_j}}\leq q^{m_1}-\eta$, we found two different $\varepsilon_i$ such that
\begin{align*}
&T_{\varepsilon_i,\delta}^{m_1'}(I) = T_{\varepsilon_i+1,\delta}^{m_1'}(I),\\
&\abs{v^{m_1'}(I,\varepsilon_1,\delta) + z v^{m_1'}(I,\varepsilon_1+1,\delta)} + \abs{v^{m_1'}(I,\varepsilon_2,\delta) + z v^{m_1'}(I,\varepsilon_2+1,\delta)} \leq 4-\eta'
\end{align*}
holds for all $z\in \mathbb{U}$.
\section{Proof of the Main Theorem} \label{cha:proof}
In this section, we complete the proof of Theorem \ref{Thexponentialsums} following the ideas and structure of \cite{drmotaMauduitRivat2014}.
As the proof is very similar, we only outline it briefly and comment on the important changes.
The structure of the proof is similar for both cases:
At first we want to substitute the function $b$ by $b_{\mu,\lambda}$. This can be done by applying Lemma
\ref{Lecarry0} and Lemma \ref{lemma:van-der-corput} in the case $K \in \mathbb{Z}$.
For the case $K \notin \mathbb{Z}$ we have to use Lemma \ref{lemma:van-der-corput} first.
Thereafter, we apply Lemma \ref{Lecarry1} to detect the digits between $\mu$ and $\lambda$.
Next, we use characteristic functions to detect suitable values for $u_1(n)$, $u_3(n)$ and $v(n)$.
Lemma \ref{lemma:better-koksma1} allows us to replace the characteristic functions by exponential sums.
We split the remaining exponential sum into a quadratic and a linear part and find that the quadratic part is negligibly small.
For the remaining sum, we apply Proposition \ref{Pro1} or \ref{Pro2}~-- depending on whether $K \in \mathbb{Z}$.
The case $K \notin \mathbb{Z}$ needs more effort to deal with.
\subsection{\texorpdfstring{The case $K \in \mathbb{Z}$}{The case K in Z}} \label{sec:equiv0}
In this section, we show that, if $K = \alpha_0 + \cdots +
\alpha_{k-1} \in \mathbb{Z}$,
Proposition \ref{Pro1}
provides an upper bound for the sum
\begin{displaymath}
S_0 = \sum_{n < N} \e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell b((n+\ell)^2) }.
\end{displaymath}
Let $\nu$ be the unique integer such that $q^{\nu-1} < N \leq q^{\nu}$
and we choose all appearing exponents - i.e. $\lambda,\mu,\rho$, etc. - as in \cite{drmotaMauduitRivat2014}.
By using Lemma~\ref{Lecarry0}, and the same arguments as in \cite{drmotaMauduitRivat2014}, we find
\begin{align}\label{eq:S0-S1-even}
S_0= S_1 + \mathcal{O}\rb{q^{\nu - (\lambda-\nu)}},
\end{align}
where
\begin{displaymath}
S_1 = \sum_{n< N} \e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell
b_\lambda((n+\ell)^2) }.
\end{displaymath}
Now we use Lemma \ref{lemma:van-der-corput} - with $Q= q^{\mu+m-1}$ and $S = q^{\nu-\mu}$ - to relate $S_1$ to a sum in terms of $b_{\mu,\lambda}$:
\begin{align}\label{eq:S1-S2-even}
|S_1|^2 \ll \frac {N^2}S + \frac NS \Re(S_2),
\end{align}
where
\begin{align*}
S_2 = \sum_{1\le s < S} \rb{ 1 - \frac sS } S_2'(s)
\end{align*}
and
\begin{align*}
S_2'(s) =
\sum_{n\in I(N,s)} \e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell
(b_{\mu,\lambda}((n+\ell)^2)
- b_{\mu,\lambda}((n+\ell+sq^{\mu+m-1})^2))},
\end{align*}
where
$I(N,s)$ is an interval included in $[0,N-1]$ (which we do not specify).
Next we use Lemma \ref{Lecarry1} to detect the digits of $(n+\ell)^2$ and $(n+\ell + sq^{m-1}q^{\mu})^2$ between $\mu$ and $\lambda+m-1$ - with a negligible error term.
Therefore, we have to take the digits between
$\mu' = \mu-\rho'$ and $\mu$ into account, where $\rho'>0$ will be chosen later.
We set the integers $u_1=u_1(n)$, $u_3=u_3(n)$, $v=v(n)$, $w_1=w_1(n)$,
and $w_3=w_3(n)$ to satisfy the conditions of Lemma~\ref{Lecarry1} and detect them by characteristic functions.
Thus, we find
\begin{align}\label{eq:S'2-S'3-even}
S_2'(s) = S_3'(s) + \mathcal{O}(q^{\nu-\rho'}),
\end{align}
where
\begin{small}
\begin{align*}
S_3'(s) &= \sum_{0\le u_1 < U_1} \sum_{0\le u_3 < U_3} \sum_{n\in I(N,s)} \left(\chi_{q^{\mu'-\lambda-m+1}}
\rb{\frac{n^2}{q^{\lambda+m-1}}-\frac{u_1}{U_1}} \chi_{q^{\mu'-\nu-1}} \rb{\frac{2n}{q^{\nu+1}}-\frac{u_3}{U_3}} \right.\\
&\left. \cdot \e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell (b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3)
- b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3+ v(n) q^{\rho'} + 2 \ell s q^{m-1}q^{\rho'})}\right),
\end{align*}
\end{small}
where $\chi_{\alpha}$ is defined by \eqref{eq:definition-chi} and $U_1 = q^{\lambda+m-1-\mu'}, U_3 = q^{\nu-\mu'+1}$.
Lemma \ref{lemma:better-koksma1} allows us to replace the
characteristic functions $\chi$ by trigonometric polynomials.
More precisely, using \eqref{eqS-S}
with
$H_1 = U_1 q^{\rho''}$ and $H_3 = U_3 q^{\rho''}$
for some suitable $\rho'' > 0$ (which is a fraction of $\nu$ chosen later),
we have
\begin{align}\label{eq:S'3-S4}
S'_3(s) = S_4(s) + \mathcal{O}(E_1) + \mathcal{O}(E_3) + \mathcal{O}(E_{1,3}),
\end{align}
where $E_1, E_3$ and $E_{1,3}$ are the error terms specified in \eqref{eqS-S} and
\begin{align*}
S_4(s&) = \sum_{0\le u_1 < U_1}
\sum_{0\le u_3 < U_3} \sum_{0\le v < q^{\lambda-\mu+m-1} } \\
&\sum_{n\in I(N,s)} \left ( A_{U_1^{-1},H_1} \rb{\frac{n^2}{q^{\lambda+m-1}}-\frac{u_1}{U_1}} A_{U_3^{-1},H_3} \rb{\frac{2n}{q^{\nu+1}}-\frac{u_3}{U_3}} \right . \\
&\cdot \e\rb{ \sum_{\ell=0}^{k-1} \alpha_\ell (b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3)
- b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3+ v q^{\rho'} + 2 \ell s q^{m-1} q^{\rho'}))}\\
&\cdot \left . \frac 1{q^{\lambda-\mu+m-1}} \sum_{0\le h < q^{\lambda-\mu+m-1}} \e\rb{ h\frac{2sq^{m-1}n-v}{q^{\lambda-\mu+m-1}}}\right),
\end{align*}
where we use the last sum to detect the correct value of $v = v(n)$.
The error terms $E_1$, $E_3$, $E_{1,3}$ can easily be
estimated with the help of Lemma~\ref{lemma:incomplete-gauss-sum}, just as in \cite{drmotaMauduitRivat2014}.
By using the representations of $A_{U_1^{-1},H_1}$ and
$A_{U_3^{-1},H_3}$, we obtain
\begin{align*}
S_4(s&) = \frac 1{q^{\lambda-\mu+m-1}} \sum_{|h_1| \le H_1} \sum_{|h_3| \le H_3} \sum_{0\le h < q^{\lambda-\mu+m-1} } a_{h_1}(U_1^{-1},H_1)\,a_{h_3}(U_3^{-1},H_3) \\
& \sum_{0\le u_1 < U_1} \sum_{0\le u_3 < U_3} \sum_{0\le v < q^{\lambda-\mu+m-1}} \e\Biggl(- \frac{h_1u_1}{U_1} - \frac{h_3u_3}{U_3} - \frac{hv}{q^{\lambda-\mu+m-1}} \Biggr) \\
& \e\Biggl( \sum_{\ell=0}^{k-1} \alpha_\ell (b_{\rho',\lambda-\mu+\rho'}(u_1 + \ell u_3)
- b_{\rho',\lambda-\mu+\rho'}(u_1 + \ell u_3+ v q^{\rho'} +2 \ell s q^{m-1} q^{\rho'})) \Biggr) \\
& \cdot \sum_{n} \e\rb{ \frac{h_1n^2}{q^{\lambda+m-1}} +\frac{h_3n}{q^\nu} + \frac{2hsn}{q^{\lambda-\mu}}}.
\end{align*}
We now distinguish the cases $h_1 = 0$ and $h_1\ne 0$.
For $h_1\ne 0$, we can estimate the exponential sum by using Lemma~\ref{lemma:incomplete-gauss-sum} and the following estimate
\begin{align}
\sum_{1\leq h_1\leq H_1}\sqrt{\gcd(h_1,q^{\lambda})} \ll_{q} H_1.
\end{align}
Thus, we find
\begin{align*}
\sum_{0<|h_1| \le H_1} \sum_{|h_3| \le H_3} \sum_{h=0}^{q^{\lambda-\mu+m-1} -1} \left|\sum_n \e\rb{ \frac{h_1n^2}{q^{\lambda+m-1}} +\frac{h_3n}{q^\nu} +
\frac{2hsn}{q^{\lambda-\mu}}} \right| \ll \lambda H_1 H_3 q^{\lambda/2 + \lambda-\mu}.
\end{align*}
This gives then
\begin{align}\label{eq:S4-S5-even}
S_4(s) = S_5(s) + \mathcal{O}(\lambda q^{3\lambda/4}),
\end{align}
where $S_5(s)$ denotes the part of $S_4(s)$ with $h_1 = 0$.
We set $u_1 = u_1'' + q^{\rho'} u_1'$ and $u_3 = u_3'' + q^{\rho'} u_3'$ (where $0\le u_1'', u_3'' < q^{\rho'}$).
Furthermore, we define $i_\ell = \lfloor (u_1''+\ell u_3'')/q^{\rho'}\rfloor$.
As $I = (i_\ell)_{0\le \ell < k} = (\lfloor (u_1''+\ell u_3'')/q^{\rho'}\rfloor)_{0\le \ell < k}$ is contained in
$\mathcal{I}'_k$, we have - by the same arguments as in \cite{drmotaMauduitRivat2014} -
\begin{align*}
S_5(s) &\le \sum_{|h_3| \le H_3} \sum_{0\le h < q^{\lambda-\mu+m-1}} \frac 1{q^{\nu+1-\mu}} \sum_{0\le u_3' < q^{\nu-\mu+1}}\\
&\qquad \sum_{I \in \mathcal{I}_k} \left| H_{\lambda-\mu}^{I}(h,u_3') \overline {H_{\lambda-\mu}^{I}(h,u_3'+2sq^{m-1})} \right|\\
&\qquad \cdot \min\rb{ N, \left| \sin\rb{ \pi \rb{ \frac{h_3}{q^\nu} + \frac{2hs}{q^{\lambda-\mu}}}} \right|^{-1} }.
\end{align*}
Using the estimate $\left|H_{\lambda-\mu}^{I}(h,u_3'+2sq^{m-1})\right|\le 1$ and
the Cauchy-Schwarz inequality yields
\begin{align*}
\sum_{0\le u_3' < q^{\nu-\mu+1}} &\left| H_{\lambda-\mu}^{I}(h,u_3') \overline {H_{\lambda-\mu}^{I}(h,u_3'+2sq^{m-1})} \right|\\
&\le q^{(\nu-\mu+1)/2} \rb{ \sum_{0\le u_3' < q^{\nu-\mu+1}} \left| H_{\lambda-\mu}^{I}(h,u_3') \right| ^2 }^{1/2}.
\end{align*}
We now replace $\lambda$ by $\lambda-\mu+m-1$ and $\lambda'$ by $\nu-\mu+1$ and apply Proposition~\ref{Pro1} to obtain
\begin{align*}
S_5(s) &\ll q^{-\eta(\lambda-\mu)/2} \sum_{|h_3| \le H_3} \sum_{h = 0}^{q^{\lambda-\mu+m-1} -1}
\min\rb{ N, \left| \sin\rb{ \pi \rb{ \frac{h_3}{q^\nu} + \frac{2hs}{q^{\lambda-\mu+m-1}}}} \right|^{-1} }.
\end{align*}
Next we average over $s$ and $h$, as in \cite{drmotaMauduitRivat2014}, by applying Lemma~\ref{le:sum_sum_sin}.
Thus we have a factor $\tau(q^{\lambda-\mu}) \ll_{q} (\lambda-\mu)^{\omega(q)}$ compared to $\tau(2^{\lambda-\mu}) = \lambda-\mu+1$.
Combining all the estimates as in \cite{drmotaMauduitRivat2014} gives then
\begin{align*}
|S_0| \ll q^{\nu-(\lambda-\nu)} + \nu^{(\omega(q)+1)/2} q^\nu q^{-\eta (\lambda-\nu)/2}
+ q^{\nu-\rho'/2} + q^{\nu - \rho''/2} + \lambda^{1/2} q^{\nu/2 + 3\lambda/8}
\end{align*}
provided that the following conditions hold:
\begin{align*}
&2\rho' \le \mu \le \nu-\rho',
\quad \rho'' < \mu'/2, \quad
\mu' \ll q^{\nu- \mu'}, \quad
2\mu' \ge \lambda, \\
&(\nu-\mu) + 2(\lambda-\mu) + 2(\rho'+\rho'') \le \lambda/4, \quad
\nu-\mu'+\rho'' + \lambda - \mu \le \nu.
\end{align*}
For example, the choice
\begin{align*}
\lambda = \nu+\left\lfloor \frac{\nu}{20}\right\rfloor \mbox{ and } \rho' = \rho'' = \left\lfloor \frac{\nu}{200}\right\rfloor
\end{align*}
ensures that the above conditions are satisfied.
Summing up, we have proved that for $\eta'<\min(1/200,\eta/40)$ - where $\eta$ is given by Proposition~\ref{Pro1} - it holds that
\begin{align*}
S_0 \ll q^{\nu(1-\eta')} \ll N^{1-\eta'}
\end{align*}
which is precisely the statement of Theorem~\ref{Thexponentialsums}.
\subsection{\texorpdfstring{The case $K \not \in \mathbb{Z}$}{The case K not in Z}}
\label{sec:nequiv0}
In this section, we show that, for $K = \alpha_0 + \cdots +
\alpha_{k-1} \not \in \mathbb{Z}$,
Proposition \ref{Pro2}
provides an upper bound for the sum
\begin{align*}
S_0 = \sum_{n< N} \e\rb{\sum_{\ell=0}^{k-1} \alpha_\ell b((n+\ell)^2)}.
\end{align*}
Let $\mu$, $\lambda$, $\rho$ and $\rho_1$ be integers satisfying
\begin{align}\label{eq:mu-lambda-rho}
0 \leq \rho_1 < \rho < \mu = \nu-2\rho < \nu < \lambda = \nu+2\rho < 2\nu
\end{align}
to be chosen later - just as in \cite{drmotaMauduitRivat2014}.
Since $K \not \in \mathbb{Z}$, we cannot use Lemma \ref{Lecarry0} directly.
Therefore, we apply Lemma~\ref{lemma:van-der-corput} with $Q=1$
and $R=q^\rho$. Summing trivially for $1\leq r \leq R_1=q^{\rho_1}$ yields
\begin{align*}
\abs{S_0}^2 \ll \frac{N^2R_1}{R} + \frac{N}{R} \sum_{R_1 < r < R} \rb{1-\frac{r}{R}} \Re(S_1(r)),
\end{align*}
where
\begin{align*}
S_1(r) = \sum_{n \in I_1(r)} \e\rb{\sum_{\ell=0}^{k-1} \alpha_\ell \rb{ b((n+\ell)^2) - b((n+r+\ell)^2)}}
\end{align*}
and $I_1(r)$ is an interval included in $[0,N-1]$.
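The first term on the right-hand side comes from the trivial treatment of the initial range: each $S_1(r)$ consists of at most $N$ unimodular terms, so
\begin{displaymath}
\frac NR \sum_{1\le r\le R_1} \rb{1-\frac rR} \abs{S_1(r)} \le \frac{N^2 R_1}{R}.
\end{displaymath}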
By Lemma~\ref{Lecarry0} we conclude that $b_{\lambda,\infty}((n+\ell)^2) = b_{\lambda,\infty}((n+r+\ell)^2)$ for all but $\mathcal{O}(Nq^{-(\lambda-\nu-\rho)})$ values of $n$.
Therefore, we see that
\begin{align*}
S_1(r) = S_1'(r) + \mathcal{O}(q^{\nu-(\lambda-\nu-\rho)}),
\end{align*}
with
\begin{align*}
S_1'(r) = \sum_{n \in I_1(r)} \e\rb{\sum_{\ell=0}^{k-1} \alpha_\ell \rb{ b_\lambda((n+\ell)^2) - b_\lambda((n+r+\ell)^2)}}.
\end{align*}
This leads to
\begin{align*}
\abs{S_0}^2 \ll q^{2\nu-\rho+\rho_1} + q^{3\nu+\rho-\lambda} + \frac{q^{\nu}}{R} \sum_{R_1 < r < R} \abs{S'_1(r)}
\end{align*}
and, by the Cauchy-Schwarz inequality,
\begin{align*}
\abs{S_0}^4 \ll q^{4\nu-2\rho+2\rho_1} + q^{6\nu+2\rho-2\lambda} + \frac{q^{2\nu}}{R} \sum_{R_1 < r < R} \abs{S'_1(r)}^2.
\end{align*}
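Here we used the Cauchy-Schwarz inequality in the form
\begin{displaymath}
\rb{\sum_{R_1<r<R} \abs{S_1'(r)}}^{2} \le R \sum_{R_1<r<R} \abs{S_1'(r)}^{2}
\end{displaymath}
after squaring the previous estimate.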
For $\abs{S'_1(r)}^2$ we can use Lemma~\ref{lemma:van-der-corput} again:
Let $\rho'\in\mathbb{N}$ with $1\leq \rho'\leq \rho$ be a parameter to be chosen later.
After applying Lemma~\ref{lemma:van-der-corput}
with $Q=q^{\mu+m-1}$ and
\begin{align} \label{eq:definition-S}
S = q^{2\rho'} \leq q^{\nu-\mu},
\end{align}
we observe that for any $\widetilde{n}\in\mathbb{N}$ we have
\begin{align*}
b_{\lambda}((\widetilde{n}+sq^{\mu+m-1})^2) - b_{\lambda}(\widetilde{n}^2)
= b_{\mu,\lambda}((\widetilde{n}+sq^{\mu+m-1})^2) - b_{\mu,\lambda}(\widetilde{n}^2),
\end{align*}
and thus
\begin{align} \label{eq:S0-S2}
\abs{S_0}^4 \ll q^{4\nu-2\rho+2\rho_1} + q^{6\nu+2\rho-2\lambda} + \frac{q^{4\nu}}{S} + \frac{q^{3\nu}}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} \abs{S_2(r,s)},
\end{align}
with
\begin{align*}
S_2(r,s) &= \sum_{n \in I_2(r,s)} \e\Biggl(\sum_{\ell=0}^{k-1} \alpha_\ell \bigl(b_{\mu,\lambda}((n+\ell)^2) - b_{\mu,\lambda}((n+r+\ell)^2) \\
&\qquad - b_{\mu,\lambda}((n+sq^{\mu+m-1}+\ell)^2) + b_{\mu,\lambda}((n+sq^{\mu+m-1}+r+\ell)^2)\bigr) \Biggr),
\end{align*}
where $I_2(r,s)$ is an interval included in $[0,N-1]$.
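The identity for $b_\lambda$ used above is a consequence of
\begin{displaymath}
(\widetilde{n}+sq^{\mu+m-1})^2 - \widetilde{n}^2 = 2\widetilde{n} s q^{\mu+m-1} + s^2 q^{2(\mu+m-1)} \equiv 0 \ (\bmod\ q^{\mu+m-1}),
\end{displaymath}
so that the two squares have identical digits at all positions below $\mu+m-1$; hence all blocks located below position $\mu$ cancel in the difference.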
We now perform a Fourier analysis similar to the one in the case $K \equiv 0 (\bmod 1)$ - as in \cite{drmotaMauduitRivat2014}.
We set $U = q^{\lambda+m-1-\mu'}, U_3 = q^{\nu-\mu'+1}$ and $V = q^{\lambda-\mu+m-1}$.
We apply Lemma~\ref{Lecarry1} and detect the correct values of $u_1,u_2,u_3$ by characteristic functions.
This gives
\begin{align*}
S_2(r,s) &= \sum_{0\leq u_1 < U} \sum_{0\leq u_2 < U} \sum_{0\leq u_3 < U_3}\\
& \sum_{n \in I_2(r,s)} \e\Biggl(\sum_{\ell=0}^{k-1} \alpha_\ell \bigl(b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3) - b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3) \\
& \qquad \qquad - b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3+ v(n) q^{\rho'} + 2 \ell s q^{m-1} q^{\rho'})\\
& \qquad \qquad + b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3+ v(n) q^{\rho'} + 2(\ell+r) s q^{m-1} q^{\rho'}) \bigr) \Biggr) \\
& \chi_{U^{-1}}\rb{\frac{n^2}{q^{\lambda+m-1}} - \frac{u_1}{U}} \chi_{U^{-1}}\rb{\frac{(n+r)^2}{q^{\lambda+m-1}} - \frac{u_2}{U}}
\chi_{U_3^{-1}}\rb{\frac{2n}{q^\nu} - \frac{u_3}{U_3}}\\
& \qquad +\mathcal{O}(q^{\nu-\rho'}).
\end{align*}
Furthermore, we use Lemma \ref{lemma:better-koksma1} to replace the
characteristic functions $\chi$ by trigonometric polynomials.
Using \eqref{eqS-S} with
$U_1=U_2=U$,
$H_1=H_2 = U q^{\rho_2}$ and
$H_3 = U_3 q^{\rho_3}$,
and integers $\rho_2$,
$\rho_3$ verifying
\begin{align}\label{eq:condition-rho2-rho3}
\rho_2 \leq \mu-\rho',\ \rho_3 \leq \mu-\rho',\
\end{align}
we obtain
\begin{align*}
S_2(r,s) = S_3(r,s) &+ \mathcal{O}(q^{\nu-\rho'}) + \mathcal{O}\rb{ E_{30}(r)} + \mathcal{O}\rb{ E_{31}(0)} + \mathcal{O}\rb{ E_{31}(r)}\\
& + \mathcal{O}\rb{E_{32}(0)} + \mathcal{O}\rb{E_{32}(r)} + \mathcal{O}\rb{E_{33}(r)} + \mathcal{O}\rb{E_{34}(r)},
\end{align*}
where the error terms are those arising from~\eqref{eqS-S} and $S_3(r,s)$ is obtained by replacing the characteristic functions by trigonometric polynomials.
We now reformulate $S_3(r,s)$ by expanding the trigonometric polynomials, detecting the correct value of $v = v(n)$ and restructuring the sums:
\begin{align*}
S_3(r,s) &= \frac{1}{q^{\lambda-\mu+m-1}} \sum_{0\leq h < q^{\lambda-\mu+m-1}} \sum_{\abs{h_1} \leq H_1} a_{h_1}(U^{-1},H_1)\\
&\quad \sum_{\abs{h_2} \leq H_2} a_{h_2}(U^{-1},H_2) \sum_{\abs{h_3} \leq H_3} a_{h_3}(U_3^{-1},H_3)\\
&\quad \sum_{0\leq u_1 < U} \sum_{0\leq u_2 < U} \sum_{0\leq u_3 < U_3} \sum_{0\leq v < V}
\e\rb{- \frac{h_1 u_1+h_2 u_2}{U} - \frac{h_3u_3}{U_3} -\frac{hv}{q^{\lambda-\mu+m-1}}}\\
&\quad \e\Biggl(\sum_{\ell=0}^{k-1} \alpha_\ell \bigl(b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3) - b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3)\\
& \quad \qquad \qquad - b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3+ vq^{\rho'} + 2\ell s q^{m-1} q^{\rho'})\\
& \quad \qquad \qquad + b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3+ vq^{\rho'} + 2(\ell+r) s q^{m-1} q^{\rho'}) \bigr) \Biggr)\\
& \quad \sum_{n \in I_2(r,s)} \e\rb{\frac{h_1n^2 + h_2(n+r)^2}{q^{\lambda+m-1}} + \frac{2h_3n}{q^\nu} + \frac{2hsn}{q^{\lambda-\mu}}}.
\end{align*}
One can estimate the error terms just as in \cite{drmotaMauduitRivat2014} and find that they are bounded by either $q^{\nu-\rho_3}$ or $q^{\nu-\rho_2}$.
In conclusion we deduce that
\begin{align}\label{eq:S2S3_final}
S_2(r,s) = S_3(r,s) + \mathcal{O}(q^{\nu-\rho'}) + \mathcal{O}(q^{\nu-\rho_2}) + \mathcal{O}(q^{\nu-\rho_3}).
\end{align}
We now split the sum $S_3(r,s)$ into two parts:
\begin{align} \label{eq:S3-S4-S'4}
S_3(r,s) = S_4(r,s) + S'_4(r,s),
\end{align}
where $S_4(r,s)$ denotes the contribution of the terms for which
$h_1+h_2=0$ while $S'_4(r,s)$ denotes the contribution of the terms
for which $h_1+h_2\neq 0$.
We can estimate $S'_4(r,s)$ just as in \cite{drmotaMauduitRivat2014} and find
\begin{align*}
S'_4(r,s) &\ll \nu^4 q^{\nu+\frac{1}{2}(8\lambda-9\mu+7\rho'+\rho_2)}
\end{align*}
and it remains to consider $S_4(r,s)$.
Setting $u_1 = u_1'' + q^{\rho'} u'_1$,
$u_2 = u_2'' + q^{\rho'} u'_2$ and $u_3 = u_3'' + q^{\rho'} u'_3$,
(where $0\le u_1'', u_2'', u_3'' < q^{\rho'}$)
we can replace the two-fold restricted block-additive function by a truncated block-additive function
\begin{small}
\begin{align*}
&b_{\rho',\lambda-\mu+\rho'}(u_1 + \ell u_3) = b_{\lambda-\mu}\rb{u_1' + \ell u_3' + \floor{(u_1''+\ell u_3'')/q^{\rho'}}}, \\
&b_{\rho',\lambda-\mu+\rho'}(u_2 + \ell u_3) = b_{\lambda-\mu}\rb{u_2' + \ell u_3' + \floor{(u_2''+\ell u_3'')/q^{\rho'}}}, \\
&b_{\rho',\lambda-\mu+\rho'}(u_1 + \ell u_3+ vq^{\rho'} +2 \ell s q^{m-1} q^{\rho'}) \\
&\quad = b_{\lambda-\mu}\rb{u_1' +v + \ell (u_3'+2sq^{m-1}) + \floor{(u_1''+\ell u_3'')/q^{\rho'}}}\\
&b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3+ vq^{\rho'} + 2(\ell+r) s q^{m-1} q^{\rho'}) \\
&\quad = b_{\lambda-\mu}\rb{u_2' + v + 2sr q^{m-1} + \ell (u_3'+2sq^{m-1}) + \floor{(u_2''+\ell u_3'')/q^{\rho'}}}.
\end{align*}
\end{small}
Using the periodicity of $b_{\lambda-\mu}$ modulo $V:=q^{\lambda-\mu+ m-1}$,
we replace the variable $v$ by $v_1$ such that
$v_1 \equiv u'_1+v (\bmod q^{\lambda-\mu+m-1})$. Furthermore we introduce a new
variable $v_2$ such that
\begin{align*}
v_2 \equiv u_2' + v + 2sr q^{m-1} \equiv v_1 + u_2' -u'_1 + 2sr q^{m-1} (\bmod q^{\lambda-\mu+m-1}).
\end{align*}
We then follow the arguments of \cite{drmotaMauduitRivat2014} and find
\begin{align*}
S_4(r,s) &\ll q^{2\lambda-2\mu} \sum_{h =0}^{q^{\lambda-\mu+m-1}-1} \sum_{h'=0}^{q^{\lambda-\mu+m-1}-1} \sum_{\abs{h_2} \leq H_2} \min(U^{-2},h_2^{-2}) \\
&\quad \sum_{\abs{h_3} \leq H_3} \min(U_3^{-1},h_3^{-1}) \sum_{0\leq u''_1 < q^{\rho'}} \sum_{0\leq u''_2 < q^{\rho'}} \sum_{0\leq u''_3 < q^{\rho'}} \sum_{0\leq u'_3 < U'_3}\\
&\qquad \abs{H_{\lambda-\mu}^{I(u''_1,u''_3)}(h'-h-h_2,u_3')} \abs{H_{\lambda-\mu}^{I(u''_2,u''_3)}(h'-h_2,u_3')}\\
&\quad \qquad \abs{H_{\lambda-\mu}^{I(u''_1,u''_3)}(h'-h,u_3'+2sq^{m-1})} \abs{H_{\lambda-\mu}^{I(u''_2,u''_3)}(h',u_3'+2s q^{m-1})}\\
& \qquad \qquad \abs{\sum_{n \in I_2(r,s)} \e\rb{\frac{2h_2 rn}{q^{\lambda+m-1}} + \frac{2h_3n}{q^\nu} + \frac{2hsn}{q^{\lambda-\mu}}}},
\end{align*}
with
\begin{align*}
I(u,\tilde{u}) = \rb{\floor{\frac{u}{q^{\rho'}}}, \floor{\frac{u+\tilde{u}}{q^{\rho'}}}, \ldots, \floor{\frac{u+(k-1)\tilde{u}}{q^{\rho'}}}}
\,\text{ for } (u,\tilde{u})\in \mathbb{N}^2.
\end{align*}
The next few steps are again very similar to the corresponding ones in \cite{drmotaMauduitRivat2014} and we skip the details.
We find
\begin{align*}
S_4(r,s) &\ll (\lambda-\mu)\ \gcd(2s,q^{\lambda-\mu})\ q^{2\lambda-2\mu} \sum_{0\leq u''_1, u''_2, u''_3 < q^{\rho'}} \sum_{\abs{h_2} \leq H_2} \min(U^{-2},h_2^{-2})\\
& \qquad \qquad S_6(h_2,s,u''_1,u''_3)^{1/2} S_6(h_2,s,u''_2,u''_3)^{1/2}\\
& \qquad \sum_{\abs{h_3} \leq H_3} \min(U_3^{-1},h_3^{-1}) \min\rb{q^\nu, \abs{\sin \pi\tfrac{2 h_2 r+2q^{\lambda-\nu+m-1} h_3}{q^{\lambda+m-1}} }^{-1}},
\end{align*}
where
\begin{align}\label{eq:def-S6}
\begin{split}
S_6(h_2,s,u'',u''_3) &= \sum_{0\leq u'_3 < U'_3} \sum_{0\leq h' < q^{\lambda-\mu+m-1}} \\
&\qquad \abs{H_{\lambda-\mu}^{I(u'',u''_3)}(h'-h_2,u_3')}^2 \abs{H_{\lambda-\mu}^{I(u'',u''_3)}(h',u_3'+2sq^{m-1})}^2.
\end{split}
\end{align}
Here we
introduce the integers $H'_2$ and $\kappa$ such that
\begin{align}\label{eq:def-H'2}
H'_2 = q^{\lambda-\nu+m} H_3/R_1 = q^{\lambda-\mu+\rho'+\rho_3-\rho_1+m+1} = q^\kappa.
\end{align}
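The middle equality is a direct computation: since $H_3 = U_3 q^{\rho_3}$, $U_3=q^{\nu-\mu'+1}$, $\mu'=\mu-\rho'$ and $R_1=q^{\rho_1}$, we get
\begin{displaymath}
q^{\lambda-\nu+m}\,\frac{H_3}{R_1} = q^{(\lambda-\nu+m) + (\nu-\mu+\rho'+1) + \rho_3 - \rho_1} = q^{\lambda-\mu+\rho'+\rho_3-\rho_1+m+1}.
\end{displaymath}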
This leads to
\begin{align*}
S_4(r,s) \ll S_{41}(r,s) + S_{42}(r,s) + S_{43}(r,s),
\end{align*}
where $S_{41}(r,s)$, $S_{42}(r,s)$ and $S_{43}(r,s)$ denote the contribution of the terms
$\abs{h_2}\leq H'_2$,
$H'_2 < \abs{h_2}\leq q^{\lambda+m-1-\mu}$ and
$q^{\lambda+m-1-\mu} < \abs{h_2}\leq H_2$ respectively.
\paragraph{\texorpdfstring{Estimate of $S_{41}(r,s)$}{Estimate of S 41}}
By \eqref{eq:sum_inverse_sinus}
we have
\begin{align*}
\sum_{\abs{h_3} \leq H_3} \min\rb{q^\nu, \abs{\sin \pi\tfrac{2h_3 + 2h_2 rq^{\nu-\lambda-m+1}}{q^{\nu}} }^{-1} } \ll \nu q^\nu,
\end{align*}
and, therefore,
\begin{align*}
S_{41}(r,s) &\ll \nu\, (\lambda-\mu)\ \gcd(2s,q^{\lambda-\mu})\ q^{\nu+2\lambda-2\mu} U^{-2} U_3^{-1}\\
& \qquad \sum_{0\leq u''_1, u''_2, u''_3 < q^{\rho'}} \sum_{\abs{h_2} \leq H'_2} S_6(h_2,s,u''_1,u''_3)^{1/2} S_6(h_2,s,u''_2,u''_3)^{1/2}.
\end{align*}
By Proposition \ref{Pro2}
(replacing $\lambda$ by $\lambda-\mu$ and $L$ by
$\lambda-\mu-\kappa$), we find some $0<\eta'\leq 1$ such that
\begin{align*}
\abs{H_{\lambda-\mu}^{I(u'',u''_3)}(h'-h_2,u_3')} \ll q^{-\eta' (\lambda - \mu - \kappa)}
\max_{J\in \mathcal{I}_k} \abs{G_{\kappa}^{J}(h'-h_2,\lfloor u_3'/q^L \rfloor)}.
\end{align*}
By Parseval's equality
and recalling that $\#(\mathcal{I}_k) = q^{m-1} (q^{m-1}+1)^{k-1}$,
it follows that
\begin{align*}
& \sum_{\abs{h_2} \leq H'_2} \max_{J\in\mathcal{I}_k} \abs{G_{\kappa}^{J}(h'-h_2, \lfloor u_3'/q^L \rfloor)}^2 \\
&\qquad \le \sum_{J\in\mathcal{I}_k} \sum_{\abs{h_2} \leq H'_2} \abs{G_{\kappa}^{J}(h'-h_2,\lfloor u_3'/q^L \rfloor)}^2
\leq q^{m-1} (q^{m-1}+1)^{k-1}.
\end{align*}
We obtain
\begin{align*}
\sum_{\abs{h_2} \leq H'_2} \abs{H_{\lambda-\mu}^{I(u'',u''_3)}(h'-h_2,u_3')}^2
\ll q^{-\eta' (\lambda - \mu - \kappa)} = \rb{\frac{H'_2}{q^{\lambda-\mu}}}^{\eta'}
\end{align*}
uniformly in $\lambda$, $\mu$, $H'_2$, $u'_3$, $u''$ and $u''_3$.
The remaining proof is analogous to the corresponding proof in \cite{drmotaMauduitRivat2014}.
The only difference is again that by using Lemma~\ref{le:sum_sum_sin} we obtain a factor $(\lambda-\mu)^{\omega(q)}$ instead of $(\lambda-\mu)$.
This gives
\begin{align}
\label{eq:S41-final}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_{41}(r,s) \ll \nu\, (\lambda-\mu)^{\omega(q) +1}\ q^{\nu - \eta'(\rho_1-\rho'-\rho_3)},
\end{align}
which concludes this part.
\paragraph{\texorpdfstring{Estimate of $S_{42}(r,s)$ and $S_{43}(r,s)$}{Estimate of S 42 and S 43}}
By following the arguments of \cite{drmotaMauduitRivat2014} and applying the same changes as in the estimate of $S_{41}$ we find
\begin{align}
\label{eq:S42-final}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_{42}(r,s) \ll \rho (\lambda-\mu)^{2+\omega(q)}\ q^{\nu-\rho+\rho_1+\rho'-\rho_3}
\end{align}
and
\begin{align}\label{eq:S43-final}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_{43}(r,s) \ll \rho\ (\lambda-\mu)^{2+\omega(q)}\ q^{\nu-\rho+3\rho'}.
\end{align}
\paragraph{Combining the estimates for $S_4$}
It follows from \eqref{eq:S41-final}, \eqref{eq:S42-final} and
\eqref{eq:S43-final} that
\begin{align*}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_4(r,s) \ll \nu^{3+\omega(q)} q^{\nu} \rb{q^{ - 2\eta'(\rho_1-\rho'-\rho_3)} + q^{-\rho_3} + q^{-\rho+3\rho'}}.
\end{align*}
Choosing
\begin{align*}
\rho_1 = \rho-\rho',\
\rho_2 = \rho_3 = \rho',
\end{align*}
we obtain
\begin{align*}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_4(r,s) \ll \nu^{3+\omega(q)} q^{\nu} \rb{q^{ - 2 \eta'(\rho-3\rho')} + q^{-\rho'} + q^{-(\rho-3\rho')}}.
\end{align*}
Since $0<\eta'<1$, we obtain, using \eqref{eq:S3-S4-S'4} and \eqref{eq:S2S3_final}, that
\begin{align*}
\frac{1}{RS} \sum_{R_1 < r < R} \sum_{1\leq s < S} S_2(r,s) \ll \nu^{3+\omega(q)} q^{\nu} \rb{q^{ - \eta'(\rho-3\rho')} + q^{-\rho'} + q^{\frac{1}{2}(8\lambda-9\mu+8\rho')}}.
\end{align*}
We recall
by \eqref{eq:definition-S} that
\begin{math}
S = q^{2\rho'}
\end{math}
and
by \eqref{eq:mu-lambda-rho} that
$\mu=\nu-2\rho$,
$\lambda = \nu+2\rho$
and insert the estimation from above in \eqref{eq:S0-S2}:
\begin{align*}
\abs{S_0}^4 \ll q^{4\nu-2\rho'} + q^{4\nu-2\rho} + \nu^{3+\omega(q)} q^{4\nu} \rb{q^{-\eta'(\rho-3\rho')} + q^{-\rho'} + q^{-\frac{\nu}{2}+17\rho+4\rho'} }.
\end{align*}
For $\rho'=\floor{\nu/146}$
and $\rho=4\rho'$, we obtain
\begin{align*}
\abs{S_0} \ll \nu^{(3+\omega(q))/4} q^{\nu - \frac{\eta'\rho'}{4}} \ll N^{1-\eta_1},
\end{align*}
for all $\eta_1 < \eta'/584$.
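Indeed, with this choice the last exponent in the previous display becomes $-\frac{\nu}{2}+17\rho+4\rho' = -\frac{\nu}{2}+72\rho' \le -\rho'$ (up to rounding, since $\nu\ge 146\rho'$), and as $0<\eta'\le 1$ all error terms are dominated by $q^{-\eta'\rho'}$, so that
\begin{displaymath}
\abs{S_0}^4 \ll \nu^{3+\omega(q)}\, q^{4\nu - \eta'\rho'};
\end{displaymath}
taking fourth roots gives the stated bound, with $\frac{\eta'\rho'}{4} \ge \eta_1\nu$ for every $\eta_1<\eta'/584$ and $\nu$ sufficiently large.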
Therefore we have seen that Proposition~\ref{Pro2}
implies the case $K \not \equiv 0 (\bmod 1)$ of Theorem~\ref{Thexponentialsums}.
\section{Auxiliary Results}\label{cha:auxiliary}
In this last section, we present some auxiliary results which are used in \Cref{cha:proof} to prove the main theorem.
For this proof, it is crucial to approximate characteristic functions of the intervals $[0,\alpha) \bmod 1$ where $0\leq \alpha < 1$ by trigonometric polynomials.
This is done by using Vaaler's method - see Section \ref{sec:vaaler}. As we deal with exponential sums, we also use
a generalization of Van-der-Corput's inequality, which is stated in Section \ref{sec:vandercorput}. In Section \ref{sec:sumgeometric}, we collect
some results on sums of geometric series which we use to bound linear exponential sums. Section \ref{sec:gauss} is dedicated to one
classic result on Gauss sums and allows us to find appropriate bounds on the occurring quadratic exponential sums in \Cref{cha:proof}.
The last part of this section deals with carry propagation. We give a quantitative statement showing that carry propagation over
several digits is rare, i.e. that its frequency decreases exponentially.
We would like to note that all these auxiliary results have already been presented in \cite{drmotaMauduitRivat2014}.
\subsection{Sums of geometric series}\label{sec:sumgeometric}
We will often make use of the following upper bound for geometric
series with ratio $\e(\xi), \xi \in \mathbb{R}$ and $L_1,L_2\in\mathbb{Z}$, $L_1 \leq L_2$:
\begin{align}\label{eq:estimate-geometric-series}
\abs{\sum_{L_1<\ell \leq L_2} \e(\ell \xi)} \leq \min(L_2-L_1,\abs{\sin\pi\xi}^{-1}),
\end{align}
which is obtained from the formula for finite geometric series.
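For instance, for $\xi\notin\mathbb{Z}$ the closed form of the geometric series gives
\begin{displaymath}
\abs{\sum_{L_1<\ell \leq L_2} \e(\ell \xi)} = \abs{\frac{\e((L_2-L_1)\xi)-1}{\e(\xi)-1}} = \frac{\abs{\sin (\pi (L_2-L_1)\xi)}}{\abs{\sin \pi\xi}} \leq \frac{1}{\abs{\sin\pi\xi}},
\end{displaymath}
while the trivial bound yields $L_2-L_1$.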
The following results allow us to find useful estimates for special double and triple sums involving geometric series.
\begin{lemma}\label{lemma:sum_inverse_sinus}
Let $(a,m)\in\mathbb{Z}^2$ with $m\ge 1$, $\delta=\gcd(a,m)$ and $b\in\mathbb{R}$.
For any real number $U>0$, we have
\begin{align}\label{eq:sum_inverse_sinus}
\sum_{0\le n< m} \min \rb{ U, \abs{\sin\rb{ \pi \tfrac{an+b}{m}}}^{-1}} \leq
\delta \min\rb{U, \abs{\sin \rb{\pi\tfrac{\delta\, \norm{b/\delta}}{m}}}^{-1}} + \frac{2\, m}{\pi} \log (2\, m).
\end{align}
\end{lemma}
\begin{proof}
This is \cite[Lemma 6]{drmotaMauduitRivat2014}.
\end{proof}
\begin{lemma}\label{le:sum_sum_sin}
Let $m\geq 1$ and $A\geq 1$ be integers and $b\in\mathbb{R}$.
For any real number $U>0$, we have
\begin{align}\label{eq:double-sum-min}
\frac{1}{A} \sum_{1\leq a \leq A} \sum_{0\leq n < m} \min\rb{U, \abs{\sin \rb{\pi \tfrac{ a n + b}{m}}}^{-1}}
\ll \tau(m) \ U + m\log m
\end{align}
and, if $\abs{b}\leq \frac{1}{2}$, we have an even sharper bound
\begin{align}\label{eq:double-sum-min-sharp}
\begin{split}
&\frac{1}{A} \sum_{1\leq a \leq A} \sum_{0\leq n < m} \min\rb{U, \abs{\sin \rb{\pi \tfrac{ a n + b}{m}}}^{-1}}\\
&\qquad \ll \tau(m) \min\rb{ U, \abs{\sin \rb{\pi \tfrac{b}{m}}}^{-1} } + m\log m,
\end{split}
\end{align}
where $\tau(m)$ denotes the number of divisors of $m$.
\end{lemma}
\begin{proof}
See \cite{drmotaMauduitRivat2014}.
\end{proof}
\subsection{Gauss sums}\label{sec:gauss}
In the proof of the main theorem, we encounter quadratic exponential sums.
We first consider Gauss sums $\G(a,b;m)$, which are defined by:
\begin{align*}
\G(a,b;m) := \sum_{n = 0}^{m-1} \e\rb{\frac{an^2+bn}{m}}.
\end{align*}
In this section, we recall one classic result on Gauss sums, namely Theorem~\ref{theorem:complete-gauss-sum}.
\begin{theorem}\label{theorem:complete-gauss-sum}
For all $(a, b, m) \in \mathbb{Z}^3$ with $m\geq 1$,
\begin{align} \label{eq:complete-gauss-sum}
\abs{\sum_{n=0}^{m-1} \e\rb{\tfrac{an^2+bn}{m}}} \leq \sqrt{2m\gcd(a,m)}
\end{align}
holds.
\end{theorem}
\begin{proof}
This form was obtained, for example, in \cite[Proposition 2]{mauduitRivat2009}.
\end{proof}
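Let us note that \eqref{eq:complete-gauss-sum} is essentially sharp: for odd $m$ and $\gcd(a,m)=1$, the classical evaluation of quadratic Gauss sums gives $\abs{\G(a,0;m)}=\sqrt{m}$, so at most the factor $\sqrt{2}$ could be removed.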
Consequently we obtain the following result for incomplete quadratic Gauss sums.
\begin{lemma}\label{lemma:incomplete-gauss-sum}
For all $(a, b, m, N ,n_0) \in \mathbb{Z}^5$ with $m\geq 1$ and $N\geq 0$,
we have
\begin{align} \label{eq:incomplete-gauss-sum}
\abs{\sum_{n=n_0+1}^{n_0+N} \e\rb{\tfrac{an^2+bn}{m}}} \leq \rb{ \tfrac{N}{m} + 1 + \tfrac{2}{\pi} \log\tfrac{2m}{\pi} } \sqrt{2m\gcd(a,m)}.
\end{align}
\end{lemma}
\begin{proof}
This is Lemma 9 of \cite{drmotaMauduitRivat2014}.
\end{proof}
\subsection{Carry Lemmas}\label{sec:carry_2}
As mentioned before, we want to find a quantitative statement on how rare carry propagation along several digits is.
\begin{lemma}\label{Lecarry0}
Let $(\nu,\lambda,\rho)\in\mathbb{N}^3$ such that
$\nu+\rho \leq \lambda \leq 2\nu$.
For any integer $r$ with $0\leq r\leq q^{\rho}$, the
number of integers $n < q^{\nu}$ for which there exists an integer
$j\ge \lambda$
with $\varepsilon_j((n+r)^2) \ne \varepsilon_j(n^2)$ is
$\ll q^{2\nu+\rho-\lambda}$.
Hence, we find for any block-additive function $b$, that the number of integers $n < q^\nu$ with
\begin{align*}
b_{\lambda-m+1}((n+r)^2) - b_{\lambda-m+1}(n^2) \ne b((n+r)^2) - b(n^2)
\end{align*}
is also $\ll q^{2\nu+\rho-\lambda}$.
\end{lemma}
\begin{proof}
A proof for the Thue-Morse sequence can be found in \cite{drmotaMauduitRivat2014} and it is easy to adapt it for this more general case.
\end{proof}
The next lemma allows us to replace the arguments of quadratic exponential sums by expressions depending only on a few digits.
\begin{lemma}\label{Lecarry1}
Let $(\lambda,\mu,\nu,\rho')\in\mathbb{N}^4$
such that $0 < \mu < \nu < \lambda$, $2\rho' \le \mu \le \nu - \rho'$ and $\lambda-\nu \le 2(\mu-\rho')$, and set
$\mu' = \mu - \rho'$.
For integers $n<q^\nu$, $s\ge 1$
and $1\le r\le q^{(\lambda-\nu)/2}$ we set
\begin{align}
n^2 &\equiv u_1 q^{\mu'} + w_1 (\bmod q^{\lambda+m-1}) & (0\le w_1 < q^{\mu'},\ 0\le u_1 < q^{\lambda +m-1- \mu + \rho'}) \nonumber \\
(n+r)^2 &\equiv u_2 q^{\mu'} + w_2 (\bmod q^{\lambda+m-1}) \label{equ1u2u3}
& (0\le w_2 < q^{\mu'},\ 0\le u_2< q^{\lambda +m-1- \mu + \rho'})\\
2n &\equiv u_3 q^{\mu'} + w_3 (\bmod q^{\lambda+m-1}) & (0\le w_3 < q^{\mu'},\ 0\le u_3 < q^{\nu+1 - \mu + \rho'}) \nonumber \\
2s q^{m-1} n &\equiv v (\bmod q^{\lambda-\mu+m-1}), & (0\le v < q^{\lambda-\mu+m-1}) \nonumber
\end{align}
where the integers $u_1=u_1(n)$, $u_2=u_2(n)$, $u_3=u_3(n)$,
$v=v(n)$, $w_1=w_1(n)$, $w_2=w_2(n)$ and $w_3=w_3(n)$ satisfy
the above conditions.
Then for any integer $\ell\ge 1$ the number of integers $n < q^\nu$ for which
one of the following conditions
\begin{align}
b_{\mu,\lambda}((n+\ell)^2) &\ne b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3) \nonumber \\
b_{\mu,\lambda}((n+\ell+sq^{\mu+m-1})^2)
&\ne b_{\rho',\lambda-\mu+\rho'}(u_1+\ell u_3+ v q^{\rho'} + 2 \ell s q^{m-1} q^{\rho'}) \label{equ1u2u3-2} \\
b_{\mu,\lambda}((n+r+\ell)^2) &\ne b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3)\nonumber\\
b_{\mu,\lambda}((n+r+\ell+sq^{\mu+m-1})^2)
&\ne b_{\rho',\lambda-\mu+\rho'}(u_2+\ell u_3+ v q^{\rho'} + 2 (\ell+r) s q^{m-1} q^{\rho'}) \nonumber
\end{align}
is satisfied is $\ll q^{\nu-\rho'}$.
\end{lemma}
\begin{proof}
A proof for the sum of digits function in base $2$ can be found in \cite{drmotaMauduitRivat2014} and it is straight forward to adapt it to fit this more general case.
\end{proof}
\subsection{Van-der-Corput's inequality}\label{sec:vandercorput}
The following lemma is a generalization of Van-der-Corput's inequality.
\begin{lemma}[\cite{mauduitRivat2009}]\label{lemma:van-der-corput}
For all complex numbers $z_1,\ldots,z_N$
and all integers $Q\geq 1$ and $R\geq 1$, we have
\begin{align}\label{eq:van-der-corput}
\abs{\sum_{n=1}^{N} z_n}^2 \le \frac{N+QR-Q}{R} \rb{\sum_{n=1}^{N} |z_n|^2 + 2\ \sum_{r=1}^{R-1}\rb{1-\frac{r}{R}} \ \sum_{n=1}^{N-Qr}\Re\rb{z_{n+Qr} \overline{z_n}}}
\end{align}
where $\Re(z)$ denotes the real part of $z\in\mathbb{C}$.
\end{lemma}
\subsection{Vaaler's method}\label{sec:vaaler}
The following theorem is a classical method, developed by Vaaler~\cite{vaaler}, to detect real numbers in an
interval modulo $1$ by means of exponential sums.
For $\alpha\in\mathbb{R}$ with $0\leq \alpha<1$, we denote by $\chi_\alpha$ the
characteristic function of the interval $[0,\alpha)$ modulo $1$:
\begin{align} \label{eq:definition-chi}
\chi_\alpha(x)=\floor{x}-\floor{x-\alpha}.
\end{align}
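For example, for $0\le x<1$ one finds $\chi_\alpha(x)=0-(-1)=1$ if $x\in[0,\alpha)$ and $\chi_\alpha(x)=0-0=0$ if $x\in[\alpha,1)$, so \eqref{eq:definition-chi} indeed yields the indicator function of $[0,\alpha)$ modulo $1$.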
The following theorem is a consequence of the mentioned paper by Vaaler.
The presented form was first published by Mauduit and Rivat~\cite{mauduit_rivat_rs}.
\begin{theorem}\label{th:vaaler}
For all $\alpha\in\mathbb{R}$ with $0\leq \alpha<1$
and all integers $H\geq 1$,
there exist real-valued
trigonometric polynomials $A_{\alpha,H}(x)$ and $B_{\alpha,H}(x)$
such that for all $x\in\mathbb{R}$
\begin{align}\label{eq:vaaler-approximation}
\abs{ \chi_\alpha(x) - A_{\alpha,H}(x) } \leq B_{\alpha,H}(x).
\end{align}
The trigonometric polynomials are defined by
\begin{align}\label{eq:definition-A-B}
A_{\alpha,H}(x) = \sum_{\abs{h}\leq H} a_h(\alpha,H) \e(h x),\
B_{\alpha,H}(x) = \sum_{\abs{h}\leq H} b_h(\alpha,H) \e(h x),
\end{align}
with coefficients $a_h(\alpha,H)$ and $b_h(\alpha,H)$ satisfying
\begin{align}\label{eq:vaaler-coef-majoration}
a_0(\alpha,H) = \alpha,\
\abs{a_h(\alpha,H)} \leq \min\rb{\alpha,\tfrac{1}{\pi\abs{h}}},\
\abs{b_h(\alpha,H)} \leq \tfrac{1}{H+1}.
\end{align}
\end{theorem}
Using this method we can detect points in a $d$-dimensional box (modulo $1$):
\begin{lemma}\label{lemma:better-koksma1}
For $(\alpha_1,\ldots, \alpha_d) \in [0,1)^d$
and $(H_1,\ldots,H_d)\in\mathbb{N}^d$
with $H_1\geq 1$,\ldots, $H_d\geq 1$,
we have for all $(x_1,\ldots,x_d)\in\mathbb{R}^d$
\begin{align} \label{eq:better-koksma1}
\abs{\prod_{j=1}^d \chi_{\alpha_j}(x_j) - \prod_{j=1}^d A_{\alpha_j,H_j}(x_j)}
\leq \sum_{\emptyset \neq J \subseteq \{1,\ldots,d\}} \prod_{j\not\in J} \chi_{\alpha_j}(x_j) \prod_{j\in J} B_{\alpha_j,H_j}(x_j)
\end{align}
where $A_{\alpha,H}(.)$ and $B_{\alpha,H}(.)$ are the real valued
trigonometric polynomials defined by \eqref{eq:definition-A-B}.
\end{lemma}
\begin{proof}
See again \cite{mauduit_rivat_rs}.
\end{proof}
Let $(U_1,\ldots,U_d)\in\mathbb{N}^d$ with $U_1\geq 1$,\ldots,$U_d\geq 1$ and
define $\alpha_1=1/U_1$,\ldots,$\alpha_d=1/U_d$.
For $j=1,\ldots,d$ and $x\in\mathbb{R}$ we have
\begin{align}\label{eq:chi-partition}
\sum_{0\leq u_j < U_j} \chi_{\alpha_j}\rb{x-\frac{u_j}{U_j}} = 1.
\end{align}
Let $N\in\mathbb{N}$ with $N\ge 1$, $f:\{1,\ldots,N\} \to \mathbb{R}^d$
and $g:\{1,\ldots,N\} \to \mathbb{C}$ such that $\abs{g}\leq 1$.
If $f=(f_1,\ldots,f_d)$,
we can express the sum
\begin{align*}
S= \sum_{n=1}^N g(n)
\end{align*}
as
\begin{align*}
S = \sum_{n=1}^N g(n) \sum_{0\leq u_1 < U_1} \chi_{\alpha_1}\rb{f_1(n)-\frac{u_1}{U_1}} \cdots \sum_{0\leq u_d < U_d} \chi_{\alpha_d}\rb{f_d(n)-\frac{u_d}{U_d}} .
\end{align*}
We now choose $(H_1,\ldots,H_d)\in\mathbb{N}^d$ with $H_1\geq 1$,\ldots, $H_d\geq 1$, and define
\begin{align*}
\widetilde{S} = \sum_{n=1}^N g(n) \sum_{0\leq u_1 < U_1} A_{\alpha_1,H_1}\rb{f_1(n)-\frac{u_1}{U_1}}
\cdots \sum_{0\leq u_d < U_d} A_{\alpha_d,H_d}\rb{f_d(n)-\frac{u_d}{U_d}}.
\end{align*}
\begin{lemma}
With the notations from above, we have
\begin{align}\label{eqS-S}
\begin{split}
\abs{S-\widetilde{S}} &\leq \sum_{\ell=1}^{d-1} \sum_{1\leq j_1<\cdots<j_\ell\leq d} \frac{U_{j_1}\cdots U_{j_\ell}}{H_{j_1}\cdots H_{j_\ell}}
\sum_{\abs{h_{j_1}}\leq H_{j_1}/U_{j_1}} \cdots \sum_{\abs{h_{j_\ell}}\leq H_{j_\ell}/U_{j_\ell}}\\
&\qquad \abs{\sum_{n=1}^N \e\rb{h_{j_1} U_{j_1} f_{j_1}(n) + \cdots + h_{j_\ell} U_{j_\ell} f_{j_\ell}(n)}}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
See again \cite{mauduit_rivat_rs}.
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
Since the recent swampland program (see
\cite{Palti:2019pca,vanBeest:2021lhn} for reviews)
postulates that not all self-consistent
quantum field theories admit a UV completion into a theory of quantum
gravity, the notion of naturalness has to be changed. What seems
natural from a purely low-energy point of view could turn out to be in the
swampland after all. This new notion
has the potential to make quantum gravity and string theory
much more predictive than initially thought.
How such a logic can be made concrete was exemplified in the recent
work of Montero-Vafa-Valenzuela \cite{Montero:2022prj}. By combining swampland conjectures
with observational data, the authors suggest that our universe should lie
in a specific corner of the quantum gravity landscape.
The starting point of \cite{Montero:2022prj} is
the assumption that our universe is located in an asymptotic region of field space
where the generalization of the AdS distance conjecture \cite{Lust:2019zwm}
to dS space applies. It states that for a cosmological constant $\Lambda$ approaching
zero, a tower of states becomes light obeying the specific scaling behavior
\eq{
m\sim |\Lambda|^\alpha\,.
\label{ADC}
}
Here an exponent $\alpha\ge 1/2$ prevents scale separation between the internal dimensions and the radius of (A)dS space. It was
conjectured \cite{Lust:2019zwm} that for AdS this is always the case.
However, for dS the Higuchi bound \cite{Higuchi:1986py} already
requires that $\alpha\le 1/2$. Taking one-loop
corrections into account, it was argued in \cite{Montero:2022prj} that
for four-dimensional dS the exponent $\alpha$
should lie in the range ${1\over 4}\le \alpha\le {1\over 2}$.
Applying this to our universe with its observed tiny
cosmological constant
$\Lambda=10^{-122} M_{\rm pl}^4$ and taking bounds from
deviations of Newton's gravitational force law into account revealed
that only the value $\alpha=1/4$ can be consistent.
Astrophysical constraints from the cooling/heating of neutron stars
then led to a model of a single large extra dimension, dubbed the
dark dimension, and a corresponding tower of KK modes
\eq{
\label{darkdimrel}
\Lambda^{1\over 4}=\lambda\, m_{\rm KK}
}
with $ 10^{-4}<\lambda<1$.
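Numerically, $\Lambda^{1/4}\approx 10^{-30.5}\, M_{\rm pl}$ corresponds to a few meV, so this window translates into KK masses between roughly $10^{-3}\,$eV and a few tens of eV, i.e. into length scales $m_{\rm KK}^{-1}$ ranging from tens of microns down to a few nanometers.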
It was further argued that the KK modes should include an excess
of fermionic modes, leading to a kind of sterile neutrino species.
Moreover, the species scale \cite{Dvali:2007hz,Dvali:2007wp} related to this light tower
of one-dimensional KK modes came out in the intermediate
range $\Lambda_{\rm sp}\sim 10^9-10^{10}$GeV.
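Recall that for a single tower of one-dimensional KK modes the species scale is $\Lambda_{\rm sp}\sim (m_{\rm KK}\, M_{\rm pl}^2)^{1/3}$ \cite{Dvali:2007hz,Dvali:2007wp}; with $m_{\rm KK}$ in the eV range this gives $\Lambda_{\rm sp}\sim (10^{-9}\cdot 10^{38})^{1/3}\,{\rm GeV}\sim 5\cdot 10^{9}\,$GeV, consistent with the quoted window.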
This is tantalizingly close to the scale of $10^{11}$GeV where,
due to the running of its self-coupling, the Higgs
potential is believed to develop an instability.
Interpreting the upper bound of $10^{10}$GeV as the reason
for the sharp cutoff of the flux of ultra-high-energy cosmic
rays \cite{Anchordoqui:2022ejw} led to the value $\lambda =
10^{-3}$. In \cite{Anchordoqui:2022txe}, it was argued that such a single mesoscopic
direction also changes the production rate of primordial black holes
such that they can provide a large fraction of the dark matter
of our universe.
The nice aspect of this derivation is that it only involves
a generic swampland conjecture and observational data.
However, consistency with one swampland conjecture
does not guarantee that this Dark Dimension Scenario
is really consistent with quantum gravity. Therefore,
it is an important question whether it can be realized
in a fully fledged theory of quantum gravity, like string theory.
In this note we take a first modest step in settling this question
by pointing out that a commonly used aspect
in string theory model building, a strongly warped throat, already provides
a natural mechanism to get $\alpha=1/4$ in the generalized distance
conjecture for dS while yielding a lightest one-dimensional KK tower.
Finally we consider the LVS as a typical class of models
and find that due to the large volume, the bulk scales do not separate enough
from the dark dimension to be compliant with
astrophysical constraints.
\section{Realization in a warped throat}
\label{sec_two}
In string theory, realizations of AdS vacua are much better
understood than dS vacua. AdS vacua can for instance be realized via tree-level
flux compactifications and often give $\alpha=1/2$ in \eqref{ADC}.
It is fair to say that the question of scale separation for AdS vacua
is not yet completely settled, as for instance the DGKT
vacua \cite{DeWolfe:2005uu}
would be a candidate for an AdS
vacuum featuring scale separation.\footnote{
In an explicit example \cite{Blumenhagen:2019vgj}, it leads to a value of
$\alpha={7\over 18}$.}
Consistent with the dS swampland conjecture
\cite{Obied:2018sgi} (see also \cite{Dvali:2014gua,Dvali:2018fqu}
for alternative arguments), so far no generally accepted
construction of a controlled dS vacuum exists.
However, the KKLT construction \cite{Kachru:2003aw} and the large volume
scenario (LVS) \cite{Balasubramanian:2005zx} are well studied candidates.
These start
with an AdS minimum and then invoke an uplift mechanism to dS,
that is usually the contribution of an anti-D3-brane. To allow
a controlled balance of the AdS vacuum energy and the uplift
energy, one puts the anti-D3-brane in a strongly warped throat.
In this way, all energy contributions
localized in the throat get redshifted
by the small warp factor.
By balancing the negative and positive contributions,
the former AdS minimum can be uplifted to a dS one.
Following this prescription, let us now assume that in a type IIB
orientifold set-up, by turning on 3-form fluxes on a Calabi-Yau manifold
and taking non-perturbative effects into account,
one has indeed arrived at a not necessarily supersymmetric
AdS minimum with negative vacuum energy $V_{\rm AdS}$.
In order to be able to uplift, we also assume that the complex structure
moduli stabilization involves one modulus $Z$ that controls
the size of a 3-cycle, which for $Z\to 0$ approaches
a conifold singularity. This situation has been analyzed in detail
in e.g. \cite{Bena:2018fqc,Blumenhagen:2019qcg}
and here we simply state and use some of their results.
Denoting the total volume modulus of the Calabi-Yau as ${\cal
V}$, for ${\cal V} |Z|^2\ll 1$ one is in the regime of strong
warping and the total geometry can be thought of as a long,
strongly warped Klebanov-Strassler (KS) throat \cite{Klebanov:2000hb}
of length $y_{\rm UV}$ glued to a bulk Calabi-Yau manifold. Here
$y$ denotes the radial direction in the KS metric.
Following \cite{Douglas:2007tu}, the $N=1$ supersymmetric low energy effective action for the
two moduli $Z$ and ${\cal V}$ is described
by a K\"ahler potential\footnote{It was shown in \cite{Lust:2022xoq} that
this K\"ahler potential receives corrections when going off-shell.
Note that here we are only interested in the behavior close to the minimum.}
\eq{
\label{kaehlerpotb}
K=-2\log({\cal V}) + {2 c' g_s M^2 \vert Z\vert^{2\over
3}\over {\cal V}^{2\over 3}} + \dots\,
}
where the dots denote terms involving all the other complex structure
and K\"ahler moduli of the CY threefold. Moreover the string coupling
constant is related to the vacuum expectation values of the dilaton
$g_s=e^{\langle\phi\rangle}$, which is the real part of the
complex axio-dilaton $S=e^{-\phi}+iC_0$ field.
Computing the periods close to a conifold point $|Z|\ll 1$ and turning
on R-R three-form flux $M$ along the A-cycle and
an NS-NS three-form flux $K$ along the B-cycle of the conifold
results in a superpotential of the form
\eq{
\label{superpotb}
W=-{M\over 2\pi i} Z \log Z +i {K S} Z +\ldots\,,
}
where the dots again contain further flux-induced tree-level
contributions involving the complex structure moduli and
non-perturbative terms involving the K\"ahler moduli.
Then, the conifold modulus is stabilized at
\eq{\label{Z_minimum}
Z\sim \exp\left(-{2\pi K\over g_s M} \right)\,,
}
which self-consistently can be made exponentially small
by choosing appropriate fluxes.
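For illustration (these flux values are purely hypothetical), choosing e.g. $g_s=1/10$, $M=20$ and $K=15$ - so that $g_s M^2=40\gg 1$ - would give $|Z|\sim e^{-15\pi}\approx 10^{-20}$, showing how moderate flux numbers already generate a huge hierarchy.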
For its mass one finds \cite{Blumenhagen:2019qcg}
\eq{
\label{massconi}
m_Z\simeq {1\over (g_s M^2)^{1\over 2}} \left(
|Z|\over {\cal V} \right)^{1\over 3} \,.
}
For an isotropic Calabi-Yau threefold one often considers the bulk KK tower of
mass scale
\eq{
M_{{\rm KK}}\sim {1\over \tau_b}\sim {1\over {\cal
V}^{2\over 3}}\,.
}
However, a compactification with a strongly warped throat is a
highly non-isotropic situation, so that this does not necessarily
reflect the lowest KK scale.
In \cite{Blumenhagen:2019qcg} (see also \cite{Blumenhagen:2022dbo})
it was shown to first approximation analytically, and
confirmed numerically, that there exists a one-dimensional
tower of redshifted KK modes
that are mainly supported close to the tip of the KS throat.
Their masses scale in the same way as the mass of the conifold modulus
$Z$, i.e.\footnote{
More precisely, the numerical analysis \cite{Blumenhagen:2019qcg} indicated
that the localization of the KK modes close to the tip of the throat
makes the KK masses \eqref{masskkthroat} insensitive to the length of the throat
$y_{\rm UV}$ beyond a critical length $y_{\rm UV} > y_{\rm UV}^{*}=O(10)$.
A further increase in throat length is not detected by the localized modes
and the scaling with $y_{\rm UV}$ stops.
}
\eq{
\label{masskkthroat}
m_{{\rm KK}}\sim {1\over (g_s M^2)^{1\over 2}\,y_{\rm UV}} \left(
|Z|\over {\cal V} \right)^{1\over 3} \,.
}
Note that while the warped modes feature a weaker suppression
by the volume, the exponentially small value of $Z$
easily makes their mass scale much smaller than the bulk KK scale.
Control over the warped effective action requires $g_s M^2\gg 1$
which additionally suppresses the warped KK scale.
The strongly warped throat can be thought of as containing one
very long direction along the radial $y$-direction of the throat and thus being effectively 5-dimensional at
intermediate energy scales between $m_{{\rm KK}}$ and $m^{(1)}_{{\rm
KK}}$.
Here $m^{(1)}_{{\rm KK}}$ denotes the mass scale of the second lightest tower of KK modes. This is not necessarily the bulk mass scale, as for instance
we expect the KK modes localized on the $S^3$ in the KS throat to also
become redshifted and thus be lighter than the bulk modes.
The uplift contribution to the scalar potential for an anti D3-brane placed
at the tip of the KS throat is given by \cite{Bena:2018fqc,Blumenhagen:2019qcg}
\eq{
V_{\rm up}\sim {1\over (g_s M^2)} \left(
|Z|\over {\cal V} \right)^{4\over 3} \,.
}
We notice that parametrically (in the exponentially small quantity
$|Z|/{\cal V}$) we have the relation
\eq{\label{eq:KKthroat-Vup}
m_{{\rm KK}}\sim {1\over (g_s M^2)^{1\over
4} y_{\rm UV}} \big|V_{\rm up}\big|^{1\over 4}
}
between the warped KK mass and the uplift potential.
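This relation follows by eliminating $|Z|/{\cal V}$: the expression for $V_{\rm up}$ gives $(|Z|/{\cal V})^{1/3}\sim (g_s M^2)^{1/4}\,|V_{\rm up}|^{1/4}$, and inserting this into \eqref{masskkthroat} yields
\eq{
m_{\rm KK}\sim {(g_s M^2)^{1\over 4}\over (g_s M^2)^{1\over 2}\, y_{\rm UV}}\, \big|V_{\rm up}\big|^{1\over 4} = {1\over (g_s M^2)^{1\over 4}\, y_{\rm UV}}\, \big|V_{\rm up}\big|^{1\over 4}\,. \nonumber
}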
Choosing fluxes in \eqref{Z_minimum}
such that $V_{\rm up}\sim |V_{\rm AdS}|$,
we get a relation between the value of $Z$ and the cosmological
constant of the AdS vacuum before the uplift
\eq{
{ |Z|\over {\cal V} }\sim (g_s M^2)^{3\over 4}\, \big|V_{\rm
AdS}\big|^{3\over 4} \,.
}
If the uplift really works and there is a meta-stable dS minimum,
the cosmological constant in the dS minimum is then
given by $\Lambda=V_{\rm up}+V_{\rm AdS} \gtrsim 0$.
Note that with respect to the exponentially small and therefore most
relevant parameter $|Z|/{\cal V}$, the final cosmological constant
$\Lambda$ scales in the same way as $V_{\rm up}$ and $|V_{\rm AdS}|$.
The usual landscape philosophy says that playing with the warp factor
allows the cosmological constant to be tuned to hierarchically
smaller values.\footnote{Additional tuning could also arise by a
landscape of initial AdS minima.}
However, one has to keep in mind that according to
\eqref{Z_minimum} the VEV
of $Z$ is determined by quantized fluxes.
The number of fluxes one can use is expected to
be bound from above by tadpole cancellation conditions,\footnote{
To quantify the generic relative tuning available for $V_{\rm up}$ by choosing discrete fluxes in
\eqref{Z_minimum} we define the relative minimal distance
$\lambda'\sim {|Z_1|^{\nicefrac{4}{3}}-|Z_2|^{\nicefrac{4}{3}}\over |Z_1|^{\nicefrac{4}{3}}}$
for two values $|Z_1|>|Z_2|$.
Assuming that the tadpoles restrict the fluxes $M$ and $K$ to be
smaller than $N\gg 1$
one can derive the lower bound
\eq{\lambda'\sim \Big(1-e^{-{8\pi\over
3g_s} {|K_2M_1-K_1 M_2|\over M_1 M_2}}\Big) >\Big(1-e^{-{8\pi\over
3 g_s
N^2}}\Big)\sim {8\pi\over 3 g_s N^2}\,.\nonumber
}
For realistic values $g_s=1/10$ and $N=200$ one gets $\lambda'> 2\cdot
10^{-3}$.
} the genuine
quantum gravity constraints in string theory.
Moreover, as shown in \cite{Lust:2022xoq}, the minimum should not move too far away
from its initial position. This suggests that in quantum gravity the
actual tuning
one is allowed to do in a controllable manner is
limited. We include this tuning in our analysis by writing
\eq{
\Lambda=\lambda'\, |V_{\rm up}|
}
with $\lambda'<1$.
Finally we use eq. \eqref{eq:KKthroat-Vup} to arrive at the relation
\eq{
\Lambda^{1\over 4}= \Big[ (g_s M^2)^{1\over 4}y_{\rm UV}\,\lambda'^{1\over 4}\Big]\, m_{{\rm KK}}\,.
}
We observe that this is precisely the relation \eqref{darkdimrel} for the dark
dimension scenario with $\lambda=(g_s M^2)^{1\over
4}y_{\rm UV}\,\lambda'^{1\over 4}$. Imposing the bound $\lambda>10^{-4}$ from \cite{Montero:2022prj},
guaranteeing that a small value of $\lambda$
does not change the scaling too much, leads to
\eq{
\lambda'>{10^{-16}\over (g_s M^2)\, y_{\rm UV}^4}\,.
}
Consistent with what we just discussed, this restricts the amount
of ``fine tuning'' one can perform to get a small cosmological constant.
Note that if the former AdS vacuum
admits an uplift of this type, i.e. that fluxes can be found so that
the uplift condition $V_{\rm up}\sim |V_{\rm AdS}|$ holds and a meta-stable vacuum appears,
then it satisfies the scaling of the AdS distance conjecture with $\alpha=1/4$,
where the tower of states is given by KK modes in the long throat.
Thus, not only the uplifted dS but also the AdS vacuum is scale separated.
We conclude that an uplift by an anti D3-brane in a strongly
warped throat generically leads to a tower of one-dimensional KK-modes
parametrically satisfying the (A)dS distance conjecture with $\alpha=1/4$.
In view of the unsettled control
issues raised recently for both the
KKLT scenario \cite{Gao:2020xqh,Lust:2022lfc,Blumenhagen:2022dbo} and
the LVS \cite{Junghans:2022exo,Gao:2022fdi}, one might wonder whether
one can draw any lesson from our analysis in case
this idea of initial AdS with subsequent uplifting
does not really work. Clearly, our result persists
as long as there exists a strongly warped KS throat and
the final (quasi) dS vacuum or even quintessence
potential is dominated by the energy scale in the strongly warped throat.
\section{Example: uplifted LVS}
While this is quite a robust result, it is of course
just a single aspect of a whole string model. Putting aside
the not yet settled question whether dS vacua exist at all in string theory,
there are more mass scales in the game, like the other moduli masses
or other heavier KK towers that might still be in conflict with astrophysical bounds.
In this section we briefly discuss this for an uplifted large volume
scenario.
For the definition of the LVS and its moduli stabilization
scheme and the resulting mass scales we refer the reader to the
existing literature \cite{Balasubramanian:2005zx,Conlon:2005ki}.
Here we only need to know that there are two K\"ahler moduli,
the volume modulus $\tau_b\simeq \mathcal{V}^{\frac23}$
and $\tau_s$, where the second is stabilized at small radius by a
non-perturbative effect, whereas the first is stabilized
perturbatively by an intricate balancing of three terms at
\eq{
{\cal V}\sim \sqrt{\tau_s}\, e^{a\tau_s}\,.
}
The value of the cosmological constant in the non-supersymmetric AdS
minimum scales like
\eq{
V_{\rm AdS}\sim -{1\over \tau_s}\, e^{-3a\tau_s}\sim
-{1\over {\cal V}^{3}}\,.
}
The masses of the small and the large K\"ahler moduli scale as
\eq{
m_{\tau_b}\sim {1\over {\cal V}^{3\over 2}}\,,\qquad
m_{\tau_s}\sim {1\over {\cal V}}\,.
}
Recalling that the fluxes are chosen such that $V_{\rm up} \sim |V_{\rm AdS}|$,
we express the value of the conifold modulus in terms of the volume
\eq{
|Z| \sim (g_s M^2)^{3\over 4}\, {\mathcal{V}^{-\frac54}}\,.
}
Then taking the scaling of the warped KK scale
$m_{{\rm KK}}\sim V_{\rm up}^{1\over 4}\sim 1/{\cal V}^{3\over 4}$
and the (naive) bulk KK mass scale $M_{{\rm KK}}\sim 1/{\cal V}^{2\over 3}$
into account we get the following
hierarchy of mass scales
\eq{
m_{\tau_b} < m_{\tau_s}< m_{\rm
KK} < M_{\rm KK}\,.
}
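In terms of powers of the volume these scales read ${\cal V}^{-3/2} < {\cal V}^{-1} < {\cal V}^{-3/4} < {\cal V}^{-2/3}$, which makes the ordering manifest.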
Thus, we see that all K\"ahler moduli (and also almost all the complex
structure moduli) are lighter than the warped KK scale. In particular this
means that in LVS the conifold modulus is actually the heaviest
complex structure modulus.
Note that since ${\cal V}|Z|^2\sim {\cal
V}^{-3/2}$ we are indeed in the regime of strong warping.
As expected, the bulk KK modes are indeed heavier than the ones
arising in the throat.
However, for their ratio one finds
\eq{
{ M_{{\rm KK}}\over m_{{\rm KK}}}
\sim {\cal V}^{1\over 12}\sim \Lambda^{-{1\over 36}}\sim 2\cdot 10^{3}
}
so that the corresponding length scale in the bulk is smaller by only a factor of
$10^{-3}$ than the length scale of the
throat.\footnote{We can be a bit more precise by taking into account that
the background is highly non-isotropic. In this case we should rather approximate ${\rm
Vol}=r_b^5\, r_t$ and define the bulk KK scale
as $M_{\rm KK}\sim {1\over r_b}$. In this way we find
${ M_{{\rm KK}}\over m_{{\rm KK}}}
\sim {\cal V}^{\nicefrac{1}{10}}\sim \Lambda^{-{\nicefrac{1}{30}}}\sim 10^{4}$.
}
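To check the quoted number: $m_{\rm KK}\sim {\cal V}^{-3/4}$ and $M_{\rm KK}\sim {\cal V}^{-2/3}$ give $M_{\rm KK}/m_{\rm KK}\sim {\cal V}^{3/4-2/3}={\cal V}^{1/12}$, and with $\Lambda\sim {\cal V}^{-3}$, i.e. ${\cal V}\sim \Lambda^{-1/3}$, this equals $\Lambda^{-1/36}=10^{122/36}\approx 2\cdot 10^{3}$.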
This puts the uplifted LVS in tension with the astrophysical bounds
on KK modes in more than one large extra dimension.
Moreover, this means that new physics appears below the species
scale computed (naively) via the throat KK modes.
\section{Conclusions}
In this note we pointed out that a common aspect of
string theoretic dS constructions naturally gives rise to the requirements
for realizing the dark dimension scenario.
We argued that under fairly generic assumptions, the non-isotropy
caused by the presence of a strongly warped throat leads
precisely to the required exponent $\alpha=1/4$ in the
(A)dS distance conjecture. The reason for this is simply that
the energy density of an anti D3-brane at the tip of the throat
and the mass scale of the one-dimensional tower of redshifted KK modes
localized deep in the throat satisfy $m_{\rm KK}\sim V_{\rm
up}^{1\over 4}$. Using the anti D3-brane as an uplift
from an AdS minimum to dS parametrically correlates the uplift energy scale
with the dS cosmological constant. The important numerical factor of
$\lambda \sim 10^{-1}-10^{-3}$
in the dark dimension scenario is then related to the flux-dependent
and therefore restricted ``tuning'' of $\Lambda$ relative to $V_{\rm up}$.
Our result is expected to persist even under the milder assumptions that
there exists a strongly warped KS throat in the geometry and
that the final quasi dS energy is dominated by the energy scale
in the strongly warped throat.
To point out observational issues that can arise
in more concrete string models with a dark dimension,
we analyzed the uplifted LVS in some more detail.
There we found that due to the large volume, the bulk KK modes
are still too light to avoid a conflict with the astrophysical constraints
on the size of extra dimensions. We believe that the appearance
of such additional, too light bulk or throat KK towers will be a
generic issue in concrete string realizations.
\vspace{0.4cm}